\section{Magnetic Dissipation Spectra}
As follows from Figure \ref{fig2}{\it a}, the majority of ARs
(especially those of high flare activity) display energy spectra steeper than
the Kolmogorov-type spectrum. To infer a physical meaning of this result,
we will analyze the magnetic energy dissipation rates and magnetic dissipation
spectra.
The magnetic energy dissipation rate is related to the presence of electric
currents, i.e., $ \langle \varepsilon \rangle \sim \eta \langle {\bf
j}^2\rangle$ (e.g., Biskamp 1993). In MHD models of turbulence, dissipative
structures are visualized via (squared) currents (e.g., Biskamp \& Welter 1989;
Biskamp 1996; Schaffenberger et al. 2006; Pietarila Graham et al. 2009;
Servidio et al. 2009) which appear in 2D images to be predominantly located
along magnetic field discontinuities frequently referred to as current sheets.
From 2D MHD modeling, Biskamp and Welter (1989) found that when the magnetic
Reynolds number (which is the ratio of characteristic values of advection terms
to the magnetic diffusivity and quantifies the strength of advection relative to
magnetic diffusion) is low, these current sheets are extended and rare, and they
become shorter and more numerous as Reynolds number increases. Thus, the
magnetic dissipation spectrum represents the distribution of dissipative
structures over many spatial scales, and it is a reasonable proxy for statistics
of current structures in an AR.
The magnetic dissipation spectrum allows us to probe the state of
turbulence. For fully developed turbulence (K41, high Reynolds number), the bulk
of magnetic energy dissipation occurs at small scales, $k_d$, whereas the
energy input occurs at large scales, $k_e$ (Figure \ref{figQS}), and energy
cascades from large to small scales without losses. When the energy input
interval and the dissipation interval overlap, dissipation occurs at
intermediate scales, along the cascade. This condition occurs in the state of
under-developed turbulence (low Reynolds number), when large-scale structures
might interfere with the turbulent cascade at small scales. It is a challenge to
model such a field because no K41 simplifications are applicable.
The magnetic energy dissipation spectrum is defined as (Monin \& Yaglom 1975,
Biskamp 1993):
\begin{equation}
E_{dis}(k) = 2\eta k^2E(k),
\label{Edis}
\end{equation}
where $\eta$ is the magnetic diffusivity coefficient. (Note that $E$ and
$E_{dis}$ in Eq. \ref{Edis} have different dimensions.) Then the rate of
magnetic energy dissipation normalized by the magnetic diffusivity can be
derived as (Biskamp 1993):
\begin{equation}
\langle \varepsilon \rangle/\eta = 2{\int_0^{\infty}} k^2 E(k) dk.
\label{epsilon}
\end{equation}
From observations we can derive the function $k^2E(k)$, which is proportional to the
dissipation spectrum under an assumption that $\eta$ is uniform over the AR
area. In our case both $k^2E(k)$ and $\langle \varepsilon \rangle/\eta$ are
associated with dissipation of the $B_z$ component only.
We calculated $k^2E(k)$ spectra for all ARs in our data set. Typical examples
are shown in Figure \ref{fig4}. At the early stage of development of emerging
ARs, the separation distance ($k_d - k_e$) is largest, which is similar to the
fully-developed turbulence conditions seen in quiet Sun (see Figure
\ref{figQS}). Later on this distance decreases as $k_d$ shifts toward smaller
wavenumbers (larger scales), so that the energy input and dissipation intervals
increasingly overlap. This implies the formation of large-scale
dissipative structures. Decaying magnetic complexes show quite the opposite behavior
(Figure \ref{fig4}, middle row). Well-developed ARs (bottom row in Figure
\ref{fig4}) show a significant overlap of the energy and dissipation intervals
suggesting that, contrary to the fully developed turbulence
phenomenology, significant dissipation takes place at all spatial scales.
Thus, for the majority of well-developed ARs, one should expect a state
of under-developed turbulence in the photosphere with the dissipation of the
magnetic energy at all observable spatial scales.
We then compared the magnitudes of $\langle \varepsilon \rangle/\eta$ to the
flare index, $A$. Their correlation turned out to be positive, with
$\rho=0.53$ and a 95\% confidence interval of $0.43-0.62$. This indicates
that the rate of magnetic energy dissipation in the photosphere is relevant to
flare activity.
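The quoted correlation and its 95\% confidence interval can be reproduced with the standard Fisher $z$-transform; the data below are a synthetic stand-in for the measured $(\langle\varepsilon\rangle/\eta, A)$ pairs, not the actual AR sample:

```python
import numpy as np

# Fisher z-transform confidence interval for a correlation coefficient;
# x and y are synthetic correlated proxies, not the AR measurements.
rng = np.random.default_rng(0)
n = 217                                  # sample size (number of ARs)
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)
r = np.corrcoef(x, y)[0, 1]              # sample correlation
z = np.arctanh(r)                        # Fisher transform
se = 1.0 / np.sqrt(n - 3)                # standard error of z
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
```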
\begin{figure}[!h] \centerline { \epsfxsize=6.0truein
\epsffile{Fig3.eps}} \caption{\sf MDI/HR magnetogram for a quiet-Sun area
recorded on 2001 June 5/13:07 UT ({\it left}) and corresponding spectra ({\it
right}): the energy spectrum $E(k)$ ({\it blue curve and right axis}) and
dissipation spectrum, $k^2E(k)$ ({\it red curve, left axis}). The image size is
$266 \times 202$ arcsec. The magnetogram is scaled from $-150$~G to $150$~G. The
arrows $k_e$ and $k_d$ mark the maxima of the energy ($k_e$) and dissipation
($k_d$) spectra. Positions of the maxima were derived by 5-point boxcar
averaging ({\it black curves}). The maxima of the energy and dissipation spectra
are distinctly separated in the wavenumber space. }
\label{figQS}
\end{figure}
\begin{figure}[!h] \centerline {\epsfxsize=5.5truein
\epsffile{Fig4.eps}}
\caption{\sf {\it Top - } energy spectra, $E(k)$ ({\it blue lines}), and
dissipation spectra, $k^2E(k)$ ({\it double red lines}), plotted for an emerging
AR. Panels {\it a, b} and {\it c} correspond to three consecutive moments during
the emergence. Vertical blue (red) bars mark the maximum of the energy
(dissipation) spectrum. Blue bars correspond to $k_e$ and red bars correspond to
$k_d$. As the active region emerges, $k_d$ shifts toward the smaller
wavenumbers. {\it Middle} - energy and dissipation spectra for a decaying
magnetic complex NOAA ARs 9682 and 9712. The right panel shows a superposition
of the dissipation spectra at the well-developed ({\it double red line}) and
decaying ({\it solid green line}) state of the magnetic complex. As the magnetic
fields decay, $k_d$ shifts toward small scales (larger wavenumbers). {\it Bottom
- } energy spectra and dissipation spectra for three well-developed ARs. The
spectra are overlapped for each case.}
\label{fig4}
\end{figure}
\section {Conclusions and Discussion}
In this study we analyzed second-order statistical moments of solar magnetic
fields of 217 active regions observed with the MDI instrument in high
resolution mode during the 23rd solar cycle.
The angle-integrated magnetic energy spectra of solar ARs display a well-defined
power-law region, which indicates the presence of a turbulent non-linear energy
cascade. The power index, $\alpha$ measured at 3-10~Mm scale range was found to
be well correlated with the flare index, $A$ (correlation coefficient,
$\rho=0.57$). This result further supports our previous findings based on
only 16 ARs (Abramenko 2005). The power indices range between 1.3 and 3.0, with
the majority of ARs having the power index in the range of $1.6 - 2.3$. No
particular preference for the classical 5/3 index was found. These values are in
a surprising agreement with recent numerical simulations of decaying MHD
turbulence (Lee et al. 2009). The model results showed that, for equivalent
initial magnetic configurations, different types of spectra (from $k^{-3/2}$ to
$k^{-2}$) may emerge depending on the intrinsic non-linear dynamics of the
flow.
The total spectral energy, $W=\int E(k) dk$, is found to be well correlated with
the flare index ($\rho=0.68$), while spectral energy weighted by the power
index shows the strongest correlation to the flare index ($\rho=0.71$), which
allowed us to determine an empirical description of this relationship:
$A=10^b(\alpha W)^c$, where $b=-7.92 \pm 0.58$ and $c=1.85 \pm 0.13$.
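A hypothetical evaluation of this empirical relationship, using only the central fitted values (the inputs for $\alpha$ and $W$ are illustrative placeholders, not measured quantities):

```python
# Empirical flare-index relation A = 10^b * (alpha * W)^c with the
# central fitted values; alpha and W inputs are placeholders.
b, c = -7.92, 1.85

def flare_index(alpha, W):
    return 10.0**b * (alpha * W)**c

# Doubling alpha*W raises the predicted flare index by a factor 2^1.85.
ratio = flare_index(2.0, 1.0e4) / flare_index(1.0, 1.0e4)
```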
Combined analysis of magnetic energy and magnetic dissipation spectra showed
that in the majority of well-developed ARs, the turbulent energy cascade is
augmented by magnetic energy dissipation at all scales. We thus argue that a
state of under-developed turbulence exists in the photosphere of mature ARs.
The magnetic energy dissipation rate, $\langle \varepsilon \rangle/\eta$,
correlates with the flare productivity to the same degree as the power index
does ($\rho=0.53$). Since the energy dissipation rate is proportional to the
squared electric currents, we argue that the presence of currents is relevant to
flare productivity. Also, the good correlation between the energy dissipation rate
and the flaring rate is in agreement with earlier reports by Schrijver et al.
(2005).
It is known from direct calculations based on vector-magnetograms that electric
currents are ubiquitous in ARs (see, e.g., Abramenko et al. 1991; Leka et al.
1993, 1996; Pevtsov et al. 1994; Abramenko et al. 1996; Wheatland 2000; Zhang
2002; Schrijver et al. 2005; Leka \& Barnes 2007; Schrijver et al. 2008;
Schrijver 2009). We found here that photospheric magnetic fields are in a state
of under-developed turbulence when both the energy cascade and energy
dissipation at all scales are present in the system. We therefore arrive at the
conclusion that both large- and small-scale dissipative structures (currents) are
relevant to flaring.
On the other hand, Fisher et al. (1998) found no correlation between
photospheric currents and soft X-ray luminosity of ARs. This was also noted, but
not discussed, by Schrijver et al. (2005). We suggest that this apparent
discrepancy is due to the different nature of flares and the soft X-ray emission. We
may consider flares as sporadic explosive events caused by strongly non-linear
dynamics relevant to large- and small-scale magnetic discontinuities (Falconer
et al. 2002, 2003, 2006, see also Schrijver 2009), whereas soft X-ray emission
reflects a more stationary and homogeneous process of coronal heating, related
rather to ubiquitous small-scale discontinuities formed {\it in situ} (Klimchuk
2006).
We would also like to note that the power law of the magnetic energy spectra,
explored in this paper, should not be confused with the power law found in the
distribution of the magnetic flux in flux concentrations reported recently by
Parnell et al. (2009). At first sight, both of them characterize the {\it
structure} of the magnetic field. However, they address different physical
consequences of the magnetic field structuring. The power law of the magnetic
energy spectrum represents the distribution of magnetic energy, $B^2$, over spatial
scales and quantifies turbulence in an AR. Here, the smallest magnetic elements
are represented by the tail of the spectrum usually associated with low spectrum
amplitudes. The power law found in the distribution of magnetic flux represents
the frequency (abundance) of magnetic elements of different sizes and implies a
single mechanism of formation of magnetic flux concentrations (say, a
fragmentation process; see Abramenko \& Longcope (2005) for more discussion).
The smallest magnetic elements are the most frequent and they are represented by
the highest amplitudes of the distribution.
Modern computational capabilities allow us to develop MHD models which take
into account the turbulent regime and turbulent dissipation (e.g., Lionello et al.
2010; Klimchuk et al. 2010). Therefore, diagnostics of turbulence derived from a
large uniform data set are essential for constructing and constraining these
models.
We are grateful to the anonymous referee for criticism and useful comments that
allowed us to improve the manuscript substantially. SOHO is a project of international
cooperation between ESA and NASA. This work was supported by NSF grant
ATM-0716512 and NASA grants NNX08AJ20G and NNX08AQ89G.
\section{Technical details of SM-FRG}
Consider the interaction Hamiltonian $H_I=(1/2)c_{1\sigma}^\dagger c_{2\sigma'}^\dagger V_{1234} c_{3\sigma'} c_{4\sigma}$. Here the numerical index labels momentum/position, and we leave the momentum conservation/translation symmetry implicit. The spin SU(2) symmetry is guaranteed in the above convention for $H_I$. The idea of FRG is to obtain the one-particle-irreducible interaction vertex for fermions whose energy/frequency is above a scale $\Lambda$. Equivalently, such an effective interaction is what is called the pseudo-potential for fermions whose energy/frequency is below $\Lambda$. Starting from the local $U$ at $\Lambda=\infty$, the contributions to $\partial V/\partial \Lambda$ are illustrated in \Fig{fig:frgscheme}. In principle there will also be self-energy corrections to the fermions, which we ignore as usual, given that we are just looking for the instability of the normal state. To proceed, it is useful to define matrix aliases of the rank-4 `tensor' $V$ via
\begin{eqnarray} V_{1234}=P_{(12)(43)}=C_{(13)(42)}=D_{(14)(32)}.\end{eqnarray}
Then $\partial V/\partial \Lambda$ can be compactly written as
\begin{eqnarray} \frac{\partial V_{1234}}{\partial\Lambda} = &&[ {\cal D} \chi^{ph}( {\cal D} - {\cal C} )+( {\cal D} - {\cal C} )\chi^{ph} {\cal D} ]_{(14)(32)}\nonumber\\
&&+ [ {\cal P} \chi^{pp} {\cal P} ]_{(12)(43)} - [ {\cal C} \chi^{ph} {\cal C} ]_{(13)(42)},
\label{Eq:dV}
\end{eqnarray}
where matrix convolutions are understood within the square brackets, and
\begin{eqnarray} && {\cal P} = P +\Pi_\Lambda, \ \ {\cal C} = C + \Pi_\Lambda, \ \ {\cal D} = D + \Pi_0,\nonumber\\
&& \chi^{pp}_{(ab)(cd)} = \frac{1}{2\pi}[G_{ac}(\Lambda)G_{bd}(-\Lambda)+(\Lambda\rightarrow -\Lambda)],\nonumber\\
&& \chi^{ph}_{(ab)(cd)} = -\frac{1}{2\pi}[G_{ac}(\Lambda)G_{db}(\Lambda)+(\Lambda\rightarrow -\Lambda)],
\label{Eq:def}
\end{eqnarray}
where $\Pi$ enters as a matrix (local in real space and flat in momentum space), $G$ is the normal state Green's function, and we use a hard cutoff in the continuous Matsubara frequency. Notice that $\Pi_0$ enters $ {\cal D} $ because the EPC-induced interaction is direct in the charge channel. This is also evident from \Fig{fig:frgscheme}. Since the external lines are set at zero frequency (the frequency dependence is irrelevant for 4-point interactions in the RG sense), the frequency on the phonon lines (thickened wavy lines) overlaid by $D$ is automatically zero in \Fig{fig:frgscheme}(c)-(e).
\begin{figure}
\includegraphics[width=0.45\textwidth]{frgscheme}
\caption{One-loop contributions to $\partial V/\partial\Lambda$. The greyed bar and wavy line denote $V$ and $\Pi$, respectively. They add up where overlayed. Spin is conserved during fermion propagation and is left implicit. The slash denotes the single-scale propagator and can be put on either one of the fermion lines within the loop. The directed-circle indicates circulation of frequency along the loop, and $\Lambda$ the running scale. The thin (thick) wavy line shares the loop frequency (is at zero frequency). Each diagram can be viewed as a convolution of aliases of $V$ (together with $\Pi$) via $V_{1234}=P_{(12)(43)}=C_{(13)(42)}=D_{(14)(32)}$.}
\label{fig:frgscheme}
\end{figure}
The integration of $\partial V/\partial \Lambda$ toward decreasing $\Lambda$ generates all one-particle-irreducible corrections to $V$ from $U$ and $\Pi$ to arbitrary orders and in all possible ways. We extract from $V$ and $\Pi$ the effective interactions in the general SC/SDW/CDW channels
\begin{eqnarray} (V_{SC},V_{SDW},V_{CDW}) = ( {\cal P} , - {\cal C} , 2 {\cal D} - {\cal C} ).\end{eqnarray} (This expression is exactly equivalent to \Eq{eq:VX} in the main text.) Since they all originate from $V+\Pi$, they are overlapped but are naturally treated on equal footing. Viewed as scattering amplitude of composite bosons, the effective interactions can be decomposed into eigen modes. For example, in the SC channel (with a zero collective momentum),
\begin{eqnarray}
[V_{SC}]_{(\v k,-\v k)(\v k',-\v k')} = \sum_m f_m(\v k)S_m f_m^{*}(\v k'),
\end{eqnarray}
where $S_m$ is the eigenvalue, and $f_m(\v k)$ is the eigenfunction. We look for the most negative eigenvalue, say $S=\min[S_m]$, with an associated eigenfunction $f(\v k)$. If $S$ diverges at a scale $\Lambda_c$, it signals the instability of the normal state toward a SC state, with a pairing function described by $f(\v k)$. Similar analysis can be performed in the CDW/SDW channels, with the only exception that in general the collective momentum $\v q$ in such channels is nonzero. Since $\v q$ is a good quantum number in the respective channels, one performs the mode decomposition at each $\v q$. There are multiple modes at each $\v q$, but we are interested in the globally leading mode among all $\v q$. In this way one determines both the ordering vector $\v Q$ and the structure of the order parameter by the leading eigenfunction.
Finally, the instability channel is determined by comparing the leading eigenvalues in the CDW/SDW/SC channels.
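The mode decomposition described above is an ordinary eigendecomposition of the effective interaction matrix. A minimal sketch, with a random symmetric matrix standing in for an actual FRG-generated $V_{SC}$:

```python
import numpy as np

# Mode decomposition of an effective interaction: diagonalize V_SC and
# pick the most negative eigenvalue S with its eigenfunction f(k).
# The matrix here is a random symmetric stand-in, not real FRG output.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
V_sc = 0.5 * (A + A.T)            # Hermitian (here real symmetric) vertex
S_m, f_m = np.linalg.eigh(V_sc)   # eigenvalues in ascending order
S = S_m[0]                        # leading (most negative) eigenvalue
f = f_m[:, 0]                     # associated eigenfunction f(k)
```

In the CDW/SDW channels the same decomposition is repeated at each collective momentum $\v q$, and the globally leading mode selects the ordering vector.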
In principle, the above procedure is able to capture the most general candidate order parameters. In practice, however, it is impossible to keep all elements of $V$ for computation. Fortunately, the order parameters are always local or short-ranged. This is notwithstanding the possible long-range correlations between the order parameters. For example, the s-wave pairing in the BCS theory is local, since the gap function is a constant in momentum space. The order parameter in usual Landau theories are assumed to be local. The d-wave pairing is nonlocal but short-ranged. The usual CDW/SDW orders are ordering of site-local charges/spins. The valence-bond order is on-bond but short-ranged. In fact, if the order parameter is very nonlocal, it is not likely to be stable. The idea is, if it is not an instability at the tree level, it has to be induced by the overlapping channel. But if the induced order parameter is very nonlocal, it must be true that the donor channel has already developed long-range fluctuations and is ready to order first. These considerations suggest that most elements of the `tensor' $V$ are irrelevant in the RG sense and can be truncated. \Eq{Eq:dV} suggests how this can be done. For fermions, all 4-point interactions are marginal in the RG sense, and the only way a marginal operator could become relevant is through coherent and repeated scattering in a particular channel. Therefore, it is sufficient to truncate the range between 1 and 2, between 3 and 4, in $ {\cal P} _{(12)(43)}$, but leaving the range between the two groups arbitrary (thus thermodynamical limit is not spoiled). Similar considerations apply to $ {\cal C} $ and $ {\cal D} $. Eventually the same type of truncations can be applied in the effective interactions $V_{CDW/SDW/SC}$. 
Such truncations keep the potentially singular contributions in all channels and their overlaps, underlying the key idea of the SM-FRG.~\cite{Wang2012,Xiang2012a,Wang2014} The merit of SM-FRG is: 1) It guarantees hermiticity of the truncated interactions; 2) It is asymptotically exact if the truncation range is enlarged; 3) It respects all underlying symmetries, and in particular it respects momentum conservation exactly. 4) In systems with multi-orbitals or complex unitcell, it is important to keep the momentum dependence of the Bloch states, both radial and tangential to the Fermi surface. This is guaranteed in SM-FRG since it works with Green's functions in the orbital basis. We take these as advantages of SM-FRG as compared to the patch-FRG applied in the literature.~\cite{Honerkamp2001,Metzner2012,Platt2013}
{\em BCS limit}: We notice that if only \Fig{fig:frgscheme}(a), the pairing channel, is kept, the BCS theory is trivially reproduced. For this to be valid, one requires $\Lambda_c\ll {\omega}_D\ll W$ and the absence of any nesting, so that the contributions from the other channels, \Fig{fig:frgscheme}(b)-(e), are negligible. To make an analytical solution accessible, we approximate $\Pi_\nu$ as a step function, $\Pi_\nu =-\lambda W\theta({\omega}_D-|\nu|)$. Thus $\Pi_\Lambda =0$ for $\Lambda >{\omega}_D$, and the RG flow above ${\omega}_D$ merely generates a renormalized Coulomb interaction $V^*$. The flow for $\Lambda<{\omega}_D$ is, with $\Pi_\Lambda=-\lambda W$ in the above approximation,
\begin{eqnarray} \partial (V-\lambda W)/\partial \Lambda = (\rho/\Lambda)(V-\lambda W)^2,\end{eqnarray}
where $\rho$ is the normal state density of states, and we assumed that $V_{\v k,-\v k,-\v k',\v k'}$ is independent of $\v k$ and $\v k'$, as in the BCS theory. (This means that we are treating the s-wave pairing channel.) The solution is, given the boundary condition at $\Lambda={\omega}_D$,
\begin{eqnarray} V-\lambda W = \frac{V^*-\lambda W}{1 +(\lambda-\mu^*)\ln(\Lambda/{\omega}_D)},\end{eqnarray} where we used $\mu^*=\rho V^*$ and $\rho W\sim 1$. There is a divergence $V-\lambda W\rightarrow -\infty$ if and only if $\lambda-\mu^*>0$ (i.e., EPC mediated attraction overwhelms the repulsive $V^*$), at the scale
\begin{eqnarray} \Lambda_c = {\omega}_D e^{-1/(\lambda-\mu^*)}.\end{eqnarray} This is already in nice agreement with the $T_c$ in the Eliashberg theory, given the approximations in $\Pi_\nu$. This example shows that the idea of pseudopotential can be pushed down to any energy scale (not just at ${\omega}_D$ as in the BCS theory) until it diverges, and the divergence scale is just a representative of the transition temperature $T_c$. If sufficiently strong, the CDW/SDW channels neglected in the BCS theory will clearly invalidate the latter, as revealed in the main text.\\
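The divergence scale can be checked directly: the denominator of the flow solution vanishes exactly at $\Lambda_c$. A numerical sketch with illustrative (assumed) parameter values:

```python
import numpy as np

# The BCS-limit solution diverges where 1 + (lam - mu*) ln(Lambda/omega_D)
# vanishes, i.e. at Lambda_c = omega_D * exp(-1/(lam - mu*)).
# The coupling values are illustrative assumptions.
lam, mu_star, omega_D = 0.5, 0.1, 1.0
Lambda_c = omega_D * np.exp(-1.0 / (lam - mu_star))
denominator = 1.0 + (lam - mu_star) * np.log(Lambda_c / omega_D)
```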
{\em Local limit}: On the other hand, if only the local elements of $V$ are kept, we have $V=P=C=D$. Furthermore, in the presence of particle-hole symmetry (at half filling in the HHM), the second line of \Eq{Eq:dV} cancels out (in the local limit), leaving,
\begin{eqnarray} \frac{\partial V}{\partial\Lambda} = - \frac{2}{\pi}(\Pi_0-\Pi_\Lambda)\frac{\partial\chi_\Lambda}{\partial\Lambda}(V+\Pi_0),\end{eqnarray}
where $\chi_\Lambda\sim \alpha/\Lambda$ is a local susceptibility at the scale $\Lambda$, with a factor $\alpha$ of order unity. This can be solved analytically,
\begin{eqnarray} V+\Pi_0 \sim (U+\Pi_0)\exp\left[\frac{\alpha\lambda W}{{\omega}_D}(1-\frac{2}{\pi}\tan^{-1}\frac{\Lambda}{{\omega}_D}) \right],\end{eqnarray} where we used $\Pi_0=-\lambda W$. This is \Eq{Eq:Anal} in the main text.\\
\section{Polaronic band narrowing}
The coupling between electrons and phonons can be formally decoupled by the Lang-Firsov transformation.
Define a unitary matrix $ {\cal U} =\exp[(\eta/{\omega}_D)\sum_i n_i(b_i-b_i^\dag)]$,
where we recall that $\eta=g/\sqrt{2M{\omega}_D}=\sqrt{\lambda W{\omega}_D/2}$. It is easy to show that $ {\cal H} = {\cal U} H {\cal U} ^\dag$ becomes
\begin{eqnarray}
{\cal H} =&&-t\sum_{\<ij\>\sigma}(\t c_{i\sigma}^\dag \t c_{j\sigma}+{\rm h.c.}) -\mu\sum_{i\sigma}n_{i\sigma} +\omega_D\sum_i b_i^\dag b_i\nonumber\\ && + \t U \sum_{i}(n_{i\uparrow}-1/2)(n_{i\downarrow}-1/2) ,
\end{eqnarray}
where $\t c_i = c_i e^{-(\eta/{\omega}_D)(b_i-b_i^\dag)}$ and $\t U = U-\lambda W$. The hopping part averaged over the phonon ensemble leads to a polaronic renormalization of $t\rightarrow z t$, with
\begin{eqnarray} z= \exp\left[-\frac{\lambda W}{2{\omega}_D}\frac{1+e^{\beta{\omega}_D}}{e^{\beta {\omega}_D}-1}\right].\end{eqnarray} This factor describes the coherent part of the kinetic energy in the presence of EPC, namely the renormalization factor for the coherent bandwidth. For the electrons to hop coherently, one requires $T\ll {\omega}_D$ so that phonon excitations are rare, which is the condition for $z$ to make sense.
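The narrowing factor can be evaluated numerically; note that $(1+e^{\beta{\omega}_D})/(e^{\beta{\omega}_D}-1)=\coth(\beta{\omega}_D/2)$, so $z\rightarrow e^{-\lambda W/2{\omega}_D}$ as $T\rightarrow 0$. The parameter values below are illustrative assumptions:

```python
import numpy as np

# Polaronic band-narrowing factor z(beta), beta = 1/T (k_B = 1);
# lam, W, omega_D are illustrative values, not fitted parameters.
lam, W, omega_D = 0.5, 8.0, 1.0

def z_factor(beta):
    x = beta * omega_D
    return np.exp(-(lam * W / (2.0 * omega_D))
                  * (1.0 + np.exp(x)) / (np.exp(x) - 1.0))
```

The factor decreases (the band narrows further) as temperature rises, since thermally excited phonons suppress coherent hopping.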
\end{document}
\section{Introduction}
Constructing lattice squares in the integer lattice $\mathbb Z^2$ is easy, as we can take a vector and its image under a $90$-degree rotation. The analogous problem in dimension three is to find lattice cubes in $\mathbb Z^3$. Although there are axis-parallel lattice cubes, many other constructions exist, which are much more complicated since there is no canonical rotation in dimension three.
One early result on lattice cubes is due to A.~S\'ark\" ozy \cite{Sarkozy}, who described some constructions of lattice cubes and determined the number of certain lattice cubes in 1961. This topic is still being researched: E.~J.~Ionascu studied lattice cubes in dimensions 2, 3 and 4 and determined their Ehrhart polynomials in his recently published paper \cite{Ionascu}.
If the edge length of a lattice cube is $d$, then its volume $d^3$ is the determinant of the edge vectors, so it is an integer, while the squared edge length $d^2$ is also an integer. Hence $d=d^3/d^2$ is rational, and a rational number whose square is an integer is itself an integer, so the edge length $d$ is an integer.
The following statement is an easy consequence of the description of the Pythagorean quadruples (see \cite{Carmichael} or \cite{Spira}).
If the length of a vector $\mathbf v\in\mathbb Z^3$ is an integer, then $\mathbf v$ can be extended to a lattice cube. An algorithmic proof is found in \cite{Parris}.
Similar questions are discussed in dimension four using Hurwitz integral quaternions by E.~W.~Kiss and P.~Kutas \cite{Kutas}.
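The extension statement can be illustrated by a brute-force search (a sketch, not the algorithm of \cite{Parris}): given $\mathbf v$ of integer length $d$, find an orthogonal lattice vector of the same length; the third edge is then the cross product divided by $d$.

```python
import itertools
import numpy as np

# Brute-force sketch: extend v (of integer length d) to a lattice cube.
# Search for w with |w| = d and w . v = 0; the third edge is (v x w)/d.
def extend_to_cube(v):
    v = np.array(v)
    d2 = int(v @ v)
    d = int(round(d2 ** 0.5))
    assert d * d == d2, "the length of v must be an integer"
    coords = range(-d, d + 1)
    for w in itertools.product(coords, coords, coords):
        w = np.array(w)
        if w @ w == d2 and v @ w == 0:
            u, rem = np.divmod(np.cross(v, w), d)
            if not rem.any():            # keep only integral third edges
                return w, u
    return None

w, u = extend_to_cube((1, 2, 2))         # v = (1, 2, 2) has length 3
```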
A lattice cube can be easily extended to a cubic sublattice, which is a sublattice having a basis whose elements are pairwise orthogonal and of equal lengths. In this situation, it is natural to ask which cubic sublattices contain a given vector $\mathbf v\in\mathbb Z^3$, not necessarily as a basis vector. If the edge length of the cubic sublattice is $d$, then $d^2$ must divide the squared length of $\mathbf v$. Our goal is to show that this condition is sufficient in the following sense.
\begin{theorem}\label{main}
For a vector $\mathbf v\in\mathbb Z^3$ whose squared length is divisible by $d^2$ for an integer $d$, there exists a cubic sublattice containing $\mathbf v$ with edge length $d$. If $\mathbf v$ is primitive, then this cubic sublattice is unique.
\end{theorem}
For example, the squared length of the vector $\mathbf v=(5,5,2)$ is $54$, which is divisible by $9$, so there exists a cubic sublattice containing $\mathbf v$ with edge length $3$, see Figure~\ref{fig:mainthm}.
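This example can be verified directly (a sketch; the basis below is the one from Figure~\ref{fig:mainthm}):

```python
import numpy as np

# Check that v = (5,5,2) lies in the cubic sublattice of edge length 3
# generated by (-1,2,2), (2,-1,2), (2,2,-1).
basis = np.array([[-1, 2, 2], [2, -1, 2], [2, 2, -1]])
v = np.array([5, 5, 2])
gram = basis @ basis.T            # 9 * identity confirms a cubic basis
B = basis.T                       # basis vectors as columns
coeffs = np.linalg.solve(B, v)    # coordinates of v in the sublattice
```

The coordinates come out as the integers $(1,1,2)$, so $\mathbf v$ indeed belongs to the sublattice.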
\begin{figure}[h]
\centering
\includegraphics[width=12.5cm]{cuboid}
\caption{The vector $\mathbf v=(5,5,2)$ is contained in the cubic sublattice with edge length $3$ generated by vectors $(-1,2,2), (2,-1,2), (2,2,-1)$. The figure shows the domain $[-1,6]\times[-1,6]\times[-2,4]$.}
\label{fig:mainthm}
\end{figure}
There might be several suitable cubic sublattices if $\mathbf v$ is not primitive. For instance, the vector $\mathbf v=(5,0,0)$ is contained in the cubic sublattices with edge length $d=5$ generated by bases $\{(5,0,0),(0,5,0),(0,0,5)\}$ and $\{(5,0,0),(0,3,4),(0,4,-3)\}$.
For the uniqueness, it is enough to assume that the greatest positive divisor $k$ of $\mathbf v$ and $d$ are coprime.
Indeed, we show that the primitive vector $\mathbf u=\mathbf v/k$ is also contained in the cubic sublattice $\Gamma$ given by the theorem for $\mathbf v$ when $d$ and $k$ are coprime. In the group $\mathbb Z^3/\Gamma$, the order of $\mathbf u\Gamma$ divides $k$, and by Lagrange's theorem it also divides the order $d^3$ of the group; since $k$ and $d^3$ are coprime, the order of $\mathbf u\Gamma$ is $1$, i.e., $\mathbf u\in\Gamma$.
For the greatest possible value of $d$, i.e., in the case when $\|\mathbf v\|^2/d^2$ is square-free, Theorem~\ref{main} was proved in \cite{Goswick} by L.~M.~Goswick, E.~W.~Kiss, G.~Moussong and N.~Sim\'anyi using the decomposition theory of Hurwitz integral quaternions. Our proof builds solely on the structure of the three-dimensional Euclidean space and some basic facts about lattice geometry.
Finally, we give a number-theoretic corollary by considering the squared length of the vector $\mathbf v$. If a number $d^2m$ (where $d$ and $m$ are integers) is a sum of three squares, then $m$ is also a sum of three squares. This is not surprising because Legendre's three-square theorem states that a natural number is a sum of three squares if and only if it is not of the form $4^n(8k+7)$. As a primitive vector remains primitive in the cubic sublattice, we can formulate a similar corollary: If an integer $d^2m$ is a sum of three coprime squares, then $m$ is also a sum of three coprime squares. We will discuss the converse of this claim after Theorem~\ref{thm:reverse} at the end of Section~\ref{characterization}.
In Section~\ref{preliminaries}, we introduce some definitions and propositions from lattice geometry. Section~\ref{proof} is devoted to the proof of Theorem~\ref{main}. Then we characterize the cubic sublattices of $\mathbb Z^3$ and we prove a kind of reverse theorem in Section~\ref{characterization}.
Finally, we investigate cubic sublattices as a partial ordered set in Section~\ref{poset}.
\section{Preliminaries}\label{preliminaries}
In this section, we summarize some basic definitions and statements without proof. We refer to \cite{Cassels} for more details on lattice geometry. After that we prove some easy propositions which will be used later.
Let $\Lambda$ be an $n$-dimensional lattice in a real vector space. We say that a subgroup $K<\Lambda$ is a sublattice if $K$ is also an $n$-dimensional lattice. This is equivalent to the finiteness of the index of $K$ in $\Lambda$. If some linearly independent vectors in $\Lambda$ form a basis of the intersection of $\Lambda$ and the linear subspace generated by them, then these vectors can be extended to a basis of $\Lambda$. The parallelepiped generated by the basis of the lattice is called a fundamental parallelepiped. When a scalar product or just a volume form is given on the vector space, the volume of the fundamental parallelepiped does not depend on the choice of the basis of the lattice. For a sublattice $K\subseteq\Lambda$, the ratio of the volumes of the fundamental parallelepipeds is equal to the index of $K$ in $\Lambda$ as a subgroup.
We say that a vector $\mathbf v\in \Lambda$ is divisible by a positive integer $k$ if there exists a vector $\mathbf u\in \Lambda$ such that $\mathbf v=k\mathbf u$. A vector $\mathbf v\in \Lambda$ is called primitive if no positive integer $k\neq1$ divides $\mathbf v$. For a non-zero vector $\mathbf v\in \Lambda$, there exist a unique primitive vector $\mathbf u$ and a unique positive integer $k$ such that $\mathbf v=k\mathbf u$. In this case, we say that $k$ is the greatest divisor of $\mathbf v$.
The vectors that are divisible by a positive integer $k$ form a sublattice in $\Lambda$, it will be denoted by $k\Lambda$. The index of $k\Lambda$ in $\Lambda$ is $k^n$.
Fixing a basis $\{\mathbf e^1,\dots,\mathbf e^n\}$ of $\Lambda$, we can identify $\Lambda$ with $\mathbb Z^n$ by using the coordinates $(v_1,\dots,v_n)$ of a vector $\mathbf v\in\Lambda$ with respect to this basis.
A vector is divisible by $k$ if and only if all its coordinates are. A vector is primitive if and only if its coordinates are coprime. The standard embedding $\mathbb Z^n\subseteq \mathbb R^n$ and the Euclidean vector space structure of $\mathbb R^n$ define the dot product on $\Lambda$. In particular, the perpendicularity of two vectors of $\Lambda$ and the length $\|\mathbf v\|$ of a vector $\mathbf v\in\Lambda$ are defined. In this case, the volume of the fundamental parallelepiped is the determinant of the basis vectors.
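These coordinate-wise criteria translate directly into gcd computations; a small sketch:

```python
from math import gcd

# The greatest divisor of a lattice vector is the gcd of its
# coordinates; the vector is primitive exactly when this gcd is 1.
def greatest_divisor(v):
    k = 0
    for coord in v:
        k = gcd(k, coord)      # gcd(0, c) == |c|, so this folds correctly
    return k

def is_primitive(v):
    return greatest_divisor(v) == 1
```

For example, $(10,10,4)=2\,(5,5,2)$ has greatest divisor $2$, while $(5,5,2)$ is primitive.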
From now on, we consider the case $n=3$. The advantage of this dimension is the applicability of the cross product.
We say that a sublattice $\Gamma$ of the standard lattice $\mathbb Z^3$ is a cubic sublattice if there exists a basis of $\Gamma$ whose elements are pairwise orthogonal and of equal lengths. Such a basis is called a cubic basis, and the common length $d$ of its elements is the edge length. When we have a cubic sublattice $\Gamma\subset\mathbb Z^3$, we can identify $\Gamma$ with $\mathbb Z^3$, so we can measure the vectors of $\Gamma$ with respect to this identification.
For a non-zero vector $\mathbf v\in\mathbb Z^3$, the set of the orthogonal vectors to $\mathbf v$ will be denoted by $\mathbf v^\perp$.
\begin{proposition}\label{perp-lattice}
For a non-zero vector $\mathbf v=(v_1,v_2,v_3)\in\mathbb Z^3$, the set $\mathbf v^\perp$ is a two-dimensional lattice.
\end{proposition}
\begin{proof}
Obviously, $\mathbf v^\perp$ is a discrete subgroup. Its dimension is at most $2$; we show that it is exactly $2$. We may assume that $v_3\neq0$, so no non-trivial linear combination of $\mathbf e^1,\mathbf e^2$ is parallel to $\mathbf v$. In this case, the linearity of the cross product yields that the vectors $\mathbf e^1\times\mathbf v,\mathbf e^2\times\mathbf v\in\mathbf v^\perp$ are linearly independent.
\end{proof}
\begin{proposition}\label{perp_area}
For a primitive vector $\mathbf v\in\mathbb Z^3$, the area of the fundamental parallelogram in $\mathbf v^\perp$ is equal to the length of $\mathbf v$.
\end{proposition}
\begin{proof}
The area of the fundamental parallelogram equals the length of the cross product $\tilde{\mathbf v}$ of the generating vectors. The vector $\tilde{\mathbf v}$ is orthogonal to the plane of $\mathbf v^\perp$, hence parallel to $\mathbf v$; as $\mathbf v$ is primitive, $\tilde{\mathbf v}$ is an integer multiple of $\mathbf v$.
Every pair of the vectors $\mathbf e^i\times\mathbf v$ ($i=1,2,3$) generates a sublattice of $\mathbf v^\perp$. The cross product of the generators is
\[(\mathbf e^i\times\mathbf v)\times(\mathbf e^j\times\mathbf v)=\left(\mathbf v\cdot(\mathbf e^i\times\mathbf e^j)\right)\mathbf v=\pm v_k\mathbf v,\]
where $i,j,k\in\{1,2,3\}$ are distinct indices. These vectors are multiples of $\tilde{\mathbf v}$. As $\mathbf v$ is primitive, the coefficients $v_1,v_2,v_3$ are coprime, hence $\tilde{\mathbf v}=\pm\mathbf v$.
\end{proof}
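Both propositions can be spot-checked by machine: a pair of lattice vectors orthogonal to a primitive $\mathbf v$ is a basis of $\mathbf v^\perp$ exactly when its cross product is $\pm\mathbf v$, so exhibiting such a pair exhibits a fundamental parallelogram of area $\|\mathbf v\|$. A brute-force Python sketch (ours; the search radius $R$ is a heuristic):

```python
from itertools import product

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def basis_of_perp(v, R=6):
    # search for a, b orthogonal to v with a x b = +-v; such a pair
    # generates v^perp, and |a x b| = ||v|| is then the fundamental area
    box = [p for p in product(range(-R, R + 1), repeat=3)
           if p != (0, 0, 0) and dot(p, v) == 0]
    for a in box:
        for b in box:
            if cross(a, b) in (v, tuple(-x for x in v)):
                return a, b
    return None
```

For $\mathbf v=(1,2,2)$ the search finds, e.g., the pair $(2,-1,0),(2,0,-1)$, whose cross product is $(1,2,2)$.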
Fixing a primitive vector $\mathbf v$, let the vectors $\mathbf a,\mathbf b\in\mathbb Z^3$ be equivalent if their difference is a multiple of $\mathbf v$. Then the cross product with $\mathbf v$ is a well-defined map $\Phi_{\mathbf v}$ from the equivalence classes to $\mathbf v^\perp$.
\begin{proposition}\label{equivalent}
The map $\Phi_{\mathbf v}$ is a bijection.
\end{proposition}
\begin{proof}
If the cross products with $\mathbf v$ are equal for two vectors of $\mathbb Z^3$, then their difference is parallel to $\mathbf v$. This means that $\Phi_{\mathbf v}$ is injective.
For the surjectivity, consider a vector $k\mathbf u\in\mathbf v^\perp$, where $\mathbf u$ is primitive and $k$ is an integer. The vector $\mathbf v$ is primitive in $\mathbf u^\perp$, so there exists a vector $\mathbf w$ such that $\{\mathbf v,\mathbf w\}$ is a basis of $\mathbf u^\perp$. Then the cross product $\mathbf w\times \mathbf v$ is $\pm\mathbf u$, so $\pm k\mathbf w\times \mathbf v=k\mathbf u$, hence $\Phi_{\mathbf v}$ is surjective.
\end{proof}
\section{The proof of Theorem~\ref{main}}\label{proof}
It is enough to prove the theorem for a primitive vector $\mathbf v$.
Indeed, if $\mathbf v=k\mathbf u$ for a primitive vector $\mathbf u\in\mathbb Z^3$, and $d^2$ divides the squared length $\|\mathbf v\|^2=k^2\|\mathbf u\|^2$, then there exists a decomposition $d=d_1d_2$ such that $d_1$ divides $k$ and $d_2^2$ divides $\|\mathbf u\|^2$. Applying the theorem for $d_2$ and $\mathbf u$, we get a cubic sublattice $\Gamma$. Then the cubic sublattice $d_1\Gamma$ has edge length $d_1d_2=d$ and contains $d_1\mathbf u$, therefore also $\mathbf v$.
First we prove the uniqueness part of the theorem in case $\mathbf v$ is primitive. Suppose that we have a cubic sublattice $\Gamma$ containing $\mathbf v$ with edge length $d$.
The first lemma is the key observation.
\begin{lemma}\label{lem:basic}
If $\mathbf a,\mathbf b\in \Gamma$, then $\mathbf a\times\mathbf b$ is divisible by $d$, and $\mathbf a\times\mathbf b/d\in \Gamma$.
\end{lemma}
\begin{proof}
If $\{\mathbf f^1,\mathbf f^2,\mathbf f^3\}$ is a cubic basis of $\Gamma\cong\mathbb Z^3\subset\mathbb R^3$, then $\mathbf f^i\times\mathbf f^j=\pm d\,\mathbf f^k$ for distinct indices $i,j,k$. Hence the vector of $\Gamma$ whose cubic coordinates form the cross product of the coordinates of $\mathbf a$ and $\mathbf b$ is exactly $\mathbf a\times\mathbf b/d$, which implies the statement.
\end{proof}
Lemma~\ref{lem:basic} yields that $\mathbf a\times\mathbf v$ is divisible by $d$ for $\mathbf a\in\Gamma$, so consider the following subset of $\mathbf v^\perp$
\[M(\mathbf v,d)=\{\mathbf a\in\mathbf v^\perp\mid \mathbf a\times\mathbf v \text{ is divisible by }d\}.\]
We have that $M(\mathbf v,d)$ contains the intersection $\mathbf v^\perp\cap \Gamma$. Later we will see that these sets coincide.
By the linearity of the cross product, $M(\mathbf v,d)$ is a subgroup. The next lemma shows that it is a sublattice.
\begin{lemma}
The index of $M(\mathbf v,d)$ in $\mathbf v^\perp$ is $d$.
\end{lemma}
\begin{proof}
Let $i$ denote the index of $M(\mathbf v,d)$ in $\mathbf v^\perp$.
Firstly we prove that $i$ is divisible by $d$.
As the vector $\mathbf v=(v_1,v_2,v_3)$ is primitive, there exist integers $t_1,t_2,t_3$ such that $t_1v_1+t_2v_2+t_3v_3=1$. Consider the vector $\mathbf t=(t_1,t_2,t_3)$, and put $\mathbf u=\mathbf t\times\mathbf v\in\mathbf v^\perp$. Then
\[\mathbf u\times\mathbf v=(\mathbf t\times\mathbf v)\times\mathbf v=(\mathbf t\cdot\mathbf v)\mathbf v-(\mathbf v\cdot\mathbf v)\mathbf t=\mathbf v-\ell^2\mathbf t=\left(v_1-\ell^2t_1, v_2-\ell^2t_2,v_3-\ell^2t_3\right).\]
This vector is not divisible by any non-unit divisor of $d$, otherwise such a divisor would divide $\ell^2$ and $v_1,v_2,v_3$, which would contradict the primitiveness of $\mathbf v$.
Thus, the vectors $\mathbf u\times\mathbf v, 2\mathbf u\times\mathbf v,\dots, (d-1)\mathbf u\times\mathbf v$ are not divisible by $d$, so the vectors $\mathbf u, 2\mathbf u,\dots, (d-1)\mathbf u$ are not contained in $M(\mathbf v,d)$. The vector $d\mathbf u$ belongs to $M(\mathbf v,d)$ by definition. In the group $\langle\mathbf u\rangle$ generated by $\mathbf u\in\mathbf v^\perp$, the subgroup $\langle\mathbf u\rangle\cap M(\mathbf v,d)$ has index $d$, hence $i$ is a multiple of $d$ by Noether's isomorphism theorem.
Now we prove that $i$ divides $d$.
Consider the vectors
\begin{align*}
\mathbf r^1&=d\mathbf e^1\times\mathbf v=(0,-dv_3,dv_2),\\
\mathbf s^1&=(\mathbf r^1\times\mathbf v)/d=(\mathbf e^1\times\mathbf v)\times\mathbf v=\left(-v_2^2-v_3^2,v_1v_2,v_1v_3\right).
\end{align*}
We have that $\mathbf r^1\in M(\mathbf v,d)$ and $\mathbf r^1\perp\mathbf s^1$. As $\mathbf r^1$ is perpendicular to $\mathbf v$, we have
\[\mathbf s^1\times\mathbf v=\frac1d(\mathbf r^1\times\mathbf v)\times\mathbf v=-\frac{\ell^2}d\mathbf r^1,\]
which means that $\mathbf s^1\in M(\mathbf v,d)$. The area of the rectangle spanned by $\mathbf r^1$ and $\mathbf s^1$ is $\|\mathbf r^1\|^2\ell/d=d\ell(v_2^2+v_3^2)$. By Proposition~\ref{perp_area}, the area of the fundamental parallelogram in $\mathbf v^\perp$ is $\ell$.
The index of the sublattice generated by $\mathbf r^1$ and $\mathbf s^1$ in $\mathbf v^\perp$ is the quotient of the areas of the fundamental parallelograms, which is equal to $d(v_2^2+v_3^2)$.
This shows that $i$ divides $d(v_2^2+v_3^2)$. We get similarly that $i$ also divides $d(v_1^2+v_3^2)$ and $d(v_1^2+v_2^2)$, which yields that $i$ divides the greatest common divisor
\begin{align*}
\gcd\left(d(v_2^2+v_3^2),d(v_1^2+v_3^2),d(v_1^2+v_2^2)\right)
&=d\gcd\left(v_2^2+v_3^2,v_1^2+v_3^2,v_1^2+v_2^2\right)\\
{}&=d\gcd\left(v_2^2+v_3^2,v_1^2+v_3^2,v_1^2+v_2^2,v_2^2-v_3^2,v_1^2-v_3^2,v_1^2-v_2^2\right)\\
{}&=d\gcd\left(2v_1^2,2v_2^2,2v_3^2,v_2^2+v_3^2,v_1^2+v_3^2,v_1^2+v_2^2\right).
\end{align*}
Using $\gcd(v_1,v_2,v_3)=1$, we obtain $\gcd(d(v_2^2+v_3^2),d(v_1^2+v_3^2),d(v_1^2+v_2^2))=d$ if $v_1,v_2,v_3$ are not all odd. This implies the statement in this case.
If $v_1,v_2,v_3$ are all odd, then $\gcd(d(v_2^2+v_3^2),d(v_1^2+v_3^2),d(v_1^2+v_2^2))=2d$, so we have that $i$ divides $2d$. In this case, $\ell^2=v_1^2+v_2^2+v_3^2$ is odd, so is $d$. The sublattice $d\mathbf v^\perp$ is contained in $M(\mathbf v,d)$, and its index is $d^2$. Therefore $i$ divides $\gcd(2d,d^2)=d$.
\end{proof}
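For a concrete primitive vector the lemma can be confirmed by finite enumeration: since $d\mathbf v^\perp\subseteq M(\mathbf v,d)$, membership only depends on the residue modulo $d\mathbf v^\perp$, and exactly $d$ of the $d^2$ residues should lie in $M(\mathbf v,d)$. A Python sketch (ours) for $\mathbf v=(1,2,2)$ and $d=3$, using the basis $(2,-1,0),(2,0,-1)$ of $\mathbf v^\perp$ (its cross product is $\mathbf v$, so it is indeed a basis):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

v, d = (1, 2, 2), 3
b1, b2 = (2, -1, 0), (2, 0, -1)   # basis of v^perp: cross(b1, b2) == v

def in_M(a):
    # a lies in M(v,d)  <=>  a x v is divisible by d
    return all(x % d == 0 for x in cross(a, v))

# count the members of M(v,d) among the d^2 residues i*b1 + j*b2
members = sum(1 for i in range(d) for j in range(d)
              if in_M(tuple(i*x + j*y for x, y in zip(b1, b2))))
# the index is d^2 / members, so members == d confirms index d
```

In this example the members are exactly the residues with $i=j$, so the count is $3$.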
Lemma~\ref{lem:basic} implies for a vector $\mathbf a\in \Gamma$ that $\mathbf a\times\mathbf v$ is divisible by $d$ and $\mathbf a\times\mathbf v/d\in \Gamma$. Since $\mathbf a\times\mathbf v/d\in\mathbf v^\perp$ as well, we obtain $\mathbf a\times\mathbf v/d\in M(\mathbf v,d)$. This motivates the definition of the set
\begin{align*}
\Gamma(\mathbf v,d)&=\{\mathbf a\in\mathbb Z^3\mid\mathbf a\times\mathbf v/d\in M(\mathbf v,d) \}\\
&=\{\mathbf a\in\mathbb Z^3\mid\mathbf a\times\mathbf v\text{ is divisible by $d$, and }(\mathbf a\times\mathbf v)\times\mathbf v\text{ is divisible by $d^2$}\}.
\end{align*}
We have $\Gamma\subseteq \Gamma(\mathbf v,d)$ and $\mathbf v\in \Gamma(\mathbf v,d)$. As the cross product is linear, $\Gamma(\mathbf v,d)$ is a subgroup. The following lemma shows that $\Gamma(\mathbf v,d)$ is a sublattice.
\begin{lemma}
The index of $\Gamma(\mathbf v,d)$ in $\mathbb Z^3$ is $d^3$.
\end{lemma}
\begin{proof}
By definition, $\Gamma(\mathbf v,d)$ consists of those vectors $\mathbf a\in\mathbb Z^3$ for which $\mathbf a\times\mathbf v\in dM(\mathbf v,d)$.
By Proposition~\ref{equivalent}, the index of $\Gamma(\mathbf v,d)$ in $\mathbb Z^3$ is equal to the index of $dM(\mathbf v,d)$ in $\mathbf v^\perp$.
As $M(\mathbf v,d)$ has index $d$ in $\mathbf v^\perp$, and $dM(\mathbf v,d)$ has index $d^2$ in $M(\mathbf v,d)$, the index of the sublattice $dM(\mathbf v,d)$ in $\mathbf v^\perp$ is $d\cdot d^2=d^3$.
\end{proof}
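This, too, can be checked by counting for concrete data. The definition directly gives $d^2\mathbb Z^3\subseteq\Gamma(\mathbf v,d)$, so membership depends only on the residue modulo $d^2$, and among the $d^6$ residues exactly $d^6/d^3=d^3$ should belong to $\Gamma(\mathbf v,d)$. A Python sketch (ours) for $\mathbf v=(1,2,2)$, $d=3$:

```python
from itertools import product

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

v, d = (1, 2, 2), 3

def in_Gamma(a):
    # a in Gamma(v,d): a x v divisible by d and (a x v) x v divisible by d^2
    c = cross(a, v)
    return (all(x % d == 0 for x in c)
            and all(x % d**2 == 0 for x in cross(c, v)))

# count the members among the d^6 residues modulo d^2
members = sum(1 for a in product(range(d**2), repeat=3) if in_Gamma(a))
```

The expected count here is $3^3=27$.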
Since the index of $\Gamma$ in $\mathbb Z^3$ is $d^3$ as well, the inclusion $\Gamma\subseteq\Gamma(\mathbf v,d)$ forces $\Gamma=\Gamma(\mathbf v,d)$: if there exists an appropriate cubic sublattice, then it is $\Gamma(\mathbf v,d)$. This proves the uniqueness part of Theorem~\ref{main}.
\vspace{3mm}
Now we prove that $\Gamma(\mathbf v,d)$ is indeed a cubic sublattice. The first step is to show that the dot products of the elements of $\Gamma(\mathbf v,d)$ are divisible by $d^2$, but we have to start with weaker lemmas.
\begin{lemma}\label{p1}
For a vector $\mathbf a\in \Gamma(\mathbf v,d)$, the dot product $\mathbf a\cdot\mathbf v$ is divisible by $d^2$.
\end{lemma}
\begin{proof}
For $\mathbf a\in \Gamma(\mathbf v,d)$, we have that $d^2$ divides
\[(\mathbf a\times\mathbf v)\times\mathbf v=(\mathbf a\cdot\mathbf v)\mathbf v-(\mathbf v\cdot\mathbf v)\mathbf a,\]
which means that
\[\frac{\mathbf a\cdot\mathbf v}{d^2}\mathbf v-\frac{\mathbf v\cdot\mathbf v}{d^2}\mathbf a\in\mathbb Z^3.\]
Since $\mathbf v\cdot\mathbf v$ is divisible by $d^2$, we obtain $\dfrac{\mathbf a\cdot\mathbf v}{d^2}\mathbf v\in\mathbb Z^3$. This gives the statement as $\mathbf v$ is primitive.
\end{proof}
\begin{lemma}\label{p2}
For vectors $\mathbf a,\mathbf b\in \Gamma(\mathbf v,d)$, the vector $\dfrac{\mathbf a\times\mathbf b}{d}$ is contained in $\Gamma(\mathbf v,d)$.
\end{lemma}
\begin{proof}
Our goal is to prove that $M(\mathbf v,d)$ contains the vector
\[\frac{\frac{\mathbf a\times\mathbf b}{d}\times\mathbf v}{d}=\frac{\mathbf a\cdot\mathbf v}{d^2}\mathbf b-\frac{\mathbf b\cdot\mathbf v}{d^2}\mathbf a,\]
where the coefficients $\dfrac{\mathbf a\cdot\mathbf v}{d^2}$ and $\dfrac{\mathbf b\cdot\mathbf v}{d^2}$ are integers by Lemma~\ref{p1}.
The above vector is in $M(\mathbf v,d)$ if
\[\frac{\mathbf a\cdot\mathbf v}{d^2}\frac{\mathbf b\times\mathbf v}{d}-\frac{\mathbf b\cdot\mathbf v}{d^2}\frac{\mathbf a\times\mathbf v}d\in\mathbb Z^3,\]
which is satisfied, because $\dfrac{\mathbf b\times\mathbf v}{d},\dfrac{\mathbf a\times\mathbf v}d\in\mathbb Z^3$ as $\mathbf a,\mathbf b\in \Gamma(\mathbf v,d)$.
\end{proof}
\begin{lemma}\label{p3}
For vectors $\mathbf a,\mathbf b\in \Gamma(\mathbf v,d)$, the dot product $\mathbf a\cdot\mathbf b$ is divisible by $d^2$.
\end{lemma}
\begin{proof}
We have $\dfrac{\mathbf a\times\mathbf v}{d}\in\Gamma(\mathbf v,d)$ by Lemma~\ref{p2}. Applying Lemma~\ref{p2} again, we get that the vector
\[(\mathbf a\times\mathbf v)\times\mathbf b=(\mathbf a\cdot\mathbf b)\mathbf v-(\mathbf v\cdot\mathbf b)\mathbf a\]
is divisible by $d^2$. Lemma~\ref{p1} implies that $d^2$ divides the coefficient $\mathbf v\cdot\mathbf b$. Therefore the vector $(\mathbf a\cdot\mathbf b)\mathbf v$ is also divisible by $d^2$, which gives the statement.
\end{proof}
\pagebreak
The second step is to find the cubic basis in $\Gamma(\mathbf v,d)$.
\begin{lemma}\label{p11}
There exists a vector $\mathbf a\in \Gamma(\mathbf v,d)$ of length $d$.
\end{lemma}
\begin{proof}
Suppose the contrary. Lemma~\ref{p3} implies that the squared length of every vector of $\Gamma(\mathbf v,d)$ is divisible by $d^2$. Together with the indirect assumption, this yields that every non-zero vector of $\Gamma(\mathbf v,d)$ has length at least $\sqrt2d$. Thus, the balls centered at the elements of $\Gamma(\mathbf v,d)$ with radius $\sqrt2d/2$ are disjoint. The pieces in which these balls intersect a fundamental parallelepiped can be reassembled, by lattice translations, into one entire ball. Its volume is
\[\frac{4\pi}3\left(\frac{\sqrt2d}{2}\right)^3=\frac{\sqrt2\pi}{3}d^3>d^3,\]
while the volume of the fundamental parallelepiped is $d^3$. This is a contradiction.
\end{proof}
Fix a vector $\mathbf a\in \Gamma(\mathbf v,d)$ of length $d$. By Lemma~\ref{p3}, for a vector $\mathbf b\in \Gamma(\mathbf v,d)$, the dot product $\mathbf a\cdot\mathbf b$ is divisible by $d^2$, so the length of the projection of $\mathbf b$ onto $\mathbf a$ is a multiple of $d$. Therefore the elements of $\Gamma(\mathbf v,d)$ lie in $\mathbf a^\perp$ and in its translates by the multiples of $\mathbf a$. Consider the sublattice $\Lambda=\mathbf a^\perp\cap \Gamma(\mathbf v,d)$. The area of the fundamental parallelogram of $\Lambda$ is $d^2$, since adding $\mathbf a$ to a basis of $\Lambda$ yields a basis of $\Gamma(\mathbf v,d)$, whose fundamental parallelepiped has volume $d^3=d\cdot d^2$.
\begin{lemma}\label{p12}
There exists a vector $\mathbf b\in \Lambda$ of length $d$.
\end{lemma}
\begin{proof}
We prove by contradiction, in a way similar to the proof of Lemma~\ref{p11}. Suppose that the length of every non-zero vector in $\Lambda$ is at least $\sqrt2d$. In this case, the disks of radius $\sqrt2d/2$ around the elements of $\Lambda$, taken in the plane perpendicular to $\mathbf a$, are disjoint. The pieces in which these disks intersect a fundamental parallelogram can be reassembled, by lattice translations, into one entire disk. Its area is
\[\left(\frac{\sqrt2d}{2}\right)^2\pi=\frac\pi2d^2>d^2,\]
while the area of the fundamental parallelogram is $d^2$. This is a contradiction.
\end{proof}
Fix a vector $\mathbf b\in \Lambda$ of length $d$.
The vector $\mathbf c=\frac{\mathbf a\times\mathbf b}d$ is in $\Gamma(\mathbf v,d)$ by Lemma~\ref{p2}. Since the lengths of $\mathbf a$ and $\mathbf b$ are $d$ and they are perpendicular, the length of $\mathbf c$ is also $d$. The vectors $\mathbf a,\mathbf b,\mathbf c$ are pairwise perpendicular, so the volume of the parallelepiped generated by them is $d^3$. As the index of $\Gamma(\mathbf v,d)$ in $\mathbb Z^3$ is also $d^3$, the vectors $\mathbf a,\mathbf b,\mathbf c$ generate the sublattice $\Gamma(\mathbf v,d)$. This means that $\Gamma(\mathbf v,d)$ is indeed a cubic sublattice with edge length $d$.
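The whole construction can be replayed numerically. For $\mathbf v=(1,2,2)$ and $d=3$, the Python sketch below (ours) finds a vector $\mathbf a\in\Gamma(\mathbf v,d)$ of length $d$, a second vector $\mathbf b$ of length $d$ orthogonal to it, completes them with $\mathbf c=\mathbf a\times\mathbf b/d$, and expresses $\mathbf v$ in the resulting cubic basis.

```python
from itertools import product

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

v, d = (1, 2, 2), 3

def in_Gamma(a):
    c = cross(a, v)
    return (all(x % d == 0 for x in c)
            and all(x % d**2 == 0 for x in cross(c, v)))

# vectors of Gamma(v,d) of length d (their coordinates are bounded by d)
cands = [p for p in product(range(-d, d + 1), repeat=3)
         if in_Gamma(p) and dot(p, p) == d*d]
a = cands[0]
b = next(p for p in cands if dot(p, a) == 0)   # exists by the second step above
c = tuple(x // d for x in cross(a, b))          # a x b / d lies in Gamma(v,d)

# coordinates of v in the orthogonal basis {a, b, c}
coords = tuple(dot(v, x) // (d*d) for x in (a, b, c))
```

The dot products $\mathbf v\cdot\mathbf a$, $\mathbf v\cdot\mathbf b$, $\mathbf v\cdot\mathbf c$ are divisible by $d^2=9$, so the coordinates are integers, and they reconstruct $\mathbf v$ exactly.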
\section{Characterization of cubic sublattices}\label{characterization}
Although the following two lemmas are well-known, we prove them for the sake of completeness.
\begin{lemma}\label{gcd2}
Let $K$ be a sublattice of a two-dimensional lattice $\Lambda$ and let $k$ be the greatest common divisor of the vectors of $K$. Then there exists a vector in $K$ whose greatest divisor is $k$.
\end{lemma}
\begin{proof}
Let the generators of $K$ be $\mathbf v_1=k_1\mathbf u_1$ and $\mathbf v_2=k_2\mathbf u_2$, where $\mathbf u_1,\mathbf u_2$ are primitive, and $k_1,k_2$ are positive integers. We can assume that $k=\gcd(k_1,k_2)$ is equal to $1$. Unfortunately, the vector $\mathbf v_1+\mathbf v_2$ is not necessarily primitive, see Figure \ref{fig:2dim}, so we have to choose the combination more carefully.
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{2dim}
\caption{The vector $\mathbf v_1+\mathbf v_2$ is divisible by $2$.}
\label{fig:2dim}
\end{figure}
As $\mathbf u_1$ is primitive, there exists a vector $\mathbf w$ such that $\mathbf u_1$ and $\mathbf w$ generate $\Lambda$. We can write $\mathbf u_2=a\mathbf u_1+b\mathbf w$, where $a,b$ are coprime integers. Let $c$ be the product of those prime numbers which divide $b$ but do not divide $k_1k_2$ (set $c=1$ if no such prime number exists). We show that the vector $\mathbf v=c\mathbf v_1+\mathbf v_2\in K$ is primitive. Suppose the contrary, i.e., that there exists a prime number $p$ which divides
\[\mathbf v=c\mathbf v_1+\mathbf v_2=ck_1\mathbf u_1+k_2(a\mathbf u_1+b\mathbf w)=(ck_1+ak_2)\mathbf u_1+bk_2\mathbf w.\]
Since $\mathbf u_1,\mathbf w$ are the generators of $\Lambda$, $p$ divides $ck_1+ak_2$ and $bk_2$. As $p$ is prime, $p$ divides $b$ or $k_2$. If $p$ divides $k_2$, then $p$ also divides $ck_1$, which contradicts the definition of $c$ or $\gcd(k_1,k_2)=1$. If $p$ does not divide $k_2$, then $p$ divides $b$. By the definition of $c$, $p$ divides $ck_1$, hence $p$ divides $ak_2$ and consequently also $a$. Thus, $p$ divides $\mathbf u_2=a\mathbf u_1+b\mathbf w$, which is a contradiction.
\end{proof}
The next lemma is the three-dimensional version of Lemma~\ref{gcd2}.
\begin{lemma}\label{gcd3}
If the greatest common divisor of the vectors of a sublattice $K$ of a three-dimensional lattice $\Lambda$ is $k$, then $K$ contains a vector whose greatest divisor is $k$.
\end{lemma}
\begin{proof}
Denote the generators of $K$ by $\mathbf v_1,\mathbf v_2,\mathbf v_3$ and their greatest divisors by $k_1,k_2,k_3$, respectively. We have that $k=\gcd(k_1,k_2,k_3)$. Let $M$ be the sublattice generated by $\mathbf v_1$ and $\mathbf v_2$. By Lemma~\ref{gcd2}, there is a vector $\mathbf v$ in $M$ whose greatest divisor is $\gcd(k_1,k_2)$. If we apply Lemma~\ref{gcd2} again to the sublattice generated by $\mathbf v$ and $\mathbf v_3$, we get a vector in $K$ whose greatest divisor is $\gcd(\gcd(k_1,k_2),k_3)=k$.
\end{proof}
We remark that a similar statement holds for lattices of arbitrary dimension, and one can prove it by induction on the dimension.
Now we can characterize the cubic sublattices of $\mathbb Z^3$. We show that every cubic sublattice can be obtained from our construction.
\begin{theorem}\label{cubic_constr}
For an arbitrary cubic sublattice $\Gamma\subseteq\mathbb Z^3$, there exist unique positive integers $k$ and $d$ such that $\Gamma=k\Gamma(\mathbf v,d)$ for a primitive vector $\mathbf v\in\mathbb Z^3$.
\end{theorem}
\begin{proof}
Necessarily, $k$ is the greatest common divisor of the vectors of $\Gamma$, and $d$ is the quotient of the edge length of $\Gamma$ by $k$; this shows the uniqueness of $k$ and $d$. Consider the cubic sublattice $\Gamma'$ such that $k\Gamma'=\Gamma$.
By Lemma~\ref{gcd3}, there exists a primitive vector $\mathbf v\in\Gamma'$. Theorem~\ref{main} implies $\Gamma'=\Gamma(\mathbf v,d)$, which gives the statement.
\end{proof}
When a cubic sublattice is constructed from a primitive vector, this vector is also primitive in the cubic sublattice. Our next goal is to show that every primitive vector can arise this way. It requires two lemmas.
\begin{lemma}\label{d^2}
If a cubic sublattice $\Gamma\subseteq\mathbb Z^3$ has edge length $d$, then it contains the vectors divisible by $d^2$, i.e., the inclusion $d^2\mathbb Z^3\subseteq\Gamma$ is fulfilled.
\end{lemma}
\begin{proof}
When $\Gamma=\Gamma(\mathbf v,d)$ for some primitive vector $\mathbf v$, we have $d^2\mathbf a\in\Gamma(\mathbf v,d)$ for any $\mathbf a\in\mathbb Z^3$ by the construction.
In the general case, $\Gamma=k\Gamma(\mathbf v,d/k)$ for some integer $k$ by Theorem~\ref{cubic_constr}. For an arbitrary $\mathbf a\in\mathbb Z^3$, we obtain $(d/k)^2\mathbf a\in\Gamma(\mathbf v,d/k)$, hence $d^2\mathbf a\in \Gamma$.
\end{proof}
The second lemma is a claim from number theory.
\begin{lemma}\label{prime}
For an odd prime number $p$, there exists a primitive vector in $\mathbb Z^3$ whose squared length is divisible by $p^2$.
\end{lemma}
\begin{proof}
It is enough to construct a vector whose squared length is divisible by $p^2$ and which is not divisible by $p$, as we can divide it by its greatest divisor.
If $-1$ is a quadratic residue modulo $p$ (when $p=4k+1$ for a positive integer $k$), then there exists a positive integer $x$ such that $x^2=ap-1$ for some integer $a$. As $p$ is odd, there exists an integer $b$ such that $-2b\equiv a$ modulo $p$. Then $(0,x,bp+1)$ is a suitable vector.
When $-1$ is not a quadratic residue (if $p=4k+3$), we search for $x,y\in\mathbb Z_p$ such that $x^2+y^2=-1$. If there were no such elements, then each of the pairs
\[(1,p-2),(2,p-3),\dots,((p-1)/2,(p-1)/2)\]
would contain at most one quadratic residue (in particular, the last one would contain none), which would contradict the well-known fact that there exist $(p-1)/2$ quadratic residues modulo $p$. Denoting the corresponding integers by $x,y$ as well, we obtain that $x^2+y^2=ap-1$ for some integer $a$. As in the previous case, we have an integer $b$ such that $-2b\equiv a$ modulo $p$; thus, the vector $(x,y,bp+1)$ is a suitable vector.
\end{proof}
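The proof is constructive and translates into a short program. The sketch below (the function name is ours) finds $x,y$ with $x^2+y^2\equiv-1\pmod p$ by search, adjusts the third coordinate as in the proof, and divides by the greatest divisor:

```python
from math import gcd

def primitive_with_p2_norm(p):
    # odd prime p: return a primitive vector whose squared length is divisible by p^2
    x, y = next((x, y) for x in range(p) for y in range(p)
                if (x*x + y*y + 1) % p == 0)      # x^2 + y^2 = a*p - 1
    a = (x*x + y*y + 1) // p
    b = (-a * pow(2, -1, p)) % p                  # -2b = a (mod p); 2 is invertible, p odd
    w = (x, y, b*p + 1)                           # squared length = p*(a+2b) + b^2*p^2
    g = gcd(gcd(w[0], w[1]), w[2])                # p does not divide b*p + 1, so p does not divide g
    return tuple(c // g for c in w)
```

For example, $p=3$ yields $(1,1,4)$ with squared length $18$, divisible by $9$.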
Lemma~\ref{prime} does not hold for $p=2$. Indeed, square numbers are congruent to $0$ or $1$ modulo $4$, so if $4$ divides the sum of the squares of the coordinates, then all of the coordinates are even, hence the vector is divisible by $2$.
If we construct a cubic sublattice in a cubic sublattice $\Lambda$ instead of $\mathbb Z^3$, we will use the notation $\Gamma_\Lambda(\mathbf v,d)$ for the cubic sublattice of $\Lambda$ with edge length $d$ which contains the primitive vector $\mathbf v\in\Lambda$.
\begin{theorem}\label{thm:reverse}
For an arbitrary primitive vector $\mathbf v=(v_1,v_2,v_3)\in\mathbb Z^3$ and an odd $d$, there exists a primitive vector $\mathbf u\in\mathbb Z^3$ for which we can choose a cubic basis of the cubic sublattice $\Gamma(\mathbf u,d)$ so that the coordinates of $\mathbf u$ are $(v_1,v_2,v_3)$.
\end{theorem}
\begin{proof}
First we show that it is enough to prove the theorem when $d$ is an odd prime number. Indeed, write $d=d_1d_2$. Applying the theorem to a vector $\mathbf v=(v_1,v_2,v_3)\in\mathbb Z^3$ and to $d_1$ yields a primitive vector $\mathbf u\in\mathbb Z^3$. If we apply the theorem again to $\mathbf u$ and to $d_2$, we get a primitive vector $\mathbf t\in\mathbb Z^3$.
The sublattice $\Gamma_{\Gamma(\mathbf t,d_2)}(\mathbf u,d_1)$, where $\mathbf u$ is regarded as the coordinate vector of $\mathbf t$ in a suitable cubic basis of $\Gamma(\mathbf t,d_2)$, is a cubic sublattice with edge length $d_1d_2$ in which the coordinates of $\mathbf t$ are $(v_1,v_2,v_3)$. The uniqueness part of Theorem~\ref{main} implies that this sublattice is $\Gamma(\mathbf t,d_1d_2)$. In the following, we suppose that $d$ is an odd prime number.
As $\mathbf v$ is primitive, $d$ does not divide at least one of its coordinates, say the first coordinate $v_1$.
Lemma~\ref{prime} provides a primitive vector $\mathbf w=(w_1,w_2,w_3)$ whose squared length is divisible by $d^2$. Reordering the coordinates, we can achieve that $d$ does not divide the first coordinate $w_1$. Set $\tilde{\mathbf w}=(-w_1,w_2,w_3)$. Now one of the dot products $\mathbf v\cdot\mathbf w$ or $\mathbf v\cdot\tilde{\mathbf w}$ is not divisible by $d$, otherwise $d$ would divide their difference $2v_1w_1$, which would contradict our assumptions on $v_1$ and $w_1$ as $d$ is odd. Assume that $\mathbf v\cdot\mathbf w$ is not divisible by $d$.
Finally, construct the cubic sublattice $\Gamma(\mathbf w,d)$.
By Lemma~\ref{d^2}, it contains the vector $d^2\mathbf v$. Consider this vector as a vector of the cubic sublattice $\Gamma(\mathbf w,d)$, and denote it by $\mathbf u$.
This vector is primitive in $\Gamma(\mathbf w,d)$: as $d$ is prime and $\mathbf v$ is primitive, the only candidate for a prime divisor of $\mathbf u=d^2\mathbf v$ is $d$, and $d\mathbf v$ is not contained in $\Gamma(\mathbf w,d)$ by Lemma~\ref{p1}.
The sublattice $d^2\mathbb Z^3$ is a cubic sublattice in $\Gamma(\mathbf w,d)$ with edge length $d$ and it contains $\mathbf u$. Thus, the cubic sublattice $\Gamma_{\Gamma(\mathbf w,d)}(\mathbf u,d)$ is $d^2\mathbb Z^3$. Choosing the basis $(d^2\mathbf e^1,d^2\mathbf e^2,d^2\mathbf e^3)$ of $d^2\mathbb Z^3$, we have that the coordinates of $\mathbf u$ are $(v_1,v_2,v_3)$.
\end{proof}
Theorem~\ref{thm:reverse} allows us to formulate the converse of the number-theoretic corollary mentioned in the Introduction. If an integer $m$ is a sum of three coprime squares, then for an odd $d$, the number $d^2m$ is also a sum of three coprime squares. The assumption on the oddness of $d$ is necessary, as a number divisible by $4$ cannot be a sum of three coprime squares by the investigation of the remainders modulo $4$.
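This corollary is easy to spot-check by brute force with a small helper (our code, not part of the text) that searches for representations as sums of three coprime squares:

```python
from math import gcd, isqrt

def sum_of_three_coprime_squares(n):
    # brute force: does n = x^2 + y^2 + z^2 hold with gcd(x, y, z) = 1?
    for x in range(isqrt(n) + 1):
        for y in range(x, isqrt(n - x*x) + 1):
            r = n - x*x - y*y
            z = isqrt(r)
            if z*z == r and z >= y and gcd(gcd(x, y), z) == 1:
                return True
    return False
```

For instance, $9=1+4+4$ and $81=49+16+16$ are sums of three coprime squares, while $4$ and $28$ are not.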
\section{Cubic sublattices as partially ordered set}\label{poset}
The inclusion gives a partial order on the cubic sublattices of $\mathbb Z^3$, so they form a partially ordered set. We decide whether this set is a lattice in the algebraic sense.
Similarly to the cubic sublattices, we can investigate the square sublattices of $\mathbb Z^2$. A square sublattice can be characterized by its invariance under the rotation by $90^\circ$. Therefore the intersection of two square sublattices and the sublattice generated by their union are also square sublattices. These provide the infimum and the supremum of the given square sublattices, so the partially ordered set of the square sublattices of $\mathbb Z^2$ forms a lattice in the algebraic sense.
We show that the partially ordered set of the cubic sublattices of $\mathbb Z^3$ is not a lattice in algebraic sense. The squared length of $\mathbf v=(1,2,2)$ is $9=3^2$, so there exists the cubic sublattice $\Gamma(\mathbf v,3)\leq\mathbb Z^3$.
\begin{figure}[h]
\centering
\includegraphics{lattice}
\caption{This example shows that the partially ordered set of the cubic sublattices of $\mathbb Z^3$ does not form a lattice in algebraic sense.}
\label{fig:poset}
\end{figure}
We have the inclusions $9\mathbb Z^3\leq3\mathbb Z^3\leq\mathbb Z^3$ and $3\Gamma(\mathbf v,3)\leq\Gamma(\mathbf v,3)\leq\mathbb Z^3$, and from the latter, $3\Gamma(\mathbf v,3)\leq3\mathbb Z^3$, see Figure~\ref{fig:poset}. Lemma~\ref{d^2} gives $9\mathbb Z^3\leq\Gamma(\mathbf v,3)$. In each of the mentioned inclusions, the relative edge length of the smaller cubic sublattice in the larger one is $3$, so no cubic sublattice lies strictly between them. This means that $3\Gamma(\mathbf v,3)$ and $9\mathbb Z^3$ have no supremum, and $\Gamma(\mathbf v,3)$ and $3\mathbb Z^3$ have no infimum.
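The inclusions and incomparabilities in this example reduce to a handful of membership tests; a Python sketch (ours):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

v, d = (1, 2, 2), 3

def in_Gamma(a):            # Gamma(v,3)
    c = cross(a, v)
    return (all(x % d == 0 for x in c)
            and all(x % d**2 == 0 for x in cross(c, v)))

def in_3Gamma(a):           # 3 * Gamma(v,3)
    return (all(x % d == 0 for x in a)
            and in_Gamma(tuple(x // d for x in a)))

def in_9Z(a):               # 9Z^3
    return all(x % 9 == 0 for x in a)
```

Checking the generators of $9\mathbb Z^3$ confirms $9\mathbb Z^3\leq\Gamma(\mathbf v,3)$, while $3\mathbf v=(3,6,6)$ and $(9,0,0)$ witness that $3\Gamma(\mathbf v,3)$ and $9\mathbb Z^3$ are incomparable.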
However, the partially ordered set of such cubic sublattices that contain a given primitive vector $\mathbf v\in\mathbb Z^3$ is a lattice in algebraic sense. If the squared length of $\mathbf v$ is $kd^2$, where $k$ is square-free, then this lattice is isomorphic to the lattice of the divisors of $d$.
\section*{Acknowledgment}
The author is immensely grateful to G\'abor Moussong, who recommended this topic and proposed searching for an elementary proof of Theorem~\ref{main}.
{\em \bf Introduction:} Whether there exists a consistent extension of General Relativity
by a mass term is a basic question of classical field theory. A small graviton
mass could also be of significant physical interest,
notably for the cosmological constant problem.
A ghost-free linear theory of massive
spin-2 -- the Fierz-Pauli (FP) model \cite{FP} -- had been notoriously hard to generalize to the
nonlinear level \cite{BD}: the Hamiltonian constraint gets lost in general and,
as a result, the sixth degree of freedom -- the Boulware-Deser (BD) ghost --
emerges as a mode propagating on otherwise physically meaningful local
backgrounds ({\it e.g.}, on a background of a lump of matter). Part of this problem
can be seen in the effective field theory (EFT) approach to massive gravity \cite{AGS}
in the decoupling limit \cite{AGS,Creminelli}. There, the problem manifests
itself in the Lagrangian for the helicity-0 component of the massive graviton.
This Lagrangian generically contains nonlinear terms with more than two time derivatives.
The latter give rise to the sixth degree of freedom on local
backgrounds, while in general, these terms lead to the loss of well-posedness of the Cauchy problem
for the helicity-0 field theory \cite{AGS,Creminelli}.
A step forward has been made recently in \cite{deRham:2010ik} where it was shown that:
(a) the coefficients of the EFT can be chosen
so that the decoupling limit Lagrangian is ghost-free; this involves
choosing the ``appropriate coefficients'' order-by-order,
and an algorithm was set for this procedure to an arbitrary order;
(b) once the ``appropriate coefficients'' are chosen in the effective Lagrangian,
in the decoupling limit only a few terms up to the quartic order survive,
all the higher order terms vanish identically. Moreover, the surviving
terms are unique as their structure is fixed by symmetries \cite{deRham:2010ik,dRGHP}.
In the present work we build on the above two points,
and go far beyond them. In particular:
(1) We construct Lagrangians that
{\it automatically} produce the ``appropriate coefficients'' once expanded in powers of the
fields; these give rise to theories that are ghost-free automatically
to all orders in the decoupling limit.
(2) Using the obtained Lagrangians we study the issue of the BD ghost
away from the decoupling limit; we show that the Hamiltonian constraint
is maintained at least up to and including quartic order, hence
excluding the possibility of the BD ghost up to this order. We also express the exact potential
for gravity in a simplified (1+1)-dimensional model and show explicitly how
the constraint is preserved to all orders.
The present framework provides explicit resummation of the nonlinear terms in the
EFT Lagrangian of massive spin-2. Another way to resum these terms
is to use an auxiliary extra dimension \cite{GG,Claudia}.
The latter has so far been shown to give
the ghost-free decoupling limit only up to the cubic order
\cite{cubic}. In \cite{GG,Claudia} the
resummation is obtained via a second-order non-linear partial
differential equation. The present approach achieves this
via an algebraic non-linear equation.
{\em \bf Formalism:}
Define the tensor $H_{\mu \nu}$ as the covariantization of the
metric perturbation,
$ g_{\mu \nu}=\eta_{\mu \nu}+h_{\mu \nu}=
H_{\mu \nu}+\eta_{ab}\partial_\mu \phi^a \partial_\nu \phi^b$, where the four St\"uckelberg fields $\phi^a$
transform as scalars, and $\eta_{ab}={\rm diag}(-1,1,1,1)$ \cite{AGS}.
The helicity-0 mode $\pi$ of the graviton can be extracted by
expressing $\phi^a= (x^a-\eta^{a\mu}\partial_\mu \pi)$, such that
\begin{eqnarray}
H_{\mu \nu}=h_{\mu \nu}+2\Pi_{\mu \nu}-\eta^{\alpha\beta}\Pi_{\mu\alpha}\Pi_{\beta\nu},~~~\Pi_{\mu \nu}\equiv \partial_\mu \partial_\nu \pi.
\end{eqnarray}
We may therefore define the following quantity
\begin{eqnarray}
\label{Kmn}
{\cal K}^\mu_\nu (g,H)&=&\delta^\mu_\nu -\sqrt{\delta^\mu_\nu - H^\mu_\nu}
=-\sum_{n=1}^{\infty}d_n ( H^n)^\mu_\nu\,,\\
{\rm with} && d_n=\frac{(2n)!}{(1-2n)(n!)^2 4^n}\,.
\end{eqnarray}
Here $H^\mu_\nu = g^{\mu\alpha}H_{\alpha\nu}$, and
$(H^n)^\mu_\nu=H^\mu_{\alpha_1}H^{\alpha_1}_{\alpha_2}\cdots H^{\alpha_{n-1}}_\nu$ denotes the product
of $n$ tensors $H^\alpha_\beta$.
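The coefficients $d_n$ can be verified with exact rational arithmetic: since $1-\sqrt{1-x}=-\sum_{n\ge1}d_n x^n$ at the scalar level, they must reproduce the Taylor coefficients of $\sqrt{1-x}$. A short Python check (ours, not part of the paper):

```python
from fractions import Fraction
from math import factorial

def d(n):
    # d_n = (2n)! / ((1-2n) (n!)^2 4^n), as in the definition of K
    return Fraction(factorial(2*n), (1 - 2*n) * factorial(n)**2 * 4**n)

def sqrt_coeff(n):
    # coefficient of x^n in sqrt(1-x): binom(1/2, n) * (-1)^n
    c = Fraction(1)
    for k in range(n):
        c *= (Fraction(1, 2) - k) / (k + 1)
    return c * (-1)**n
```

In particular $d_1=-1/2$, $d_2=-1/8$, $d_3=-1/16$, $d_4=-5/128$.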
Below, unless stated otherwise, all the contractions
are made using the metric $g_{\mu\nu}$. The
tensor ${\cal K}_{\mu \nu}= g_{\mu\alpha}{\cal K}^\alpha_\nu$ is defined in such a way that
\begin{eqnarray}
{\cal K}_{\mu \nu}(g,H)\Big|_{h_{\mu \nu}=0}\equiv \Pi_{\mu \nu}\,.
\end{eqnarray}
We use the same notation as in \cite{Creminelli}
where square brackets $[\ldots]$ represent the trace of a tensor contracted using the Minkowski
metric, {\it e.g.} $[\Pi]=\eta^{\mu\nu}\Pi_{\mu\nu}$ and $[\Pi^2]=\eta^{\alpha \beta}\eta^{\mu\nu}\Pi_{\alpha\mu}\Pi_{\beta\nu}$, while angle brackets $\langle \ldots \rangle$ represent the trace with respect to the physical metric $g_{\mu \nu}$, so that $\langle H \rangle = g^{\mu\nu}H_{\mu\nu}$ and $\langle H^2 \rangle = g^{\alpha \beta}g^{\mu\nu}H_{\alpha\mu}H_{\beta\nu}$.
We are first interested in the decoupling limit. For that, let us define the canonically
normalized variables, $\hat \pi=\Lambda_3^3 \pi$ with $\Lambda_3^3=m^2 M_{\rm Pl}$ and $\hat h_{\mu \nu}=M_{\rm Pl} h_{\mu \nu}$.
The limit is then obtained by taking $M_{\rm Pl} \to \infty$ and $m\to 0$ while keeping
$\hat \pi$, $\hat h_{\mu \nu}$, and the scale $\Lambda_3$ fixed. First, we construct an explicit
example of a non-linear theory that bears no ghosts in the decoupling limit, and then
give a general formulation and show the absence of the BD ghost beyond the decoupling limit up to quartic order.
{\em \bf Massive Gravity:} The consistency of the
Fierz-Pauli combination relies on the fact that the Lagrangian
\begin{eqnarray}
\label{L2der}
\mathcal{L}^{(2)}_{\rm der}&=&[\Pi]^2-[\Pi^2]\,,
\end{eqnarray}
is a total derivative.
To ensure that no ghost appears in the decoupling limit, it is sufficient
to extend $\mathcal{L}^{(2)}_{\rm der}$ covariantly away from $h_{\mu \nu}=0$, {\it i.e. } replace $[\Pi]$ and $[\Pi^2]$
by $\<{\cal K}\>$ and $\<{\cal K}^2\>$ respectively, so that the total Lagrangian reads as
\begin{eqnarray}
\mathcal{L}=\frac{M_{\rm Pl}^2}{2}\sqrt{-g}\(R-\frac{m^2}{4}\, \mathcal{U}(g,H)\)\,,
\label{L2}
\end{eqnarray}
with the potential $\mathcal{U}$ expressed as an expansion in $H$ as
\begin{eqnarray}
\label{U2}
\mathcal{U}(g,H)&=&-4\(\langle {\cal K}\rangle ^2-\langle {\cal K}^2\rangle\)\\
&=&-4\big(\sum_{n\ge1}d_n\langle H^n\rangle\big)^2
-8\sum_{n\ge 2}d_n\langle H^n\rangle\,.\nonumber
\end{eqnarray}
Expanding this expression to quintic order,
\begin{eqnarray}
&&\hspace{-5pt}\mathcal{U}(g,H)=\(\<H^2\>-\<H\>^2\)-\frac{1}{2}\(\<H\>\<H^2\>-\<H^3\>\)\hspace{5pt}\\
&&\hspace{-5pt}-\frac{1}{16}\(\<H^2\>^2+4\<H\>\<H^3\>-5\<H^4\>\)\nonumber\\
&&\hspace{-5pt}-\frac{1}{32}\(2\<H^2\>\<H^3\>+5\<H\>\<H^4\>-7\<H^5\>\)+\cdots \nonumber \,,
\end{eqnarray}
we recover the decoupling limit presented in \cite{deRham:2010ik} for the special choice of coefficients $c_3=d_5=f_7=0$.
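The quintic expansion above can be verified numerically. The sketch below (an illustration, not part of the original derivation) works in Euclidean signature with the identity metric in place of $g_{\mu\nu}$ and a small symmetric $H$, computes ${\cal K}=\mathbb{1}-\sqrt{\mathbb{1}-H}$ exactly via an eigendecomposition, and compares $\mathcal{U}=-4(\<{\cal K}\>^2-\<{\cal K}^2\>)$ against the series term by term:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
H = 1e-3 * (A + A.T)                        # small symmetric perturbation

# K = 1 - sqrt(1 - H), computed exactly via eigendecomposition of 1 - H
w, V = np.linalg.eigh(np.eye(4) - H)
K = np.eye(4) - (V * np.sqrt(w)) @ V.T

tr = np.trace
U_exact = -4.0 * (tr(K)**2 - tr(K @ K))     # U = -4(<K>^2 - <K^2>)

h = [tr(np.linalg.matrix_power(H, n)) for n in range(6)]   # h[n] = <H^n>
U_series = ((h[2] - h[1]**2)
            - (h[1]*h[2] - h[3]) / 2
            - (h[2]**2 + 4*h[1]*h[3] - 5*h[4]) / 16
            - (2*h[2]*h[3] + 5*h[1]*h[4] - 7*h[5]) / 32)

assert abs(U_exact - U_series) < 1e-12      # agreement up to O(H^6)
```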
Note that the Lagrangian (\ref {L2}) with (\ref {U2}) can be obtained
from the Lagrangian
\begin{eqnarray}
\mathcal{L}_\lambda =\frac{M_{\rm Pl}^2}{2}\sqrt{-g}\(R- {m^2}({\cal K}_{\mu\nu}^2 - {\cal K}^2)\) \nonumber
\\
+ \sqrt{-g} \lambda^{\mu\nu}( g^{\alpha\beta}{\cal K}_{\mu\alpha}{\cal K}_{\beta\nu} -2 {\cal K}_{\mu\nu} +H_{\mu\nu}),
\label{lambda}
\end{eqnarray}
where ${\cal K}_{\mu\nu} $ is an independent tensor field that gets related to $H_{\mu\nu}$ as in
(\ref {Kmn}) due to the constraint enforced by the Lagrange multiplier $\lambda^{\mu\nu}$.
Note that the expression (\ref {Kmn}) can be rewritten as ${\cal K}^\mu_\nu = \delta^\mu_\nu
- \sqrt{\partial^\mu \phi^a \partial_\nu \phi^b \eta_{ab}}$, which makes the
square root structure of the full Lagrangian manifest.
{\em \bf Decoupling limit:}
It is straightforward to notice that the leading contribution to the decoupling limit
\begin{eqnarray}
\sqrt{-g}\, \mathcal{U}(g,H)\Big|_{h_{\mu \nu}=0}&=&-4\((\Box \pi)^2-(\partial_\alpha \partial_\beta \pi)^2\),
\end{eqnarray}
is a total derivative. The resulting interaction Lagrangian in the decoupling limit is then given by
\cite{deRham:2010ik}
\begin{eqnarray}
\label{Lint def}
\mathcal{L}_{\rm int}
&=& \hat h_{\mu \nu} \bar X^{\mu\nu}\,,
\end{eqnarray}
with
\begin{eqnarray}
\label{Xdef}
\bar X^{\mu\nu}=
-\frac{M_{\rm Pl}^2m^2}{8}\frac{\delta}{\delta h_{\mu \nu}}\( \sqrt{-g}\, \mathcal{U}(g,H)\)\Big|_{h_{\mu \nu}=0}\,.
\end{eqnarray}
Using the relations
\begin{eqnarray}
\frac{\delta \<{\cal K}(g,H)\>}{\delta h_{\mu \nu}}&=&\frac12 \(g^{\mu\nu}-{\cal K}^{\mu\nu}\), \\
\frac{\delta \<{\cal K}(g,H)^2\>}{\delta h_{\mu \nu}}&=&H^{\mu\nu}-{\cal K}^{\mu\nu}\,,
\end{eqnarray}
the expression for $\bar X$ simplifies to
\begin{eqnarray}
\bar X_{\mu \nu}&=&{1\over 2}
\Lambda_3^3\Big[\Pi\eta_{\mu \nu}-\Pi_{\mu \nu}+\Pi_{\mu \nu}^2 -\Pi \Pi_{\mu \nu}\\
&+&\frac 12 (\Pi^2-\Pi_{\alpha \beta}^2)\eta_{\mu \nu}\Big]\nonumber \,.
\end{eqnarray}
The tensor $\bar X_{\mu \nu}$ is conserved and gives rise to at most second order
derivative terms in the equations of motion. This tensor can be expressed as the product of two epsilon
tensors appropriately contracted with powers of $\Pi_{\mu\nu}$ \cite{dRGHP}.
For the potential \eqref{U2}, the Lagrangian in the decoupling limit is then given by
(see Ref.~\cite{deRham:2010ik})
\begin{eqnarray}
\label{L decoupling}
\mathcal{L}^{\rm lim}_{\Lambda_3}=-\frac 1 4 \hat h^{\mu\nu}(\hat{\mathcal{E}} \hat h)_{\mu \nu}
+\hat h_{\mu \nu} \bar X^{\mu\nu}\,,
\end{eqnarray}
and this result is exact ({\it i.e. } no higher order corrections). Notice that this is also in agreement with the results of \cite{deRham:2010ik} up to quintic order, for the special case $c_3=d_5=f_7=0$, but we explicitly demonstrate here that this result remains valid to all orders.
\vspace{10pt}
{\em \bf General formulation:}
As mentioned in \cite{deRham:2010ik}, at each order in the expansion there exists a total derivative contribution
\begin{eqnarray}
\label{Lder n}
\mathcal{L}_{\rm der}^{(n)}(\Pi)=-\sum_{m=1}^{n}(-1)^m\frac{(n-1)!}{(n-m)!}\,
[\Pi^{m}]\,\mathcal{L}^{(n-m)}_{\rm der}(\Pi)\,,
\end{eqnarray}
with $\mathcal{L}^{(0)}_{\rm der}(\Pi)=1$ and $\mathcal{L}^{(1)}_{\rm der}(\Pi)=[\Pi]$.
These total derivatives generalize the ``Fierz-Pauli'' structure used previously to all orders.
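The recursion \eqref{Lder n} reproduces $n!$ times the $n$-th elementary symmetric polynomial of the eigenvalues of $\Pi$, which makes the total-derivative structure, and the vanishing of $\mathcal{L}^{(n)}_{\rm der}$ for $n>4$ in four dimensions, easy to confirm numerically. A sketch in Euclidean signature with a symmetric matrix standing in for $\Pi_{\mu\nu}$ (an illustration, not part of the original argument):

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
Pi = 0.5 * (A + A.T)                        # symmetric stand-in for Pi_{mu nu}
lam = np.linalg.eigvalsh(Pi)

p = [np.trace(np.linalg.matrix_power(Pi, m)) for m in range(8)]   # [Pi^m]

L = [1.0, p[1]]                             # L^(0) = 1, L^(1) = [Pi]
for n in range(2, 7):                       # recursion (Lder n)
    L.append(-sum((-1)**m * factorial(n - 1) // factorial(n - m)
                  * p[m] * L[n - m] for m in range(1, n + 1)))

def e(k):                                   # k-th elementary symmetric polynomial
    return sum(np.prod(c) for c in combinations(lam, k))

for n in range(7):
    # e_n = 0 for n > 4 in four dimensions, so L^(5), L^(6) vanish
    assert abs(L[n] - factorial(n) * e(n)) < 1e-8
```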
More generally, the potential of any theory of massive gravity with no ghosts in the decoupling
limit can be expressed non-linearly as
\begin{eqnarray}
\mathcal{U}(g,H)&=&-4\sum_{n\ge2} \alpha_n \, \mathcal{L}^{(n)}_{\rm der}({\cal K})\,,
\end{eqnarray}
where $[\Pi^m]$ in \eqref{Lder n} should be replaced by $\<{\cal K}^m\>$ and expressed in terms
of $g$ and $H$ using \eqref{Kmn}.
Here again this specific structure ensures that the leading contribution to the decoupling
limit is manifestly a total derivative by construction,
\begin{eqnarray}
\sqrt{-g}\, \mathcal{U}(g,H)\Big|_{h_{\mu \nu}=0}=\text{total derivative}\,,
\end{eqnarray}
and the resulting interaction Lagrangian can be derived by noticing the general relation
\begin{eqnarray}
\frac{\delta}{\delta h^{\mu\nu}} \<{\cal K}^n\>\Big|_{h_{\mu \nu}=0}
=\frac n2 \(\Pi_{\mu \nu}^{n-1}-\Pi_{\mu \nu}^{n}\)\,,
\end{eqnarray}
so that
\begin{eqnarray}
&&\hspace{-20pt}\frac{\delta }{\delta h^{\mu\nu}}\(\sqrt{-g}\mathcal{L}_{\rm der}^{(n)}({\cal K})\)
\Big|_{h_{\mu \nu}=0}=\\
&&
\sum_{m=0}^n\frac{(-1)^mn!}{2(n-m)!}\(\Pi^m_{\mu \nu}-\Pi^{m-1}_{\mu \nu}\)\mathcal{L}_{\rm der}^{(n-m)}(\Pi)\,,\nonumber
\end{eqnarray}
using the notation $\Pi^{0}_{\mu \nu}=\eta_{\mu \nu}$ and $\Pi^{-1}_{\mu \nu}=0$.
The decoupling limit Lagrangian is then given by \eqref{L decoupling} with the same definition
\eqref{Xdef} for the tensor $\bar X_{\mu \nu}$, giving here
\begin{eqnarray}
\bar X_{\mu \nu}= {1\over 2} \Lambda_3^3\sum_{n\ge 2}\alpha_n\(X^{(n)}_{\mu \nu}+nX^{(n-1)}_{\mu \nu}\)\,,
\end{eqnarray}
with
\begin{eqnarray}
X^{(n)}_{\mu \nu}=\sum_{m=0}^n(-1)^m\frac{n!}{2(n-m)!}\Pi^m_{\mu \nu} \mathcal{L}_{\rm der}^{(n-m)}(\Pi)\,.
\end{eqnarray}
This is in complete agreement with the results obtained up to quintic order for $\alpha_2=1$,
$\alpha_3=-2c_3$, $\alpha_4=-2^2 d_5$ and
$\alpha_5=-2^3 f_7$. However we emphasize that the results in this paper are now valid to all orders.
The special theory found in \cite{GG,Claudia} corresponds to the specific choices of coefficients
$\alpha_2=1$ and $\alpha_3=-1/2$, see Ref.~\cite{deRham:2010eu}.
Furthermore, at each order the tensors $X_{\mu \nu}^{(n)}$ are given by the recursive relation
\begin{eqnarray}
X^{(n)}_{\mu \nu}=-n \Pi_\mu^{\ \alpha}X^{(n-1)}_{\alpha\nu}+\Pi^{\alpha\beta}X^{(n-1)}_{\alpha\beta}\eta_{\mu \nu}\,,
\end{eqnarray}
with $X^{(0)}_{\mu \nu}=\frac12 \eta_{\mu \nu}$. Since $X^{(4)}_{\mu \nu}\equiv 0$ in four dimensions, the
recursion implies that all the higher tensors vanish as well, $X^{(n)}_{\mu \nu}\equiv 0$ for any $n\ge 4$, and the decoupling limit therefore stops at that
order, as previously implied in \cite{deRham:2010ik}.
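The termination of this tower can be checked directly from the recursion: in four dimensions the Cayley-Hamilton theorem forces $X^{(4)}_{\mu\nu}$ to vanish identically. A Euclidean-signature sketch ($\eta_{\mu\nu}\to\delta_{\mu\nu}$, random symmetric matrix standing in for $\Pi_{\mu\nu}$; an illustration, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
Pi = 0.5 * (A + A.T)            # symmetric stand-in; eta -> identity (Euclidean)
eta = np.eye(4)

X = 0.5 * eta                   # X^(0) = eta / 2
history = [X]
for n in range(1, 5):           # X^(n) = -n Pi.X^(n-1) + (Pi:X^(n-1)) eta
    X = -n * Pi @ X + np.trace(Pi @ X) * eta
    history.append(X)

# Cayley-Hamilton in four dimensions forces X^(4) to vanish identically,
# while the lower tensors are generically nonzero.
assert np.max(np.abs(history[4])) < 1e-10
assert np.max(np.abs(history[3])) > 1e-6
```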
\vspace{10pt}
{\em \bf Boulware-Deser ghost:}
The previous argument ensures the absence of the ghost in the decoupling limit, but it is
conceivable that the ghost reappears beyond the decoupling limit, merely suppressed
by a mass scale larger than $\Lambda_3$. Certain arguments have
hinted towards the existence of a BD ghost \cite{Creminelli}. We reanalyze
these arguments here and show the absence of ghosts within the regime studied.
To compute the Hamiltonian, we fix unitary gauge for which $\pi=0$, such that
\begin{eqnarray}
\<H^n\>=\sum_{\ell\ge 0}(-1)^{\ell}C^{\ell+n-1}_{\ell}[h^{\ell+n}],
\end{eqnarray}
where the $C^n_m$ are the binomial coefficients. We also focus on the case where $\alpha_2=1$ and $\alpha_n=0$ for $n\ge3$.
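The coefficients in this expansion follow from the inverse-metric series: with $\pi=0$ one has $\<H^n\>=\mathrm{tr}\big[\big((\eta+h)^{-1}h\big)^n\big]$, and the binomial series of $(1+h)^{-n}$ produces the $C^{\ell+n-1}_\ell$. A Euclidean numerical sketch (identity metric, small symmetric $h$; an illustration only):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
h = 1e-2 * (B + B.T)                     # small metric perturbation, eta -> identity

ginv_h = np.linalg.solve(np.eye(4) + h, h)   # (g^{-1} H) with H = h in unitary gauge
for n in (1, 2, 3):
    lhs = np.trace(np.linalg.matrix_power(ginv_h, n))   # <H^n>
    rhs = sum((-1)**l * comb(l + n - 1, l)              # binomial coefficients
              * np.trace(np.linalg.matrix_power(h, l + n))
              for l in range(12))
    assert abs(lhs - rhs) < 1e-12
```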
In what follows, we work in terms of the ADM variables \cite{Wald:1984rg},
\begin{eqnarray}
g^{00}=-N^{-2},\ \ g_{0i}=N_i, \ {\rm and}\ g_{ij}=\gamma_{ij}\,,
\end{eqnarray}
with the lapse $N=1+\delta N$, and the three-dimensional metric $\gamma_{ij}=\delta_{ij}+h_{ij}$.
In terms of these variables, the potential is then of the form
\begin{eqnarray}
\sqrt{-g}\, \mathcal{U}&=&\mathcal{A}+\delta N \mathcal{B}
+N_iN_j\big[-2 \delta^{ij}+\mathcal{C}^{ij}\\
&&+\delta N(\delta^{ij}+\mathcal{D}^{ij})-\frac 12 \delta N^2 \delta^{ij}-\frac 18 \delta^{ij}N_k^2\big]\,,\nonumber
\end{eqnarray}
where $\mathcal{A}, \mathcal{B}, \mathcal{C}^{ij}$ and $ \mathcal{D}^{ij}$ are functions of $h_{ij}$ that are at least first order in perturbations and satisfy $\mathcal{C}^{ij}+2 \mathcal{D}^{ij}=-\frac 12 h^{ij}+\mathcal{O}(h_{ij}^2)$; in this section we raise and lower the space-like indices using $\delta_{ij}$.
Notice that this is completely consistent with the analysis performed in \cite{Creminelli}, and corresponds to
setting the coefficients in (43) of \cite{Creminelli} to $A=B=D=E=0$, while $C=-1/2$.
We emphasize here that the presence of a term of the form $C N_i^2 N^2$ does not signal the presence of a ghost,
since any quadratic terms in the lapse disappear after integration over the shift, as we prove in what follows. Indeed, in terms of the redefined shift $n_i$,
\begin{eqnarray}
N_j=\(\delta^i_j+\frac 12 \delta N\delta^{i}_{j}-\frac 18 \delta N h^{i}_{j}\)n_i\equiv L^{i}_{j} n_i\,,
\end{eqnarray}
the Hamiltonian is of the form
\begin{eqnarray}
\label{Ham0}
\mathcal{H}=\frac{M_{\rm Pl}^2}{2}\sqrt{\gamma}\(N R^0+N_j R^j\)+\frac{m^2M_{\rm Pl}^2}{8}\(\mathcal{A}+ \mathcal{B} \delta N\) \\
-\frac{m^2M_{\rm Pl}^2}{4}L^{ij}\(n_in_j-\frac 12 \mathcal{C}^k_i n_j n_k+\frac1{16}n_k^2 n_in_j\)\nonumber,
\end{eqnarray}
up to quartic order in the metric perturbations.
Then, it is straightforward to check that the variation of the Hamiltonian
(\ref{Ham0}) w.r.t. the shift $n_i$ gives an equation which is independent of
$N$, and serves to determine $n_j$. Moreover, the lapse
remains a Lagrange multiplier even after integration over the shift, hence giving rise
to a Hamiltonian constraint on the physical variables. Whether this constraint gives rise to a
secondary constraint, and whether the system should be quantized as a first- or second class system,
is a separate interesting question. The mere existence of the Hamiltonian constraint is sufficient to claim
the absence of the BD ghost to that order
\footnote{The approach of \cite{Slava} is equivalent to the EFT approach of \cite{AGS},
as was shown in \cite{BM}. Hence, the claim of \cite{Alberte:2010qb} on
the presence of the BD ghost in the quartic order, if correct, would contradict our results. However, what has really been
diagnosed in \cite{Alberte:2010qb} is the issue already raised in \cite{Creminelli}, which we
have just addressed. In particular,
the apparent ghost-like nonlinear terms identified in \cite{Alberte:2010qb}, to the extent they were presented
in \cite{Alberte:2010qb}, are in fact removable at that order by a nonlinear field redefinition,
in complete consistency with our results above. This will be discussed in more detail elsewhere.}, yet without breaking Lorentz invariance, \cite{Rubakov:2008nh}.
The Hamiltonian evaluated on the constraint surface is proportional to
$m^2$ and whether or not it is positive semi-definite is determined by the
explicit expressions for $\mathcal{A}, \mathcal{B}, \mathcal{C}^{ij}$ and $\mathcal{D}^{ij}$. Thus, in general
certain backgrounds could have slow tachyon-like instabilities, however, this
is a separate issue from that of the BD ghost that we clarified above.
{\em \bf $\bf (1+1)$-d massive gravity:} Proving the absence of the BD ghost in complete generality beyond
the quartic order is a grand task, which we save for a separate study. However, we can analyze here a
similar issue in a $(1+1)$-d toy-model, where
we consider the Hamiltonian
\begin{eqnarray}
\mathcal{H}=M_{\rm Pl}^2\sqrt{\gamma}\left[N R^0+\gamma^{11}N_1 R_1+\frac{m^2}{4}N \mathcal{U}(g,H)\right]\,,
\end{eqnarray}
with $R^0$ and $R_1$ arbitrary functions of the space-like metric $\gamma_{11}$ and its conjugate momentum,
and the potential $\mathcal{U}$ is given in \eqref{U2}. In $1+1$ dimensions, it is relatively easy to
check that the Hamiltonian then takes the exact form
\begin{eqnarray}
\mathcal{H}&=&M_{\rm Pl}^2\sqrt{\gamma}\Big[
N R^0+\gamma^{11}N_1 R_1- 2m^2 N \Big]\\
&&-2m^2\(1-\sqrt{(\sqrt{\gamma}+N)^2-\gamma^{11}N_1^2}\)\nonumber,
\end{eqnarray}
and seemingly includes terms quadratic in the lapse when working at quartic order and beyond,
\begin{eqnarray}
\mathcal{H}\sim \mathcal{H}_0+ \mathcal{H}_1 N +m^2 N_1^2 N^2 +\cdots \,.
\end{eqnarray}
By stopping the analysis at this point one would infer that the lapse no longer enforces a constraint.
However, this should be determined after integrating the shift. In other words, in terms of the redefined shift $n_1$
\begin{eqnarray}
N_1=n_1 \, \(\gamma_{11}+N \sqrt{\gamma}\)\,,
\end{eqnarray}
the Hamiltonian takes the much more pleasant form
\begin{eqnarray}
\mathcal{H}&=&\sqrt{\gamma}N R^0-2m^2\(1+\sqrt{\gamma}N\)\\
&&+\(\sqrt{\gamma}+N\)\(n_1 R_1+2m^2 \sqrt{1-n_1^2}\)\nonumber\,,
\end{eqnarray}
which remains linear in the lapse, even after integration over the shift. It is again straightforward
to see that the lapse does enforce a constraint, and does so for an ``arbitrary background''.
{\em \bf Outlook:}
We have given a covariant non-linear realization of massive gravity in 4D which: (1)
is automatically free of ghosts in the decoupling limit,
to all orders in non-linearities; (2) keeps the lapse as a Lagrange multiplier away from the decoupling limit, at least up to
quartic order in non-linearities. These findings constitute what we believe is
a very significant step forward, and strongly suggest the existence of an entirely ghost-free classical theory
of massive gravity. However, to prove this statement in complete generality,
two important ingredients are yet missing: (a) proving that the lapse remains a Lagrange multiplier
to all orders; (b) checking whether the secondary constraint is generated or not, and
whether the theory could be canonically quantized as a first or second class system.
For the consistency of the theory at the quantum loop level one would have to
establish the existence of a symmetry which protects this theory against quantum corrections
that could revive the ghost. These points will be explored in a further study.
\emph{Acknowledgements:} We would like to thank M. Berg, C. Deffayet, S. Dubovsky, F. Hassan, D. Pirtskhalava and R. Rosen
for useful discussions. CdR is funded by the SNF and the work of GG was supported by NSF grant PHY-0758032.
CdR thanks the CoPS group at Stockholm University for its hospitality during completion of this work.
\section{Introduction}
Computational methods are rapidly emerging as an essential tool to understand and solve complex engineering problems, complementing the traditional means of experimentation and theory. Richard Feynman's statement \cite{feynman1982simulating} ``with a suitable class of quantum machines you could imitate any quantum system, including the physical world'' has driven our vision towards a machine that can solve computational problems inaccessible to classical computers. Early versions of such quantum computers have already appeared. Mirroring gate-based classical computers, gate-based quantum computers with a small number of qubits have been demonstrated and promise an eventual path towards universal quantum computation. However, noise limits the number of gate operations that can be applied before the quantum states decohere. In parallel, quantum annealers have been developed that provide a significant number of qubits for solving a class of combinatorial optimization problems. In these machines, an Ising Hamiltonian is engineered such that the solution to the computational problem is encoded in its ground state. The system evolves adiabatically to the ground state as governed by the Schr\"odinger equation for the time-dependent Hamiltonian.
The D-Wave system is a quantum annealer that currently provides more than 2000 qubits modeling a transverse-field Ising Hamiltonian, for which finding the ground state is an NP-hard problem. The Hamiltonian, with $q_i$ describing the state of the $i^{th}$ qubit, is given by:
\begin{equation}\label{eq_Ising}
E(\mathbf{q}) = \sum_{i \in \textrm{sites}} H_i q_i + \sum_{(i,j) \in \textrm{links}} J_{ij} q_i q_j
\end{equation}
The Hamiltonian includes self-interaction ($H_i$ is the on-site energy of qubit $q_i$) and site-site interaction terms ($J_{ij}$ are the interaction energies of two connected qubits $q_i$ and $q_j$), with the qubits connected in a Chimera graph. The system is first initialized in the ground state of a Hamiltonian that is known and easy to prepare.
Then, the Hamiltonian is changed slowly such that the system settles into the ground state of the Ising Hamiltonian $E(\mathbf{q})$, according to the adiabatic theorem.
NP-hard combinatorial optimization problems can be encoded through the field and site interaction strengths. The system particularly holds promise for solving graph coloring problems with large sizes ($N$) for which classical polynomial-time algorithms cannot be devised. Many engineering problems in airline scheduling, image segmentation, and pattern recognition have been encoded as graph coloring problems solvable on quantum annealers.
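To make the encoding concrete, the sketch below builds a tiny, hypothetical three-qubit instance of the Ising energy \eqref{eq_Ising} and finds its ground state by exhaustive enumeration, the classical stand-in for what the annealer does physically. The field and coupling values are illustrative, not taken from any D-Wave problem:

```python
from itertools import product

# Illustrative three-qubit instance of E(q) = sum_i H_i q_i + sum_ij J_ij q_i q_j
H = {0: 0.5, 1: -0.2, 2: 0.1}                    # on-site fields H_i
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.3}     # couplings J_ij on a triangle

def energy(q):
    return (sum(H[i] * q[i] for i in H)
            + sum(Jij * q[i] * q[j] for (i, j), Jij in J.items()))

# The annealer relaxes to the minimum-energy spin state; here we enumerate
# all 2^3 labelings in {-1, +1}^3 instead.
ground = min(product((-1, +1), repeat=3), key=energy)
```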
While differential equations are ubiquitous in models of physical phenomena, the use of quantum annealers for scientific computing in solid and fluid mechanics has not yet been explored. Scientific computing mostly involves solving a linear system of equations $Ax=b$ defined on a continuum domain discretized with finite elements. The matrix $A$ is generally a sparse, structured, positive definite matrix obtained by assembling element-level stiffness matrices. In the past, gate-based quantum computing algorithms have been devised to solve systems of linear equations using the QLSA (HHL algorithm \cite{harrow2009quantum}) and its variants \cite{ambainis2010variable,childs2015quantum,clader2013preconditioned,wossnig2018quantum}. This algorithm, unlike a classical solver, does not give a direct solution $x$ but rather allows sampling from the solution vector. Nevertheless, this has spawned several works in differential equation modeling on quantum computers \cite{berry2010quantum,leyton2008quantum,montanaro2016quantum,cao2013quantum,pan2014experimental,steijl2018parallel,costa2017quantum,fillion2017simple,sun2017solving}. The sampling task by itself requires solving $Ax=b$. In the classical setting, the complexity scales with the size of the problem and goes as $O(Nsk\log(1/\epsilon))$ for the conjugate gradient method, where $N$ is the number of unknowns, $k$ is the condition number, $s$ is the sparsity of $A$, and $\epsilon$ is the precision of the solution. On the other hand, the QLSA \cite{harrow2009quantum} has a favorable running time of $O(\log(N)k^2s^2/\epsilon)$, which scales logarithmically with the size of the problem. Quantum annealers are especially attractive for scientific computing with the ability to scale up the simulations to a larger number of qubits. However, algorithms for the solution of differential equations have not yet been devised on these systems \cite{lanlreport}.
The merit of this paper is an algorithm to solve a differential equation on an annealer.
Here, we note that the solution to $Ax=b$ can be encoded in an equivalent minimization problem $\min_x \left(\frac{1}{2} x^TAx - x^Tb\right)$, whose objective contains field and interaction terms similar to an Ising model. Thus, in this paper, we explore mapping of this energy to the Ising Hamiltonian on the D-Wave machine, recognizing that the graph in the D-Wave chip by itself models a finite element mesh-like topology. The element-level stiffness and force vectors are then encoded in the Ising Hamiltonian as interaction weights and field variables. Dirichlet boundary conditions are enforced by modifying field terms to favor one qubit state over another. An illustration of this procedure is presented in Fig \ref{fig_process}. A discretized version of the differential equation is solved using energy minimization on a graph. Direct minimization of energy may hold advantages over the conventional finite element approach in systems which lead to a matrix with large condition numbers, or zero or negative eigenvalues leading to bifurcation events such as buckling in shells and phase transitions \cite{ba2010stability}.\\
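The equivalence between solving $Ax=b$ and minimizing the quadratic form can be sanity-checked on a small symmetric positive definite system. The $3\times3$ matrix below is the standard 1D Laplacian stiffness, used purely for illustration: the gradient of the quadratic energy vanishes exactly at the solution, and any perturbation raises the energy.

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # SPD stiffness-like matrix (illustrative)
b = np.array([1.0, 0.0, 1.0])

f = lambda x: 0.5 * x @ A @ x - x @ b    # quadratic energy functional

x_star = np.linalg.solve(A, b)           # direct solution of A x = b
assert np.allclose(A @ x_star - b, 0.0)  # grad f = A x - b vanishes at x_star

rng = np.random.default_rng(0)
for _ in range(5):                       # perturbations only increase the energy
    assert f(x_star + 1e-3 * rng.standard_normal(3)) > f(x_star)
```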
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.8\textwidth]{process.png}
\caption{Illustration of procedure for solving differential equation}
\label{fig_process}
\end{figure*}
In the solution to a differential equation, the qubits must encode rational numbers. However, each qubit of the Ising lattice carries only two discrete levels (up/down spin) in the ground state. In classical computers, with similar binary (0/1) encoding, anywhere from 32 bits (float) to 80 bits (long double) of memory can be used to encode more than 12 million high-precision variables in 1 GB of memory. In contrast, currently available quantum annealers have a limited number of physical qubits. This restriction makes representing solutions in double precision, as on a classical computer, extremely expensive.
In (\cite{borle2018analyzing,o2018nonnegative}), the problem of minimizing $||Ax-b||$ in the least squares sense was posed by encoding physical qubits to represent rational numbers using a radix 2 representation. This format requires a significant number of physical qubits and connections to represent positive rational numbers and an additional qubit to represent the sign of the number (\cite{borle2018analyzing}).
In comparison, the box algorithm searches within a small discrete set of up/down qubit values with each element of the set mapped to a double precision value, thereby eliminating the need for additional qubits to achieve higher precision.
In this paper, we consider a self-adjoint form of a second order differential equation as the model problem. The problem statement and the relevant mathematical details are presented in Section \ref{sec_math}. The graph representation of the problem is formulated in Section \ref{sec_Graph_model}. The iterative procedure, referred to as the `Box algorithm', is presented in Section \ref{sec_Box}. All procedures are accompanied by numerical examples for elucidation. The algorithm is demonstrated by solving a truss mechanics problem on the D-Wave quantum computer in Section \ref{sec_results}.
\section{Mathematical Preliminaries}\label{sec_math}
A self-adjoint form of a second order differential equation on an interval $(x_l, x_r)$ is defined as,
\begin{eqnarray}\label{eq_strongform}
-(p(x)u'(x))' + q(x)u(x) = f(x) \qquad x_l<x<x_r
\end{eqnarray}
Dirichlet boundary conditions are considered at both ends i.e.
$u(x_l) = u_l \textrm{ and }u(x_r) = u_r$. Well-posedness of this problem requires $p(x) \geq p_{min} > 0 \textrm{ and } q(x) \geq q_{min} \geq 0$.
Furthermore, for convenience, it is assumed that $p,q \in C([x_l,x_r])$ and $f \in L^2([x_l,x_r])$. These conditions are sufficient to show the existence of a unique solution to the weak form \cite{levitan1975introduction}.
\subsection{Functional minimization}
Motivated by the intractability of direct integration of the differential equation (\ref{eq_strongform}), it is often convenient to employ functional minimization techniques. Calculus of variations can be used to observe that the minimization of the functional \eqref{eq_energyform} leads to the strong form described in Eq \eqref{eq_strongform}.
\begin{equation}\label{eq_energyform}
\Pi\left[u\right] = \int_{x_l}^{x_r} \left( \frac{1}{2}pu'^2 + \frac{1}{2}qu^2 -fu \right)dx
\end{equation}
Square integrability of $u$ and its first derivative is required in this definition of $\Pi\left[u\right]$. The implication is that the minimizing solution $u$ lies in the Sobolev space $H^1(\left[x_l, x_r\right])$.
A discrete problem is obtained by using a finite basis for the solution defined in Eq \eqref{eq_basis} which satisfies the Dirichlet boundary conditions. The admissible choices of $\mathbf{a}=(a_0, a_1, ..., a_N)$ satisfy $u_N(x_d)=u_d$ where $x_d$ is a Dirichlet boundary and $u_d$ is the prescribed value at that point. This approximation reduces the infinite dimensional functional minimization problem to finite dimensions. The approximated functional $\Pi_N$ is entirely determined by the representation of $u$ in the finite basis as shown in Eq (\ref{eq_discretenergy}).
It is worth observing that the choice of $\phi_i(x)$ is such that $\phi_i \in H^1(\left[ x_l, x_r \right])$, i.e., any $u_N \in V_N = \mathrm{span}\lbrace\phi_1, \phi_2, ...,\phi_N \rbrace \subseteq H^1(\left[ x_l, x_r \right])$. Additionally, the nested inclusion $V_i \subseteq V_{i+1}$ guarantees convergence of the solution with increasing $N$.
\begin{equation}\label{eq_basis}
u_N(x) = \sum_{i=0}^N a_i \phi_i(x)
\end{equation}
\begin{gather}
\Pi_{N}\left[a_0, ...,a_r,.. ,a_N \right] = \int_{x_l}^{x_r} \frac{p}{2}\left(\sum_{i=0}^{N} a_i \phi'_i\right)^2 \nonumber\\+ \frac{q}{2}\left(\sum_{i=0}^{N} a_i \phi_i\right)^2 -f\left(\sum_{i=0}^{N} a_i \phi_i\right) dx \label{eq_discretenergy}
\end{gather}
As the solution is completely determined by the variable $\mathbf{a}$, the functional minimization of Eq \eqref{eq_discretenergy} is reformulated as Eq \eqref{eq_basol}, where $\mathbf{a}^{b.a.}$ refers to the coefficients of the best approximation of the solution, $u_N$, in the subspace $V_N$.
\begin{equation}\label{eq_basol}
\mathbf{a}^{b.a.} = \argmin_{\mathbf{a}} \Pi_{N}(\mathbf{a})
\end{equation}
\subsection{Finite Element approximation}
Finite element basis provides a popular choice of compactly supported shape functions. For the purpose of simplicity, `tent/hat functions' (defined in Eq \eqref{eq_hatfunction}) are used in this work. The domain is split into $N$ elements with $N+1$ nodes. The generalization to higher order families of piecewise-continuous basis functions is immediate but is omitted for brevity.
\begin{equation}\label{eq_hatfunction}
{\phi}_i(x) = \left\lbrace
\begin{array}{ll}
0, & x < x_{i-1},\\
(x - x_{i-1})/(x_{i} - x_{i-1}),
& x_{i-1} \leq x < x_{i},\\
1 -
(x - x_{i})/(x_{i+1} - x_{i}),
& x_{i} \leq x < x_{i+1},\\
0, & x\geq x_{i+1}{\thinspace .}
\end{array}
\right.
\end{equation}
The use of a compactly supported basis further reduces the complexity by converting the integration over the whole domain into a summation of integrations over smaller elements. It is shown in Section \ref{sec_Graph_model} that this choice of shape functions leads to a relatively sparse graph, which simplifies the computation by reducing the size of the graph optimization problem. The simplified form of $\Pi$ specialized for the hat functions is presented in Eq \eqref{eq_functional_elementsum}.
\begin{gather}
\Pi_N(\mathbf{a})
=\sum_{i=1}^{N} a_{i-1}^2 \left( \int_{x_{i-1}}^{x_{i}} \frac{p}{2} \phi'^2_{i-1} + \frac{q}{2} \phi^2_{i-1} dx \right) \nonumber\\
+ a_{i}^2 \left( \int_{x_{i-1}}^{x_{i}} \frac{p}{2} \phi'^2_{i} + \frac{q}{2} \phi^2_{i} dx \right)\nonumber\\
+ a_{i-1}a_{i} \left(\int_{x_{i-1}}^{x_{i}} p\phi'_{i-1} \phi'_{i} + q\phi_{i-1} \phi_{i} dx\right)\nonumber \\\label{eq_functional_elementsum}
- a_{i-1}\left(\int_{x_{i-1}}^{x_{i}} f\phi_{i-1}dx\right) -a_{i}\left(\int_{x_{i-1}}^{x_{i}} f\phi_{i}dx\right)
\end{gather}
This form of $\Pi$ promotes modularity in computation
and allows expressing the functional as
\begin{equation}\label{aisi}
\Pi_N = \sum_{i=1}^{N} \mathbf{A_i}.\mathbf{S_i}
\end{equation}
where vectors $\mathbf{A_i}\equiv \mathbf{A_i}(a_{i-1},a_i)$ and $\mathbf{S_i}\equiv \mathbf{S_i}(p,q,f)$ are defined for each element in \eqref{eq_functionalIP}. The vector $\mathbf{S_i}$ is independent of state $\mathbf{a}$ and is therefore only computed once in the whole procedure.
\begin{gather}
\mathbf{A_i} = \Big[a^2_{i-1} \quad,\quad a^2_{i} \quad,\quad a_{i-1}a_{i} \quad,\quad a_{i-1}\quad,\quad a_{i} \Big]^T \nonumber\\
\mathbf{S_i} = \Bigg[ \int_{x_{i-1}}^{x_{i}} \frac{p}{2} \phi'^2_{i-1} + \frac{q}{2} \phi^2_{i-1} dx \quad,\quad \int_{x_{i-1}}^{x_{i}} \frac{p}{2} \phi'^2_{i} + \frac{q}{2} \phi^2_{i} dx \quad, \nonumber \\
\int_{x_{i-1}}^{x_{i}} p\phi'_{i-1} \phi'_{i} + q\phi_{i-1} \phi_{i} dx \quad,\quad -\int_{x_{i-1}}^{x_{i}} f\phi_{i-1}dx \quad,\nonumber \\
\quad -\int_{x_{i-1}}^{x_{i}} f\phi_{i}dx \Bigg]^T\label{eq_functionalIP}
\end{gather}
\vspace{5pt}
\hrule
\vspace{5pt}
\begin{center}
Example
\end{center}
Consider the differential equation with boundary conditions $u(0)=0$ and $u(1) = 1$.
\begin{gather*}
\frac{d^2 u }{dx^2} = 0 \qquad 0<x<1
\end{gather*}
Functional:
\begin{eqnarray*}
\Pi[u] = \frac{1}{2}\int_0^1 u'^2 dx
\end{eqnarray*}
For simplicity, consider a grid with a uniform mesh of 2 elements and 3 nodes:
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{2elemgrid.png}
\end{figure}
Using linear interpolants for the elements,
\begin{eqnarray*}
u(x) = \left\{\begin{matrix}
a_0 (1 - 2x)+ a_1 (2x) & 0<x\leq 0.5\\\\
a_1 (2 - 2x)+ a_2 (2x-1) & 0.5<x\leq 1\\
\end{matrix}\right.
\end{eqnarray*}
The functional with the FE discretization:
\begin{eqnarray*}
\Pi_N(\textbf{a}) = (a_0-a_1)^2 + (a_1-a_2)^2
\end{eqnarray*}
Modular representation of functional ($\Pi_N = \mathbf{A_1}.\mathbf{S_1} + \mathbf{A_2}.\mathbf{S_2}$):
\begin{gather*}
\mathbf{A_1} = \Big[a^2_{0} \quad,\quad a^2_{1} \quad,\quad a_{0}a_{1} \quad,\quad a_{0}\quad,\quad a_{1} \Big]^T \\
\mathbf{A_2} = \Big[a^2_{1} \quad,\quad a^2_{2} \quad,\quad a_{1}a_{2} \quad,\quad a_{1}\quad,\quad a_{2} \Big]^T \\
\mathbf{S_1} = \mathbf{S_2} = \Big[1 \quad,\quad 1 \quad,\quad -2 \quad,\quad 0 \quad,\quad 0 \Big]^T
\end{gather*}
\vspace{5pt}
\hrule
\vspace{5pt}
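The element decomposition of the worked example can be checked numerically: with $p=1$, $q=0$, $f=0$ and element length $h=0.5$, the hat-function derivatives are $\mp 1/h$, the closed-form integrals give $\mathbf{S_1}=\mathbf{S_2}=[1,1,-2,0,0]^T$, and the modular sum $\sum_i \mathbf{A_i}\cdot\mathbf{S_i}$ reproduces $(a_0-a_1)^2+(a_1-a_2)^2$. A sketch (illustrative, not part of the derivation):

```python
import numpy as np

h = 0.5                                     # element length
S = np.array([
    0.5 * (1/h)**2 * h,                     # int p/2 * phi_left'^2 dx
    0.5 * (1/h)**2 * h,                     # int p/2 * phi_right'^2 dx
    (-1/h) * (1/h) * h,                     # int p * phi_left' phi_right' dx
    0.0, 0.0,                               # -int f phi dx terms (f = 0)
])
assert np.allclose(S, [1.0, 1.0, -2.0, 0.0, 0.0])

def A_vec(al, ar):                          # element vector A_i(a_{i-1}, a_i)
    return np.array([al*al, ar*ar, al*ar, al, ar])

rng = np.random.default_rng(0)
for _ in range(5):                          # Pi_N = A_1.S_1 + A_2.S_2
    a0, a1, a2 = rng.standard_normal(3)
    Pi_N = A_vec(a0, a1) @ S + A_vec(a1, a2) @ S
    assert np.isclose(Pi_N, (a0 - a1)**2 + (a1 - a2)**2)
```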
\section{Graph Coloring Problem}\label{sec_Graph_model}
Quantum annealing methods are tailored to find the lowest energy states of an Ising system defined in Eq \eqref{eq_Ising}. The Ising Hamiltonian defines a binary graph coloring problem with each vertex of the graph, or qubit, labeled as $+1$ or $-1$. The values of the qubits determine the free variable, in this case, $\mathbf{a}$. The parameters $H_i$ and $J_{ij}$ are defined such that the Ising Hamiltonian, for a labeling representing the state $\mathbf{a}$, corresponds to the functional $\Pi_N(\mathbf{a})$. These problems, namely, the representation of the state and the estimation of parameters, are addressed in this section.
\subsection{Representation of State}
Representation of a functional in terms of continuous variables is not feasible on quantum architectures. Due to this limitation, the values of each $a_i$ ($i^{th}$ component of $\mathbf{a}$) are chosen from a finite set of values based on the labeling of qubits. The representation presented here permits 3 possible values of $a_i$ at each node. In particular, for each node (indexed `$i$'), the state ($a_i$) is exactly determined by the labeling of qubits $q^i_1$, $q^i_2$ and $q^i_3$, with the $i^{th}$ node taking values in the set $\{ v_{i_1}, v_{i_2}, v_{i_3} \}$. Eq \eqref{eq_evaltestfunc} defines a mapping $(q^i_1,q^i_2,q^i_3)\rightarrow a_i$ as tabulated in Table \ref{table_qub2statemap}. It is observed that the mapping results in $a_i \in \{v_{i_1}, v_{i_2}, v_{i_3} \}$ when two qubits are labeled $-1$ and one qubit is labeled $+1$. Next, it is shown that the Ising parameters can be manipulated to make these labelings energetically favorable, thereby eliminating the occurrence of undesirable labels.
\begin{equation}\label{eq_evaltestfunc}
a_i = \sum_{j=1}^3 v_{i_j} \frac{q^i_j +1}{2}
\end{equation}
\begin{table}[]
\begin{tabular}{|c|c|}
$(q^i_1,q^i_2,q^i_3)$ & $a_i$ \\\hline
$(1 , 1 , 1 )$& $v_{i_1}+v_{i_2}+v_{i_3}$ \\
$(1 , 1 , -1 )$& $v_{i_1}+v_{i_2}$ \\
$(1 , -1 , 1 )$& $v_{i_1}+v_{i_3}$ \\
$(1 , -1 , -1 )$& $v_{i_1}$ \\
$(-1 , 1 , 1 )$& $v_{i_2}+v_{i_3}$ \\
$(-1 , 1 , -1 )$& $v_{i_2}$ \\
$(-1 , -1 , 1 )$& $v_{i_3}$ \\
$(-1 , -1 , -1 )$& $0$
\end{tabular}
\caption{The mapping from qubits to the state $a_i$ at a node}
\label{table_qub2statemap}
\end{table}
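The mapping of Eq \eqref{eq_evaltestfunc} and Table \ref{table_qub2statemap} can be checked in a few lines of Python; the following is an illustrative sketch, not part of the method itself.

```python
# a_i = sum_j v_ij * (q_ij + 1) / 2, as in Eq (eq_evaltestfunc).
def node_value(v, q):
    """v: the three admissible values at the node; q: three +/-1 qubit labels."""
    return sum(vj * (qj + 1) / 2 for vj, qj in zip(v, q))

v = [0.0, 0.5, 1.0]                    # the example's admissible set
print(node_value(v, (1, -1, -1)))      # 0.0 -> v_1
print(node_value(v, (-1, 1, -1)))      # 0.5 -> v_2
print(node_value(v, (-1, -1, 1)))      # 1.0 -> v_3
print(node_value(v, (-1, -1, -1)))     # 0.0 -> undesirable all '-1' labeling
```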
\vspace{5pt}
\hrule
\vspace{5pt}
\begin{center}
Example (Continued)
\end{center}
In general, the set $\{v_{i_1}, v_{i_2}, v_{i_3}\}$ is different for each node. However, for simplicity, consider the same set of admissible states for all three nodes given by $\{v_{i_1}, v_{i_2}, v_{i_3}\} \equiv \{0,0.5,1\}$. Each node is defined by three qubits as follows:
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{fig_ex_staterep.png}
\end{figure}
We know that the three qubits defining the solution at the first and last nodes should take up choices 1 and 3, respectively, due to the boundary conditions. The choice for the second node remains to be solved.
\vspace{5pt}
\hrule
\vspace{5pt}
\subsection{Parameter Estimation}
To promote modularity, the graph representation is decomposed into two component subgraphs, namely, the nodal graph and the element graph. Each node and element of the FE discretization is endowed with a nodal graph and an element graph, respectively. This allows the mesh to be refined by simply extending the graph.
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.4\textwidth]{subgraph.png}
\caption{Connectivity of (a) nodal graph (b) element graph.}
\label{fig_elementsubgraph}
\end{figure}
\subsubsection{Nodal Graph}
The nodal graph is a fully connected graph with three vertices representing the three qubits of the FE node. The nodal graph ensures that the energy minimizing states of the Ising hamiltonian correspond to states $\mathbf{a}$ with a favorable choice of $a_i \in \{ v_{i_1}, v_{i_2}, v_{i_3} \}$, each with equal probability. As mentioned earlier, the set of favorable labelings of qubits at a node is given by $\{Q_1, Q_2, Q_3\} \equiv \{ (1,-1,-1),(-1,1,-1),(-1,-1,1) \}$. Since each of the three labelings must be equally likely in the absence of any functional minimization, the same value of the coupling strength ($\hat{J}$) is used for each connection and the same field strength ($H$) for each qubit. A choice of $\hat{J}$ and $H$ that fulfills these conditions is presented in Fig \ref{fig_nodalgraph}. Here, all the field and interaction terms for the nodal graph are given a value of one. Dirichlet boundary conditions are also applied by augmenting the field strength of the nodal graph. For example, switching the field term $H$ corresponding to the second qubit $q_2$ of a boundary node `b' to $-1$ forces a lower energy for the boundary node state $( -1, +1, -1)$, which corresponds to the solution $v_{b_2}$. This allows us to encode the value at the boundary as $v_{b_2}$.
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.8\textwidth]{nodal_graph.png}
\caption{Self interaction and site-site interaction parameters for nodal subgraph.}
\label{fig_nodalgraph}
\end{figure*}
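The claim that unit field and coupling values make exactly the labelings $\{Q_1, Q_2, Q_3\}$ energetically favourable can be verified by enumeration. A minimal, purely classical sketch:

```python
from itertools import product

# Nodal-graph energy with H_i = 1 for each qubit and J_kl = 1 on each of the
# three edges of the fully connected triangle.
def nodal_energy(q):
    field = q[0] + q[1] + q[2]                  # self-interaction terms
    pair = q[0]*q[1] + q[0]*q[2] + q[1]*q[2]    # site-site interaction terms
    return field + pair

energies = {q: nodal_energy(q) for q in product((-1, 1), repeat=3)}
emin = min(energies.values())
favorable = sorted(q for q, e in energies.items() if e == emin)
print(emin)        # -2
print(favorable)   # [(-1, -1, 1), (-1, 1, -1), (1, -1, -1)], i.e. {Q3, Q2, Q1}
```

All three one-`$+1$' labelings share the minimum energy $-2$, while every other labeling costs at least $0$, which is what makes the undesirable states suppressible.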
\subsubsection{Element Graph}
The element graph is used to make the energy of the minimizing states of the graph correspond to the value of the functional $\Pi_N$ of the continuous problem. Each element graph encodes the contribution of the respective element to the functional. Since the contribution of each element depends on the values at the nodes of the element, the element graph is constructed by connecting the vertices of neighbouring nodes. In particular, the site-site interaction in the $n^{th}$ element graph can be estimated as a matrix, $\widetilde{J}^n$, where $(\widetilde{J}^n)_{kl}$ represents the coupling energy between qubits $q^i_k$ (the $k^{th}$ qubit of the $i^{th}$ node) and $q^j_l$ (the $l^{th}$ qubit of the $j^{th}$ node), with $i,j$ being the nodes of the $n^{th}$ element. As shown in the previous section, the contribution of the $n^{th}$ element towards the functional, based on the choice of a compact basis function, is evaluated as $\mathbf{A_n}.\mathbf{S_n}$. The elements of the vector $\mathbf{A_n}\equiv \mathbf{A_n}(a_{i},a_{j})$ can therefore take nine ($3\times 3$) possible values based on the values of $(a_{i},a_{j})$. For a particular choice of labeling of the qubits, the Ising energy of the element graph is estimated as $E = \sum_{k=1}^3 \sum_{l=1}^3 (\widetilde{J}^n)_{kl} q^i_k q^j_l$. When the labeling is chosen appropriately (each node has two `$-1$' and one `$+1$' label), this energy equals the value of the functional for the corresponding state, $\mathbf{a}$, as shown in Eq \eqref{eq_estimatesitesite}. This relation can be used to estimate $\widetilde{J}^n$ by solving the resulting set of nine independent linear equations. It is an important observation that the independence of this set of equations relies on the fact that, for any node, $v_{i_k}\neq v_{i_l}$ for $k\neq l$.
Additionally, the energy of the element graph breaks the symmetry between the states that minimize the energy of the nodal graph; the values of $\widetilde{J}^n$ should therefore be judiciously scaled (uniformly across all elements) such that the energy of unfavourable states remains high.
\begin{equation}\label{eq_estimatesitesite}
\sum_{k=1}^3 \sum_{l=1}^3 (\widetilde{J}^n)_{kl} q^i_k q^j_l = \mathbf{A_n}(a_{i},a_{j}).\mathbf{S_n}
\end{equation}
\vspace{5pt}
\hrule
\vspace{5pt}
\begin{center}
Example (Continued)
\end{center}
A single element with two nodes admits the following connectivity:
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig_ex_elg1.png}
\end{center}
\end{figure}
The estimated parameters reflect the contribution of the element to the functional for a given choice of labeling:
Sample 1: \\
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig_ex_elg2.png}
\end{center}
\end{figure}
In the above figure, both nodes take up choice 1 ($a_i=a_j=0$). The interaction energy of the qubits is $E = \widetilde{J}_{11} - \widetilde{J}_{12} - \widetilde{J}_{13} - \widetilde{J}_{21} +\widetilde{J}_{22} + \widetilde{J}_{23} - \widetilde{J}_{31} + \widetilde{J}_{32} + \widetilde{J}_{33} = (a_i-a_j)^2 = 0 $\\
Sample 2: \\
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig_ex_elg3.png}
\end{center}
\end{figure}
In the above figure, node $i$ takes up choice 1 ($a_i=0$), while node $j$ takes up choice 2 ($a_j=0.5$). The interaction energy of the qubits is $E = -\widetilde{J}_{11} + \widetilde{J}_{12} - \widetilde{J}_{13} + \widetilde{J}_{21} -\widetilde{J}_{22} + \widetilde{J}_{23} + \widetilde{J}_{31} - \widetilde{J}_{32} + \widetilde{J}_{33} = (a_i-a_j)^2 = 0.25$ \\
Collectively solving the equations for all nine such possibilities (as shown in Eq \eqref{eq_estimatesitesite}) gives the linear system:
\begin{gather}
\begin{bmatrix}
+1& -1& -1& -1& +1& +1& -1& +1& +1\\
-1& +1& +1& +1& -1& -1& -1& +1& +1\\
-1& +1& +1& -1& +1& +1& +1& -1& -1\\
-1& +1& -1& +1& -1& +1& +1& -1& +1\\
+1& -1& +1& -1& +1& -1& +1& -1& +1\\
+1& -1& +1& +1& -1& +1& -1& +1& -1\\
-1& -1& +1& +1& +1& -1& +1& +1& -1\\
+1& +1& -1& -1& -1& +1& +1& +1& -1\\
+1& +1& -1& +1& +1& -1& -1& -1& +1
\end{bmatrix}
\begin{bmatrix}
\widetilde{J}^n_{11}\\
\widetilde{J}^n_{12}\\
\widetilde{J}^n_{13}\\
\widetilde{J}^n_{21}\\
\widetilde{J}^n_{22}\\
\widetilde{J}^n_{23}\\
\widetilde{J}^n_{31}\\
\widetilde{J}^n_{32}\\
\widetilde{J}^n_{33}
\end{bmatrix} \nonumber\\
= \begin{bmatrix}
(v_{i_{1}}-v_{j_{1}})^2\\
(v_{i_{2}}-v_{j_{1}})^2\\
(v_{i_{3}}-v_{j_{1}})^2\\
(v_{i_{1}}-v_{j_{2}})^2\\
(v_{i_{2}}-v_{j_{2}})^2\\
(v_{i_{3}}-v_{j_{2}})^2\\
(v_{i_{1}}-v_{j_{3}})^2\\
(v_{i_{2}}-v_{j_{3}})^2\\
(v_{i_{3}}-v_{j_{3}})^2
\end{bmatrix}\label{eq_estimatesitesite2}
\end{gather}
\begin{equation*}
\widetilde{J}^1 = \widetilde{J}^2 = \begin{bmatrix}
0.1250 & 0.3750 & 0.3750\\
0.3750 & 0.5000 & 0.3750\\
0.3750 & 0.3750 & 0.1250
\end{bmatrix}
\end{equation*}
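This solve can be reproduced classically. Writing Eq \eqref{eq_estimatesitesite} as $Q_k^T \widetilde{J}\, Q_l = (v_{i_k}-v_{j_l})^2$ over the nine $(k,l)$ pairs gives $\widetilde{J} = Q^{-1} B\, Q^{-1}$, with $Q$ holding the favorable labelings as rows and $B_{kl} = (v_k - v_l)^2$. A pure-Python sketch (the closed-form inverse of the $3\times 3$ labeling matrix is precomputed) recovers the matrix quoted above:

```python
# Solve Q_k . J . Q_l = (v_k - v_l)^2 for the 3x3 coupling matrix J, i.e.
# J = Qinv . B . Qinv, with Q = [[1,-1,-1],[-1,1,-1],[-1,-1,1]] (favorable
# labelings as rows) and B_kl = (v_k - v_l)^2.
v = [0.0, 0.5, 1.0]                                   # admissible nodal values
Qinv = [[0.0, -0.5, -0.5],
        [-0.5, 0.0, -0.5],
        [-0.5, -0.5, 0.0]]                            # inverse of Q (symmetric)
B = [[(vk - vl) ** 2 for vl in v] for vk in v]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

J = matmul(Qinv, matmul(B, Qinv))
for row in J:
    print(row)
# [0.125, 0.375, 0.375]
# [0.375, 0.5, 0.375]
# [0.375, 0.375, 0.125]
```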
The above parameters exactly reproduce the functional in the interaction term. The boundary conditions are enforced by setting the self interaction term for qubits $q^0_1$ and $q^2_3$ to $H=-1$. This locks the state at the $1^{st}$ boundary node as $a_0 = v_{0_1} = 0$ and at the $2^{nd}$ boundary node as $a_2 = v_{2_3} = 1$. Energy minimization of the resulting Ising hamiltonian gives $a_1 = v_{1_2}= 0.5$, which is the exact solution of the discretized problem.
\vspace{5pt}
\hrule
\vspace{5pt}
The process of graphical representation of the discretized functional using the nodal and element graphs is referred to as ``Assembly''. The effective site-site interaction energy is estimated by summing the nodal coupling strength, $\hat{J}$, over all nodes, and the element coupling strength, $\widetilde{J}$, over all elements. Due to the nature of the discretization, $N$ element graphs and $N+1$ nodal graphs are required to represent an $N$-element discretization of the domain. The assembled graph is, from here on, referred to as the logical graph. The connectivity of the logical graphs for one-element and four-element discretizations is presented in Fig \ref{fig_assembly}.
Two fundamental issues in this approach are addressed next using the box algorithm. Firstly, the choices at a node, $\{ v_{i_1}, v_{i_2}, v_{i_3} \}$, were fixed during initialization; the box algorithm makes this choice flexible. Secondly, as the number of nodes increases, three choices per node become insufficient, and the number of qubits per node would have to grow to make more choices available. The box algorithm, however, requires only three qubits per node for any level of discretization.
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.7\textwidth]{assembly.png}
\caption{Assembled graph for a domain discretized with (a) 1 element (b) 4 elements.}
\label{fig_assembly}
\end{figure*}
\section{Box Algorithm}\label{sec_Box}
In this section, an iterative procedure is developed to minimize the functional, $\Pi_N$, using the graph coloring representation discussed in the previous section. For the particular choice of $\{v_{i_1},v_{i_2},v_{i_3}\}$ defined in Eq \eqref{eq_centerbound}, the possible values of the state $a_i$ at the $i^{th}$ node are specialized to the set $\lbrace u_i^c - r , u_i^c , u_i^c + r \rbrace$, i.e.,
\begin{equation}\label{eq_centerbound}
v_{i_j} = u_i^c + r(j-2)
\end{equation}
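In code, the construction of the admissible set at a node reads as follows (an illustrative sketch):

```python
# Eq (eq_centerbound): v_ij = u_i^c + r (j - 2), j = 1, 2, 3,
# i.e. the set {u_i^c - r, u_i^c, u_i^c + r}.
def admissible_values(uc_i, r):
    return [uc_i + r * (j - 2) for j in (1, 2, 3)]

print(admissible_values(0.5, 0.25))   # [0.25, 0.5, 0.75]
```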
The quantities $\mathbf{u^c}= ( u_0^c, u_1^c,..., u_{N}^c )$ and $r$ are hereafter referred to as the box center and the slack variable, respectively. The intention is to approximate functions using the box center, while the slack variable provides a bound on this approximation. The precise meaning of this bound is presented later in this section. A linear approximation of $f(x) = \sqrt{x}$ using ten nodes is presented in Fig \ref{fig_box} for different box centers and slack variables (the slack variable can be interpreted as the box size). The function, $f(x)$, is approximated as $\mathbf{u^c}$ at the nodes with linear interpolation in between the nodes. The blue region describes the possible values of the interpolation if the value at any node is perturbed within the range $\pm r$. In Fig \ref{fig_box}(a), an exact approximation of the function at the nodes is presented with a slack variable of $0.2$. In (b), the same approximation with a slack variable of $0.02$ is presented. The approximation is the same in the two cases, but the bound on the nodal values in (b) is tighter than in (a). In part (c), the approximation is not exact; however, it lies within the bounds. In part (d), the approximation is neither exact nor within the bound. In the context of the vectorial representation of the coefficients, $\mathbf{a}$, these bounds are represented as $3^N - 1$ points on the surface of a box defined by $||\mathbf{a}- \mathbf{u^c}||_\infty = r$. An illustration of the vectorial representation for a two-node element is presented in Fig \ref{fig_box2}. The solution is sampled from a $3\times3$ grid in the $a_1-a_2$ vector space.\\
In the discrete setting, the solution of the differential equation can be equivalently reduced to the minimization of a function of the form $\mathbf{a^T M a}$, where $\mathbf{M}$ is a positive definite matrix. The vector $\mathbf{a}$ takes one of the $3^N$ possible values. The minimizer (not necessarily unique) is given by Eq \eqref{eq_isingmin}. The solution, $\mathbf{a}^{min}$, need not coincide with the best approximation solution, $\mathbf{a}^{b.a.}$, of the continuous problem. In the illustration presented in Fig \ref{fig_box2}, the center is depicted as the solution ($\mathbf{a}^{min} = \mathbf{u}^c$); the minimum is then contained within the elliptic region of the contour with $\mathbf{a}^{min}$ on its edge. Geometrically, this gives $||\mathbf{a}^{min}- \mathbf{a}^{b.a}|| \leq d \leq \sqrt{2}r(1+\lambda_{max}/\lambda_{min})$, where $\lambda_{max}$ and $\lambda_{min}$ are the maximum and the minimum non--zero eigenvalues of $M$, respectively. This suggests that as the box size decreases, the corresponding $\mathbf{u}^c$ approaches the best approximation of $u$. This argument extends to $n$ dimensions with the bound given in Eq \eqref{eq_bound}.
\begin{equation}\label{eq_isingmin}
\mathbf{a}^{min} = \argmin_{\substack{a_i\in \{u_i^c - r , u_i^c , u_i^c + r \}}} \mathbf{a^T M a}
\end{equation}
\begin{equation}\label{eq_bound}
||\mathbf{a}^{min}- \mathbf{a}^{b.a}|| \leq 2\left(1+(n-1)\frac{\lambda_{max}}{\lambda_{min}}\right)\frac{r}{\sqrt{n}}
\end{equation}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.9\textwidth]{box_approx.png}
\caption{Approximation of $\sqrt{x}$ function using boxed domain: (a) Exact fit with a slack of $0.2$ (Loose fit) (b) Exact fit with a slack of $0.02$ (tight fit) (c) inexact fit but bounded in a box size of $0.2$ (d) inexact fit and unbounded by a slack of $0.2$.}
\label{fig_box}
\end{figure*}
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.35\textwidth]{box2.png}
\caption{ An illustration of a two-node approximation in $a_1-a_2$ vector space with contours plot of the functional, $\Pi_2( a_1,a_2)$ and a representative box with center at $\mathbf{u^c}$ and a box size of $r$.}
\label{fig_box2}
\end{figure}
\subsection{Iterative Procedure}
In this section, we present the details of the iterative procedure, which locates the solution of the discretized problem, $\mathbf{a}^{min}$, and updates the box center and slack variable such that $\mathbf{u}^c$ approaches the solution of the continuous problem (in the sense of best approximation).
The information required to define the functional is stored in the vector $\mathbf{S_i}$. It is computed once at the beginning of the procedure, in the problem definition stage. The procedure is initiated with a guess for the solution vector, $\mathbf{a}$, provided as a box centered at $\mathbf{u^c}$. Boundary nodes with Dirichlet boundary conditions are assigned the boundary value as the initial guess. The slack variable is initialized with an arbitrary scalar value. A better initial guess for $r$ is one which bounds the solution in the box defined by $\mathbf{u^c}$. Such initial guesses require fewer iterations in comparison to arbitrary ones; however, starting with a good guess is not a necessary condition for convergence. The Ising parameters $H$, $\hat{J}$ and $\widetilde{J}$ are estimated as discussed in section \ref{sec_Graph_model}. \\
In this work, D-Wave's 2000Q processor is used. This processor has a Chimera-type structure with 2048 qubits and 6016 couplers \cite{dwavemanual}. A direct solution of the optimization problem by renumbering qubits is not possible, as the assembled graph cannot be found as a subgraph of the physical graph, i.e., the processor. Therefore, the logical graph must be mapped onto the physical graph via the process of embedding. This problem is itself NP-hard and is not discussed here for brevity; the reader is referred to \cite{cai2014practical} for a discussion of this topic. The topology of the logical graph remains unchanged over the iterations. The search for an embedding is therefore conducted only once, and in subsequent iterations, the self-interaction and site-site interaction parameters are updated for the same embedding.
The use of three qubits per node allows the D-Wave system to search for a minimum over a space of $3^N$ solution vectors in a single run. In each iteration, the box center is translated to the energy minimizer, $\mathbf{a}^{min}$; this move is referred to as the translation step. In the case where the minimizing state is found at the center, the box size is reduced and the search is continued with a smaller bound on the error; this move is referred to as the contraction step. The complete procedure is presented in Algorithm \ref{algo}.
\begin{algorithm}[H]
\caption{Box Algorithm}\label{algo}
\begin{algorithmic}[1]
\State Problem definition: Calculate $\mathbf{S_i}$
\State Initialize $\lbrace u^c_i \rbrace$, $r$
\State Estimate $H$, $\hat{J}$ and $\widetilde{J}$
\State Find embedding: $\textrm{Logical graph} \xrightarrow{\text{embed}} \textrm{Physical graph}$
\While {$r>r_{\textrm{min}}$}
\State Update $\widetilde{J}$ for current $(u_i^c,r)$
\State Anneal for $\lbrace q^i_j \rbrace$
\State Map to $\mathbf{a}^{min}$, $(\Pi_N)_{min}$
\If{$(\Pi_N)_{min} < \Pi_N \left[\mathbf{u^c}\right]$}
\State $\mathbf{u^c} = \mathbf{a}^{min}$ (Translation step)
\Else
\State $r = \frac{r}{2}$ (Contraction step)
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
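A classical stand-in for Algorithm \ref{algo}, with the annealing step replaced by brute-force enumeration, illustrates the translation and contraction logic on the running example (one free node, with $a_0 = 0$ and $a_2 = 1$ fixed). This is a sketch for intuition, not the D-Wave implementation:

```python
# Pi_2 for the example: (a1 - 0)^2 + (1 - a1)^2, up to the constant N/2 factor.
def functional(a1):
    return a1 ** 2 + (1.0 - a1) ** 2

def box_algorithm(uc, r, r_min=1e-6):
    while r > r_min:
        candidates = (uc - r, uc, uc + r)          # the three nodal choices
        a_min = min(candidates, key=functional)    # stand-in for the anneal
        if functional(a_min) < functional(uc):
            uc = a_min                             # translation step
        else:
            r /= 2.0                               # contraction step
    return uc

print(box_algorithm(uc=0.3, r=0.2))   # converges to 0.5, the exact solution
```

Starting from $u^c_1 = 0.3$ with $r = 0.2$, the first anneal selects $0.5$ and the center translates there; every subsequent anneal finds the minimum at the center, so the box contracts until $r < r_{\textrm{min}}$.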
\vspace{5pt}
\hrule
\vspace{5pt}
\begin{center}
Example (Continued)
\end{center}
In the box algorithm, the set $\{v_{i_1},v_{i_2},v_{i_3}\}$ is constructed using the box center and the slack variable.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig_ex_boxlabel.png}
\end{center}
\end{figure}
With the application of the boundary conditions, the favourable labelings of the qubits give the following three choices for the solution ($\mathbf{I}$, $\mathbf{II}$, $\mathbf{III}$).
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig_ex_box.png}
\end{center}
\end{figure}
The values of $u^c_1$ and $r$ are initialized arbitrarily. One of the solutions among $\mathbf{I}$, $\mathbf{II}$ and $\mathbf{III}$ is selected by the annealer. If the minimizer is found to be solution $\mathbf{II}$, the algorithm proceeds with the contraction step by halving the value of $r$.
If solution $\mathbf{I}$ or $\mathbf{III}$ is selected, the algorithm proceeds with the translation step by setting the new box center to $u_1^c + r$ or $u_1^c - r$, respectively.
\vspace{5pt}
\hrule
\vspace{5pt}
\section{Results}\label{sec_results}
The deformation of a bar under axial loading is modelled using an equation of the form \eqref{eq_strongform}. In particular, the deflection $(u)$ of the bar is related to the elastic stiffness $(E)$, cross-sectional area $(A)$, and applied body force $(f)$ through Eq \eqref{eq_Elasticstrong}. The functional (Eq \eqref{eq_energyform}) is referred to as the potential energy of the system. The corresponding discretized form of the potential energy for piece-wise linear $E$, $A$ and $f$ is given by Eq \eqref{eq_Elasticweak}, where $E_i$, $A_i$ and $f_i$ represent the elastic stiffness, area and applied body force, respectively, at the center of the $i^{th}$ element.
\begin{eqnarray}\label{eq_Elasticstrong}
(EAu')' + f = 0
\end{eqnarray}
\begin{equation}\label{eq_Elasticweak}
\Pi_N\left[u\right] = \sum_{i=1}^{N} \frac{N}{2} E_iA_i (a_{i}-a_{i-1})^2 - \frac{1}{2N}f_i (a_{i}+a_{i-1})
\end{equation}
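For reference, Eq \eqref{eq_Elasticweak} translates directly into code. The sketch below assumes a unit-length bar and uses illustrative names:

```python
# Discretized potential energy, Eq (eq_Elasticweak): nodal deflections a[0..N],
# with E, A, f given per element (values at the element centers).
def potential_energy(a, E, A, f):
    N = len(a) - 1
    total = 0.0
    for i in range(1, N + 1):
        stiffness = 0.5 * N * E[i - 1] * A[i - 1] * (a[i] - a[i - 1]) ** 2
        load = 0.5 / N * f[i - 1] * (a[i] + a[i - 1])
        total += stiffness - load
    return total

# Uniform bar (EA = 1, f = 0) with the linear deflection a = x:
print(potential_energy([0.0, 0.5, 1.0], E=[1, 1], A=[1, 1], f=[0, 0]))  # 0.5
```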
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{barsolution.png}
\caption{Axial deformation of a bar with (a) a discontinuous cross-section with a tip displacement (b) a continuously varying cross section with a body force and a tip displacement.}
\label{fig_bars}
\end{figure*}
Two test cases are presented in Fig \ref{fig_bars}. In test case (a), a bar with a discontinuity in $EA$ is simulated; no body force is applied. A four-element discretization is used, with initial guesses $\mathbf{u^c} = \lbrace0,0.25, 0.5, 0.75, 1 \rbrace$ and $r=0.2$. The numerical solution is observed to approach the exact solution, and the convergence of the functional is also evident.
In test case (b), a bar with continuously varying $EA$ is simulated, with a linearly varying body force. A six-element discretization of the bar is used, with $\mathbf{u^c} = \lbrace0,\frac{1}{6}, \frac{2}{6},\frac{3}{6},\frac{4}{6},\frac{5}{6},1 \rbrace$ and $r=0.2$. From the theory of finite element methods, exact minimization of the energy in the discretized space leads to a stiffer solution than the exact solution. It is observed that the numerical solution approaches the exact solution at the nodes, which is characteristic of finite element methods. The energy is also observed to converge towards that of the finite element solution $u^{fem}$ in this case. The mismatch of $u$ within the elements is expected to decrease with refinement of the discretization.
Some implementation details on the D-Wave architecture are relevant here. Although the mapping requires only three qubits per node, embedding this graph into the Chimera graph produced an overhead of 9 qubits per node, constant over a range of discretizations. This is understandable, since the complete graph of three qubits used to represent a node is not directly available on the Chimera graph. In the future, the use of two qubits to represent a node can also be explored. While this still samples a large enough ($2^N$ vectors) solution space in a single run, information on the box center energy, which is important for reducing the range of the slack variable, is lost. However, it is possible to compute the solution at the box center classically, and the overhead of these classical evaluations can be offset by the fact that a two-qubit representation has smaller complete sub-graphs and is easier to embed in the physical graph. Another important task in quantum computing is error suppression. Quantum processors, unlike classical computers, do not have parity correction algorithms due to the no-cloning theorem. A compilation of popular methods for quantum error correction is presented in \cite{devitt2013quantum}. Energy re-scaling is one of the simpler approaches and is employed in this work: in the estimation of $\widetilde{J}^n$, the energy is rescaled to increase the energy gap between feasible and unfeasible states while maintaining a similar energy landscape. This step is a heuristic remedy for minimizing noise in quantum computation and has no bearing on the theoretical convergence of the algorithm.
\section{Conclusions and Future Work}
Recent rapid developments in quantum annealers warrant further investigation into the re-formulation of scientific computing problems as graph coloring problems. The use of quantum computing for solving differential equations has, to date, focused on the gate-based QLSA algorithm, which samples from the solution space of the linear system of equations $Ax=b$. In the quantum annealer based algorithm described here, we do not solve a system of equations; we instead map the discretized energy functional of the differential equation to an Ising hamiltonian. The solution vector, $x$, is directly obtained from the ground state of the qubits. The algorithm has low memory requirements, since the global matrix is not stored and the local matrices are encoded in the interaction weights of the Ising model. Further, the box algorithm allows mapping of the up/down spin states of the qubits in the ground state to the rational numbers involved in the solution vector.
Since we primarily solve the Ising model, the cost of computation is tied to the performance of the quantum annealer \cite{mcgeoch2013experimental}. Within each iteration, however, Eq \eqref{eq_estimatesitesite} is solved for each element, leading to at least $O(n)$ operations.
We have shown that the box algorithm guarantees convergence to the best approximation of the solution in the discretized space as the box size goes to zero. However, some improvements could be made to reduce the number of minimization runs. The statistics of the solutions that the D-Wave system returns from a single minimization run could be used to drive the iteration process in an arbitrary direction. This data could also be used to heuristically develop `local' potential energy maps to identify larger step sizes for faster convergence. With the future scaling of quantum annealers to millions of qubits, it will become possible to solve challenging engineering solid and fluid mechanics problems on quantum annealers.
\section{Acknowledgments}
The authors would like to acknowledge Universities Space Research Association (USRA) for providing access to the use of the D-Wave quantum computer in the USRA-NASA-Google quantum Artificial Intelligence Laboratory at NASA's Ames Research Center.
\bibliographystyle{apsrev}
\section{Introduction\label{sec:intro}}
The intra-cluster medium is magnetised. Direct evidence for
cluster-wide magnetic fields are the large-scale diffuse radio sources
of synchrotron origin. There is growing evidence that these fields are
of the order of $\sim \mu$G and are ordered on kiloparsec scales
\citep[see e.g. recent reviews][]{2002ARA&A..40..319C,
2002RvMP...74..775W, 2004astro.ph.10182G}.
One method to investigate magnetic field structure and strength is the
detection of the Faraday rotation effect. This effect is observed
whenever linearly polarised radio emission passes through a magnetised
medium. A linearly polarised wave can be described by two circularly
polarised waves. Their motion along magnetic field lines in a plasma
introduces a phase difference between the two waves resulting in a
rotation of the plane of polarisation. If the Faraday active medium is
external to the source of the polarised emission, one expects the
change in polarisation angle to be proportional to the squared
wavelength. The proportionality factor is called the rotation measure
($RM$). This quantity can be evaluated in terms of the line of sight
integral over the product of the electron density and the magnetic
field component along the line of sight.
Observed $RM$ maps of extended extragalactic radio sources are
especially valuable for studying intra-cluster magnetic
fields. Simple analytical approaches, which exploit the patchy
structure of the $RM$ maps to measure the characteristic length scale
of the magnetic fields (necessary to translate $RM$ values into
field strengths), result in magnetic field strengths of $\sim$ 5 $\mu$G up
to $\sim$ 30 $\mu$G for cooling flow clusters, e.g. Cygnus A
\citep{1987ApJ...316..611D}, Hydra A \citep{1993ApJ...416..554T}, A1795
\citep{1993AJ....105..778G}, 3C295 \citep{2001MNRAS.324..842A}. The
same arguments have led to estimates of cluster magnetic field
strengths of 2...8 $\mu$G for non-cooling flow clusters, e.g. Coma
\citep{1995A&A...302..680F}, A119 \citep{1999A&A...344..472F}, 3C129
\citep{2001MNRAS.326....2T}, A2634 \& A400 \citep{2002ApJ...567..202E}.
Observations of a polarised radio point source sample seen through a
cluster atmosphere were presented by
\citet{1991ApJ...379...80K}. They detected an $RM$ broadening towards
the cluster centre implying a magnetic field strength of 1
$\mu$G. More recently, \citet{2001ApJ...547L.111C} analysed a
statistical sample of 16 cluster sources against a control
sample. They also detect a broadening of the $RM$ distribution for
sources towards the cluster centre. They find a cluster magnetic
field strength of \hbox{4...8 $\mu$G}.
These high magnetic field values derived using $RM$ methods seem to be
in contrast to the lower magnetic field values of 0.1...0.3 $\mu$G
estimated from inverse Compton (IC) measurements, which are possible
for clusters with observed diffuse radio haloes
\citep{1987ApJ...320..139R, 1994ApJ...429..554R, 1999ApJ...511L..21R,
1998PASJ...50..389H, 2000ApJ...534L...7F, 2001ApJ...552L..97F,
2004ApJ...602L..73F, 1998A&A...330...90E}. Cosmic microwave
background photons are expected to inverse Compton scatter off the
relativistic electrons, thereby emitting non-thermal X-ray
emission. Upper limits on this non-thermal X-ray emission together
with the radio observations of the synchrotron radiation which is
emitted by the relativistic electron population can then be used to
set lower limits on the average magnetic field strength.
There is an order of magnitude difference between the field strengths
derived by these two methods. Several arguments can be given to
reconcile the different results. First, except for a very small
number of clusters (including the Coma cluster), at best one of the
methods could be applied, so that the difference could be a
difference between clusters. Second, the Faraday rotation method
measures a volume-averaged magnetic field weighted by the thermal
electron density whereas the inverse Compton results give
volume-averaged field strengths which are weighted with the
relativistic electron distribution. Since the relativistic electron
population is easily diminished in regions with strong magnetic
fields due to the enhanced synchrotron cooling, the inverse Compton
method is expected to provide smaller estimates. Thus, a medium
that is inhomogeneously magnetised on small scales compared to the
observational spatial resolution might possibly solve the
contradiction \citep{1999A&A...344..409E}. Furthermore, since the
observed IC flux could originate from other sources, it is an upper
limit. Hence, the IC measurements give only lower limits on the
magnetic field strength. For a more detailed discussion, we refer to
\citet{2002ARA&A..40..319C, 2004astro.ph.10182G}.
\citet{2003A&A...401..835E} proposed a method to determine the magnetic
power spectra by Fourier transforming $RM$ maps. Based on these
considerations, \citet{2003A&A...412..373V} applied this method and
determined the magnetic power spectrum of three clusters (Abell 400,
Abell 2634 and Hydra~A) from $RM$ maps of radio sources located in
these clusters. Furthermore, they determined field strengths of $\sim
12\,\,\mu$G for the cooling flow cluster Hydra~A, $3 \,\, \mu$G and $6
\,\, \mu$G for the non-cooling flow clusters Abell 2634 and Abell 400,
respectively. Their analysis revealed spectral slopes of the power
spectra with spectral indices $ -2.0\ldots-1.6$. However, it was
realised that using the proposed analysis, it is difficult to reliably
determine differential quantities such as spectral indices due to the
complicated shapes of the emission regions used which lead to a
redistribution of magnetic power within the spectra.
Recently, \citet{2004A&A...424..429M} proposed a numerical method to
determine the magnetic power spectrum in clusters. They infer the
magnetic field strength and structure by comparing simulations of $RM$
maps as caused by multi-scale magnetic fields with the observed
polarisation properties of extended cluster radio sources such as radio
galaxies and haloes. They argue that field strengths derived in the
literature using analytical expressions have been overestimated by a
factor of $\sim$ 2.
In order to determine a power spectrum from observational data, maximum
likelihood estimators are widely used in astronomy. These methods and
algorithms have been greatly improved, especially by the Cosmic
Microwave Background (CMB) analysis which tackles the problem of
determining the power spectrum from large CMB
maps. \citet{1998ApJ...495..564K} proposed such an estimator to
determine the power spectrum of a primordial magnetic field from the
distribution of $RM$ measurements of distant radio galaxies.
Based on the initial idea of \citet{1998ApJ...495..564K}, on the
methods developed by the CMB community
\citep[especially][]{1998PhRvD..57.2117B}, and on our understanding of
the magnetic power spectrum of cluster gas \citep{2003A&A...401..835E},
we derive here a Bayesian maximum likelihood approach to calculate the
magnetic power spectrum of cluster gas given observed Faraday rotation
maps of extended extragalactic radio sources. The power spectrum
enables us also to determine characteristic field length scales and
strength. After testing our method on artificially generated $RM$ maps
with known power spectra, we apply our analysis to a Faraday rotation
map of Hydra~A. The data were kindly provided by Greg Taylor. In
addition, this method allows us to determine the uncertainties of our
measurement and, thus, we are able to give errors on the calculated
quantities. Based on these calculations, we investigate the nature of
turbulence of the magnetised gas.
This paper is structured as follows. In Sect.~\ref{sec:theory}, a
method employing a maximum likelihood estimator as suggested by
\citet{1998PhRvD..57.2117B} to determine the magnetic power spectrum
from $RM$ maps is introduced. Special requirements for the analysis of
$RM$ maps with such a method are discussed. In Sect.~\ref{sec:test}, we
apply our maximum likelihood estimator to generated $RM$ maps with
known power spectra to test our algorithm. In Sect.~\ref{sec:app}, the
application of our method to data of Hydra~A is described. In
Sect.~\ref{sec:discussion}, the derived power spectra are presented and
the results are discussed. In Sect.~\ref{sec:conclusion}, conclusions
are drawn.
We assume a Hubble constant of H$_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$,
$\Omega_{m} = 0.3$ and $\Omega_{\Lambda} = 0.7$ in a flat universe. All
equations follow the notation of \citet{2003A&A...401..835E}.
\section{Maximum likelihood analysis\label{sec:theory}}
\subsection{The covariance matrix $C_{RM}$\label{sec:crm}}
One of the most commonly used methods of Bayesian statistics is the
maximum likelihood method. The likelihood function for a model
characterised by $p$ parameters $a_p$ is equivalent to the probability
of the data $\vec{\Delta}$ given a particular set of $a_p$ and can be
expressed in the case of (near) Gaussian statistics of $\vec{\Delta}$
as
\begin{equation}
\label{eq:likely}
\mathcal{L}_{\vec{\Delta}}(a_p) = \frac{1}{(2\pi)^{n/2}|C|^{1/2}}\cdot
\exp\left(-\frac{1}{2}\vec{\Delta}^{T}C ^{-1}\vec{\Delta}\right),
\end{equation}
where $|C|$ indicates the determinant of a matrix, $\Delta_i =
RM_i$ are the actual observed data, $n$ indicates the number of
observationally independent points and $C = C(a_p)$ is the
covariance matrix. This covariance matrix can be defined as
\begin{equation}
C_{ij}(a_p) = \langle \Delta_i^{obs}\Delta_j^{obs} \rangle = \langle
RM_i^{obs}\,RM_j^{obs} \rangle,
\end{equation}
where the brackets $\langle \rangle$ denote the expectation value and,
thus, $C_{ij}(a_p)$ describes our expectation based on the proposed
model characterised by a particular set of $a_p$s. Now, the likelihood
function $\mathcal{L}_{\vec{\Delta}}(a_p)$ has to be maximised for the
parameters $a_p$. Although the magnetic fields might be non-Gaussian,
the $RM$ should be close to Gaussian due to the central limit
theorem. Observationally, $RM$ distributions are known to be close to
Gaussian \citep[e.g.][]{1993ApJ...416..554T, 1999A&A...344..472F,
1999A&A...341...29F, 2001MNRAS.326....2T}.
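For numerical work it is the logarithm of the likelihood that is evaluated. The following Python sketch (our illustration, not the code used in this analysis) computes $\ln\mathcal{L}$ of Eq.~(\ref{eq:likely}) via a Cholesky factorisation, which avoids forming $C^{-1}$ explicitly:

```python
import numpy as np

def log_likelihood(delta, C):
    """ln L = -1/2 (Delta^T C^-1 Delta + ln|C| + n ln 2pi),
    evaluated via a Cholesky factorisation of the covariance matrix."""
    n = len(delta)
    L = np.linalg.cholesky(C)                    # C = L L^T
    y = np.linalg.solve(L, delta)                # then Delta^T C^-1 Delta = y^T y
    log_det = 2.0 * np.sum(np.log(np.diag(L)))   # ln|C| from the Cholesky factor
    return -0.5 * (y @ y + log_det + n * np.log(2.0 * np.pi))

# Toy check against the closed form for C = sigma^2 * I
delta = np.array([1.0, -1.0, 0.5])
lnL = log_likelihood(delta, 2.0 * np.eye(3))
```

For a diagonal covariance the result reduces to the familiar product of independent Gaussians, which the toy check above exploits.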
Ideally, the covariance matrix is the sum of a signal term and a noise
matrix term, which results if the errors are uncorrelated with the true
values. Writing $RM^{obs} = RM^{true} + \delta RM$ results in
\begin{eqnarray}
C_{ij}(a_p) & = & \langle RM_i^{true} RM_j^{true} \rangle + \langle
\delta RM_i \, \delta RM_j \rangle \nonumber \\
& = & C_{RM}(\vec{x}_{\perp i},\, \vec{x}_{\perp j}) + \langle \delta RM_i \,
\delta RM_j \rangle
\end{eqnarray}
where $\vec{x}_{\perp i}$ is the displacement of point $i$ from the
$z$-axis and $\langle \delta RM_i \, \delta RM_j \rangle$ indicates the
expectation for the uncertainty in our measurement. Unfortunately,
while the noise term is studied extremely carefully in the discussion
of CMB power spectrum measurements, such a careful treatment is beyond
the scope of this paper. Thus, we will neglect this term. However,
\citet{1995MNRAS.273..877J} discuss the uncertainties involved in the
data reduction process, from which a model for $\langle \delta RM_i\,
\delta RM_j \rangle$ could be obtained.
Since we are interested in the magnetic power spectrum, we have to
find an expression for the covariance matrix $C_{ij}(a_p) =
C_{RM}(\vec{x}_{\perp i},\,\vec{x}_{\perp j})$ which can be identified as the
$RM$ autocorrelation $\langle RM(\vec{x}_{\perp i})\,RM(\vec{x}_{\perp j})
\rangle$. This has then to be related to the magnetic power spectra.
The observable in any Faraday experiment is the rotation measure
\textit{RM}. For a line of sight parallel to the $z$-axis and
displaced by $\vec{x}_{\perp}$ from it, the $RM$ arising from polarised
emission passing from the source $z_s(\vec{x}_{\perp})$ through a
magnetised medium to the observer located at infinity is expressed by
\begin{equation}
RM(\vec{x}_{\perp})= a_0 \int_{z_s(\vec{x}_{\perp})}^{\infty} \!\!\! {\rm d} \vec{x}\,
n_{{\rm e}}(\vec{x}) \, B_z (\vec{x}),
\end{equation}
where $a_0 = e^3/(2\pi m_e^2c^4)$, $\vec{x} = (\vec{x}_\perp, z)$, $n_e(\vec{x})$
is the electron density and $B_z(\vec{x})$ is the magnetic field component
along the line of sight.
In the following, we will assume that the magnetic fields in galaxy
clusters are isotropically distributed throughout the Faraday
screen. If one samples such a field distribution over a large enough
volume, it can be treated as statistically homogeneous and
statistically isotropic. Therefore, any statistical average over a
field quantity will not be influenced by the geometry or the exact
location of the volume sampled. Following \citet{2003A&A...401..835E},
we can define the elements of the $RM$ covariance matrix using the
$RM$ autocorrelation function $C_{RM}(\vec{x}_{\perp i}, \vec{x}_{\perp j}) =
\left< RM(\vec{x}_{\perp i})RM(\vec{x}_{\perp j}) \right>$ and introduce a
window function $f(\vec{x})$ which describes the properties of the
sampling volume
\begin{equation}
\label{eq:correl}
C_{RM}(\vec{x}_{\perp}, \vec{x}'_{\perp}) = \tilde{a_0}^2 \!\!\! \int_{z_s}
^\infty \!\!\!\!\!\! {\rm d} z \int_{z'_s} ^ \infty \!\!\!\!\!\! {\rm d}
z' f(\vec{x})f(\vec{x}')\left< B_z(\vec{x}_{\perp}, z) B_z(\vec{x}'_{\perp}, z')
\right>,
\end{equation}
where $\tilde{a_0} = a_0n_{e0}$, the central electron density is
$n_{e0}$ and the window function is defined by
\begin{equation}
\label{eq:window}
f(\vec{x}) = \mathbf{1}_{\{\vec{x}_{\perp} \in \Omega\} }\,\mathbf{1}_{\{z
\geq z_{\rm s}(\vec{x}_{\perp})\}} \, \,g(\vec{x}) \,n_e(\vec{x})/n_{e0},
\end{equation}
where $\mathbf{1}_{\{condition\}}$ is equal to unity if the condition
is true and zero if not and $\Omega$ defines the region for which
$RM$s were actually measured. The electron density distribution
$n_e(\vec{x})$ is chosen with respect to a reference point $\vec{x}_{ref}$
(usually the cluster centre) such that $n_{e0} = n_e(\vec{x}_{ref})$,
e.g. the central density, and $B_0 = \langle \vec{B}^2 (\vec{x}_{ref})
\rangle ^{1/2}$. The dimensionless average magnetic field profile
$g(\vec{x}) = \langle \vec{B} ^2 (\vec{x}) \rangle ^{1 / 2} / \vec{B} _{0}$ is
assumed to scale with the density profile such that $g(\vec{x}) =
(n_e(\vec{x})/n_{e0})^{\alpha_{B}}$.
Setting $\vec{x}' = \vec{x} + \vec{r}$ and assuming that the correlation length of
the magnetic field is much smaller than characteristic changes in
the electron density distribution, we can separate the two integrals
in Eq.~(\ref{eq:correl}). Furthermore, we can introduce the magnetic
field autocorrelation tensor \hbox{$M_{ij} = \langle B_i(\vec{x}) \cdot
B_j(\vec{x}+\vec{r}) \rangle$} \citep[see e.g.][]{ 1999PhRvL..83.2957S,
2003A&A...401..835E}. Taking this into account, the
$RM$ autocorrelation function can be described by
\begin{equation}
\label{eq:sep_int}
C_{RM}(\vec{x}_{\perp}, \vec{x}_{\perp} + \vec{r}_{\perp}) = \tilde{a_0}^2
\int_{z_s} ^\infty \!\!\!\! {\rm d} z \, f(\vec{x})f(\vec{x}+\vec{r}) \int_{(z'_s - z)
\to -\infty} ^ \infty \!\!\!\!\!\!\!\!\!\!\!\!\! {\rm d} r_z M_{zz}(\vec{r})
\end{equation}
Here, the approximation $(z'_s - z) \to -\infty$ is valid for Faraday
screens which are much thicker than the magnetic autocorrelation
length. This will turn out to be the case in the application at hand.
The Fourier transformed $zz$-component of the autocorrelation tensor
$M_{zz}(\vec{k})$ can be expressed by the Fourier transformed scalar
magnetic autocorrelation function $w(k) = \sum_i M_{ii}(k)$ and a $k$
dependent term (see Eq.~(31) of \citealt{2003A&A...401..835E}), leading to
\begin{equation}
\label{eq:mzz_r}
M_{zz}(\vec{r}) = \frac{1}{2\pi^3} \int ^\infty _{-\infty} \!\!\! {\rm d} ^3k
\,\frac{w(k)}{2}\,\left( 1 - \frac{k_z^2}{k^2} \right) \, {\rm
e}^{-i\vec{k} \vec{r}}
\end{equation}
Furthermore, the one dimensional magnetic energy power spectrum
$\varepsilon_B(k)$ can be expressed in terms of the magnetic
autocorrelation function $w(k)$ such that
\begin{equation}
\label{eq:wk_ebk}
\varepsilon_B(k)\, {\rm d} k = \frac{k^2w(k)}{2\,(2\pi)^3}\, {\rm d} k.
\end{equation}
As stated in \citet{2003A&A...401..835E}, the $k_z = 0$ - plane of
$M_{zz}(\vec{k})$ is all that is required to reconstruct the magnetic
autocorrelation function $w(k)$. Thus, inserting Eq.~(\ref{eq:mzz_r})
into Eq.~(\ref{eq:sep_int}) and using Eq.~(\ref{eq:wk_ebk}) leads to
\begin{eqnarray}
C_{RM}(\vec{x}_{\perp}, \vec{x}_{\perp} + \vec{r}_{\perp}) & = & 4\pi^2
\tilde{a_0}^2 \int_{z_s} ^\infty \!\!\!\!\! {\rm d} z\, f(\vec{x})f(\vec{x}+\vec{r})
\times \nonumber \\
& & \int_{-\infty} ^ \infty \!\!\!\!\! {\rm d} k\, \varepsilon_B(k)
\frac{J_0(kr_{\perp})}{k},
\end{eqnarray}
where $J_0(kr_{\perp})$ is the zeroth-order Bessel function of the
first kind. This equation
gives an expression for the $RM$-autocorrelation function in terms of
the magnetic power spectra of the Faraday-producing medium.
Since the magnetic power spectrum is the interesting function, we
parametrise $\varepsilon_B(k) = \sum_p \varepsilon_{B_p} \mathbf{1}_{\{ k \, \in \, [k_p,
k_{p+1}] \}}$, where $\varepsilon_{B_p}$ is constant in the interval $[k_p,
k_{p+1}]$, leading to
\begin{equation}
\label{eq:cfinal}
C_{RM}(\varepsilon_{B_p}) = 4\pi^2 \tilde{a_0}^2 \int_{z_s} ^\infty
\!\!\!\!\!\!dz\, f(\vec{x})f(\vec{x}+\vec{r}) \sum_p\! \varepsilon_{B_p}
\int_{k_p} ^ {k_{p+1}} \!\!\!\!\!\!\!\! dk\,
\frac{J_0(kr_{\perp})}{k},
\end{equation}
where the $\varepsilon_{B_p}$ are to be understood as the model
parameters $a_p$ for which the likelihood function
$\mathcal{L}_{\vec{\Delta}}(a_p)$ has to be maximised given the Faraday
data $\vec{\Delta}$.
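Since $C_{RM}$ in Eq.~(\ref{eq:cfinal}) is linear in the $\varepsilon_{B_p}$, the geometry-dependent integrals can be evaluated once per bin and stored as matrices; the covariance for any trial spectrum is then just a weighted sum. A schematic Python illustration of this structure (the matrices $M_p$ below are random symmetric stand-ins, not the actual line-of-sight integrals):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 4, 2

# Hypothetical per-bin matrices M_p, standing in for the double integral
# over the window function and the Bessel term, evaluated once per bin.
M = []
for p in range(n_bins):
    A = rng.normal(size=(n_pix, n_pix))
    M.append(A @ A.T)                 # symmetric, like a real covariance term

def covariance(eps):
    """C(eps_B) = sum_p eps_Bp * M_p -- linear in the model parameters."""
    return sum(e * Mp for e, Mp in zip(eps, M))

eps = np.array([1.5, 0.5])
C = covariance(eps)
# Linearity implies dC/d eps_p = M_p exactly, so the second derivatives
# of C with respect to the parameters vanish.
```

This linearity is what later simplifies the curvature matrix: the term containing $\partial^2 C/(\partial a_p \partial a_{p'})$ drops out.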
\subsection{Evaluation of the likelihood function\label{sec:likely}}
In order to maximise the likelihood function,
\citet{1998PhRvD..57.2117B} approximate the likelihood function as a
Gaussian of the parameters in regions close to the maximum $\vec{a} =
\{ a \}_{{\rm max}}$, where $\{ a \}_{{\rm max}}$ is the set of model parameters
which maximise the likelihood function. In this case, one can perform
a Taylor expansion of $\ln\mathcal{L}_{\vec{\Delta}}(\vec{a}+\delta
\vec{a})$ about $\vec{a}$ and truncate it at second order in $\delta
a_p$ without making a large error:
\begin{eqnarray}
\ln \mathcal{L}_{\vec{\Delta}}(\vec{a}+\delta \vec{a}) & = & \ln
\mathcal{L}_{\vec{\Delta}}(\vec{a}) + \sum_p \frac{\partial
\ln\mathcal{L}_{\vec{\Delta}}(\vec{a})}{\partial a_{p}} \delta a_p +
\nonumber \\
& & \frac{1}{2} \sum_{pp'} \frac{\partial^2
\ln\mathcal{L}_{\vec{\Delta}}(\vec{a})} {\partial a_p\, \partial a_{p'}}
\delta a_p \, \delta a_{p'}.
\end{eqnarray}
With this approximation, one can directly solve for the $\delta a_p$
that maximise the likelihood function $\mathcal{L}$
\begin{equation}
\label{eq:delta}
\delta a_p = - \sum_{p'} \left( \frac{\partial^2
\ln\mathcal{L}_{\vec{\Delta}}(\vec{a})} {\partial a_p\, \partial a_{p'}}
\right)^{-1}\, \frac{\partial
\ln\mathcal{L}_{\vec{\Delta}}(\vec{a})}{\partial a_{p'}},
\end{equation}
where the first derivative is given by
\begin{equation}
\label{eq:first}
\frac{\partial \ln\mathcal{L}_{\vec{\Delta}}(\vec{a})}{\partial
a_{p'}} = \frac{1}{2} \mathrm{Tr} \left[ \left( \vec{\Delta}
\vec{\Delta}^T - C \right) \left( C^{-1} \frac{\partial C}{\partial
a_{p'}} C^{-1} \right) \right]
\end{equation}
and the second derivative is expressed by
\begin{eqnarray}
\label{eq:second}
\mathcal{F}^{(a)} _{pp'} & = & - \left( \frac{\partial^2
\ln\mathcal{L}_{\vec{\Delta}}(\vec{a})} {\partial a_p\, \partial
a_{p'}} \right) = \mathrm{Tr} \left[ \left( \vec{\Delta}
\vec{\Delta}^T - C \right) \left( C^{-1} \frac{\partial C}{\partial
a_{p}} C^{-1}\frac{\partial C}{\partial a_{p'}} C^{-1}
\right. \right. \nonumber \\
& & \left. \left. - \frac{1}{2} C^{-1}\frac{\partial^2 C}{\partial a_p
\partial a_{p'}} C^{-1} \right) \right] + \frac{1}{2} \mathrm{Tr}
\left( C^{-1} \frac{\partial C}{\partial a_{p}} C^{-1} \frac{\partial
C}{\partial a_{p'}} \right),
\end{eqnarray}
where Tr indicates the trace of a matrix. The second derivative is
called the curvature matrix. If the covariance matrix is linear in the
parameter $a_p$ then the second derivatives of the covariance matrix
$\partial^2 C/(\partial a_p \partial a_{p'})$ vanish. Note that for
the calculation of the $\delta a_p$, the inverse curvature matrix
$(\mathcal{F}^{(a)} _{pp'})^{-1}$ has to be calculated. The diagonal
terms of the inverse curvature matrix $(\mathcal{F}_{pp} ^{(a)})^{-1}$
can be regarded as the errors $\sigma^2 _{a_p}$ to the parameters
$a_p$.
A suitable iterative algorithm to determine the power spectra would be
to start with an initial guess of a parameter set $a_p$. Using this
initial guess, the $\delta a_p$s have to be calculated using
Eq.~(\ref{eq:delta}). If the $\delta a_p$s are not sufficiently close
to zero, a new parameter set $a' _p = a_p + \delta a_p$ is used and
again the $\delta a' _p$ are calculated and so on. This process can be
stopped when $\delta a_p / \sigma_{a_p} \le \epsilon$, where
$\epsilon$ describes the required accuracy.
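The iteration above can be sketched in Python as follows, assuming a covariance linear in the parameters so that the second-derivative term of Eq.~(\ref{eq:second}) vanishes (an illustration with our own variable names; the toy model $C = a\,I$ has the analytic maximum $a = |\vec{\Delta}|^2/n$):

```python
import numpy as np

def newton_step(delta, C, dC):
    """One iteration: returns the parameter updates delta_a and the
    1-sigma errors, for a covariance linear in the parameters."""
    Cinv = np.linalg.inv(C)
    D = np.outer(delta, delta)                    # the data matrix Delta Delta^T
    n_par = len(dC)
    grad = np.empty(n_par)
    F = np.empty((n_par, n_par))
    for p in range(n_par):
        Wp = Cinv @ dC[p] @ Cinv
        grad[p] = 0.5 * np.trace((D - C) @ Wp)    # first derivative of ln L
        for q in range(n_par):                    # curvature matrix F_pq
            F[p, q] = (np.trace((D - C) @ Wp @ dC[q] @ Cinv)
                       + 0.5 * np.trace(Cinv @ dC[p] @ Cinv @ dC[q]))
    da = np.linalg.solve(F, grad)                 # the update delta a_p
    sigma = np.sqrt(np.diag(np.linalg.inv(F)))    # errors from F^-1
    return da, sigma

# Toy model C = a * I: the maximum lies at a = |Delta|^2 / n.
delta = np.array([1.0, 2.0, -1.0, 0.5])
I4 = np.eye(4)
a = 0.5
for _ in range(50):
    da, sigma = newton_step(delta, a * I4, [I4])
    a += da[0]
    if abs(da[0]) < 1e-12:
        break
```

In the toy model the iteration converges to $a = 6.25/4$ in roughly ten steps, illustrating the quadratic convergence of the scheme near the maximum.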
\subsection{Binning and rebinning}
In our parametrisation of the model given by Eq.~(\ref{eq:cfinal}) the
bin size, i.e.~the size of the interval $[k_p, k_{p+1}]$, is
important. Since we are measuring the power spectrum, we chose equal
bins on a logarithmic scale as the initial binning. However, if the
bins are too small then the cross correlation between two bins could be
very high and the two bins cannot be regarded as independent
anymore. Furthermore, the errors might be very large, and could be one
order of magnitude larger than the actual values. In order to avoid
such situations, it is preferable to choose either fewer bins or to
rebin by adding two bins together. Note that this oversampling is not a
real problem, since the model parameter covariance matrix takes care of
the redundancy between data points. However, for computational
efficiency and for a better display of the data, a smaller set of
mostly independent data points is preferable.
To find a criterion for rebinning, an expression for the
cross-correlation of two parameters $a_p$ and $a_{p'}$ can be defined
by
\begin{equation}
\label{eq:cross}
\delta_{pp'} = \frac{\langle \sigma_p \sigma_{p'} \rangle}{\langle
\sigma_p\rangle \, \langle \sigma_{p'} \rangle} = \frac{\mathcal{F}^{-1}
_{pp'}}{\sqrt{\mathcal{F}^{-1} _{pp} \mathcal{F}^{-1} _{p'p'}}},
\end{equation}
where the full range, $-1 \le \delta_{pp'} \le 1$, is possible but
usually the correlation will be negative, indicating
anti-correlation. Our criterion for rebinning is to require that if
the absolute value of the cross-correlation $| \delta_{pp'} |$ is
larger than $\delta_{pp'} ^{\rm max}$ for two bins $p$ and $p'$ then
these two bins are added together in such a way that the magnetic
energy $\sum_p \varepsilon_{B_p} \Delta k_{p}$ is conserved.
After rebinning, the algorithm starts to iterate again and finds the
maximum with the new binning. This is repeated as long as the
cross-correlation of any two bins is larger than required.
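The rebinning bookkeeping can be sketched as follows (hypothetical Python helpers of our own; `merge_bins` conserves $\sum_p \varepsilon_{B_p}\Delta k_p$ as required above):

```python
import numpy as np

def cross_corr(Finv):
    """delta_pp' of the cross-correlation criterion, computed from the
    inverse curvature matrix F^-1."""
    s = np.sqrt(np.diag(Finv))
    return Finv / np.outer(s, s)

def merge_bins(eps, k_edges, p):
    """Merge bins p and p+1 such that sum_p eps_Bp * dk_p is conserved."""
    dk1 = k_edges[p + 1] - k_edges[p]
    dk2 = k_edges[p + 2] - k_edges[p + 1]
    merged = (eps[p] * dk1 + eps[p + 1] * dk2) / (dk1 + dk2)
    eps = np.concatenate([eps[:p], [merged], eps[p + 2:]])
    k_edges = np.concatenate([k_edges[:p + 1], k_edges[p + 2:]])
    return eps, k_edges

Finv = np.array([[4.0, -1.0], [-1.0, 1.0]])       # toy inverse curvature matrix
d = cross_corr(Finv)                              # d[0, 1] = -0.5 for this Finv
eps, edges = merge_bins(np.array([2.0, 4.0]), np.array([1.0, 2.0, 4.0]), 0)
```

In the toy example the two bins carry a total energy of $2 \cdot 1 + 4 \cdot 2 = 10$ before merging, and the merged bin retains exactly this energy.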
\subsection{The algorithm}
As a first guess for a set of model parameters $\varepsilon_{B_p}$, we used the
results from a Fourier analysis of the original $RM$ map employing the
algorithms as described in \citet{2003A&A...412..373V}. However, we
also employed as a first guess a simple power law $\varepsilon_{B_p} \propto
k_p^{\alpha}$, where $\alpha$ is the spectral index. The results and the
shape of the power spectrum did not change.
If not stated otherwise, an iteration is stopped when $\epsilon <
0.01$, i.e. the change in parameter $\varepsilon_{B_p}$ is smaller than 1\% of
the error in the parameter $\varepsilon_{B_p}$ itself. Once the iteration
converges to a final set of model parameters the cross-correlation
between the bins is checked and if necessary, the algorithm will start
a new iteration after rebinning. Throughout the rest of the paper, we
require $| \delta_{pp'} | < 0.5$ for $p \neq p'$.
Once the power spectrum in terms of $\varepsilon_B(k) = \sum _p \varepsilon_{B_p}
\mathbf{1}_{\{k \, \in \, [k_p, k_{p+1}]\}}$ is determined, we can calculate the
magnetic energy density $\varepsilon_B$ by integration of the power spectrum
\begin{equation}
\varepsilon_B(a_p) = \int _0 ^{\infty} {\rm d} k\, \varepsilon_B(k) = \sum_p \varepsilon_{B_p} \Delta
k_p,
\end{equation}
where $\Delta k_p = k_{p+1} - k_p$ is the binsize.
Also $\lambda_B$ and $\lambda_{RM}$ are accessible by integration of
the power spectrum \citep{2003A&A...401..835E}:
\begin{eqnarray}
\lambda_B & = & \pi \frac{\int_0 ^{\infty} {\rm d} k \, \varepsilon_B(k)/k}{\int _0
^{\infty} {\rm d} k \, \varepsilon_B(k)} = \pi \frac{\sum_p \varepsilon_{B_p}
\ln(k_{p+1}/k_{p})} {\sum_p \varepsilon_{B_p} \Delta k_p} \\
\lambda_{RM} & = & 2 \frac{\int_0 ^{\infty} {\rm d} k \,
\varepsilon_B(k)/k^2}{\int_0 ^{\infty} {\rm d} k \, \varepsilon_B(k)/k} = 2 \frac{\sum_p
\varepsilon_{B_p} \left( 1/k_{p} - 1/k_{p+1} \right)}{\sum_p \varepsilon_{B_p}
\ln(k_{p+1}/k_{p})}.
\end{eqnarray}
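As a concrete illustration, the discretised sums above can be evaluated directly from the binned spectrum (a Python sketch with our own variable names):

```python
import numpy as np

def integrated_quantities(eps, k_edges):
    """eps_B, lambda_B and lambda_RM from a binned power spectrum."""
    kp, kp1 = k_edges[:-1], k_edges[1:]
    e_tot = np.sum(eps * (kp1 - kp))              # total magnetic energy density
    logs = np.sum(eps * np.log(kp1 / kp))         # the 1/k-weighted integral
    invs = np.sum(eps * (1.0 / kp - 1.0 / kp1))   # the 1/k^2-weighted integral
    return e_tot, np.pi * logs / e_tot, 2.0 * invs / logs

# Single bin with eps_B(k) = 1 on [1, 2]:
e_tot, lam_B, lam_RM = integrated_quantities(np.array([1.0]),
                                             np.array([1.0, 2.0]))
```

For this single bin one finds $\varepsilon_B = 1$, $\lambda_B = \pi \ln 2$ and $\lambda_{RM} = 1/\ln 2$, which follow directly from the discretised formulas.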
Since the method allows us to calculate errors $\sigma_{\varepsilon_{B_p}}$, one can
also determine errors for these integrated quantities. However, the
cross-correlations $\delta_{pp'}$, which are non-zero as already
mentioned, have to be taken into account. The probability distribution
$P(\vec{a})$ of a parameter can often be described by a Gaussian
\begin{equation}
\label{eq:prob}
P(\vec{a}) \sim e^{-\frac{1}{2} \delta \vec{a} ^T X^{-1} \delta \vec{a}},
\end{equation}
where $X$ is the covariance matrix of the parameters, $\delta \vec{a}
= \vec{a} - \vec{a}_{{\rm peak}}$, $\vec{a}=\{a\}_{{\rm max}}$ is the
determined maximum value for the probability distribution and
$\vec{a}_{{\rm peak}}$ is the actual maximum of the probability
function. The standard deviation is defined as
\begin{equation}
\label{eq:deltaeb}
\langle \delta \varepsilon_B^2 \rangle = \langle (\varepsilon_B(a)-\varepsilon_B)^2 \rangle =
\int\, {\rm d}^n a\, P(a)\,(\varepsilon_B(a) - \varepsilon_B)^2.
\end{equation}
Assuming that $P(\vec{a})$ follows a Gaussian distribution (as done above in
Eq.~(\ref{eq:prob})) and using that $\varepsilon_B(a)$ is linear in the $a_p =
\varepsilon_{B_p}$ then Eq.~(\ref{eq:deltaeb}) becomes
\begin{eqnarray}
\langle \delta \varepsilon_B^2 \rangle & = & \int {\rm d}^n a\, P(a) \, \left[
\delta a \frac{\partial \varepsilon_B}{\partial a_p} \right]^2\\
& = & \int {\rm d}^n a\, P(a) \, \sum_p \delta a_p \, \frac{\partial
\varepsilon_B}{\partial a_p} \, \sum_{p'}
\delta a_{p'} \, \frac{\partial \varepsilon_B}{\partial a_{p'}}.
\end{eqnarray}
Rearranging this equation, and realising that the partial derivatives
are independent of the $a_p$ since $\varepsilon_B$ is linear in the $a_p$s,
leads to
\begin{equation}
\langle \delta \varepsilon_B^2 \rangle = \sum_{pp'} \frac{\partial \varepsilon_B}{\partial
a_p}\, \frac{\partial \varepsilon_B}{\partial a_{p'}} \int {\rm d}^n a \, P(a) \, \delta
a_p \, \delta a_{p'}
\end{equation}
and finally using Eq.~(\ref{eq:cross})
\begin{equation}
\langle \delta \varepsilon_B^2 \rangle = \sum_{pp'} \frac{\partial
\varepsilon_B}{\partial a_p}\, \frac{\partial \varepsilon_B}{\partial a_{p'}} \,\langle
\sigma_{p} \sigma_{p'} \rangle,
\end{equation}
where $\langle \sigma_{p} \, \sigma_{p'} \rangle = \mathcal{F}_{pp'} ^{-1}$.
A similar argument can be applied to the error derivation for the
correlation lengths $\lambda_{RM}$ and $\lambda_B$, although the
correlation lengths are not linear in the coefficients $a_p$. If one
uses the partial derivatives at the determined maximum, one is still
able to approximately separate them from the integral. This leads to
the following expressions for their errors
\begin{equation}
\langle \delta \lambda_B^2 \rangle \approx \sum_{pp'} \frac{\partial
\lambda_B}{\partial a_p} \bigg\arrowvert_{a _p ^{{\rm max}}} \frac{\partial
\lambda_B}{\partial a_{p'}} \bigg\arrowvert_{a _{p'} ^{{\rm max}}} \langle
\sigma_{p} \sigma_{p'} \rangle
\end{equation}
and
\begin{equation}
\langle \delta \lambda_{RM}^2 \rangle \approx \sum_{pp'} \frac{\partial
\lambda_{RM}}{\partial a_p} \bigg\arrowvert_{a _p ^{{\rm max}}}
\frac{\partial \lambda_{RM}}{\partial a_{p'}} \bigg\arrowvert_{a _{p'}
^{{\rm max}}} \langle \sigma_{p} \sigma_{p'} \rangle.
\end{equation}
\section{Testing the algorithm\label{sec:test}}
\begin{figure*}[htb]
\resizebox{\hsize}{!}{\includegraphics{fig1.ps}}
\caption[]{\label{fig:testrm} Right panel: a small part ($37 \times 37$
kpc) of a typical realisation of an $RM$ map produced by a
Kolmogorov-like magnetic field power spectrum for $k \geq k_c = 0.8$
kpc$^{-1}$ and a magnetic field strength of 5 $\mu$G. Left panel: the
$RM$ data used for the data matrix $\Delta_i$, where we
averaged arbitrary neighbouring points in order to reduce the number of
independent points in a similar way as done later with the
observational data.}
\end{figure*}
In order to test our algorithm, we applied our maximum likelihood
estimator to generated $RM$ maps with a known magnetic power spectrum
$\varepsilon_B(k)$. \citet{2003A&A...401..835E} give a prescription (their
Eq.~(37)) for the relation between the amplitude of $RM$,
$|\hat{RM}(k_{\perp})|^2$, and the magnetic power spectrum in Fourier
space
\begin{equation}
\varepsilon_B^{{\rm obs}}(k) = \frac{k^2}{a_1\,A_{\Omega}(2\pi)^4}
\int_{0}^{2\pi} {\rm d} \phi \,\, |\hat{RM}(\vec{k}_{\perp})|^2
\end{equation}
or
\begin{equation}
\label{eq:rmk}
|\hat{RM}(k_{\perp})|^2 = \frac{a_1\,A_{\Omega}(2\pi)^3}{k^2}
\varepsilon_B^{{\rm obs}}(k),
\end{equation}
where $A_{\Omega}$ is the area of the region $\Omega$ for which $RM$s
are actually measured, $a_1 = a_0^2\,n_{{\rm e0}}^2\,L$, and $L$ is the
characteristic depth of the Faraday screen.
As the Faraday screen, we assumed a box with sides of length 150 kpc
and a depth of $L = 300$ kpc. For simplicity, we assumed a uniform
electron density profile with a density of $n_{{\rm e0}} = 0.001$
cm$^{-3}$. For the magnetic field power spectra, we used
\begin{equation}
\label{eq:k0norm}
\varepsilon_B^{{\rm obs}}(k) = \left\{
\begin{array}{ll}
\frac{\varepsilon_B}{k_0^{1-\alpha} \, k_c^{2+\alpha}} \, k^2 & \forall k \leq
k_c \\
\frac{\varepsilon_B}{k_0}\left( \frac{k}{k_0} \right)^{-5/3} & \forall k
\geq k_c
\end{array} \right. ,
\end{equation}
where the spectral index was set to $\alpha = 5/3$, mimicking Kolmogorov
turbulence ($\varepsilon_B(k) \propto k^{-5/3}$) with energy injection at
$k = k_{c}$, and
\begin{equation}
\varepsilon_B = \frac{\langle B^2 \rangle}{8\pi} = \int _{0} ^{k_{{\rm max}}}
\!\!\! {\rm d} k \, \varepsilon_B ^{{\rm obs}}(k),
\end{equation}
where $k_{{\rm max}} = \pi/\Delta r$ is determined by the pixel size
($\Delta r$) of the $RM$ map used. The latter equation combined with
Eq.~(\ref{eq:k0norm}) gives the normalisation $k_0$ in such a way that
the integration over the accessible power spectrum results in a
magnetic field strength of $B$, for which we used 5 $\mu$G. We used
$k_{c} = 0.8$ kpc$^{-1}$.
In order to generate an $RM$ map with the magnetic power spectrum
$\varepsilon_B(k)$ for the chosen Faraday screen, we filled the real and
imaginary part of the Fourier space independently with Gaussian
deviates. Then these values were multiplied by the appropriate values
given by Eq.~(\ref{eq:rmk}) corresponding to their place in
$k$-space. As a last step, an inverse Fourier transformation was
performed. A typical realisation of such a generated $RM$ map is shown
in Fig.~\ref{fig:testrm}.
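This generation procedure can be sketched in a few lines of Python (illustrative only: the overall normalisation and the Hermitian symmetry of the Fourier coefficients are ignored here, so the map is correct only up to an amplitude factor):

```python
import numpy as np

def make_rm_map(n, pix, spectrum):
    """Draw a Gaussian random RM map whose expected |RM(k)|^2 follows
    `spectrum(k)`: fill Fourier space with complex Gaussian deviates,
    weight by sqrt(spectrum), and transform back."""
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=pix)
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    k = np.hypot(kx, ky)
    amp = np.zeros_like(k)
    amp[k > 0] = np.sqrt(spectrum(k[k > 0]))      # k = 0 mode set to zero
    rng = np.random.default_rng(1)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(amp * noise).real

# |RM(k)|^2 ~ eps_B(k)/k^2 ~ k^(-11/3) for a Kolmogorov-like eps_B ~ k^(-5/3)
rm = make_rm_map(64, 1.0, lambda k: k ** (-11.0 / 3.0))
```

The $k^{-11/3}$ weighting reflects Eq.~(\ref{eq:rmk}), where the magnetic power spectrum is divided by $k^2$ to obtain the $RM$ amplitudes.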
For the analysis of the resulting $RM$ map only a small part of the
initial map was used in order to reproduce the influence of the limited
emission region of a radio source. We applied the Fourier analysis as
described in \citet{2003A&A...401..835E} to this part. The resulting
power spectrum is shown in Fig.~\ref{fig:test} as a dashed line in
comparison with the input power spectrum as a dotted line.
The maximum likelihood method is numerically limited by computational
power since it involves matrix multiplication and inversion, where the
latter is an $N^3$ process. Thus, not all of the many points defined in
our maps can be used. However, it is desirable to use as much
information as possible from the original map. Therefore we chose to
randomly average neighbouring points with a scheme which led to a map
with spatially inhomogeneously resolved cells. The resulting map is
most highly resolved at the top and most coarsely resolved at the
bottom, with some random deviations which make it similar to the error
weighting of the observed
data. We used $N$ = 1500 independent points for the analysis. In the
left panel of Fig.~\ref{fig:testrm}, the averaged $RM$ map which was
used for the test is shown.
As a first guess for the maximum likelihood estimation, we used the
power spectra derived by the Fourier analysis. The resulting power
spectrum is shown as filled circles with 1-$\sigma$ error bars in
Fig.~\ref{fig:test}. The input power spectrum and the power spectrum
derived by the maximum likelihood estimator agree well within the one
$\sigma$ level. Integration over this power spectrum results in a field
strength of \hbox{$(4.7 \pm 0.3) \mu$G} in agreement with the input
magnetic field strength of \hbox{$5 \mu$G}.
\begin{figure}[htb]
\resizebox{\hsize}{!}{\includegraphics{fig2.ps}}
\caption[]{\label{fig:test} Power spectra for a simulated $RM$ map as
shown in Fig.~\ref{fig:testrm}. The input power spectrum is shown in
comparison to the one found by the Fourier analysis as described in
\citet{2003A&A...401..835E} and the one which was derived by our
maximum likelihood estimator. One can see the good agreement within one
$\sigma$ between the input power spectrum and the one derived by
the maximum likelihood method.}
\end{figure}
\begin{figure*}[hbt]
\resizebox{\hsize}{!}{\includegraphics{fig3.ps}}
\caption[]{\label{fig:rmav} The final $RM$ map from the north lobe of
Hydra~A which was analysed with the maximum likelihood estimator; left:
error weighted map. The dots indicate the coordinates which
correspond to the appropriate error weighted $RM$ value, which resulted
from averaging over the indicated area; right: original
\textit{Pacman} map. Note that the small scale noise for the diffuse
part of the lobe is averaged out and only the large scale information
carried by this region is maintained.}
\end{figure*}
\section{Application to Hydra~A\label{sec:app}}
\subsection{The data $\vec{\Delta}$ \label{sec:data}}
We applied this maximum likelihood estimator introduced and tested in
the last sections to the Faraday rotation map of the north lobe of the
radio source Hydra~A \citep{1993ApJ...416..554T}. The data were kindly
provided by Greg Taylor.
For this purpose, we used a high fidelity $RM$ map presented in
\citet{2004astro.ph..1216V} which was generated by the newly developed
algorithm \textit{Pacman} \citep{2004astro.ph..1214D} using the
original polarisation data. \textit{Pacman} also provides error maps
$\sigma_i$ by error propagation of the instrumental uncertainties of
polarisation angles. The \textit{Pacman} map which was used is shown
in the right panel of Fig.~\ref{fig:rmav}.
For the same reasons as mentioned in Sect.~\ref{sec:test}, we averaged
the data. An appropriate averaging procedure using error weighting was
applied such that
\begin{equation}
\overline{RM}_i = \frac{\sum_j {RM_j}/{\sigma^2_j}} {\sum_{j}
{1}/{\sigma_j^2}},
\end{equation}
and the error calculates as
\begin{equation}
\sigma ^2 _{\overline{RM}_i} = \frac{\sum_j \left( {1}/
{\sigma^2_j} \right) } { \left( \sum_{j} {1}/{\sigma_j^2}\right)^2 } =
\frac{1}{\sum_{j} {1}/{\sigma_j^2}}.
\end{equation}
Here, the sum goes over the set of old pixels $\{ j \}$ which form the
new pixels $\{ i \}$. The corresponding pixel coordinates $\{ i \}$
were also determined by applying an error weighting scheme
\begin{equation}
\overline{x}_i = \frac{\sum_j {x_j}/{\sigma^2 _j}}{\sum_j
1/\sigma^2_j} \;\;\mbox{and}\;\;
\overline{y}_i = \frac{\sum_j {y_j}/{\sigma^2 _j}}{\sum_j
1/\sigma^2_j}.
\end{equation}
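The three averaging equations above translate directly into a few lines of code; the following Python sketch (function name ours) computes the weighted $RM$, its propagated error, and the weighted coordinates for one new pixel.

```python
import numpy as np

def average_cell(rm, sigma, x, y):
    """Inverse-variance weighted RM, its propagated error, and the
    error-weighted cell coordinates, for the set of old pixels {j}
    forming one new pixel {i}."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    wsum = w.sum()
    rm_bar = np.dot(w, rm) / wsum
    sigma_bar = np.sqrt(1.0 / wsum)       # Var = 1 / sum(1/sigma_j^2)
    x_bar = np.dot(w, x) / wsum
    y_bar = np.dot(w, y) / wsum
    return rm_bar, sigma_bar, x_bar, y_bar
```

As expected for inverse-variance weighting, a low-noise pixel dominates the resulting cell value.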
The analysed $RM$ map was determined by a gridding procedure. The
original $RM$ map was divided into four equally sized cells. In each
of these the original data were averaged as described above. Then the
cell with the smallest error was chosen and again divided into four
equally sized cells and the original data contained in the so-determined
cell were averaged. The last step was repeated until the number of
cells reached a defined value $N$. We decided to use $N = 1500$. This
is partly due to the limitation of computational power but also partly
because of the desired suppression of small-scale noise by a strong
averaging of the noisy regions.
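The gridding procedure amounts to a quadtree-like refinement driven by the cell errors. A minimal sketch follows (Python; the priority queue, the handling of unsplittable cells and the exact stopping rule are our own choices, and complications such as non-square cells are ignored):

```python
import heapq
import numpy as np

def adaptive_grid(sigma, n_cells):
    """Repeatedly split the cell whose error-weighted average is best
    determined (smallest error) into four equal quadrants, so that noisy
    regions stay coarse while clean regions become highly resolved."""
    def cell_error(r0, r1, c0, c1):
        return float(np.sqrt(1.0 / (1.0 / sigma[r0:r1, c0:c1] ** 2).sum()))

    full = (0, sigma.shape[0], 0, sigma.shape[1])
    heap = [(cell_error(*full), full)]
    while len(heap) < n_cells:
        err, (r0, r1, c0, c1) = heapq.heappop(heap)
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        if rm == r0 or cm == c0:          # single-pixel side: keep as is
            heapq.heappush(heap, (float("inf"), (r0, r1, c0, c1)))
            continue
        for q in ((r0, rm, c0, cm), (r0, rm, cm, c1),
                  (rm, r1, c0, cm), (rm, r1, cm, c1)):
            heapq.heappush(heap, (cell_error(*q), q))
    return [cell for _, cell in heap]
```

Each split replaces one cell by four, so the cell count grows in steps of three and a target $N$ can be overshot by up to two.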
The final $RM$ map which was analysed is shown in
Fig.~\ref{fig:rmav}. The most noisy regions in Hydra~A are located in
the coarsely resolved northernmost part of the lobe. We
chose not to resolve this region any further but to keep the
large-scale information which is carried by this region.
\subsection{The window function\label{sec:window}}
As mentioned in Sect.~\ref{sec:crm}, the window function describes the
sampling volume and, thus, we have to find a suitable description for
it based on Eq.~(\ref{eq:window}). Hydra A (or 3C218) is located at a
redshift of 0.0538 \citep{1991trcb.book.....D}. For the derivation of
the electron density profile parameter, we relied on the work by
\citet{1999ApJ...517..627M} done for ROSAT PSPC data while using the
deprojection of X-ray surface brightness profiles as described in the
Appendix A of \citet{2004A&A...413...17P}. Since Hydra A is known to
exhibit a strong cooling flow as observed in the X-ray studies, we
assumed a double $\beta$-profile
\footnote {
defined as
\begin{math}
n_{{\rm e}}(r) = [n_{{\rm e1}}^2 (0)(1+(r/r_{{\rm
c1}})^2)^{-3\beta}+n_{{\rm e2}}^2 (0)
(1+(r/r_{{\rm c2}})^2)^{-3\beta}]^{1/2}.
\end{math}
} and used for the inner profile $n_{{\rm e1}}(0) = 0.056$ cm$^{-3}$
and $r_{c1} = 0.53$ arcmin; for the outer profile we used $n_{{\rm
e2}}(0) = 0.0063$ cm$^{-3}$ and $r_{c2} = 2.7$ arcmin and we applied a
$\beta = 0.77$.
\begin{figure*}[hbt]
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{fig4.ps}}
\vspace{-1.0cm}
\caption[]{\label{fig:radial} The comparison of the integrated
squared window function $f^2(r)$ (lines) with the $RM$ dispersion
function $\langle RM^2(r) \rangle$ (open circles) and $\langle RM^2
\rangle - \langle RM(r) \rangle^2$ (filled circles). Different models
for the window function were assumed. In (a) $\alpha_B = 1.0$, in (b)
$\alpha_B = 0.5$ and in (c) $\alpha_B = 0.1$ were used, where the
inclination angle $\theta$ of the source was varied. It can be seen
that models for the window function with $\alpha_B = 0.1\ldots0.5$ and
$\theta = 10\degr \ldots 50\degr$ match the shape of the
dispersion function very well.}
\end{figure*}
Assuming this electron density profile to be accurately determined,
there are two other parameters which enter the window function. The
first one is related to the source geometry. For Hydra~A, a clear
depolarisation asymmetry between the two lobes is observed, known as
the Laing-Garrington effect \citep{1988Natur.331..147G,
1988Natur.331..149L} suggesting that the source is tilted from the
$xy$-plane \citep{1993ApJ...416..554T}. In fact, the north lobe points
towards the observer. In order to take this into account, we
introduced an angle $\theta$ which describes the angle between the
source and the $xy$-plane such that the north lobe points towards the
observer. \citet{1993ApJ...416..554T} determine an inclination angle
of $\theta = 45\degr$.
The other parameter is related to the global magnetic field
distribution which is assumed to scale with the electron density
profile $B(r) \propto n_{{\rm e}}(r) ^ {\alpha_B}$. In a scenario in
which an originally statistically homogeneous magnetic energy
density gets adiabatically compressed, one expects $\alpha_B =
2/3$. If the ratio of magnetic and thermal pressure is constant
throughout the cluster then $\alpha_B = 0.5$. However, $\alpha_B$
might have any other value. \citet{2001A&A...378..777D} determined an
$\alpha_B=0.9$ for the outer regions of the cluster Abell 119.
In order to constrain the applicable ranges of these quantities, one
can compare the integrated squared window function with the $RM$
dispersion function $\langle RM(r_{\perp})^2 \rangle$ of the $RM$ map
used since
\begin{equation}
\langle RM^2 (r_{\perp}) \rangle \propto \int _{-\infty} ^{\infty}
{\rm d} z\, f^2(r_{\perp}, z),
\end{equation}
as stated by Eq.~(24) of \citet{2003A&A...401..835E}. Therefore, we
compared the shape of the two functions. The result is shown in
Fig.~\ref{fig:radial}. For the window function, we used three
different $\alpha_B =0.1, 0.5, 1.0$ and for each of these, five
different inclination angles $\theta = 0\degr, 10\degr, 30\degr,
45\degr$ and $60\degr$ were employed, although the case $\theta = 0\degr$
is not very likely considering the observational evidence of the
Laing-Garrington effect as observed in Hydra~A by
\citet{1993ApJ...416..554T}. The different results are plotted as
lines of different style in Fig.~\ref{fig:radial}. The filled and open
dots represent the $RM$ dispersion function. The solid circles
indicate the binned $\langle RM^2 \rangle$ function. The open circles
represent the binned $\langle RM^2 \rangle - \langle RM \rangle^2$
function, which is cleaned of any foreground $RM$ signals.
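Both binned statistics are simple to compute from a pixelised map. The sketch below (our own binning choices, not the original code) also illustrates why $\langle RM^2 \rangle - \langle RM \rangle^2$ is cleaned of foreground signals: a constant foreground offset cancels in it, while it inflates $\langle RM^2 \rangle$.

```python
import numpy as np

def rm_dispersion_profile(rm, x0, y0, nbins):
    """Radially binned <RM^2>(r) and <RM^2>(r) - <RM(r)>^2 around (x0, y0).
    A constant foreground RM shifts the first statistic but cancels in
    the second."""
    ny, nx = rm.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(xx - x0, yy - y0).ravel()
    idx = np.digitize(r, np.linspace(0.0, r.max(), nbins + 1)) - 1
    vals = rm.ravel()
    mean_sq = np.array([np.mean(vals[idx == b] ** 2) for b in range(nbins)])
    mean = np.array([np.mean(vals[idx == b]) for b in range(nbins)])
    return mean_sq, mean_sq - mean ** 2
```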
From Fig.~\ref{fig:radial}, it can be seen that models with $\alpha_B
= 1.0$ or $\theta > 50\degr$ are not able to recover the shape of
the $RM$ dispersion function and, thus, we expect $\alpha_B < 1.0$ and
$\theta < 50\degr$ to be more likely.
\section{Results and discussion\label{sec:discussion}}
Based on the described treatment of the data and the description of
the window function, first we calculated power spectra for various
scaling exponents $\alpha_B$ while keeping the inclination angle at
$\theta = 45\degr$. For this investigation, we used $n_l = 5$ bins,
which proved to be sufficient. For these calculations,
we used $\epsilon < 0.1$. The resulting power spectra are plotted in
Fig.~\ref{fig:power_alpha}.
\begin{figure}[htb]
\resizebox{\hsize}{!}{\includegraphics{fig5.ps}}
\caption[]{\label{fig:power_alpha} Power spectra for $N = 1500$ and
$n_l = 5$. Different exponents $\alpha_B$ in the relation $B(r) \sim
n_e(r)^{\alpha_B}$ of the window function were used. The inclination
angle of the source was chosen to be $\theta = 45\degr$.}
\end{figure}
In Fig.~\ref{fig:power_alpha}, one can see that the power spectrum
derived for $\alpha_B = 1.0$ has a completely different shape whereas
the other power spectra show only slight deviation from each other and
are vertically displaced, implying different normalisation factors,
i.e. central magnetic field strengths which increase with increasing
$\alpha_B$. The straight dashed line which is also plotted in
Fig.~\ref{fig:power_alpha} indicates a Kolmogorov-like power spectrum
with a spectral index of $5/3$ in our prescription. The power spectra follow this
slope over at least one order of magnitude.
In Sect.~\ref{sec:window}, we were not able to distinguish between the
various scenarios for $\alpha_B$ although we found that an $\alpha_B =
1$ does not properly reproduce the measured $RM$ dispersion. However,
the likelihood function offers the possibility to calculate the actual
probability of a set of parameters given the data (see
Eq.~(\ref{eq:likely})). Thus, we calculated the log likelihood $\ln
{\mathcal L}_{\vec{\Delta}}(\vec{a})$ value for various power spectra
derived for the different window functions varying in the scaling
exponent $\alpha_B$, assuming the inclination angle of the source
to be $\theta = 45\degr$ for all geometries. In
Fig.~\ref{fig:alpha_lnL}, the log likelihood is shown as a function of
the scaling parameter $\alpha_B$ used.
\begin{figure}[htb]
\resizebox{\hsize}{!}{\includegraphics{fig6.ps}}
\caption[]{\label{fig:alpha_lnL} The log likelihood $\ln \mathcal
{L}_{\vec{\Delta}} (\vec{a})$ of various power spectra assuming
different $\alpha_B$ while using a constant inclination angle $\theta =
45\degr$. $\alpha_B = 0.1\ldots0.8$ are in the plateau of maximum
likelihood. The sudden decrease for $\alpha_B < 0.1$ in the likelihood
might be due to non-Gaussian effects becoming too strong.}
\end{figure}
As can be seen from Fig.~\ref{fig:alpha_lnL}, there is a plateau of
most likely scaling exponents $\alpha_B$ ranging from 0.1 to 0.8. An
$\alpha_B = 1$ seems to be very unlikely for our model as already
deduced in Sect.~\ref{sec:window}. The sudden decrease for $\alpha_B <
0.1$ might be due to non-Gaussian effects. The magnetic field strength
derived for this plateau region ranges from 9 $\mu$G to 5 $\mu$G. The
correlation length of the magnetic field $\lambda_B$ was determined to
range between $2.5$ kpc and $3.0$ kpc whereas the $RM$ correlation
length was determined to be in the range of $4.5\ldots5.0$ kpc. These
ranges have to be considered as a systematic uncertainty since we are
not yet able to distinguish between these scenarios
observationally. Another systematic effect might be given by
uncertainties in the electron density itself. Varying the electron
density normalisation parameters ($n_{{\rm e1}}(0)$ and $n_{{\rm
e2}}(0)$) leads to a vertical displacement of the power spectrum while
keeping the same shape.
In order to study the influence of the inclination angle on the power
spectrum, we used an $\alpha_B = 0.5$, being in the middle of the most
likely region derived. For this calculation, we used smaller bins and
thus increased the number of bins to $n_l = 8$. We calculated the
power spectrum for two different inclination angles $\theta = 30\degr$
and $\theta = 45\degr$. The results are shown in
Fig.~\ref{fig:power_theta} in comparison with a Kolmogorov-like power
spectrum.
\begin{figure}[htb]
\resizebox{\hsize}{!}{\includegraphics{fig7.ps}}
\caption[]{\label{fig:power_theta} Power spectra for two different
inclination angles $\theta = 30\degr$ and $\theta = 45\degr$ and an
$\alpha_B = 0.5$. For comparison a Kolmogorov-like power spectrum is
plotted as a straight dashed line. One can see that the calculated
power spectra follow such a power spectrum over at least one order of
magnitude. Note that the error bars are larger than in
Fig.~\ref{fig:power_alpha} because smaller bin sizes were used.}
\end{figure}
As can be seen from Fig.~\ref{fig:power_theta}, the power spectra
derived agree well with a Kolmogorov-like power spectrum over at
least one order of magnitude. For the inclination angle of $\theta =
30\degr$, we derived the following field and map properties \hbox{$B =
5.7 \pm 0.1 \, \mu$G}, \hbox{$\lambda_B = 3.1\pm0.3$ kpc} and
\hbox{$\lambda_{RM} = 6.7 \pm 0.7$ kpc}. For $\theta = 45\degr$, we
calculated \hbox{$B = 7.3 \pm 0.2 \, \mu$G}, \hbox{$\lambda_B =
2.8\pm0.2$ kpc} and \hbox{$\lambda_{RM} = 5.2 \pm 0.5$ kpc}. The value
of the log likelihood $\ln \mathcal{L}$ was determined to be slightly
higher for the inclination angle of $\theta = 30\degr$. The flattening
of the power spectra at large $k$ can be explained by small-scale
noise which we did not model separately.
Although the central magnetic field strength decreases with decreasing
scaling parameter $\alpha_B$, the volume-integrated magnetic field
energy $E_B$ within the cluster core radius $r_{{\rm c2}}$
increases. The volume-integrated magnetic field energy $E_B$ is
calculated as follows
\begin{equation}
E_B = 4 \pi \int _0 ^{r_{{\rm c2}}} {\rm d} r\, r^2 \, \frac{B^2(r)}{8\pi}
= \frac{B_0^2}{2} \int _0 ^{r_{{\rm c2}}} {\rm d} r \, r^2 \, \left(
\frac{n_{{\rm e}}(r)}{n_{{\rm e0}}} \right) ^{2 \alpha_B},
\end{equation}
where we integrate from the cluster centre to the core radius $r_{{\rm
c2}}$ of the second, non-cooling flow, component of the electron
density distribution.
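A quick numerical sketch of this integral (Python; the trapezoidal quadrature, function names and test profile are ours, not part of the original analysis):

```python
import numpy as np

def integrated_field_energy(B0, alpha_B, r_c2, ne, ne0, n=10001):
    """E_B = (B0^2 / 2) * integral_0^{r_c2} r^2 (n_e(r)/n_e0)^(2 alpha_B) dr,
    evaluated with a simple trapezoidal rule; ne is any callable n_e(r)."""
    r = np.linspace(0.0, r_c2, n)
    f = r ** 2 * (ne(r) / ne0) ** (2.0 * alpha_B)
    return 0.5 * B0 ** 2 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
```

For a fixed central field strength and a declining density profile, a smaller $\alpha_B$ yields a larger integrated energy, reflecting the higher field strength in the outer parts of the core.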
We integrated the magnetic field profile for the various scaling
parameters and the corresponding field strengths which we determined by
our maximum likelihood estimator. The result is plotted in
Fig.~\ref{fig:bradial}. The higher magnetic energies for the smaller
scaling parameters, which correspond to a lower central magnetic field
strength, are due to the higher field strength in the outer parts of the
cool cluster core. This effect would be much more drastic if we had
extrapolated the scaling $B(r) \propto n_{\rm e}(r)^{\alpha_B}$ to
larger cluster radii and integrated over a larger volume.
\begin{figure}[htb]
\resizebox{\hsize}{!}{\includegraphics{fig8.ps}}
\caption[]{\label{fig:bradial} The integrated magnetic field energy
$E_B$ within the cluster core radius $r_{\rm c2}$ for the various
scaling parameters $\alpha_B$ also used in Fig.~\ref{fig:alpha_lnL} and
the corresponding central magnetic field strength $B_0$ as determined
by our maximum likelihood estimator.}
\end{figure}
\section{Conclusions\label{sec:conclusion}}
We presented a maximum likelihood estimator for the determination of
cluster magnetic field power spectra from $RM$ maps of extended
polarised radio sources. We introduced the covariance matrix for $RM$
under the assumption of statistically homogeneously distributed
magnetic fields throughout the Faraday screen. We successfully tested
our approach on simulated $RM$ maps with known power spectra.
We applied our approach to the $RM$ map of the north lobe of Hydra
A. We calculated different power spectra for various window functions,
which are especially influenced by the scaling parameter relating the
electron density profile to the global magnetic field distribution, and
by the inclination angle of the emission region. The scaling parameter
$\alpha_B$ was determined to be most likely in the range of
$0.1\ldots0.8$.
We realised that there is a systematic uncertainty in the values
calculated due to the uncertainty in the window parameter
itself. Taking this into account, we deduced for the central magnetic
field strength in the Hydra A cluster $B = (7 \pm 2)\,\mu$G and for the
magnetic field correlation length $\lambda_B = (3.0 \pm 0.5)$ kpc. If
the geometry uncertainties could be removed, the remaining statistical
errors would be an order of magnitude smaller. The difference between
these values and the ones found in an earlier analysis of the same dataset of
Hydra~A which yielded $B = 12 \mu$G and \hbox{$\lambda_B = 1$ kpc}
\citep{2003A&A...412..373V} is a result of the improved $RM$ map using
the \textit{Pacman} algorithm \citep{2004astro.ph..1214D,
2004astro.ph..1216V} and a better knowledge of the magnetic cluster
profile, i.e. here $\alpha_B \approx 0.5$ \citep[instead of $\alpha_B =
1.0$ in ][]{2003A&A...412..373V}. However, the magnetic field strength
found in Hydra A supports the trend of relatively large magnetic fields
derived for cooling flow clusters from RM measurements reported in the
literature.
The cluster magnetic field power spectrum of Hydra A follows a
Kolmogorov-like power spectrum over at least one order of
magnitude. However, from our analysis it seems that there is a dominant
scale of $\sim 3$ kpc at which the magnetic power is concentrated.
\begin{acknowledgements}
We would like to thank Greg Taylor for providing the polarisation data of
the radio source Hydra A and Klaus Dolag for the calculation of the $RM$
map using \textit{Pacman}. We also thank Greg Taylor and Marat Gilfanov
for useful comments on the manuscript.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
The last decade has been fruitful in terms of high energy astronomy. More than 80 gamma rays sources have been found to emit in the TeV range~\cite{catalog}, thanks to Imaging Atmospheric \v{C}erenkov Telescopes (IACTs) such as \textsc{Cangaroo}, \textsc{HEGRA}, H.E.S.S, \textsc{Magic} or \textsc{Veritas}/Whipple~\cite{IACTs}.
There have also been many discussions about the possibility of detecting muons produced by high energy gamma rays in underground, under-ice or underwater neutrino telescopes~\cite{gammarays}\cite{gammaraysok}\cite{Kudryavtsev}. In contrast to upward-going muons from neutrinos, which are the primary purpose of such a telescope, downward-going muons from gamma rays suffer from a high atmospheric muon background. Therefore the sensitivity of a neutrino telescope to gamma ray induced muons is considerably lower than that of IACTs. However, it has the advantage of monitoring all directions continuously. In addition to their physics potential, muons from gamma rays may also offer calibration benefits in terms of pointing accuracy and angular resolution.
Gamma ray showers are believed to be muon poor, but there are at least three processes by which a photon can produce muons\,: photoproduction, muon pair production and charm decay. The first process involves the (semi)leptonic decay of a pion or a kaon produced by the interaction of the photon with an atmospheric nucleus. Such muons are said to be conventional. The second process is self-explanatory, and its final particles are referred to as direct muons. The final case corresponds to the (semi)leptonic decay of a photoproduced charm meson, and secondary muons are called prompt muons. The prompt muon production was not taken into account in this work since it was not implemented in the software used for the \textsl{Monte Carlo} production. The charm production involves QCD processes that are not fully understood, but measurements at HERA have shown that at photon energies of several TeV charm production is significant~\cite{charm}.
Some calculations have estimated that the muon flux from gamma ray sources could be sufficient for neutrino telescopes to detect them. However, most attempts to estimate this muon flux rely on one-dimensional analytic models, and do not take into account the muon propagation from sea level to the detector and the detector sensitivity. A first attempt to estimate the underwater flux using a \textsl{Monte Carlo} simulation, without considering detection efficiency, has found gamma ray sources to be hardly detectable by a neutrino telescope~\cite{Kudryavtsev}.
In this paper, a full \textsl{Monte Carlo} simulation, including \v{C}erenkov light detection in realistic background conditions and track reconstruction, is presented, within the \textsc{Antares} framework. The expected number of events from the main sources of interest are presented.
\section{The \textsc{Antares} detector}
The Mediterranean sea currently houses the first operational undersea neutrino telescope, and also the largest neutrino telescope in the Northern hemisphere, namely \textsc{Antares} (Astronomy with a Neutrino Telescope and Abyss environmental RESearch)~\cite{ANTARES}. Its full configuration was completed in May 2008, though data have been taken with partial detector configurations since the first line was deployed, in March 2006.
\textsc{Antares}' main focus is to detect astrophysical neutrinos, thanks to the \v{C}erenkov light produced in water by muons resulting from the interactions of neutrinos with the Earth. Because of the atmospheric muon background, the \textsc{Antares} field of view is the Southern hemisphere, and in particular the Galactic center.
Installed at 40\,km off Toulon, in France (40$^{\circ}$50$^{\prime}$\,N, 6$^{\circ}$10$^{\prime}$\,E), \textsc{Antares} comprises 12 vertical detection lines positioned on the Mediterranean sea bed, at about 2500\,m depth. Each line hosts up to 25 floors of three 10$^{\prime\prime}$ photomultiplier tubes (PMTs) regularly distributed over 350\,m, the lowest floor being 100\,m above the sea bed. On a given floor, the three PMTs are placed symmetrically around the vertical string and look downwards at 45$^{\circ}$ in order to optimize the collection of \v{C}erenkov photons from upgoing muons rather than from downgoing muons~\cite{OMs}.
The lines are separated from each other by approximately 70\,m, and set as a regular octagon surrounding a square. An instrumented line intended to monitor the environmental conditions completes the apparatus.
The sea current induced displacements of the lines with regard to their vertical axis do not exceed a few meters, and are monitored in real time using compasses, tiltmeters and hydrophones hosted on each line. A position accuracy of about 10\,cm for each PMT is obtained.
An electro-optical cable transfers the electronic readout of the whole detector to shore, where digitized informations are processed in a computer farm.
\section{Monte Carlo simulation}
Extensive Air Showers have been simulated using Corsika v6.720~\cite{CORSIKA}. High energy hadronic interactions are modeled through QGSJET01~\cite{hadronic}, while electromagnetic interactions are processed through EGS4~\cite{EGS4}. QGSJET01 is found to be the most conservative model in comparison to SIBYLL, VENUS and QGSJET-II~\cite{hadronic}, regarding the number of photons creating high energy muons (using VENUS leads to a 7\% rise, assuming an E$_\gamma^{-1}$ flux, in the [1;100]\,TeV range). However, the effect of this increase at the depth of \textsc{Antares} still has to be investigated, the muon range being energy dependent. The energy range considered in the present work goes from 1\,TeV to 100\,TeV.
MUSIC has been used for the propagation of muons in water~\cite{MUSIC}. The \textsc{Antares} \textsl{Monte Carlo} simulation chain then allows for simulation of \v{C}erenkov light in the detector, taking into account, in particular, the water properties and the PMTs angular acceptance~\cite{MC}. It also allows for the addition of realistic bioluminescence background using real data streams.
In this work the data used for the bioluminescence background corresponds to golden running conditions\,: runs are selected where the baseline of raw counting rates and the fraction of bursts\footnote{The burst fraction is defined as the ratio between the time when the counting rate is higher than 250\,kHz and the overall time.} are low (about 60\,kHz and less than 20\%, respectively).
Finally, the events are reconstructed using \textsc{Antares} standard reconstruction strategy. It has to be noticed that this strategy is optimized for upgoing events. The results presented here might thus be slightly enhanced using a dedicated strategy. On the other hand, the cut made on the reconstruction quality is very loose, so the effect of hardening the quality cut may compensate the effect of improving the reconstruction strategy.
\section{Sources of interest}
In order to have a reasonable probability to reach the depth of the \textsc{Antares} detector, a downgoing vertical muon must be more energetic than 1\,TeV at sea level\,: the muon probability to survive to a 2200\,m depth in water is less than 70\% for a 1\,TeV muon (13\% at 700\,GeV). Hence only TeV gamma ray sources may be seen by \textsc{Antares}. More than 80 gamma ray sources have been detected in the TeV range by IACTs~\cite{catalog}. However, not all of them are good candidates for \textsc{Antares}\,: most of them are located in the galactic plane, which is not in \textsc{Antares} field of view\footnote{The gamma rays field of view being the Northern hemisphere, as opposed to the neutrinos field of view.}. Moreover, weak and/or soft fluxes are not likely to produce enough muons.
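The $\sim$TeV threshold can be made plausible with a continuous-slowing-down estimate: with the usual parametrisation $dE/dx = a + bE$ of muon energy losses in water, the mean muon range is $R = (1/b)\ln(1 + bE/a)$. The values of $a$ and $b$ in the Python sketch below are textbook-level approximations assumed here for illustration only, and range fluctuations are neglected.

```python
import math

def muon_range_mwe(E_GeV, a=0.26, b=3.6e-4):
    """Continuous-slowing-down muon range in metres of water equivalent,
    R = (1/b) ln(1 + b E / a), from dE/dx = a + b E
    (a in GeV/mwe, b in 1/mwe; illustrative values, fluctuations ignored)."""
    return math.log(1.0 + b * E_GeV / a) / b
```

This gives a mean range of roughly 2.4 km of water for a 1\,TeV muon but only a few hundred metres at 100\,GeV, qualitatively consistent with the survival probabilities quoted above.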
Fortunately, several of the most powerful sources are visible by \textsc{Antares}, including the so-called ``standard candle'', the Crab pulsar. Characteristics of the most interesting candidates in terms of fluxes and visibility are summarized in table~\ref{tab:sources}. In addition to the Crab, three extragalactic sources have been selected.
Though most of these sources are variable or flaring sources, they are known to have long periods of high activity, which make them more promising over a long period than most steady sources~\cite{1ES}\cite{Mkn421}\cite{Mkn501}.
\begin{table}[!h]
\centering
\begin{tabular}{lccc}\hline
source & visibility & mean zenith & type \\\hline\hline
Crab & 62\% & 51.7 & PWN \\\hline
1ES 1959+650 & 100\% & 49.7 & HBL \\\hline
Mkn 501 & 78\% & 49.4 & HBL \\\hline
Mkn 421 & 76\% & 49.2 & HBL \\\hline
\end{tabular}
\caption{\textsl{Characteristics of \textsc{Antares} best gamma ray sources. HBL stands for High frequency peaked BL Lac object, and PWN for Pulsar Wind Nebula.}}
\label{tab:sources}
\end{table}
\section{Simulation results}
\begin{table*}[!th]
\centering
\begin{tabular}{lcccccc}\hline
source & $\mathrm{F_{\gamma}^{atm}}$ & $\mathrm{F_{\gamma}^{sea}(\times10^{-3})}$ & $\mathrm{N^{det}_{5{\scriptscriptstyle +}}}$& $\mathrm{N_{10{\scriptscriptstyle +}}^{det}}$& $\mathrm{N^{reco}}$\\[0.1cm]\hline\hline
Crab & 5-8 & 0.2-0.8 & 30-70 & 20-45 & 1-4 \\
& & \emph{0.4-1} & \emph{20-50} & \emph{15-35} & \emph{1-3} \\\hline
1ES 1959+650 & 0.8-30 & 0.1-2 & 3-150 & 2-100 & 0.2-8 \\
& & \emph{0.1-2} & \emph{2-100} & \emph{1-70} & \emph{0.1-5} \\\hline
Mkn 421 & 1.5-45 & 0.1-4 & 5-330 & 3-230 & 0.2-20 \\
& & \emph{0.1-5} & \emph{2-260} & \emph{2-190} & \emph{0.1-15} \\\hline
Mkn 501 & 1.5-40 & 0.1-15 & 6-1350 & 4-950 & 0.3-90 \\
& & \emph{0.1-15} & \emph{3-1200} & \emph{2-880} & \emph{0.1-80} \\\hline
\end{tabular}
\caption{\textsl{Number of photons/muons produced by several gamma ray sources at different levels, during one year, assuming a 100\% visibility, for primaries in the [1;100]\,TeV energy range. When relevant, straight font corresponds to a 10 degrees zenith angle, while italic stands for a 40 degrees zenith angle. Fluxes of photons at the top of the atmosphere ($\mathrm{F_{\gamma}^{atm}}$) and at sea level ($\mathrm{F_{\gamma}^{sea}}$) are expressed in m$^{-2}$. $\mathrm{N^{det}_{X{\scriptscriptstyle +}}}$ is the number of photons which produce more than $X$ hits on the detector PMTs, and $\mathrm{N^{reco}}$ corresponds to the number of reconstructed events in realistic bioluminescence conditions.}}
\label{tab:results}
\end{table*}
The number of detected and reconstructed events depends on several parameters, such as the precise level of background, the trigger strategy, the reconstruction strategy and the source flux parametrization. Therefore only range estimates are given. They are reported in table~\ref{tab:results}, assuming a 100\% visibility over one year. The most extreme parametrizations have been omitted.
It is found that only a few photons can be expected to be seen by \textsc{Antares}\,: less than ten events per year are reconstructed for the Crab in realistic conditions, though a few tens produce hits on the detector.
In comparison, a rough estimate on data with similar bioluminescence conditions gives $1.1\times10^5$ (resp. $4.1\times10^4$) reconstructed background events (atmospheric muons) within a one degree cone of the 10 degrees zenith angle direction (resp. 40 degrees), and $2.1\times10^6$ (resp. $7.3\times10^5$) background events within a 5 degrees cone.
It thus seems unreasonable to expect \textsc{Antares} to extract any gamma signal from the background under these conditions for any known flux. Though Markarian 501 may seem promising, the upper limit actually corresponds to high-state flux parametrizations on day-scale variations~\cite{Mkn501}. A more precise selection of the fluxes and a study of the significance of the expected number of muons are still to be done.
However, these estimates are not as bad as they seem. First, the simulation is conservative in terms of photoproduction cross-section~\cite{sigmaph} and does not take muon production from charm decay into account. In addition, background discrimination has not yet been investigated. In particular, the muon poorness of gamma ray showers may help to reduce the atmospheric background\,: by rejecting multimuon events, one can improve both the signal to noise ratio and the angular resolution. If achievable, the muon pair tagging may also improve the background rejection. Moreover, a dedicated reconstruction strategy could increase the number of detected photons. Finally, galactic sources such as the Crab are not subject to the opacity of the universe above 100\,TeV, and the extension of their spectra to higher energy could lead to reasonable numbers of detectable photons. Short and powerful bursts are not to be excluded either, the associated background being in such cases almost negligible.
\section{Conclusion}
A complete \textsl{Monte Carlo} study has been carried out in order to estimate \textsc{Antares}' ability to detect downgoing muons from gamma-ray sources.
It has been found that \textsc{Antares} is unlikely to detect any of the currently known sources, unless they show some unexpected behaviour. However, the conservative estimates computed in this work show that gamma-ray astronomy is not completely out of reach of underwater neutrino telescopes, at least for the next generation of detectors.
This study will be refined and extended to the km$^3$-scale successor of \textsc{Antares}, namely KM3NeT, which is currently being designed~\cite{KM3NET}.
\section*{Introduction}
Let $X=(M,J)$ be a compact complex manifold, regarded as a differentiable manifold together with an integrable almost complex structure. Then it is natural to ask which other complex structures can be obtained by deforming the given complex structure $J$. In general this question is difficult to answer, but at least for small deformations there is a universal theory due to Kodaira, Spencer and Kuranishi \cite{kod-sp58, kuranishi62}.
Nevertheless, one rarely has a detailed understanding of all small deformations of a given manifold. One class of manifolds where one can hope for such are so-called nilmanifolds with left-invariant complex structure, i.e., compact quotients of real nilpotent Lie-groups equipped with a left-invariant complex structure.
Nilmanifolds with left-invariant complex structure provide an important source for examples in complex
differential geometry. Among these are the so-called Kodaira-Thurston manifolds, historically the first examples known to admit both a complex structure and a symplectic structure but no K\"ahler structure. In fact, a nilmanifold $M$ admits a K\"ahler structure if and only if
it is a complex torus \cite{ben-gor88,hasegawa89}.
Nilmanifolds will be described by a triple $(\lg, J, \Gamma)$ where $\lg$ is the nilpotent Lie-algebra associated to a simply connected nilpotent Lie-group $G$, $J$ is an integrable complex structure on $\lg$ (see Section \ref{basicdefin}) and $\Gamma\subset G$ is a (cocompact) lattice. The datum of either $\lg$ or $\Gamma$ (considered as an abstract group) determines $G$ up to unique isomorphism.
The general philosophy is that the geometry of the compact, complex manifold $M_J=(\Gamma\backslash G,J)$ should be completely determined by the linear algebra of $\lg$, $J$ and the $\IQ$-subalgebra generated by $\log \Gamma\subset \lg$.
In order to control small deformations using Kuranishi's theory we have to get a good grip on the Dolbeault-cohomology of the holomorphic tangent bundle.
In Section \ref{LDC} we set up a Lie-algebra Dolbeault-cohomology with values in integrable modules (Definition \ref{definintegrable}) and prove an analogue of Serre-Duality in this context. Since it is known that for nilmanifolds the usual Dolbeault-cohomology $H^{p,q}(M)=H^q(M, \Omega^p_M)$ can (nearly always) be calculated using invariant forms \cite{con-fin01, cfgu00}, we can identify the cohomology of the tangent bundle with the cohomology of the complex
\[ 0\to \einsnull\lg \overset{\delbar}{\to} \nulleins{\lg^*}\tensor \einsnull\lg\overset{\delbar}{\to} \Lambda^2\nulleins{\lg^*}\tensor \einsnull\lg\overset{\delbar}{\to} \dots \]
as explained in Section \ref{liedolbeault}.
The explicit description of the cohomology of the tangent bundle will then be used in Section \ref{smalldefo} to prove our main result.
\begin{custom}[Theorem \ref{invariantdeformation}]
Let $M_J$, given by $(\lg, J, \Gamma)$, be a nilmanifold with left-invariant complex structure such that
\begin{equation}\label{cohomcond}
\iota:H^{1,q}((\lg,J),\IC)\to H^{1,q}(M_J) \text{ is an isomorphism for all $q$}.\tag{$\ast$}
\end{equation}
Then all small deformations of the complex structure $J$ are also left-invariant complex structures. More precisely, the Kuranishi family contains only left-invariant complex structures.
\end{custom}
This generalises the analogous result for abelian complex structures due to Console, Fino and Poon \cite{con-fin-poon06} (see also \cite{mpps06}).
Deformations in the large will be studied in \cite{rollenske08d}.
Since no examples are known for which \refb{cohomcond} fails, it is widely believed that the following question has a positive answer:
\begin{custom}[Question 1]
Does \refb{cohomcond} hold for every left-invariant complex structure on a nilmanifold?
\end{custom}
We hope that Lie-algebra Dolbeault-cohomology theory will turn out to be useful also in other contexts, for example in the study of the differential graded algebras arising from nilpotent Lie-algebras with a complex structure (see e.g. \cite{0708.3442v2, math.DG/0610768}).
\subsubsection*{Acknowledgements.} This work is part of my PhD-thesis \cite{rollenske07}. I would like to express my gratitude to my adviser Fabrizio Catanese for suggesting this research, constant encouragement and several useful and critical remarks. Simon Salamon gave valuable references to the literature and helped to improve the presentation at several points. An invaluable bibliographic hint due to Oliver Goertsches opened a new perspective on the problem. Several suggestions by the referee helped to improve the presentation of the results.
During the revision of the paper I was visiting Imperial College London supported by a DFG Forschungsstipendium.
\section{Lie-algebra Dolbeault-cohomology}\label{LDC}
The aim of this section is to set up a Dolbeault-cohomology theory for modules over Lie-algebras with complex structure and prove Serre-duality in this context. Our main application is the calculation of the cohomology groups of the tangent sheaf for nilmanifolds with a left-invariant complex structure in Section \ref{smalldefo}, but perhaps the notions introduced are of independent interest.
After recalling the basic definitions we define the notion of (anti-)integrable module in Section \ref{intreps} and derive their elementary properties, which are interpreted in geometric terms in Section \ref{mod-VB}. Section \ref{liedolbeault} is again devoted to algebra, where we set up our machinery of Lie-algebra Dolbeault-cohomology. The geometric implications of this theory will be explained in Section \ref{invariantcohomology}.
\subsection{Lie-algebras with a complex structure}\label{basicdefin}
Let $\lg$ be a finite dimensional real Lie-algebra and $J$ an almost complex structure on the underlying real vector space, i.e., $J$ is an endomorphism of $\lg$ such that $J^2=-Id_\lg$; setting $ix=Jx$ for $x\in \lg$ this makes $\lg$ into a complex vector space.
We can decompose the complexified Lie-algebra $\lg_\IC={\gothg}^{1,0}\oplus \gothg^{0,1}$ into the $\pm i$-eigenspaces of the $\IC$-linear extension of $J$ and every decomposition $\lg_\IC=U\oplus \bar U$ gives rise to a unique almost complex structure $J$ such that $\einsnull{\lg}=U$.
Usually we will use small letters $x,y,\dots$ for elements of $\lg$ and capital letters $X,Y,\dots$ for elements in $\einsnull\lg$. Elements in $\nulleins\lg$ will be denoted by $\bar X,\bar Y,\dots$ using complex conjugation; we will use the same symbol for a linear map and its complexification.
The exterior algebra of the dual vector space $\lg^*$ decomposes as
\[\Lambda^k\lg^*_\IC=\bigoplus_{p+q=k}\Lambda^p{\gothg^*}^{1,0}\tensor \Lambda^q{\gothg^*}^{0,1}=\bigoplus_{p+q=k}\Lambda^{p,q}{\gothg^*}\]
and we have $\overline{\Lambda^{p,q}{\gothg^*}}=\Lambda^{q,p}{\gothg^*}$.
A general reference for the linear algebra coming with a complex structure is \cite{Huybrechts} (Section 1.2).
\begin{defin}\label{defintegrable}
An almost complex structure $J$ on a real Lie-algebra $\lg$ is said to be \defobj{integrable} if the Nijenhuis condition
\begin{equation}\label{nijenhuis}
[x,y]-[Jx,Jy]+J[Jx,y]+J[x,Jy]=0
\end{equation}
holds for all $x,y\in \lg$ and in this case we call the pair $(\lg , J) $ a \defobj{Lie-algebra with complex structure}.
\end{defin}
Hence by a complex structure on a Lie-algebra we will always mean an integrable one. Otherwise we will speak of almost complex structures. We will mainly be concerned with nilpotent Lie-algebras.
\begin{rem}
\begin{enumerate}
\item The real Lie-algebra $\lg$ has the structure of a complex Lie-algebra induced by $J$ if and only if $J[x,y]=[Jx,y]$ holds for all $x,y\in\lg $ and $J$ is then automatically integrable in the above sense.
\item One shows that $J$ is integrable if and only if $\lg^{1,0}$ is a subalgebra of $\lg_\IC$ with the induced bracket.
\item If $G$ is a real Lie-group with Lie-algebra $\lg$ then giving a left-invariant almost complex structure on $G$ is equivalent to giving an almost complex structure $J$ on $\lg$ and $J$ is integrable if and only if it is integrable as an almost complex structure on $G$. It then induces a complex structure on $G$ by the Newlander-Nirenberg theorem (\cite{Kob-NumII}, p.145) and $G$ becomes a complex manifold. The elements of $G$ act holomorphically by multiplication on the left but $G$ is not a complex Lie-group in general.
\end{enumerate}
\end{rem}
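To make Definition \ref{defintegrable} and item (ii) of the remark concrete, here is a sketch of a standard example from the literature (the Lie-algebra underlying the Iwasawa manifold; it is our illustration, not part of the original text):

```latex
% The 6-dimensional real Lie-algebra \lg of the complex Heisenberg group.
% Pick a complex 3-dimensional subspace U \subset \lg_\IC spanned by
% X_1, X_2, X_3, with the only non-trivial brackets
\[
  [X_1, X_2] = X_3, \qquad [\bar X_1, \bar X_2] = \bar X_3,
\]
% and all mixed brackets [X_i, \bar X_j] = 0. Since [U,U] \subset U,
% the subspace U = \einsnull\lg is a subalgebra of \lg_\IC, so the
% almost complex structure J with \einsnull\lg = U is integrable.
% In the dual basis \omega^1, \omega^2, \omega^3 of \einsnull{\lg^*}
% the structure equations become
\[
  d\omega^1 = d\omega^2 = 0, \qquad d\omega^3 = -\,\omega^1\wedge\omega^2 .
\]
```

Verifying integrability via closure of $\einsnull\lg$ under the bracket is usually much quicker than checking the Nijenhuis condition \refb{nijenhuis} on a real basis.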
\subsection{Integrable representations and modules}\label{intreps}
For the whole section let $(\lg, J) $ be a Lie-algebra with complex structure.
A left $\lg$-module structure on a vector space $E$ is given by a bilinear map
\[\lg\times E \to E \qquad (x,v)\mapsto x\cdot v\]
such that $[x,y]\cdot v=x\cdot y\cdot v-y\cdot x\cdot v$. Note that this induces a map $\lg \to \End E$, a representation of $\lg $ on $E$. If we want to stress the Lie-algebra structure on $\End E$ (induced by the structure of associative algebra by setting $[a,b]=ab-ba$) we use the notation $\mathfrak{gl}(E)$. A representation or a left module structure corresponds hence to a Lie-algebra homomorphism $\lg\to \mathfrak{gl}(E)$.
In the sequel we want to combine these notions with complex structures both on $\lg$ and $E$.
Let $(\lg,J)$ be a Lie-algebra with (integrable) complex structure and $E$ a real vector space with (almost) complex structure $I$.
\begin{defin}\label{definintegrable}
A representation $\rho: (\lg,J) \to \End E$ of $\lg$ on $E$ is said to be \defobj{integrable} if for all $x\in \lg$ the endomorphism of $E$ given by
\[ \mathcal N(x):=[I, (\rho\circ J)(x)]+I[\rho(x), I]\]
vanishes identically. In this case we say that $(E,I)$ is an \defobj{integrable $(\lg,J)$-module}. We say that $(E,I) $ is \defobj{anti-integrable} if $(E,-I) $, the complex conjugate module, is an integrable $\lg$-module.
A homomorphism of (anti-) integrable $\lg$-modules is a homomorphism of underlying $\lg$-modules which is $\IC$-linear with respect to the complex structures.
\end{defin}
This definition is modeled on the adjoint representation $ad:\lg \to \End\lg$ which is integrable if and only if
the Nijenhuis tensor \refb{nijenhuis} vanishes. This is a special case of the next result.
\begin{prop}\label{subrep}
Let $(E,I)$ be a vector space with complex structure and $\rho:\lg\to \End E$ a
representation. Then the following are equivalent:
\begin{enumerate}
\item $\rho$ is integrable.
\item For all $X\in \lg^{1,0}$ the map $\rho(X)$ has no component in $\Hom(E^{1,0},E^{0,1})$.
\item $E^{1,0}$ is an invariant subspace under the action of $\lg^{1,0}$.
\item $\rho\restr{\lg^{1,0}} $ induces a $\IC$-linear representation on $E^{1,0}$ by
restriction.
\end{enumerate}
\end{prop}
\pf
The restriction of $\rho$ to $\einsnull \lg$ is $\IC$-linear by definition since it is the complexification of the real
representation restricted to a complex subspace. Therefore condition (\textit{ii}) is equivalent to (\textit{iii}) and (\textit{iv}).
It remains to prove (\textit{i})$\iff$ (\textit{ii}).
Let $X\in \einsnull{\lg}$ and $V\in \einsnull E$. Using $JX=iX$ and $IV=iV$ we calculate
\begin{align*}
\mathcal N(X)V&= ([I, (\rho\circ J)(X)]+I[\rho(X), I])(V)\\
&= (iI\rho(X)-i\rho(X)I+ I\rho(X)I-I^2\rho(X))(V)\\
&= 2iI\rho(X)V+2\rho(X)V
\end{align*}
and see that
\[\mathcal N(X)V=0\iff I\rho(X)V=i\rho(X)V\iff\rho(X)V\in\einsnull E.\]
This proves (\textit{i})$\implies$ (\textit{ii}). Vice versa assume that (\textit{ii}) holds. We decompose the elements
in $E_\IC$ respectively in $\lg_\IC$ into their $(1,0)$ and $(0,1)$
parts. By the above calculation and its complex conjugate (the representation and hence the bracket are real and commute with complex conjugation) it remains to consider the \emph{mixed} case. We have for all
$X,V$ as above
\begin{align*}
\mathcal N(X)\bar V&= (iI\rho(X)-i\rho(X)I+ I\rho(X)I-I^2\rho(X))(\bar
V)\\
&= iI\rho(X)\bar V-\rho(X)\bar V - i I\rho(X)\bar V +\rho(X)\bar
V\\
&=0
\end{align*}
and hence $\rho$ is integrable.\qed
\begin{cor} \label{antidelbarrep}Let $(E,I)$ be a vector space with complex structure and $\rho:\lg\to \End E$ a
representation. Then the following are equivalent:
\begin{enumerate}
\item $\rho$ is anti-integrable.
\item For all $\bar X\in \lg^{0,1}$ the map $\rho(\bar X)$ has no component in $\Hom(E^{1,0},E^{0,1})$.
\item $E^{1,0}$ is an invariant subspace under the action of $\lg^{0,1}$.
\item $\rho\restr{\lg^{0,1}} $ induces a $\IC$-linear representation on $E^{1,0}$ by
restriction.
\end{enumerate}
\end{cor}
\pf Exchange $I$ by $-I$ in the proof of Proposition \ref{subrep}.\qed
\begin{prop} \label{delbarrep}
Let $\rho$ be an integrable representation on $(E,I)$. The bilinear map $\delta$ given by
\[ \delta:\nulleins\lg\times \einsnull E\overset{\rho}{\to} E_\IC\overset{pr}{\to} \einsnull E\quad (\bar X, V)\mapsto \einsnull{(\rho(\bar
X)V)}\]
induces a $\IC$-linear representation of $\nulleins \lg$ on $\einsnull E$.
\end{prop}
\pf
The map is clearly complex bilinear and it remains to prove the
compatibility with the bracket. Let $\bar X, \bar Y\in \nulleins \lg $
and $V\in \einsnull E$ be arbitrary. Note that $\rho(\bar Y)V= \delta (\bar Y,V)+\nulleins{(\rho(\bar Y)V)}$. Then
\begin{align*}
&\delta([\bar X, \bar Y],V)=\einsnull{(\rho([\bar X, \bar Y])V)}\\
=& \einsnull{\left(\rho(\bar X)\rho(\bar Y)V-\rho(\bar Y)\rho(\bar X)V\right)}\\
=& \einsnull{\left(\rho(\bar X)\left(\delta(\bar Y,V)+\nulleins{(\rho(\bar Y)V)}\right)
-\rho(\bar Y)\left(\delta(\bar X,V)+\nulleins{(\rho(\bar X)V)}\right)\right)}\\
=&\einsnull{\left(\rho(\bar X)\delta(\bar Y,V) -\rho(\bar Y)\delta(\bar
X,V)\right)}+\einsnull{(\underbrace{\rho(\bar X)\nulleins{(\rho(\bar Y)V)}-\rho(\bar Y)\nulleins{(\rho(\bar X)V)}}_\text{of type (0,1)})}\\
=&\delta(\bar X,\delta (\bar Y, V))-\delta(\bar Y,\delta (\bar X,V)).
\end{align*}
Here we used that the action of $\nulleins \lg$ maps $\nulleins E$ to $\nulleins E$ which is the complex conjugate of Proposition \ref{subrep} (iii). Hence $\delta$ induces a $\nulleins \lg $-module structure on
$\einsnull E$ as claimed.\qed
We want to clarify the relation between integrable and anti-integrable modules.
\begin{lem}
Let $(E,I)$ be an integrable $(\lg, J)$-module. Then the dual module with the induced $\lg$-module structure is anti-integrable.
\end{lem}
\pf If $x\in \lg$ and $\phi\in E^*$ then the induced module structure is given by $(x\cdot\phi)(v)=-\phi(xv)$ for $v\in E$. We have to show that for $\bar X\in \nulleins \lg$ and $\Phi\in\einsnull{ E^*}$ the map $(\bar X\cdot\Phi)$ annihilates $\nulleins E$. But if $\bar V$ is in $\nulleins E$ then by the above proposition $\bar X\bar V\in \nulleins E$ and $\Phi(\bar X\bar V)=0$.\qed
\begin{rem}\label{badcategory}
The above result seems unnatural only at first sight. If we consider $E$ as a left $\mathfrak{gl} (E)$-module in the canonical way then the complex structure $I\in \End E$ acts on the left. The dual vector space $E^*$ comes with a natural action of $\mathfrak{gl}( E)$ on the right:
\[ \phi \cdot A:= \phi\circ A\qquad\text{ for }A\in \mathfrak{gl}( E), \phi \in E^*\]
and the complex structure of $E^*$ is given exactly in this way $I^*\phi=\phi\circ I$.
In order to make $E^* $ a left $\mathfrak{gl}(E)$-module we have to change sign $A\cdot \phi := -\phi\circ A$; changing the sign of the complex structure corresponds to complex conjugation.
Integrable modules do not behave well under standard linear algebra operations like the tensor product. The reason is simply that we have to work over $\IC$ if we want to keep the complex structure on our vector spaces and over $\IR$ if we want to preserve the $\lg$-action, since this action is not $\IC$-linear in general.
\end{rem}
\subsection{Integrable modules and vector bundles}\label{mod-VB}
Now we want to relate the notion of integrable $\lg$-module to geometry. First we forget the complex structures and look at the differentiable situation:
Let $\lg$ be a real Lie-algebra and $G$ the corresponding simply connected Lie-group. Let $E$ be a (left) $\lg$-module and $\Gamma\subset G$ a co-compact discrete subgroup. Then $E \times G$ is the trivial bundle with an action of $G$ on the left, given by the representation of $\lg$ on $E$.
If we take the quotient by the action of the subgroup $\Gamma$ then the result is a homogeneous, flat vector bundle on $M=\Gamma\backslash G$.
Another way to look at this situation is the following: the representation of $\lg$ on $E$ gives rise to an abelian extension, the semi-direct product $E\rtimes \lg$. The vector space underlying this Lie-algebra is $E\oplus \lg $ with the Lie-algebra structure given by $[(v,x), (w, y)]= (x\cdot w-y\cdot v, [x,y])$ for $(v,x), (w, y)\in E\oplus \lg$.
Regarding the real vector space $E$ as a commutative Lie-group, we obtain the exact sequence of Lie-groups
\[ 0 \to E\to E\rtimes G\to G\to 1.\]
Now we take the complex structures $(\lg,J)$ and $(E,I)$ into account: using left-translations $J$ induces a left-invariant integrable almost complex structure on $G$ and the quotient $M_J:=(\Gamma\backslash G, J)$ is a compact complex manifold on which the normaliser of $\Gamma$ acts holomorphically on the left and the whole group acts differentiably on the right.
Given a $(\lg, J)$-module with a complex structure $(E,I)$ we would like to define the structure of a complex vector bundle on the differentiably trivial vector bundle $E\times G$, that descends to a flat homogeneous holomorphic vector bundle on $M_J$. This is possible if $(E,I)$ is integrable:
\begin{lem}
The $(\lg, J)$-module $(E,I)$ is integrable if and only if $I\times J$ induces a left-invariant complex structure on the Lie-group $E\rtimes G$.
\end{lem}
\pf
We have to calculate the Nijenhuis tensor \refb{nijenhuis} for $K=I\times J$ on the Lie-algebra $E\rtimes \lg$. Let
$(v,x), (w, y)$ be in $E\rtimes \lg$. Then
\begin{align*}
& [(v,x), (w, y)]-[K(v,x), K(w, y)]+K[K(v,x), (w, y)]+K[(v,x), K(w, y)]\\
=& (x\cdot w-y\cdot v, [x,y])- (Jx\cdot Iw-Jy\cdot Iv, [Jx,Jy]) \\
&+(I(Jx\cdot w-y\cdot Iv), J[Jx,y]) + (I(x\cdot Iw-Jy\cdot v), J[x,Jy]).
\end{align*}
The second component yields the Nijenhuis tensor for $J$ and hence vanishes since we assumed $J$ to be integrable.
Using the notation $\rho(x)$ for the element in $\End E$ corresponding to the action of $x$ on $E$, we expand $\kn(x)$ as
\begin{align*}
\kn(x)=&I[\rho(x), I]+[I, \rho(Jx)]\\
=&-{I}^2\rho(x) +I\circ\rho(x)\circ I+[I, \rho(Jx)]\\
=&\rho(x)-\rho(Jx)\circ I+I\circ\rho(Jx)+I\circ\rho(x)\circ I,
\end{align*}
and so the first component becomes
\begin{align*}
&x\cdot w-y\cdot v- Jx\cdot Iw+Jy\cdot Iv \\
&\qquad\qquad+I(Jx\cdot w-y\cdot Iv)+I(x\cdot Iw-Jy\cdot v)\\
=&(\rho(x)-\rho(Jx)\circ I+I\circ\rho(Jx)+I\circ\rho(x)\circ I)w\\
&\qquad\qquad-(\rho(y)-\rho(Jy)\circ I+I\circ\rho(Jy)+I\circ\rho(y)\circ I)v\\
=&\kn(x)w-\kn(y)v.
\end{align*}
Setting $v=0$ we see that $K$ is integrable if and only if $(E,I)$ is an integrable $(\lg, J)$-module.\qed
\begin{rem}
\begin{enumerate}
\item Note that the left-invariance of a complex structure is not preserved by standard vector bundle operations for essentially the same reason as explained in Remark \ref{badcategory}.
\item Even if a vector bundle $E$ is equipped with an integrable left-invariant structure, left-invariant sections are not necessarily holomorphic. In fact, the action of each $g\in G$ on $E$ is holomorphic but there is no natural way to speak about \emph{the action varying holomorphically} since $G$ is not a complex Lie-group in general.
\end{enumerate}
\end{rem}
\subsection{Lie-algebra Dolbeault-cohomology}\label{liedolbeault}
In this section we want to define a cohomology theory for Lie-algebras with a complex structure with values in a finite-dimensional integrable module. In the notation we will often suppress the complex structures.
Recall that the cohomology groups of a Lie-algebra $\lg$ with values in a $\lg$-module $E$ are defined as the right derived functors of the functor of invariants (\cite{Weibel}, Chapter 7)
\[E\mapsto E^\lg=\{m\in E\mid x\cdot m=0 \text{ for all } x \in \lg\}.\]
The cohomology groups can be calculated using the so-called Chevalley complex for $E$:
\[ 0\to E\overset{d_0}{\to} \lg^*\tensor E\overset{d_1}{\to} \Lambda^2 \lg^* \tensor E\overset{d_2}{\to} \dots {\to} \Lambda^{\dim \lg} \lg^*\tensor E\to 0\]
with differential given by
\begin{multline}\label{ch-diff}
(d_k\alpha)(x_1, \dots , x_{k+1}):=\sum_{i=1}^{k+1} (-1)^{i+1} x_i (\alpha(x_1, \dots ,\hat x_i, \dots , x_{k+1}))\\
+\sum_{1\leq i <j\leq k+1} (-1)^{i+j} \alpha([x_i,x_j], x_1, \dots, \hat x_i, \dots, \hat x_j,\dots , x_{k+1}).
\end{multline}
It was originally introduced by Chevalley and Eilenberg \cite{Knapp,che-eil48} and its elements can be interpreted as left-invariant differential forms in the geometric context.
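As a sanity check, in low degree the differential \refb{ch-diff} reduces to familiar formulas (a routine expansion which we spell out here; it is not part of the original text):

```latex
% For k = 0 and k = 1 the Chevalley differential specialises to
\begin{align*}
  (d_0 m)(x)        &= x\cdot m,\\
  (d_1 \alpha)(x,y) &= x\cdot\alpha(y) - y\cdot\alpha(x) - \alpha([x,y]),
\end{align*}
% so d_1 d_0 = 0 is precisely the module axiom
% [x,y]\cdot m = x\cdot y\cdot m - y\cdot x\cdot m,
% and d_{k+1} d_k = 0 in general follows from the Jacobi identity.
```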
Now let $(\lg,J)$ be a Lie-algebra with complex structure and let $(E,I)$ be a finite dimensional, integrable (resp. anti-integrable) $\lg$-module. By Proposition \ref{delbarrep} (resp. Corollary \ref{antidelbarrep}) we have a representation of $\nulleins\lg $ on $\einsnull E$ given by
$(\bar X, V)\mapsto \einsnull{(\rho(\bar X)V)}$ (resp. $(\bar X, V)\mapsto \rho(\bar X)V$). Together with the representation of $\nulleins \lg$ on $\Lambda^p\einsnull{ \lg^*}$ induced by the adjoint representation we obtain a $\nulleins\lg$-module structure on $\Lambda^p\einsnull{\lg^*}\tensor\einsnull E$.
\begin{defin}
Let $(\lg,J)$ be a Lie-algebra with complex structure and let $(E,I)$ be a finite dimensional, integrable (anti-integrable) $\lg$-module. Then we define
\begin{gather*}
H^{p,q}(\lg, E)=H^{p,q}_{\delbar}((\lg,J),( E,I)):= H^q(\nulleins{\lg}, \Lambda^p\einsnull{\lg^*}\tensor\einsnull E)
\end{gather*}
We call $ H^k_{\delbar}(\lg, E):=H^{0,k}(\lg, E)$ the $k$-th \defobj{Dolbeault-cohomology group of $\lg$ with values in $E$}.
\end{defin}
\begin{exam}\label{trivialmodule}
Consider for $(\lg, J)$ as above $\IC$ as the trivial $\lg_\IC$-module. Then the associated Chevalley differential on the exterior algebra $\Lambda^\bullet \lg^*_\IC$ decomposes into $d=\del+\delbar$ since $J$ is integrable and we can consider the double complex $(\Lambda^{p,q}\lg^*, \delbar, \del)$.
The adjoint action of $\lg$ on itself yields an anti-integrable $\lg$-module structure on $\lg^*$, hence a $\nulleins\lg$-module structure on $\Lambda^p\einsnull{\lg^*}$. It is now easy to see that the columns of the above double complex
\[0\to \Lambda^{p,0}\lg^*\to\Lambda^{p,1}\lg^*\to \dots\]
are the Chevalley complexes calculating $H^q(\nulleins\lg, \Lambda^{p,0}\lg^*)= H^{p,q}_{\delbar} (\lg, \IC)$.
\end{exam}
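For a concrete instance of this double complex, consider the Lie-algebra underlying the Iwasawa manifold, assuming the standard structure equations $d\omega^1=d\omega^2=0$ and $d\omega^3=-\omega^1\wedge\omega^2$ for a basis $\omega^1,\omega^2,\omega^3$ of $\einsnull{\lg^*}$ (a well-known computation added here for illustration, not taken from the text):

```latex
% d\omega^3 = -\omega^1\wedge\omega^2 is of type (2,0), so
% \delbar\omega^i = 0 for all i, while conjugating gives
\[
  \delbar\bar\omega^1 = \delbar\bar\omega^2 = 0, \qquad
  \delbar\bar\omega^3 = -\,\bar\omega^1\wedge\bar\omega^2 \neq 0 .
\]
% Hence the kernel of \delbar on (0,1)-forms is spanned by
% \bar\omega^1 and \bar\omega^2, there are no non-zero
% \delbar-exact (0,1)-forms, and therefore
\[
  \dim_\IC H^{0,1}_{\delbar}(\lg, \IC) = 2 .
\]
```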
Now we want to develop some kind of Hodge theory for our Dolbeault-cohomology which we model on the usual Hodge theory for holomorphic vector bundles as it can be found for example in the book of Huybrechts \cite{Huybrechts}.
Let $2n$ be the real dimension of $\lg$.
First of all we choose a euclidean structure $g=\langle-,-\rangle$ on the real vector space underlying $\lg$ which is compatible with the given complex structure $J$ in the sense that $\langle J-,J-\rangle=\langle-,-\rangle$. Let $vol$ be the volume form, i.e., the unique element in $\Lambda^{2n} \lg^*$ inducing the same orientation as $J$ and of length one in the induced metric on $\Lambda^\bullet \lg^*$. We define the Hodge-$\ast$-operator, which is an isometry on $\Lambda^\bullet\lg^*$, by
\[ \alpha\wedge \ast\beta = \langle\alpha, \beta\rangle vol \quad \text{for all } \alpha, \beta \in \Lambda^\bullet\lg^*.\]
On the complexified vector space $\lg_\IC$ we have a natural induced hermitian product $\langle-,-\rangle_\IC$ and a $\ast$-operator determined by
\[ \alpha\wedge \ast\bar\beta = \langle\alpha, \beta\rangle_\IC vol \quad \text{for all } \alpha, \beta \in \Lambda^\bullet\lg^*_\IC,\]
which maps $(p,q)$-forms to $(n-p, n-q)$-forms.
We now want to define a star operator also on $\Lambda^{\bullet,\bullet}\lg^*\tensor \einsnull E$. For this purpose we choose a euclidean product on $E$ compatible with the complex structure $I$, which induces a hermitian structure $h$ on $\einsnull E$. We consider $h$ as an $\IC$-antilinear isomorphism $h:\einsnull E\isom \einsnull{E^*}$. Then
\[ \bar\ast_{ E} :\Lambda^{p,q}\lg^*\tensor \einsnull E\to \Lambda^{n-p,n-q}\lg^* \tensor\einsnull{ E^*}\]
is defined by $\bar\ast_{ E}(\alpha\tensor s)= \overline{\ast\alpha}\tensor h(s)=\ast(\bar\alpha)\tensor h(s)$.
Let $(-,-)$ be the hermitian form on $\Lambda^{\bullet, \bullet}\lg^*\tensor \einsnull E$ induced by $g$ and $h$. Then $\bar\ast_{ E}$ is a $\IC$-antilinear isomorphism depending on our choice of $g$ and $h$ and the identity
\[ (\alpha, \beta)vol = \alpha\wedge\bar\ast_E\beta\]
holds for $\alpha, \beta\in \Lambda^{p,q}\lg^*\tensor \einsnull E$, where $\wedge$ denotes the exterior product on the $\Lambda^{\bullet, \bullet}\lg^*$-part combined with the evaluation map $\einsnull E\tensor \einsnull {E^*}\to \IC$ on the module part.
It is not difficult to verify that one has $\bar\ast_{ E^*}\circ\bar\ast_{ E}=(-1)^{p+q}$ on $\Lambda^{p,q}\lg^*\tensor \einsnull E$.
\begin{defin}\label{laplacedefin}
Let $(E,I)$ be an (anti-)integrable $(\lg, J)$-module. The operator $\delbar_E^*:\Lambda^{p,q}\lg^*\tensor \einsnull E\to\Lambda^{p,q-1}\lg^*\tensor \einsnull E$ is defined as
\[\delbar_E^*:= -\bar\ast_{E^*}\circ \delbar_{E^*} \circ \bar\ast_E.\]
Let $\Delta_E:=\delbar_E^*\delbar_E+\delbar_E\delbar_E^*$ be the \defobj{Laplace operator} on $\Lambda^{p,q}\lg^*\tensor \einsnull E$. We call an element $\alpha$ harmonic if $\Delta_E(\alpha)=0$ and denote by $\kh^{p,q}(\lg,E)$ the space of harmonic elements (where we omit $g$ and $h$ from the notation).
\end{defin}
Observe that $\bar\ast_E$ induces a $\IC$-antilinear isomorphism
\[\bar\ast_E : \kh^{p,q}(\lg,E)\isom \kh^{n-p,n-q}(\lg,E^*).\]
\begin{prop} If $H^{2n}(\lg_\IC, \IC)=\IC$, where $\IC$ is considered as the trivial $\lg$-module, then the operator $\delbar_E^*$ is adjoint to $\delbar_E$ with respect to the metric induced by $g$ and $h$. In this case $\Delta_E$ is self-adjoint.
\end{prop}
The condition on the cohomology plays the role of Stokes' theorem, as will be seen in the proof.
\pf The second assertion is a consequence of the first one which in turn is proved by the following calculation:
First of all note that, since $\Lambda^{2n}\lg^*_\IC\isom \IC$, the assumption $H^{2n}(\lg_\IC, \IC)=\IC$ implies that $d_{2n-1}=0$ in $\Lambda^\bullet \lg^* $, the Chevalley complex of the trivial module. Hence the same holds for $\delbar : \Lambda^{n, n-1}\lg^*\to \Lambda^{n,n}\lg^*$. For $\alpha \in \Lambda^{p,q}\lg^*\tensor \einsnull E$ and $\beta \in \Lambda^{p,q+1}\lg^*\tensor \einsnull E$ we have
\begin{align*}
(\alpha, \delbar^*_E\beta)vol &= -(\alpha, \bar\ast_{E^*}\circ \delbar_{E^*} \circ \bar\ast_E\beta)vol\\
&= -\alpha\wedge \bar\ast_E\bar\ast_{E^*}\delbar_{E^*}\bar\ast_E\beta\\
&= (-1)^{n-p+n-q-1} \alpha\wedge\delbar_{E^*}\bar\ast_E\beta\\
&= -\delbar(\alpha\wedge\bar\ast_E\beta)+\delbar_E\alpha\wedge\bar\ast_E\beta\\
&= \delbar_E\alpha\wedge\bar\ast_E\beta\\
&= (\delbar_E\alpha, \beta)vol.
\end{align*}
Here we used the identity
\[\delbar(\alpha\wedge\bar\ast_E\beta)=\delbar_E\alpha\wedge\bar\ast_E\beta+(-1)^{p+q} \alpha\wedge\delbar_{E^*}\bar\ast_E\beta\]
that follows from the Leibniz rule in the exterior algebra and the fact that the evaluation map $\einsnull E\tensor \einsnull E^*\to \IC$ is a map of $\nulleins \lg $-modules. \qed
\begin{rem}
We always have $H^{2n}(\lg_\IC , \Lambda^{2n}\lg_\IC)=\IC$ (see \cite{Weibel}, Exercise 7.7.2). Hence the assumption of the proposition holds if $\lg $ acts trivially on $\Lambda^{2n}\lg$, in particular if $\lg $ is nilpotent.
\end{rem}
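The last statement of the remark can be checked by a one-line computation (included here for the reader's convenience; it only uses the definition of the induced action on the top exterior power):

```latex
% For x in \lg and a basis e_1, ..., e_{2n} of \lg, the induced
% action on \Lambda^{2n}\lg is
\[
  x\cdot(e_1\wedge\dots\wedge e_{2n})
  = \sum_{i=1}^{2n} e_1\wedge\dots\wedge[x,e_i]\wedge\dots\wedge e_{2n}
  = \mathrm{tr}(ad\,x)\; e_1\wedge\dots\wedge e_{2n} .
\]
% If \lg is nilpotent, every ad(x) is nilpotent, so tr(ad x) = 0 and
% \lg acts trivially on \Lambda^{2n}\lg, whence H^{2n}(\lg_\IC,\IC)=\IC.
```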
Here are some standard consequences:
\begin{cor}
If $H^{2n}(\lg_\IC, \IC)=\IC$ then an element $\alpha \in \Lambda^{p,q}\lg^*\tensor \einsnull E$ is harmonic if and only if $\alpha$ is $\delbar_E$- and $\delbar_E^*$-closed.
\end{cor}
\pf Standard argument.\qed
\begin{cor}[Hodge-decomposition]
Let $(E,I)$ be an (anti-)integrable module over the Lie-algebra with complex structure $(\lg,J)$, both equipped with compatible euclidean products. If $H^{2n}(\lg_\IC, \IC)=\IC$ then there is an orthogonal decomposition
\[\Lambda^{p,q}\lg^*\tensor \einsnull E =\delbar_E( \Lambda^{p,q-1}\lg^*\tensor \einsnull E)\oplus \kh^{p,q}(\lg,E)\oplus
\delbar_E^*(\Lambda^{p,q+1}\lg^*\tensor \einsnull E).\]
\end{cor}
\pf Since everything is finite-dimensional this follows trivially from the above.\qed
\begin{cor}
If $H^{2n}(\lg_\IC, \IC)=\IC$ then the natural projection
\[\kh^{p,q}(\lg,E)\to H^{p,q}(\lg,E)\]
is bijective.
\end{cor}
\begin{theo}[Serre-Duality]
Let $(\lg, J)$ be a Lie-algebra with complex structure such that $H^{2n}(\lg_\IC, \IC)=\IC$ and $(E,I)$ an (anti-)integrable $\lg$-module. Then the pairing
\[H^{p,q}(\lg, E)\times H^{n-p,n-q}(\lg, E^*)\to \IC\cdot vol\isom \IC \quad (\alpha, \beta)\mapsto \alpha \wedge \beta\]
is well-defined and non-degenerate.
\end{theo}
\pf
Fix hermitian structures on $E$ and $\lg$ respectively. Then consider the pairing
\[\kh^{p,q}(\lg, E)\times \kh^{n-p,n-q}(\lg, E^*)\to \IC\cdot vol\isom \IC.\]
We claim that for any non-zero $ \alpha\in \kh^{p,q}(\lg, E)$ there exists an element $\beta \in \kh^{n-p, n-q}(\lg, E^*)$ such that $\alpha \wedge \beta \neq 0$. Indeed, choosing $\beta= \bar\ast_E\alpha$ we have
\[\alpha\wedge\beta=\alpha\wedge\bar\ast_E\alpha=(\alpha, \alpha)vol =\|\alpha\|^2 vol\neq 0.\]
This proves that the pairing is non-degenerate.\qed
\begin{cor}\label{serreduality} Let $(\lg, J)$ be a Lie-algebra with complex structure such that $H^{2n}(\lg_\IC, \IC)=\IC$.
For any (anti-)integrable $(\lg, J)$-module $(E,I)$ there exist natural isomorphisms
\begin{gather*}
H^{p,q}(\lg, E)\isom H^{n-p, n-q}(\lg, E^*)^*\\
\intertext{and if $\Lambda^n\lg^*$ is the trivial $\lg$-module}
H^q_{\delbar}(\lg, E)\isom H^{n-q}_{\delbar}(\lg, E^*)^*.
\end{gather*}
\end{cor}
\subsection{Cohomology with invariant forms}\label{invariantcohomology}
We are now going to translate our results on Lie-algebra Dolbeault-cohomology to the geometric situation.
Recall the situation considered in Section \ref{mod-VB}: let $(\lg, J)$ be a real Lie-algebra with complex structure of real dimension $2n$ and $(E,I)$ an integrable $(\lg,J)$-module. Let $G$ be the simply connected Lie-group associated to $\lg$, endowed with the left-invariant complex structure induced by $J$. Let $\Gamma$ be a uniform lattice in $G$ and consider the flat, homogeneous, holomorphic vector bundle $\ke$ on $M=\Gamma \backslash G$ constructed by taking the quotient of $E\times G$ by $\Gamma$ acting on the left.
Let $g$ be a euclidean structure on $\lg$ compatible with the complex structure $J$, such that $M$ has volume one with respect to the associated left-invariant metric on $M$. Choose also a euclidean structure on $E$ compatible with the complex structure $I$.
Let $\pi:G\to M$ be the projection. We say that a smooth section $s\in \ka^{p,q}(M,\ke)$ is invariant if $\pi^*s$ is invariant under the action of $G$ on the left, i.e., $l_g^*(\pi^*s)=\pi^*s$ for all $g\in G$. This makes sense since $\pi^*\ke=E\times G$ is trivial as a smooth vector bundle and in particular $l_g^*\pi^*\ke=\pi^*\ke$. A smooth section $s$ in the trivial bundle $E\times G$ is the pullback of a section of $\ke$ if and only if it is invariant under the action of $\Gamma$ on the left.
The relation between the usual Dolbeault theory for vector bundles on complex manifolds and our theory developed so far is summarised in the following
\begin{prop}
In the above situation $\Lambda^{p,q}\lg\tensor\einsnull E$ can be identified with the subset of invariant, smooth differential forms on $M$ with values in the holomorphic bundle $\ke$.
If in addition $H^{2n}(\lg_\IC, \IC)=\IC$ then the following holds:
\begin{enumerate}
\item The differential in the Chevalley complex as given in \refb{ch-diff} coincides with the usual differential restricted to invariant forms with values in $\ke$. In particular, if $E$ is the trivial module the decomposition $d=\del+\delbar$ on $\Lambda^{\bullet, \bullet}\lg^*$ coincides with the usual one on the complex manifold $M$.
\item The Chevalley complex associated to the $\nulleins\lg$-module structure on $\einsnull E$ is the subcomplex of invariant forms contained in the usual Dolbeault resolution of the holomorphic vector bundle $\ke$ by smooth differential forms with values in $\ke$.
\item The Hodge-$\ast$-operator defined on $\Lambda^\bullet \lg^*_\IC$ in Section \ref{liedolbeault} coincides with the usual Hodge-$\ast$-operator on the exterior algebra of smooth differential forms. The same holds true for the operator $\bar\ast_E$.
\item The operators $\delbar^*_E$ and $\Delta_E$ in Definition \ref{laplacedefin} are the restrictions of the corresponding operators on smooth differential forms. In particular we have an inclusion
\[\kh^{n-p,n-q}(\lg, E)\subset \kh^{n-p,n-q}(M, \ke)\]
where $\kh^{n-p,n-q}(M, \ke)$ are the harmonic $(n-p,n-q)$-forms with values in $\ke$ with respect to the chosen left-invariant hermitian structures.
\end{enumerate}
\end{prop}
\pf The first claim is clear by construction. The Lie bracket on $\lg$ is clearly the restriction of the usual Lie bracket on vector fields on $M$ and also the definition of the differential in \refb{ch-diff} coincides with the usual one for smooth differential forms (see e.g. \cite{Huybrechts}, p. 283). Since $\pi^*\ke$ is differentiably a trivial bundle the same holds for differential forms on $G$ with values in $\pi^*\ke$ and therefore also for sections of $\ke$ itself since we can check this locally. This proves (i) and (ii) using the identification of $\ke$ with $\einsnull E$.
Our reference for the Hodge theory of holomorphic vector bundles is \cite{Huybrechts} (ch. 4.1). Now, recall that we defined our operator $\bar \ast_E$ by the relation
\[ \alpha\wedge\bar\ast_E\beta = (\alpha, \beta)vol =(\alpha, \beta)\ast 1\]
for $\alpha, \beta\in \Lambda^{p,q}\lg\tensor \einsnull E$ which coincides with the definition for differential forms in $\ka^{p,q}(M,\ke)$ if we consider $\alpha$ and $\beta$ as invariant differential forms on $M$:
The hermitian metric on $\ka^{p,q}(M,\ke)$ is defined by $ (\alpha, \beta)_M=\int_M(\alpha, \beta)vol $ but if the forms are invariant we have
\[(\alpha, \beta)_M=\int_M(\alpha, \beta)vol=(\alpha, \beta)\int_M vol =(\alpha,\beta)\]
since we chose the invariant metric such that the volume of $M$ is one.
Therefore also $\bar\ast_\ke=\bar\ast_E$ and this concludes the proof since the Laplace operator can be described in terms of $\bar\ast_\ke$ and $\delbar$.\qed
\begin{cor}\label{isoduality}
In the above situation we have an inclusion
\[\iota_E:H^{p,q}(\lg,E)\to H^{p,q}(M,\ke)\]
induced by the inclusion on the level of harmonic differential forms. In particular if $\iota_E$ is an isomorphism then so is $\iota_{E^*}:H^{n-p,n-q}(\lg,E^*)\to H^{n-p,n-q}(M,\ke^*)$.
\end{cor}
\pf The first claim is an immediate consequence of (\textit{iv}) in the proposition while the second then follows for dimension reasons from Serre-Duality both on $M$ and for Lie-algebra Dolbeault-cohomology (Corollary \ref{serreduality}).\qed
We will apply this to the cohomology of nilmanifolds in the next section in order to study the space of infinitesimal deformations.
\section{Nilmanifolds and their small deformations}\label{smalldefo}
The aim of this section is to prove the main result of this paper, namely that small deformations of nilmanifolds with left-invariant complex structure carry a left-invariant complex structure under very mild conditions.
\subsection{Nilmanifolds with left-invariant complex structure }\label{set-up}
In this section we will present those aspects of the theory of nilmanifolds with left-invariant complex structure which we need to formulate and prove our theorem. A more detailed study of their geometry can be found in \cite{rollenske08d}.
In the following let $(\lg, J)$ be a nilpotent Lie-algebra with complex structure as in Definition \ref{defintegrable} and $G$ an associated simply-connected Lie-group. By left-translation $J$ defines an integrable almost complex structure on $G$.
Now assume that $\Gamma\subset G$ is a lattice, a discrete cocompact subgroup. A nilpotent Lie-group $G$ contains such a subgroup if and only if its Lie-algebra can be defined over $\IQ$ (see e.g. \cite{Cor-Green}, p. 204). Then the complex structure on $G$ descends to a complex structure on the compact manifold $M:=\Gamma\backslash G$.
\begin{defin}
A compact complex manifold $M_J:=(M,J)$ is called \defobj{nilmanifold with left-invariant complex structure} if there is a nilpotent Lie algebra with complex structure $(\lg,J)$ and a lattice $\Gamma$ in an associated simply-connected Lie-group such that $M_J\isom(\Gamma\backslash G, J)$.
\end{defin}
Since $\Gamma=\pi_1(M_J)$ determines $G$ and hence $\lg$ up to isomorphism (\cite{VinGorbShvart}, p.45, Corollary 2.6), by abuse of notation we will always identify $M_J$ with $(\Gamma\backslash G, J)$ and call it a nilmanifold with left-invariant complex structure of type $(\lg, \Gamma)$.
\begin{rem}
\begin{enumerate}
\item
A nilmanifold $M_J$ of type $(\lg, \Gamma)$ with left-invariant complex structure is K\"ah\-leri\-an if and only if $\lg$ is abelian and $M$ is a complex torus \cite{ben-gor88,hasegawa89}. It can be arbitrarily far from being K\"ahler in the sense that the Fr\"olicher spectral sequence may be arbitrarily non-degenerate \cite{rollenske07a}.
\item If $[\einsnull{\lg}, \nulleins{\lg}]=0$ then $J$ is called complex parallelisable (bi-invariant), $(G,J)$ is a complex Lie group and $M_J$ is complex parallelisable. They have interesting arithmetic properties \cite{winkelmann98}. The other extremal case are abelian complex structures defined by $[\einsnull{\lg}, \einsnull{\lg}]=0$ studied for example in \cite{mpps06, con-fin-poon06}.
\item All (iterated) principal holomorphic torus bundles can be described as nilmanifolds with left-invariant complex structure, but not every nilmanifold with left-invariant complex structure admits such a geometric description.
\item
For nilpotent Lie-groups the exponential map $\exp: \lg \to G$ is a diffeomorphism and the preimage of $\Gamma$ in $\lg$ generates a rational subalgebra $\lg_\IQ\subset \lg$ (\cite{Cor-Green}, p.204). The complex structure $J$ is called $\Gamma$-rational if it maps $\lg_\IQ$ to itself; in this case one has more control over the geometry of $M_J$ \cite{con-fin01}.
\end{enumerate}
\end{rem}
\subsection{A parameter-space for left-invariant complex structures}
We want to parametrise the space of left-invariant complex structures on a given nilpotent Lie-algebra $\lg$. Let $2n$ be the real dimension of $\lg$.
A complex structure $J$ is uniquely determined by specifying the $(0,1)$-subspace $\bar V\subset\lg_\IC$ and the integrability condition can be expressed as $[\bar V,\bar V]\subset \bar V$. Hence we write (like in \cite{salamon01})
\[\kc(\lg):= \{\bar V\in \IG(n , \lg_\IC)\mid V\cap \bar V=0, [\bar V,\bar V]\subset \bar V\}\]
where $\IG(n , \lg_\IC)$ is the Grassmannian of $n$-dimensional complex subspaces of $\lg_\IC$.
Recall that its tangent space at a point $\bar V$ is \[T_{\bar V} \IG(n , \lg_\IC)=\Hom_\IC(\bar V, \lg_\IC/\bar V)\isom {\bar V}^*\tensor V\isom \nulleins \lg^*\tensor \einsnull \lg\]
if we endow $\lg$ with the complex structure $J_{\bar V}$ induced by $\bar V$.
In general it is a difficult question to decide if $\kc(\lg)$ is non-empty for a given Lie-algebra $\lg$. For the next paragraph we will assume this to be the case.
Now fix a simply connected nilpotent Lie-group $G$ with Lie-algebra $\lg$. We want to describe a family of complex manifolds $\pi:\km(\lg)\to \kc(\lg)$ such that over every point $\bar V\in \kc(\lg)$ the fibre $\inverse\pi(\bar V)$ is the manifold $G$ with the left-invariant complex structure $J_{\bar V}$.
Let $\bar \kv\subset \lg_\IC\times\tilde\kc(\lg)$ be the restriction of the tautological bundle on the Grassmannian to the open subset
\[\tilde\kc(\lg):= \{\bar V\in \IG(n , \lg_\IC)\mid V\cap \bar V=0\}\]
and consider the manifold \[\tilde\km(\lg):=G\times \tilde\kc(\lg).\]
The group $G$ acts on the left of $\tilde\km(\lg)$ by $l_g(h,\bar V)= (gh, \bar V)$ and we can define the subbundle $\nulleins T\tilde\km(\lg)\subset T\tilde\km(\lg)_\IC$ by
\[\nulleins T\tilde\km(\lg)\restr{\{g\}\times \tilde\kc(\lg)}:={l_g}_*\bar\kv\oplus \nulleins T\tilde\kc(\lg).\]
This subbundle gives an almost complex structure on $\tilde\km(\lg)$ which is integrable over $\kc(\lg)$. So we obtain our desired family by taking the pullback
\[\km(\lg) :=\tilde\km (\lg)\times_{\tilde\kc(\lg)}\kc(\lg).\]
If $\Gamma\subset G$ is a lattice then we can take the quotient of $\km(\lg)$ by the action of $\Gamma$ on the left and we obtain a family $\km(\lg, \Gamma)\to \kc(\lg)$ of compact, complex manifolds such that the fibre over $\bar V\in \kc(\lg)$ is the nilmanifold $M_{\bar V}=( \Gamma\backslash G, J_{\bar V})$. Summarising we have shown the following:
\begin{prop}
Every nilmanifold with left-invariant complex structure $M_{J}$ with fundamental group $\pi_1(M)\isom \Gamma$ is isomorphic to a nilmanifold in the family $\km(\lg, \Gamma)$.
\end{prop}
\pf We only have to observe that by \cite{VinGorbShvart}, p.45, Corollary 2.6 the lattice $\pi_1(M)$ determines $\lg$ up to canonical isomorphism, hence $M_{J}$ is biholomorphic to a fibre in the family $\km(\lg, \Gamma)\to \kc(\lg)$.\qed
There are many natural questions concerning the space $\kc(\lg)$, for example when it is non-empty, smooth or versal, and what its connected components are. Catanese and Frediani studied in \cite{catanese04, cat-fred06} the subfamily consisting of principal holomorphic torus bundles over a torus with fixed dimension of fibre and base, the so called \emph{Appell-Humbert family}, and proved that in some 3-dimensional cases it is a connected component of the Teich\-m\"ul\-ler-Space. The family containing the Iwasawa manifolds was studied by Ketsetzis and Salamon in \cite{ket-sal04}.
\subsection{Kuranishi theory and small deformations}
We will now use deformation theory in the spirit of Kodaira-Spencer \cite{kod-sp58} and Kuranishi \cite{kuranishi62} to study small deformations of left-invariant complex structures on nilmanifolds.
A deformation of a given compact complex manifold $X$ is a flat proper map $\pi:\kx\to \kb$ of (connected) complex spaces, such that all the fibres are smooth manifolds, together with an isomorphism $X\isom \kx_0=\inverse\pi(0)$ for a point $0\in \kb$. If $\kb$ is smooth then $\pi$ is just a holomorphic submersion. Kodaira and Spencer showed that first order deformations correspond to elements in $H^1(X, \Theta_X)$ where $\Theta_X$ is the sheaf of holomorphic tangent vectors.
A key result is now the theorem of Kuranishi which, for a given compact complex manifold $X$, guarantees the existence of a locally complete space of deformations $\kx\to\mathrm{Kur}(X)$ which is versal at the point corresponding to $X$. In other words, for every deformation $\ky\to \kb$ of $X$ there is a small neighbourhood $\ku$ of $0$ in $\kb$ yielding a diagram
\[\xymatrix{ \ky\restr{\ku}\isom f^*\kx \ar[d]\ar[r] & \kx \ar[d]\\
\ku\ar[r]^f&\mathrm{Kur}(X),}\]
and in addition the differential of $f$ at $0$ is unique.
The Kuranishi family $\mathrm{Kur}(X)$ hence parametrises all sufficiently small deformations of $X$. In general the map $f$ will not be unique which is roughly due to the existence of automorphisms.
In order to study small deformations first of all we need a good description of the cohomology of the tangent bundle.
By a theorem of Nomizu \cite{nomizu54} the de Rham cohomology of a nilmanifold can be calculated using invariant differential forms and is isomorphic to the cohomology of the complex
\[0\to \lg^*\overset{d}{\to}\Lambda^2\lg^* \overset{d}{\to}\Lambda^3\lg^* \overset{d}{\to}\dots\]
The question whether the Dolbeault-cohomology of compact nilmanifolds with left-invariant complex structure can be calculated using invariant differential forms has been addressed by Console and Fino in \cite{con-fin01} and by Cordero, Fernandez, Gray and Ugarte in \cite{cfgu00}. We restate their results using the notation from Sections \ref{set-up} and \ref{liedolbeault}:
\begin{theo}\label{citedolbeault}
Let $\Gamma\backslash G=M$ be a real nilmanifold with Lie-algebra $\lg$. Then there is a dense open subset $U$ of the space $\kc(\lg)$ of all left-invariant complex structures on $M$ such that for all $J\in U$ we have an isomorphism
\[\iota_J:H^{p,q}((\lg,J),\IC)\to H^{p,q}(M_J),\]
on the associated nilmanifold with left-invariant complex structure $M_J$,
where we consider $\IC$ as the trivial $\lg_\IC$-module (\cite{con-fin01}, Theorem A).
In addition this holds true in the following cases:
\begin{itemize}
\item The complex structure $J$ is $\Gamma$-rational. (\cite{con-fin01}, Theorem B).
\item The complex structure $J$ is abelian \cite{con-fin01}.
\item The complex structure $J$ is bi-invariant, $G$ is a complex Lie-group and $M_J$ is complex parallelisable \cite{sakane76, con-fin01}.
\item The complex manifold $M_J$ has the structure of an iterated principal holomorphic torus bundle \cite{cfgu00}.
\end{itemize}
\end{theo}
The idea of the proof is the following: as long as $M_J$ can be given the structure of an iterated bundle with good control over the cohomology of base and fibre, one can use the Borel spectral sequence for Dolbeault-cohomology in order to get an inductive proof. This is the case if the complex structure is $\Gamma$-rational or $M_J$ is an iterated principal holomorphic bundle. This yields the result on a dense subset of the space of invariant complex structures, and Console and Fino then show that the property \emph{``The map $\iota_J$ is an isomorphism''} is stable under small deformations.
It is an open question whether $\iota_J$ is an isomorphism for every left-invariant complex structure on a nilmanifold.
The work on Lie-algebra Dolbeault-cohomology in Section \ref{LDC} now allows us to compute the cohomology of the holomorphic tangent bundle resp. tangent sheaf:
\begin{cor}
Under the same conditions as in Theorem \ref{citedolbeault} the inclusion
\[\iota:H^{p}_{\delbar}((\lg,J), \lg)\to H^{p}(M_J, \kt_{M_J})\isom H^p(M_J, \Theta_{M_J})\]
is an isomorphism. Here we consider $\lg$ as an integrable $\lg$-module under the adjoint representation.
\end{cor}
\pf This is Corollary \ref{isoduality} applied to the holomorphic tangent bundle of $M_J$.\qed
The same was proved for 2-step nilmanifolds with abelian complex structure in \cite{mpps06} and for abelian complex structures in general in \cite{con-fin-poon06}. Hence we can extend the theorem proved there:
\begin{theo}\label{invariantdeformation}
Let $M_J$ be a nilmanifold with left-invariant complex structure of type $(\lg, \Gamma)$ such that
\[\iota:H^{1,q}((\lg,J),\IC)\to H^{1,q}(M_J)\]
is an isomorphism for all $q$. Then all small deformations of the complex structure $J$ are again left-invariant complex structures. More precisely, the Kuranishi family contains only left-invariant complex structures.
\end{theo}
\pf By the work of Kuranishi, the small deformations of $M_J$ are governed by the differential graded algebra $\ka^*_{M_J}(\kt_M)$ of differential forms with values in $\kt_M$. By the above corollary the inclusion $\Lambda^*\nulleins{\lg^*} \tensor \einsnull\lg\subset \ka^*_{M_J}(\kt_M)$ is a quasi-isomorphism and hence induces an isomorphism of corresponding deformation spaces.
We spell this out in more detail following Kuranishi's inductive method on harmonic forms in order to give a description of the Kuranishi space. Note that this has already been done in \cite{mpps06} in the context of abelian complex structures. We choose an invariant, compatible hermitian structure on $M$ as in Section \ref{invariantcohomology}. Recall that the Schouten bracket is defined by
\begin{gather*}
[\cdot, \cdot]: H^1(M, \kt_M)\times H^1(M, \kt_M) \to H^2(M, \kt_M)\\
[\bar\omega\tensor V, \bar\omega' \tensor V'] := \bar \omega' \wedge L_{V'}\bar \omega\tensor V+ \bar\omega \wedge L_{V}\bar\omega'\tensor V'+\bar \omega\wedge \bar \omega ' \tensor [V,V']
\end{gather*}
where $L$ is the Lie derivative, i.e. $L_V\bar\omega'= i_V\circ d \bar\omega'+d\circ i_V\bar\omega'$. By assumption we can represent every class in $H^1(M, \kt_M)$ by an element in $\nulleins\kh(\lg, \lg)$ which can be considered as an invariant, harmonic differential form on $M$ with respect to the hermitian structure.
Let $G$ be Green's operator which inverts the Laplacian on the orthogonal complement of the harmonic forms. By construction $G$ maps invariant forms to invariant forms since the Laplacian has this property.
Let $\eta_1, \dots, \eta_m$ be a basis for $\nulleins\kh(\lg, \lg)$ and consider the equation
\[\phi(t)=\sum_{i=1}^m \eta_i t_i +\frac{1}{2} \delbar^* G[\phi(t), \phi(t)].\]
It has a formal power series solution with values in $\nulleins{\lg^*}\tensor \einsnull \lg$ which is given inductively by \[\phi_1(t)=\sum_{i=1}^m \eta_i t_i\text{ and }\phi_r(t)= \frac{1}{2} \sum_{s=1}^{r-1} \delbar^* G[\phi_s(t), \phi_{r-s}(t)].\]
Note that by construction $\phi(t)$ is left-invariant.
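For concreteness, the first non-trivial step of this recursion can be written out explicitly; this is a direct specialisation of the formula above and not an additional result:

```latex
% r = 2: the only summand in the recursion is s = 1, and bilinearity
% of the bracket gives
\[
\phi_2(t)=\frac{1}{2}\,\delbar^* G[\phi_1(t),\phi_1(t)]
         =\frac{1}{2}\sum_{i,j=1}^m \delbar^* G[\eta_i,\eta_j]\,t_i t_j .
\]
```

Each term on the right-hand side is again in $\nulleins{\lg^*}\tensor \einsnull \lg$, since $\delbar^*$ and Green's operator $G$ preserve invariant forms.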
By Kuranishi theory (see e.g. \cite{catanese88}, p. 11) this series converges for small $t$ and there is a complete family of deformations of $M$ over the base
\[B:= \{ t\in B_\epsilon(0)\mid \delbar \phi(t)-\frac{1}{2}[\phi(t), \phi(t)]=0\}.\]
If $\xi_1, \dots, \xi_k$ is a basis of $\kh^{0,2}(\lg, \lg)$ then we can use the inner product $(\cdot, \cdot)$ on $\Lambda^2\nulleins{\lg^*}\tensor \einsnull \lg$ to describe $B$ as the zero locus of the functions
\[g_i(t)= (\xi_i, [\phi(t), \phi(t) ]),\qquad i=1,\dots ,k.\]
The complex structure over a point $t\in B$, corresponding to $\eta=\sum_{i=1}^m \eta_i t_i$, is determined by
\[ \nulleins{(TM_t)}= (id+\phi(t)) \nulleins{TM}.\]
In particular the complex structure is left-invariant since this is true for $\phi(t)$ and $\nulleins{TM}$.\qed
The Kuranishi space has been described more in detail in special cases. If the complex structure is abelian it is often smooth \cite{mpps06, con-fin-poon06} and if $M_J$ is complex parallelisable it is cut out by polynomial equations but usually singular and reducible \cite{rollenske08a}. We believe that this is true also for general nilmanifolds.
Beyond small deformations one can look at deformations in the large which have been studied in \cite{rollenske08d}.
\section{Introduction}
Over the last years, magnetic fields have been detected in a significant number of local dwarf galaxies. This includes prominent examples such as the Large Magellanic Cloud \citep[LMC, ][]{Gaensler05}, the Small Magellanic Cloud \citep[SMC, ][]{Mao08}, and many additional examples such as NGC~4449 \citep{Chyzy00}, NGC~1569 \citep{Kepley10}, NGC~6822 \citep{Chyzy03}, IC~10 \citep{Chyzy03, Heesen11} and NGC~4214 \citep{Kepley11}. \citet{Chyzy11} pursued a dedicated investigation of radio emission and magnetic fields in an unbiased sample of $12$ Local Group (LG) irregular and dwarf irregular galaxies yielding both detections and upper limits, while \citet{Roychowdhury12} employed the stacking technique to improve the sensitivity in the radio and to probe average properties of the radio emission for the faintest end of dwarf galaxies. A central result of both studies is that the magnetic fields in dwarf galaxies are about three times weaker than in normal spirals, with a typical field strength of $<4.2\pm1.8$~$\mu$G as given by \citet{Chyzy11}. Both the detections and upper limits are consistent with the assumption that local dwarf galaxies lie on the far-infrared - radio correlation, with a typical scaling of the magnetic field strength $B$ with the star formation surface density $\Sigma_{\rm SFR}^{1/3}$.
The correlation between the far-infrared and radio fluxes was originally observed by \citet{Kruit73b, Kruit73c, Kruit73a}. Subsequent investigations have been pursued by \citet{deJong85} and \citet{Helou85}, while the interpretation in terms of calorimeter models was pursued by \citet{Volk89}. \citet{Niklas97b} proposed a detailed scenario in which the far-infrared - radio correlation emerges due to a relation between the magnetic field strength, the gas surface density and the star formation rate. In particular, it is well-known that the gas surface density is strongly correlated to star formation activity, as reflected in the Kennicutt-Schmidt relation \citep{Schmidt59, Kennicutt98, Kennicutt08, Walter08, Bigiel11, Kennicutt12}. Massive stars emit UV radiation absorbed by dust grains, and re-emitted in the infrared and far-infrared. In addition, the supernova explosions of massive stars inject both cosmic rays and turbulence into the interstellar medium. Such turbulence efficiently amplifies magnetic fields via the small-scale dynamo \citep{Kazantsev68, Subramanian99, Scheko02, Schober12b, Schleicher13, FederrathPRL, Grete15}, and as a result, the feedback from star formation provides the relevant ingredients to drive the radio emission \citep[see e.g.][]{Groves03, Schleicher13b}.
The potential validity of the far-infrared - radio correlation even in dwarf galaxies was already suggested by \citet{Bell03}, as both the dust content in the dwarfs and the efficiency of non-thermal radio emission may decrease in a similar amount towards lower star formation rates. At that time, the correlation had been observationally established only for nearby spiral galaxies, including a sample of 1809 galaxies probed by \citet{Yun01}, for which the correlation has been confirmed over 5 orders of magnitude in luminosity. More recent work by \citet{Lacki10} has shown that in fact both the escape of UV photons and cosmic rays from the galaxy may be correlated to the characteristic surface densities, and contributes to our overall understanding of the observations. It is worth noting that these correlations do not only hold on a global scale, but have further been confirmed within the galaxies via dedicated investigations \citep{Dumas11, Taba13, Heesen14}.
Recent studies by \citet{Murphy09}, \citet{Ivison10a}, \citet{Jarvis10}, \citet{Sargent10} and \citet{Casey12} in fact provide evidence that the far-infrared - radio correlation holds at least until redshifts of $z\sim2$, and it also holds in the context of galaxy mergers \citep{Drzazga11}. {In addition, work by \citet{Miettinen15} shows that the radio emitting region is more extended than the infrared emitting region at least in some cases. The latter is potentially consistent with Taffy-like systems \citep{Condon02}, mergers \citep{Murphy13} or systems undergoing tidal interactions \citep{Donevski15}, though we suggest that tidal tails as in the Antennae galaxies \citep{Chyzy04} may be the more frequent scenario. In the context of mergers, previous studies, e.g. \citet{Drzazga11} have preferentially considered the impact of magnetic fields, while \citet{Lisenfeld10} pointed out the potential importance of particle acceleration. The latter requires rather high Mach numbers, which may not be available in the interstellar medium \citep{Guo14a}, or the Firehose instability, if the acceleration only concerns the electrons \citep{Guo14b}. In that case, a high plasma beta would be required, while current estimates indicate that it may be rather small \citep{Beck15}.}
The radio-infrared correlation holds for thermal and non-thermal (synchrotron) radio emission, though with different slopes. Thermal radio emission may dominate in dwarf galaxies \citep{Roychowdhury12} and hence mask the relation between non-thermal and infrared emission. The interpretation requires a careful separation of both emission components, e.g. with the help of spectral index data, which is non-trivial and often not possible. Alternatively, the thermal radio emission is assumed to be linearly proportional to the infrared emission, although not all infrared emission is directly related to UV radiation from young stars. Only H$\alpha$ emission can be safely assumed to be proportional to radio thermal emission, if extinction is properly corrected.
On theoretical grounds, it is expected that magnetic fields can be efficiently amplified even in small systems and at high redshift, due to the efficient amplification by turbulence \citep{Arshakian09, Wang09, Schleicher10c, Souza10, Latif13, Schober13}, and it is thus conceivable that the far-infrared - radio correlation will be in place early on. As pointed out by \citet{Murphy09}, a potential difficulty is however the increasing strength of the cosmic microwave background at high redshift, enhancing the inverse Compton emission and providing an additional loss mechanism for the cosmic ray electrons. It is thus conceivable that the latter may lead to a modification or a breakdown of the correlation at very high redshift due to differences in the energy loss mechanisms of cosmic rays \citep{Lacki10b, Schleicher13b, Schober15}.
In this paper, we explore whether a correlation between the far-infrared and radio emission can still be expected in dwarf galaxies, in particular in the limit of star formation surface densities below $0.1$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$. In this respect, one needs to examine a couple of relevant issues. A first one concerns the behavior of star formation itself. In the sample explored by \citet{Chyzy11}, the Kennicutt-Schmidt relation appeared to hold even in the dwarf galaxy regime, while \citet{Roychowdhury09} reported potential deviations towards lower star formation rates. Through dedicated spatially resolved investigations of star formation and the HI-dominated gas both in nearby spirals and dwarf irregular galaxies using the THINGS \citep{Walter08} and FIGGS \citep{Begum08} surveys, \citet{Roychowdhury15} have pursued a detailed comparison of the Kennicutt-Schmidt relation in both regimes, finding in particular that there is no dependence on the metallicity of the gas. It is therefore conceivable that this relation will hold in the dwarf galaxy regime, even though the scatter may potentially increase towards lower gas masses.
An important difference compared to spiral galaxies is that the rotation in local dwarfs may be substantially reduced, implying slow or more chaotic rotation with a low differential rotation \citep{Chyzy03}. In the VLA-ANGST survey of 35 nearby dwarf galaxies, the lowest detected rotation velocities have been of the order $20$~km/s. Through the Westerbork HI Survey of Spiral and Irregular Galaxies (WHISP), rotation curves have been measured for a sample of 62 galaxies, finding typical rotational velocities of $20-80$~km/s on scales of a few disk scale lengths \citep{Swaters09}. In general, a strong variation is found from dwarf to dwarf, and in some cases like NGC4449 \citep{Theis01} or IC~10 \citep{Ashley14}, the rotation curves have been explained by tidal interactions of disks with other dwarfs. The implications of the large diversity of dwarf galaxy rotation curves has therefore been discussed also in recent studies \citep{Oman15}.
In the radio regime, additional differences have been found compared to the typical conditions in local spirals, whose main properties were already described by \citet{Condon92}. In particular, while the fraction of thermal radio emission corresponds to about $8\%$ for local spirals \citep{Murphy06}, a thermal fraction of $\sim50\%$ has been reported recently by \citet{Roychowdhury12} in the dwarf galaxy regime. We will in fact show in this paper that such a behavior arises rather naturally due to the non-linear nature of the far-infrared - radio correlation.
The structure of this paper is as follows. In section~\ref{model}, we outline our overall modelling framework, which is employed to derive characteristic timescales both for dynamical processes within the dwarfs as well as the timescales for thermal and radio emission. These are employed to derive critical star formation surface densities, which are required for these processes to be maintained in a steady fashion. In section~\ref{slope}, these results are employed to distinguish between four characteristic regimes of radio emission in the dwarf galaxies, and we discuss in particular the expected slope and the breakdown of the correlation at very low star formation rates. A discussion with our main conclusions is presented in section~\ref{discussions}.
\section{Model framework}\label{model}
In the following, we will outline our main model framework, starting with the basic assumptions (subsection~\ref{basic}) and including a more detailed framework for the formation of galactic winds (subsection~\ref{windmodel}). A discussion of relevant timescales for the dynamics, the thermal and radio emission is given in subsection~\ref{timescales} along with a derivation of critical star formation rates, which are presented and compared in subsection~\ref{criticalsfr}.
\subsection{Basic assumptions}\label{basic}
We assume that star formation in dwarf galaxies occurs in a disk of neutral HI gas with surface density $\Sigma$, radius $R_{\rm disk}$ and height $H$. We further assume that the Kennicutt-Schmidt relation provides a valid description of the star formation law, so that the star formation surface density is given as\begin{equation}
\Sigma_{\rm SFR}=C\Sigma^N,\label{Kennicutt}
\end{equation}
where $C$ is a normalization constant and $N\sim1.5$ the slope of the relation. Considering the results by \citet{Chyzy11}, we adopt here \begin{equation}
C\sim\frac{2\times10^{-2}\ M_\odot\ \mathrm{yr}^{-1}\ \mathrm{kpc}^{-2}}{\left( 10^8\ M_\odot\ \mathrm{kpc}^{-2} \right)^N}
\end{equation}
for the normalization. With these quantities, the amount of gas available for star formation is $M_{\rm gas}\sim R_{\rm disk}^2\pi\Sigma$, and the total star formation rate within the galaxy is given as\begin{equation}
\dot{M}_{\rm SFR}=R_{\rm disk}^2\pi \Sigma_{\rm SFR}=R_{\rm disk}^2\pi C\Sigma^N.
\end{equation}
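As a quick numerical cross-check of these normalisations (a sketch of our own, not code from any cited work; the function names are illustrative), the relations above can be evaluated directly:

```python
import math

# Kennicutt-Schmidt relation Sigma_SFR = C * Sigma^N with the normalisation
# adopted in the text: Sigma_SFR ~ 2e-2 Msun/yr/kpc^2 at Sigma = 1e8 Msun/kpc^2.
N = 1.5
C = 2e-2 / (1e8) ** N  # in (Msun/yr/kpc^2) per (Msun/kpc^2)^N

def sfr_surface_density(sigma):
    """Star formation surface density [Msun/yr/kpc^2] for a gas
    surface density sigma [Msun/kpc^2]."""
    return C * sigma ** N

def total_sfr(sigma, r_disk):
    """Total star formation rate pi * R_disk^2 * Sigma_SFR [Msun/yr]
    for a disk of radius r_disk [kpc]."""
    return math.pi * r_disk ** 2 * sfr_surface_density(sigma)
```

By construction, a gas surface density of $10^8\ M_\odot\,{\rm kpc}^{-2}$ recovers $\Sigma_{\rm SFR}\sim2\times10^{-2}\ M_\odot\,{\rm yr}^{-1}\,{\rm kpc}^{-2}$, and quadrupling $\Sigma$ increases $\Sigma_{\rm SFR}$ by a factor $4^{1.5}=8$.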
The observed relation between non-thermal radio and infrared emission further suggests a relation between the magnetic field strength $B$ and the star formation surface density $\Sigma_{\rm SFR}$ of the form\begin{equation}
B=C_B \Sigma_{\rm SFR}^{1/3},\label{BSFR}
\end{equation}
where we determine the normalization constant $C_B$ from the investigation of \citet{Chyzy11} as\begin{equation}
C_B\sim\frac{8\mu G}{\left( 0.1\ M_\odot\ \mathrm{yr}^{-1}\ \mathrm{kpc}^{-2} \right)^{1/3}}.
\end{equation}
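As a sanity check on this normalisation (again a sketch of our own; the function name is illustrative), the implied field strengths are easily evaluated:

```python
def field_strength_muG(sigma_sfr):
    """Field strength B = C_B * Sigma_SFR^(1/3) in microgauss, with
    C_B = 8 muG / (0.1 Msun yr^-1 kpc^-2)^(1/3) as adopted above.
    sigma_sfr: star formation surface density in Msun/yr/kpc^2."""
    return 8.0 * (sigma_sfr / 0.1) ** (1.0 / 3.0)
```

By construction this returns $8\,\mu$G at $\Sigma_{\rm SFR}=0.1\ M_\odot\,{\rm yr}^{-1}\,{\rm kpc}^{-2}$, while $\Sigma_{\rm SFR}=0.0125$ gives $4\,\mu$G, comparable to the typical dwarf-galaxy value of $<4.2\pm1.8\,\mu$G quoted in the introduction.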
The height of the disk of the warm interstellar medium (ISM) results from the balance between the gravitational force, the turbulent pressure, thermal pressure, magnetic pressure and cosmic rays. The disk height $H$ is then given as\begin{equation}
H = \frac{v_{th}+v_t(1+2\epsilon_{\rm B}^{1/2})}{\Omega_K},
\end{equation}
where $v_{th}\sim10$~km/s corresponds to the thermal velocity of the warm ISM, $v_t$ is the turbulent velocity and $\Omega_K$ the angular velocity required to balance the gravitational force. In general, we note that the disk height will be different depending on the ISM component that is considered, as the cold gas may have different turbulent and lower thermal velocities. We assume here equipartition between the magnetic energy density and cosmic rays, with the magnetic energy density given as a fraction $\epsilon_{\rm B}$ of the turbulent energy density. Indeed, observations have shown that the energy density of turbulence and the magnetic field are essentially comparable, implying $\epsilon_{\rm B}\sim1$ \citep{Beck07, Beck15}. The latter indicates that turbulence may play a central role in amplifying the magnetic field.
In the following, we likewise assume approximate equipartition between magnetic and turbulent energy in the warm gas component, i.e. the component which predominantly gives rise to the observed radio emission. {We particularly note that we focus here on the disordered component of the magnetic field, which yields the dominant contribution to the radio flux. For comparison, studies by \citet{Drzazga16} have shown that the ordered component in the outer parts of the galaxy is also affected by tidal interactions.} The density of the warm neutral gas can be estimated as $\Sigma/2H$. The typical density ratio between the ionized and neutral warm gas component is denoted as $C_{\rm HII/HI}$ in the following, and we adopt a typical value of $C_{\rm HII/HI}\sim2\%$ following \citet{Tielens05}. The equipartition between turbulent and magnetic energy in the warm ionized gas can then be expressed as
\begin{equation}
\frac{B^2}{8\pi}=\epsilon_{\rm B}C_{\rm HII/HI}\frac{1}{2}\frac{\Sigma}{2H}v_t^2.\label{equi}
\end{equation}
With the {order-of-magnitude} scaling relation $H\sim v_t/\Omega_K$, the above expression can be simplified as
\begin{equation}
\frac{B^2}{8\pi}=\frac{1}{4}\epsilon_{\rm B}C_{\rm HII/HI}\Sigma v_t \Omega_K.\label{equi2}
\end{equation}
We can therefore solve for the turbulent velocity $v_t$, and insert Eq.~(\ref{BSFR}) as well as the Kennicutt-Schmidt relation (Eq.~\ref{Kennicutt}), yielding\begin{eqnarray}
v_t&=&\frac{B^2}{2\pi\epsilon_{\rm B}C_{\rm HII/HI}\Sigma\Omega_K}=\frac{C_B^2\Sigma_{\rm SFR}^{2/3}}{2\pi\epsilon_{\rm B}C_{\rm HII/HI}\Sigma\Omega_K}\nonumber\\
&=&\frac{C_B^2 C^{2/3}\Sigma^{2N/3-1}}{2\pi\epsilon_{\rm B}C_{\rm HII/HI}\Omega_K}.\label{vt}
\end{eqnarray}
Evaluating the disk height and turbulent velocity in this way, additional properties of the galaxy follow from the more detailed ISM model described in the next subsection. A central parameter is then the amount of rotation $\Omega_K$ of the dwarf galaxy.
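As a minimal numerical sketch of Eq.~(\ref{vt}), one can evaluate the turbulent velocity for assumed input values of $B$, $\Sigma$ and $\Omega_K$ (illustrative choices, not observationally derived) and verify that the result satisfies the equipartition condition:

```python
import math

# Sketch: turbulent velocity from equipartition (Eq. vt), in cgs units.
MSUN = 1.989e33      # g
KPC = 3.086e21       # cm

def v_turb(B_gauss, sigma_gas, omega_k, eps_b=1.0, c_hii=0.02):
    """v_t = B^2 / (2 pi eps_B C_HII/HI Sigma Omega_K); Sigma in g/cm^2,
    Omega_K in 1/s, B in Gauss; returns cm/s."""
    return B_gauss**2 / (2.0 * math.pi * eps_b * c_hii * sigma_gas * omega_k)

# Illustrative values (assumed, not from the paper): B = 8 muG,
# Sigma = 1e8 Msun/kpc^2, Omega_K = (40 km/s) / (0.5 kpc).
sigma = 1e8 * MSUN / KPC**2
omega = 40.0e5 / (0.5 * KPC)
vt = v_turb(8e-6, sigma, omega)

# Consistency check: inserting v_t back into the equipartition
# condition recovers the magnetic energy density B^2/8pi.
lhs = (8e-6)**2 / (8.0 * math.pi)
rhs = 0.25 * 1.0 * 0.02 * sigma * vt * omega
print(vt / 1e5, "km/s")
```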
\subsection{Wind model}\label{windmodel}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{wind.eps}
\caption{Wind velocity as a function of the star formation surface density. Shown are both the velocities for a thermally-driven wind (Eq.~\ref{windtherm}) and our full model including magnetic and cosmic-ray pressure (Eq.~\ref{wind}). As real dwarf galaxies show a relevant scatter in their typical field strength, we expect realistic cases to lie in between.}
\label{figwind}
\end{center}
\end{figure*}
The wind model adopted here is an extension of the framework developed by \citet{Shu05}. They employed the multi-phase ISM model for supernova evolution by \citet{McKee77} and \citet{Efstathiou00} to construct a model for galactic winds in the context of starburst galaxies. A central assumption in their model is that the galactic porosity, i.e. the volume filling factor of the hot gas, which is formally defined via\begin{equation}
P=\frac{f_d V_{\rm hot}}{V_{\rm SFR}},
\end{equation}
is generally of order $1$, with $V_{\rm hot}$ the volume of the hot gas, $V_{\rm SFR}$ the total volume of the star-forming system and $f_d\sim 2H/R$ a correction factor for galactic disks. To generalize this point, we therefore adopt the expression derived by \citet{Clarke02} to parametrize the dependence of porosity on galaxy properties, which is given as\begin{equation}
P=\frac{7 f_d (\dot{M}_{\rm SFR}/(M_\odot\,\mathrm{yr}^{-1}))}{(M_{\rm ISM}/10^{10}~M_\odot)(v_{th}/(10\ \mathrm{km/s}))^2}.
\end{equation}
In the above, $M_{\rm ISM}$ denotes the mass of the ISM, which we estimate via $M_{\rm ISM}=R_{\rm disk}^2\pi \Sigma${, and $\dot{M}_{\rm SFR}$ denotes the star formation rate in the galaxy}. The stellar mass here is not considered, as we only divide by the thermal energy of the warm ISM. The warm gas produced due to star formation is compared with the thermal energy of the warm ISM, adopting a thermal velocity of $v_{th}\sim10$~km/s. Except in the regime of extremely low star formation rates, we generally find $P\sim1$, consistent with previous results by \citet{Clarke02}.
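The porosity expression can be sketched numerically as follows; the disk correction factor $f_d=1$, the gas surface density and the size of the region are illustrative assumptions, with the star formation surface density taken from the Kennicutt-Schmidt normalization quoted above:

```python
import math

# Sketch of the porosity parametrization quoted above; all input
# values below are illustrative assumptions, not fits to data.

def porosity(sfr, m_ism, v_th=10.0, f_d=1.0):
    """P = 7 f_d (SFR / Msun/yr) / [(M_ISM/1e10 Msun)(v_th/10 km/s)^2]."""
    return 7.0 * f_d * sfr / ((m_ism / 1e10) * (v_th / 10.0) ** 2)

# Example: R_disk = 0.5 kpc, Sigma = 1e7 Msun/kpc^2, Kennicutt-Schmidt
# with N = 1.5 and the normalization C quoted at the start of the section.
R = 0.5                                     # kpc
sigma = 1e7                                 # Msun/kpc^2
sigma_sfr = 2e-2 * (sigma / 1e8) ** 1.5     # Msun/yr/kpc^2
sfr = math.pi * R**2 * sigma_sfr            # Msun/yr
m_ism = math.pi * R**2 * sigma              # Msun
P = porosity(sfr, m_ism)
f_cold = math.exp(-P)   # cold gas fraction f_c = e^-P used below
print(P, f_cold)
```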
{We note that the expressions adopted here assume a Salpeter IMF \citep{Salpeter55}, consistent with the assumption that about $5\%$ of the stellar mass is in massive stars with more than $8$~M$_\odot$. The precise form of the IMF in dwarf galaxies is however a matter of ongoing debate. For instance, the results by \citet{Geha13} for ultra-faint dwarf galaxies indicate that the IMF is shallower than Salpeter in the mass range between $0.52$~M$_\odot$ and $0.77$~M$_\odot$. On the other hand, a comparison of H$\alpha$ and UV data suggests a deficit of high-mass stars below star formation rates of $0.003$~M$_\odot$~yr$^{-1}$ \citep{Lee09}, in line with expectations of \citet{Weidner05}. For a generalization of the framework adopted here, all star formation rates or surface densities in the following may be considered to be multiplied by a factor $\epsilon_{hm}/0.05$. We note here in particular that a lower value of $\epsilon_{hm}$ may imply that some of the relations discussed here could break down even earlier, or, in case of a gradual evolution of this parameter, a steepening of the far-infrared - radio relation would occur. In the absence of a more detailed knowledge, we will in the following however assume that the parameter is constant, and explore the corresponding results.}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{sfr_diss.eps}
\caption{Turbulence injection timescale (or timescale of massive star formation) {compared to} turbulence dissipation timescale in dwarf galaxies with rotational velocities of $20$~km/s, $40$~km/s and $80$~km/s. The timescales are shown as a function of star formation surface density. The critical star formation surface density is found to be $\sim10^{-5}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, with only a weak dependence on the rotational velocity. The size of the star-forming region is assumed to be $0.5$~kpc.}
\label{sfr_diss}
\end{center}
\end{figure*}
From the star formation surface density $\Sigma_{\rm SFR}$ in the galaxy, we can estimate the star formation rate per unit volume as $\dot{\rho}_{\rm SFR}=\Sigma_{\rm SFR}/2H$. For the model of \citet{Shu05}, we first need to calculate the supernova rate $S_{-13}$ in units of $10^{-13}$~pc$^{-3}$~yr$^{-1}$. The latter is given as\begin{equation}
S_{-13}=10^{13}\frac{\dot{\rho}_{\rm SFR}}{M_{ps}}=\frac{10^{13}\times\Sigma_{\rm SFR}}{2HM_{ps}},
\end{equation}
where $M_{ps}$ denotes the stellar mass required to form at least one star massive enough to explode as a supernova. The latter is parametrized via $M_{ps}=8$~M$_\odot/\epsilon_{hm}$, with $\epsilon_{hm}\sim0.05$ the mass fraction of massive stars \citep{Kroupa02, Chabrier03}. From this expression, one can calculate the temperature of the hot gas in cavities as \citep{Efstathiou00, Shu05}:\begin{equation}
T_h=6.6\times10^5\left( \frac{ S_{-13}E_{51}f_\Sigma}{\gamma}\right)^{0.29} K,\label{Th}
\end{equation}
with $E_{51}=1$ the typical energy of supernova explosions in units of $10^{51}$~erg and $\gamma=2.5$ the ratio of blast wave velocity to the isothermal sound speed of the hot phase in the case of strong shocks. The quantity $f_\Sigma$ parametrizes the dependence on various properties of the ISM, including conductivity and minimum mass of the clouds. For most of them, we keep the standard parameters employed by \citet{Shu05}, and consider in the following only the impact of the varying cold gas fraction $f_c=e^{-P}$. The latter yields\begin{equation}
f_\Sigma = 21.5\left( \frac{f_c}{e^{-1}} \right)^{-1}.
\end{equation}
We note here that the final wind velocity will scale as $f_\Sigma^{0.145}$, and is therefore highly insensitive to the ISM parameters incorporated into $f_\Sigma$. In a fully ionized gas, the isothermal sound speed is now given as\begin{equation}
C_i=\sqrt{k_B T_h/\mu m_p}=37 \left( \frac{T_h}{10^5\ K} \right)^{0.5}\ \mathrm{km/s},\label{Ci}
\end{equation}
with $k_B$ the Boltzmann constant, $m_p$ the proton mass and $\mu$ the mean molecular weight. In the presence of a pure thermal driving, the wind velocity then follows as\begin{equation}
v_{\rm wind, therm}=\Gamma_W C_i P^{-1/7},\label{windtherm}
\end{equation}
where the factor $\Gamma_W\sim\sqrt{2.5}$ allows for some radiative cooling of the energy injected by the supernova, and the factor $P^{-1/7}$ provides a correction in the limit of low porosity parameters developed by \citet{Shu05}.
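The chain from supernova rate to thermal wind velocity (Eqs.~\ref{Th}, \ref{Ci} and \ref{windtherm}) can be sketched as follows, using the default parameters quoted in the text ($E_{51}=1$, $\gamma=2.5$, $f_\Sigma=21.5$); the values $S_{-13}=1$ and $P=1$ are illustrative assumptions:

```python
# Sketch of the thermally-driven wind chain (Eqs. Th, Ci, windtherm);
# parameter defaults are the standard values quoted in the text.

def t_hot(s13, e51=1.0, f_sigma=21.5, gamma=2.5):
    """Hot-phase temperature in K, Eq. (Th)."""
    return 6.6e5 * (s13 * e51 * f_sigma / gamma) ** 0.29

def c_iso(t_h):
    """Isothermal sound speed in km/s, Eq. (Ci)."""
    return 37.0 * (t_h / 1e5) ** 0.5

def v_wind_therm(s13, P=1.0, gamma_w=2.5 ** 0.5):
    """Thermal wind velocity in km/s, Eq. (windtherm)."""
    return gamma_w * c_iso(t_hot(s13)) * P ** (-1.0 / 7.0)

# For S_-13 = 1 and porosity P = 1 (illustrative):
print(t_hot(1.0))          # ~1.2e6 K
print(v_wind_therm(1.0))   # ~2e2 km/s
```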
Here, we further aim to account for the role of magnetic and cosmic ray pressure in driving such winds. For this purpose, we note that Eq.~(\ref{windtherm}) has been derived from the conservation of the specific enthalpy in the hot gas. We recall that the thermal enthalpy of the hot gas is given as $H_T=\frac{5}{3}E_T$, with $E_T$ its thermal energy. The magnetic enthalpy is given as $H_B=2E_B$, with $E_B$ the magnetic energy, and the enthalpy of cosmic rays follows as $H_{CR}=\frac{4}{3}E_{CR}$, with $E_{CR}$ the cosmic ray energy. We assume here approximate equipartition between the energy in magnetic fields and cosmic rays, and our expression for the total enthalpy is then given as \begin{equation}
H_{\rm tot}=\frac{5}{3}E_T\left(1+2\frac{E_B}{E_T} \right).
\end{equation}
The ratio $E_B/E_T$ is evaluated using\begin{equation}
\frac{E_B}{E_T}=\frac{B^2/8\pi}{1.5 n_h k_B T_h},
\end{equation}
where $n_h$ is the number density of the hot gas, which we evaluate as \citep{Efstathiou00}\begin{equation}
n_h=4.3\times10^{-3}S_{-13}^{0.36}\gamma^{-0.36}f_\Sigma^{-0.393}.\label{nh}
\end{equation}
Considering thus the conservation of thermal, magnetic and cosmic ray enthalpy, the resulting wind velocity is given as\begin{equation}
v_{\rm wind}=\Gamma_W C_i P^{-1/7}\sqrt{1+2\frac{E_B}{E_T}}.\label{wind}
\end{equation}
The resulting wind velocities both for the full model and the thermally-driven wind are illustrated in Fig.~\ref{figwind}. {To understand the resulting behavior as a function of the star formation surface density, we first note that the porosity $P$ is close to $1$ for most of the regime considered here, and we also assume a constant ratio $E_B/E_T$. The main variable through which the star formation surface density enters the calculation is thus the isothermal sound speed $C_i$ given in Eq.~(\ref{Ci}), which depends on the temperature of the hot gas in ISM cavities given in Eq.~(\ref{Th}). The latter scales with $S_{-13}^{0.29}$, with $S_{-13}\propto \Sigma_{\rm SFR}/H$. For our reference case with Kennicutt-Schmidt index $N=1.5$, we have $H\sim v_t/\Omega_K\propto \Sigma_{\rm SFR}^0/\Omega_K$ from Eq.~(\ref{vt}), where $\Omega_K$ is independent of the star formation rate. Thus, $T_h\propto \Sigma_{\rm SFR}^{0.29}$ and $v_{\rm wind}\propto C_i\propto T_h^{0.5}\propto \Sigma_{\rm SFR}^{0.145}$. The latter implies that the wind velocity changes by one order of magnitude when the star formation surface density changes by 7 orders of magnitude, consistent with the results in Fig.~\ref{figwind}. }
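A minimal sketch of the enhancement factor in Eq.~(\ref{wind}), combining Eqs.~(\ref{Th}) and (\ref{nh}); the field strength $B=8\,\mu$G and $S_{-13}=1$ are illustrative assumptions:

```python
import math

# Sketch: magnetic-to-thermal energy ratio in the hot phase and the
# resulting wind enhancement factor of Eq. (wind). The field strength
# B = 8 muG is an illustrative assumption.

K_B = 1.381e-16   # Boltzmann constant, erg/K

def n_hot(s13, gamma=2.5, f_sigma=21.5):
    """Hot-gas number density in cm^-3, Eq. (nh)."""
    return 4.3e-3 * s13**0.36 * gamma**(-0.36) * f_sigma**(-0.393)

def eb_over_et(b_gauss, s13, t_h):
    """(B^2/8pi) / (1.5 n_h k_B T_h)."""
    return (b_gauss**2 / (8.0 * math.pi)) / (1.5 * n_hot(s13) * K_B * t_h)

t_h = 6.6e5 * (21.5 / 2.5) ** 0.29            # Eq. (Th) with S_-13 = 1
ratio = eb_over_et(8e-6, 1.0, t_h)
boost = math.sqrt(1.0 + 2.0 * ratio)          # factor in Eq. (wind)
print(ratio, boost)
```

With these assumed inputs, the magnetic and cosmic-ray pressure terms enhance the wind velocity by a factor of a few over the thermal value, in line with the behavior shown in Fig.~\ref{figwind}.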
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{sfr_var.eps}
\caption{Turbulence injection timescale (or timescale of massive star formation) {compared to} turbulence dissipation timescale in dwarf galaxies for different Kennicutt-Schmidt relations {as a function of the star formation surface density}. We adopt here a reference case with $N=1.5$ as well as two additional cases with $N=1$ and $N=2$. The size of the star forming region is assumed to be $0.5$~kpc, the rotational velocity $40$~km/s. We find that the critical star formation surface density has no strong dependence on the precise form of the Kennicutt-Schmidt relation.}
\label{sfr_var}
\end{center}
\end{figure*}
\subsection{Characteristic timescales}\label{timescales}
In this subsection, we evaluate characteristic timescales of the dwarf galaxies and their ISM to assess whether the far-infrared - radio relation can be maintained for low star formation rates. In the following, we distinguish between dynamical timescales, the timescale for the thermal emission and the timescales for radio emission and cosmic ray losses.
\subsubsection{Dynamical timescales}
Assuming that the disk height $H$ corresponds to the size of the largest turbulent eddies, the timescale for the turbulent energy dissipation
is given as\begin{equation}
\tau_{\rm diss}\sim\frac{H}{v_t}\sim \Omega_K^{-1}.
\end{equation}
The balance between gravitational force and turbulent pressure in the vertical direction therefore implies that the turbulence dissipation time is comparable to the rotation period of the dwarf galaxy $\Omega_K^{-1}$, assuming that the vertical support of the disk is dominated by the turbulent energy, i.e. $H\sim v_t/\Omega_K$. We do not explicitly account here for the potential effects of differential rotation, which may change the rotation period as a function of radius, as we are predominantly interested in an order-of-magnitude estimate. {For $\tau_{\rm diss}$, we expect overall a very weak dependence on the star formation surface density, which in fact vanishes under the assumption that $H\sim v_t/\Omega_K$. Such a very weak dependence is indeed visible in Fig.~\ref{sfr_diss}.}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{timescale_20.eps}
\caption{Characteristic timescales for cosmic ray loss mechanisms for a reference model with $R_{\rm disk}=0.5$~kpc and a rotational velocity of $20$~km/s of the star-forming region. Shown are in particular the injection timescale of the cosmic rays, defined as the timescale for massive star formation, the adiabatic losses both for a full and a thermally-driven wind model, the timescale for cosmic ray diffusion and for synchrotron losses. The dominant loss mechanism in this regime is due to the full wind model, while cosmic ray diffusion would imply a transition for lower star formation rates. Injection timescales longer than the characteristic timescales for losses may induce significant fluctuations in the non-thermal radio emission. We note that the cosmic ray diffusion timescale implicitly assumes an observed frequency of $1$~GHz, while higher-frequency observations may probe more energetic cosmic rays with shorter diffusion times.}
\label{timescale20}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{timescale.eps}
\caption{{Same as Fig.~\ref{timescale20}, but with a rotational velocity of $40$~km/s.}}
\label{timescale40}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{timescale_80.eps}
\caption{{Same as Fig.~\ref{timescale20}, but with a rotational velocity of $80$~km/s.}}
\label{timescale80}
\end{center}
\end{figure*}
The injection of turbulent energy is closely linked to the formation of massive stars with at least $8$~M$_\odot$, which have a typical lifetime of about $55$~million years. The formation rate of massive stars is given as\begin{equation}
\dot{M}_{\rm sf,hm}=\epsilon_{hm}\dot{M}_{\rm SFR}=\epsilon_{hm}R_{\rm disk}^2\pi C\Sigma^N,
\end{equation}
with $\epsilon_{hm}$ the mass fraction of massive stars introduced above. {The characteristic timescale between the formation of two massive stars is then given as}\begin{equation}
\tau_{\rm hm}=\frac{8\ M_\odot}{\dot{M}_{\rm sf,hm}}=\frac{8\ M_\odot}{\epsilon_{hm}R_{\rm disk}^2\pi C\Sigma^N},\label{sfhm}
\end{equation}
corresponding to the characteristic timescale of turbulent energy injection. {The resulting scaling with $\Sigma_{\rm SFR}^{-1}\propto\Sigma^{-N}$ is directly visible in Fig.~\ref{sfr_diss}.} Now, a sufficient amount of turbulence in the galaxy can be maintained as long as $\tau_{\rm hm}<\tau_{\rm diss}$, implying that the turbulent energy is efficiently replenished by star formation. In the opposite case with $\tau_{\rm hm}>\tau_{\rm diss}$, the turbulence in the galaxy will decay before the next injection event. The latter implies the decay of turbulence and of the magnetic fields, so that a relation such as $B\propto \Sigma_{\rm SFR}^{1/3}$ cannot be maintained. In particular, one may expect a significant increase of the scatter, depending on the state in which the dwarf galaxy is observed.
From this condition, one can derive a critical gas surface density above which the turbulence injection timescale remains sufficiently small:\begin{equation}
\Sigma_{\rm crit}=\left( \frac{8\ M_\odot\Omega_K}{\epsilon_{hm}R_{\rm disk}^2\pi C} \right)^{1/N}.
\end{equation}
Using the Kennicutt-Schmidt-relation (Eq.~\ref{Kennicutt}), the latter can be translated into a critical star formation surface density, yielding\begin{equation}
\Sigma_{\rm SFR, crit}=\frac{8\ M_\odot\Omega_K}{\epsilon_{hm}R_{\rm disk}^2\pi}.\label{SFRcrit}
\end{equation}
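A quick numerical evaluation of Eq.~(\ref{SFRcrit}) for the reference parameters used below ($R_{\rm disk}=0.5$~kpc, $v_{\rm rot}=40$~km/s, $\epsilon_{hm}=0.05$, with $\Omega_K=v_{\rm rot}/R_{\rm disk}$) reproduces the critical value of order $10^{-5}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$:

```python
import math

# Sketch: numerical evaluation of the critical star formation surface
# density of Eq. (SFRcrit) for the reference case discussed below.

KPC_KM = 3.086e16    # km per kpc
YR_S = 3.156e7       # seconds per year

def sigma_sfr_crit(v_rot_kms, r_disk_kpc, eps_hm=0.05):
    """Eq. (SFRcrit) in Msun yr^-1 kpc^-2, with Omega_K = v_rot/R_disk."""
    omega_k = v_rot_kms / (r_disk_kpc * KPC_KM) * YR_S   # yr^-1
    return 8.0 * omega_k / (eps_hm * r_disk_kpc**2 * math.pi)

print(sigma_sfr_crit(40.0, 0.5))   # ~2e-5, i.e. of order 1e-5
```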
For a preliminary estimate, we adopt here a radius of $R_{\rm disk}=0.5$~kpc for the size of the star-forming region, and we investigate the turbulence injection and dissipation timescales for dwarf galaxies with rotational velocities of $20$~km/s, $40$~km/s and $80$~km/s. The critical star formation surface density is found to be $\sim10^{-5}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, with only a weak dependence on the rotational velocity. The more detailed results are given in Fig.~\ref{sfr_diss}. We further explore the impact of different power-law slopes $N$ in the Kennicutt-Schmidt relation, as shown in Fig.~\ref{sfr_var}, and find only a weak dependence on the form of the relation. The same is true for changes in the normalization of the relation by up to a factor of 10.
\subsubsection{Timescales for thermal emission}
Our considerations so far have predominantly concerned the dynamics in the galaxy and whether they can maintain a power-law relation between star formation and the magnetic field strength. However, to maintain the observed far-infrared - radio correlation, the star formation rate needs to be translated into thermal emission from dust grains, while the radio emission is due to cosmic ray synchrotron emission in the magnetic field. The infrared emission is due to thermal emission of dust grains, and the corresponding timescale is the cooling time of the dust. It can be shown that the latter is almost independent of the grain size, and mostly depends on the dust temperature \citep{Krugel08}. Assuming a dust temperature of $10$~K, the cooling time corresponds to $\sim10^4$~yrs, and is even shorter for larger dust temperatures. The latter implies that the radiation of massive stars deposited onto dust grains is radiated away very quickly. However, the thermal energy of the grains is replenished during the lifetime of the massive stars, which is of the order $50\times10^6$~yrs for a star with $8$ solar masses.
Considering Figs.~\ref{sfr_diss} and \ref{sfr_var}, we find that the timescale for massive star formation becomes longer than the lifetime of a massive star for a star formation surface density of $\sim3\times10^{-5}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, considering our reference model with $R_{\rm disk}=0.5$~kpc. We can further generalize this requirement for arbitrary sizes of the star forming region, yielding a critical star formation surface density for the thermal emission as\begin{equation}
\Sigma_{\rm SFR, therm}=\frac{8\ M_\odot}{5\times10^7\ \mathrm{yr}\ \epsilon_{hm}\pi R_{\rm disk}^2}\simeq10^{-6}\ M_\odot\ \mathrm{kpc}^{-2}\ \mathrm{yr}^{-1}\left(\frac{\epsilon_{hm}}{0.05}\right)^{-1}\left(\frac{R_{\rm disk}}{\rm kpc}\right)^{-2}.\label{thermal}
\end{equation}
{For star formation surface densities lower than $10^{-6}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, assuming a star-forming region of $1$~kpc and $\epsilon_{hm}\sim0.05$}, a constant thermal emission cannot be maintained, leading to strong fluctuations in the far-infrared - radio correlation. A steepening of the initial mass function (IMF) at very low star formation rates may somewhat change this transition, and in that case a breakdown of continuous thermal emission might occur even earlier. Understanding such an effect however requires a better understanding of star formation in this regime.
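As a sketch, the critical star formation surface density for continuous thermal emission follows from equating the massive-star formation timescale (Eq.~\ref{sfhm}) to the $\sim5\times10^7$~yr lifetime of a massive star:

```python
import math

# Sketch: critical star formation surface density for continuous
# thermal emission, i.e. the rate at which the time between
# massive-star formation events equals the ~5e7 yr stellar lifetime.

def sigma_sfr_therm(r_disk_kpc, eps_hm=0.05, t_life_yr=5e7):
    """Msun yr^-1 kpc^-2, from 8 Msun / (t_life eps_hm pi R^2)."""
    return 8.0 / (t_life_yr * eps_hm * math.pi * r_disk_kpc**2)

print(sigma_sfr_therm(1.0))   # ~1e-6 for a 1 kpc star-forming region
```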
From a comparison with Eq.~\ref{SFRcrit}, we note that the condition becomes relevant for $\Omega_{K}^{-1}>5\times10^7$~yr, as then the continuous thermal emission breaks down while the relation $B\propto\Sigma_{\rm SFR}^{1/3}$ is still maintained. In the opposite case, a breakdown of the relation between $B$ and $\Sigma_{\rm SFR}$ may occur while still having a steady thermal emission. The rotation rate of the galaxy is thus a key parameter in regulating this transition.
\subsubsection{Timescales for radio emission}\label{radioloss}
In the following, we will assess the characteristic timescales for radio emission and cosmic ray energy losses and their impact on the far-infrared - radio correlation. The timescale for the synchrotron emission can be expressed as \citep{Murphy09}\begin{equation}
\frac{\tau_{\rm sync}}{\rm yr}=1.4\times10^9\left( \frac{\nu_c}{\rm GHz} \right)^{-1/2}\left( \frac{B}{\mu G} \right)^{-3/2},
\end{equation}
where $\nu_c$ is the characteristic frequency of emission of cosmic rays with an energy $E$. These quantities are related via \citep{Murphy09}\begin{equation}
\frac{\nu_c}{\rm GHz}=1.3\times10^{-2}\left( \frac{B}{\mu G} \right)\left(\frac{E}{\rm GeV} \right)^2.\label{nuc}
\end{equation}
In the following, we will consider the synchrotron emission of cosmic rays with a characteristic frequency of $1$~GHz, and we evaluate the magnetic field strength assuming the observed relation $B\propto\Sigma_{\rm SFR}^{1/3}$ (Eq.~\ref{BSFR}). This timescale, along with other characteristic timescales introduced below, is shown in Figs.~\ref{timescale20}-\ref{timescale80} for dwarf galaxies with a star-forming region of $R_{\rm disk}=0.5$~kpc and rotational velocities of $20$~km/s, $40$~km/s and $80$~km/s. In particular, the synchrotron timescale does not depend on the amount of rotation; it decreases with $\Sigma_{\rm SFR}$ and is considerably larger than the other timescales. We can therefore conclude that the cosmic ray abundance will not be significantly depleted via synchrotron emission.
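For illustration, the cosmic-ray energy emitting at $1$~GHz and the corresponding synchrotron loss time can be evaluated from the two expressions above; the field strength $B=8\,\mu$G is an assumed example value:

```python
# Sketch: cosmic-ray energy emitting at nu_c (inverted Eq. nuc) and the
# synchrotron loss time, for an assumed example field of B = 8 muG.

def cr_energy_gev(nu_ghz, b_mug):
    """Invert nu_c = 1.3e-2 (B/muG)(E/GeV)^2 for E."""
    return (nu_ghz / (1.3e-2 * b_mug)) ** 0.5

def tau_sync_yr(nu_ghz, b_mug):
    """Synchrotron loss time in yr."""
    return 1.4e9 * nu_ghz ** (-0.5) * b_mug ** (-1.5)

print(cr_energy_gev(1.0, 8.0))   # ~3 GeV
print(tau_sync_yr(1.0, 8.0))     # ~6e7 yr
```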
An additional effect which can be relevant is given by the inverse Compton losses of the non-thermal electrons. Considering the inverse Compton scattering due to the cosmic microwave background (CMB), it can be shown that the latter becomes important at a critical field strength \citep{Murphy09, Schleicher13b} \begin{equation}
B_{IC}=3.25\,\mu\mathrm{G}\left(1+z\right)^2.\label{BIC}
\end{equation}
As the magnetic field strength $B$ scales approximately as $\Sigma_{\rm SFR}^{1/3}$ (cf. Eq.~\ref{BSFR}), we expect that inverse Compton losses become relevant in dwarf galaxies. Comparing Eq.~(\ref{BSFR}) with Eq.~(\ref{BIC}), these losses are expected to become dominant for star formation surface densities below $0.005$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$. In this regime, the expected synchrotron flux should thus be corrected by a factor $\tau_{IC}/\tau_{\rm sync}\propto B^2\propto \Sigma_{\rm SFR}^{2/3}$ \citep{Murphy09}, implying a steeper decrease of the non-thermal radio emission at very low star formation rates.
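The quoted transition can be recovered by equating $B_{IC}$ with the field strength from Eq.~(\ref{BSFR}); a minimal sketch:

```python
# Sketch: star formation surface density below which inverse Compton
# losses on the CMB dominate, from B_IC = 3.25 muG (1+z)^2 combined
# with the B-Sigma_SFR relation quoted above.

def sigma_sfr_ic(z=0.0, c_b=8.0, sigma_ref=0.1):
    """Solve C_B (Sigma/sigma_ref)^(1/3) = B_IC for Sigma."""
    b_ic = 3.25 * (1.0 + z) ** 2
    return sigma_ref * (b_ic / c_b) ** 3

print(sigma_sfr_ic())   # ~7e-3, consistent with the quoted ~0.005
```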
In the presence of further cosmic ray loss mechanisms with timescales shorter than the characteristic timescale for the injection, the cosmic rays will be depleted very efficiently, inducing an even steeper scaling relation, along with significant fluctuations in the non-thermal radio emission with peaks occurring at the events of cosmic ray injection. In particular, the cosmic rays will be depleted through diffusion processes in the interstellar medium. Typical values of the diffusion coefficients range from $3\times10^{27}$~cm$^2$~s$^{-1}$ \citep{Mulcahy14} to $2\times10^{29}$~cm$^2$~s$^{-1}$ \citep{Heesen09}, and may depend both on the magnetic field strength and the alignment of the magnetic fields. We adopt here a typical diffusion coefficient of $D_E=2\times10^{28}$~cm$^2$~s$^{-1}$, which is characteristic for cosmic ray energies of $1$~GeV and which allows us to assess the potential relevance of the diffusion \citep{Murphy09}. The characteristic diffusion timescale of the cosmic rays is then given as\begin{equation}
\frac{\tau_{\rm esc}}{\mathrm{yr}}=1.5\times10^7\left( \frac{H}{\rm kpc} \right)^2,
\end{equation}
where we assumed that the shortest pathway for the cosmic rays to diffuse out of the galaxy is along the disk height. Assuming observations at a fixed frequency of $1$~GHz, the energy of the cosmic rays providing the main contribution to the synchrotron emission is expected to weakly increase with decreasing magnetic field strength, potentially increasing the diffusion coefficient by a factor of 2-3. We neglect this here in light of the overall uncertainties. Another potential uncertainty in this timescale corresponds to the cosmic ray streaming instability \citep{McKenzie84}, as the typical Alfv\'en time \begin{equation}
t_A\sim10^7\,\mathrm{yr}\left( \frac{H}{\mathrm{kpc}} \right)\left( \frac{v_A}{100\,\mathrm{km/s}} \right)^{-1}
\end{equation}
is comparable to the escape time by diffusion. The interplay of these processes should thus be treated in more detail via numerical simulations.
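As a units cross-check on the numerical prefactors of the diffusion and Alfv\'en timescales above (assuming the quoted $D_E=2\times10^{28}$~cm$^2$~s$^{-1}$ and $v_A=100$~km/s):

```python
# Sketch: consistency check of the numerical prefactors in the
# diffusion and Alfven timescales above, in cgs units.

KPC = 3.086e21       # cm
YR = 3.156e7         # s

def tau_esc_yr(h_kpc, d_cm2s=2e28):
    """Diffusion time H^2 / D in years."""
    return (h_kpc * KPC) ** 2 / d_cm2s / YR

def t_alfven_yr(h_kpc, v_a_kms=100.0):
    """Alfven crossing time H / v_A in years."""
    return h_kpc * KPC / (v_a_kms * 1e5) / YR

print(tau_esc_yr(1.0))    # ~1.5e7 yr, matching the quoted prefactor
print(t_alfven_yr(1.0))   # ~1e7 yr
```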
With our simplified assumption, the diffusion timescale here depends on the rotational velocity, as the latter influences the scale height of the disk. For a rotational velocity of $20$~km/s, it is evident from Fig.~\ref{timescale20} that the diffusion timescale becomes shorter than the injection timescale at a star formation surface density of $\sim7\times10^{-7}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$. In case of a rotational velocity of $40$~km/s, this transition occurs at about $10^{-5}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$ (Fig.~\ref{timescale40}), and for a rotational velocity of $80$~km/s at about $10^{-4}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$ (Fig.~\ref{timescale80}). The rotational velocity therefore has a strong impact on the transition, predominantly because faster rotation implies a flatter structure, potentially enhancing the escape. We note that the plots showing the diffusion timescale implicitly assume an observed frequency of $1$~GHz. Higher-frequency observations could probe more energetic cosmic rays, implying shorter diffusion times.
From a comparison with Eq.~\ref{sfhm}, we can derive a critical star formation surface density below which the cosmic ray diffusion becomes relevant. In this regime, the cosmic rays will be strongly depleted after the ejection event, implying that the radio emission cannot be maintained. The critical rate is given as\begin{equation}
\Sigma_{\rm SFR, diff}=\frac{8\ M_\odot}{1.5\times10^7\ \mathrm{yr}\times\epsilon_{hm}R_{\rm disk}^2\pi (H/\mathrm{kpc})^2},\label{SFRdiff}
\end{equation}
implying a decreasing critical star formation surface density for larger disk heights. Observationally, the latter may be estimated from a measurement of the turbulent velocity $v_t$ and the rotation rate $\Omega_K$, thus yielding $H\sim v_t/\Omega_K$. We further note that this process will depend on the cosmic ray energy, as more energetic cosmic rays have larger diffusion coefficients and hence shorter diffusion times. Observations at different frequencies may therefore allow one to disentangle the effect of cosmic ray diffusion and to probe whether a relation between the star formation rate and the magnetic field is still maintained. From a comparison of Eq.~\ref{SFRdiff} with Eq.~\ref{SFRcrit}, a critical point corresponds to the regime where $\Omega_K^{-1}=1.5\times10^7$~yr\ $(H/\mathrm{kpc})^2$. Using $H\sim v_t/\Omega_K$, the latter yields a critical transition at \begin{equation}
\Omega_{\rm K,diff}\sim1.7\times10^{-11} \left(\frac{v_t}{\mathrm{km/s}}\right)^2\ \mathrm{yr}^{-1},\label{omegadiff}
\end{equation}
or, focusing on the dependence of the scale height of the disk,\begin{equation}
\Omega_{\rm K,diff}=\frac{1}{1.5\times10^7\mathrm{\ yr}\times(H/\mathrm{kpc})^2}.\label{omegadiffdisk}
\end{equation}
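As a consistency check, eliminating $H$ from Eq.~(\ref{omegadiffdisk}) via $H\sim v_t/\Omega_K$ gives $\Omega_K=(1.5\times10^7\,\mathrm{yr})\,v_t^2/\mathrm{kpc}^2$, whose numerical coefficient can be compared to Eq.~(\ref{omegadiff}) up to rounding of the unit conversions:

```python
# Sketch: cross-check of Eq. (omegadiff) against Eq. (omegadiffdisk)
# after eliminating H via H ~ v_t / Omega_K; units handled in cgs.

KPC = 3.086e21   # cm
YR = 3.156e7     # s

def omega_diff_per_yr(v_t_kms):
    """Critical angular velocity in yr^-1, from
    Omega_K = (1.5e7 yr) v_t^2 / kpc^2."""
    omega_per_s = (1.5e7 * YR) * (v_t_kms * 1e5) ** 2 / KPC**2
    return omega_per_s * YR   # convert 1/s -> 1/yr

coeff = omega_diff_per_yr(1.0)
print(coeff)   # ~1.6e-11, consistent with the quoted ~1.7e-11
```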
For smaller angular velocities, the continuous radio emission may break down due to cosmic-ray diffusion even if the correlation between the magnetic field and the star formation rate is still maintained. This behavior can be inverted in the regime of fast rotation, where cosmic ray diffusion losses are expected to become relevant only after the breakdown of the correlation between the star formation rate and the magnetic field. We may further compare with the expression for the critical star formation rate for thermal emission (Eq.~\ref{thermal}), leading to the critical condition that\begin{equation}
\left( \frac{(v_t/\Omega_K)_{\rm crit}}{\rm kpc} \right)\sim1.8.
\end{equation}
In the regime of large disk heights, the thermal emission thus breaks down before the radio emission, while the inverse behavior is found for very thin disks.
Finally, we will consider adiabatic expansion losses due to galactic winds. The characteristic timescale can be estimated via \citep{Murphy09}\begin{equation}
\frac{\tau_{ad}}{\mathrm{yr}}=10^9\left(\frac{H}{\rm kpc} \right)\left( \frac{v_{\rm wind}}{\rm km/s} \right)^{-1}.
\end{equation}
For the latter, we employ the detailed wind model outlined in subsection~\ref{windmodel}, and we consider both the effects of thermally-driven winds (Eq.~\ref{windtherm}) and our full model including magnetic field and cosmic-ray pressure (Eq.~\ref{wind}). The effect of the wind model again has a strong dependence on the disk height of the galaxy, and thus its rotational velocity. In the reference case with a rotational velocity of $20$~km/s (Fig.~\ref{timescale20}), cosmic ray diffusion is less efficient than the thermal wind, though both agree within an order of magnitude near the star formation surface density where they become comparable to the timescale for injection events. Including magnetic and cosmic ray pressure enhances the wind and implies a depletion of the cosmic rays already at a star formation rate of $\sim10^{-5}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$. For a rotational velocity of $40$~km/s (Fig.~\ref{timescale40}), the effects of the thermal wind model and cosmic ray diffusion are essentially comparable, while magnetically driven winds would be even more efficient. Finally, at rotational velocities of $80$~km/s (Fig.~\ref{timescale80}), cosmic ray diffusion is comparable to the effects of the full wind model.
Our results thus suggest that adiabatic losses by the wind can be particularly relevant in the regime of low rotation and large disk heights. From these results, it appears that the presence of a magnetically driven wind could change the critical star formation rate for cosmic ray depletion by at most an order of magnitude.
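As a final cross-check, the prefactor of the adiabatic-loss timescale quoted above corresponds to $H/v_{\rm wind}$ expressed in years:

```python
# Sketch: prefactor check for the adiabatic-loss timescale above,
# tau_ad = H / v_wind expressed in years.

KPC_KM = 3.086e16   # km per kpc
YR_S = 3.156e7      # s per yr

def tau_ad_yr(h_kpc, v_wind_kms):
    return h_kpc * KPC_KM / v_wind_kms / YR_S

print(tau_ad_yr(1.0, 1.0))   # ~1e9 yr, as in the quoted expression
```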
\subsubsection{Critical star formation surface densities}\label{criticalsfr}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{sfrcrit.eps}
\caption{A comparison of the critical star formation surface density to maintain the relation between star formation and the magnetic field, given via Eq.~\ref{SFRcrit}, with the critical star formation rate to maintain thermal emission, given via Eq.~\ref{thermal}. {Critical star formation surface densities are given as functions of the size of the star-forming region.} Explored are characteristic parameters for the rotational velocity of $20, 40, 80$~km/s. For large star-forming regions, the critical star formation rate to maintain thermal emission eventually exceeds the critical star formation rate to maintain the correlation between star formation and the magnetic field. A comparison with Fig.~\ref{sfrdiff} however shows that the radio emission will break down even earlier in this regime due to cosmic-ray diffusion losses.}
\label{sfrcrit}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{sfrdiff.eps}
\caption{A comparison of the critical star formation surface density for cosmic ray diffusion, given via Eq.~\ref{SFRdiff}, with the critical star formation rate to maintain the relation between star formation and magnetic field. {Critical star formation surface densities are given as functions of the size of the star-forming region.} We adopt here a characteristic rotational velocity of $20, 40, 80$~km/s, and explore values of the disk scale height of $0.5, 1, 2$~kpc. The largest critical values can be obtained for more compact star-forming regions. The relative importance of both processes is found to sensitively depend on both quantities. We note that the critical star formation rate for cosmic ray diffusion implicitly assumes observations at $1$~GHz, while higher-frequency observations will probe more energetic cosmic rays, potentially implying a larger critical star formation rate.}
\label{sfrdiff}
\end{center}
\end{figure*}
From the considerations above, we have derived a set of critical star formation surface densities which describe the transition between different physical regimes in the dwarf galaxy and its ISM. Equation~(\ref{SFRcrit}) describes the {condition} under which the turbulence injection time remains smaller than its decay time, implying that turbulence and turbulent magnetic field structures can be maintained and will be correlated with the star formation rate in the galaxy. This timescale depends in particular on the size of the star-forming region $R_{\rm disk}$ and the angular velocity $\Omega_K$, which can be expressed as $v_{\rm rot}/R_{\rm disk}$.
Equation~(\ref{thermal}) denotes the critical star formation rate above which continuous thermal emission can be maintained in the galaxy, as the characteristic timescale for massive star formation is shorter than the typical lifetime of massive stars. This timescale depends essentially on the size of the star-forming region. Finally, equation~(\ref{SFRdiff}) denotes the critical star formation surface density for which the timescale for diffusion losses becomes comparable to the injection timescale of cosmic rays. This condition is therefore important to allow for continuous radio emission. The critical rate depends in particular on the size of the star-forming region and the disk height $H$.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{lum.eps}
\caption{Ratio of the non-thermal radio ($\sim2.64$~GHz) to thermal infrared ($\sim60$~$\mu$m) surface brightness, normalized for NGC~4449 adopting the values of \citet{Chyzy11}, as a function of the star formation surface density. We assume a size of the star-forming region of $0.5$~kpc, and explore values for the disk height of $0.1, 0.25, 0.5$~kpc, which influence the critical star formation surface density concerning the relevance of cosmic-ray diffusion losses (Eq.~\ref{SFRdiff}). The transition illustrated here is relevant in the regime of slow rotation defined via Eq.~(\ref{omegadiffdisk}), {where cosmic-ray diffusion effects become relevant before the relation between magnetic field strengths and star formation surface density breaks down}. }
\label{lum}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{lum_therm.eps}
\caption{The total to thermal radio flux density ratio as a function of the star formation rate. The evolution of the slope is evaluated from Eqs.~(\ref{spiral}, \ref{redCR}), assuming a transition at the critical star formation rate given via Eq.~\ref{SFRdiff}. We assume a size of the star-forming region of $0.5$~kpc, and explore values for the disk height of $0.1, 0.25, 0.5$~kpc. The normalization of the flux ratios assumes a thermal-to-non-thermal flux ratio of $8\%$ for a star formation rate of $0.1$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$. The plot assumes a typical radio frequency of $\sim1$~GHz.}
\label{lum_therm}
\end{center}
\end{figure*}
To assess the relative importance of the different critical star formation surface densities, they are plotted as a function of $R_{\rm disk}$ in Fig.~\ref{sfrcrit}-\ref{sfrdiff}, considering typical rotational velocities $v_{\rm rot}=20, 40, 80$~ km/s and disk scale heights $H=0.5, 1, 2$~kpc. In particular for large star-forming regions, the dominant mechanism leading to a break-down of the far-infrared - radio correlation is cosmic ray diffusion, however with critical values of $\sim10^{-6}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, which are difficult to observe. The critical values are significantly enhanced for more compact star-forming regions, scaling approximately as $R_{\rm disk}^{-2}$, with characteristic values of about $\sim10^{-4}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$ for $R_{\rm disk}=0.5$~kpc. In this case, the critical values due to cosmic ray diffusion and due to the comparison of turbulence injection and dissipation time are about equally important. The critical star formation rate to maintain thermal emission, on the other hand, never appears to be responsible for a critical transition, as the lifetime of massive stars is still long enough for other processes to become relevant earlier.
We note again that the critical star formation surface density for cosmic ray diffusion implicitly assumes observations at a frequency of $1$~GHz. Observations at higher frequencies will probe more energetic cosmic rays and imply shorter timescales for cosmic ray diffusion, potentially increasing the critical star formation rate to maintain the injection events at a high enough rate.
\section{The slope of the far-infrared - radio correlation}\label{slope}
The synchrotron emission from an ensemble of relativistic electrons of energy $E$ at a frequency $\nu$ is generally given as\begin{equation}
L_s(\nu)d\nu=\frac{dE}{dt}N(E)dE,
\end{equation}
where $dE/dt$ describes the synchrotron emission of the individual electrons and $N(E)$ denotes the number of electrons at energy $E$. The losses of the individual electrons are directly proportional to the magnetic energy density $U_B=B^2/8\pi$.
In spiral galaxies, the injection timescale of cosmic rays is considerably smaller than the timescale for synchrotron losses, cosmic ray diffusion, losses by winds or other effects like inverse Compton scattering \citep[see e.g.][]{Murphy09}. In this regime, the amount of cosmic rays that can be maintained in the galaxy corresponds to equipartition with the magnetic field, implying $U_B=U_{CR}$, as for larger amounts of cosmic rays, the cosmic ray pressure will exceed the pressure from the magnetic field, leading to expansion and an effective loss of the cosmic rays from the galaxy. In this regime, we thus have $N(E)\propto B^2/8\pi$, and hence $L(\nu)\propto B^4\propto \Sigma_{\rm SFR}^{4/3}$, where we assumed that $B\propto\Sigma_{\rm SFR}^{1/3}$. In the more general case with a synchrotron spectral index $\alpha$ different from $1$, one may expect a scaling as $B^{3+\alpha}$, while we note that the currently observed value $\alpha=0.7\pm0.2$ is consistent with the above relation in the $2\sigma$ range. The ratio of non-thermal to thermal emission is therefore expected to scale as \begin{equation}
\left(\frac{L_s}{L_{th}}\right)_{\rm spiral}\propto \Sigma_{\rm SFR}^{1/3},\label{spiral}
\end{equation}
as the thermal emission is expected to be proportional to the star formation rate. In this derivation, we have so far assumed that synchrotron emission is the main loss mechanism for the cosmic rays, and we assumed that galaxies are optically thick to the UV emission of the stars, implying that the energy is trapped and converted to thermal emission of the dust grains. As we have shown in section~\ref{radioloss}, inverse Compton losses may start to become relevant at star formation surface densities of $\sim0.005$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, leading to an additional factor $\Sigma_{\rm SFR}^{2/3}$ in the scaling of the non-thermal radio emission at very low star formation rates.
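As a consistency check (our own sketch, not part of the derivation above), the exponents can be chased through explicitly under the stated assumptions: $B\propto\Sigma_{\rm SFR}^{1/3}$, equipartition $N(E)\propto U_B\propto B^2$, synchrotron losses $\propto U_B$, and $L_{th}\propto\Sigma_{\rm SFR}$:

```python
# Chasing the scaling exponents in the equipartition regime.
# Assumptions (from the text): B ∝ Σ_SFR^(1/3); N(E) ∝ U_B ∝ B^2;
# synchrotron losses dE/dt ∝ U_B ∝ B^2; thermal emission L_th ∝ Σ_SFR.
from fractions import Fraction

b_exp = Fraction(1, 3)          # exponent of B in terms of Σ_SFR
n_exp = 2 * b_exp               # N(E) ∝ B^2
ls_exp = 2 * b_exp + n_exp      # L_s ∝ (dE/dt) · N(E) ∝ B^4 → Σ_SFR^(4/3)
ratio_exp = ls_exp - 1          # (L_s / L_th) ∝ Σ_SFR^(1/3)

print(ls_exp, ratio_exp)        # 4/3 1/3
```

This recovers the logarithmic slope of $4/3$ for $L_s$ and $1/3$ for the non-thermal to thermal ratio.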
To take into account the potential effect of optically thin dust opacities for the UV, we adopt here a typical UV dust opacity of $\kappa_d\sim300$~cm$^2~g^{-1}$ \citep{Tielens05}. The effects of the optically thin regime thus start to become relevant below surface densities of $\sim1.5\times10^7$~M$_\odot$~kpc$^{-2}$. From the Kennicutt-Schmidt relation (Eq.~\ref{Kennicutt}) with $N\sim3/2$, the latter corresponds to a star formation surface density of $\sim0.002$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$. Within the uncertainties present here, this correction thus becomes relevant in the same regime in which the inverse Compton losses also become relevant. To correct for this effect, the thermal emission should be multiplied by a factor proportional to the dust UV opacity $\tau_{d}\propto \kappa_d \Sigma\propto \kappa_d \Sigma_{\rm SFR}^{1/N}$. In the typical case of $N=3/2$, we thus have a correction factor of $\Sigma_{\rm SFR}^{2/3}$, and the corrections from both effects essentially cancel, so we expect that Eq.~(\ref{spiral}) is still valid in this regime. Even for the extreme cases $N=1$ or $N=2$, we note that the corrections would be small, corresponding to a dependence of the form $\Sigma_{\rm SFR}^{\pm1/6}$, i.e. a very weak change in the dependence on $\Sigma_{\rm SFR}$. We note that this canceling can also be understood in terms of a decreasing efficiency of star formation tracers, as previously described by \citet{Bell03}.
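The quoted surface density follows from a back-of-the-envelope conversion (our sketch; the only input is the assumed opacity $\kappa_d\sim300$~cm$^2$~g$^{-1}$), setting $\tau_d=\kappa_d\Sigma=1$:

```python
# tau_d = kappa_d * Sigma = 1  →  Sigma = 1 / kappa_d, converted to Msun/kpc^2.
KAPPA_D = 300.0            # assumed UV dust opacity [cm^2 g^-1]
CM_PER_KPC = 3.086e21      # cm per kpc
G_PER_MSUN = 1.989e33      # grams per solar mass

sigma_cgs = 1.0 / KAPPA_D                                # [g cm^-2]
sigma_msun_kpc2 = sigma_cgs * CM_PER_KPC**2 / G_PER_MSUN
print(f"{sigma_msun_kpc2:.1e}")                          # ~1.6e7 Msun kpc^-2
```

This reproduces the $\sim1.5\times10^7$~M$_\odot$~kpc$^{-2}$ quoted above to within rounding.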
{The scaling relation given in Eq.~(\ref{spiral})} is both expected (and observed) for the ratio of non-thermal to thermal radio emission, as well as for the ratio of non-thermal radio emission to thermal emission in the infrared or far-infrared, as the thermal component is always proportional to the star formation rate. The latter implies a non-linear far-infrared - radio correlation with a logarithmic slope of $4/3$, i.e. $L_s\propto L_{th}^{4/3}$ \citep{Niklas97b}. From the shape of Eq.~(\ref{spiral}), it is further evident that the ratio of non-thermal to thermal emission decreases with decreasing star formation rates. This is consistent with recent observations by \citet{Roychowdhury12} of thermal radio fractions of up to $50\%$ in dwarf galaxies, in contrast to typical thermal fractions of $\sim8\%$ in spirals \citep{Niklas97, Murphy06}. Potential deviations from such a trend could occur due to a steepening of the IMF at very low star formation rates, which would reduce the thermal emission, but also the injection of the cosmic rays. We however assume here that such effects do not yet occur in the regime where equipartition between cosmic rays and magnetic fields can be efficiently maintained.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=1]{omegacrit.eps}
\caption{The critical rotation rate $\Omega_{\rm K, diff}$ of the dwarf galaxy as a function of the disk scale height $H$, as given in Eq.~(\ref{omegadiffdisk}), to distinguish between the regime of low star formation and high rotation and the regime of low star formation and low rotation. In the latter regime, we still expect a valid but steeper far-infrared - radio correlation, while in the limit of high rotation, the correlation between star formation and the magnetic field will break down before cosmic ray diffusion becomes relevant. The critical rotation rate adopted here implicitly assumes observations at $1$~GHz due to the diffusion timescale of the cosmic rays. Observations at higher frequencies will probe more energetic electrons, implying shorter diffusion times, and a larger critical rotation rate.}
\label{omegacrit}
\end{center}
\end{figure*}
While a change of the thermal fraction may already be expected on the grounds given above, we will further illustrate below that the slope of the far-infrared - radio correlation, or correspondingly, the slope of the thermal to non-thermal radio emission, will also change with decreasing star formation rate. For this purpose, we need to consider the regime where cosmic ray injection becomes less efficient, so that a state of equipartition with the magnetic field is not reached. In particular, when the timescale for cosmic ray losses via diffusion or galactic winds becomes shorter than the injection timescale, i.e. the formation timescale of massive stars, the cosmic ray abundance in the galaxy will start decreasing. As we have shown in section~\ref{radioloss}, cosmic ray diffusion and adiabatic losses through winds yield roughly similar results, so we will in the following focus on losses via cosmic ray diffusion, which are relevant below the critical star formation surface density given in Eq.~(\ref{SFRdiff}), with a strong dependence both on the size of the star-forming region $R_{\rm disk}$ and the disk scale height $H$.
In this regime, the amount of cosmic rays will be dictated by the injection, implying that {their number $N(E)$ scales as} $N(E)\propto \Sigma_{\rm SFR}$. In this case, the ratio of non-thermal to thermal emission scales as
\begin{equation}
\left(\frac{L_s}{L_{th}}\right)_{\rm CR\ diff}\propto \frac{U_B \Sigma_{\rm SFR}}{\Sigma_{\rm SFR}}\propto B^2.\label{redCR}
\end{equation}
Under these conditions, the magnetic field strength $B$ depends on the rotation rate of the galaxy. As shown in section~\ref{radioloss}, there is a critical rotation rate $\Omega_{\rm K, diff}$ (Eq.~\ref{omegadiffdisk}) below which the relation between magnetic field and star formation rate is still maintained, while this relation is no longer valid for higher values of the rotation rate. In terms of the critical star formation rates derived above, one may thus distinguish between the following regimes:\begin{itemize}
\item high star formation rates with $\Sigma_{\rm SFR}> \Sigma_{\rm SFR,crit}$ and $\Sigma_{\rm SFR}> \Sigma_{\rm SFR,diff}$,
\item low star formation rates and low rotation, $\Sigma_{\rm SFR}> \Sigma_{\rm SFR,crit}$ but $\Sigma_{\rm SFR}< \Sigma_{\rm SFR,diff}$,
\item low star formation rates and strong rotation, $\Sigma_{\rm SFR}< \Sigma_{\rm SFR,crit}$ and $\Sigma_{\rm SFR}> \Sigma_{\rm SFR,diff}$,
\item very low star formation rates with $\Sigma_{\rm SFR}< \Sigma_{\rm SFR,crit}$ and $\Sigma_{\rm SFR}< \Sigma_{\rm SFR,diff}$,
\item very low star formation rates with $\Sigma_{\rm SFR}<\Sigma_{\rm SFR,therm}$.
\end{itemize}
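The case distinction above can be summarized as a small, purely illustrative classifier (our sketch); the three critical surface densities are treated as inputs here, and computing them from Eqs.~(\ref{SFRcrit}), (\ref{SFRdiff}) and (\ref{thermal}) is beyond this sketch:

```python
# Illustrative classifier for the regimes listed above. The critical surface
# densities are assumed to be precomputed; all values in Msun kpc^-2 yr^-1.
def regime(sfr, sfr_crit, sfr_diff, sfr_therm):
    if sfr < sfr_therm:
        return "very low SFR: no continuous thermal emission"
    if sfr >= sfr_crit and sfr >= sfr_diff:
        return "spiral-like: equipartition, L_s/L_th ~ SFR^(1/3)"
    if sfr >= sfr_crit:          # but sfr < sfr_diff
        return "low SFR, low rotation: CR diffusion limited, L_s/L_th ~ SFR^(2/3)"
    if sfr >= sfr_diff:          # but sfr < sfr_crit
        return "low SFR, high rotation: B-SFR correlation broken"
    return "very low SFR: both mechanisms broken"

print(regime(1e-3, 1e-5, 1e-4, 1e-6))   # spiral-like regime
```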
The first case is the regime of spiral galaxies, which we described in Eq.~(\ref{spiral}). In the regime of low star formation and low rotation, the ratio of non-thermal to thermal emission can be described via Eq.~(\ref{redCR}), and a correlation $B\propto\Sigma_{\rm SFR}^{1/3}$ can still be expected. In this case, we have\begin{equation}
\left(\frac{L_s}{L_{th}}\right)_{\rm low\ SF,\ low\ rot}\propto B^2\propto \Sigma_{\rm SFR}^{2/3}, \label{lowSFhrot}
\end{equation}
implying a scaling of $L_s\propto L_{th}^{5/3}$ and a change in the slope of the far-infrared - radio correlation by $1/3$ on a logarithmic scale. In this regime, the ratio thus decreases more steeply with $\Sigma_{\rm SFR}$, and the thermal fraction will be further enhanced towards low star formation rates. The latter is consistent with the current stacking results by \citet{Roychowdhury12}.
This behavior is illustrated in Fig.~\ref{lum} for a characteristic case with $R_{\rm disk}=0.5$~kpc and disk scale heights of $0.1, 0.25, 0.5$~kpc, with a normalization of the relation obtained via NGC~4449, {as the latter lies well on the $L_{2.64\ GHz}-L_{60\ \mu m}$ relation given by \citet{Chyzy11}. While our results related to cosmic ray diffusion were derived assuming a frequency of $1$~GHz, we note that the characteristic electron energy scales as $E\propto\nu_c^{1/2}$, and the cosmic ray diffusion coefficient has a weak scaling $\propto E^{1/2}$. Overall, the difference in frequency thus introduces only a factor of $1.3$ with respect to cosmic ray diffusion. We also note that our results have a predominantly illustrative character, and the typical scatter in these relations needs to be taken into account, implying a variation by at least a factor of $3$.}
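The quoted factor of $1.3$ follows from combining the two scalings (a quick check, not from the derivation itself): $E\propto\nu_c^{1/2}$ and a diffusion coefficient $\propto E^{1/2}$ give a net dependence $\propto\nu^{1/4}$:

```python
# E ∝ nu^(1/2) and D_CR ∝ E^(1/2)  →  D_CR ∝ nu^(1/4); compare 2.64 GHz to 1 GHz.
factor = (2.64 / 1.0) ** 0.25
print(round(factor, 2))    # 1.27, i.e. the "factor of 1.3" quoted above
```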
{With the assumptions adopted here,} the steepening occurs at a star formation surface density of $\sim3\times10^{-5}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, with some spread depending on the scale height of the disk. We also show the ratio of the total to thermal radio emission in Fig.~\ref{lum_therm}. For normalization purposes, we have adopted a characteristic value of $8\%$ for the ratio of thermal to non-thermal radio emission at a star formation surface density of $0.1$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$ \citep[see][for the thermal fraction in spiral galaxies]{Murphy06}. The figure shows a substantial decrease of the total to thermal emission as a function of the star formation rate, with a flattening towards a value of $1$ at very low star formation rates. The slope of this ratio changes at the critical star formation rate given in Eq.~(\ref{SFRdiff}), and the details of the transition depend on the properties of the dwarf galaxy. As the distinction between this and the subsequent regime of high rotation depends on the critical rotation rate, the latter quantity is given in Fig.~\ref{omegacrit}.
We note that the star formation rates for which we expect changes in the slope may also depend on the observed frequency, as cosmic ray diffusion becomes more efficient at larger observed frequencies, implying shorter diffusion times. High-frequency observations may thus be able to probe this transition already at higher star formation rates. At the same time, we note that the normalization of the thermal-to-non-thermal flux density ratio may be frequency dependent, and may require further exploration.
In the regime of low star formation rates and high rotation, there is no longer a correlation between star formation rate and magnetic field strength. The ratio of non-thermal to thermal emission can still be described by Eq.~(\ref{redCR}), but the magnetic field strength may be significantly decreased as a result of turbulent decay. In general, the latter is expected to significantly decrease the radio emission, while in a few cases with very recent injection, the non-thermal emission can be comparable to the case of low star formation and low rotational velocities. Quite generally, we expect here a significant amount of scatter, including many non-detections, and potentially a small number of detections of non-thermal emission.
In the case with very low star formation rates below $\Sigma_{\rm SFR,crit}$ and $\Sigma_{\rm SFR}< \Sigma_{\rm SFR,diff}$, the correlation between star formation and magnetic field is broken, and cosmic ray diffusion is very efficient, rapidly removing any cosmic rays that might be ejected. In this regime, we do not expect a relevant or detectable non-thermal component, and no signs of a far-infrared - radio correlation should be detectable. We expect that the corresponding dwarfs can be detected only via thermal emission. Continuous thermal emission is expected to eventually break down below $\Sigma_{\rm SFR}< \Sigma_{\rm SFR,therm}$.
\section{Discussion and conclusions}\label{discussions}
In this paper, we have discussed the far-infrared - radio correlation in dwarf galaxies and its potential evolution in the regime of low star formation rates. As a starting point, we have assessed under which conditions a correlation between the magnetic field strength and the star formation rate can be maintained through the continuous injection of turbulence, leading to the definition of a critical star formation rate required for the injection of a sufficient amount of turbulent energy (Eq.~\ref{SFRcrit}). The latter will ensure magnetic field amplification via the small-scale dynamo \citep[e.g.][]{Kazantsev68, Scheko02, Schober12b}, which happens on short timescales and effectively ensures that the magnetic field strength is coupled to the star formation rate. {Considering a typical size of the star-forming region of $1$~kpc, this relation will break down at critical star formation surface densities of $10^{-5}-10^{-6}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$ depending on the amount of rotation.} We also derived a critical star formation rate to ensure continuous thermal emission in the galaxy, which results from the requirement that the timescale of massive star formation should be smaller than the typical lifetime of massive stars. This criterion has been developed in Eq.~\ref{thermal}, even though our comparison has shown that it is likely not relevant in practice. {Assuming a star-forming region of $1$~kpc and a $5\%$ fraction of massive stars, the resulting star formation surface density corresponds to $\sim10^{-6}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$. }
We further explored the dominant loss mechanisms for cosmic rays in the galaxy, which have a major impact on the non-thermal radio emission and its dependence on the star formation rate. In particular, we derived a critical star formation surface density for the importance of cosmic ray diffusion losses given by Eq.~(\ref{SFRdiff}), below which the injection time of cosmic rays becomes larger than the timescale for diffusion losses, thus strongly {reducing} the amount of cosmic rays in the galaxy. {Assuming a star-forming region of $1$~kpc, these effects become relevant for star formation surface densities of $10^{-4}-10^{-6}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, depending on the scale height of the disk. Overall, the critical star formation surface density for this transition thus appears higher than the corresponding transition for the thermal emission. We have compared the cosmic-ray diffusion losses} with the losses due to galactic winds, adopting both the thermally-driven wind model by \citet{Shu05} as well as a generalized model accounting for the effect of cosmic rays and magnetic fields as potential driving agents. Depending on the amount of rotation in the galaxy, we have found that the losses from winds can become relevant at somewhat higher or lower star formation rates compared to the cosmic ray diffusion processes, but do not dramatically change the details of the transition.
From a comparison of these results, we have shown that the critical star formation surface densities for the maintenance of turbulence (Eq.~\ref{SFRcrit}) and for the relevance of cosmic-ray diffusion (Eq.~\ref{SFRdiff}) are most relevant for introducing a potential breakdown in the far-infrared - radio correlation. Which of the transitions occurs first depends on the rotation rate of the dwarf and the scale height of the disk, and can be expressed in terms of a critical rotation rate given in Eq.~(\ref{omegadiffdisk}), {corresponding to a rotation period of $1.5\times10^7$~yrs for a galaxy with a scale height of $1$~kpc. We note in particular that cosmic-ray diffusion losses become relevant before the breakdown of the magnetic field - star formation surface density relation in the case of low rotation. Whether the rotation period is above or below the critical value thus regulates the further evolution of the far-infrared - radio correlation in the regime of low star formation rates.}
We have determined four different regimes employing the conditions derived above. In the limit of high star formation rates above the critical star formation surface densities {mentioned above}, we have shown that the ratio of non-thermal radio to far-infrared / infrared luminosity should scale as $\Sigma_{\rm SFR}^{1/3}$, as we expect that the cosmic ray abundance is limited by the strength of the magnetic field, effectively enforcing equipartition, implying a scaling of the cosmic-ray abundance with $B^2$ and of the overall non-thermal emission as $B^4$ or $\Sigma_{\rm SFR}^{4/3}$, {typically above a star formation surface density of $\sim10^{-6}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$ for a star-forming region of $1$~kpc.}
For lower star formation rates in the limit of low rotation (below the critical value in Eq.~\ref{omegadiffdisk}), the correlation between magnetic field strength and star formation will still be in place, but cosmic ray diffusion losses will start becoming efficient. As a result, the cosmic ray abundance in galaxies will no longer reach equipartition with the magnetic field strength, but is instead expected to be proportional to the injection rate, i.e. the rate of star formation. In this regime, it can then be shown that the ratio of non-thermal radio to far-infrared / infrared luminosity will scale as $\Sigma_{\rm SFR}^{2/3}$, implying a steeper decrease with decreasing star formation rate. Such steepening may be expected below typical star formation surface densities of $3\times10^{-5}$~M$_\odot$~kpc$^{-2}$~yr$^{-1}$, with some dependence both on the size of the star-forming region and the scale height of the disk.
On the other hand, in the regime of low star formation rates and {rotation rates above the before-mentioned critical value}, the cosmic ray diffusion losses are not yet significant, but the correlation between the magnetic field strength and the star formation rate is expected to break down, as a steady injection of turbulence cannot be maintained, leading to the dissipation of both turbulence and magnetic fields. As a result, one may expect significant non-thermal emission only in those sources with recent injection events, and otherwise no significant emission in the vast majority of sources. Finally, for star formation rates below both the critical rates for turbulence driving and cosmic-ray diffusion, it is clear that the correlation breaks down due to both mechanisms.
Overall, we therefore expect modifications of the far-infrared - radio correlation in the regime of low star formation rates, which will be particularly relevant for future observations with the Square Kilometre Array (SKA)\footnote{Website SKA: https://www.skatelescope.org/} and its Key Science Project on ``The Origin and Evolution of Cosmic Magnetism''. Its precursors and pathfinders such as LOFAR\footnote{Website LOFAR: http://www.lofar.org/}, MeerKAT\footnote{Website MeerKAT: http://www.ska.ac.za/meerkat/} and ASKAP\footnote{Website ASKAP: http://www.atnf.csiro.au/projects/askap/} are in fact already in the process of further probing magnetic field structures in local dwarf galaxies, as in the LOFAR Key Science Project on ``Cosmic Magnetism of the Nearby Universe'' \citep{Beck13LOFAR}. Such dedicated investigations will be particularly valuable to determine the frontiers of cosmic magnetism and the origin of magnetic fields in galaxies.
\begin{acknowledgements}
We thank Chris Chyzy for valuable discussions on the rotation rates in dwarf galaxies, and Aritra Basu for valuable comments that helped to improve our manuscript. RB acknowledges support by the DFG Research Unit FOR1254. DRGS acknowledges funding through Fondecyt regular (project code 1161247) and through the ``Concurso Proyectos Internacionales de Investigaci\'on, Convocatoria 2015'' (project code PII20150171).
\end{acknowledgements}
\input{dwarf.bbl}
\end{document}
The $k$-center problem is a fundamental problem we often face when considering
complex service systems. Typical challenges include the placement of warehouses
in logistics or positioning of servers for content delivery networks. We have
previously proposed \emph{Dragoon} as an effective algorithm to approach the
$k$-center problem. This paper evaluates \emph{Dragoon} with a focus on potential
worst case behavior in comparison to other techniques. We use an evolutionary
algorithm to generate instances of the $k$-center problem that are especially
challenging for \emph{Dragoon}. Ultimately, our experiments confirm the
previous good results of \emph{Dragoon}; however, we can also reliably find
scenarios where it is clearly outperformed by other approaches.
\section{INTRODUCTION}
Imagine a manager that plans new warehouses to distribute goods to his customers.
Consider a planner of content delivery networks who has to setup servers close
to users to provide reliable access to information. Think of an emergency staff
that needs to know where to place distribution centers to supply basic goods
after a natural disaster. Complex service systems like these and many other
scenarios are instances of the $k$-center problem. Consequently, there is a big
interest in solving these problems. Especially for the last two examples, we
need reliable results in a short time. Accordingly, an appropriate optimization
approach needs to be robust and efficient. Figure \ref{fig:kcenter} shows an
abstract instance of the $k$-center problem and a solution generated by the
Dragoon algorithm. A general introduction to the $k$-center problem and respective
optimization heuristics can be found in \citeN{Kleinberg2005}.
\begin{figure}[!b]
\centering
\begin{minipage}{.4\textwidth}
\fbox{\includegraphics[width=\textwidth]{fig/before}}
\end{minipage}%
\hfill
\begin{minipage}{.15\textwidth}
$\xrightarrow{optimization} $
\end{minipage}%
\begin{minipage}{.4\textwidth}
\fbox{\includegraphics[width=\textwidth]{fig/after}}
\end{minipage}
\caption{The $k$-center problem: for a set of customers the challenge is to
determine a given number of central locations to provide a service to the
customers. \label{fig:kcenter}}
\end{figure}
In this paper we discuss Dragoon in detail, an optimization approach that has
previously been identified as an efficient optimization technique
\cite{hillmann2015c}. We focus more on the robustness aspect of the
optimization, by analyzing its worst case behavior in comparison to other
techniques. To this end we employ an evolutionary algorithm to generate problem
instances that are especially challenging for Dragoon. This idea is inspired by
a paper from \shortciteNP{nguyen2014}. They used an evolutionary algorithm to
generate images that would fool a deep neural network. The considered deep
neural network was trained to recognized certain classes of images, however, the
research group was able to generate images that were false positives. Similarly,
we try to identify instances of the $k$-center problem where the explicit and
implicit optimization assumptions of Dragoon are exploited, leading to low
quality solutions.
Our paper is structured in the following way: In the next section we briefly
introduce the $k$-center problem and its characteristics. Afterward, we present
the optimization approaches we considered. In Sections \ref{experiment-setup}
and \ref{experiment-results} we explain our experimental setup and provide the
results of our tests. Finally, we discuss the results of our study and give a
summary and short outlook.
\section{THE K-CENTER PROBLEM}
The $k$-center problem is a classic optimization problem that is known to be
NP-hard \cite{gonzales1985}. Informally, we are looking to place multiple
centers to satisfy a number of customers. Our goal is to minimize the maximal
distance of a customer to its closest center. For example, in disaster
management we want to find good locations for camps to provide services to
multiple cities. More formally, given a set of locations $V$ and number of
centers to place ($k$) and a distance function $d$ we have to determine a subset
$S \subseteq V$ with $|S| = k$. The optimization objective $D$ is the minimization of the
maximal distance of locations to their closest center (see Equation
\ref{distance}). For this paper the distance function $d$ is based on the
Euclidean distance in $R^2$.
\begin{equation}
\label{distance}
D = \min_{S \subseteq V,\, |S| = k} \; \max_{v \in V} \; \min_{s \in S} d(v,s)
\end{equation}
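Equation~\ref{distance} can be evaluated directly for a candidate set of centers; the following Python sketch (ours, for illustration) does so for points in $R^2$:

```python
import math

def kcenter_objective(V, S):
    """Maximal distance of any location in V to its closest center in S
    (Euclidean distance in R^2, i.e. the inner part of Equation (1))."""
    return max(min(math.dist(v, s) for s in S) for v in V)

# Example: four locations, a single center at the origin.
V = [(0, 0), (1, 0), (0, 1), (3, 4)]
print(kcenter_objective(V, S=[(0, 0)]))   # 5.0, set by the point (3, 4)
```

The $k$-center problem then asks for the subset $S$ of size $k$ minimizing this value.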
In this paper we consider only the placement of centers at a customer location
(Node-Placement). We do not discuss the more general Free-Placement, where
centers can be placed anywhere. The results and approaches presented in this
paper can, however, be extended to consider Free-Placement. We focus on
Node-Placement since it is oftentimes reasonable to rely on the existing
infrastructure, e.g., given network infrastructure for CDNs or existing roads
for logistics problems. Especially, for problems with a short planning horizon
like disaster management or dynamic adaptation of CDNs it is impractical to set up
a new center that is not at a customer location. When we consider time
critical applications we need an optimization approach that is fast and robust.
The following section will briefly introduce a number of optimization
techniques for the $k$-center problem.
\section{OPTIMIZATION APPROACHES}
Previously, we mentioned the NP-hard nature of the $k$-center problem, which
makes it a difficult problem to solve. There is no fast exact approach to
calculate a solution for a given problem instance. Small instances can be solved
optimally using brute force or branch and bound techniques. Larger scenarios
cannot be solved practically since the search space grows exponentially. To handle these
instances a number of heuristic approaches have been proposed to generate
approximate solutions. Generally, as long as they have polynomial runtime, these
heuristics can only guarantee a 2-approximation, i.e., at worst the
maximal distance can be twice as large as the theoretical optimum \cite{Kleinberg2005}.
Dragoon is a 2-approximable approach that generated very promising results in
previous experiments \shortcite{hillmann2015c}. We will compare it to the
following techniques: 2-Approx, MacQueen, Greedy, and a new extension of Greedy
called Backtrack. A special focus is put on worst case performance of Dragoon
in comparison to the reference approaches. Dragoon and the other techniques are
presented in the next few sections. All of them have two things in common, they
are reasonably fast and they can be implemented to operate in a nearly
deterministic way without many stochastic effects. This allows us to perform a
huge number of experiments for different problem instances, without requiring
experiment repetitions for statistical significant results.
We have previously compared Dragoon to other approaches that we do not include in
this paper. Typical metaheuristics like evolutionary algorithms and particle swarm
optimization are left out because of their stochastic nature. Integer linear
programming is excluded from the experiments since it requires too much
calculation time to evaluate large instances effectively.
\subsection{Dragoon}
The Dragoon algorithm was initially used to optimize Landmark positions to
improve the quality of IP geolocation \shortcite{hillmann2015a,hillmann2015b}.
However, it can be applied to any kind of $k$-center problem. Dragoon uses three
stages for optimization: first an initial reference placement, second center
placement based on the 2-Approx algorithm (see next section), and finally an
iterative improvement strategy.
For the initialization, Dragoon places a single virtual center by solving
the 1-center problem. This means it determines the optimal position for a
single center. This center will only be used for the first optimization step and
is removed afterward, consequently, it is not part of the final solution. The
advantage of this approach is that oftentimes the first center serves more
customers than centers placed later on. By removing the virtual center we
generate solutions that are much more balanced with regard to the numbers of
customers assigned to each center.
In the second stage, Dragoon iteratively places all centers using the 2-Approx
strategy. The first placement decision incorporates the
virtual center from the initialization step. Afterward, the virtual center is
removed and no longer considered in further placement decisions.
When the second stage concludes, we have a guaranteed 2-approximable solution,
which is then improved in the final stage.
During the final stage, a local optimization strategy is used to obtain a
better solution. The procedure iterates over all centers and tries to move them
to an adjacent node. If such a move leads to an improvement it is accepted,
otherwise it is undone. Consequently, the algorithm can only improve the
initial solution and therefore leads to an overall better result. This process is
repeated until no further improvement can be obtained. In the next
section we will discuss the 2-Approx approach that is used in Dragoon's second stage.
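The three stages can be sketched compactly in Python. This is our own illustrative reconstruction under the node-placement setting: all helper names are ours, stage 2 uses the farthest-point rule described in the next section, and stage 3 tries every other customer location rather than only adjacent nodes, which is a simplification of the description above.

```python
import math

def cost(customers, centers):
    """Objective D: maximal distance of any customer to its closest center."""
    return max(min(math.dist(v, s) for s in centers) for v in customers)

def dragoon(customers, k):
    # Stage 1: a virtual center solving the 1-center problem (node placement).
    virtual = min(customers, key=lambda c: cost(customers, [c]))
    # Stage 2: farthest-point (2-Approx) placement; the virtual center only
    # guides the first pick and is discarded afterwards.
    centers = []
    while len(centers) < k:
        farthest = max(customers,
                       key=lambda v: min(math.dist(v, s)
                                         for s in (centers or [virtual])))
        centers.append(farthest)
    # Stage 3: local improvement -- move single centers to other customer
    # locations as long as that lowers the cost.
    best, improved = cost(customers, centers), True
    while improved:
        improved = False
        for i in range(k):
            for cand in customers:
                if cand in centers:
                    continue
                old, centers[i] = centers[i], cand
                c = cost(customers, centers)
                if c < best:
                    best, improved = c, True
                else:
                    centers[i] = old
    return centers
```

On a small instance such as four customers on a line, this recovers an optimal two-center placement.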
\subsection{2-Approx}
The 2-Approx algorithm, discussed in \citeN{hochbaum1985}, is named this way
since it reliably provides a 2-approximable solution for the maximum distance ($D$)
objective. This means, the following is true for all solutions generated by
2-Approx:
$$D_{\text{2-Approx}} \leq 2 \cdot D_{\text{optimal}}.$$
The algorithm uses an iterative approach for center placement. The first center
is placed arbitrarily, either randomly or using some kind of initialization.
Each successive center is placed at the customer location that has the maximal
distance to its closest center. This procedure is repeated until all centers
are positioned.
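A sketch of this farthest-point placement, assuming node placement and Euclidean distance (the parameter \texttt{first} stands in for the otherwise arbitrary initial pick):

```python
import math

def two_approx(customers, k, first=0):
    """Farthest-point placement: each new center goes to the customer that is
    currently farthest from its closest center."""
    centers = [customers[first]]          # arbitrary (or initialized) first pick
    while len(centers) < k:
        farthest = max(customers,
                       key=lambda v: min(math.dist(v, s) for s in centers))
        centers.append(farthest)
    return centers

print(two_approx([(0, 0), (1, 0), (10, 0)], 2))  # [(0, 0), (10, 0)]
```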
\subsection{MacQueen}
MacQueen is an implementation of a k-means approach \shortcite{macqueen1967}.
Starting from an initial placement it iteratively optimizes a solution until no
further improvements are achieved. For each iteration customers are assigned to
their closest center. Then each center location is changed to optimally support
the assigned customers. The repositioning is based on the geometrical mean of
the customer group. Eventually, center positions no longer change and the
optimization ends. \citeN{selim1984} have shown that k-means approaches are
guaranteed to converge.
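One assign-then-recenter iteration might look like the following sketch; note that recentering on the mean is inherently a free-placement step, and the helper name is ours:

```python
import math

def kmeans_step(customers, centers):
    """One MacQueen iteration: assign each customer to its closest center,
    then move every center to the mean of its assigned group."""
    groups = [[] for _ in centers]
    for v in customers:
        i = min(range(len(centers)), key=lambda j: math.dist(v, centers[j]))
        groups[i].append(v)
    new_centers = []
    for c, g in zip(centers, groups):
        if not g:                         # empty group: keep the old position
            new_centers.append(c)
        else:
            new_centers.append((sum(x for x, _ in g) / len(g),
                                sum(y for _, y in g) / len(g)))
    return new_centers
```

Iterating this step until the centers no longer move yields the full procedure.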
\subsection{Greedy}
We use a straightforward implementation of a greedy approach discussed in
\shortciteN{jamin2001}. We iteratively place centers one at a time. To place a
center, all possible locations are evaluated and the center is placed at the
position that improves the optimization objective the most. For node-based
placement this means,
we consider all customer locations during each iteration. With regard to
free placement, some kind of rasterization is required. Similar to the 2-Approx
approach, Greedy also guarantees a 2-approximable result.
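For node placement, the greedy iteration can be sketched as follows (helper names are ours):

```python
import math

def cost(customers, centers):
    """Objective D: maximal distance of any customer to its closest center."""
    return max(min(math.dist(v, s) for s in centers) for v in customers)

def greedy(customers, k):
    """Place centers one at a time, each at the customer location that gives
    the smallest objective together with the centers placed so far."""
    centers = []
    for _ in range(k):
        centers.append(min((v for v in customers if v not in centers),
                           key=lambda v: cost(customers, centers + [v])))
    return centers
```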
\subsection{Backtrack}
Backtrack is an extension of the Greedy algorithm we propose. Generally, each
placement decision in Greedy is only locally optimal. Therefore, the generated
solution is usually not a global optimum. Backtrack takes the solution generated
by Greedy and tries to improve it. It only performs changes that lead to an
improvement, consequently, it only generates solutions that are better or at
least equally good as the ones generated by Greedy. For this reason, Backtrack
also guarantees 2-approximable results. For the optimization Backtrack
individually evaluates all centers and tries to reposition them. This is done
iteratively for every available center. The optimization terminates either when
we cannot find a better position for a single center after iterating over all of
them, or when a given number of optimization steps is reached. We have to limit
the maximal number of optimization steps, since we cannot predict how long the
optimization would run otherwise. In contrast to MacQueen, there is no guarantee
of convergence.
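A self-contained sketch of Backtrack on top of Greedy, with the step limit discussed above; the helper names and the exact way repositioning attempts are counted against \texttt{max\_steps} are our assumptions:

```python
import math

def cost(customers, centers):
    return max(min(math.dist(v, s) for s in centers) for v in customers)

def greedy(customers, k):
    centers = []
    for _ in range(k):
        centers.append(min((v for v in customers if v not in centers),
                           key=lambda v: cost(customers, centers + [v])))
    return centers

def backtrack(customers, k, max_steps=1000):
    """Start from the Greedy solution and keep repositioning single centers
    while that improves the cost, bounded by max_steps attempts."""
    centers = greedy(customers, k)
    best, steps, improved = cost(customers, centers), 0, True
    while improved and steps < max_steps:
        improved = False
        for i in range(k):
            for cand in customers:
                if cand in centers:
                    continue
                steps += 1
                old, centers[i] = centers[i], cand
                c = cost(customers, centers)
                if c < best:
                    best, improved = c, True
                else:
                    centers[i] = old
    return centers
```

Since every accepted move lowers the cost, the result is never worse than the Greedy solution it started from.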
\section{EXPERIMENT SETUP}
\label{experiment-setup}
In our experiments we always consider the relative performance of two
approaches to solve a given problem instance. Usually, we look at the
performance of a ``challenger'' algorithm in comparison to a ``challenged''
approach (usually Dragoon). To measure the difference between them we simply
calculate the absolute difference for a given problem instance:
$$ \Delta D = D_{challenger} - D_{challenged}.$$
Therefore, a negative $\Delta D$ indicates that
the challenger performed better than the challenged (Dragoon).
The first part of our experiments was designed to evaluate the average
performance of the algorithms. We considered six general setups of the $k$-center
problem (see Table \ref{table:setup}). Instances of these setups were generated
randomly by placing the customers at random positions in a square with $0.0 <
x,y < 100.0$. We usually selected square numbers for the number of customers
and centers, since we can easily map them into the square area for thought
experiments. For example, if we consider 4 centers that are placed exactly in
the middle of the 4 sub-quadrants (Free Placement) of our area with side length
100.0 we know that $D$ can at most be 35.36 ($\frac{1}{4} \sqrt{100^2
+100^2}$). For our experiments we evaluated 1000 random instances for each
comparison.
\begin{table}[htbp]
\begin{center}
\caption{Experiment setups \label{table:setup}}
\begin{tabular}{|l|cccccc|}
\hline
Setup & I & II & III & IV & V & VI \\
\hline
Customers & 10 & 25 & 36 & 49 & 49 & 64\\
Centers & 2 & 4 & 4 & 9 & 4 & 16\\
\hline
\end{tabular}
\end{center}
\end{table}
In the second part of our experiments we tried to identify problem instances
where the challenger approach would significantly outperform Dragoon. To find
these instances we employed an evolutionary algorithm that we implemented using the
Serein framework \cite{uhlig2015}. The evolutionary algorithm used the following setup:
\begin{itemize}
\item \emph{Problem encoding}: For $n$ customers we used a vector of doubles
($0.0 < z < 1.0$) containing an x and a y component for each customer location,
$\vec{v} =(z^x_1,z^y_1, \dots ,z^x_n,z^y_n)$. A pair of components $z$ is mapped
to a location in our square ($100.0^2$).
\item \emph{Fitness function}: Using the mapped customer locations, the centers
were placed using the challenging and the challenged algorithm. Afterward, the
fitness was calculated as $\Delta D$.
\item \emph{Reproduction operators:} we used single point Gaussian mutation
with a standard deviation of 0.05 and whole arithmetic recombination with a
probability of 0.3.
\item \emph{Selection operators:} we employed tournament selection with
tournament size 2 and deterministic selection for survival with uncapped elitism.
\item We used a population size of 20 and ran the evolutionary algorithm for 100
generations.
\end{itemize}
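Two of the ingredients above, the genotype mapping and the single-point Gaussian mutation, might be sketched as follows. The Serein framework's actual API differs; these standalone helpers and their names are ours:

```python
import random

def decode(genome, side=100.0):
    """Map the double vector (one x and one y component in (0,1) per customer)
    to coordinates in the 100.0 x 100.0 square."""
    return [(genome[i] * side, genome[i + 1] * side)
            for i in range(0, len(genome), 2)]

def gaussian_mutation(genome, sigma=0.05):
    """Single-point Gaussian mutation: perturb one randomly chosen gene with
    standard deviation sigma and clamp it back into [0, 1]."""
    g = list(genome)
    i = random.randrange(len(g))
    g[i] = min(1.0, max(0.0, g[i] + random.gauss(0.0, sigma)))
    return g
```

The fitness of a decoded scenario is then $\Delta D$ between the challenger's and the challenged approach's solutions.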
Starting from a random population of scenarios, the evolutionary algorithm searches
for a fitting scenario. In our context, fitting means a scenario that can be
solved by the challenger approach with a better result than the solution
provided by Dragoon. The next section will discuss the results of both
experiments.
\section{RESULTS}
\label{experiment-results}
The first part of our experiments aimed to confirm the results of our previous
studies. We expected Dragoon to perform better than the previously considered
reference approaches, i.e., MacQueen, 2-Approx, and Greedy. At the same time we
included the new approach Backtrack to evaluate its performance. Looking at the
results in Table \ref{table:results1}, we can confirm that Dragoon on average returns
comparably good results. However, Backtrack slightly outperforms Dragoon.
\begin{table}[htbp]
\centering
\caption{Average performance of approaches in comparison to \emph{Dragoon}
($\Delta D_{avg}$) based on 1000 random problem instances (smaller values are
better).\label{table:results1}}
\begin{tabular}{|c|d{4.5}d{4.5}d{4.5}d{4.5}|}
\hline
Users / Centers & \multicolumn{1}{c}{MacQueen} & \multicolumn{1}{c}{2-Approx} &
\multicolumn{1}{c}{Greedy} & \multicolumn{1}{c|}{Backtrack}\\
\hline
10 / 2 & 4.95 & 13.31 & 4.59 & -0.71 \\
25 / 4 & 7.64 & 9.09 & 9.5 & 0.70\\
36 / 4 & 7.84 & 10.13 & 11.02 & -0.59\\
49 / 9 & 6.92 & 3.72 & 6.42 & -1.23\\
64 / 4 & 7.43 & 12.12 & 12.61 & -0.71\\
64 / 16 & 6.10 & 1.68 & 2.77 & -1.35\\
\hline
\end{tabular}
\normalsize
\end{table}
The second part of our experiments considered the worst case performance of Dragoon. Table \ref{table:results2} illustrates that we were able to determine problem instances where Dragoon failed in comparison to the reference approaches. Conversely, we can say that for each approach there exist scenarios where it significantly outperforms Dragoon. For example, Figure \ref{fig:scenario} shows a scenario where the MacQueen algorithm determines a much better solution than the Dragoon approach. Interestingly, Backtrack, our new approach, also showed a slightly better average performance than Dragoon.
\begin{table}[htbp]
\centering
\caption{Worst case performance of \emph{Dragoon} in comparison to other
approaches for a problem instance generated by a GA (smaller $\Delta D$ values
are better). \label{table:results2}}
\begin{tabular}{|c|d{4.5}d{4.5}d{4.5}d{4.5}|}
\hline
Users / Centers & \multicolumn{1}{c}{MacQueen} & \multicolumn{1}{c}{2-Approx} &
\multicolumn{1}{c}{Greedy} & \multicolumn{1}{c|}{Backtrack}\\
\hline
10 / 2 & -26.41 & -29.43 & -31.78 & -23.55 \\
25 / 4 & -17.34 & -10.54 & -10.81 & -13.39\\
36 / 4 & -3.23 & -10.84 & -11.96 & -14.85\\
49 / 9 & -8.75 & -5.92 & -4.92 & -8.20\\
64 / 4 & -7.81 & -4.73 & -0.82 & -10.32\\
64 / 16 & -5.49 & -5.41 & -4.65 & -6.05\\
\hline
\end{tabular}
\normalsize
\end{table}
\begin{figure}[!htbp]
\centering
\begin{minipage}{.49\textwidth}
\fbox{\includegraphics[width=\textwidth]{fig/dragoon}}
\end{minipage}%
\hfill
\begin{minipage}{.49\textwidth}
\fbox{\includegraphics[width=\textwidth]{fig/macQueen}}
\end{minipage}
\caption{For the given scenario Dragoon (left) finds only a suboptimal solution.
MacQueen (right) determines a much better solution.\label{fig:scenario}}
\end{figure}
\section{DISCUSSION}
Our results illustrate that there are worst case scenarios where Dragoon is
clearly outperformed by other approaches. In critical application areas this
can lead to lower quality solutions than initially expected. For these scenarios
it is important to acknowledge the potential of failure and include appropriate
fall-back methods. Looking at the scenarios where Dragoon performed worst, we
noticed that they often had single customers which were located in isolated
positions. However, currently we have no empirical data to statistically support
this observation.
It is important to note that occurrences of low quality solutions are not
limited to Dragoon. Each of the presented approaches can fail given a certain
problem instance. In general, nearly every approach can be beaten by all other
techniques in certain constellations. Table \ref{table:results3} illustrates
this fact by matching each approach against all the other ones. Here we show
only one setup, however, the general result was the same across all
configurations.
\begin{table}[b]
\centering
\caption{Summary of results: Can a certain ``challenger'' approach outperform another ``challenged'' approach in an instance of the $k$-center problem with 49 customers and 4 centers? (Smaller $\Delta D$ values are better.)
\label{table:results3}}
\begin{tabular}{|c|d{4.5}d{4.5}d{4.5}d{4.5}d{4.5}|}
\cline{2-6}
\multicolumn{1}{c|}{}&\multicolumn{5}{|c|}{\textbf{Challenged}}\\
\hline
\multicolumn{1}{|c|}{\textbf{Challenger}} &
\multicolumn{1}{c}{MacQueen}&\multicolumn{1}{c}{Backtrack}&\multicolumn{1}{c}{
TwoApprox}&\multicolumn{1}{c}{Greedy}&\multicolumn{1}{c|}{Dragoon}\\
\hline
MacQueen & &-6.29 &-24.38 &-22.68 &-7.81\\
Backtrack &-32.68 & &-24.73 &-26.45 &-10.32\\
TwoApprox &-31.26 & 0.0 & &-21.82 &-4.37\\
Greedy &-21.54 &-0.39 &-15.47 & &-0.82\\
Dragoon &-38.53 &-8.52 &-24.64 &-22.12 & \\
\hline
\end{tabular}
\normalsize
\end{table}
One exception is the comparison between Greedy and Backtrack: usually, Greedy
should not be able to outperform Backtrack, since Backtrack strictly improves a
solution generated by Greedy and should therefore be at least as good.
Looking at the results in Table \ref{table:results3}, we see an observation that
seems to contradict this guarantee, i.e., Greedy outperforms Backtrack with
$\Delta D = -0.39$. At this point it is important to remember that we selected
algorithms that are mostly deterministic, nevertheless, there is still some
randomness included. Specifically, for Greedy if two potential placements lead
to the same improvement the one that is actually selected is chosen randomly.
In Table \ref{table:results3}, 2-Approx was not able to outperform Backtrack, but
in principle it is able to do so. For other setups we have found instances where
it is better than Backtrack. Nonetheless, the fact that it did not outperform
Backtrack in the given example indicates that it is quite difficult to find an
instance where it is superior.
\section{SUMMARY AND OUTLOOK}
We presented a critical view of the Dragoon optimization approach, considering
its potential worst case behavior. While it returns very good results on
average, there are instances of the $k$-center problem where it is outperformed
significantly. The novel Backtrack approach generated very good results and is
even slightly better than Dragoon. However, as discussed before, it is also
susceptible to failure in certain scenarios. The general verdict is that all
approaches shine in certain scenarios and fall short in others.
This paper considered the $k$-center problem on a very abstract level. Our future
work will focus more on practical aspects of the problem, for example, nonlinear
distance calculation between a customer and its center, including factors like
dynamic traffic, or limited availability of certain services depending on the
center. This includes more intricate assignment strategies that match
customers and centers based on customer profiles.
With regard to the underlying optimization there are three potential paths to
follow. The first and simplest approach to guarantee more robustness is to use
more than one of the presented optimization techniques, to limit the occurrence
of worst case behavior. For time-critical applications, however, this path
might be infeasible. As a second path, we can add some intelligence to select the
appropriate approach for a given problem instance. We could either try to train
a neural network to classify the problem instances or strive to identify
certain problem properties that favor certain approaches, e.g., avoiding Dragoon
for problems where certain customers have secluded positions. Finally, we are
also investigating opportunities to further increase the robustness of the
approaches on an algorithmic level. Currently we are developing an advanced
version of Dragoon.
\bibliographystyle{wsc}
\section*{\abstractname}}
{\par}
\newdimen\labelwidthi
\newdimen\labelwidthii
\settowidth{\labelwidthi}{M}
\settowidth{\labelwidthii}{(d)}
\leftmargini=1.2cm
\leftmarginii=4ex
\leftmarginii=4ex
\def\@listi{\leftmargin\leftmargini
\rightmargin0pt
\parsep 0\p@
\topsep 10\p@
\itemsep0\p@}
\let\@listI\@listi
\@listi
\def\@listii{\leftmargin\leftmarginii
\rightmargin0pt
\labelsep 1ex
\parsep 0\p@
\topsep 0\p@
\itemsep0\p@}
\let\@listII\@listii
\@listii
\def\@listiii{\leftmargin\leftmarginiii
\rightmargin0pt
\parsep 0\p@
\topsep 0\p@
\itemsep0\p@}
\let\@listIII\@listiii
\@listiii
\labelsep=.25in
\setlength \labelwidth{\leftmargini}
\addtolength\labelwidth{-\labelsep}
\renewcommand\labelenumi{\rlap{\rm\theenumi.}}
\renewcommand\labelitemi{\rlap{\textbullet}}
\def\@seccntformat#1{\hbox to .25in{\csname the#1\endcsname\hss}\relax}
\def\@sect#1#2#3#4#5#6[#7]#8{%
\ifnum #2>\c@secnumdepth
\let\@svsec\@empty
\else
\refstepcounter{#1}%
\protected@edef\@svsec{\@seccntformat{#1}\relax}%
\fi
\@tempskipa #5\relax
\ifdim \@tempskipa>\z@
\begingroup
#6{%
\@hangfrom{\hskip #3\relax\@svsec}%
\interlinepenalty \@M #8\@@par}%
\endgroup
\csname #1mark\endcsname{#7}%
\addcontentsline{toc}{#1}{%
\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}%
\fi
#7}%
\else
\def\@svsechd{%
#6{\hskip #3\relax
\@svsec #8}%
\csname #1mark\endcsname{#7}%
\addcontentsline{toc}{#1}{%
\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}%
\fi
#7}}%
\fi
\@xsect{#5}}
\deftable{table}
\long\def\@makecaption#1#2{%
\ifx\@captypetable \vskip3pt \else
\vskip\abovecaptionskip\fi
\sbox\@tempboxa{#1: #2}%
\ifdim \wd\@tempboxa >\hsize
#1: #2\par
\else
\global \@minipagefalse
\hb@xt@\hsize{\hfil\box\@tempboxa\hfil}%
\fi
\vskip\belowcaptionskip}
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-12pt}%
{6pt}%
{\hyphenpenalty10000\normalfont\normalsize\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-12pt}%
{6pt}%
{\normalfont\normalsize\hyphenpenalty10000\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-12pt}
{6pt}
{\normalfont\normalsize\hyphenpenalty10000\bfseries}}
\let\savesubsub\subsubsection
\def\subsubsection#1{\savesubsub{\ \ #1}}
\parskip=0pt plus .01pt
\let\saveparagraph\paragraph
\def\paragraph#1{\vskip1sp
{\bf #1}\hskip1em\relax}
\raggedbottom
\newenvironment{hangref}{\begin{list}{}{\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}\setlength{\rightmargin}{0pt}
\setlength{\leftmargin}{+\parindent}
\setlength{\itemindent}{-\parindent}}}{\end{list}}
\newif\iftitle
\def\@oddhead{\iftitle\global\titlefalse
\vtop to0pt{\hbox to.9\textwidth{\titlepageheadfont
Proceedings of the \currentYear{} Winter Simulation Conference\hfill}%
\vskip2pt \hbox to .9\textwidth{\titlepageheadfont
\currentEditors , eds.\hfill}%
\vss} \else \hbox
to\textwidth{\titlepageheadfont\hfill\thetitle\hfill}\fi}
\def\@evenhead{\iftitle\global\titlefalse\fi%
\hbox to \textwidth{\hss\titlepageheadfont \theauthors\hss}}
\let\@oddfoot\relax
\let\@evenfoot\@oddfoot
\def\ttitle#1{\gdef\thetitle{#1}}
\def\vskip12pt{\vskip12pt}
\let\vs\vskip12pt
\def\bd{\vskip2pt\noindent}
\spaceskip=3.5pt plus 2pt minus2pt
\parindent=.25in
\hfuzz=1pt
\widowpenalty=10000 \clubpenalty=10000
\def\verbatim{\spaceskip=0pt
\@verbatim \frenchspacing\@vobeyspaces \@xverbatim}
\newcommand\smscriptsize{\@setfontsize\scriptsize\@vipt\@viipt}
\def\nofloatfigure{\def\@captype{figure}}
\let\endnofloatfigure\relax
\def\nofloattable{\def\@captype{table}}
\let\endnofloattable\relax
\newcount\itemcount
\def\spenumerate{\bgroup\leftskip=.25in
\global\itemcount=0
\def\item{\global\advance\itemcount by 1
\vskip1sp \noindent\hskip-.25in\hbox to
.25in{\the\itemcount.\hss}}}
\def\vskip12pt\egroup{\vskip12pt\egroup}
\newif\ifnoindent
\def\@begintheorem#1#2{\vskip-12pt\vskip1sp
\trivlist
\item[\ifnoindent\global\noindentfalse\else
\hskip.25in\fi\hskip \labelsep{\bfseries #1\ #2}]\itshape}
\def\@opargbegintheorem#1#2#3{\vskip-12pt\vskip1sp
\trivlist
\item[\ifnoindent\global\noindentfalse\else\hskip.25in\fi%
\hskip \labelsep{\bfseries #1\ #2\ (#3)}]\itshape}
\def\@endtheorem{\vskip1sp}
\newlength{\bibhang}
\setlength{\bibhang}{2em}
\newdimen\bibindent
\bibindent=.25in
\@ifundefined{refname}%
{\@ifundefined{chapter}%
{\newcommand{\refname}{References}}%
{\newcommand{\refname}{Bibliography}}%
}%
{}%
\def\thebibliography#1{\section*{\refname\@mkboth
{\uppercase{\refname}}{\uppercase{\refname}}}\list
{[\arabic{enumi}]}{\settowidth\labelwidth{[#1]}
\rightmargin=0pt \leftmargin=0pt
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\advance\leftmargin\bibindent
\advance\leftmargin-24pt
\itemindent -\bibindent
\listparindent \itemindent
\parsep \z@
\usecounter{enumi}}
\def\newblock{}
\sloppy
\sfcode`\.=1000\relax}
\endinput
\makeatletter
\def\@lbibitem[#1]#2{\item[
\if@filesw {\let\protect\noexpand\immediate\write\@auxout{
\string\bibcite {#2}{#1}}}\fi\ignorespaces}
\def\@cite#1#2{{#1\if@tempswa , #2\fi}}
\makeatother
\def\Box{\vbox to 6pt{\hrule
\hbox{\vrule height 4.8pt \hskip 4.8pt \vrule } \hrule}}
\end{filecontents*}
\begin{filecontents*}{wscbib.tex}
\makeatletter
\let\@internalcite\cite
\def\cite{\def\@citeseppen{-1000}%
\def\@cite##1##2{(##1\if@tempswa , ##2\fi)}%
\def\citeauthoryear##1##2##3{##1 ##3}\@internalcite}
\def\citeNP{\def\@citeseppen{-1000}%
\def\@cite##1##2{##1\if@tempswa , ##2\fi}%
\def\citeauthoryear##1##2##3{##1 ##3}\@internalcite}
\def\citeN{\def\@citeseppen{-1000}%
\def\@cite##1##2{##1\if@tempswa, ##2)\else{}\fi}%
\def\citeauthoryear##1##2##3{##1 (##3)}\@citedata}
\def\citeA{\def\@citeseppen{-1000}%
\def\@cite##1##2{(##1\if@tempswa , ##2\fi)}%
\def\citeauthoryear##1##2##3{##1}\@internalcite}
\def\citeANP{\def\@citeseppen{-1000}%
\def\@cite##1##2{##1\if@tempswa , ##2\fi}%
\def\citeauthoryear##1##2##3{##1}\@internalcite}
\def\shortcite{\def\@citeseppen{-1000}%
\def\@cite##1##2{(##1\if@tempswa , ##2\fi)}%
\def\citeauthoryear##1##2##3{##2 ##3}\@internalcite}
\def\shortciteNP{\def\@citeseppen{-1000}%
\def\@cite##1##2{##1\if@tempswa , ##2\fi}%
\def\citeauthoryear##1##2##3{##2 ##3}\@internalcite}
\def\shortciteN{\def\@citeseppen{-1000}%
\def\@cite##1##2{##1\if@tempswa, ##2\else{}\fi}%
\def\citeauthoryear##1##2##3{##2 (##3)}\@citedata}
\def\shortciteA{\def\@citeseppen{-1000}%
\def\@cite##1##2{(##1\if@tempswa , ##2\fi)}%
\def\citeauthoryear##1##2##3{##2}\@internalcite}
\def\shortciteANP{\def\@citeseppen{-1000}%
\def\@cite##1##2{##1\if@tempswa , ##2\fi}%
\def\citeauthoryear##1##2##3{##2}\@internalcite}
\def\citeyear{\def\@citeseppen{-1000}%
\def\@cite##1##2{(##1\if@tempswa , ##2\fi)}%
\def\citeauthoryear##1##2##3{##3}\@citedata}
\def\citeyearNP{\def\@citeseppen{-1000}%
\def\@cite##1##2{##1\if@tempswa , ##2\fi}%
\def\citeauthoryear##1##2##3{##3}\@citedata}
\def\@citedata{%
\@ifnextchar [{\@tempswatrue\@citedatax}%
{\@tempswafalse\@citedatax[]}%
}
\def\@citedatax[#1]#2{%
\if@filesw\immediate\write\@auxout{\string\citation{#2}}\fi%
\def\@citea{}\@cite{\@for\@citeb:=#2\do%
{\@citea\def\@citea{, }\@ifundefined
{b@\@citeb}{{\bf ?}%
\@warning{Citation `\@citeb' on page \thepage \space undefined}}%
{\csname b@\@citeb\endcsname}}}{#1}}%
\def\@citex[#1]#2{%
\if@filesw\immediate\write\@auxout{\string\citation{#2}}\fi%
\def\@citea{}\@cite{\@for\@citeb:=#2\do%
{\@citea\def\@citea{, }\@ifundefined
{b@\@citeb}{{\bf ?}%
\@warning{Citation `\@citeb' on page \thepage \space undefined}}%
{\csname b@\@citeb\endcsname}}}{#1}}%
\def\@biblabel#1{}
\makeatother
\newdimen\bibindent
\bibindent=0.0em
\def\thebibliography#1{\section*{\refname}\list
{}{\settowidth\labelwidth{[#1]}
\leftmargin\parindent
\itemindent -\parindent
\listparindent \itemindent
\itemsep 0pt
\parsep 0pt}
\def\newblock{}
\sloppy
\sfcode`\.=1000\relax}
\end{filecontents*}
\section{Introduction}
Stars in nature are usually rotating and may be subject to non-axisymmetric
rotational instabilities. An exact treatment of these instabilities exists
only for incompressible equilibrium fluids in Newtonian gravity, e.g.
\cite{Chandra69,Tassoul}. For these configurations, global rotational
instabilities may arise from non-radial toroidal modes $e^{im\varphi}$ (where
$m=\pm 1,\pm 2, \dots$ and $\varphi$ is the azimuthal angle).
For sufficiently rapid rotation, the $m=2$ bar mode becomes either
{\em secularly} or {\em dynamically} unstable. The onset of instability can
typically be marked by a critical value of the dimensionless parameter
$\beta \equiv T/|W|$, where $T$ is the rotational kinetic energy and $W$ the
gravitational potential energy. Uniformly rotating, incompressible stars in
Newtonian theory are secularly unstable to bar-mode formation when $\beta
\geq \beta_{\rm sec} \simeq 0.14$. This instability can grow only in the
presence of some dissipative mechanism, like viscosity or gravitational
radiation, and the associated growth timescale is the dissipative timescale,
which is usually much longer than the dynamical timescale of the system. By
contrast, a dynamical instability to bar-mode formation sets in when $\beta
\geq \beta_{\rm dyn} \simeq 0.27$. This instability is independent of any
dissipative mechanism, and the growth time is the hydrodynamic timescale.
There are several papers indicating that dynamical instability of rotating
stars occurs at low $T/|W|$, far below the standard criterion of
dynamical instability in Newtonian gravity. Tohline and Hachisu \cite{TH90}
find for self-gravitating rings and tori that the $m=2$ dynamical instability
occurs around $T/|W| \sim 0.16$ at the lowest for centrally condensed
configurations. Shibata, Karino, and Eriguchi \cite{SKE} find that the $m=2$
dynamical instability occurs around $T/|W| \sim O(10^{-2})$ for a high degree
($\Omega_{\rm c} / \Omega_{\rm eq} \approx 10$) of differential rotation.
Note that $\Omega_{\rm c}$ and $\Omega_{\rm eq}$ are the angular velocity at
the center and at the equatorial surface, respectively. Centrella et al.
\cite{CNLB01} found a dynamical $m=1$ instability around $T/|W| \sim 0.09$ in
an $n=3.33$ polytropic toroidal star with a high degree ($\Omega_{\rm c} /
\Omega_{\rm eq} = 26$) of differential rotation, and Saijo, Baumgarte, and
Shapiro \cite{SBS03} extended the results of the dynamical $m=1$ instability to
$n \gtrsim 2.5$, $\Omega_{\rm c} / \Omega_{\rm eq} \gtrsim 10$. Note that $n$
is the polytropic index of the star.
There are some indications that corotation resonance triggers dynamical bar
instability. Papaloizou and Pringle \cite{PP84} found that
non-selfgravitating tori with constant specific angular momentum are
unstable to low order non-axisymmetric modes and that the modes grow on a
dynamical time-scale. Watts, Andersson, and Jones \cite{WAJ05} argue that
shear instabilities occur when the degree of differential rotation exceeds a
critical value and the $f$-mode develops a corotation point associated with
the presence of a continuous spectrum. They also point out that the dynamical
bar instability that Shibata et al. \cite{SKE} found lies in the corotation band.
\begin{table*}
\begin{center}
\begin{minipage}{14cm}
\caption{
Three differentially rotating equilibrium stars that trigger dynamical
instability
\label{tab:initial}}
\begin{tabular}{c c c c c c c c c}
\hline
\hline
Model &
$n$ \footnotemark[1] &
$d / R_{\rm eq}$ \footnotemark[2] &
$R_{\rm p} / R_{\rm eq}$ \footnotemark[3] &
$\Omega_{\rm c} / \Omega_{\rm eq}$ \footnotemark[4] &
$\rho_{\rm c} / \rho_{\rm max}$ \footnotemark[5] &
$R_{\rm maxd}/R_{\rm eq}$ \footnotemark[6] &
$T/|W|$ \footnotemark[7] & dominant unstable mode
\\
\hline
I & $3.33$ &
$0.20$ & $0.413$ & $26.0$ & $0.491$ & $0.198$ &
$0.146$ & $m=1$
\\
II & $1.00$ &
$0.20$ & $0.250$ & $26.0$ & $0.160$ & $0.383$ &
$0.119$ & $m=2$
\\
III & $1.00$ &
$1.00$ & $0.250$ & $2.0$ & $0.837$ & $0.388$ &
$0.277$ & $m=2$
\\
\hline
\hline
\end{tabular}
\footnotetext[1]{$n$: Polytropic index}
\footnotetext[2]{$R_{\rm eq}$: Equatorial radius}
\footnotetext[3]{$R_{\rm p}$: Polar radius}
\footnotetext[4]{$\Omega_{\rm c}$:
Central angular velocity; $\Omega_{\rm eq}$: Equatorial angular velocity}
\footnotetext[5]{$\rho_{\rm c}$: Central density;
$\rho_{\rm max}$: Maximum density}
\footnotetext[6]{$R_{\rm maxd}$: Radius of
maximum density}
\footnotetext[7]{$T$: Rotational kinetic energy;
$W$: Gravitational potential energy}
\end{minipage}
\end{center}
\end{table*}
Our purpose in this paper is to investigate the nature of dynamical $m=1$
instability with both an eigenmode analysis and a hydrodynamical analysis.
A non-linear hydrodynamical simulation is indispensable for investigating
the evolutionary process and the final outcome of the instability. The nature of
the instability as a source of gravitational waves, which interests us most, is
only accessible through non-linear hydrodynamical computations. On the
other hand, a linear eigenmode analysis is in general an easier way to approach
the dynamical instability of given equilibria, and it may be helpful
for gaining physical insight into the mechanism and origin of the instability.
Therefore a linear eigenmode analysis and a non-linear simulation are
complementary to each other and they both help us to understand the
nature of dynamical instability.
For the hydrodynamical simulation, we used the numerical code developed in
Ref. \cite{SBS03}, while we introduced a toy cylinder model that mimics
differentially rotating stars to study the instability. Self-gravitating
cylinder models have been used to study the general dynamical nature of
gaseous masses such as stars and accretion disks, and of stellar systems such
as galaxies. Although no cylinder of infinite length exists in reality, it is
expected to share some qualitative similarities with realistic astrophysical
objects, e.g. \cite{Ostriker65}. In fact, it has served as a useful model to
investigate secular and dynamical instabilities of rotating masses.
This paper is organized as follows. In \S~\ref{sec:Nhydro} we present our
hydrodynamical results of dynamical one-armed spiral and dynamical bar
instability. We present our diagnosis of dynamical $m=1$ and $m=2$
instability by using canonical angular momentum in \S~\ref{sec:Canonical},
and briefly summarize our findings in \S~\ref{sec:Discussion}. Throughout
this paper we use gravitational units with $G = 1$. A more detailed
discussion will be presented in Ref. \cite{SY05}.
\section{Dynamical instabilities in differentially rotating stars}
\label{sec:Nhydro}
First we explain features of our initial data sets of differentially
rotating stars on which we performed non-linear hydrodynamical computations.
We assume a polytropic equation of state,
\begin{equation}
P = \kappa \rho^{1+1/n},
\end{equation}
where $P$ is the pressure, $\kappa$ is a constant, $\rho$ is the density, and
$n$ is the polytropic index. One feature of the polytropic equation of state
is that all matter quantities can be renormalized in terms of $\kappa$ so that
$\kappa$ does not explicitly appear. We also assume the ``$j$-constant''
rotation law as
\begin{equation}
\Omega = \frac{j_{0}}{d^{2} + \varpi^{2}},
\label{eqn:omega}
\end{equation}
where $\Omega$ is the angular velocity, $j_{0}$ is a constant parameter with
units of specific angular momentum, and $\varpi$ is the cylindrical radius.
The parameter $d$ determines the length scale over which $\Omega$ changes;
uniform rotation is achieved in the limit $d \rightarrow \infty$ while keeping
the ratio $j_0/d^2$ finite. We choose the same data sets as Ref. \cite{SBS03}
for investigating low $T/|W|$ dynamical instability in differentially rotating
stars. We also construct the equilibrium of a star with weak differential
rotation in high $T/|W|$, which excites the standard dynamical bar
instability, e.g. \cite{Chandra69}. The characteristic parameters are
summarized in Table \ref{tab:initial}.
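As a quick numerical check of the rotation law (\ref{eqn:omega}), the sketch below (in Python, with arbitrary illustrative numbers rather than the parameters of Table \ref{tab:initial}) verifies that uniform rotation is recovered for $d\to\infty$ at fixed $j_0/d^2$:

```python
def omega(varpi, j0, d):
    """'j-constant' rotation law: Omega = j0 / (d^2 + varpi^2)."""
    return j0 / (d**2 + varpi**2)

ratio = 2.0  # target central value j0/d^2 (arbitrary illustrative number)
for d in (1.0, 10.0, 100.0):
    j0 = ratio * d**2
    # central and off-center angular velocities converge as d grows
    print(d, omega(0.0, j0, d), omega(1.0, j0, d))
```

For small $d$ the rotation is strongly differential; already at $d=100$ the angular velocity at $\varpi=1$ differs from the central one by a part in $10^4$.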
To enhance any $m=1$ or $m=2$ instability, we disturb the initial equilibrium
density $\rho_{\rm eq}$ by a non-axisymmetric perturbation according to
\begin{equation}
\rho = \rho_{\rm eq}
\left( 1 +
\delta^{(1)} \frac{x+y}{R_{\rm eq}} +
\delta^{(2)} \frac{x^{2}-y^{2}}{R_{\rm eq}^{2}}
\right),
\label{eqn:DPerturb}
\end{equation}
where $R_{\rm eq}$ is the equatorial radius, with
$\delta^{(1)} = \delta^{(2)} \approx (1.7\text{--}2.8) \times 10^{-3}$ in all our
simulations. We also introduce ``dipole'' $D$ and ``quadrupole'' $Q$
diagnostics to monitor the development of $m=1$ and $m=2$ modes as
\begin{eqnarray}
D &=& \left< e^{i m \varphi} \right>_{m=1} =
\frac{1}{M} \int \rho \frac{x + i y}{\sqrt{x^{2}+y^{2}}} d^3 x,
\\
Q &=& \left< e^{i m \varphi} \right>_{m=2} =
\frac{1}{M} \int \rho \frac{(x^{2}-y^{2}) + i (2 x y)}{x^{2}+y^{2}} d^3 x,
\nonumber \\
\end{eqnarray}
respectively.
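The diagnostics $D$ and $Q$ are straightforward to evaluate on a discretized density. The following Python sketch (a toy two-dimensional reduction on our own invented density, not the code of Ref. \cite{SBS03}) checks that an axisymmetric density gives $D=Q=0$, while an $m=1$ ($\cos\varphi$) perturbation is picked up by $D$ but not by $Q$:

```python
import numpy as np

def diagnostics(rho, x, y):
    """Dipole D and quadrupole Q diagnostics evaluated on a 2D grid
    (a toy equatorial-plane reduction of the 3D integrals above)."""
    r2 = x**2 + y**2
    m = r2 > 0                       # exclude the coordinate origin
    M = rho.sum()
    D = (rho[m]*(x[m] + 1j*y[m])/np.sqrt(r2[m])).sum()/M
    Q = (rho[m]*((x[m]**2 - y[m]**2) + 2j*x[m]*y[m])/r2[m]).sum()/M
    return D, Q

n = 201
xs = np.linspace(-1.0, 1.0, n)
x, y = np.meshgrid(xs, xs)
rho0 = np.exp(-(x**2 + y**2)/0.1)    # axisymmetric blob: D = Q = 0
D0, Q0 = diagnostics(rho0, x, y)

phi = np.arctan2(y, x)
rho1 = rho0*(1.0 + 0.05*np.cos(phi)) # m=1 perturbation: |D| ~ 0.025, Q ~ 0
D1, Q1 = diagnostics(rho1, x, y)
print(abs(D0), abs(Q0), abs(D1), abs(Q1))
```

For a $\cos\varphi$ perturbation of amplitude $\epsilon$ one expects $|D|\approx\epsilon/2$, which the sketch reproduces.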
We compute approximate gravitational waveforms by using the quadrupole
formula. In the radiation zone, gravitational waves can be described by a
transverse-traceless, perturbed metric $h_{ij}^{TT}$ with respect to a flat
spacetime. In the quadrupole formula, $h_{ij}^{TT}$ is found from
\begin{equation}
h_{ij}^{TT}= \frac{2}{r} \frac{d^{2}}{d t^{2}} I_{ij}^{TT},
\label{eqn:wave1}
\end{equation}
where $r$ is the distance to the source, $I_{ij}$ is the quadrupole moment of
the mass distribution, and $TT$ denotes the transverse-traceless
projection. Choosing the direction of the wave propagation to be along the
rotational axis ($z$-axis), the two polarization modes of gravitational
waves can be determined from
\begin{equation}
h_{+} \equiv \frac{1}{2} (h_{xx}^{TT} - h_{yy}^{TT})
\mbox{~~~and~~~}
h_{\times} \equiv h_{xy}^{TT}.
\end{equation}
For observers along the rotation axis, we thus have
\begin{eqnarray}
\frac{r h_{+}}{M} &=&
\frac{1}{2 M} \frac{d^{2}}{d t^{2}} ( I_{xx}^{TT} - I_{yy}^{TT}), \label{h+}
\\
\frac{r h_{\times}}{M} &=&
\frac{1}{M} \frac{d^{2}}{d t^{2}} I_{xy}^{TT} \label{h-}
.
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[width=7cm]{fig1.eps}
\caption{
Diagnostics $D$ and $Q$ as a function of $t/P_{\rm c}$ for 3 differentially
rotating stars (see Table \ref{tab:initial}). Solid and dotted lines
denote the values of $D$ and $Q$, respectively. The Roman numeral
in each panel indicates the corresponding model of differentially rotating
star. Hereafter $P_{\rm c}$ represents the central rotation
period.
}
\label{fig:dig}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.0cm]{fig2.eps}
\caption{
Density contours in the equatorial plane for 3 differentially rotating stars
(see Table \ref{tab:initial}).
Models I, II, and III are plotted at the parameter values ($t/P_{\rm c}$,
$\rho_{\rm max} / \rho_{\rm max}^{(0)}$) =
($16.3$, $3.63$),
($132$, $1.25$), and
($5.49$, $1.20$), respectively,
where $\rho_{\rm max}$ is the maximum density, $\rho_{\rm max}^{(0)}$ is the
maximum density at $t=0$, and $R$ is the equatorial radius at $t=0$. The
contour lines denote densities $\rho/\rho_{\rm max} = 10^{- (16-i) d}
(i=1, \cdots, 15)$ for model I and $\rho/\rho_{\rm max} = 6.67 (16-i) \times
10^{-2} (i=1, \cdots, 15)$ for models II and III, respectively.
}
\label{fig:qxy}
\end{figure}
The time evolutions of the dipole and quadrupole diagnostics are
plotted in Fig. \ref{fig:dig}. We determine that the system is stable
to the $m=1$ ($m=2$) mode when the dipole (quadrupole) diagnostic remains small
throughout the evolution, and unstable when the diagnostic
grows exponentially at the early stage of the evolution. It is clearly seen
in Fig. \ref{fig:dig} that the star is more unstable to the one-armed spiral
mode for model I, and more unstable to the bar mode for models II and III.
In fact, both diagnostics grow for model I; the dipole diagnostic, however,
grows more markedly than the quadrupole diagnostic, indicating that the $m=1$
mode is the dominant unstable mode.
The density contours of the differentially rotating stars are shown in Fig.
\ref{fig:qxy}. It is clearly seen in Fig. \ref{fig:qxy} that a one-armed
spiral structure forms at the intermediate stage of the evolution
for model I, while a bar structure forms for models II and III once the
dynamical instability sets in.
We also show the gravitational waves generated by the dynamical one-armed spiral
and bar instabilities in Fig. \ref{fig:gw}. For $m=1$ modes, the gravitational
radiation is emitted not by the primary mode itself, but by the $m=2$
secondary harmonic which is simultaneously excited, albeit at a lower
amplitude. Unlike the case for bar-unstable stars, the gravitational wave
signal does not persist for many periods, but instead is damped fairly rapidly.
\section{Canonical Angular Momentum}
\label{sec:Canonical}
We introduce the canonical angular momentum following Ref. \cite{FS78} to diagnose
oscillations in rotating fluids. For adiabatic linear perturbations on a
perfect fluid configuration in a stationary, axisymmetric spacetime, it is
possible to introduce canonical conserved quantities. Since we only use the
canonical angular momentum $J_{\rm c}$ in this paper, we write down its
explicit form as
\begin{eqnarray}
J_{\rm c} &=&
m\int (\Re[\sigma]-m\Omega)\rho|\xi|^2 d^{3}x
\nonumber \\
&&
-2m\int \rho \varpi\Omega\cdot \Im[\xi^\varpi\xi^{\varphi *}] d^{3}x,
\label{canonJform}
\end{eqnarray}
where $\sigma$ is the eigenfrequency and $\xi^{i}$ is the Lagrangian displacement
vector. Note that the total canonical angular momentum vanishes when the
dynamical instability sets in.
\begin{figure}
\centering
\includegraphics[width=7cm]{fig3.eps}
\caption{
Gravitational waveform for 3 differentially rotating stars (see Table
\ref{tab:initial}) as seen by a distant observer located on the rotational
axis of the equilibrium star.
}
\label{fig:gw}
\end{figure}
Next we apply the method of canonical angular momentum to the linearized
oscillations of a cylinder. We prepare two $m=1$ stable modes (A-i, A-iii)
and one $m=1$ unstable mode (A-ii), summarized in Table \ref{tab:cylfrq}. We
plot the integrand of canonical angular momentum $\varpi j_{\rm c}$
\begin{equation}
j_{c} = m (\Re[\sigma]-m\Omega)\rho|\xi|^2
-2m \rho \varpi\Omega\cdot \Im[\xi^\varpi\xi^{\varphi *}],
\end{equation}
for the $m=1$ modes in Fig. \ref{fig:jcm1}. We define the corotation radius
$\varpi_{\rm crt}$ of a mode by $\Re[\sigma]-m\Omega(\varpi_{\rm crt})=0$;
there the pattern speed of the mode coincides with the local rotational
frequency of the background flow. We find that the distribution of canonical
angular momentum changes its sign around the corotation radius in the $m=1$
unstable case, being positive for $\varpi<\varpi_{\rm{crt}}$ and negative for
$\varpi>\varpi_{\rm{crt}}$. This behavior of the canonical angular momentum
suggests that the instability is related to the existence of a corotation
point inside the cylinder. We also find that the all-positive case of the
canonical angular momentum (A-iii) corresponds to a pattern speed of the mode
faster than the rotation of the cylinder at all radii, while the all-negative
case (A-i) corresponds to the opposite. We also check the low $T/|W|$ bar
instability in the cylindrical model and find that the distribution of the
canonical angular momentum density of mode B shows the same behavior as the
$m=1$ unstable mode (Fig. \ref{fig:jcm2}).
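The sign change of $j_{\rm c}$ at corotation can be reproduced with toy profiles. In the Python sketch below (invented $\Omega(\varpi)$, $\rho$, and mode amplitudes; the cross term ${\rm Im}[\xi^\varpi\xi^{\varphi *}]$ is set to zero for simplicity, so only the first term of $j_c$ is probed and the overall sign need not match the figures), the canonical angular momentum density changes sign exactly where $\Re[\sigma]=m\Omega(\varpi)$:

```python
import numpy as np

def j_c_density(varpi, sigma_re, m, Omega, rho, xi_abs2, im_cross):
    """Canonical angular momentum density j_c for toy radial profiles;
    im_cross stands for Im[xi^varpi xi^{phi*}] (set to zero below)."""
    return m*(sigma_re - m*Omega)*rho*xi_abs2 - 2.0*m*rho*varpi*Omega*im_cross

varpi = np.linspace(0.01, 1.0, 200)
Omega = 1.0/(0.1 + varpi**2)        # invented j-constant-like profile
m, sigma_re = 1, 5.0                # pattern speed inside the range of Omega
rho = np.exp(-varpi**2)
jc = j_c_density(varpi, sigma_re, m, Omega, rho, xi_abs2=1.0, im_cross=0.0)

# Corotation radius: Re[sigma] - m*Omega(varpi_crt) = 0
varpi_crt = np.sqrt(1.0/sigma_re - 0.1)
inside, outside = jc[varpi < varpi_crt], jc[varpi > varpi_crt]
print(varpi_crt, inside.max(), outside.min())  # sign change at corotation
```

With these toy inputs the first term is negative inside corotation (where $\Omega>\Re[\sigma]$) and positive outside; in the full eigenmode calculation the cross term fixes the overall sign pattern reported above.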
\begin{table}
\begin{center}
\caption{Parameters for equilibrium gaseous fluid and eigenfrequency
\label{tab:cylfrq}
}
\begin{tabular}{c c c c c}
\hline
\hline
Model &
$\Omega_{\rm c}/\Omega_{\rm s}$ \footnotemark[8] &
$T/|W|$ & $\sigma/\Omega_{\rm c}$ \footnotemark[9] &
$\varpi_{\rm{crt}}/\varpi_{\rm s}$ \footnotemark[10]
\\
\hline
A-i & 11.34 & 0.460 & $-0.245$ & ---\\
A-ii & 11.34 & 0.460 & $0.551+0.0315 i$ & 0.281\\
A-iii & 11.34 & 0.460 & $1.15$ & ---\\
B & 13.00 & 0.170 & $0.327+0.0126 i$ & 0.507\\
\hline
\hline
\end{tabular}
\footnotetext[8]{$\Omega_{\rm s}$: Surface angular velocity}
\footnotetext[9]{$\sigma$: Eigenfrequency}
\footnotetext[10]{$\varpi_{\rm crt}$: Corotation radius;
$\varpi_{\rm s}$: Surface radius}
\end{center}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=17cm]{fig4.eps}
\caption{
Distribution of canonical angular momentum density for $m=1$ unstable mode
(see Table \ref{tab:cylfrq}). Vertical dashed line represents the location
of corotation radius of the mode.
The Roman character in each panel indicates the corresponding model of the
cylindrical gaseous fluid.
Note that we normalized the distribution
of the canonical angular momentum to an appropriate value, since the
eigenfunction can be scaled arbitrarily.
}
\label{fig:jcm1}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=7cm]{fig5.eps}
\caption{
Distribution of canonical angular momentum density for $m=2$ unstable mode
(see Table \ref{tab:cylfrq}).
The Roman character in the panel corresponds to the model of the cylindrical
gaseous fluid.
Vertical dashed lines mark the locations of corotation radius of the mode.
}
\label{fig:jcm2}
\end{figure}
We furthermore investigate the $m=2$ instability of the Bardeen disk and the
classical bar instability of the Maclaurin spheroid. In these cases, the
canonical angular momentum density, which can be obtained analytically, is
zero at all cylindrical radii. Therefore the behavior of the canonical angular
momentum density shows a clear contrast between the $m=1$ and $m=2$
instabilities of fluids with a high degree of differential rotation and the
classical $m=2$ instability of uniformly rotating fluids.
Finally, we apply the method of canonical angular momentum to the nonlinear
hydrodynamics. We identify the complex eigenfrequency and the corotation
radius from the dipole or quadrupole diagnostics, which are summarized in
Table \ref{tab:freq}. Note that we read off the eigenfrequency from
those plots at the early stage of the evolution. The Eulerian perturbed
velocity is defined by subtracting the equilibrium velocity from the evolved
velocity. The Lagrangian displacement vector is extracted by using a linear
formula for the dominant mode in each case.
\begin{table}
\begin{center}
\caption{
Eigenfrequency and the corotation radius of 3 differentially rotating stars
\label{tab:freq}}
\begin{tabular}{c c c}
\hline
\hline
Model &
$\sigma$ $[\Omega_{\rm c}]$ &
$\varpi_{\rm crt}$ $[R_{\rm eq}]$
\\
\hline
I & $0.590 + 0.0896 i$ & $0.167$
\\
II & $0.284 + 0.0121 i$ & $0.492$
\\
III & $0.757 + 0.200 i$ & ---
\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
We show the snapshots of canonical angular momentum density in
Fig. \ref{fig:jc}. Since we determine the corotation radius using the
extracted eigenfrequency and the angular velocity profile at equilibrium, the
radius does not change throughout the evolution. For the low $T/|W|$ dynamical
instability, the distribution of the canonical angular momentum drastically
changes its sign around the corotation radius, and the maximum of the
canonical angular momentum density increases at the early stage of the
evolution. This means that angular momentum flows into the region inside the
corotation radius during the evolution. On the other hand, for the high
$T/|W|$ dynamical instability, which is related to the classical bar
instability, the distribution of the canonical angular momentum is smooth with
no particular feature and tends to have a positive portion outside. This means
that the canonical angular momentum flows outwards during the evolution, in
clear contrast to the low $T/|W|$ case.
From these different behaviors of the distribution of the canonical angular
momentum, we see that the mechanisms at work in the low $T/|W|$ instabilities
and in the classical bar instability may be quite different: in the former
the corotation resonance of the modes is essential, while in the latter the
instability is global.
\section{Discussion}
\label{sec:Discussion}
We have studied the nature of the dynamical one-armed spiral instability in
differentially rotating stars both with a linear eigenmode analysis and with a
hydrodynamic simulation, using the canonical angular momentum distribution.
We have found, by investigating the distribution of the canonical angular
momentum, that the one-armed spiral instability occurs around the corotation
radius of the star. We have also found, from the canonical angular momentum,
that the instability grows through the inflow of angular momentum inside the
corotation radius. This feature also holds for the dynamical bar instability
at low $T/|W|$, which is in clear contrast to the classical dynamical bar
instability at high $T/|W|$. Therefore the existence of a corotation point
inside the star plays a significant role in dynamically exciting the
one-armed spiral mode and the bar mode at low $T/|W|$.
\begin{figure}
\centering
\includegraphics[width=7cm]{fig6.eps}
\caption{
Snapshots of the canonical angular momentum distribution
$\varpi j_{\rm c}(\varpi)$ in the equatorial plane for 3 differentially
rotating stars (see Table \ref{tab:initial}). Solid, dotted, dashed, and
dash-dotted line represents the time
$t/P_{\rm c} =$(3.47, 6.93, 10.40, 13.86) for model I,
$t/P_{\rm c} =$(45.68, 56.43, 67.18, 77.97) for model II, and
$t/P_{\rm c} =$(1.10, 2.19, 3.29, 4.39) for model III, respectively.
Note that vertical line in panels I and II denotes the corotation radius of
the star (model III does not have a corotation radius). We also enlarged the
figure around the corotation radius for panels I and II, which is presented in
the right upper part of each panel. Although our method of determining the
corotation radius is not precise, we clearly find that the distribution
significantly changes around the corotation radius.
}
\label{fig:jc}
\end{figure}
Finally, we mention the feature of gravitational waves generated from this
instability. Quasi-periodic gravitational waves emitted by stars with $m=1$
instabilities have smaller amplitudes than those emitted by stars unstable to
the $m=2$ bar mode. For $m=1$ modes, the gravitational radiation is emitted
not by the primary mode itself, but by the $m=2$ secondary harmonic which is
simultaneously excited, possibly through non-linear self-coupling of the $m=1$
mode. (Remarkably, previous studies \cite{CNLB01,SBS03} found that the pattern
speed of the $m=2$ mode is almost the same as that of the $m=1$ mode, which
suggests that the former is the harmonic of the latter.) Unlike the case for
bar-unstable
stars, the gravitational wave signal does not persist for many periods, but
instead is damped fairly rapidly.
\section{Introduction}
\label{sec:introduction}
The aim of this work is to find out what affine quantization does to a classical
field theory. The simplest such theory is a free real scalar field of mass $m$. In that
case, the spectrum of physical states obtained with canonical quantization is known:
states containing many indistinguishable particles with momenta $\vec{p}_1,\vec{p}_2,
\ldots$ and energies $\sqrt{|\vec{p}_i|^2+m^2}$ (here $c=1$) obeying Bose statistics.
The simplest question to ask now is: what becomes of this if the free real scalar field
is subject to affine quantization \cite{Klauder2020c,Klauder2000} rather than
canonical quantization \citep{Dirac}? Does the system describe particles in this case
as well? If so, do they interact with one another? Working out the two-point function
of the free field in that framework should be of use to answer these questions.
The free real scalar field is well understood through canonical quantization.
The standard set of problems that can be resolved by canonical quantization
is distinct from the standard set of problems that can be resolved by affine
quantization, and one can therefore expect that an affine quantization of
the classical free real scalar field differs from its canonical quantization. The
purpose of this paper is to try to understand in what ways an affine quantization
is similar to, as well as dissimilar from, a canonical quantization. We add
that some non-free real scalar fields have already been observed and that canonical
quantization fails for several non-renormalizable fields, such as $(\phi^{12})_3$
\cite{Fantoni2020} and $(\phi^4)_4$ \cite{Fantoni2020a}. The key to that result
is the introduction of a highly unusual, additional, non-quadratic term
that is dictated by affine quantization. While affine quantization employs
an additional term, that particular term formally disappears when $\hbar\to 0$,
which makes it a plausible modification of the quadratic terms of traditional
free real scalar fields in order to extend acceptable quantization to traditional
non-renormalizable models.
The Euclidean action in canonical quantization \citep{Dirac}, in units where $\hbar=1$,
is
\begin{eqnarray} \label{eq:c-action}
S^{(c)}[\phi]=\mathop{\mathlarger{\mathlarger{\int}}}\left\{\frac{1}{2}
\sum_{\mu=0}^s\left[\frac{\partial\phi(x)}{\partial x_\mu}\right]^2
+V(\phi(x))\right\}\,d^nx,
\end{eqnarray}
with $x=(x_0,x_1,\ldots,x_s)=(x_0,\vec{x})$ for $s$ spatial dimensions and $n=s+1$ for
the number of space-time dimensions, with $x_0=ct$. We will work at $s=3$, and $V$ is
the self-interaction potential density, for which we will choose
$V(\phi)=(1/2)m^2\phi^2$, corresponding to a free theory with bare mass $m$.
The Euclidean action in affine quantization \cite{Klauder2020c,Klauder2000} is
\begin{eqnarray} \label{eq:a-action}
S^{(a)}[\phi]=\mathop{\mathlarger{\mathlarger{\int}}}\left\{\frac{1}{2}
\sum_{\mu=0}^s\left[\frac{\partial\phi(x)}{\partial x_\mu}\right]^2
+\frac{3}{8}\frac{\delta^{2s}(0)}{\phi^2(x)+
\epsilon}+V(\phi(x))\right\}\,d^nx,
\end{eqnarray}
where $\epsilon>0$ is a parameter used to regularize the ``$3/8$'' extra term (see
Appendix A in \cite{Fantoni2020}) and $\delta$ is a Dirac delta function. In this case
the Hamiltonian density contains a divergent term in the total potential density
${\cal V}(\phi)=\frac{1}{2}m^2\phi^2+\frac{3}{8}\delta^s(0)/(\phi^2+\epsilon)$ in the
continuum, but the field theory can be treated on a lattice, and the {\sl approach}
toward the continuum will be examined in this work. In fact, the path integral
needs this feature, since we have examples such as
$\int\phi^2(x) e^{- S^{(a)}[\phi]}\,{\cal D}\phi/\int e^{-S^{(a)}[\phi]}{\cal D}\phi$,
which creates a quantum version of the classical $\phi^2(x)$, namely
$\langle \psi| \hat{\phi \mkern 0mu}^2(x) |\psi\rangle$. The quantum operator
$\hat{\phi \mkern 0mu}^2(x) \sim \delta^s(0)$ and must be passed through the functional
integral, which deals with terms within $S^{(a)}[\phi]$; this leads to the fact that
the term $\phi^2(x)$ needs to be $\sim \delta^s(0)$ (at the minima of ${\cal V}$) to
handle the integration, with that factor being ``passed'' to the quantum operator term
$\hat{\phi \mkern 0mu}^2(x)$. In the $V\to 0$ limit, this model remains different from
a massless free theory due exactly to the new
$(3/8)\delta^{2s}(0)/[\phi^2(x)+\epsilon]$
interaction term (we have a ``pseudofree'' situation).
In our previous works we studied the non-renormalizable canonical cases with
$V(\phi)=(1/2)m^2\phi^2+g\phi^4$ \cite{Fantoni2020a} in $s=3$ and
$V(\phi)=(1/2)m^2\phi^2+g\phi^{12}$ \cite{Fantoni2020} in $s=2$, where $g$ is the bare
coupling constant, and we showed that the corresponding affine cases are indeed
renormalizable.
Monte Carlo (MC) \cite{Kalos-Whitlock,Metropolis} is the numerical method of choice for
multidimensional integrals in high dimension $D$ (it supersedes the traditional
integration methods, like the trapezoidal rule, the Simpson rule, $\ldots$, which are
based on the knowledge of the $\alpha^{\rm th}$ derivative of the integrand, already
for $D>2\alpha$) and is therefore especially useful to compute path integrals. We will
use it to study the two-point function of the Euclidean action of a real scalar field
in affine quantization. Our estimates of the path integrals will in general be subject
to three sources of numerical uncertainty: statistical errors, the space-time
discretization, and finite-size effects.
Of these, the statistical errors scale like $M^{-1/2}$, where $M$ is the computer time;
the discretization of space-time is responsible for the distance from the continuum
limit (which corresponds to a lattice spacing $a\to 0$); and the finite-size effects
stem from the necessity of approximating the infinite space-time system with one in a
periodic box of volume $L^n$, with $L=Na$ the box side, subject to $N$
discretization points.
The work is organized as follows: In section \ref{sec:model} we derive the lattice
formulation of the field theory needed in the treatment on the computer, in section
\ref{sec:observables} we describe our computer experiment and introduce the observables
that will be measured during our simulations, in section \ref{sec:results} we present
our results, and section \ref{sec:conclusions} is for final remarks.
\section{The lattice formulation of the field-theory model}
\label{sec:model}
We used a lattice formulation of the field theory. The theory considers a real
scalar field $\phi$ taking the value $\phi(x)$ on each site of a periodic,
hypercubic, $n$-dimensional lattice of lattice spacing $a$ and periodicity $L=Na$.
The canonical action for the field, Eq. (\ref{eq:c-action}), is then approximated by
\begin{eqnarray} \label{eq:pa}
S^{(c)}[\phi]\approx\left\{\frac{1}{2}\sum_{x,\mu}a^{-2}
\left[\phi(x)-\phi(x+e_\mu)\right]^2+\sum_xV(\phi(x))\right\}a^n,
\end{eqnarray}
where $e_\mu$ is a vector of length $a$ in the $+\mu$ direction and we are at a
temperature $T=1/Na$, in units where the Boltzmann constant $k_B=1$. An analogous
expression holds for the affine action of Eq. (\ref{eq:a-action}), where the Dirac
delta function is replaced by $\delta^{2s}(0)\to a^{-2s}$.
We will use this ``primitive approximation'' for the action, even if it can be improved
in several ways \citep{Ceperley1995} in order to reduce the error due to the
space-time discretization. In reaching the expression (\ref{eq:pa}) we neglected the
term $\propto a^{2n}$ due to the commutator of the kinetic and potential parts of the
Hamiltonian in the Baker--Campbell--Hausdorff formula; in reaching the path integral
expression this is justified by the Trotter formula.
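For reference, the primitive lattice action of Eq. (\ref{eq:pa}) can be evaluated in a few lines; the sketch below (Python, with periodic boundary conditions implemented via \texttt{np.roll}; the parameter values are illustrative) checks the trivial limit of a constant field, where only the mass term survives:

```python
import numpy as np

def canonical_action(phi, a, m):
    """Primitive lattice approximation of the canonical Euclidean action;
    phi lives on a periodic hypercubic lattice (array of rank n = s+1)."""
    n = phi.ndim
    kinetic = sum(0.5*np.sum((phi - np.roll(phi, -1, axis=mu))**2)/a**2
                  for mu in range(n))
    potential = np.sum(0.5*m**2*phi**2)
    return (kinetic + potential)*a**n

N, a, m = 4, 0.2, 1.0
phi = np.full((N, N, N, N), 0.3)    # constant field: kinetic term vanishes
S = canonical_action(phi, a, m)
print(S)                            # equals 0.5*m^2*0.3^2 * N^4 * a^4
```

The affine action is obtained by adding $\frac{3}{8}a^{-2s}/(\phi^2+\epsilon)$ to the potential density on each site.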
The affine regularization of the previous paragraphs, leading to $\vec{x}\to \mathbf k a$,
where $a>0$ is the tiny lattice spacing, is helpful in our analysis but need not be
the final regularization. In particular, the new term
$\phi(x_0,\vec{x})^{-2}\to \phi_\mathbf k^{-2}$ leads to a divergence since, at a fixed
value of $\mathbf k$, the integral over the region $|\phi_\mathbf k|<1$ diverges:
$\int (\phi_\mathbf k)^{-2}\;d\phi_\mathbf k =\infty$. This behavior can
be overcome by an additional form of regularization.\footnote{The additional
regularization is essentially taken from Eq.~(14) in \cite{Klauder2020d}.} Instead of
just $\phi_\mathbf k$ we choose $2s$ additional terms that are nearest neighbors to $\mathbf k$.
These additional terms enter in the form
$\phi_\mathbf k^{-2}\to [\;\sum_\mathbf l \,J_{\mathbf k,\mathbf l}\, \phi_{\mathbf l}^2\,]^{-1}$, where
$J_{\mathbf k,\mathbf l}=(2s+1)^{-1}$ for $\mathbf l=\mathbf k$ and for each of the $2s$ nearest
neighbors $\mathbf l$ of $\mathbf k$. This averaging of $\phi_\mathbf k$ also leads to a finite integration
\begin{eqnarray}
\int\cdots\int \;\left[\,\sum_{\mathbf l}\,J_{\mathbf k,\mathbf l}\, \phi_{\mathbf l}^2\,\right]^{-1}\;\prod_{\mathbf l}\,d\phi_\mathbf l<\infty,
\end{eqnarray}
which is finite, as determined by choosing $\phi_\mathbf l=r \,u_\mathbf l$ such that
$\sum_{\mathbf l} u_\mathbf l^2 <\infty$, leading to the integral $U\int \,r^{-2} r^{2s}\, dr<\infty$ for
all $s>0$, where $U<\infty$ accounts for the remaining finite integrations.
Clearly, this procedure of averaging the expression $\phi_\mathbf k^{-2}$ offers a smoother
regularization, and we shall also adopt it for our MC studies. We will refer to
this affine regularization as term B, and to the one discussed earlier, obtained by
choosing $J_{\mathbf k,\mathbf l}=\delta_{\mathbf k,\mathbf l}$, as term A.
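The practical effect of term B is easy to demonstrate numerically: a single zero of the field no longer produces an arbitrarily large value of the regularized $\phi^{-2}$. A minimal Python sketch (an illustrative $5^3$ spatial lattice with our own toy field values, not simulation data):

```python
import numpy as np

def regularizer_B(phi2):
    """Term-B denominator: average of phi^2 over a site and its 2s nearest
    neighbors on a periodic spatial lattice (J_{k,l} = 1/(2s+1))."""
    s = phi2.ndim
    avg = phi2.copy()
    for mu in range(s):
        avg += np.roll(phi2, +1, axis=mu) + np.roll(phi2, -1, axis=mu)
    return avg/(2*s + 1)

phi = np.ones((5, 5, 5))
phi[2, 2, 2] = 0.0                       # a single zero of the field
termA = 1.0/(phi**2 + 1e-10)             # term A: spikes to ~1e10 at the zero
termB = 1.0/regularizer_B(phi**2)        # term B: stays O(1) everywhere
print(termA.max(), termB.max())
```

At the zero the term-B denominator is $6/7$ (six unit neighbors out of seven sites), so the regularized term stays of order one.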
The vacuum expectation of a functional observable ${\cal O}[\phi]$ is
\begin{eqnarray} \label{eq:expectation}
\langle{\cal O}\rangle\approx\frac{\int{\cal O}[\phi]\exp(-S[\phi])\,\prod_{x}d\phi(x)}
{\int\exp(-S[\phi])\,\prod_{x}d\phi(x)},
\end{eqnarray}
for a given action $S$.
We will approach the continuum limit by choosing a fixed $L$ and increasing the
number of discretization points $N$ of each component of space-time, so that the
lattice spacing $a=L/N\to 0$. To make contact with the continuum limit, two conditions
must be met: $a \ll 1/m \ll L$, where $1/m$ is the Compton wavelength.
\section{Simulation details and relevant observables}
\label{sec:observables}
We want to determine the two-point function
\begin{eqnarray} \label{eq:Fxy}
K(x,y)=\langle[\phi(x)-\langle\phi(x)\rangle][\phi(y)-\langle\phi(y)\rangle]\rangle=\langle\phi(x)\phi(y)\rangle-\langle\phi(x)\rangle^2,
\end{eqnarray}
Replacing $x$ by $x+k$, with $k=a w_n$, $w_n=(n_0,n_1,\ldots,n_s)$, and
$n_\mu\in{\rm Z}\hskip-.3em{\rm Z}$, amounts to a mere relabeling of the lattice points. Hence, due to
translational invariance, $K(x,y)$ can only depend on the difference between the
coordinates of the two points and we can define
\begin{eqnarray} \label{eq:tp}
D(z)=\frac{1}{L^n}\sum_x K(x,x+z)a^n.
\end{eqnarray}
For the massless free theory with $V\to 0$ in canonical quantization, we find that in
non-periodic space-time (at zero temperature)
\begin{eqnarray}
D'(z)=\int \frac{e^{-ip\cdot z}}{p^2}\,\frac{d^np}{(2\pi)^n}=
\left\{\begin{array}{ll}
-|z|/2 & n=1 \\
-\ln(|z|/l)/2\pi & n=2 \\
\frac{1}{4\pi|z|} & n=3 \\
\frac{1}{4\pi^2|z|^2} & n=4
\end{array}\right.,
\end{eqnarray}
where $|z|=\sqrt{z_0^2+z_1^2+\ldots+z_s^2}$ and $l$ is a length. This shows how the
massless field generates long-range interactions.
For a massive free theory with $V(\phi)=\frac{1}{2}m^2\phi^2(x)$ in canonical
quantization, we find that in non-periodic space-time (at zero temperature) with $n=4$
\begin{eqnarray}
D'(z)=\int \frac{e^{-ip\cdot z}}{p^2+m^2}\,\frac{d^np}{(2\pi)^n}=
\frac{m\,{\rm K}_1(m|z|)}{4\pi^2|z|},
\end{eqnarray}
where $m$ is the mass and ${\rm K}_1$ is a modified Bessel function of the second kind.
In periodic space-time (at a temperature $T=1/Na$)
\begin{eqnarray} \label{eq:tpt}
D(z)=\sum_{w_n} D'(z+Lw_n),
\end{eqnarray}
where the sum can be restricted by an infrared cutoff $irc$ such that
$-irc\le n_\mu\le irc$ (the cutoff has no physical significance and only serves
to reach a given numerical accuracy). If we remove the cutoff, the function
diverges in the massless case.
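The exact expressions above can be evaluated numerically; the Python sketch below (numpy only, with ${\rm K}_1$ computed from its integral representation ${\rm K}_1(x)=\int_0^\infty e^{-x\cosh t}\cosh t\,dt$, and illustrative values of $m$, $L$, and $irc$) implements both $D'(z)$ and the image sum of Eq. (\ref{eq:tpt}):

```python
import numpy as np
from itertools import product

def bessel_K1(x):
    """K_1(x) = int_0^inf exp(-x cosh t) cosh t dt, by the trapezoidal rule
    (a numpy-only stand-in for a library Bessel routine)."""
    t = np.linspace(0.0, 20.0, 8001)
    f = np.exp(-x*np.cosh(t))*np.cosh(t)
    return float(np.sum(f[1:] + f[:-1])*0.5*(t[1] - t[0]))

def d_free(z, m):
    """Infinite-volume massive free two-point function, n = 4."""
    return m*bessel_K1(m*z)/(4.0*np.pi**2*z)

def d_periodic(z_vec, m, L, irc=2):
    """Periodic-box two-point function: image sum with infrared cutoff irc."""
    rng = range(-irc, irc + 1)
    return sum(d_free(np.linalg.norm(np.asarray(z_vec, float) + L*np.array(w, float)), m)
               for w in product(rng, repeat=4))

z = [0.5, 0.0, 0.0, 0.0]
print(d_free(0.5, 1.0), d_periodic(z, 1.0, L=3.0, irc=1))
```

For $m=1$, $L=3$ the periodic images add a small positive correction to the infinite-volume value, which decays exponentially with $mL$.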
Our MC simulations use the Metropolis algorithm \citep{Kalos-Whitlock,Metropolis}
to calculate the ensemble average of Eq. (\ref{eq:expectation}), which is an
$N^n$-dimensional integral. The simulation is started from the initial condition
$\phi=0$. One MC step consists of a random displacement of each one of the $N^n$
values $\phi(x)$ as follows:
\begin{eqnarray} \label{eq:move}
\phi\rightarrow\phi+(2\eta-1)\delta,
\end{eqnarray}
where $\eta$ is a uniform pseudo-random number in $[0,1]$ and $\delta$ is the
amplitude of the displacement. Each one of these $N^n$ moves is accepted if
$\exp(-\Delta S)>\eta$, where $\Delta S$ is the change in the action due to the move
(it can be calculated efficiently by considering how the kinetic and
potential parts change under the displacement of a single $\phi(x)$),
and rejected otherwise. The amplitude $\delta$ is chosen so as to keep the
acceptance ratio as close as possible to $1/2$, and is kept constant during the
evolution of the simulation. One simulation consists of
$M$ MC steps. The statistical error on the average $\langle{\cal O}\rangle$ will then
depend on the correlation time $\tau_{\cal O}$ necessary to decorrelate the property
${\cal O}$, and will be determined as $\sqrt{\tau_{\cal O}\sigma_{\cal O}^2/(MN^n)}$,
where $\sigma_{\cal O}^2$ is the intrinsic variance of ${\cal O}$.
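A direct, unoptimized transcription of this Metropolis scheme for the canonical free action reads as follows (Python; the lattice size, $\delta$, and the seed are illustrative, and no equilibration or error analysis is attempted):

```python
import numpy as np

def metropolis_sweep(phi, a, m, delta, rng):
    """One MC step: propose phi -> phi + (2*eta - 1)*delta at every site and
    accept with probability min(1, exp(-Delta S)) (canonical free action).
    Returns the acceptance ratio of the sweep."""
    n = phi.ndim
    accepted = 0
    for idx in np.ndindex(phi.shape):
        old = phi[idx]
        new = old + (2.0*rng.random() - 1.0)*delta
        # Local Delta S: the on-site mass term plus the 2n kinetic links
        # touching this site (each link carries 0.5*(dphi)^2 * a^(n-2)).
        dS = 0.5*m**2*(new**2 - old**2)*a**n
        for mu in range(n):
            for step in (+1, -1):
                j = list(idx)
                j[mu] = (j[mu] + step) % phi.shape[mu]
                nb = phi[tuple(j)]
                dS += 0.5*((new - nb)**2 - (old - nb)**2)*a**(n - 2)
        if dS <= 0.0 or np.exp(-dS) > rng.random():
            phi[idx] = new
            accepted += 1
    return accepted/phi.size

rng = np.random.default_rng(0)
phi = np.zeros((4, 4, 4, 4))
acc = metropolis_sweep(phi, a=0.2, m=1.0, delta=1.0, rng=rng)
print(acc)
```

In production one would tune $\delta$ until the returned acceptance ratio is near $1/2$, and add the affine ``$3/8$'' term to the on-site part of $\Delta S$.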
\section{Simulation results}
\label{sec:results}
We worked in units where $c=\hbar=k_B=1$. We chose the regularization parameter of
the affine-quantization term A to be $\epsilon=10^{-10}$.
\footnote{Note that we could as well choose a regularization putting hard walls at
$\phi=\pm\varepsilon$, therefore rejecting MC moves whenever
$\phi\in[-\varepsilon,\varepsilon]$.}
For a massive free theory, $V(\phi)=\frac{1}{2}m^2\phi^2$, in canonical quantization
(\ref{eq:c-action}) with $m=1, N=15, L=3, a=L/N=0.2$, we obtained the result shown in
Fig. \ref{fig:tp-c-N15L3m1} where we compare the MC results with the exact expression
of Eq. (\ref{eq:tpt}) with an infrared cutoff of $irc=2$ which is sufficient for an
accuracy of $10^{-3}$. The run was $M=10^6$ MC steps long. The figure shows good
agreement between the MC and the exact expression except at the origin due to the
space-time discretization.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{fig-tp-c-g0N15L3m1.eps}
\end{center}
\caption{Two-point function $D(z)$ of Eq. (\ref{eq:tp}), for a free
real scalar field subject to canonical quantization with a self-interaction potential
density of the form $V(\phi)=\frac{1}{2}m^2\phi^2$ in Eq. (\ref{eq:c-action}) with
$m=1, N=15, L=3, a=L/N=0.2$. We compare with the analytic exact expression of Eq.
(\ref{eq:tpt}) with an infrared cutoff of $irc=2$. A logarithmic scale is used on the
$y$-axis.}
\label{fig:tp-c-N15L3m1}
\end{figure}
For a free massive theory $V(\phi)=\frac{1}{2}m^2\phi^2$ in affine quantization
(\ref{eq:a-action}) using term A, the total self-interaction is a double well with a
spike barrier at $\phi=0$. We tuned the width of the displacement, $\delta$ in Eq.
(\ref{eq:move}), so that the random walk in the $\phi(x)$ samples the probability
distribution $\exp(-S[\phi])$ most efficiently, with short equilibration times. In Fig.
\ref{fig:tp-a-N15L3m1} we show the result for a free real scalar field subject to
affine quantization with a total self-interaction of the form
${\cal V}(\phi)=\frac{1}{2}m^2\phi^2+\frac{3}{8}a^{-2s}/(\phi^2+\epsilon)$
with $m=1, N=15, L=3, a=L/N=0.2$, and $\epsilon=10^{-10}$, after discarding the first
equilibration MC steps of a run of $M=2.5\times 10^6$ steps. During the
simulations we also calculated the renormalized mass $m_R$ and the renormalized
coupling constant $g_R$ \cite{Fantoni2020}. As we can see from the figure, the symmetry
$z \to L-z$ of the two-point function is preserved within the error bars.
The minima of the classical ${\cal V}$ is at $\phi=\pm\Phi$ with
$\Phi^2=-\epsilon+\sqrt{3}/(2a^3m)$ which diverges in the continuum limit $a\to 0$
(this of course does not happen in the harmonic oscillator case \cite{Gouba2020} which
is independent of the lattice spacing). Moreover the minimum of the action
$L^{s+1}m(\sqrt{3}-m\epsilon a^s)/2a^s$ also diverges, both in the continuum limit at
finite volume ($ma \to 0$) and in the infinite volume limit at fixed lattice spacing
($mL \to \infty$) (this also happens for the affine harmonic oscillator \cite{Gouba2020}
which has a well defined zero temperature limit). The corresponding contribution to the
vacuum expectation only occurs together with the normalization constant in front of the
path integral and drops out in quantities of physical interest (as long as the system
is not placed in a curved geometry, i.e. in a gravitational field - there, the
cosmological constant does have physical significance).
The symmetry $\phi\to-\phi$ is broken in the simulations (see Appendix \ref{app:Phi})
and as a result $\langle\phi(x)\rangle$ is different from zero. The action is
$S=\bar{K}+\bar{V}$, where $\bar{K}$ is the kinetic term and $\bar{V}$ the total
potential term. Imagine now that we are in a configuration where all the $N^n$
components, $\phi(x)$, are around $+\Phi$. Migrating a single component, $\phi(x')$,
to the other minimum at $-\Phi$ has almost no cost in the
potential, $\Delta\bar{V}\approx 0$, but a big cost in the kinetic term
between ``neighboring'' $x$, resulting in a big $\Delta\bar{K}$ (as long as the
distance between the two minima, $2\Phi$, which diverges in the continuum limit, is
large). As a consequence $\exp(-\Delta S)$ will be very small and the move will be
almost surely rejected according to the Metropolis rule. Moreover, once the system
reaches the phase with all $\phi(x)$ in one of the minima, it is very unlikely that a
single $\phi(x')$ will move to the other minimum but it cannot be excluded, in
principle. If this happens one has a situation where the field is around $+\Phi$ at all
$x$ except at $x'$ where it is around $-\Phi$. But we can easily see that now it would
be statistically favorable for the single field on the left to rejoin the fields on the
right, rather than for all the fields on the right to join the field on the left. Exactly the
same holds for affine quantization (\ref{eq:a-action}) using term B, since due to the
kinetic energy term in the action the fields at neighboring points tend to assume
similar values. On the other hand this would not
hold for an {\sl ultralocal} \cite{Klauder2020b} theory where we could have the field
visiting both wells at $\pm\Phi$ but only at not ``neighboring'' times, resulting in a
vanishing $\langle\phi(x)\rangle$. Apart from this the shape of the two-point function
is qualitatively similar to the one of the {\sl covariant} case of Eq.
(\ref{eq:a-action}). In addition in a covariant {\sl complex} field one could go
``slowly'' ``around'' the ``mountain'' at $\phi=0$ with no need of ``jumps''.
For our choice of the parameters we have $\Phi\approx 10.404$ with $\Phi^2\approx
108.253$. The results in Fig. \ref{fig:tp-a-N15L3m1} indicate that the quantization
increases this number by about 10\%. The minimum of $D(z)$ is reached around $|z|=L/2$.
The two-point function is qualitatively similar to the one of the free field. This is
supported by recent results on a one dimensional harmonic oscillator treated with
affine quantization \cite{Gouba2020} where it is shown that the eigenvalues are still
equally spaced. A non-linear fit of the MC data (removing the first point at $|z|=0$)
with the function $D_{m_D}(z)$ where $D_{m_D}$ is the two-point function of a free
field of mass $m_D$ of Eq. (\ref{eq:tpt}) with an $irc=2$, taking $m_D$ as the only fit
parameter, gives $m_D\approx 0.9$. The result of the fit is also shown in Fig.
\ref{fig:tp-a-N15L3m1}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{fig-tpfit-a-e10-o-g0N15L3m1.eps}
\end{center}
\caption{Two-point function $D(z)$ of Eq. (\ref{eq:tp}), for a free
real scalar field subject to affine quantization with term A and a self-interaction
potential density of the form $V(\phi)=\frac{1}{2}m^2\phi^2$ in Eq. (\ref{eq:a-action})
with $m=1, N=15, L=3, a=L/N=0.2$, and $\epsilon=10^{-10}$. Also shown is the result of
a non-linear fit of the data (except the first point at $|z|=0$) with the function
$D_{m_D}(z)$ where $D_{m_D}$ is the two-point function of a free field of mass $m_D$
of Eq. (\ref{eq:tpt}) with an $irc=2$, taking $m_D$ as the only fit parameter.}
\label{fig:tp-a-N15L3m1}
\end{figure}
For a free real scalar field subject to affine quantization with term A, in $n=4$
space-time dimensions in a volume $3^4$ with a regularization parameter
$\epsilon=10^{-10}$, we studied the approach to the continuum limit, $N\to\infty$ (by
choosing values of $N$ up to 15), and the dependence on the bare mass $m$ of the five quantities
$m_R, g_R, \langle\phi(x)\rangle^2, m_D$, and $D(0)$. The results are shown in Table
\ref{tab:study}. From the table we see that, moving towards the continuum limit,
$m_D\approx m$ while $m_R$ becomes small, due to the fact that when the field picks up an
expectation value, the Fourier transform of the field $\widetilde{\phi}(0)$ picks up a
contribution proportional to the volume of the box. Moreover, for the same reason,
$g_R\approx 2$. The Table also shows the value of $\Phi^2$ and of
$\langle\phi(x)\rangle^2$ to be compared. We see that the second is always larger than
the first one by a percentage increasing with increasing $m$ and with increasing $a$.
The value of $D(0)$ is increasing with a decrease of the lattice spacing $a$, signaling
a divergence in the continuum limit.
\begin{table}[htbp]
\caption{We determined, for a free real scalar field subject to affine quantization
with term A, in $n=4$ space-time dimensions, the dependence of $m_R, g_R, m_D$, and
$D(0)$ on the number of one dimensional discretization points $N$ and the bare mass
$m$ at $L=3$ with a regularization parameter $\epsilon=10^{-10}$. The runs were
$M=5\times 10^6$ MC steps long. The value of $\Phi^2$ and of $\langle\phi(x)\rangle^2$
are also shown for comparison.}
\label{tab:study}
\vspace{.5cm}
{\footnotesize
\begin{tabular}{||c|c||cccccc||}
\hline
\hline
$N$ & $m$ & $m_R$ & $g_R$ & $\Phi^2$ & $\langle\phi(x)\rangle^2$ & $m_D$ & $D(0)$\\
\hline
\hline
\multirow{3}{*}{15} & 1 & 0.0122(3) & 1.9979(1) & 108.2 & 120.6(1) & 0.934 & 3.69(6)\\
& 2 & 0.00646(4) & 1.99983(3) & 54.13 & 65.7(1) & 1.785 & 3.32(6)\\
& 3 & 0.0186(6) & 1.99925(8) & 36.08 & 45.85(7) & 3.009 & 2.97(6)\\
\hline
\multirow{3}{*}{12} & 1 & 0.01053(5) & 1.99958(5) & 55.43 & 63.25(8) & 0.302 & 2.38(5)\\
& 2 & 0.00967(9) & 1.99992(2) & 27.71 & 34.54(5) & 2.467 & 2.00(5)\\
& 3 & 0.0095(1) & 1.99905(8) & 18.47 & 24.00(4) & 5.483 & 1.66(5)\\
\hline
\multirow{3}{*}{10} & 1 & 0.01417(4) & 1.999464(4) & 32.07 & 37.46(5) & 0.587 & 1.58(3)\\
& 2 & 0.0124(1) & 1.99995(1) & 16.04 & 20.43(3) & 3.789 & 1.29(3)\\
& 3 & 0.0119(2) & 1.99996(1) & 10.69 & 14.03(2) & 5.647 & 1.02(3)\\
\hline
\hline
\end{tabular}
}
\end{table}
Summarizing, the two-point function for $\phi - \langle\phi\rangle$
looks similar to the two-point function of a free field with mass $m_D$. In other
words, the correlation length of the affine quantum field theory is $m/m_D$ times the
Compton wavelength of the canonical quantum theory of the free scalar field. Our
results seem to suggest that, going towards the continuum, the affine model is
approaching a free field with the same bare mass.
The value of $m_D$ is not easy to understand, however. If the action is treated at the
classical level, small deviations from the minimum are determined by the curvature of
the total potential, $m_c^2 = d^2{\cal V}/d\phi^2$ at $\phi = \Phi$.
The mass term contributes $m^2$ and the ``3/8'' term yields a contribution that is 3
times larger. For $\epsilon = 0$, the mass relevant for the relation between frequency
of the waves and wavelength is: $m_c = 2 m$ independently of $a$.
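This curvature claim can be verified with a quick finite-difference check (an illustrative consistency test, not part of the paper's analysis); the function names and test values below are ours.

```python
import math

# Consistency check (illustration only): the curvature of
# V(phi) = m^2 phi^2 / 2 + (3/8) c / phi^2 at its positive minimum
# equals (2m)^2 for any c > 0 (here c plays the role of a^{-2s}).
def curvature_at_minimum(m, c, h=1e-4):
    V = lambda p: 0.5 * m * m * p * p + 0.375 * c / (p * p)
    Phi = (0.75 * c) ** 0.25 / math.sqrt(m)   # from V'(Phi) = 0: Phi^4 = 3c/(4 m^2)
    return (V(Phi + h) - 2 * V(Phi) + V(Phi - h)) / h ** 2

for m, c in [(1.0, 1.0), (2.0, 0.04), (3.0, 125.0)]:
    assert abs(curvature_at_minimum(m, c) - 4 * m * m) < 1e-3 * m * m
print("m_c = 2m, independently of c")
```

The mass term contributes $m^2$ to the curvature and the ``3/8'' term contributes $3m^2$ at the minimum, for any value of $c$, which is the content of the assertion.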
In Fig. \ref{fig:tp-a} we show $D(z)$ as obtained for $m=1$
($L=3, \epsilon=10^{-10}$) and three choices of $N$, in the long simulations of the
Table \ref{tab:study}. One can then see the approach to the continuum of the two-point
function of the affine model.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{fig-tp.eps}
\end{center}
\caption{(color online) Two-point function $D(z)$ of Eq. (\ref{eq:tp}) for a free
real scalar field subject to affine quantization with a self-interaction potential
density of the form $V(\phi)=\frac{1}{2}m^2\phi^2$ in Eq. (\ref{eq:a-action}) with
$m=1, L=3, \epsilon=10^{-10}$ and increasing $N=10,12,15$.}
\label{fig:tp-a}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In a recent work \cite{Fantoni2020a} we studied the case of a non-renormalizable
$(\phi^4)_4$ canonical theory (where the self-interaction potential is
$V(\phi)=g\phi^4$) in four space-time dimensions and proved through MC that the theory
becomes renormalizable if one treats the field through affine quantization.
In the present work we observed that for $g=0$ the simplest question to ask was: does
the affine system describe particles as the canonical one does? If so, do they interact
with one another?
We tried to answer these questions by looking at the two-point function. What we proved
through our MC analysis was that the affine case with $g=0$ has to be considered like a
``sort'' of free-theory of ``quasiparticles'' (in the sense of Lev Landau in his theory
for Fermi liquids) where the ``3/8'' term just offers itself like a sort of
``collective excitation'' term. In this case the $\phi\to-\phi$ symmetry is broken and
the field acquires a non-zero vacuum expectation. The two-point function nonetheless
has all the same features as those of a free scalar field of similar mass, in the
continuum limit.
One shortcoming of the affine formulation of the field theory is the divergence (in the
continuum) of the vacuum expectation value of the field which generates the
disconnected contribution to the Green's functions. The path integral is fully
determined by the local properties of the field that enter through the action. The
expectation value of the field does not represent a local property of the field. We
cannot imagine how one could possibly get rid of it. In the Standard Model, however,
one of the crucial properties of the Higgs fields is that they pick up a vacuum
expectation value $v$. The masses of the W- and Z-bosons as well as those of the
leptons and quarks are proportional to $v$. In order to remedy this drawback one
should perform the following scaling $\phi\to a^{-s/2}\phi$ (together with $g\to a^sg$
in a possible interaction term of the form $g\phi^4$) which would bring about an
additional factor $a^{-s}$ multiplying the action. This scaling proved successful in
our forthcoming work on the affine quantization of a Higgs complex scalar field
\cite{Fantoni2021}.
The present paper is intended to confirm that both canonical and affine procedures lead
to the desired and expected behavior for quadratic potential terms. A later paper
\cite{Fantoni2021} will deal with quartic potential terms with canonical
and affine procedures.
\section{Introduction}
Recently, there has been much theoretical interest in the possibility of using an adiabatic quantum optimization technique to solve NP-complete and NP-hard problems \cite{bapst, dickson2011}. This is due to the following trick: suppose we have a quantum Hamiltonian $H_{\mathrm{P}}$ whose ground state corresponds to the solution of a problem of interest, and another Hamiltonian $H_0$, whose ground state is ``easy'' (both to find and to prepare in an experimental setup). If we prepare a quantum system in the ground state of $H_0$ and then adiabatically change the Hamiltonian for a time $T$ according to \begin{equation}
H(t) = \left(1-\frac{t}{T}\right)H_0 + \frac{t}{T} H_{\mathrm{P}},
\end{equation}then, if $T$ is large enough, the quantum system will remain in the ground state for all times, and thus at time $T$ its state will correspond to a solution of the problem.
In this paper, we will focus on the first part of the problem: finding a quantum Hamiltonian, $H_{\mathrm{P}}$, which can encode the ground state of a problem of interest. In particular, we will look for \emph{classical Ising} Hamiltonians, which are extremely simple to interpret as quantum Hamiltonians by simply converting each classical spin variable into a qubit, and have the added advantage that they may be more adaptable to quantum computing hardware. In each case, there are an infinite number of possible choices of $H_{\mathrm{P}}$ which can be used, although many of them will be simply related to the others by, for example, slightly tweaking some couplings or magnetic fields. We will not ask the question of how robust the Hamiltonians are to slight changes in their couplings or magnetic fields, and simply content ourselves with finding a single Hamiltonian for each problem.
Analogies between Ising Hamiltonians and NP problems have been frequently studied in the past \cite{mezard, fu1986}, although many of the NP problems previously studied are very straightforward to phrase in this manner. This paper can be thought of as the result of a challenge to a theoretical physicist to show how ``all of the famous NP problems''\footnote{No offense to anyone whose problems have been left out.} \cite{garey} can be written down as Ising models. The ``subtle'' techniques in this paper are primarily of a few flavors, which roughly correspond to tackling the following three issues through a (polynomial-size) expansion of the state space: problems with inequalities as constraints (for example, $n\ge 1$, as opposed to $n=1$), problems where the natural variables have $q$ states (with $q>2$), and problems which ask global questions about graphs, such as connectivity. The methods we use to tackle these problems with Ising models generalize very naturally to more complicated problems. However, it is sometimes the case that we seem to get ``lucky'' and find solutions which do not require expansions of the state space: for example, we present a solution to a tricky minimax graph problem by a separation of three energy scales, although this trick does not seem to generalize as well as the others.
There has been debate about whether or not these algorithms would actually be useful for a quantum computer \cite{dickson2011, bapst}, and this paper does not settle this debate. However, it may be useful in the distant future to have a quantum computer solve a large NP problem without the aid of a classical computation, and so therefore finding quantum Hamiltonians whose ground states correspond directly to the solutions of NP problems may be prove useful.
In a sense, the goal of this paper is trivial, as Ising spin glasses are known to be NP-hard \cite{Barahona1982}, so it is natural to suspect connections with all other NP problems. This triviality is exaggerated by the fact that a simple NP-complete number partitioning problem is simply related to an Ising spin glass, so if there are polynomial-time maps of all other NP-complete problems into this problem, we expect polynomial-size Ising formulations. However, many of these direct constructions do not seem to be written down in a manner accessible for physicists or quantum algorithm designers, if they have been written down at all. Thus, the goal of this paper is to present these constructions in one place, in a self-contained manner, starting from trivial constructions and moving onwards to significantly more subtle constructions, and culminating in an Ising model for each of Karp's 21 NP-complete problems.
\section{Partitioning Problems}
The simplest problems to phrase as Ising models are partitioning problems: these maps are celebrated and well-known \cite{mezard, fu1986}. For completeness, we review them here. Some of the techniques used in these models will prove useful in more nontrivial constructions.
\subsection{Number Partitioning}
Number partitioning asks the following: given a set $S$ of numbers, which we will take to be natural numbers, is there a partition of this set of numbers into two disjoint subsets $R$ and $S-R$, such that the sum of the elements in both sets is the same? This can be phrased trivially as an Ising model as follows. Let $n_i$ ($i=1,\ldots, N=|S|$) describe the numbers in set $S$, and let \begin{equation}
H= A\left(\sum_{i=1}^N n_is_i\right)^2
\end{equation}be an energy functional, where $s_i=\pm 1$ is an Ising spin variable. Here $A>0$ is some positive constant. Typically, such constants are scaled to 1 in the literature, but for simplicity we will retain them, since in many formulations a separation of energy scales will prove useful and retaining each scale can make it easier to follow.
It is clear that if there is a solution to the Ising model with $H=0$, then there is a configuration of spins where the sum of the $n_i$ for the $+1$ spins is the same for the sum of the $n_i$ for the $-1$ spins. Thus, if the ground state energy is $H=0$, there is a solution to the number partitioning problem. Degeneracies in this ground state which do not correspond to $s_i^* \rightarrow -s_i^*$ correspond to distinct solutions to the partitioning problem. Furthermore, if the ground state has $H>0$, we know that there are no solutions to the partitioning problem, but the ground state we do find is (one of) the best possible solutions, in the sense that it minimizes the mismatch.
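As an illustration (not part of any quantum algorithm), this criterion can be checked by brute force on tiny instances; the instances below are our own.

```python
from itertools import product

# Brute-force illustration: the ground-state energy of
# H = A * (sum_i n_i s_i)^2 is zero exactly when a balanced partition exists.
def ground_energy(nums, A=1.0):
    return min(A * sum(n * s for n, s in zip(nums, spins)) ** 2
               for spins in product((-1, 1), repeat=len(nums)))

assert ground_energy([3, 1, 1, 2, 2, 1]) == 0    # e.g. {3, 2} vs {1, 1, 2, 1}
assert ground_energy([5, 1, 1]) == 9             # best mismatch is 3; energy 3^2
print("number-partitioning checks pass")
```

The second instance also illustrates the closing remark: when no perfect partition exists, the ground state still encodes the best achievable split.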
\subsection{Graph Partitioning}
Graph partitioning is the classic example of a map between the physics of Ising spin glasses and NP problems. We will phrase it in a slightly different form than the original \cite{fu1986}, which employed a hard constraint on the phase space. We will want none of our formulations to do this, as this may hamper their ability to be used in quantum computing applications.
Let us consider an undirected graph $G=(V,E)$, with an even number $N=|V|$ of vertices. We ask: what is a partition of the set $V$ into two subsets of equal size $N/2$ such that the number of edges connecting the two subsets is minimized? This is known to be an NP-complete problem. As before, we will place an Ising spin $s_v=\pm 1$ on each vertex $v\in V$ of the graph, and we will let $+1$ and $-1$ denote the vertex being in either the $+$ set or the $-$ set. We solve this with an energy functional consisting of two components: \begin{equation}
H=H_A+H_B
\end{equation}where \begin{equation}
H_A = A\left(\sum_{i=1}^N s_i\right)^2
\end{equation}is an energy which provides a penalty if the number of elements in the + set is not equal to the number in the $-$ set, and \begin{equation}
H_B = B\sum_{(uv) \in E} \frac{1-s_us_v}{2}
\end{equation}is a term which provides an energy penalty $B$ for each time that an edge connects vertices from different subsets. If $B>0$, then we wish to minimize the number of edges between the two subsets; if $B<0$, we will choose to maximize this number. Should we choose $B<0$, we must ensure that $B$ is small enough so that it is never favorable to violate the constraint of $H_A$ in order to minimize energy.
\subsection{Cliques}
A clique of size $K$ in an undirected graph $G=(V,E)$ is a subset $W\subseteq V$ of the vertices, of size $|W|=K$, such that the subgraph $(W,E_W)$ (where $E_W$ is the edge set $E$ restricted to edges between nodes in $W$) is a fully connected graph -- i.e., all possible $K(K-1)/2$ edges in the graph are present.
We show how the NP-complete decision problem of whether or not a clique of size $K$ exists can be written as an Ising-like model. We place a spin variable $s_v=\pm 1$ on each vertex $v\in V$ of the graph. In general, in this paper, for a spin variable $s_\alpha$, we will define the binary bit variable \begin{equation}
x_\alpha \equiv \frac{s_\alpha+1}{2}.
\end{equation} It will typically be more convenient to phrase the energies in terms of this variable $x_\alpha$, as it will be for this problem. Note that any energy functional which was quadratic in $s_v$ will remain quadratic in $x_v$, and vice versa, so we are free to use either variable.
We will write the energy as \begin{equation}
H= A\left(K-\sum_v x_v\right)^2 + B\left[\frac{K(K-1)}{2}-\sum_{(uv)\in E} x_ux_v\right]
\end{equation}where $A>KB>0$ are positive constants. The ground state of this Hamiltonian is $H=0$ if and only if a clique of size $K$ exists. This is because the role of the first term is to enforce the constraint that exactly $K$ vertices may be included in a trial subset $W$ of the vertices, and the second term subsequently gives an energy penalty $B$ for each possible edge between two vertices of $W$ which is not included in $E_W$. There are no energy penalties from the latter term precisely when the trial set $W$ forms a clique. The separation of scales $A>KB$ is needed because the second term can turn negative if extra vertices are added: adding one vertex beyond $K$ lowers the second term by at most $KB$, while raising the first term by at least $A$. Given a ground state solution, it is of course easy to read off from the $x_v$ which $K$ nodes form a clique.
A quantum algorithm can actually be made slightly more efficient so long as the initial state can be carefully prepared \cite{childs}.
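A brute-force illustration on a hypothetical four-vertex graph (a triangle with a pendant vertex); we use two separate constants with $A = KB + 1$, a conservative choice ensuring that over-filled vertex sets cannot reach zero or negative energy.

```python
from itertools import product

# Hypothetical graph: triangle {0,1,2} plus pendant vertex 3.
# H = A*(K - sum x)^2 + B*[K(K-1)/2 - sum_{(uv) in E} x_u x_v], with A > K*B.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
N, B = 4, 1.0

def H(x, K):
    A = K * B + 1.0
    return (A * (K - sum(x)) ** 2
            + B * (K * (K - 1) / 2 - sum(x[u] * x[v] for u, v in edges)))

def ground(K):
    return min(H(x, K) for x in product((0, 1), repeat=N))

assert ground(3) == 0     # {0, 1, 2} is a triangle
assert ground(4) > 0      # no 4-clique exists here
print("clique checks pass")
```

The decision problem is read off from whether the ground-state energy vanishes, and the minimizing bit string identifies the clique itself.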
\section{Hard Constraint Covering and Packing Problems}
In this section, we will turn to what we call ``hard constraint'' problems related to covering and packing sets. These are essentially problems where constraints must be exactly satisfied. Many of the problems described below are often discussed in the literature, but again we review them here for completeness.
\subsection{Binary Integer Linear Programming}
Consider the following problem: with $x_1,\ldots, x_N$ binary variables arranged into a vector $\mathbf{x}$, what is the largest value of $\mathbf{c}\cdot \mathbf{x}$, for some vector $\mathbf{c}$, given the constraint \begin{equation}
A\mathbf{x} = \mathbf{b}
\end{equation}with $A$ some matrix and $\mathbf{b}$ some vector with $m$ components.
We solve this as follows. Let \begin{equation}
H=H_A+H_B
\end{equation}where \begin{equation}
H_A = A\sum_{j=1}^m \left[b_j - \sum_{i=1}^N A_{ji}x_i\right]^2
\end{equation}where $A>0$ is a constant (we trust that no confusion arises from using $A$ for both the constraint matrix and this energy scale). The ground states with $H_A=0$ enforce (if such a ground state exists, of course!) the constraint that $A\mathbf{x}=\mathbf{b}$. Then we set \begin{equation}
H_B = -B\sum_{i=1}^N c_ix_i.
\end{equation}with $B\ll A$ another positive constant. The energy scale separation is required so that it is never favorable to violate the $H_A$ constraint.
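A toy instance (ours, for illustration) shows the two scales at work; in the code we rename the two energy constants $P$ and $Q$ to avoid a clash with the constraint matrix.

```python
from itertools import product

# Toy instance (illustration only). H_A = P * sum_j (b_j - sum_i A_ji x_i)^2
# enforces A.x = b; H_B = -Q * c.x rewards the objective, with Q << P.
Amat = [[1, 1, 0],
        [0, 1, 1]]
b = [1, 1]
c = [1, 3, 1]
P, Q = 10.0, 1.0

def H(x):
    HA = P * sum((bj - sum(a * xi for a, xi in zip(row, x))) ** 2
                 for row, bj in zip(Amat, b))
    HB = -Q * sum(ci * xi for ci, xi in zip(c, x))
    return HA + HB

best = min(product((0, 1), repeat=3), key=H)
assert best == (0, 1, 0)   # x_2 = 1 satisfies both rows, and c.x = 3 is maximal
print(best)
```

Both feasible solutions of this instance satisfy $A\mathbf{x}=\mathbf{b}$, and the $H_B$ term selects the one with the larger objective.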
\subsection{Exact Cover}
The exact cover problem goes as follows: consider a set $U = \lbrace 1,\ldots, n\rbrace$, and subsets $V_i \subseteq U$ such that \begin{equation}
U = \bigcup_i V_i.
\end{equation}The question is: is there a subset of $\lbrace V_i\rbrace$, called $R$, such that the elements of $R$ are disjoint sets, and the union of the elements of $R$ is $U$? This problem was solved in \cite{choi2010} but for simplicity, we repeat it here.
We write this problem as an Ising model as follows. Let us denote by $s_i$ a spin variable, with corresponding binary variable $x_i$, for each subset $V_i$. The Hamiltonian we use is $H=H_A$, where\begin{equation}
H_A = A\sum_{\alpha=1}^n \left(1-\sum_{i:\alpha\in V_i} x_i\right)^2.
\end{equation}$H_A=0$ precisely when every element of $U$ is included exactly one time, which implies that the chosen subsets are disjoint and their union is $U$. The existence of a ground state of energy $H=0$ corresponds to the existence of a solution to the exact cover problem.
It is also straightforward to extend this, and find the \emph{smallest} exact cover. This is done by simply adding a second energy scale $B\ll A$: \begin{equation}
H_B = B\sum_i x_i.
\end{equation}The ground state of this model will be $kB$, where $k$ is the smallest number of subsets required.
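A tiny illustrative instance (hypothetical) confirms that the zero-energy states of $H_A$ are exactly the exact covers.

```python
from itertools import product

# Tiny illustrative instance: U = {1,2,3,4} with V_1 = {1,2}, V_2 = {3,4},
# V_3 = {2,3}, V_4 = {4}. H_A = A * sum_alpha (1 - sum_{i : alpha in V_i} x_i)^2.
U = [1, 2, 3, 4]
subsets = [{1, 2}, {3, 4}, {2, 3}, {4}]
A = 1.0

def HA(x):
    return A * sum((1 - sum(xi for xi, V in zip(x, subsets) if alpha in V)) ** 2
                   for alpha in U)

covers = [x for x in product((0, 1), repeat=len(subsets)) if HA(x) == 0]
assert covers == [(1, 1, 0, 0)]   # the unique exact cover: V_1 together with V_2
print(covers)
```

Here element 1 appears only in $V_1$, which forces $V_1$ into the cover and cascades to the unique solution.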
\subsection{Set Packing}
Let us consider the same setup as the previous problem, but now ask a different question: what is the largest number of subsets $V_i$ which are all disjoint? This is called the set packing problem.\footnote{Oftentimes, this is also referred to as the maximum independent set (MIS) problem.} To do this, we use $H=H_A+H_B$ with a separation of energy scales. We use \begin{equation}
H_A=A\sum_{i<j:V_i\cap V_j\neq \emptyset} x_i x_j,
\end{equation}which is minimized only when all subsets are disjoint. Then, we use \begin{equation}
H_B = -B\sum_i x_i
\end{equation}which simply counts the number of sets we included. Choosing $B<A$ ensures that it is never favorable to violate the constraint $H_A$ (since there will always be a penalty of at least $A$ per extra set included).
Note that an equivalent formulation of this problem is as follows: let us consider the sets to be encoded in an undirected graph $G=(V,E)$, where each set $V_i$ corresponds to a vertex $i\in V$. An edge $ij\in E$ exists when $V_i\cap V_j$ is nonempty. It is straightforward to see that if we replace \begin{equation}
H_A = A\sum_{ij\in E} x_ix_j
\end{equation}that the question of what is the maximal number of vertices which may be ``colored" ($x_i=1$) such that no two colored vertices are connected by an edge, is exactly equivalent to the set packing problem described above.
\subsection{Vertex Cover}
Given an undirected graph $G=(V,E)$, what is the smallest number of vertices that can be ``colored'' such that every edge is incident to a colored vertex? Let $x_v$ be a binary variable on each vertex, which is 1 if it is colored, and 0 if it is not colored. Our Hamiltonian will be $H=H_A+H_B$. The constraint that every edge has at least one colored vertex is encoded in $H_A$:\begin{equation}
H_A = A\sum_{uv\in E}(1-x_u)(1-x_v).
\end{equation}Then, we want to minimize the number of colored vertices with $H_B$: \begin{equation}
H_B = B\sum_v x_v
\end{equation}
Choose $NB<A$, where $N=|V|$. Then the ground state of this Ising model corresponds to the configuration of colored vertices such that each edge connects to a colored vertex, and the fewest number of vertices are colored.
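A sketch on a hypothetical star graph, whose minimum vertex cover is just the center, illustrates the construction.

```python
from itertools import product

# Hypothetical star graph: center 0, leaves 1..4. With N*B < A it is never
# worth leaving an edge uncovered, so the minimum cover is the center alone.
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
N, B = 5, 1.0
A = N * B + 1.0

def H(x):
    return (A * sum((1 - x[u]) * (1 - x[v]) for u, v in edges)
            + B * sum(x))

best = min(product((0, 1), repeat=N), key=H)
assert best == (1, 0, 0, 0, 0)
print(best)
```

Coloring all four leaves instead would also cover every edge but costs $4B$ rather than $B$, so $H_B$ selects the center.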
\subsection{Satisfiability}
Satisfiability is one of the most famous NP-complete problems. Every satisfiability problem can be written as a so-called 3SAT problem in conjunctive normal form (and this reduction takes only polynomial time), and so we will focus for simplicity on this case. In this case, we ask whether \begin{equation}
\Psi = C_1\wedge C_2\cdots \wedge C_m
\end{equation}can take on the value of true -- i.e., every $C_i$ for $1\le i\le N$ is true, where the form of each $C_i$ is: \begin{equation}
C_i = y_{i_1}\vee y_{i_2}\vee y_{i_3}
\end{equation}Here $y_{i_1}$, $y_{i_2}$ and $y_{i_3}$ are selected from another set of Boolean variables: $x_1,\ldots, x_N, \overline{x}_1,\ldots, \overline{x}_N$.
There is a well-known reduction of 3SAT to the set packing problem \cite{choi2010} which we reproduce here, for completeness. Consider solving the set packing problem on the following graph $G$ with $3m$ nodes, which we construct as follows. For each clause $C_i$, we add 3 nodes to the graph, and connect each of these nodes to the other 2. After this step, for every pair of nodes $y_1$ and $y_2$ (necessarily in different clauses) such that $y_1=x_j$ and $y_2=\overline{x}_j$, we also add an edge between these two nodes. Solving the set packing problem on this $G$, and asking whether the solution has exactly $m$ nodes, is equivalent to solving the 3SAT problem. This can be seen as follows: if a solution to the 3SAT problem exists, only one element of each clause needs to be true -- if more are true, that is also acceptable, but we must have that one is true, so let us choose to color the vertex corresponding to a variable which is true. However, we may not choose both $x_j$ and $\overline{x}_j$ to be true, so we are required to connect all such pairs of nodes with an edge.
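The reduction can be sketched directly; the clause encoding below (signed integers for literals) is our own illustrative convention, and the instance is hypothetical.

```python
from itertools import combinations, product

# Sketch of the 3SAT -> set packing reduction on a satisfiable instance.
# Literals are signed integers: +j stands for x_j, -j for NOT x_j.
clauses = [(1, 2, 3), (-1, -2, 3), (1, -2, -3)]       # m = 3 clauses

nodes = [(ci, lit) for ci, clause in enumerate(clauses) for lit in clause]
edges = [(a, b) for a, b in combinations(range(len(nodes)), 2)
         if nodes[a][0] == nodes[b][0]                # same clause, or
         or nodes[a][1] == -nodes[b][1]]              # contradictory literals

def independent(sel):
    return all(not (sel[a] and sel[b]) for a, b in edges)

best = max((sel for sel in product((0, 1), repeat=len(nodes)) if independent(sel)),
           key=sum)
assert sum(best) == len(clauses)   # an m-node packing exists iff satisfiable
print(best)
```

Each clause's triangle admits at most one packed node, so a packing of size $m$ selects one true literal per clause, and the contradiction edges guarantee a consistent assignment.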
\subsection{Minimax Graph Problem}
The minimax graph problem proceeds as follows: let $G=(V,E)$ denote an undirected graph, and let $C\subseteq E$ be a proposed ``coloring''. The constraints on $C$ are as follows: for each edge in $C$, let us color the two vertices it connects. We will then demand that no two edges in $C$ share a vertex, and that there are no two uncolored vertices connected by an edge. The coloring is thus maximal, in the sense that we cannot add any more edges to $C$ (coloring any appropriate vertices) without violating the first constraint -- in particular, the trivial empty set solution is not allowed, since we must include all edges between uncolored vertices -- and we will seek the minimal such coloring.
Note that, from this point on, we have not found the Ising formulations presented in this paper in the literature.
We will use the spins on the graph to model whether or not an edge is colored. Let us use the binary variable $x_e$ to denote whether or not an edge is colored. Our method consists of the following trick. Our energy will consist of a series of three energy functionals, each of which is chosen to be ``much'' smaller than the previous one: \begin{equation}
H=H_A+H_B+H_C.
\end{equation}The first and largest term, $H_A$, will impose the constraint that no vertex has two colored edges. This can be done by setting \begin{equation}
H_A = A\sum_v \sum_{\lbrace e_1,e_2\rbrace\subset \partial v} x_{e_1}x_{e_2}.
\end{equation}Here $A>0$ is a positive energy, and $\partial v$ corresponds to the subset of $E$ of edges which connect to $v$. Thus the ground states consist of $H_A=0$; if $H_A>0$, it is because there is a vertex where two of its edges are colored.
If we choose $A$ to be large enough, then we also can define, \emph{for states with $H_A=0$}, the variable \begin{equation}
y_v \equiv \left\lbrace \begin{array}{ll} 1 &\ v\text{ has a colored edge} \\ 0 &\ v\text{ has no colored edges} \end{array} \right.= \sum_{e\in\partial v} x_e.
\end{equation}We stress that this definition is only valid for states with $H_A=0$, since in these states each vertex has either 0 or 1 colored edges. We then define the energy $H_B$, such that solutions to the minimax coloring problem also have $H_B=0$. Since we have already constrained the number of colored edges per vertex, we choose $H_B$ to raise the energy of all solutions where there exists a possible edge which can be colored, yet still not violate the coloring condition, out of the ground state. To do this, we can sum over all edges in the graph, and check whether or not the edge connects two vertices, neither of which are colored: \begin{equation}
H_B = B\sum_{e=(uv)} (1-y_u)(1-y_v).
\end{equation}
Note that, since $(1-y_u)(1-y_v)$ can be negative when the $H_A$ constraints are violated (as $y_v$ can then exceed 1), we must choose $B>0$ to be small enough. This will ensure that a ground state will have energy $H_A+H_B=0$, and correspond precisely to $H_A=H_B=0$: i.e., states which do not violate the minimax constraints.
Now, given the states where $H_A=H_B=0$, we now want the ground state to be the state where the fewest number of edges are colored. To do this, we simply let \begin{equation}
H_C = C\sum_e x_e
\end{equation}count the number of colored edges. Here $C$ is an energy scale chosen to be small enough such that it is never energetically favorable to violate the constraints imposed by either the $H_A$ or $H_B$ terms: for example, $CN<2B$. The term with the smallest $H_C$ has the smallest number of edges, and is clearly the solution to the minimax problem. Each ground state of this spin model is equivalent to a solution of the minimax problem.
\section{Soft Constraint Packing and Covering Problems}
We now turn to NP problems whose formulations as Ising models are substantially more subtle. This is, in every case, due to the fact that the constraints on the system are ``soft'' constraints which are only inequalities. For most problems of interest, these soft constraints can be re-written as ``hard'' constraints by an expansion of the number of spins.
\subsection{Set Cover}
Consider a set $U=\lbrace 1,\ldots, n\rbrace$, with sets $V_\alpha \subseteq U$ ($\alpha=1,\ldots, N$) such that \begin{equation}
U = \bigcup_{\alpha=1}^N V_\alpha.
\end{equation}The set covering problem is to find the smallest possible number of $V_\alpha$s, such that the union of them is equal to $U$.
Our solution consists of the following. Let us denote $x_\alpha$ to be a binary variable which is 1 if set $\alpha$ is included, and 0 if set $\alpha$ is not included. Let us then denote $x_{i,m}$ to be a binary variable which is 1 if the number of $V_\alpha$s which include element $i$ is $m\ge 1$, and 0 otherwise. Set $H=H_A+H_B$. Our first energy imposes the constraints that exactly one $x_{i,m}$ must be 1, since each element of $U$ must be included a fixed number of times, and that the number of times that we claimed $i$ was included is in fact equal to the number of $V_\alpha$ we have included, with $i$ as an element: \begin{equation}
H_A = A\sum_{i=1}^n \left(1-\sum_{m=1}^N x_{i,m}\right)^2+A\sum_{i=1}^n \left(\sum_{m=1}^N mx_{i,m}-\sum_{\alpha: i\in V_\alpha} x_\alpha \right)^2.
\end{equation}
Finally, we minimize over the number of $V_\alpha$s included: \begin{equation}
H_B =B\sum_\alpha x_\alpha,
\end{equation}with $B \ll A$ chosen so that the $A$ constraint is never violated.
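As a sanity check (not part of the original construction), the set cover energy $H=H_A+H_B$ can be brute-forced on a tiny instance; the universe, the candidate sets, and the scales $A$, $B$ below are illustrative choices only:

```python
from itertools import product

# Tiny, hypothetical set-cover instance: U = {0,1,2} and three candidate sets.
U = [0, 1, 2]
V = [{0, 1}, {1, 2}, {0, 1, 2}]    # the optimal cover is the single set V[2]
n, N = len(U), len(V)
A, B = 10.0, 1.0                   # illustrative scales with B << A

def energy(x_set, x_count):
    """x_set[a]: set a chosen; x_count[i][m-1]: element i claimed covered m times."""
    H = 0.0
    for i in U:
        H += A * (1 - sum(x_count[i])) ** 2        # exactly one multiplicity label
        actual = sum(x_set[a] for a in range(N) if i in V[a])
        claimed = sum((m + 1) * x_count[i][m] for m in range(N))
        H += A * (claimed - actual) ** 2           # label must match reality
    return H + B * sum(x_set)                      # H_B: minimise chosen sets

best = min(energy(xs, [xc[i * N:(i + 1) * N] for i in U])
           for xs in product([0, 1], repeat=N)
           for xc in product([0, 1], repeat=n * N))
print(best)   # 1.0 = B * (minimum cover size)
```

The minimum energy over all $2^{N+nN}$ bit assignments comes out to $B$ times the size of the smallest cover, as the construction promises.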
\subsection{Knapsack with Integer Weights}
The knapsack problem is the following NP-complete problem: given a list of $N$ objects, labeled by indices $\alpha$, with the weight of each object given by $w_\alpha$, and a knapsack which can only carry weight $W$, what is the heaviest collection of objects which we can add to the knapsack such that their combined weight is no larger than $W$?
Let $x_\alpha$ be a binary variable denoting if we included $\alpha$, and $x_n$ for $1\le n\le W$ denote a binary variable which is 1 if the final weight of the knapsack is $n$, and 0 otherwise. Our solution consists of letting $H=H_A+H_B$, with \begin{equation}
H_A = A\left(1-\sum_{n=1}^W x_n\right)^2+A\left(\sum_{n=1}^W nx_n - \sum_\alpha w_\alpha x_\alpha \right)^2
\end{equation}which enforces that the weight can only take on one value and that the weight of the objects in the knapsack equals the value we claimed it did, and finally \begin{equation}
H_B = -B\sum_{n=1}^W nx_n,
\end{equation}with $B>0$ and $B\ll A$, chosen such that it is never favorable to violate the constraints.
There is a trick which can be used to dramatically reduce the number of extra $x_n$ spins which must be added. For simplicity, let us focus on the case $W=2^M$, for $M$ some positive integer, where the trick works most efficiently. In this case, we only need $x_1,\ldots, x_M$ instead of $x_1,\ldots, x_W$. It is easy to check that \begin{equation}
H= A\left(\sum_{n=1}^M 2^{n-1}x_n - \sum_\alpha w_\alpha x_\alpha\right)^2 - B\sum_{n=1}^M 2^{n-1}x_n
\end{equation}solves the exact same knapsack problem. This trick can be used on many of the problems below as well, but it makes the physical intuition a bit less clear, so we will not use it in our explicit formulations.
\section{Coloring Problems}
We now turn to coloring problems. Naively, coloring problems are often best phrased as Potts models, but these classical Potts models can be converted to classical Ising models with an expansion of the number of spins. This simple trick forms the basis for our solutions to this class of problems.
\subsection{Graph Coloring}
Given an undirected graph $G=(V,E)$, and a set of $n$ colors, is it possible to color each vertex in the graph with a specific color, such that no edge connects two vertices of the same color? This is a generalization of the classic problem of how many colors are needed to color a map, such that every two countries which share a border have a different color, and is called the graph coloring problem.
Our solution consists of the following: we denote $x_{v,i}$ to be a binary variable which is 1 if vertex $v$ is colored with color $i$, and 0 otherwise. The energy is\begin{equation}
H = A\sum_v \left(1-\sum_{i=1}^n x_{v,i}\right)^2 + A\sum_{(uv)\in E} \sum_{i=1}^n x_{u,i}x_{v,i}.
\end{equation}The first term enforces the constraint that each vertex has exactly one color, and provides an energy penalty each time this is violated, and the second term gives an energy penalty each time an edge connects two vertices of the same color. If there is a ground state of this model with $H=0$, then there is a solution to the coloring problem on this graph with $n$ colors. We can also read off the color of each node (in one such coloring scheme) by looking at which $x$s are 1. Note that the number of spins can be slightly reduced, since there is a permutation symmetry among colorings, by choosing a specific node in the graph to have the color 1, and one of its neighbors to have the color 2, for example.
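A minimal numerical sketch of this energy (the triangle graph and the scale $A$ are hypothetical test inputs): the brute-force minimum of $H$ vanishes exactly when a proper coloring exists.

```python
from itertools import product

def ground_energy(edges, n_vertices, n_colors, A=1.0):
    """Brute-force minimum of H over all bit assignments x[v][i]."""
    best = float("inf")
    for bits in product([0, 1], repeat=n_vertices * n_colors):
        x = [bits[v * n_colors:(v + 1) * n_colors] for v in range(n_vertices)]
        H = sum(A * (1 - sum(x[v])) ** 2 for v in range(n_vertices))
        H += sum(A * x[u][i] * x[v][i]              # clashing-edge penalty
                 for (u, v) in edges for i in range(n_colors))
        best = min(best, H)
    return best

triangle = [(0, 1), (1, 2), (0, 2)]
g3 = ground_energy(triangle, 3, 3)
g2 = ground_energy(triangle, 3, 2)
print(g3, g2)   # 0.0 and a positive value: 3-colourable, not 2-colourable
```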
\subsection{Clique Cover}
The clique cover problem, for an undirected graph $G=(V,E)$, is the following: given $n$ colors, is there a coloring of the graph such that if we assign every vertex in the graph exactly one color, then the subgraph for any color is a clique?
Our solution is very similar to the graph coloring problem. Again, we employ the same binary variables as for graph coloring, and use Hamiltonian \begin{equation}
H=A\sum_v \left(1-\sum_{i=1}^n x_{v,i}\right)^2+A\sum_{i=1}^n \left[\frac{1}{2}\left(-1+\sum_v x_{v,i}\right)\sum_v x_{v,i} - \sum_{(uv)\in E} x_{u,i}x_{v,i} \right].
\end{equation}The first term enforces the constraint that each vertex has exactly one color by giving an energy penalty each time this constraint is violated. In the second term, since the sum over $v$ of $x_{v,i}$ counts the number of nodes with color $i$, the first sum counts the highest possible number of edges that could exist among the nodes of color $i$. The second term then checks whether this number of edges actually exists. Thus $H=0$ if and only if every color class is a clique, i.e., the given coloring solves the clique cover problem. If a ground state exists with $H=0$, there is a solution to the clique cover problem.
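For a concrete (hypothetical) test, the path graph $0$--$1$--$2$ admits a clique cover with two colors, $\{0,1\}$ and $\{2\}$, and indeed a brute-force minimization of the energy reaches zero:

```python
from itertools import product

def clique_cover_energy(edges, n_vertices, n_colors, x, A=1.0):
    """x[v][i] = 1 iff vertex v gets colour i; H = 0 iff each colour class is a clique."""
    H = sum(A * (1 - sum(x[v])) ** 2 for v in range(n_vertices))
    for i in range(n_colors):
        size = sum(x[v][i] for v in range(n_vertices))
        max_edges = size * (size - 1) / 2                # edges a clique would need
        present = sum(x[u][i] * x[v][i] for (u, v) in edges)
        H += A * (max_edges - present)
    return H

# Path 0-1-2 (hypothetical instance): {0,1} and {2} form a 2-colour clique cover.
edges, n_v, n_c = [(0, 1), (1, 2)], 3, 2
best = min(
    clique_cover_energy(edges, n_v, n_c,
                        [bits[v * n_c:(v + 1) * n_c] for v in range(n_v)])
    for bits in product([0, 1], repeat=n_v * n_c)
)
print(best)   # 0.0: the path admits a clique cover with two colours
```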
\subsection{Job Sequencing with Integer Lengths}
The job sequencing problem is as follows: given a list of $N$ jobs for, say, a cluster of $m$ computers, where job $i$ has length $L_i$, how can the jobs be assigned to computers so that the largest total load is as small as possible? Here, if $V_\alpha$ denotes the set of jobs assigned to computer $\alpha$, its total load is \begin{equation}
M_\alpha \equiv \sum_{i\in V_\alpha} L_i.
\end{equation}We assume that $L_i \in \mathbb{N}$.
To do this, we will demand, without loss of generality, that $M_1 \ge M_\alpha$ for every $\alpha$. Introduce the variables $x_{i,\alpha}$, which are 1 if job $i$ is assigned to computer $\alpha$, and 0 otherwise, and the variables $y_{n,\alpha}$ for $\alpha \ne 1$ and $n\ge 0$, which are 1 if the difference $M_1-M_\alpha = n$. Then the Hamiltonian \begin{equation}
H_A = A\sum_{i=1}^N \left(1-\sum_\alpha x_{i,\alpha}\right)^2 + A\sum_{\alpha=2}^m \left(\sum_n ny_{n,\alpha} + \sum_i L_i (x_{i,\alpha}-x_{i,1})\right)^2
\end{equation}encodes that each job can be given to exactly one computer, and that no computer can have a longer total length than computer 1. To find the minimal maximal length $M_1$, we just use \begin{equation}
H_B =B\sum_i L_i x_{i,1}.
\end{equation}
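A brute-force check on a tiny, hypothetical two-job, two-computer instance (job lengths and the scales $A$, $B$ are illustrative; the slack bits $y_n$ enforce $M_1 \ge M_2$ for the non-reference machine):

```python
from itertools import product

# Hypothetical instance: two jobs of lengths 1 and 2 on m = 2 computers.
L = [1, 2]
A, B = 10.0, 1.0

def energy(x, y):
    """x[i][a]: job i on computer a; y[n]: slack bit encoding M_1 - M_2 = n."""
    H = sum(A * (1 - sum(x[i])) ** 2 for i in range(len(L)))   # one computer per job
    load = [sum(L[i] * x[i][a] for i in range(len(L))) for a in (0, 1)]
    slack = sum(n * y[n] for n in range(len(y)))
    H += A * (slack + load[1] - load[0]) ** 2   # enforce M_1 >= M_2
    return H + B * load[0]                      # minimise the reference load M_1

best = min(energy([bits[2 * i:2 * i + 2] for i in range(2)], y)
           for bits in product([0, 1], repeat=4)
           for y in product([0, 1], repeat=4))
print(best)   # 2.0 = B * (optimal makespan): the length-2 job sits alone
```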
\section{Hamiltonian Cycles}
In this section, we describe the solution to the (undirected or directed) Hamiltonian cycles problem, and subsequently the traveling salesman problem, which, in the Ising spin glass formulation, is a trivial extension.
\subsection{Hamiltonian Cycles and Paths}
Let $G=(V,E)$, and $N=|V|$. The graph can either be directed or undirected, and the solution will not change. The Hamiltonian path problem is as follows: starting at some node in the graph, can one travel along the edges such that one reaches every single node in the graph without ever visiting the same node twice? The Hamiltonian cycles problem asks that, in addition, the traveler can return to the starting point from the last node he visits.
Without loss of generality, let us label the vertices $1,\ldots, N$, and take the edge set $(uv)$ to be directed -- i.e., the order $uv$ matters. It is trivial to extend to undirected graphs, by just considering a directed graph with $(vu)$ added to the edge set whenever $(uv)$ is added to the edge set. Our solution will use $N^2$ spins: binary bit variables $x_{v,i}$, where $v$ represents the vertex and $i$ represents its order in a prospective cycle. Our energy will have three components. The first two requirements are that every vertex appears exactly once in the cycle, and that there is a $j^{\mathrm{th}}$ node in the cycle for each $j$. Finally, for the nodes in our prospective ordering, if $x_{u,j}$ and $x_{v,j+1}$ are both 1, then we require $(uv)\in E$. Note that $N+1$ should be read as 1 in the expressions below, if we are solving the cycles problem. These requirements are encoded in the Hamiltonian: \begin{equation}
H = A\sum_{v=1}^N\left(1-\sum_{j=1}^N x_{v,j}\right)^2 + A\sum_{j=1}^N \left(1-\sum_{v=1}^N x_{v,j}\right)^2 + A\sum_{(uv)\notin E} \sum_{j=1}^N x_{u,j}x_{v,j+1}.
\end{equation}
$A>0$ is a constant. A ground state of this system has $H=0$ if and only if we have an ordering of vertices where each vertex is included exactly once, and adjacent vertices in the ordering are connected by edges of the graph -- i.e., we have a Hamiltonian cycle.
To solve the Hamiltonian path problem instead, all we have to do is restrict the sum over $j$ in the final term to run from 1 to $N-1$: the energy then does not depend on whether or not the first and last nodes are also connected.
We note that it is straightforward to imagine slightly reducing the size of the state space for the Hamiltonian cycles problem as follows: it is clear that node 1 must always be included in a Hamiltonian cycle, and without loss of generality we can set $x_{1,1}=1$: this just means that the overall ordering of the cycle is chosen so that node 1 comes first. This reduces the number of spins to $(N-1)^2$.
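A small numerical sketch (the two three-vertex test graphs and the scale $A$ are hypothetical inputs) shows the ground-state energy vanishing exactly when a Hamiltonian cycle exists:

```python
from itertools import product

def ground_energy(edges, N, A=1.0):
    """Brute-force minimum of H over bits x[v][j] (vertex v is the j-th stop)."""
    E = set(edges) | {(v, u) for (u, v) in edges}   # undirected: both orders
    best = float("inf")
    for bits in product([0, 1], repeat=N * N):
        x = [bits[v * N:(v + 1) * N] for v in range(N)]
        H = sum(A * (1 - sum(x[v])) ** 2 for v in range(N))
        H += sum(A * (1 - sum(x[v][j] for v in range(N))) ** 2 for j in range(N))
        H += sum(A * x[u][j] * x[v][(j + 1) % N]    # stepping along a non-edge
                 for u in range(N) for v in range(N)
                 if u != v and (u, v) not in E
                 for j in range(N))
        best = min(best, H)
    return best

g_cycle = ground_energy([(0, 1), (1, 2), (2, 0)], 3)   # triangle: has a cycle
g_path = ground_energy([(0, 1), (1, 2)], 3)            # path: has none
print(g_cycle, g_path)   # 0.0 and a positive value
```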
\subsection{Traveling Salesman}
The traveling salesman problem for a graph $G=(V,E)$, where each edge $uv$ in the graph has a weight $W_{uv}$ associated with it, is to find the Hamiltonian cycle such that the sum of the weights of each edge in the cycle is minimized. Typically, the traveling salesman problem assumes a complete graph, but we have developed the technology to solve it on an arbitrary graph.
To solve this problem, we use $H=H_A+H_B$, with $H_A$ the Hamiltonian given for the directed (or undirected, if the graph is undirected for traveling salesman) Hamiltonian cycles problem. We then simply add \begin{equation}
H_B = B\sum_{(uv)\in E} W_{uv}\sum_{j=1}^N x_{u,j}x_{v,j+1}.
\end{equation} with $B$ small enough that it is never favorable to violate the constraints of $H_A$. If the traveling salesman does not have to return to his starting position, we can restrict the sum over $j$ from $1$ to $N-1$, as before.
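On a complete graph the missing-edge penalty of $H_A$ vanishes, and the ground-state energy of $H_A+H_B$ should equal $B$ times the length of the optimal tour. A brute-force check on a hypothetical weighted $K_4$ (weights and scales are illustrative only):

```python
from itertools import product, permutations

# Hypothetical symmetric weights on the complete graph K4.
Wgt = {(0, 1): 1, (0, 2): 4, (0, 3): 3, (1, 2): 2, (1, 3): 5, (2, 3): 1}
Wgt.update({(v, u): c for (u, v), c in list(Wgt.items())})
N, A, B = 4, 10.0, 1.0

def energy(x):
    H = sum(A * (1 - sum(x[v])) ** 2 for v in range(N))
    H += sum(A * (1 - sum(x[v][j] for v in range(N))) ** 2 for j in range(N))
    # H_B adds the weight of every step of the tour (slot N + 1 wraps to slot 1)
    return H + B * sum(Wgt[(u, v)] * x[u][j] * x[v][(j + 1) % N]
                       for u in range(N) for v in range(N) if u != v
                       for j in range(N))

best = min(energy([bits[v * N:(v + 1) * N] for v in range(N)])
           for bits in product([0, 1], repeat=N * N))
shortest = min(sum(Wgt[(p[i], p[(i + 1) % N])] for i in range(N))
               for p in permutations(range(N)))
print(best, shortest)   # 7.0 7: the ground-state energy matches the best tour
```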
\section{Tree Problems}
The most subtle NP problems to solve with Ising models are problems which require finding connected tree subgraphs of larger graphs. The key point which makes the solution difficult is the requirement of connectivity; the solution therefore relies on tricks similar to those used for the Hamiltonian cycles problem discussed earlier.
\subsection{Minimal Spanning Tree with a Maximal Degree Constraint}
The minimal spanning tree problem is the following: given an undirected graph $G=(V,E)$, where each edge $(uv) \in E$ comes with a cost $c_{uv}$, what is the tree $T\subseteq G$, which contains all vertices, such that the cost of $T$, defined as \begin{equation}
c(T) \equiv \sum_{(uv)\in E_T} c_{uv},
\end{equation}is minimized? Without loss of generality, we take $c_{uv}>0$ in this subsection (a large positive constant can always be added to each $c_{uv}$ ensure that the smallest value of $c_{uv}$ is strictly positive, since the number of edges of a tree is always $N-1$, if the number of vertices in $N$). We will also add a degree constraint, that each degree in $T$ be $\le \Delta$, which makes the problem NP-complete.
To solve this problem, we place a binary variable $y_e$ on each edge to determine whether or not that edge is included in $T$: \begin{equation}
y_e \equiv \left\lbrace\begin{array}{ll} 1 &\ e\in E_T \\ 0 &\ \text{otherwise} \end{array}\right..
\end{equation}We also place a large number of binary variables $x_{v,i}$ on each vertex, and $x_{uv,i} \ne x_{vu,i}$ on each edge $(uv)$: the index $i=0,1,\ldots, N/2$ keeps track of the depth of a node in the tree; $x_{uv,i}=1$ means that $u$ is closer to the root than $v$, while $x_{vu,i}=1$ means that $v$ is closer to the root. We use further variables $z_{v,j}$ ($j=1,\ldots, \Delta$) to count the degree of each node. We now use the energy $H=H_A+H_B$, where the terms in $H_A$ impose the constraints that: there is exactly one root of the tree; each vertex has a depth; each edge in the tree has a depth, and its two vertices must be at different depths; the tree is connected (i.e., exactly one edge to a non-root vertex comes from a vertex at lower depth); each node has at most $\Delta$ edges; and each edge at depth $i$ points between a node at depth $i-1$ and a node at depth $i$, respectively: \begin{align}
H_A&= A\left(1-\sum_v x_{v,0}\right)^2 + A\sum_v \left(1-\sum_i x_{v,i}\right)^2 + A\sum_{uv\in E} \left(y_{uv}-\sum_i (x_{uv,i}+x_{vu,i})\right)^2 \notag \\
&\;\;\;\;\; + A\sum_v \sum_{i=1}^{N/2}\left(x_{v,i}-\sum_{(uv)\in E} x_{uv,i}\right)^2 + A \sum_v \left(\sum_{j=1}^\Delta jz_{v,j}-\sum_{uv,vu\in E}\sum_i x_{uv,i}\right)^2 \notag \\
&\;\;\;\;\; + A\sum_{uv,vu\in E} \sum_{i=1}^{N/2} x_{uv,i}(2-x_{u,i-1}-x_{v,i})
\end{align}
The ground states with $H_A=0$ are trees which include every vertex. Note that when we sum over $uv\in E$, we are not counting $vu$ as distinct, but when we notate $uv,vu\in E$, we \emph{do} treat $uv \ne vu$. We then add \begin{equation}
H_B =B\sum_{uv,vu\in E}\sum_{i=1}^{N/2} c_{uv} x_{uv,i}.
\end{equation}The minimum of $H$ then finds the minimal spanning tree, subject to the degree constraint.
\subsection{Steiner Trees}
The Steiner tree problem is somewhat similar to the problem above: given our costs $c_{uv}$, we want to find a minimal spanning tree for a subset $U\subset V$ of the vertices, with no degree constraints. To do this, we use the same Hamiltonian as for the minimal spanning tree, except we add binary variables $y_v$ for $v\notin U$ which determine whether or not node $v$ is included in the tree, and use the Hamiltonian $H=H_A+H_B$, where $H_A$ enforces constraints as in the previous case: \begin{align}
H_A&= A\left(1-\sum_v x_{v,0}\right)^2 + A\sum_v \left(y_v-\sum_i x_{v,i}\right)^2 + A\sum_{uv\in E} \left(y_{uv}-\sum_i (x_{uv,i}+x_{vu,i})\right)^2 \notag \\
&\;\;\;\;\; + A\sum_v \sum_{i=1}^{N/2}\left(x_{v,i}-\sum_{(uv)\in E} x_{uv,i}\right)^2 + A\sum_{uv,vu\in E} \sum_{i=1}^{N/2} x_{uv,i}(2-x_{u,i-1}-x_{v,i})
\end{align}We then use $H_B$ from the previous model to determine the minimum weight tree.
\subsection{Directed Feedback Vertex Set}
A feedback vertex set for a directed graph $G=(V,E)$ is a subset $F\subseteq V$ such that the subgraph induced on $V-F$ is acyclic. We will refer to $F$ as the feedback set. Solving the decision problem of whether or not a feedback set exists with $|F|\le k$ is NP-complete. We solve the optimization problem of finding the smallest size of the feedback set first for a directed graph -- the extension to an undirected graph will be a bit more involved.
Before solving this problem, it will help to prove two lemmas. The first lemma is quite simple: there exists a node in a directed acyclic graph which is not the end point of any edge. For suppose that every vertex had an edge ending on it. Then pick an arbitrary vertex, pick any edge ending on that vertex, and follow that edge in reverse to its starting vertex. Repeat this process more than $N$ times, and a simple counting argument implies that we must have visited the same node at least twice. Thus, we have traversed a cycle in reverse, which contradicts our assumption that the graph is acyclic.
The second lemma is as follows: a directed graph $G=(V,E)$ is acyclic if and only if there is a height function $h:V\rightarrow \mathbb{N}$ such that if $uv\in E$, $h(u)<h(v)$: i.e., every edge points from a node at lower height to one at higher height. That height function existence implies acyclic is easiest to prove using the contrapositive: suppose that a graph is cyclic. Then on a cycle of edges, we have \begin{equation}
0 < \sum [h(u_{i+1}) - h(u_i)] = h(u_1) - h(u_n) + h(u_n) - h(u_{n-1}) + \cdots -h(u_1)=0
\end{equation}which is a contradiction. To prove that an acyclic graph has a height function, we construct one recursively. Using our first lemma, we know that there exists a vertex $u$ with only outgoing edges, so let us set $h(u)=1$. For any other vertex, we will call the height of that vertex $h(v) = 1+h^\prime(v)$, where $h^\prime(v)$ is found by repeating this process on the graph with node $u$ removed (which must also be acyclic). It is clear that this process will terminate, assigning exactly one node the height $i$ for each integer $1\le i\le |V|$.
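The recursive construction above can be written out directly (the two example graphs are hypothetical test inputs): repeatedly hand the next height to a node with no incoming edge from the remaining graph; if at some step no such node exists, the first lemma says the graph contains a cycle.

```python
def height_function(n, edges):
    """Assign heights 1..n so every edge goes uphill; None if the graph is cyclic."""
    remaining = set(range(n))
    h = {}
    while remaining:
        src = next((v for v in sorted(remaining)
                    if not any(t == v and s in remaining for (s, t) in edges)),
                   None)
        if src is None:
            return None            # every remaining node has an incoming edge
        h[src] = len(h) + 1        # heights 1, 2, ..., |V|
        remaining.remove(src)
    return h

dag = [(0, 1), (0, 2), (1, 3), (2, 3)]
h = height_function(4, dag)
print(h)                                    # {0: 1, 1: 2, 2: 3, 3: 4}
print(all(h[u] < h[v] for (u, v) in dag))   # True: every edge goes uphill
print(height_function(3, [(0, 1), (1, 2), (2, 0)]))   # None: a 3-cycle
```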
We can now exploit this lemma to write down an Ising spin formulation of this problem. We place a binary variable $y_v$ on each vertex, which is 0 if $v$ is part of the feedback set, and 1 otherwise. We then place a binary variable $x_{v,i}$ on each vertex, which is 1 if vertex $v$ is at height $i$. So far the heights $i$ are arbitrary, and the requirement that a height function be valid will be imposed by the energy. The energy functional we use is $H=H_A+H_B$ where\begin{equation}
H_A = A\sum_v \left(y_v - \sum_i x_{v,i}\right)^2 + A \sum_{uv\in E} \sum_{i\ge j} x_{u,i}x_{v,j}.
\end{equation}
The first term ensures that if a vertex is not part of the feedback set, it has a well-defined height; the second term ensures that an edge only connects a node with lower height to a node at higher height. We then find the smallest possible feedback set by adding \begin{equation}
H_B = B\sum_v (1-y_v).
\end{equation}
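As an illustrative sketch (the directed 3-cycle and the scales $A$, $B$ are hypothetical inputs), minimizing $H=H_A+H_B$ by brute force recovers the smallest feedback set:

```python
from itertools import product

# A directed 3-cycle: deleting one vertex makes it acyclic.
edges, n = [(0, 1), (1, 2), (2, 0)], 3
A, B = 10.0, 1.0

def energy(y, x):
    """y[v] = 0 iff v is in the feedback set; x[v][i] = 1 iff v has height i+1."""
    H = sum(A * (y[v] - sum(x[v])) ** 2 for v in range(n))
    H += sum(A * x[u][i] * x[v][j]           # edge failing to go uphill
             for (u, v) in edges
             for i in range(n) for j in range(n) if i >= j)
    return H + B * sum(1 - yv for yv in y)   # H_B: minimise the feedback set

best = min(energy(y, [bits[v * n:(v + 1) * n] for v in range(n)])
           for y in product([0, 1], repeat=n)
           for bits in product([0, 1], repeat=n * n))
print(best)   # 1.0 = B * 1: one deleted vertex breaks the cycle
```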
\subsection{Undirected Feedback Vertex Set}
The extension to undirected graphs requires a bit more care. In this case, we have to be careful because there is no a priori distinction between whether the height of one end of an edge is smaller or larger than the other -- this makes the problem much more involved, at first sight. Furthermore, it is not true that a directed acyclic graph remains acyclic if the orientation of its edges is ignored. However, for an undirected graph, we also know that a feedback vertex set must reduce the graph to a collection of trees, with no requirement that these trees be connected to each other (such a collection is called a forest). With this in mind, we find that the problem is actually extremely similar to the minimal spanning tree, but without degree or connectivity constraints. The new subtlety, however, is that we cannot remove edges.
To solve this problem, we do the following: introduce a binary variable $x_{v,i}$, which is 1 if $v$ is a vertex in any tree at depth $i$, and 0 otherwise. However, to account for the fact that we may remove vertices, we will allow for $x_{v,-1}=1$ if $v$ is part of the feedback vertex set, and 0 otherwise. We do a similar thing for edges: we consider $x_{uv,i} \ne x_{vu,i}$ to be defined as before when $i>0$. We also define the variables $x_{uv,-1}\ne x_{vu,-1}$, which we take to be 1 when the ending node of the ``directed'' edge is in the feedback vertex set. Now, we can write down a very similar energy to the minimal spanning tree: \begin{align}
H_A &= A\sum_v \left(1-\sum_i x_{v,i}\right)^2 + A\sum_{uv\in E} \left(1-\sum_i (x_{uv,i} + x_{vu,i})\right)^2 +A \sum_{uv \in E} (x_{uv,-1}-x_{v,-1})^2 \notag \\
&\;\;\;\;\; + A\sum_v\sum_{i>0}\left(x_{v,i} - \sum_{uv\in E}x_{uv,i}\right)^2 + A\sum_{uv,vu\in E} \sum_{i>0} x_{uv,i}(2-x_{u,i-1}-x_{v,i})
\end{align}
The changes are as follows: we no longer constrain only one node to be the root, or constrain the degree of a vertex -- however, we have to add a new term to ensure that edges are only ignored in the tree constraint if they point to a node in the feedback set. We then add \begin{equation}
H_B = B\sum_v x_{v,-1}
\end{equation}with $B\ll A$ chosen so that the $A$ constraints are never violated. This counts the number of nodes in the feedback set, so thus $H$ is minimized when $H_B$ is smallest -- i.e., we have to remove the fewest number of nodes.
\subsection{Feedback Edge Set}
For a directed graph, the feedback edge set problem is to find the smallest set of edges $F\subset E$ such that $(V,E-F)$ is a directed acyclic graph. It is known to be NP-hard.\footnote{It is in P if the graph is undirected, however.} Our solution will be somewhat similar to the directed feedback vertex set. We place a binary variable $y_{uv}$ on each edge, which is 1 if $uv\notin F$, and define $x_{uv,i}$ to be 1 if both $y_{uv}=1$ and the height of node $u$ is $i$. We also add a binary variable $x_{v,i}$, as for the feedback vertex set. Our constraint energy must then enforce that each vertex and each included edge has a well-defined height, and that each edge points from a lower height to a higher height: \begin{equation}
H_A = A\sum_v \left(1-\sum_i x_{v,i}\right)^2 + A\sum_{uv\in E}\left(y_{uv}-\sum_i x_{uv,i}\right)^2 + A\sum_{uv}\sum_i x_{uv,i}\left(2-x_{u,i}-\sum_{j>i} x_{v,j}\right).
\end{equation}
We then use \begin{equation}
H_B = B\sum_{uv\in E} (1-y_{uv})
\end{equation}to count the number of edges in $F$ -- it is minimized when this number is smallest.
\section{Graph Isomorphisms}
The question of whether two graphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ are isomorphic is believed to be hard, but its classification into a complexity class is still a mystery. Since it is a hard problem, let us nonetheless describe an Ising formulation for it. An isomorphism is only possible if $|V_1|=|V_2|\equiv N$, so we will restrict ourselves to this case, and without loss of generality, we label the vertices of $G_1$ with $1,\ldots, N$.
We write this as an Ising model as follows. Let us describe a proposed isomorphism through binary variables $x_{v,i}$ which is 1 if vertex $v$ in $G_2$ gets mapped to vertex $i$ in $G_1$. The energy \begin{equation}
H_A = A\sum_v \left(1-\sum_i x_{v,i}\right)^2 + A\sum_i \left(1-\sum_v x_{v,i}\right)^2
\end{equation}ensures that this map is bijective. We then use an energy \begin{equation}
H_B = B\sum_{ij \in E_1} \left(1-\sum_{uv\in E_2} x_{u,i}x_{v,j}\right) + B\sum_{ij\notin E_1} \sum_{uv\in E_2} x_{u,i}x_{v,j}
\end{equation} to penalize a bad mapping: i.e. an edge that is not in $G_1$ is in $G_2$, or an edge that is in $G_1$ is not in $G_2$. If the ground state of this Hamiltonian corresponds to $H=0$, there is an isomorphism.
\section{Conclusion}
In this paper, we have presented classical Ising formulations for a wide variety of famous NP problems. Although most of the new constructions are quite cumbersome and unlikely to be useful for a long time, if ever, the tricks introduced may be refined and made efficient enough to become useful. Much of this paper can be thought of as an amusing exercise in enumerating many subtle constructions, although it may also turn out that some of the new constructions are quite useful for quantum algorithm designers as well. We stress that techniques involving separations of multiple scales are quite likely the most efficient for solving nontrivial problems, and this may be a useful direction to explore further.
We note that many of the Ising formulations of tree problems in this paper required substantial expansions of the state space, and perhaps there are ways to avoid this with more clever Hamiltonians. We are unsure whether it is an open question to determine the smallest possible scaling of state space size which can encode arbitrary instances of many NP problems, and this may be a worthwhile future direction.
The fact that there is an Ising model representation for a problem by no means ensures that it is NP-complete or NP-hard. As a reminder that sometimes the Ising approach may be quite inconvenient, let us consider the following example. Consider the simple problem of finding the largest integer in a list $n_1,\ldots,n_N$. Introducing binary variables $x_i$ for $i=1,\ldots, N$, the Ising model \begin{equation}
H = A\left(1-\sum_i x_i\right)^2 - B\sum_i n_ix_i
\end{equation}for $A>B\max(n_i)$ solves this problem. In fact, this problem looks somewhat like an instance of the random field Ising model on a complete graph, and yet this has a very simple $\mathrm{O}(N)$ classical algorithm. Thus, the results of this paper should be taken with a grain of salt -- it is quite possible there are far more efficient algorithms, even for a quantum computer.
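The point is easy to see numerically (the list and the scales below are hypothetical inputs satisfying $A > B\max(n_i)$): the ground state of this Hamiltonian simply flags the largest entry, which an $\mathrm{O}(N)$ scan would find directly.

```python
from itertools import product

nums = [3, 7, 2, 5]
A, B = 8.0, 1.0          # A > B * max(nums), as required

def energy(x):
    return A * (1 - sum(x)) ** 2 - B * sum(n * xi for n, xi in zip(nums, x))

best = min(product([0, 1], repeat=len(nums)), key=energy)
print(best)   # (0, 1, 0, 0): the ground state flags the largest entry, 7
```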
\section*{Acknowledgements}
A.L. is supported by the Purcell Fellowship at Harvard.
He would like to thank Robert Lucas for suggesting this problem, and Vicky Choi, Jacob Sanders and John Tran for helpful comments.
\bibliographystyle{plain}
\addcontentsline{toc}{section}{References}
\section{Introduction}\label{par:intro}
The optical-UV spectra of Seyfert~1 galaxies and quasars are characterized by strong and broad emission lines, which are believed to be produced by gas
photoionized by the central source. The broad line region (BLR) gas was initially proposed to be in the form of a set of clouds
\citep[e.g.][]{kmt81}. However, first, the confinement of these clouds proved problematic \citep{mf87} and, second, an accurate analysis of the smoothness
of the broad line wings revealed that the gas could not be in the form of discrete clouds but rather a continuous distribution of gas \citep{arav98}. Another hypothesis is that the gas is
ejected
from the accretion disc outskirts in the form of a wind \citep[e.g.][]{murray95,bottorff97,elvis00,czerny11}. Finally the gas reservoir could be provided by disrupted
stars in the vicinity of the black hole \citep{baldwin03}.\\
Extensive observations of this phenomenon, through the reverberation mapping technique, led to the conclusion that the BLR is extended over a large area
and that the radius of the BLR scales with the square root of the ionizing luminosity \citep{peterson93}.
The BLR does not have a homogeneous, isotropic distribution \citep[e.g.][]{decarli08}. Several studies pointed out that more highly ionized lines
(represented by \ion{C}{iv}) are incompatible with an origin in the same region where the bulk of H$\beta$ is produced \citep{sulentic00}. Different approaches to this problem lead to
divergent results. A flat disk structure would preferentially emit \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\, while H$\beta$ would be produced in a vertically extended region, more distant from the central source \citep{sulentic00,
decarli08,goad12}. Other interpretations propose a different scenario, where the gas emitting H$\beta$ has a flat geometry, near to the accretion disk, while \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ would be emitted in an
extended region \citep[][and references therein]{kz13}. By virtue of the tight correlation found between the BLR size and the AGN luminosity \citep{bentz13}, the BLR gas could also
arise from the accretion disk itself and generate a failed wind. The confinement of such cloud motion, involving both outflow and inflow of gas, would be set by the dust sublimation radius \citep{czerny11,galianni13}. Studies of gas dynamics within the BLR indeed point to a complex motion of the gas \citep{pancoast12},
where the matter may sometimes fall towards the black hole \citep{pancoast13,gg13}. Observationally, the broad-line centroids often show shifts (up to hundreds of km\,s$^{-1}$)
with respect to the systemic velocity of the AGN. Higher-ionization lines (like \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi$\lambda1548$) show in general
more pronounced blue-shifts with respect to lower-ionization lines \citep{peterson97}. This also points to a stratified medium, where the ionization of the clouds is related to their
illumination by the central source \citep{peterson04}. A way to model the BLR emission without a priori assumptions on its origin is the ``locally optimally emitting cloud'' model \citep[LOC, ][]{baldwin}, which describes the total emission of a line as a function of the density and
the distance of the gas from the central source \citep[see][ for a review]{korista97a}. This model has been successfully applied to the broad lines detected in the UV of e.g. \object{NGC~5548}
\citep[e.g.][]{korista00}.
Emission from the BLR can in principle extend from the optical-UV up to the X-ray band. With the advent of XMM-{\it Newton}\ and {\it Chandra},
relatively weak, but significant, broad emission lines have been detected in the soft X-ray band. Often these lines display a symmetric profile, suggesting an origin far from
the accretion disk where relativistic effects would instead distort the line profile \citep[e.g.][]{steen09,fabian09}. The most prominent X-ray lines with a non-relativistic
profile are found at the energy of the
O\,{\sc vii}\ triplet and the O\,{\sc viii}\,Ly$\alpha$ \citep[e.g.][ hereinafter C07]{boller07,steen09,longinotti10,ponti10,costantini07}.
An extension of the LOC model, adding also the X-ray band in the modeling, has been applied to
\object{Mrk~279} (C07). In that case, the luminosities of the soft-X-ray emission lines (\ifmmode {\rm C}\,{\sc vi} \else C\,{\sc vi}\fi, N\,{\sc vii}, O\,{\sc vii}, O\,{\sc viii}\ and Ne\,{\sc ix}) were well predicted by the LOC model, suggesting also that the bulk
of the X-ray lines could possibly arise up to three times closer to the black hole than the UV lines.\\
A contribution of the BLR to the {Fe\,K$\alpha$}\ line at 6.4\,keV has often been
debated. A comparison between the full width at half maximum (FWHM) of the H$\beta$ line at 4861\,\AA\ and the FWHM of the narrow component of the {Fe\,K$\alpha$}\ line as measured by {\it Chandra}-HETG did
not reveal any correlation, as would have been expected if the lines originated from the same gas \citep{nandra06}. However, for a specific source, namely the LINER \object{NGC~7213}, where no hard X-ray reflection was
observed, the {Fe\,K$\alpha$}\ line and the \ifmmode {\rm H}\beta \else H$\beta$\fi\ line are consistent with having the same FWHM \citep{bianchi08}. On the other hand, as seen above, X-ray lines
may originate in different regions of the BLR. Therefore a direct comparison between the FWHM of {Fe\,K$\alpha$}\ and H$\beta$ may not prove or disprove that {Fe\,K$\alpha$}\ is also produced
in the BLR. A further extension of the LOC model to the 6.4\,keV region showed that, in the case of Mrk~279, the BLR contributed at most 17\% of the total {Fe\,K$\alpha$}\ emission, suggesting that
reflection, either from the disk or from the torus, had instead to be the dominant emitter of that line \citep{costantini10}.
\object{Mrk~509} was the subject of a large multiwavelength campaign carried out in
2009 \citep{kaastra1}. The source has been an ideal laboratory for studying the ionized gas outflowing from the source \citep{detmers11,ebrero11,kaastra2,kriss11,steen11,arav12}.
The broad band continuum was investigated in \citet[][]{med11,pop13,boissay14} and the {Fe\,K$\alpha$}\ long term variability in \citet{ponti13}.
In this paper of the series we investigate the BLR emission through the emission lines, simultaneously detected by different instruments from the optical to the X-rays.
The paper is organized as follows: In Sect.~\ref{par:data} the data are described. In Sect.~\ref{par:model} we describe the application of the LOC model to the data. The discussion is in
Sect.~\ref{par:discussion}, followed by the conclusions in Sect.~\ref{par:conclusion}.
Here we adopt a redshift of 0.034397 \citep[][]{huchra93}. The cosmological parameters used are:
H$_0$=70 km/s/Mpc, $\Omega_{\rm m}$=0.3, and $\Omega_{\Lambda}$=0.7. The errors are quoted at the 1$\sigma$ confidence level, obtained using the $\chi^2$
statistical method.
\begin{table*}
\caption{\label{t:lines} Main parameters of the broad lines components used in this analysis.}
\begin{center}
\begin{tabular}{llllll}
\hline\hline
Ion & Wavelength & Lum$_{i}$ & Lum$_{b}$& Lum$_{vb}$& Inst.\\
\hline
{Fe\,K$\alpha$} & 1.93 & $-$& $-$ &$4.1\pm0.5$& 1\\
Ne\,{\sc ix} & 13.69 &$-$ &$1.07\pm0.12$ &$-$&2\\
O\,{\sc viii}$^a$ & 18.96 &$-$&$1.56\pm0.14$&$-$ &2\\
O\,{\sc vii}& 22.1 &$-$ &$3.72\pm0.33$&$-$& 2\\
N\,{\sc vii}& 24.77 &$-$&$2.28\pm1.52$&$-$ &2\\
\ifmmode {\rm C}\,{\sc vi} \else C\,{\sc vi}\fi& 33.73 &$-$&$1.37\pm0.54$&$-$& 2\\
\hline
\ifmmode {\rm C}\,{\sc iii} \else C\,{\sc iii}\fi& 977& $-$ & $39\pm 15$ & $-$ & 3\\
N\,{\sc iii}&991& $-$ & $-$ & $35\pm 14$ & 3\\
O\,{\sc vi}$^a$ & 1025&$-$ & $62\pm 24$ & $90\pm 36$& 3\\
\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi$^a$&1216&$35\pm 7$ & $ 196\pm 40$ & $402\pm 80$& 4\\
\ion{N}{v} &1238&$-$ & $ 15\pm 3 $ & $-$ & 4\\
\ion{Si}{ii} &1260&$-$ & $ 30\pm 6 $ & $-$& 4\\
\ion{O}{i}$^a$ &1304&$-$ & $14\pm 3 $ & $-$& 4\\
\ion{C}{ii} &1335&$-$ & $3.5\pm0.7$ & $-$& 4\\
\ion{Si}{iv}$^a$ &1403&$-$ & $55\pm 11 $ & $-$ & 4\\
\ion{N}{iv}]&1486&$-$ & $ 3.0\pm 0.6$ & $-$ & 4\\
\ion{Si}{ii} & 1526&$-$ & $ 6\pm 1 $ & $-$ & 4\\
\ion{C}{iv} &1548&$49\pm 9 $ & $124\pm 24$ & $191\pm 39$& 4\\
\ion{He}{ii} &1640&$8.5\pm 1.7 $ & $55\pm 11 $ & $-$ & 4\\
\ion{O}{iii}] &1663&$-$ & $27\pm 5$ & $-$ & 4\\
\hline
\ifmmode {\rm H}\delta \else H$\delta$\fi & 4102 & $-$& $11\pm1$&$-$ & 5\\
\ifmmode {\rm H}\gamma \else H$\gamma$\fi$^a$& 4340 &$-$&$24\pm4$&$-$ & 5\\
\ifmmode {\rm H}\beta \else H$\beta$\fi & 4861 &$-$&$45\pm14$&$-$ & 5\\
\ifmmode {\rm H}\alpha \else H$\alpha$\fi & 6563 &$-$&$121\pm9$&$-$ & 5\\
\hline
\end{tabular}
\end{center}
Notes:\\
In columns 3, 4 and 5, the line luminosities are reported for the intermediate {\it(i)} with FWHM=1000--3000\,km\,s$^{-1}$, broad {\it(b)} with FWHM=4000--5000\,km\,s$^{-1}$,
and very broad {\it(vb)} with FWHM$>$9000\,km\,s$^{-1}$\ components (defined in Sect.~\ref{par:uvlines}).\\
Restframe nominal wavelengths are in \AA, luminosities are in units of $10^{41}$\,erg\,s$^{-1}$.\\
Instruments: 1: XMM-{\it Newton}-EPIC-PN, 2: XMM-{\it Newton}-RGS, 3: FUSE, 4: HST-COS, 5: XMM-{\it Newton}-OM\\
$^a$ Blends of lines: O\,{\sc viii}\ with the O\,{\sc vii}-He$\beta$ line; the O\,{\sc vi}\ doublet with \ifmmode {\rm Ly}\beta \else Ly$\beta$\fi; the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ with the \ion{O}{v}] triplet and He\,{\sc ii}; O\,{\sc i}\ with
\ion{Si}{ii}; the \ion{Si}{iv} doublet with both the \ion{O}{iv}] and \ion{S}{iv} quintuplets; \ifmmode {\rm H}\gamma \else H$\gamma$\fi\ with He\,{\sc ii}.
\end{table*}
\section{The data}\label{par:data}
Here we make use of the analyses already presented in other papers of this series on Mrk~509. In particular the XMM-{\it Newton}-OM optical lines are taken from \citet{med11}, the COS and FUSE broad
emission line fluxes are taken from \citet[][hereinafter K11]{kriss11}. The X-ray broad line data are from \citet[][using RGS and LETGS ]{detmers11, ebrero11} and \citet[][using PN]{ponti13}.
The lines that we use in our modeling are listed in Table~\ref{t:lines}.
\subsection{The X-ray broad lines}
The XMM-{\it Newton}-RGS spectrum shows evidence of broad emission at energies consistent with the transitions of the main He-like
(\ion{O}{vii} and \ion{Ne}{ix} triplets) and H-like (\ion{C}{vi}, \ion{N}{vii}, \ion{O}{viii}) lines \citep[see Table~2 and Fig.~3 of][]{detmers11}. The FWHM of the non-blended lines was about 4000\,km\,s$^{-1}$.
For the triplets, neither the FWHM nor the individual-line fluxes could be disentangled; a significant detection was found only for
the resonance lines. We therefore took the FWHM of the resonance
line as a reference value and derived the upper limits of the intercombination and forbidden lines for both the \ion{O}{vii} and
\ion{Ne}{ix} triplets. In Table\,~\ref{t:lines} we report the intrinsic line luminosities.\\
The luminosity of the {Fe\,K$\alpha$}\ line (Table~\ref{t:lines}) has been measured by the EPIC-PN\ instrument. The line is formed by a constant,
narrow, component plus a broad, smoothly variable component \citep{ponti13}. We do not consider here the narrow component whose FWHM is not resolved by XMM-{\it Newton}. This component
is not variable on long time scales and may be caused by reflection from regions distant from the black hole, like the molecular torus.
The broad and variable component of the {Fe\,K$\alpha$}\ line has a FWHM of about 15,000-30,000\,km\,s$^{-1}$\ and probably arises at least in part from the BLR \citep{ponti13}.
The EPIC-PN\ spectrum of Mrk~509\ also shows hints of highly ionized lines from \ion{Fe}{xxv}
and \ion{Fe}{xxvi}. These are too ionized to be produced in the BLR \citep[e.g.][]{costantini10}, but they are likely to come from a hot inner part of the molecular
torus \citep{costantini10, ponti13}. Thus we do not include these lines in this analysis.
\subsection{The UV broad lines}\label{par:uvlines}
In the HST-COS modeling of the emission lines, more than one Gaussian component is necessary to fit the
data (see Table~3--6 and Fig.~4 of K11). The most prominent lines (i.e. \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ and \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi) show as many as four distinct components.
A first narrow component has a FWHM of about 300\,km\,s$^{-1}$; an intermediate component with FWHM
of about 1000--3000\,km\,s$^{-1}$\ and a broad component with 4000--5000\,km\,s$^{-1}$\ are also present in the fit. Finally, a very broad component with FWHM
of about 9000--10\,000\,km\,s$^{-1}$\ is present for the most prominent lines (Table~\ref{t:lines}). We ignored in this study the narrow component (FWHM$\sim$300\,km\,s$^{-1}$), which is unlikely to
be produced in the BLR but should rather come from the Narrow Line Region (NLR). Due to the complex and extended morphology of the narrow-line emitting gas, the distance of the NLR
in this source is uncertain \citep{phillips83, fischer15}. From the width of the narrow lines,
the nominal virial distance ranges between 6 and 13\,pc, depending on the black hole mass estimate (see Sect.~\ref{par:geometry}). Note that with respect to Table~3 in K11, we summed the doublet luminosities as in many cases they are partially blended with each other.
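The quoted 6--13\,pc range follows from a simple virial estimate, $r = GM_{\rm BH}/(f\,v^2)$ with $v$ taken as the line FWHM. In the sketch below the black hole masses are illustrative values only, chosen to bracket the estimates discussed in Sect.~\ref{par:geometry}, and the virial factor $f$ is set to unity:

```python
# Hedged sketch: virial distance r = G * M_BH / (f * v^2) for the
# narrow-line gas (FWHM ~ 300 km/s). The M_BH values are illustrative
# assumptions, not the paper's adopted estimates; f = 1 is assumed.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

def virial_distance_pc(m_bh_msun, fwhm_kms, f=1.0):
    """Virial radius in pc for a given black hole mass and line FWHM."""
    v = fwhm_kms * 1e3                      # convert km/s to m/s
    return G * m_bh_msun * M_SUN / (f * v**2) / PC

# Two illustrative black hole masses bracketing ~1-3 x 10^8 M_sun:
for m in (1.4e8, 3.0e8):
    print(f"M_BH = {m:.1e} M_sun -> r = {virial_distance_pc(m, 300):.1f} pc")
```

With these assumed masses the estimate lands at roughly 7 and 14\,pc, consistent with the range quoted above.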
We corrected the line fluxes for the Galactic extinction (E(B-V)=0.057) following the extinction law in \citet{cardelli89}. The errors listed in Table~\ref{t:lines} are discussed below (Sect.~\ref{par:errors}).\\
We also used the archival FUSE data,
which offer the flux measurements of shorter wavelength lines. The drawback of this approach is that the FUSE
observations were taken about 10 years before our campaign (in 1999--2000). In this time interval the source might
have significantly changed its flux and emission profiles. In this analysis we chose the 1999 observation \citep{kriss00},
as on that
occasion the flux was comparable to the HST-COS data in the overlapping band and the FWHM most closely resembled the present data. In Table~\ref{t:lines} we report the FUSE line
luminosities used in this paper. Also in this case we summed doublets and the blended lines.
\subsection{The optical broad lines}
The Optical Monitor (OM) data were collected at the same time as the X-ray data presented in this paper. The data reduction
and analysis have been presented by \citet{med11} and included the correction for Galactic absorption and the subtraction of the stellar contribution of the host galaxy from both the continuum and the emission lines. The optical grism data, covering the 3000--6000\,\AA\ wavelength range,
indeed displayed clear emission lines of the Balmer series \citep[see Table~3 and Fig.~4 in][]{med11}. For
the H$\alpha$ line, two components could be disentangled: a narrow, unresolved component with a flux of
$\sim3.3\times10^{-13}$\,erg\,cm$^{-2}$\,s$^{-1}$ and a broader component, with a FWHM of $\sim4300$\,km\,s$^{-1}$. For the other lines of
the series, the narrow component could not be disentangled. In order to estimate the flux of the broad component alone,
we simply scaled the flux of the narrow H$\alpha$ component by the line ratios of the other lines of the Balmer series \citep[e.g. ][]{rafanelli}. We then subtracted the estimated narrow-line flux from the total flux measured by the OM, resulting
in a relatively small correction with respect to the total line flux. The intrinsic luminosities of the broad lines
are reported in Table\,\ref{t:lines}.
\subsection{Notes on the uncertainties}\label{par:errors}
The uncertainties associated with the measurements are quite heterogeneous, reflecting the
different instruments' performances.
In the UV data, the statistical errors on the fluxes are extremely small (2--4\%, K11). However, the line luminosities are affected by
additional systematic uncertainties, due to the isolation of specific line components, namely the ones coming from the BLR, from a blend of
different emission lines with different broadenings, fluxes, and velocity shifts. For instance, the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ line doublet at
1548\,\AA\ is the sum of as many as seven components which suffer significant blending (K11). As seen in
Sect.~\ref{par:uvlines}, only for the strongest lines could three broad line widths be disentangled. However, for the lower-flux
lines this decomposition could not be done, leaving room for additional uncertainties on the line flux of the broad
components. Therefore, we assigned more realistic error bars to the UV data.
We associated an error of 20\% with the fluxes, which is roughly based on the ratio between the narrow and
the broad components of the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ doublet. We also left out from the final modeling \ion{Si}{ii} ($\lambda\lambda 1260, 1526$\,\AA) and \ion{N}{iv}] as in the original COS data (see K11) those lines
are easily confused with the continuum and are therefore affected by a much larger, and difficult to quantify, uncertainty than the one provided by the statistical error.
The line fluxes and widths observed by FUSE in 1999 may also be different from 2009.
K11 estimated that the continuum flux was 34--55\% lower and the emission
lines could have been affected. In order to take into account the possible line variability, we assigned to the FUSE detections an error of 40\% on the flux. We also left out N\,{\sc iii}\ from the fit:
as N\,{\sc iii}\ is a weak and shallow line, only a very broad component is reported, which may be contaminated by the continuum emission \citep{kriss00}.
For the O\,{\sc vii}\ triplet, in the RGS band, we summed up the values of the line fluxes, but formally retaining the percentage error on the best-measured line, as
upper limits were also present \citep{detmers11}. We considered \ifmmode {\rm C}\,{\sc vi} \else C\,{\sc vi}\fi\ and N\,{\sc vii}\ as upper limits because their detection significance in the RGS analysis was below 2.5$\sigma$.
However, we used these two points as additional constraints, requiring the model not to exceed them.\\
For the iron broad component, detected at 6.4\,keV in the EPIC-PN\ spectrum, which we consider in this work, we also summed the {Fe\,K$\alpha$}\ line with the {Fe\,K$\beta$}\ line (about 10\% of the flux of the {Fe\,K$\alpha$}) as we do in the
model.
\section{The data modeling}\label{par:model}
\subsection{The LOC model}\label{par:loc_modeling}
In analogy with previous works \citep{costantini07, costantini10}, we interpret the broad emission features using a
global model. In the ``locally optimally-emitting cloud'' (LOC) model, the emerging emission spectrum is not dominated by the details of the
clouds (or, more generally, gas layers), but rather by their global distribution in hydrogen number density, $n$, and radial distance from the source, $r$ \citep{baldwin}. The integrated luminosity
of each emission line is indeed given by the local line emissivity $L(r,n)$, weighted by powerlaw distributions in $n$ and $r$:\\
\begin{equation}
L_{\rm line}\propto\int\int L(r,n)\ r^{\gamma}\ n^{\beta}\ dr\ dn.
\end{equation}
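Eq.~1 can be evaluated numerically on logarithmic grids in $n$ and $r$. The following minimal sketch uses a toy Gaussian emissivity in place of the Cloudy-computed $L(r,n)$; the grid spacings and the peak location of the toy emissivity are purely illustrative:

```python
import numpy as np

# Minimal numerical sketch of Eq. 1: L_line ~ integral of L(r,n) r^gamma n^beta dr dn.
# toy_L below stands in for the Cloudy-computed emissivity; it is NOT a physical model.
trapz = getattr(np, "trapezoid", None) or np.trapz  # numpy 1.x / 2.x compatibility

log_n = np.linspace(8.0, 12.5, 60)    # hydrogen density grid, log10(cm^-3)
log_r = np.linspace(14.75, 18.5, 60)  # radial grid, log10(cm)
n = 10.0**log_n
r = 10.0**log_r

def toy_L(rr, nn):
    # Placeholder emissivity peaked at an intermediate density and radius;
    # in the real analysis this comes from the Cloudy grid.
    return np.exp(-(np.log10(nn) - 10.0)**2 - (np.log10(rr) - 16.5)**2)

def loc_line_luminosity(gamma, beta=1.0):
    """Unnormalized line luminosity from Eq. 1 for given power-law indices."""
    R, N = np.meshgrid(r, n, indexing="ij")
    integrand = toy_L(R, N) * R**gamma * N**beta
    inner = trapz(integrand, n, axis=1)   # integrate over density first
    return trapz(inner, r)                # then over radius
```

The resulting (unnormalized) luminosity is then scaled by the covering factor when compared with the measured lines.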
The powerlaw index of $n$ has been reported to be typically consistent with unity in the LOC analysis of quasars \citep{baldwin97}. A steeper (flatter) index for the density
distribution would enhance regions of the BLR where the density is too low (high) to efficiently produce lines \citep{korista00}. Here we assume the index $\beta$ to be
unity. Following C07, the density ranges between
$10^{8}$ and $10^{12.5}$\,cm$^{-3}$. This is the range where the line emission is effective \citep{korista97a}. The radius ranges between $10^{14.75}$ and $10^{18.5}$\,cm, to also include the possible X-ray emission from the lines,
in addition to the UV and optical ones (C07).
The gas hydrogen column density was fixed to $10^{23}$\,cm$^{-2}$ where
most of the emission should occur \citep{korista97a,korista00}. Moreover, the emission spectrum is not significantly
sensitive to the column density in the range $10^{22-24}$\,cm$^{-2}$ \citep{korista00}. The grid of parameters has been constructed using Cloudy (ver.~10.00),
last described in \citet{ferland13}. For each point of the grid, $L(r,n)$ is calculated and then integrated according to Eq.~1.
The emitted spectrum is dependent on the spectral energy distribution (SED) of the source. In this case we benefited from the simultaneous
instrument coverage from optical (with OM) to UV (with HST-COS) and X-rays (EPIC-PN\ and {\it INTEGRAL}).
As a baseline we took the SED averaged over the 40-days XMM-{\it Newton}\ monitoring campaign
\citep[labeled standard SED in Fig. 3 of][]{kaastra1}, taking care that the SED is truncated at infrared frequencies (no-IR case in that figure).
Although the accretion disk must have some
longer-wavelength emission, most of the infrared part (especially the far-IR bump) is likely to emerge from outer parts of the system, like the molecular torus. An overestimate
of the infrared radiation would artificially add free-free heating to the process. This effect becomes important at longer wavelengths as it is
proportional to $n^2/\nu^2$, where $\nu$ is the photon frequency. Free-free heating significantly alters the line ratios of e.g. \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ to \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\
or O\,{\sc vi}\ \citep[][]{ferland02}. To avoid this effect, we truncated the SED at about 4\,$\mu$m. During the XMM-{\it Newton}\ campaign the light
curve of both the hard (2--10\,keV) and the soft (0.5--2\,keV) X-ray flux rose gradually by up to a factor of 1.3 and then decreased by about the same factor in about one month \citep{kaastra1}. The OM photometric points
followed the same trend \citep[e.g. Fig.~1 of][]{med11}. Variations of the continuum fitting parameters \citep[discussed in ][]{med11,pop13}
were not dramatic. Therefore, to first order, the SED did not change significantly in shape, while varying in normalization.
\begin{figure}
\begin{center}
\resizebox{0.44\textwidth}{!}{\includegraphics[angle=0]{fig1.ps}}
\end{center}
\caption{\label{f:bad_uvnarrow} The LOC fit considering the sum of the broad and very-broad line components consistently provides a better description of the data, for any choice of the other parameters.
From top to bottom: combination of intermediate+broad, broad+very-broad, intermediate+broad+very-broad, and broad lines alone.
In this example, we only show the fit to the UV data.}
\end{figure}
\subsection{The LOC fitting}\label{par:fit}
In the LOC model, the best-fit distribution of the gas around the black hole depends on many parameters. The radial distribution and the covering factor of the gas, which are the
free parameters in the fit, in turn depend on pre-determined parameters, namely the SED, the metallicity (which we assume to be solar for the moment, see Sect.~\ref{par:abundances}), and the inner and outer radii of the gas. Moreover, broad lines
measured in a range covering more than three decades in energy are likely to arise from gas distributed over a large region with inhomogeneous characteristics.
In fitting our model, we considered four different UV line-widths combinations, namely intermediate+broad, broad+very broad, intermediate+broad+very broad
as well as broad lines alone (Table~\ref{t:lines}). We also selected six bands over which to perform the
$\chi^2$ test on the line flux modeling i.e. optical, X-rays, UV, optical+UV, X-rays+UV and X-rays+UV+optical. The individual bands are defined by the instruments used (Table~\ref{t:lines}).
We used an array of six possible inner radii, ranging from log\,$r$=14.75 to 17.7\,cm (the actual outer radius being log\,$r$=18.5\,cm) to construct the model. Considering all combinations
of parameters, we obtain 144 different fitting runs.
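The bookkeeping of the 144 runs follows directly from the combinations described above; in the sketch below only the first and last inner radii are quoted in the text, so the intermediate grid values are illustrative placeholders:

```python
from itertools import product

# Sketch of the fitting-run bookkeeping described in the text:
# 4 UV line-width combinations x 6 band selections x 6 inner radii = 144 runs.
width_combos = ["i+b", "b+vb", "i+b+vb", "b"]
bands = ["O", "UV", "X", "UV+O", "UV+X", "UV+O+X"]
# Only log r = 14.75 and 17.7 are quoted in the text; the intermediate
# values below are illustrative placeholders, not the actual grid.
log_r_in = [14.75, 15.35, 15.95, 16.55, 17.15, 17.7]

runs = list(product(width_combos, bands, log_r_in))
print(len(runs))   # 144 runs in total
```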
Whenever a limited number of lines (e.g. the UV band lines alone) are fitted, the model is extrapolated to the
adjacent bands to inspect the contribution of the best-fit model to the other lines. Of course, not all the runs are sensitive to all the parameters. For instance, a run which fits the
X-ray band only will be insensitive to any UV line widths.
Free parameters of the fit are the covering factor $C_V$ of the gas and the slope $\gamma$ of the
radial distribution. The covering factor ($\Omega/4\pi$, where $\Omega$ is the solid angle subtended by the gas) measured by the LOC is the fraction of the sky covered by the gas, as seen from the source. The value of $C_V$ is constrained to lie in the range 0.05--0.6, based on the range of past estimates for the BLR obtained with different techniques (see Sect.~\ref{par:geometry}).
In the following we describe the dependence of the fit on the different parameters, based on the goodness of fit.
In Fig.~\ref{f:bad_uvnarrow} we show the comparison among best-fits with different line widths for the UV lines. Considering the same band (the UV only here), the inclusion of the
intermediate component (Sect.~\ref{par:uvlines}) systematically worsens the fit slightly.
For simplicity, in the following we describe the sum of the broad and very broad components only, as they provide a slightly better fit,
although the other combinations were also always checked in parallel.
We show the fits in the different wavelength bands in Fig.~\ref{f:bands}. In Table~\ref{t:1comp} the best fit parameters are shown for
each combination of bands, using the full range of radii. We note that the UV data certainly dominate
the fit, by virtue of the larger number of data points with a relatively smaller error bar. The global fit however does not completely explain the high- and low-energy ends of the data. A fit based on
the X-ray data under-predicts the UV and optical data, possibly suggesting the presence of an additional component. On the other hand, the fit based on the optical band describes well both
the optical lines and the \ion{C}{iv} UV line, albeit largely overestimating the rest of the UV and X-ray lines.
The LOC fitting depends also on the inner radius over which the radial gas distribution is calculated. We fitted the data for a choice of six inner radii, roughly separated by
half a decade. In Fig.~\ref{f:radius} a fit considering all the data is plotted for a selection of inner
radii. We see that while the UV band is only marginally affected by the inner radius choice, this parameter can make a difference for the optical and X-ray bands.
The application of a single component of the LOC model, even with some tunable parameters, does not fully explain the data. In the following we explore other effects that may play a role in
the line emission.
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=90]{fig2.ps}}
\end{center}
\caption{\label{f:bands} LOC fits over individual bands: X-rays (dashed line), UV (solid line) and optical band (dash-dotted line).}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=90]{fig3.ps}}
\end{center}
\caption{\label{f:radius} LOC fits over the whole spectral band (X-UV-optical) with three representative inner radii. The X-ray band is best fitted for smaller inner radii, while the optical band
may be better described if larger radii are used.}
\end{figure}
\begin{table}
\caption{\label{t:1comp} Results of the LOC fitting considering different spectral bands.}
\begin{center}
\begin{tabular}{llll}
\hline
\hline
& $\gamma$ & $C_V$ & $\chi^2_{\rm Red}$\,({\it dof})\\
\hline
O & $1.10\pm0.5$ & $>0.6$ & 0.9 (2)\\
UV & $1.05\pm0.08$ & $<0.05$ & 4.3 (8)\\
X & $1.13\pm0.23$ & $>0.6$ & 1.5 (4)\\
UV+X & $1.05\pm0.05$ & $<0.05$ & 4.9 (14)\\
UV+O & $1.0\pm0.1$ & $<0.05$ & 3.4 (12)\\
UV+O+X & $1.05\pm0.06$ & $<0.05$ & 4.4 (18)\\
\hline
\end{tabular}
\end{center}
Notes: $\gamma$ is the slope of the radial distribution of the line luminosities. $C_V$ is the covering factor of the
BLR gas. $\chi^2_{\rm Red}$ is the reduced $\chi^2$ and $dof$ are the degrees of freedom.
\end{table}
\subsection{Extinction in the BLR}\label{par:ext_blr}
As seen above, the application of the LOC model to Mrk~509\ points out that the optical lines are systematically underestimated.
A possible solution is to include extinction in the BLR itself. The UV/optical
continuum of Mrk~509\ is not significantly reddened \citep[][]{osterbrock77}. However, the dust may be associated only with the line-emitting region, such that the continuum we measure would be unaffected
\citep[][]{osterbrock06}. In principle, the He\,{\sc ii}(1640\AA)/He\,{\sc ii}(4686\AA) ratio would be an indicator of reddening intrinsic to the BLR \citep[e.g.][]{osterbrock06}. In practice, both lines are severely
blended with neighboring lines and with the wing of higher flux lines, namely \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ for He\,{\sc ii}(1640\AA) and \ifmmode {\rm H}\beta \else H$\beta$\fi\ for He\,{\sc ii}(4686\AA) \citep{bottorff02}. In our case, the observed line ratio is very low
($\sim1.7$) when compared to the theoretical value derived from the LOC model \citep[6--8;][]{bottorff02}. This line ratio would imply an extinction E(B-V)=0.18 \citep[eq.~4 of][]{annibali10},
when a Small Magellanic Cloud (SMC) extinction curve, possibly more appropriate for AGN, is used \citep[][]{hopkins04,willot05}. Given the uncertainties (i.e. line blending)
associated with our He\,{\sc ii}\ measurements, we took this value as the upper limit of a series of E(B-V) values to be applied to our lines. Namely, we tested E(B-V)=(0.18, 0.15, 0.10, 0.075, 0.05, 0.025).
We then corrected accordingly the observed optical and UV fluxes
\citep[][]{annibali10}. For the lines observed by FUSE (Table~\ref{t:lines}), we extrapolated the known SMC extinction curve with a $\lambda^{4}$ function to reach those wavelengths.
The extinction in the X-rays has been simply estimated using the E(B-V)--$N_{\rm H}$ relation provided in \citet[][]{predehl95}, considering an SMC-like total-to-selective extinction ratio $R_V$
of 2.93 \citep{pei92}.\\
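The corrections above can be sketched as follows. The $k(\lambda)$ value passed to the dereddening function is a placeholder (the actual SMC-like curve is taken from the references above); only the $R_V$ value and the Predehl \& Schmitt scaling are taken from the text:

```python
# Hedged sketch of the extinction corrections described above. The
# k(lambda) argument is a placeholder for the SMC-like extinction curve
# (Pei 1992); the N_H scaling follows Predehl & Schmitt (1995).
R_V = 2.93  # SMC-like total-to-selective extinction ratio (Pei 1992)

def deredden(flux_obs, k_lambda, ebv):
    """Correct an observed flux for A(lambda) = k(lambda) * E(B-V) mag."""
    return flux_obs * 10.0**(0.4 * k_lambda * ebv)

def nh_from_ebv(ebv):
    """Predehl & Schmitt (1995): N_H ~ 1.79e21 * A_V cm^-2, A_V = R_V * E(B-V)."""
    return 1.79e21 * R_V * ebv

for ebv in (0.18, 0.15, 0.10, 0.075, 0.05, 0.025):
    print(f"E(B-V) = {ebv}: N_H = {nh_from_ebv(ebv):.2e} cm^-2")
```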
When only the UV lines are modeled, the $\chi^2$ method chooses lower values of the BLR extinction, with a final $\chi^2_{\rm red}$ of 6.1 ({\it dof}=8) for E(B-V)=0.025.
The effect of the BLR extinction is relatively modest in the X-ray band, mainly affecting the O\,{\sc vii}\ lines.
However, any value of the BLR extinction largely overcorrects the Balmer series lines. Therefore when also the optical lines are included in the model, the resulting fit
becomes even worse ($\chi^{2}_{red}=16-20$, for 18 {\it dof}).
\subsection{A two-component LOC model}\label{par:two}
A single LOC-component does not provide a fully satisfactory fit. This is not surprising, given the large range of ionization potentials of the lines.
Therefore we attempt here to test a two-component model.
As before, for each of the two components we fit all the combinations of line widths (as in Fig.~\ref{f:bad_uvnarrow}). We first considered the whole range of radii (Model~1 of Table~\ref{t:total}). Then we let the inner radius (as defined in
Sect.~\ref{par:fit}) of both components vary (Model~2 in Table~\ref{t:total}). Finally, we took into account the different emissivity depending on the size of the region, by varying the outer radius as well. To do
this, we divided the radial range into four regions (starting at log$r=14.75, 15.53, 16.56, 17.47$),
in order to have roughly an order of magnitude difference between two adjacent radii.
We then considered for each component all combinations of adjacent regions (or single regions).
Therefore we have a total of 10 options for the size of each component of the gas (Model~3 in Table~\ref{t:total}).
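The count of 10 options follows from enumerating the contiguous runs of the four radial regions; a quick check:

```python
# Counting check for the "10 options" quoted above: with four radial
# regions (starting at log r = 14.75, 15.53, 16.56, 17.47), each gas
# component spans a contiguous run of one or more adjacent regions.
region_starts = [14.75, 15.53, 16.56, 17.47]
options = [(region_starts[i], region_starts[j])
           for i in range(len(region_starts))
           for j in range(i, len(region_starts))]
print(len(options))   # n(n+1)/2 = 10 for n = 4
```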
Note that for each run the inner and outer radii were fixed parameters.
We fitted the whole band (X-ray, UV, optical: XUVO) for the two LOC components. The fit is driven by the UV band, where the uncertainties on the data are the smallest.
We note that the slope of the powerlaw is dependent on the covering factor, as flatter slopes ($\gamma<1.1$) systematically correspond
to very small covering factors ($C_V<0.05$). Conversely, the upper limit we set for the covering factor ($C_V$=0.6) corresponds to steeper radial slopes \citep[see also][]{korista00}.
The covering factor has the effect of regulating the predicted line luminosities. A steeper radial distribution would enhance the lines at smaller radii,
where the gas illumination is stronger. Therefore a larger $C_V$ would be required to tune down the line luminosities.
On the contrary, a flatter slope would lower the contribution of the strongly illuminated region, while the outer radii are enhanced.
However, the radiation field weakens with distance; therefore a smaller $C_V$ is necessary to adapt the predicted fluxes to the real data.
In the last line of Table~\ref{t:total} we report the reduced $\chi^2$. The reduced $\chi^2$ never falls below $\sim$2, even for the better-fitting models.
This is mainly due to the outlying data points, namely Si\,{\sc iv}, \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi,
and He\,{\sc ii}. Excluding the {Fe\,K$\alpha$}\ line did not solve this, as this line has a larger uncertainty than the UV lines.
Fig.~\ref{f:regions} refers to Model~3. As expected, the optical and the X-ray lines were the most sensitive to the choice of the outer and inner emitting regions, respectively. A highly ionized component, extending down to
log$r<$14.7\,cm is necessary in order to reproduce the O\,{\sc viii}\ and Ne\,{\sc ix}\ lines in the X-rays. All the optical lines and part of the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ line
are best fitted by adding a component with a larger inner radius.
\begin{table*}
\caption{\label{t:total} Best fit parameters for a two-component model and different emitting regions.}
\begin{center}
\begin{tabular}{llll}
\hline\hline
& {\bf Model 1} & {\bf Model 2} & {\bf Model 3}\\
\hline
{\bf Comp 1}&&&\\
$r_{in}$ & 14.75 & 17.0 & 17.47\\
$r_{out}$ & 18.5 & 18.5 & 18.5\\
$\gamma$ & $1.59\pm0.03$ & $1.04\pm0.02$ & $1.10\pm0.02$\\
$C_V$ & $<0.05$ & $<0.05$ & $>0.6$\\
\hline
{\bf Comp 2}&&&\\
$r_{in}$ & 14.75 & 14.75 &14.75\\
$r_{out}$ & 18.5 & 18.5 & 17.47\\
$\gamma$ & $1.06\pm0.02$ & $1.17\pm0.02$ & $1.15\pm0.02$ \\
$C_V$ & $<0.05$ & $>0.6$ & $>0.6$\\
\hline
$\chi^2_{red}$\,({\it dof}) & 5.2(15)& 2.5(15)& 2.3(15)\\
\hline
\end{tabular}
\end{center}
Notes:\\
The parameters are: $r_{in}$, the inner radius; $r_{out}$, the outer radius (both in log\,cm); $\gamma$,
the slope of the radial distribution; and $C_V$, the covering factor. Note that the {Fe\,K$\alpha$}\ line is excluded from this fit.\\
Model~1: the emissivity occurs over all radii for both components.\\
Model~2: the two components have different inner radii.\\
Model~3: both the inner and outer radii of the emissivity are varied for both components.\\
\end{table*}
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=90]{fig4a.ps}}
\resizebox{\hsize}{!}{\includegraphics[angle=90]{fig4b.ps}}
\end{center}
\caption{\label{f:regions} Upper panel: LOC fit with two components, acting in different regions near the AGN (Model~3 in Table~3).
X-ray data are best fitted by a component near the black hole, while the optical data are better
fitted by a further-away component. Lower panel: residuals to the fit.}
\end{figure}
\section{Discussion}\label{par:discussion}
\subsection{Abundances and the influence of the SED}\label{par:abundances}
Abundances in the BLR should be either solar or super-solar. The metal enrichment should come from episodic starburst
activity \citep{romano02}. The N\,{\sc v}\ line is often taken as an abundance indicator in AGN since it is a product of the
CNO cycle in massive stars.
Using the broad component ratios in our data for N\,{\sc v}/\ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ and N\,{\sc v}/He\,{\sc ii}, the diagnostic plots of \citet{hamann02}
suggest abundances in Mrk~509\ of $1<Z/Z_{\odot}<3$ \citep[see][for the limitations in determining abundances in the BLR]{steen11}. In this analysis we
considered solar abundances, as defined in Cloudy. We therefore also tested the fits presented above using a metallicity three times solar.
The fits obtained are systematically worse ($\Delta\chi^2=2-7$ for the same number of degrees of freedom). This suggests that the abundances are close to solar.
The present HST-COS data were taken 20 days after the last XMM-{\it Newton}\ pointing \citep{kaastra1}, as the closing measurements of the campaign, which lasted in total about 100 days.
Spectral coverage simultaneous to HST-COS was provided instead by both {\it Chandra}-LETGS \citep{ebrero11} and Swift-XRT \citep{med11}. We used the average SED recorded, 20--60 days before the HST-COS observation, by the XMM-{\it Newton}\ instruments. The choice of SED is very important in the BLR modeling, as different lines
respond on different time scales to the continuum variations \citep{korista00,peterson04}. Reverberation mapping studies of Mrk~509\ report that the delay of the
\ifmmode {\rm H}\beta \else H$\beta$\fi\ with respect to the continuum is very long \citep[about 80 days for \ifmmode {\rm H}\beta \else H$\beta$\fi,][]{carone96, peterson04}. However, higher ionization lines respond faster to
the continuum variations. Taking as a reference the average \ifmmode {\rm H}\beta \else H$\beta$\fi/\ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ delay ratio for NGC~5548 \citep{peterson04}, for which, contrary to Mrk~509, a large set of line measurements is available, we obtain that
the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ line in Mrk~509\ should roughly respond in 40 days. A similar (but shorter) time delay should apply to the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ line \citep{korista00}.
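The scaling above is simple arithmetic; in the sketch below the H$\beta$/\ion{C}{iv} delay ratio of $\sim$2 is an assumption carried over from NGC~5548, not a measurement for Mrk~509:

```python
# Back-of-the-envelope delay scaling used in the text. The Hbeta/C IV
# delay ratio (~2) is an assumption taken from NGC 5548 (Peterson et
# al. 2004) and transferred to Mrk 509.
hbeta_delay_days = 80.0      # Mrk 509 Hbeta delay (Carone et al. 1996)
hbeta_to_civ_ratio = 2.0     # assumed NGC 5548-like ratio
civ_delay_days = hbeta_delay_days / hbeta_to_civ_ratio
print(civ_delay_days)   # 40.0 days, within the XMM-Newton campaign window
```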
This delay falls in the time interval covered by the XMM-{\it Newton}\ data. Therefore our choice of SED should be appropriate for the modeling of at least the main UV lines. Variability of the X-ray broad
lines has been reported on years-long time scales \citep{costantini10}, however no short-term studies are available. We expect that the X-ray broad lines should respond promptly to the continuum variations, as they may be located up to
three times closer to the black hole with respect to the UV lines (C07). During the XMM-{\it Newton}\ campaign the flux changed at most 30\%, with a minimal change in spectral shape
(Sect.~\ref{par:loc_modeling}). The SED we used should therefore represent what the BLR gas sees for the X-ray band.
However, for the optical lines the SED we used might be too luminous, as we observed an increase in luminosity of about 30\% during the XMM-{\it Newton}\ campaign,
and, as seen above, the time delay of the optical lines may be large.
\subsection{The UV-optical emitting region}
The LOC model has been extensively used to model the UV and optical lines of AGN \citep[e.g.][]{korista97a}.
In this study we find that a single radial distribution of the gas over the whole range of radii, applied to the UV band, would have a slope $\gamma\sim1$, as
prescribed by the standard LOC model (Table~\ref{t:1comp}). The covering factor is unconstrained as it hits the lower limit that we imposed on this parameter.
As in the case of Mrk~279 (C07), the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ line is a systematic outlier. This line may obey mechanisms other than pure gravitation
(e.g. inflows/outflows) or may arise in a geometrically different region than e.g. the optical lines \citep[e.g.][ and references therein]{goad12}. Finally, \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ and \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ are found in some sources to
respond on a slightly different time scale to the continuum variation. In the case of NGC~5548 this difference in response is of the order of 20 days \citep[][ Sect.~\ref{par:abundances}]{korista00}. This may account for some of the
mismatch between the two lines in our fit.
As tested above (Sect.~\ref{par:ext_blr}), extinction in the BLR of Mrk~509\ must be negligible, therefore the discrepancy with the model cannot be ascribed to dust
in the emitting region. The ionization of the BLR follows the rules of photoionization. In particular for a given UV-emitting ion \citep[e.g. \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi, \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi, O\,{\sc vi}, as detailed in ][]{korista97a},
the ionization parameter remains constant throughout the region (dashed lines in Fig.~\ref{f:contour}).
Note that for lower ionization lines (namely, the Balmer
lines, Fig.~\ref{f:contour}, right panel), density effects come into play besides pure recombination \citep{kk79,osterbrock06} and the ionization parameter does not follow the emission contour
\citep{korista97a}.
This model does not require a universal ionization parameter, because of the assumption of the stratified nature of the gas. A pressure confined gas model, which may also allow for a
range of ionization parameters in a stratified medium, would also predict, given a bolometric luminosity, a gas hydrogen density as a function of radius \citep[eq 21 in][]{baskin14}. This prediction is
drawn in Fig.~\ref{f:contour} (magenta solid line), using $L_{bol}\sim3L_{1350\,\AA}$ \citep{kaspi05}, where $L_{1350\,\AA}$ has been extrapolated from the average SED of Mrk~509
\citep{kaastra1}. This density prediction is not too far off; however, it overestimates the optimal emitting-region density
of the higher ionization ions (an example is given in the left panel of Fig.~\ref{f:contour}), while it would match the Balmer lines emitting region (right panel).
\begin{figure}
\hspace{-0.8cm}
\hbox{
{\includegraphics[angle=90,height=4cm,width=5cm]{fig5a.ps}}
\hspace{-0.2cm}
{\includegraphics[angle=90,height=4cm,width=5cm]{fig5b.ps}}
}
\caption{\label{f:contour} Contour profiles of O\,{\sc vi}\ and \ifmmode {\rm H}\alpha \else H$\alpha$\fi\ as a function of density and distance, here using a radial slope $\gamma$ of 0.10. The dashed lines indicate constant
ionization parameters, as detailed in the legend. The solid magenta line follows the density prediction of the pressure-confined emission model of \citet{baskin14}.}
\end{figure}
\subsection{The size and gas distribution of the BLR}\label{par:geometry}
Several arguments point to a natural outer boundary for the BLR which should be intuitively given by the dust sublimation radius \citep[][]{suganuma06,landt14}.
For Mrk~509, this radius corresponds to $3.6\times10^{18}$\,cm \citep{mn12}.
The maximum radius of our LOC model is $3\times10^{18}$\,cm. An expansion of the BLR outer radius to $7.6\times10^{18}$\,cm does not improve the fit. This is a natural
consequence of the LOC model construction. For radial distributions with slopes $\gamma\raisebox{-.5ex}{$\;\stackrel{>}{\sim}\;$} 1$ the line emissivity of
some major lines (O\,{\sc vi}, \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi) already drops at $10^{18}$\,cm \citep[C07,][]{baskin14}.
Therefore our fit is consistent with a confined BLR region, possibly within the sublimation radius.
The radius of the BLR has been found to scale with the UV luminosity. If we take the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ line as a reference, $R_{\rm
BLR}=2\times10^{16}h_0L_{42}^{0.5}$(\ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi), where $h_0$ is the Hubble constant in units of 100\,km\,s$^{-1}$\,Mpc$^{-1}$ and $L_{42}$ is the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ luminosity in units of $10^{42}$\,erg\,s$^{-1}$
\citep{peterson97}. For Mrk~509, the radius of the BLR based on this equation is $\sim2.6\times10^{17}$\,cm. Using instead the known relation between the size of the \ifmmode {\rm H}\beta \else H$\beta$\fi\ emitting region
and the luminosity at 5100\AA, we obtain, for Mrk~509, $R_{\ifmmode {\rm H}\beta \else H$\beta$\fi}\sim1.2\times10^{17}$\,cm \citep{bentz13}.\\
In our fit the location of the UV emitting lines is consistent with these estimates, as, although UV lines are efficiently emitted in a large range of radii
(Fig.~\ref{f:radius} and C07), a large fraction of the UV line luminosity could come from radii $\geq 10^{17}$\,cm (Model 2,3 in Table~\ref{t:total}, Fig.~\ref{f:regions}).
Assuming Keplerian motion, the FWHM of our lines implies that the very-broad lines (FWHM$\sim$9000-10,000\,km\,s$^{-1}$) are located at approximately $2.5-5\times10^{16}$\,cm, depending on the mass
of the black hole: $1.43\times10^{8}$\,M$_{\odot}$ \citep{peterson04} or $3\times10^{8}$\,M$_{\odot}$ \citep{med11}. For the broad lines (FWHM$\sim$4000-5000\,km\,s$^{-1}$) the distance would then be
$1.3-2.5\times10^{17}$\,cm, consistent with our results for the UV-optical component. Finally for the intermediate lines (FWHM$\sim$1000-3000\,km\,s$^{-1}$) the calculated distance is $2-4\times10^{18}$\,cm.
The location of the line-emitting gas is stratified; therefore, these single-radius estimates are only taken as a reference. The very-broad and the broad lines are well within the estimated radius for
the BLR. The so-called intermediate line region could possibly bridge the BLR and the NLR \citep{baldwin97}.
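The single-radius estimates above can be cross-checked with a short script. This is our own illustrative sketch, not the paper's calculation: it assumes $v \approx$ FWHM with no virial form factor and $h_0=0.7$, neither of which is specified in the text, so the numbers are indicative only.

```python
import math

# Hypothetical cross-check of the single-radius estimates quoted in the text.
# Assumptions (not from the paper): v ~ FWHM (no virial factor), h0 = 0.7.
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g

def keplerian_radius(m_bh_msun, fwhm_kms):
    """Radius (cm) at which the Keplerian velocity equals the line FWHM."""
    v = fwhm_kms * 1e5                       # km/s -> cm/s
    return G * m_bh_msun * M_SUN / v**2

# The two black-hole mass estimates quoted in the text (in M_sun).
for label, fwhm in [("very-broad", 9500.0), ("broad", 4500.0)]:
    r_lo = keplerian_radius(1.43e8, fwhm)
    r_hi = keplerian_radius(3.0e8, fwhm)
    print(f"{label:10s} FWHM ~ {fwhm:.0f} km/s -> r ~ {r_lo:.1e}-{r_hi:.1e} cm")

# C IV luminosity implied by R_BLR = 2e16 h0 L42^0.5 cm and R_BLR ~ 2.6e17 cm.
h0 = 0.7
l_civ_implied = (2.6e17 / (2e16 * h0)) ** 2 * 1e42   # erg/s
print(f"implied L(C IV) ~ {l_civ_implied:.1e} erg/s")
```

With these assumptions the very-broad lines land at a few $\times10^{16}$ cm and the broad lines near $10^{17}$ cm, in line with the values quoted above.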
In interpreting the BLR emission, we tested a two-component model, characterized not only by different radial distributions and covering factors,
but also by different physical sizes and inner/outer radii of the emitting plasmas.
Our fits are not completely satisfactory, as important outliers, such as \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi, are present.
However, the best fit points to the interesting possibility that the optical and part of the UV emission originates at larger radii
(starting at $3\times10^{17}$\,cm), while the X-ray and
some fraction of the UV emitting regions would have an inner radius smaller than $6\times10^{14}$\,cm (as also found in C07) and a larger extension, up
to about the beginning of the optical BLR (Sect.~\ref{par:two}).
This would point to a scenario in which the optical lines, including the \ifmmode {\rm H}\beta \else H$\beta$\fi, would come from the outer region of the BLR.
Such a span in distance between the optical and the X-ray lines would also imply for the latter a faster response time to any continuum variation.
Such an effect has not been systematically studied, although strong flux variations of the O\,{\sc vii}\ broad line have been observed before \citep{costantini10}.
The inability to find a good fit with the present model,
which assumes a simple plane-parallel geometry, could suggest a more complex geometry.
Recently, for instance, an inflated geometry \citep["bowl geometry",][]{goad12} for the outer region, possibly confined by a dusty torus, has been
suggested on the basis of different approaches \citep{goad12,pancoast12,gg13}.
The covering factor was set in our fits to be in the range 0.05--0.6. The lower limit has been chosen following
early studies on the relation between the equivalent width of the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ and the
covering
factor \citep[0.05--0.1, e.g.][]{cf88}. However, subsequent studies, using among other methods the LOC model technique,
have pointed out that the covering factor can be larger: from 0.30 \citep[e.g.][and references therein]{maiolino01} up to 0.5--0.6 \citep{korista00}.
The covering factor here is the fraction of the solid angle covered by the gas as seen from the source. This is equal to the observer's line-of-sight covering
factor only if a spherical distribution of the gas is assumed.
A more flattened geometry would then
reconcile a large covering factor with the fact that absorption lines from the broad line region are in general not observed in the optical-UV band.
In our fits the covering factor is unconstrained. However, large covering factors have been preferentially found when a
two-component model was applied, especially when the inner and outer radii were allowed to vary for both components.
The measured high covering fraction, necessary to explain the line luminosities of the two components, would then point to a gas with non-spherical geometry.
As these two components are along our line of sight, they may lie one
behind the other; therefore, the sum of the two $C_V$ can well exceed unity, as long as the individual covering factors do not entirely cover the source (i.e. $C_V<1$).
Despite the extensive exploration of the impact of different parameters on the modeling, our analysis also underlines that a simple
parameterization may be inadequate to explain the complexity of
the BLR. Reasons for not reaching a better fit include minor effects such as the possibly different responses
of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ and \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ to continuum variations, the non-simultaneity of the FUSE data, and inhomogeneous
information on the broad-band line profiles. The $C_V$ may not be a simple step function; rather, the clouds/gas layers may experience a
differential covering factor, for instance as a function of distance or line ionization.
A major effect would be the complex dynamics and geometry of the BLR, which need more sophisticated models to be explained.
\subsection{The iron line at 6.4 keV}
In this paper we include the 6.4\,keV {Fe\,K$\alpha$}\ line, observed simultaneously with the other soft X-ray, UV and optical lines.
The narrow and non-variable component, probably produced in distant regions, was not considered in the fit. We find that the BLR contribution to the broad {Fe\,K$\alpha$}\ line
component is around 30\% when a two-component model is used. The emission would occur over a range of distances from the source, although at
small radii (log$r\raisebox{-.5ex}{$\;\stackrel{>}{\sim}\;$}$14.75\,cm) the emission is enhanced (Fig.~\ref{f:radius}).
Note that, fortuitously, a single-component fit based on the optical lines would provide a perfect fit to the {Fe\,K$\alpha$}\ line
(Fig.~\ref{f:bands}). However, such a gas would produce both UV and soft X-ray line fluxes at least a factor of 6 larger than observed.
A modest contribution ($\sim17\%$) of the BLR to the iron line has also been reported in Mrk~279, using non-simultaneous UV and X-ray data \citep[][]{costantini10}.
\section{Conclusions}\label{par:conclusion}
In this paper we attempted to find a global explanation of the structure of the gas emitting broad lines in Mrk~509, from the optical to the X-ray band, using a simple parametrization of the BLR.
This study is possible thanks to the simultaneous and long observations of XMM-{\it Newton}\ and HST-COS.
We find that lines with FWHM$>$4000\,km\,s$^{-1}$\ contribute the bulk of the BLR emission.
A two-component LOC model provides a statistically better, but not conclusive, description of the data.
The two components are characterized by
similar radial emissivity distributions ($\gamma\sim1.10-1.15$) but different sizes and distances from the central source.
The X-rays and part of the UV radiation come from an inner and extended region
($r\sim5\times10^{14}-3\times10^{17}$\,cm), while the optical and part of the UV gas would be located at the outskirts of the BLR ($r\sim3\times10^{17}-3\times10^{18}$\,cm). This picture
appears to be in agreement with recent results
on the geometry of the BLR, locating the \ifmmode {\rm H}\beta \else H$\beta$\fi\ line away from the ionizing source. However, more sophisticated parameterizations are needed to reach a definitive answer.
The broader {Fe\,K$\alpha$}\ line cannot be completely accounted for by emission from the BLR gas; the contribution of the BLR is around 30\% for this line.
\begin{acknowledgements}
The Netherlands Institute for Space Research is supported
financially by NWO, the Netherlands Organization for Scientific Research. XMM-{\it Newton}\ is an ESA science mission with instruments and contributions directly funded by ESA
Member States and the USA (NASA). We thank the referee, E.~Behar, for his useful comments. We also thank L. di Gesu for commenting on the manuscript and G. Ferland and F. Annibali for discussions on
extinction in the BLR and host galaxy. GP acknowledges support of the Bundsministerium f\"ur Wirtschaft und Technologie/Deutsches Zentrum f\"ur Luft-und Raumfahrt (BMWI/DLR, FKZ 50
OR 1408). P.-O.P. and SB acknowledge financial support from the CNES and franco-italian CNRS/INAF PICS. G.K. was supported by NASA through grants for
HST program number 12022 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
\end{acknowledgements}
\section{Introduction}
The experimental data suggest that the see-saw scale is much lower than
the Planck scale or even the GUT scale. It is therefore natural to think
that this scale is related to the breaking of some symmetry.
The simplest symmetry is the $B-L$ symmetry.
In principle, the $B-L$ symmetry can be a global or a local symmetry.
If we take it to be a global symmetry, its spontaneous breaking leads to a pseudo Nambu-Goldstone
boson, the majoron. Since several experiments place severe constraints on the majoron,
it is natural to promote it to a local gauge symmetry, as in a higher-rank GUT such as
${\rm SO}(10)$.
The spontaneous breaking of the $B-L$ symmetry can be achieved through
the vacuum expectation value (VEV) of a scalar multiplet $\Delta_1$
which carries $B-L=-2$. For anomaly cancellation and to preserve low-energy
supersymmetry, its counterpart $\Delta_2$, which has $B-L=+2$, has to be included
in the theory. After the spontaneous breaking of this $B-L$ symmetry, a massive
gauge boson, $Z_{B-L}$, remains.
In this paper we consider the possibility of breaking the ${\rm U}(1)_{\rm B-L}$ symmetry through
radiative corrections to the soft mass squared responsible for the VEV of
the ${\rm U}(1)_{\rm B-L}$ breaking field, in analogy with radiative electroweak symmetry breaking (RESB) in the MSSM.
Here we explore such a possibility by considering the renormalization group equations (RGEs)
of the soft mass terms for the $B-L$ breaking sector.
The resultant $B-L$ breaking scale is found to be around $v_{B-L} \simeq10^{5}$ GeV, which is
quite appealing if one wishes to incorporate the thermal leptogenesis scenario
in SUSY models, because the gravitino problem puts a severe constraint
on the reheating temperature, $T_R \lesssim 10^6$ GeV,
for a gravitino mass of order $m_{3/2} \lesssim 100$ GeV.
Once we incorporate the ${\rm U}(1)_{\rm B-L}$ gauge symmetry in SUSY models,
an extra ${\rm U}(1)$ gaugino, $\tilde{Z}_{B-L}$, appears in addition to the extra gauge boson $Z_{B-L}$.
It has recently been noticed that if such an extra gaugino exists, it can mediate SUSY breaking,
inducing the gaugino masses for each SM gauge group at the two-loop level,
while the scalar soft masses are generated at the one-loop level.
Z-prime mediated SUSY breaking basically uses an extra ${\rm U}(1)^\prime$ vector multiplet
as the field which communicates the SUSY breaking source to the visible sector.
This setup is more appealing and economical than gauge-mediated
SUSY breaking.
In this mediation mechanism, it is not necessary to introduce an additional
sector as a `messenger field'; the mediator is implemented in the theory just as the gauge
multiplet associated with an extra ${\rm U}(1)^\prime$ gauge symmetry.
We take such an extra ${\rm U}(1)^\prime$ to be the ${\rm U}(1)_{\rm B-L}$ symmetry,
and then we can identify the messenger scale with the $B-L$ symmetry
breaking scale.
\section{Radiative B-L breaking}
The interactions between Higgs and matter superfields are
described by the superpotential %
\begin{eqnarray}%
W &=& (Y_u)_{ij} U^c_i Q_j H_2 + (Y_d)_{ij} D^c_i Q_j H_1
+ (Y_e)_{ij} E^c_i L_j H_1
\nonumber\\
&+& \mu H_1 H_2
+ (Y_\nu)_{ij} N^c_i L_j H_2+ f_{ij} \Delta_1 N^c_i N^c_j
\nonumber\\
&+& \mu' \Delta_1 \Delta_2 \;,
\label{superpot}
\end{eqnarray}
where the indices $i$, $j$ run over the three generations, and $H_2$ ($\equiv H_u$) and $H_1$ ($\equiv H_d$)
denote the up-type and down-type MSSM Higgs doublets, respectively.
After developing the VEV of the $B-L$ breaking field,
$\left<\Delta_1 \right> = v_{B-L}$,
the right-handed neutrino obtains the Majorana mass as $M_N = f v_{B-L}$.
This in turn gives light neutrino masses through the see-saw mechanism:
$M_\nu = m_D M_N^{-1} m_D^T$, where $m_D = Y_\nu v~(v=174\,{\rm GeV})$
is the Dirac neutrino mass matrix.
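As a rough numerical illustration of this see-saw relation, consider a one-generation sketch (our own illustration, not from the paper): taking an atmospheric-scale $m_\nu \sim 0.05$ eV as an assumed target, and the values $f=5$, $v_{B-L}=10^5$ GeV obtained in the numerical section, the required Dirac Yukawa follows from $m_\nu = (Y_\nu v)^2/M_N$.

```python
import math

# One-generation see-saw estimate: m_nu ~ (Y_nu v)^2 / M_N.
# Assumed input for illustration: m_nu ~ 0.05 eV; f = 5 and
# v_BL = 1e5 GeV are the values obtained in the numerical section.
v = 174.0          # GeV, electroweak VEV (as in the text)
f = 5.0
v_bl = 1.0e5       # GeV, B-L breaking VEV
m_n = f * v_bl     # right-handed neutrino mass, GeV
m_nu = 0.05e-9     # GeV, i.e. 0.05 eV

y_nu = math.sqrt(m_nu * m_n) / v   # Dirac Yukawa required by the see-saw
print(f"M_N = {m_n:.1e} GeV requires Y_nu ~ {y_nu:.1e}")
```

A low see-saw scale of $10^5$ GeV thus requires a small but not unnaturally tiny Dirac Yukawa, $Y_\nu \sim 3\times10^{-5}$.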
The soft SUSY-breaking terms which are added to the MSSM soft mass terms are given by
\begin{eqnarray}
- \Delta{\cal L}_{\rm soft}
&=& ( m^2_N)_{ij} \tilde{N}_i^{\dagger} \tilde{N}_j
+m_{\Delta_1}^2 |\Delta_1|^2
+ m_{\Delta_2}^2 |\Delta_2|^2
\nonumber\\
&+&
\left((A_{\nu})_{ij} \tilde{N}_i^{\dagger} \tilde{\ell}_j H_u + h.c. \right)
\nonumber\\
&+& (A_f)_{ij} \Delta_1 \tilde{N}_i \tilde{N}_j +h.c.
\nonumber\\
&+& \frac{1}{2} M_{\tilde{Z}_{B-L}} \tilde{Z}_{B-L} \tilde{Z}_{B-L} +h.c.
\label{softterms}
\end{eqnarray}
From Eqs. (\ref{superpot}) and (\ref{softterms}),
the scalar potential relevant for the $B-L$ breaking sector can be written as
\begin{eqnarray}
V(\Delta_1,\Delta_2)
&=& \left( |\mu'|^2 + m_{\Delta_1}^2 \right) |\Delta_1|^2
+ \left( |\mu'|^2 + m_{\Delta_2}^2 \right) |\Delta_2|^2
\nonumber\\
&+& \frac{1}{2} g_{B-L}^2 \left(|\Delta_1|^2 - |\Delta_2|^2 \right)^2 \;,
\end{eqnarray}
where we have neglected the Yukawa coupling contributions to the scalar potential.
The VEV of the $B-L$ breaking field $\Delta_1$ is determined to be
\begin{equation}
|\left< \Delta_1 \right>|^2 =
- \frac{2}{g_{B-L}^2} \left( |\mu'|^2 + m_{\Delta_1}^2 \right) \;.
\end{equation}
\section{Z-prime mediation of SUSY breaking}
Since all the chiral superfields in the visible sector are
charged under ${\rm U}(1)_{\rm B-L}$, all the corresponding scalars receive soft
mass terms at the one-loop level, of order
\begin{eqnarray}
\label{eqn:scalarmass}
m^2_{\tilde{q}_i} &=& \frac{8}{9} \frac{\alpha_{B-L}}{4 \pi} M_{\tilde{Z}_{B-L}}^2
\ln\left(\frac{\Lambda_S}{M_{\tilde{Z}_{B-L}}} \right),
\nonumber\\
m^2_{\tilde{\ell}_i} &=& 8\, \frac{\alpha_{B-L}}{4 \pi} M_{\tilde{Z}_{B-L}}^2
\ln\left(\frac{\Lambda_S}{M_{\tilde{Z}_{B-L}}} \right),
\end{eqnarray}
where $\alpha_{B-L}=g_{B-L}^2/(4\pi)$.
The MSSM gaugino masses, however,
can only be generated at the two-loop level, since they do not couple directly to the ${\rm U}(1)_{\rm B-L}$ sector,
\begin{eqnarray}
\label{eqn:gauginomass}
M_a
&=& 4 c_a\, \frac{\alpha_{B-L}}{4 \pi} \frac{\alpha_a}{4 \pi} M_{\tilde{Z}_{B-L}}
\ln\left(\frac{\Lambda_S}{M_{\tilde{Z}_{B-L}}} \right) \;,
\end{eqnarray}
where $(c_1, c_2, c_3) = (\frac{92}{15}, 4, \frac{4}{3})$.
Since these gaugino masses are proportional to $c_a$, we expect the
gluino to be the lightest gaugino at $\mu=M_{\tilde{Z}_{B-L}}$,
so the resultant gaugino mass spectrum is more compressed than in other mediation
mechanisms.
From the discussion above, we see that the gauginos are considerably lighter
than the sfermions. Taking $M_a \simeq 100$ GeV, we find
\begin{equation}
M_{\tilde{Z}_{B-L}} \ln\left(\frac{\Lambda_S}{M_{\tilde{Z}_{B-L}}} \right)
\simeq 10^4 ~~\mathrm{TeV}
\end{equation}
and
\begin{equation}
{m}_{\tilde{f}} \simeq 10^{-1} M_{\tilde{Z}_{B-L}} \simeq 10^{5}~~\mathrm{GeV}.
\end{equation}
Hence, in this scheme of Z-prime mediation, all the sfermion masses become
very heavy, around $10^5$ GeV, while the gauginos are kept at around the weak
scale, $M_a \simeq 100$ GeV, which can in principle provide a natural candidate for the dark matter.
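The hierarchy above can be checked by evaluating Eqs.~(\ref{eqn:scalarmass}) and (\ref{eqn:gauginomass}) directly. The following is a rough sketch with $g_{B-L}=0.5$ and the inputs used later in the numerical section ($M_{\tilde{Z}_{B-L}}=8.7\times10^5$ GeV, $\Lambda_S=10^9$ GeV); the value $\alpha_3\simeq0.1$ is our own illustrative assumption, and RG running down to the weak scale is ignored, so only orders of magnitude are meaningful.

```python
import math

# Messenger-scale soft masses from Z'-mediation (one loop for scalars,
# two loops for MSSM gauginos).  alpha_3 ~ 0.1 is an assumed value.
g_bl = 0.5
alpha_bl = g_bl**2 / (4 * math.pi)
m_zprimino = 8.7e5                     # GeV, U(1)_{B-L} gaugino mass
lam_s = 1.0e9                          # GeV, SUSY breaking scale
log_factor = math.log(lam_s / m_zprimino)

# Slepton soft mass, Eq. (scalarmass) with coefficient 8
m_slepton = math.sqrt(8 * alpha_bl / (4 * math.pi)
                      * m_zprimino**2 * log_factor)

# Gluino mass, Eq. (gauginomass) with c_3 = 4/3 and assumed alpha_3
alpha_3 = 0.1
m_gluino = (4 * (4.0 / 3.0) * alpha_bl / (4 * math.pi)
            * alpha_3 / (4 * math.pi) * m_zprimino * log_factor)

print(f"m_slepton ~ {m_slepton:.1e} GeV, M_3 ~ {m_gluino:.1e} GeV")
```

This reproduces the qualitative pattern: sfermions of order $10^5$ GeV and gauginos within an order of magnitude of the weak scale.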
In our choice of parameters, the gravitino mass is given by
\begin{equation}
m_{3/2} = \frac{\Lambda_S^2}{\sqrt{3} M_{\rm Pl}} = \{ 24\, {\rm keV},~2.4 \,{\rm MeV},~ 240 \,{\rm MeV}\} \;,
\end{equation}
for $\Lambda_S = \{10^7,\, 10^8, \,10^9\}$ GeV.
Hence the gravity-mediation contribution to the gaugino masses is much suppressed,
and is negligible compared to the Z-prime mediated contribution.
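The quoted gravitino masses follow directly from the formula above, reading $M_{\rm Pl}$ as the reduced Planck mass $\simeq2.4\times10^{18}$ GeV (our interpretation of the normalization, since this choice reproduces the quoted values):

```python
import math

# m_{3/2} = Lambda_S^2 / (sqrt(3) M_Pl), with the reduced Planck mass.
M_PL = 2.4e18   # GeV (assumed reduced Planck mass)

gravitino = {lam: lam**2 / (math.sqrt(3) * M_PL)
             for lam in (1e7, 1e8, 1e9)}
for lam, m32 in gravitino.items():
    print(f"Lambda_S = {lam:.0e} GeV -> m_3/2 = {m32:.1e} GeV")
```

This reproduces the quoted 24 keV, 2.4 MeV, and 240 MeV.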
\section{Numerical evaluations}
Now we consider the RGEs and analyze
the running of the scalar masses $m_{\Delta_1}^2$ and
$m_{\Delta_2}^2$. The key point for implementing the radiative $B-L$
symmetry breaking is that the scalar potential $V(\Delta_1,\Delta_2)$
receives substantial radiative corrections. In particular, a
negative (mass)$^2$ would trigger the $B-L$ symmetry breaking.
We argue that the masses of Higgs fields $\Delta_1$ and
$\Delta_2$ run differently in the way that $m^2_{\Delta_1}$ can be
negative whereas $m^2_{\Delta_2}$ remains positive.
In the numerical analysis we take as inputs all the soft SUSY-breaking parameters
to be zero at the SUSY breaking scale, which
is varied in the range $\Lambda_S = 10^7 - 10^9$ GeV,
\begin{equation}
\tilde{A}_A = 0, ~ m_{\tilde{f}} = 0,~M_a = 0
\end{equation}
and use the following inputs
\begin{equation}
M_{\tilde{Z}_{B-L}}= 8.7 \times 10^5 \,{\rm GeV}\;, ~ f = 4,\ 5,\ 6, \ 7, ~g_{B-L} = 0.5 \;.
\end{equation}
Note that $\tilde{Z}_{B-L}$ is decoupled below the mass scale $M_{\tilde{Z}_{B-L}}$.
Using these inputs, in Fig.~{\ref{Fig1}}, we plot the evolution of the gaugino
masses $M_{1,2,3}$ from the SUSY breaking scale to the weak scale.
In this plot, we fixed the SUSY breaking scale as
$\Lambda_S = 10^9$ GeV.
It is very interesting that the gluino is the lightest gaugino at the ${\tilde{Z}}_{B-L}$ scale,
which is very different from most other models of SUSY breaking mediation.
For that reason, the gluino remains relatively light at the weak scale,
and an almost compressed mass spectrum for the gaugino sector can be
realized in this scenario, which is very interesting in view of the LHC.
The evolution of the soft mass squared for the field $\Delta_1$
is plotted in Fig.~{\ref{Fig4}} for a given SUSY breaking scale of $\Lambda_S = 10^9$ GeV.
In Fig.~{\ref{Fig4}}, from top to the bottom curves, we varied the value of $f$ as $f=4,\, 5,\, 6,\, 7$.
For example, for the case of $f=5$, the soft mass squared for the field $\Delta_1$
crosses zero at the scale $10^5$ GeV and becomes negative, which is nothing but the realization
of the radiative breaking of the ${\rm U}(1)_{B-L}$ gauge symmetry.
The running behavior in Fig.~{\ref{Fig4}} can be understood in the following way.
Starting from the high-energy scale, the soft mass squared first increases
because of the gauge coupling contributions; it then decreases once the Yukawa
coupling contributions come to dominate over the gauge contributions
at some scale.
Below the mass scale of $\tilde{Z}_{B-L}$, where the gaugino decouples from the RGEs,
only the Yukawa coupling contributions to
the soft mass squared remain, and it rapidly decreases and crosses zero.
Therefore, the radiative $B-L$ symmetry breaking can naturally be realized.
The see-saw scale is found to be at $v_{B-L} = 10^5$ GeV; hence
the right-handed neutrino obtains a mass of $M_N = f v_{B-L} = 5 \times 10^5$ GeV (for $f=5$).
This scale of the right-handed neutrino mass is favorable for thermal leptogenesis to be viable
in supersymmetric models with gravity mediation.
\begin{figure}[h]
\includegraphics[width=.8\linewidth]{Fig1c.eps}
\caption{
The evolution of the gaugino masses from the SUSY breaking scale
to the $B-L$ breaking scale.
The red line shows the running of the gluino mass, the green line is
the running of the ${\rm SU}(2)$ gaugino mass, and the blue corresponds to the running
of the ${\rm U}(1)_Y$ gaugino mass.
}
\label{Fig1}
\vspace{1cm}
\end{figure}
\begin{figure}[h]
\includegraphics[width=.8\linewidth]{Fig4.eps}
\caption{
The evolution of the soft mass squared for the field $\Delta_1$
from the SUSY breaking scale to the $B-L$ breaking scale.
In this plot, we take the SUSY breaking scale as $\Lambda_S = 10^9$ GeV.
From top to the bottom curves, we varied the value of $f$ as $f=4,\ 5,\ 6,\ 7$.
}
\label{Fig4}
\vspace{1cm}
\end{figure}
\section{Summary}
We have shown that a mechanism of radiative $B-L$ symmetry breaking
can work in analogy with RESB.
The breaking scale of $B-L$ symmetry is related to the neutrino masses
through the see-saw mechanism.
Once we incorporate the ${\rm U}(1)_{\rm B-L}$ gauge symmetry in SUSY models,
the ${\rm U}(1)_{\rm B-L}$ gaugino, $\tilde{Z}_{B-L}$ can provide all the soft masses
in the MSSM.
We thus find a link between the neutrino mass (more precisely, the see-saw or $B-L$ scale
of order $10^{5}$ GeV) and the Z-prime mediated SUSY breaking scale.
In this scheme of Z-prime mediation, all the sfermion masses become
very heavy, around $10^5$ GeV, while the gauginos are kept at around the weak
scale, $M_a \simeq 100$ GeV.
It is also very interesting that the gluino is the lightest gaugino at the $\tilde{Z}_{B-L}$ scale,
which is very different from most other models of SUSY breaking mediation.
For that reason, the gluino remains relatively light at the weak scale,
and an almost compressed mass spectrum for the gaugino sector can be
realized in this scenario, which is very interesting in view of the LHC.
\section*{Acknowledgments}
We would like to thank the organizers for providing me
with an opportunity to talk at the conference.
T. Kikuchi would like to thank K.S. Babu
for his hospitality at Oklahoma State University.
The work of T.Kikuchi is supported by the Research
Fellowship of the Japan Society for the Promotion of Science (\#1911329).
\@startsection{section}{1}{\z@{Introduction}
Starburst galaxies are spirals (sometimes barred) in which gas is converted to stars at
rates that could not be sustained over typical galaxy lifetimes.
Such a phase is thought to represent a significant, if relatively brief,
stage in galactic evolution lasting about 10$^8$~years
(Rieke et al. 1980).
Starbursts tend to be characterized by copious far-infrared (FIR) radiation
from warm interstellar dust heated by massive young stars (Soifer et al. 1986),
as well as by enhanced radio and X-ray emission.
X-ray emission in starbursts has been attributed to individual point sources,
within the central 10$^3$~pc for nuclear starbursts,
such as low-mass X-ray binaries and young supernovae,
and to hot plasma heated by supernova explosions or strong stellar winds from
young massive stars.
Indeed, such hot plasmas have been termed ``superwinds'' (Heckman et al. 1990),
arising when the supernova rate (e.g. $\sim$0.1 yr$^{-1}$ for NGC253,
Antonucci \& Ulvestad 1988) and the mass of the gas involved ($\sim$10$^{8}$M$_{\odot}$)
are high enough to create a shock-heated gas cavity within the galaxy.
Such a cavity can expand, break out, and release the hot gas
as a superwind. Superwind emission
has been suggested as an explanation for the X-ray plume,
discovered by {\it Einstein}, on the northern side of NGC253 (Fabbiano 1988).
We present here {\it BeppoSAX} observations of a starburst galaxy, NGC253,
that, for the first time, reveal the Fe K line at 6.7 keV.
The detection of this line is fundamental because it constrains the
origin of the X-ray emission, and provides a diagnostic
for plasma temperatures higher than a few keV, and for elemental abundances.
Studies of starburst galaxies are interesting as they help in understanding
the physical processes behind
the high star formation rate in the nucleus and in the search for a possible link
between normal galaxies and AGN.
\@startsection{section}{1}{\z@{NGC253}
NGC253 (see table 1 for data) is a nearby edge-on late-type barred spiral
($i=78^{\circ}.5$, Pence 1981) and represents one
of the archetypical starburst galaxies.
It is one of the brightest infrared sources in the extragalactic sky,
with a 100$\mu$m luminosity of $3.04 \times 10^{10}$~L$_{\odot}$
(Rice et al. 1988), and has been studied extensively at high energies
(Fabbiano \& Trinchieri 1984; Ohashi et al. 1990; Ptak et al. 1997),
showing a high degree of spectral and spatial complexity at X-ray wavelengths.
\begin{table}[hbt]
\begin{center}
\begin{tabular}{ccccc}
\multicolumn{5}{c}{Table 1: Relevant data of NGC253} \\
\hline
Dec & RA & D$^a$ & d$^b$ & m$_{V}$ \\
\hline
-25d17m18s & 00h47m33s & $\sim$3Mpc & 10'& 8.04 \\
\hline
\end{tabular}
\end{center}
Note: $^a$ see Tully 1988 \\
$^{}$ $^{}$ $^{}$ $^{}$ $^{}$ $^{}$ $^{}$ $^b$ MECS observation
\end{table}
\@startsection{section}{1}{\z@{The {\it BeppoSAX} observation}
NGC253 was observed from November 29 to December 2, 1996 (see table 2).
In the central 4~arcmin region, we obtained
a LECS count rate of $3.93 \times 10^{-2}$ cts s$^{-1}$ and a MECS
count rate of $9.23 \times 10^{-2}$ cts s$^{-1}$. Data are characterized,
in the 0.1-10 keV band, by S/N$>$3.
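For reference, the quoted count rates and exposures translate into total source counts of roughly $2\times10^3$ (LECS) and $10^4$ (MECS); a one-line check (our own arithmetic):

```python
# Total counts implied by the quoted count rates and exposure times (Table 2).
lecs_counts = 3.93e-2 * 54689    # LECS: rate (cts/s) x exposure (s)
mecs_counts = 9.23e-2 * 113403   # MECS
print(f"LECS ~ {lecs_counts:.0f} counts, MECS ~ {mecs_counts:.0f} counts")
```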
\begin{table}[hbt]
\begin{center}
\begin{tabular}{ccc}
\multicolumn{3}{c}{Table 2: Exposure time} \\
\hline
$Instrument$ & $En. range$ & $Obs. time$ \\
$ $ & $KeV$ & $sec$ \\
\hline
LECS & 0.1-4 & 54689 \\
MECS & 1.3-10 & 113403 \\
\hline
\end{tabular}
\end{center}
\end{table}
The flux observed by {\it BeppoSAX} in the 0.1-2.4 keV energy range
is $2.36 \times 10^{-12}$~erg~s$^{-1}$~cm$^{-2}$, which is
roughly a factor of two lower than the observed ROSAT flux in the
same energy range (Moran et al. 1996); this lack of agreement may
be attributable to their larger beam, a different background
subtraction technique, or both.
The observed 2-10 keV flux is
$4.9 \times 10^{-12}$~erg~s$^{-1}$ cm$^{-2}$, consistent with {\it ASCA}
results (Ptak et al. 1997) and corresponding to
a luminosity of $1.4 \times 10^{40}$~erg~s$^{-1}$.
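As a consistency check of the flux-to-luminosity conversion $L = 4\pi D^2 F$ (our own arithmetic, assuming isotropic emission): the quoted 2-10 keV luminosity corresponds to an adopted distance of about 4.9 Mpc, somewhat larger than the $\sim$3 Mpc listed in table 1.

```python
import math

# Isotropic flux-to-luminosity conversion, L = 4 pi D^2 F.
MPC_CM = 3.086e24   # cm per Mpc

def luminosity(flux_cgs, d_mpc):
    """Isotropic luminosity L = 4 pi D^2 F, in erg/s."""
    d_cm = d_mpc * MPC_CM
    return 4 * math.pi * d_cm**2 * flux_cgs

f_hard = 4.9e-12                        # 2-10 keV flux, erg s^-1 cm^-2
l_at_3mpc = luminosity(f_hard, 3.0)     # with the table 1 distance
# distance implied by the quoted L = 1.4e40 erg/s
d_implied = math.sqrt(1.4e40 / (4 * math.pi * f_hard)) / MPC_CM
print(f"L(3 Mpc) = {l_at_3mpc:.1e} erg/s; quoted L implies D = {d_implied:.1f} Mpc")
```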
\@startsection{subsection}{2}{\z@{Spatial and timing analysis}
The source is clearly extended in the {\it BeppoSAX} image in both the 0.1-2 keV and
2-10 keV band.
Analysis of the resolved emission is postponed to future work;
here we present only the analysis of the unresolved nuclear emission.
In the following, N$_{H gal}$=$1.28 \times 10^{20}$
cm$^{-2}$ (Dickey \& Lockman 1990) is adopted.
No short- or long-term variability is detected in the present data
in either energy band: this is consistent with a thermal
origin of the 2-10~keV emission, as discussed below.
\begin{table}[hbt]
\begin{center}
\begin{tabular}{cccc}
&&& \\
\multicolumn{4}{c}{Table 3: Bremsstrahlung + lines model} \\
\hline
$KT_{brem.}$ & $Element$ & $Energy$ & $EW$ \\
$keV$ & $ $ & $keV$ & $eV$ \\
\hline
&&& \\
$7.40_{-0.71}^{+0.18}$ & $Fe_{XVIII}/Ne$ & $0.95_{-0.05}^{+0.04}$
& $101_{-38}^{+49}$ \\
&&& \\
& $Si_{XIV}$ & $1.91_{-0.05}^{+0.04}$ & $70_{-29}^{+23}$ \\
&&& \\
& $S_{XV}$ & $2.42_{-0.06}^{+0.05}$ & $74_{-28}^{+34}$ \\
&&& \\
& $Fe_{XXV}$ & $6.69_{-0.07}^{+0.07}$ & $310_{-78}^{+119}$ \\
&&& \\
\hline
\end{tabular}
\end{center}
Note: The value of the $A_{LECS}/A_{MECS}$ constant used for this simultaneous fit
is $0.64_{-0.03}^{+0.04}$
\end{table}
\begin{figure}[htb]
\psfig{file=./fig3.ps,width=7cm,height=6.5cm,angle=-90}
\caption{MECS and LECS fit of a simple Bremsstrahlung model; the lines are
clearly evident on the continuum}
\label{fig:largenenough}
\end{figure}
\begin{figure}[htb]
\psfig{file=./fig5.ps,width=7cm,height=6.5cm,angle=-90}
\caption{MECS and LECS fit of a double Raymond Smith model}
\label{fig:toosmall}
\end{figure}
\@startsection{subsection}{2}{\z@{Spectral analysis}
First, we used a bremsstrahlung model plus emission lines to parameterize the
line energies and
intensities detected with {\it BeppoSAX}. The spectra in both bands were fitted
simultaneously with the relative normalizations free to vary.
The emission lines are evident in
Fig.1 and the fitted line intensities and gas temperature are given in Table 3.
The {\it BeppoSAX} 2-10 keV continuum clearly requires a thermal model: a hard
power law alone (as allowed by {\it ASCA} data, Ptak et al. 1997) seems to be ruled out
($\Delta$$\chi^{2}$=49) by the present data.
We find that the spectra are well fitted by a double
Raymond-Smith model (see Table 4 and Fig.2). The results found using alternative thermal
models (e.g. Meka and Mekal models in XSPEC) confirm both the temperature
and abundances found with the Raymond-Smith model.
The LECS data show a residual excess below 1~keV, requiring a soft
component with KT$<$1~keV, as was also found by Ptak et al. (1997)
with {\it ASCA}.
\begin{table}[hbt]
\begin{center}
\begin{tabular}{cccc}
&&& \\
\multicolumn{4}{c}{Table 4: Double Raymond-Smith model} \\
\hline
$KT_{soft}$ & $Ab_{soft}$ & $KT_{hard}$ & $Ab_{hard}$ \\
$keV$ & $ $ & $keV$ & $ $ \\
\hline
&&& \\
$0.90_{-0.23}^{+0.19}$ & $\equiv 1$ & $6.52_{-0.50}^{+0.56}$ &
$0.25_{-0.07}^{+0.08}$ \\
&&& \\
\hline
\end{tabular}
\end{center}
Note: The value of the $A_{LECS}/A_{MECS}$ constant used for this simultaneous fit
is $0.64_{-0.03}^{+0.04}$.
\end{table}
The Fe K line (consistent with emission from Fe$_{XXV}$) at 6.7 keV has been unambiguously
detected for the first time in NGC253.
It is relatively narrow, with
an equivalent width of 310~eV, a value roughly
consistent with the upper limits placed by previous studies
(Ohashi et al. 1990; Ptak et al. 1997);
we note that a similar emission line was also
detected in M82 by {\it ASCA} (Ptak et al. 1997).
Other lines clearly detected are Si, S and Fe$_{XVIII}$/Ne (see Table 3),
in agreement with {\it ASCA} results. The best fit temperature obtained using a
double Raymond-Smith model is $\sim$6.5 keV, higher than expected but consistent with
supernova temperatures. The reliable detection of the Fe K line in NGC253 allows us
to determine the metallicity of the line-emitting gas, and we find, for the hard
component, a value of 0.25 solar, again consistent with the sub-solar values, based
on upper limits, predicted by Ohashi et al. (1990) and Ptak et al. (1997).
However, the quality of the LECS data is too poor to give a reliable estimate
of the metallicity of the soft component.
\section{Conclusions}
The detection of the iron line at 6.7 keV in this starburst galaxy is particularly
interesting: other galaxies, namely LINERs and/or low-luminosity active galaxies,
tend to show Fe K line
energies higher than AGN (Iyomoto et al. 1997), raising the question, in the light
of our results, of the nature and origin of the line detected in these galaxies.
The high-temperature thermal plasma and the presence of the bump around
1 keV in the spectra of NGC253 are at present puzzling and require
further investigation.
\section{Introduction}
Sigma terms quantify the quark contributions to the mass of a given baryon. They are given by matrix elements of a scalar current $J$ times a quark mass,
\begin{align}
\sigma_{qB} = m_q \langle B| J |B \rangle
\label{eq:sigma_term}
\end{align}
where $m_q$ denotes the quark mass of flavour $q$. The pion-baryon sigma terms are defined by $\sigma_{\pi B} = \sigma_{uB} + \sigma_{dB}$. We focus on scalar flavour-singlet quark currents $J = \bar{q} \,\mathds{1} \, q$,\, $q\in \{u,d,s\}$. In the matrix element, $|B \rangle$ denotes the ground state of the baryon $B$.
The most prominent examples are the nucleon sigma terms ($B=N$) which appear in the expressions for WIMP-nucleon scattering cross-sections and are relevant for comparing model predictions
to the exclusion bounds obtained from direct detection dark matter experiments (such as the XENON1T experiment).
We adapt methods established for the nucleon (reviewed in \cite{Ottnad:2020qbw}) to the analysis of the entire baryon octet. Studying the sigma terms of the lambda $\Lambda$, sigma $\Sigma$ and cascade $\Xi$ baryons allows us to investigate flavour symmetry breaking in the octet.
In addition, discrepancies between results for the pion-nucleon sigma term from Lattice QCD and phenomenology are still to be resolved (see \cite{FlavourLatticeAveragingGroup:2019iem}, and e.g., \cite{Alexandrou:2019brg,Borsanyi:2020bpd}). In a recent paper, results more consistent with phenomenology were obtained by explicitly including $N\pi$ and $N\pi\pi$ excited states in the analysis \cite{Gupta:2021ahb}. By considering baryons other than the nucleon, we hope to understand the sigma terms in more detail so as to help solve this puzzle.
\section{Excited state analysis - Ratio method}
\label{sect:ratio_method}
The ratio method \cite{Green:2018vxw,Ottnad:2020qbw} is a way of extracting the ground-state matrix element needed to construct sigma terms (eq.~(\ref{eq:sigma_term})).
We consider the two- and three-point functions of a baryon (from the octet) at rest in the initial and final state.
The spectral decomposition of the two-point function reads
\begin{align}
C_\mathrm{2pt}(\tf)=\sum_{\vec{x}}\left\langle \mathcal{O}_\mathrm{snk}(\vec{x},\tf) \bar{\mathcal{O}}_\mathrm{src}(\vec{0},0) \right\rangle
= \sum_n |Z_n|^2 e^{-E_n \tf}
\end{align}
where $Z_n\propto\langle\Omega|\mathcal{O}_\mathrm{snk}|n\rangle$ is the overlap of the interpolator $\mathcal{O}_\mathrm{snk}$ onto the state $n$ (and $\Omega$ the vacuum state) and $\tf$ the source-sink separation. Summation over spin and colour indices and projection onto
positive parity are implied. These indices become apparent when writing down the operators explicitly. The interpolators for the four octet baryons are
\begin{align}
\mathcal{O}_\mathrm{snk}^{\alpha,\mathrm{N}} &= \epsilon^{abc} u_{a}^\alpha \left( u_{b}^\beta (C\gamma_5)^{\beta\gamma}d_c^\gamma\right) \quad \text{and} \quad
\mathcal{O}_\mathrm{snk}^{\alpha,\Lambda} = \epsilon^{abc} s_{a}^\alpha \left( u_{b}^\beta (C\gamma_5)^{\beta\gamma}d_c^\gamma\right),\\
\mathcal{O}_\mathrm{snk}^{\alpha,\Sigma} &= \epsilon^{abc} u_{a}^\alpha \left( u_{b}^\beta (C\gamma_5)^{\beta\gamma}s_c^\gamma\right) \quad \,\text{and} \quad
\mathcal{O}_\mathrm{snk}^{\alpha,\Xi} = \epsilon^{abc} s_{a}^\alpha \left( s_{b}^\beta (C\gamma_5)^{\beta\gamma}u_c^\gamma\right).
\end{align}
$a,b,c$ are colour indices, $\alpha,\beta,\gamma$ are spin indices and $\mathcal{O}_\mathrm{src}^\alpha = \mathcal{O}_\mathrm{snk}^\alpha$ and $\bar{\mathcal{O}}_\mathrm{src} = \mathcal{O}_\mathrm{src}^\dagger \gamma_4 $. $C$ stands for the charge conjugation operator.
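As an aside, the ground-state energy $E_0$ can be read off from the large-$\tf$ behaviour of the two-point function via an effective mass. A minimal sketch with a synthetic two-state truncation of the spectral decomposition; all energies and overlaps (in lattice units) are invented for illustration.

```python
import numpy as np

E0, E1 = 0.40, 0.90          # illustrative ground- and excited-state energies (lattice units)
Z0, Z1 = 1.0, 0.3            # illustrative overlap factors
tf = np.arange(1, 25)

# two-state truncation of the spectral decomposition of C_2pt
C2 = Z0**2 * np.exp(-E0 * tf) + Z1**2 * np.exp(-E1 * tf)

# effective mass: approaches E0 from above once the excited-state
# contamination ~ (Z1/Z0)^2 exp(-(E1-E0) tf) has decayed
m_eff = np.log(C2[:-1] / C2[1:])
```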
Turning to the three-point function, its spectral decomposition reads
\begin{align}
C_\mathrm{3pt}(\tf,t) &=\sum_{\vec{x},\vec{y}}\left\langle \mathcal{O}_\mathrm{snk}(\vec{x},\tf) J(\vec{y},t) \bar{\mathcal{O}}_\mathrm{src}(\vec{0},0) \right\rangle
- \sum_{\vec{x},\vec{y}} \left\langle J(\vec{y},t)\right\rangle\left\langle \mathcal{O}_\mathrm{snk}(\vec{x},\tf) \bar{\mathcal{O}}_\mathrm{src}(\vec{0},0) \right\rangle \nonumber\\
&=\sum_{n,n'} Z_{n'} Z_n^* \langle n'|J|n\rangle
e^{-E_nt} e^{-E_{n'}(\tf-t)},
\end{align}
where $t$ is the insertion time of the scalar current, $J = \bar{q} \, \mathds{1} \, q$,\, $q\in \{u,d,s\}$.
As $J$ has the same quantum numbers as the vacuum, the vacuum expectation value needs to be subtracted.
Note that, depending on the type of baryon, different Wick contractions (and hence different currents) contribute, resulting in connected and disconnected quark-line diagrams.
Taking the ratio of the two spectral decompositions leads to
\begin{align}
R_\Gamma(\tf,t) = \frac{C_\mathrm{3pt}(\tf,t)}{C_\mathrm{2pt}(\tf)} = g_S^q + c_{01} \mathrm{e}^{-\Delta \, \cdot \,t} + c_{10} \mathrm{e}^{-\Delta \, \cdot \, (\tf-t)} + c_{11} \mathrm{e}^{-\Delta \, \cdot \, \tf} + ...
\label{eq:multi_state_fit_formula}
\end{align}
where $g_S^q = \langle B|J| B\rangle =\langle B|\bar{q} \, \mathds{1} \, q| B\rangle $ is the ground-state matrix element of interest. $\Delta = E_1 - E_0$ is the energy gap between the ground state and the first excited state. The coefficients $c_{01},c_{10},c_{11}$ are made up of matrix elements of different transitions such as $N_1 \rightarrow N$ , $N \rightarrow N_1$ and
$N_1 \rightarrow N_1$ for the nucleon and similarly for the other three baryons. $N_1$ stands for the first excited state of the nucleon and may be a single- or multi-particle state. As we consider the baryon at rest, $c_{01} = c_{10} \equiv c_1$ holds in this case.
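As a numerical illustration of eq.~(\ref{eq:multi_state_fit_formula}), one can generate synthetic ratio data with $c_{01}=c_{10}\equiv c_1$ and $c_{11}=0$ and recover $g_S^q$ and $\Delta$ by least squares. This is a sketch of the fit strategy only, not the analysis code; all parameter values are invented ($a\Delta=0.21$ roughly corresponds to the $\approx 651\,\mathrm{MeV}$ gap found later at this lattice spacing).

```python
import numpy as np
from scipy.optimize import curve_fit

def ratio(X, g, c1, delta):
    # ratio formula with c01 = c10 = c1 (baryon at rest) and c11 = 0
    tf, t = X
    return g + c1 * (np.exp(-delta * t) + np.exp(-delta * (tf - t)))

g_true, c1_true, delta_true = 5.0, -1.2, 0.21      # illustrative, lattice units
tfs = [11, 14, 16, 19]                             # source-sink separations used here
X = np.array([(tf, t) for tf in tfs for t in range(2, tf - 1)], float).T

rng = np.random.default_rng(0)
data = ratio(X, g_true, c1_true, delta_true) + rng.normal(0.0, 1e-3, X.shape[1])

(g_fit, c1_fit, delta_fit), _ = curve_fit(ratio, X, data, p0=[4.0, -1.0, 0.3])
# recovered parameters agree with the inputs at the noise level
```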
\section{Renormalisation}
Quark masses $m_q$ are renormalised via
\begin{align}
m_q^\mathrm{ren} = \zm \left[m_q \, + \, (\rmsea - 1) \frac{\Tr M}{\Nf} \right],
\end{align}
which holds up to cut-off effects. $\zm$ is the renormalisation constant of the non-singlet scalar density and $\Tr M = \sum_q m_q$.
The matrix elements must renormalise in the inverse manner w.r.t. the masses so that
\begin{align}
\sigma_{qB}^{\mathrm{ren}} = \left(m_q + (\rmsea-1)\frac{\Tr M}{\Nf} \right) \left(g_{q,S}^B + (\rmsea^{-1}-1)\frac{\Tr g_{S}^B}{\Nf} \right)
\label{eq:renormalisation}
\end{align}
and $\sigma_{\pi B}^\mathrm{ren} = \sigma_{uB}^\mathrm{ren} + \sigma_{dB}^\mathrm{ren}$. The normalisation factor $\rmsea$ is the ratio of flavour non-singlet and singlet scalar density renormalisation parameters, determined in Refs. \cite{Bali:2016umi,Heitger:2021bmg} for our lattice discretisation. It accounts for the mixing of quark flavours under renormalisation for Wilson fermions.
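Since $\zm$ cancels between the mass and the matrix element, eq.~(\ref{eq:renormalisation}) can be evaluated directly from the bare quark masses, the bare matrix elements and $\rmsea$. A small sketch of this combination; all bare numbers are invented, only $\rmsea=1.523$ is taken from the text.

```python
def sigma_ren(m_q, g_q, masses, gs, r_m, Nf=3):
    # renormalised sigma term from bare inputs (Z_m cancels in the product)
    trM = sum(masses)            # Tr M  = sum of sea-quark masses
    trG = sum(gs)                # Tr g_S = sum of flavour matrix elements
    return (m_q + (r_m - 1.0) * trM / Nf) * (g_q + (1.0 / r_m - 1.0) * trG / Nf)

# invented bare values in lattice units: (m_l, m_l, m_s) and matrix elements
masses, gs = (0.003, 0.003, 0.02), (4.0, 3.5, 1.2)
r_m = 1.523                      # r_m(beta = 3.55) from the text

sigma_u = sigma_ren(masses[0], gs[0], masses, gs, r_m)
# for r_m = 1 the flavour-mixing terms vanish and sigma reduces to m_q * g_q
assert sigma_ren(masses[0], gs[0], masses, gs, 1.0) == masses[0] * gs[0]
```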
\section{Numerical setup}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{figures/ensemble_overview_trm.pdf}
\caption{Overview of the $\Tr M=\mathrm{const.}$ ensembles, part of the \textit{CLS} effort, that we plan to include in our analysis. So far the ensembles highlighted as black diamonds have been analysed. The pion masses and lattice spacings are given on the $y$ and $x$ axes, respectively. The red circle indicates the physical point.}
\label{fig:ensembles}
\end{figure}
We perform our calculations on \textit{CLS} gauge field ensembles \cite{Bruno:2014jqa} employing the L\"uscher-Weisz gluon action and the Sheikholeslami-Wohlert fermion action with $N_\mathrm{f} = 2 + 1$ ($m_l=m_u=m_d\leq m_s$). Pion-baryon and strange sigma terms are determined on the three ensembles highlighted as black diamonds in fig.~\ref{fig:ensembles} along a trajectory where the sum of the sea quark masses is kept constant. Only one lattice spacing of ${0.06426(74)(17)}\,\mathrm{fm}$ ($\beta=3.55$)~\cite{Bruno:2016plf} and a lattice size of $128 \times 48^3$ have been considered so far, focusing on the quark mass dependence. We take three pion masses into account: ${411}\,\mathrm{MeV}$, ${345}\,\mathrm{MeV}$ and ${284}\,\mathrm{MeV}$. We use $r_\mathrm{m}(\beta=3.55)=1.523(14)$, non-perturbatively determined in \cite{Heitger:2021bmg}.
To compute the connected three-point correlation functions on the $m_l = m_s$ ensemble (N202), we used the standard sequential source method \cite{Maiani:1987by}. On the other ensembles we employed the stochastic method described in \cite{Bali:2019svt,Bali:2017mft} (see also \cite{Yang:2015zja,Alexandrou:2013xon,Bali:2013gxx,Evans:2010tg}), estimating a timeslice-to-all propagator. This approach enables us to obtain measurements for all baryons of interest, as multiple source and insertion positions can be estimated simultaneously.
Four different source-sink separations, $\tf/a = [11, 14, 16, 19]$, corresponding to $\tf \approx [0.71 \,\mathrm{fm}, 0.9 \,\mathrm{fm}, 1.03 \, \mathrm{fm}, 1.22 \, \mathrm{fm}]$, are employed. Four measurements ($2$ replica $\times$ (forward and backward direction)) are performed for each $\tf$ on every configuration except for the $m_l = m_s$ ensemble (N202), where we used the sequential source method; there, only one measurement is performed at $\tf=11$ and two at $\tf=[14,16]$, while four measurements are performed at $\tf=19$.
The disconnected three-point functions are constructed by correlating a quark loop with a baryon two-point function. The loop is estimated stochastically leading to additional noise on top of the Monte-Carlo gauge sampling. In order to reduce the noise, the truncated solver method~\cite{Bali:2009hu}, the hopping parameter expansion technique~\cite{Thron:1997iy} and time partitioning~\cite{Bernardson:1993he} are utilised. Forty measurements ($2$ replica $\times$ $20$ different spatial source positions) of the two-point function are performed on each configuration with the exception of N202 where the number is 52.
The source-sink separations range from $\tf/a=4 \leftrightarrow \tf \approx 0.26\,\mathrm{fm}$ to $\tf/a=19 \leftrightarrow \tf \approx 1.22\,\mathrm{fm}$.
For the analysis of the statistical errors we employ the $\Gamma$-method \cite{Wolff:2003sm} that is based on autocorrelation functions.
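The central quantity in the $\Gamma$-method is the integrated autocorrelation time, which rescales the naive variance of a Monte-Carlo average. Below is a bare-bones sketch with a fixed summation window; the actual method of \cite{Wolff:2003sm} chooses the window automatically, so this simplified version is for illustration only.

```python
import numpy as np

def tau_int(series, W):
    # integrated autocorrelation time, truncated at a fixed window W
    a = np.asarray(series, float) - np.mean(series)
    n = len(a)
    gamma = np.array([a[: n - t] @ a[t:] / (n - t) for t in range(W + 1)])
    return 0.5 + np.sum(gamma[1:] / gamma[0])

rng = np.random.default_rng(2)
white = rng.normal(size=50_000)        # uncorrelated data: tau_int ~ 0.5

rho = 0.8                              # AR(1) chain: expect 0.5 + rho/(1-rho) = 4.5
ar1 = np.empty(50_000)
ar1[0] = 0.0
for i in range(1, len(ar1)):
    ar1[i] = rho * ar1[i - 1] + rng.normal()
```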
\section{Analysis and preliminary results}
\begin{figure}[h]
\includegraphics[width=0.5\linewidth]{figures/xi/N203RconUU.pdf}
\includegraphics[width=0.5\linewidth]{figures/xi/N203RconSS.pdf}
\includegraphics[width=0.5\linewidth]{figures/xi/N203RdisL.pdf}
\includegraphics[width=0.5\linewidth]{figures/xi/N203RdisS.pdf}
\caption{The connected and disconnected ratios that contribute to the sigma terms of the $\Xi$ baryon for ensemble N203: the simultaneous fit to the connected and disconnected ratios is indicated by the coloured shaded regions, with the resulting ground-state
scalar matrix element displayed as a grey band. We obtain $\chi^2/\mathrm{d.o.f.} = 0.72$ and an energy gap of $\Delta \approx 651\, \mathrm{MeV}$.
At the top, the connected ratios are plotted against the insertion time $t$ at different source-sink separations $\tf$ for the $\bar{u}u$ current (left) and the $\bar{s}s$ current (right). At the bottom, the disconnected ratios are plotted against the source-sink separation $\tf$ at different insertion times $t$ for $J=\bar{l}l$ (left) and $J=\bar{s}s$ (right).}
\label{fig:sim_fit_lambda}
\end{figure}
Connected and disconnected ratios are constructed separately for all scalar currents that contribute. In order to tackle excited state contamination we perform multi-state fits, according to eq.~(\ref{eq:multi_state_fit_formula}). For each baryon we fit all connected and disconnected ratios simultaneously, with the energy gap $\Delta$ being the common fit parameter.
As an example, the ratios and fits relevant for determining the sigma terms of the $\Xi$ baryon on the N203 ensemble are displayed in fig.~\ref{fig:sim_fit_lambda}. While we were able to resolve the first two excited-state terms in eq.~(\ref{eq:multi_state_fit_formula}), it was not possible to resolve the third, and we set $c_{11}=0$ throughout our analysis. The $\chi^2/\text{d.o.f.}$ values were below one for all baryons.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{figures/ChPT_fit_free_NNLO_parms_F0.pdf}
\caption{Pion mass dependence of the sigma terms: the dotted vertical lines indicate the physical pion mass. The pion-baryon (left) and strange-baryon (right) terms are depicted by squares (nucleon), diamonds ($\Lambda$), circles ($\Sigma$) and pentagons ($\Xi$). The simultaneous fit to pion-baryon and strange sigma terms is displayed by the dashed lines (including the error bands as shaded regions), resulting in $\chi^2/\mathrm{d.o.f.}=1.29$. Both LO LECs and the octet baryon mass in the chiral limit are kept fixed to $F = 0.446(7)$, $D = 0.731(12)$ and $m_0 = 729(42)\, \mathrm{MeV}$ from a preliminary analysis of the nucleon mass and the axial charges in the chiral limit, while the NLO LECs $b_D,b_F,\bar{b}$ and the pion decay constant $F_0$ are fitted. We get $\bar{b}=0.00317(29)$, $b_F=-0.000335(27)$, $b_D=0.0000493(21)$ and $F_0=119.9(9.8)\, \mathrm{MeV}$. This value of $F_0$ differs greatly from $F_0 = 71(2)\, \mathrm{MeV}$, the preliminary value from a combined fit to the pion decay constant and the pion mass that is part of the analysis mentioned above.}
\label{fig:chpt}
\end{figure}
The ground-state matrix elements of interest can now be extracted from the fit. Matrix elements of different currents are combined and multiplied by the corresponding quark masses to form pion-baryon and strange sigma terms for all octet baryons considered, see eq.~(\ref{eq:sigma_term}). Renormalisation is applied via eq.~(\ref{eq:renormalisation}). Our preliminary results for pion-baryon and strange sigma terms are plotted against the pion mass in fig.~\ref{fig:chpt}. From Baryon Chiral Perturbation Theory (BChPT) we can derive the pion mass dependence expected from SU(3) flavour symmetry \cite{Lehnhart:2004vi} (see also \cite{PhysRev.125.1067,10.1143/PTP.27.949,Geng:2013xn}); we
apply the Feynman-Hellmann theorem, which relates sigma terms to derivatives of the baryon mass with respect to the quark masses, resulting in
\begin{subequations}
\label{eq:chpts}
\begin{equation}
\sigma_{\pi{}B}=M_{\pi}^2\left\{\frac{2}{3}\bar{b}-\delta b_B
+\frac{m_0^2}{(4\pi F_0)^2}
\left[\frac{g_{B,\pi}}{2M_{\pi}}f'\left(\frac{M_{\pi}}{m_0}\right)
+\frac{g_{B,K}}{4M_{K}}f'\left(\frac{M_{K}}{m_0}\right)
+\frac{g_{B,\eta}}{6M_{\eta}}f'\left(\frac{M_{\eta}}{m_0}\right)
\right]\right\},\label{eq:cpt1}
\end{equation}
\begin{equation}
\sigma_{sB}=\left(2M_K^2-M^2_{\pi}\right)
\left\{\frac{1}{3}\bar{b}+\delta b_B
+\frac{m_0^2}{(4\pi F_0)^2}
\left[
\frac{g_{B,K}}{4M_{K}}f'\left(\frac{M_{K}}{m_0}\right)
+\frac{g_{B,\eta}}{3M_{\eta}}f'\left(\frac{M_{\eta}}{m_0}\right)
\right]
\right\},\, \quad \quad \label{eq:cpt2}
\end{equation}
\end{subequations}
where $m_0$ and $F_0$ are the octet baryon mass and the pion decay constant in the chiral limit. $\delta b_B$ is a combination of two of the three BChPT next-to-leading order (NLO) low energy constants (LECs) $b_D$, $b_F$ and $\bar{b}=-6b_0-4b_D$, and depends on the baryon,
\begin{align}
\delta b_N=\tfrac23(3b_F-b_D),\quad
\delta b_\Lambda=-\tfrac43 b_D,\quad
\delta b_\Sigma=\tfrac43 b_D,\quad
\delta b_\Xi=-\tfrac23(3b_F+b_D).\label{eq:deltab}
\end{align}
\noindent The couplings $g_{B,\pi},g_{B,K}$ and $g_{B,\eta}$ are made up of different combinations of the leading order (LO) LECs $F$ and $D$,
\begin{align}
g_{N,\pi}&= \tfrac{3}{2}{(D+F)}^2, & g_{N,K}&=\tfrac{5}{3} D^2 - 2D F + 3 F^2, & g_{N,\eta} &= \tfrac{1}{6} {(D-3F)}^2,\nonumber \\
g_{\Lambda,\pi}&=2 D^2, & g_{\Lambda,K}&=\tfrac{2}{3} D^2 + 6 F^2, & g_{\Lambda,\eta}&=\tfrac{2}{3} D^2,\nonumber \\
g_{\Sigma,\pi}&=\tfrac{2}{3}D^2 + 4 F^2, & g_{\Sigma,K}&=2D^2 + 2F^2, & g_{\Sigma,\eta}&=\tfrac{2}{3} D^2,\nonumber \\
g_{\Xi,\pi}&=\tfrac{3}{2}{(D-F)}^2, & g_{\Xi,K}&=\tfrac{5}{3} D^2 + 2D F + 3 F^2, & g_{\Xi,\eta}&=\tfrac{1}{6} {(D+3F)}^2,
\end{align}
that also appear in the ChPT expressions for the axial charges.
$f^\prime$ is the derivative of the loop function $f$, which is given by $f(x)=-\pi x^3$ in Heavy Baryon ChPT \cite{Bernard:1992qa,Gasser:1987rb} or by
\begin{align}
f(x)= -2x^3\left[\sqrt{1-\frac{x^2}{4}}\arccos\left(\frac{x}{2}\right)
+\frac{x}{2}\ln(x)\right]
\label{eq:loop_fct_BChPT}
\end{align}
in covariant BChPT in the extended on-mass-shell (EOMS) scheme \cite{Gegelia:1999gf,Fuchs:2003qc,Lehnhart:2004vi}.
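As a quick check on eq.~(\ref{eq:loop_fct_BChPT}), for $x \to 0$ the covariant loop function reduces to the Heavy Baryon form $-\pi x^3$, since $\arccos(x/2)\to\pi/2$ while $(x/2)\ln x$ is subleading. The sketch below verifies this numerically and evaluates the nucleon couplings at the LO LEC values quoted in fig.~\ref{fig:chpt}; it is an illustration, not part of the fit code.

```python
import numpy as np

def f_eoms(x):
    # covariant BChPT (EOMS) loop function, valid for 0 < x < 2
    return -2 * x**3 * (np.sqrt(1 - x**2 / 4) * np.arccos(x / 2)
                        + (x / 2) * np.log(x))

def f_hb(x):
    # Heavy Baryon ChPT limit
    return -np.pi * x**3

# small-x agreement of the two loop functions (deviation of a few per mille)
x = 1e-3
rel_dev = abs(f_eoms(x) - f_hb(x)) / abs(f_hb(x))

def g_nucleon(F, D):
    # nucleon couplings in terms of the LO LECs F and D
    return {"pi": 1.5 * (D + F)**2,
            "K": 5 / 3 * D**2 - 2 * D * F + 3 * F**2,
            "eta": (D - 3 * F)**2 / 6}

g = g_nucleon(F=0.446, D=0.731)   # central LO LEC values quoted in the caption
```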
This BChPT prediction (\ref{eq:chpts}) tells us that the pion-baryon and strange sigma terms should be describable by the same set of LECs. We find that a simultaneous fit to our preliminary pion-baryon and strange sigma terms is successful, i.e., both sets of sigma terms can be described consistently, see fig.~\ref{fig:chpt}.
In addition, we investigate whether we obtain consistent results for the LECs with a preliminary study where $F_0$ was estimated from a combined fit to the pion decay constant and the pion mass. As part of the same study $m_0$, $F$ and $D$ were determined in an analysis of the nucleon mass and the axial charges in the chiral limit.
We see that it is not possible to arrive at a satisfactory fit when keeping these four parameters fixed to the preliminary values. Instead, we find that at least one parameter has to be left free to account for the difference in curvature.
We show the best fit for our sigma terms in fig.~\ref{fig:chpt}; we perform a simultaneous fit to pion-baryon and strange sigma terms according to eq.~(\ref{eq:chpts}) using eq.~(\ref{eq:loop_fct_BChPT}) for the loop function. Here the three NLO LECs and $F_0$ are the common fit parameters whilst keeping $F$, $D$ and $m_0$ fixed to the values from the aforementioned (preliminary) analysis. Our fit result for $F_0$ is unreasonably large. This may be due to the fact that we do not yet incorporate cut-off and finite-volume effects on this small subset of ensembles at a single lattice spacing. Note that higher order ChPT effects may also contribute.
\section{Conclusion and outlook}
We have demonstrated that it is possible to obtain pion-baryon and strange sigma terms for all octet baryons using methods similar to those for the nucleon. Taking a closer look at the renormalisation pattern, it might be more convenient to consider other combinations of sigma terms. We also aim to take into account all main sources of systematics. We will, for example, try out further fitting techniques; in order to determine whether we control excited-state contributions sufficiently, the summation method \cite{Green:2018vxw,Ottnad:2020qbw} may serve as a cross-check. In the future we plan to extend the analysis to include additional ensembles. This will allow for a chiral extrapolation to the physical pion mass and an investigation of cut-off and finite-volume effects.
\section{Introduction} \label{sec:intro}
In 1963, Birch and Swinnerton-Dyer \cite{BSD} carried out a seminal study of the moduli space of genus one curves equipped with degree 2 line bundles, in terms of the orbits of the action of $\mathrm{GL}_2$ on the space $\Sym^4(2)$ of binary quartic forms. This orbit space parametrization was a key ingredient in the explicit 2-descent computations that led them to the celebrated Birch and Swinnerton-Dyer conjecture. The analogues of this parametrization for line bundles of degree 3, 4, and 5 (i.e., ``elliptic normal curves'' of degrees $3$, $4$, and $5$) were subsequently investigated in the important works of Cassels \cite{Cassels}, and more recently, Cremona \cite{cremona-binarycubicquartic}, Cremona--Fisher--Stoll \cite{cremonafisherstoll}, and Fisher \cite{fisher-pfaffianECs}. These works have, in particular, enabled explicit 3-, 4-, and 5-descent computations on elliptic curves analogous to the original 2-descent computations of Birch and Swinnerton-Dyer.
Recently, these parametrizations of elliptic normal curves have also been used to obtain bounds on the average rank and Selmer ranks of elliptic curves (see \cite{arulmanjul-bqcount,arulmanjul-tccount}).
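For orientation, we recall (in one common normalization, following Cremona \cite{cremona-binarycubicquartic}) the two generating invariants of a binary quartic and the resulting model for the Jacobian; this is the prototype for the invariant-theoretic dictionaries developed below:
```latex
\begin{align*}
f(x,z) &= a x^4 + b x^3 z + c x^2 z^2 + d x z^3 + e z^4, \\
I(f) &= 12ae - 3bd + c^2, \qquad
J(f) = 72ace + 9bcd - 27ad^2 - 27b^2e - 2c^3, \\
\Jac(C_f) &\colon \; y^2 = x^3 - 27\, I(f)\, x - 27\, J(f).
\end{align*}
```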
The important consequences and elegance of these classical orbit parametrizations naturally raise the question as to whether further such correspondences exist that could shed light on other data attached to genus one curves. The purpose of this article is to develop additional such correspondences. In fact, we will show that the classical representations described above for elliptic normal curves are only four among at least $20$ such representations whose orbits parametrize nontrivial data on genus one curves---such as line bundles, vector bundles, points on the Jacobian, as well as more exotic structures.
The underlying philosophy is the use of orbit spaces to parametrize algebraic or geometric objects. In 1801, Gauss gave perhaps the first nontrivial example of such a parametrization in his celebrated Disquisitiones Arithmeticae \cite{Gauss}, where he studied integral binary quadratic forms under a certain action of the group $\mathrm{GL}_2(\mathbb{Z})$. Although the space of binary quadratic forms is a {\it prehomogeneous vector space}, meaning that it has only one open orbit over $\mathbb{C}$, the rational and especially the integral orbits are in bijection with quite nontrivial arithmetic objects, namely, quadratic fields and ideal classes in quadratic rings, respectively.
In \cite{wrightyukie}, Wright and Yukie showed that orbits of many prehomogeneous vector spaces over a field $k$ correspond to field extensions of $k$.
The series of papers \cite{hcl1,hcl2,hcl3,hcl4,hcl5} describes how the integral orbits of most prehomogeneous vector spaces parametrize arithmetic objects, such as rings of low rank together with ideals and modules. These parametrizations were used in \cite{manjulcountquartic,manjulcountquintic}, for example, to determine the density of discriminants of quartic and quintic fields, thus completing the original program of Wright and Yukie (see~\cite[\S1]{wrightyukie}) of using prehomogeneous representations to determine densities of arithmetic objects.
\nopagebreak
In this paper, we study a natural series of {\it coregular} representations, that is, representations for which the ring of invariants is a polynomial ring.
More precisely, we consider here many representations of reductive groups $G$ for which the restricted representation on the semisimple part of $G$ is coregular.
It is interesting that ``most'' such representations that have more than one generating invariant---i.e., are not prehomogeneous---turn out to involve {\it genus one curves}.
Just as the space of $2\times 2\times 2$ cubes and $2\times 3\times 3$ boxes played a central role in the study of prehomogeneous vector spaces \cite{hcl1,hcl2}, here the spaces of $2\times 2\times 2\times 2$ hypercubes and $3\times 3\times 3$ cubes play a central role in the theory, from which we are then able to derive most other coregular spaces corresponding to genus one curves via suitable invariant-theoretic procedures. Also, analogous to the prehomogeneous cases, the invariant theory of our spaces plays a crucial role in constructing and describing the corresponding geometric data. Indeed, in many cases, our bijections yield natural geometric interpretations for the generators of the invariant ring.
\afterpage{%
\begin{landscape}
\begin{table} \label{table:examples}
\begin{center}
\begin{tabular*}{1.1\textwidth}{@{\extracolsep{\fill}}r|c|c|l|c|c|c|}
\cline{2-7}
& Group (s.s.) & Representation & Geometric Data & Invariants & Dynkin & \S \\
\cline{2-7}
1. & $\mathrm{SL}_2$ & $\Sym^4 (2)$ & $(C,L_2)$ & $2, 3$ & $A_2^{(2)}$ &\ref{sec:binaryquartics} \\
2. & $\mathrm{SL}_2^2$ & $\Sym^2 (2) \otimes \Sym^2 (2)$ & $(C, L_2, L_2') \sim (C, L_2, P)$ & $2, 3, 4$ & $D_3^{(2)}$&\ref{sec:bideg22forms} \\
3. & $\mathrm{SL}_2^4$ & $2 \otimes 2 \otimes 2 \otimes 2$ & $(C, L_2, L_2', L_2'')\sim(C,L_2,P,P')$ & $2, 4, 4, 6$ & $D_4^{(1)}$& \ref{sec:hypercube} \\
4. & $\mathrm{SL}_2^3$ & $2 \otimes 2 \otimes \Sym^2(2)$ & $(C, L_2, L_2') \sim (C, L_2, P)$ & $2, 4, 6$ & $B_3^{(1)}$ &\ref{sec:2symHC} \\
5. & $\mathrm{SL}_2^2$ & $\Sym^2 (2) \otimes \Sym^2 (2)$ & $(C,L_2, L_2') \sim (C, L_2, P)$ & $2, 3, 4$ & $D_3^{(2)}$ & \ref{sec:22symHC} \\
6. & $\mathrm{SL}_2^2$ & $2 \otimes \Sym^3(2)$ & $(C,L_2, P_3)$ & $2, 6$ & $G_2^{(1)}$ &\ref{sec:3symHC} \\
7. & $\mathrm{SL}_2$ & $\Sym^4 (2)$ & $(C, L_2, P_3)$ & $2, 3$ & $A_2^{(2)}$& \ref{sec:4symHC} \\
8. & $\mathrm{SL}_2^2 \times \mathrm{GL}_4$ & $ 2 \otimes 2 \otimes \wedge^2(4)$ & $(C, L_2, M_{2,6})$ & $2, 4, 6, 8$ & $D_5^{(1)}$&\ref{sec:2skewHC} \\
9. & $\mathrm{SL}_2 \times \mathrm{SL}_6$ & $ 2 \otimes \wedge^3(6)$ & $(C, L_2, M_{3,6})$ with $L^{\otimes 3} \cong \det M$
& $2, 6, 8, 12$ & $E_6^{(1)}$& \ref{sec:3skewHC}\\
10. & $\mathrm{SL}_2 \times \mathrm{Sp}_6$ & $2 \otimes \wedge^3_0(6)$ & $(C,L_2, (M_{3,6}, \varphi))$ with $L^{\otimes 3} \cong \det M$ & $2, 6, 8, 12$ & $E_6^{(2)}$ & \ref{sec:excHC} \\
11. & $\mathrm{SL}_2 \times \Spin_{12}$ & $2 \otimes S^+(32)$ & $(C \to \mathbb{P}^1(\mathscr{H}_3(\mathbb{H})), L_2)$
& $2, 6, 8, 12$ & $E_7^{(1)}$& \ref{sec:excHC} \\
12. & $\mathrm{SL}_2 \times E_7$ & $2 \otimes 56$ & $(C \to \mathbb{P}^1(\mathscr{H}_3(\mathbb{O})), L_2)$
& $2, 6, 8, 12$ & $E_8^{(1)}$ & \ref{sec:excHC} \\
\cline{2-7}
13. & $\mathrm{SL}_3$ & $\Sym^3 (3)$ & $(C,L_3)$ & $4, 6$ & $D_4^{(3)}$ & \ref{sec:ternarycubics} \\
14. & $\mathrm{SL}_3^3$ & $3 \otimes 3 \otimes 3$ & $(C,L_3,L_3') \sim (C,L_3,P)$ & $6, 9, 12$ & $E_6^{(1)}$ & \ref{sec:333} \\
15. & $\mathrm{SL}_3^2$ & $3 \otimes \Sym^2 (3)$ & $(C,L_3,P_2)$ & $6, 12$ & $F_4^{(1)}$& \ref{sec:2symRC} \\
16. & $\mathrm{SL}_3$ & $\Sym^3 (3)$ & $(C,L_3,P_2)$ & $4, 6$ & $D_4^{(3)}$&\ref{sec:3symRC} \\
17. & $\mathrm{SL}_3 \times \mathrm{SL}_6$ & $3 \otimes \wedge^2 (6)$ & $(C,L_3,M_{2,6})$ with $L^{\otimes 2} \cong \det M$
& $6, 12, 18$ & $E_7^{(1)}$ & \ref{sec:deg3special} \\
18. & $\mathrm{SL}_3 \times E_6$ & $3 \otimes 27$ & $(C \hookrightarrow \mathbb{P}^2(\mathbb{O}),L_3)$ & $6, 12, 18$ & $E_8^{(1)}$& \ref{sec:deg3moduli} \\
\cline{2-7}
19. & $\mathrm{SL}_2 \times \mathrm{SL}_4$ & $2 \otimes \Sym^2 (4)$ & $(C,L_4)$ & $8, 12$ & $E_6^{(2)}$&\ref{sec:deg4} \\
\cline{2-7}
20. & $\mathrm{SL}_5 \times \mathrm{SL}_5 $ & $\wedge^2(5) \otimes 5$ & $(C,L_5)$ & $20, 30$ & $E_8^{(1)}$ & \ref{sec:deg5} \\
\cline{2-7}
\end{tabular*}
\end{center}
\caption{Table of coregular representations and their moduli interpretations}
\end{table}
\end{landscape}
}
\vspace{.1in}
A summary of the parametrizations we obtain is provided in Table \ref{table:examples}.
In this table, the first and second columns list the representations in question, although we only list the semisimple
parts of the groups here, since some of the actions of the non-semisimple parts of the relevant groups are not completely standard.
The third column lists the geometric data (up to isomorphism) arising from a general orbit of this representation: the data in every case includes a genus one curve $C$. The curve may also be equipped with line bundles, denoted by $L_d$, $L_d'$, $L_d''$, etc., where $d$ is the degree of the line bundle, or with a vector bundle, denoted by $M_{r,d}$, where $r$ is the rank and $d$ is the degree of the vector bundle. The notation $P$ or $P'$ indicates a rational point on the Jacobian of $C$ (often in a certain arithmetic subgroup of $\Jac(C)$), and $P_e$ indicates that the point is a nontrivial rational torsion point of order $e$. The notation $\sim$ indicates that the data on the two sides are equivalent and are both suitable interpretations for the moduli problem. There may be some additional mild (open) conditions on the geometric data not indicated in column three.
The fourth column gives the degrees of the invariants of the representation of the semisimple group, and the fifth contains
the extended or affine Dynkin diagram associated with the representation (see Section \ref{sec:ExcLieAlgs}).
Finally, the sixth column indicates the subsection in which that case is proved
and/or discussed most carefully, although most of the theorems are previewed in Sections~\ref{sec:HCpreview} and \ref{sec:RCpreview}. Note that in many cases, changing the form of the group over $K$ leads to a twisted version of the geometric data in column three.
For example, line 3 of Table~\ref{table:examples} corresponds to the case of $2\times 2\times 2\times 2$ hypercubical matrices over a field $K$ ($\mathrm{char}(K) \neq 2,3$). We show that the nondegenerate $\mathrm{GL}_2(K)^4$-orbits of $K^2\otimes K^2 \otimes K^2\otimes K^2$ correspond to the data $(C,L,(P,P',P''))$, where: $C$ is a genus one curve over $K$;\, $L$ is a degree 2 line bundle on $C$;\, and $P$, $P'$, and $P''$ are non-identity $K$-points summing to zero in a certain arithmetic subgroup of the Jacobian of $C$. Meanwhile, the ring of polynomial invariants for the action of $\mathrm{SL}_2(K)^4$ on $K^2\otimes K^2 \otimes K^2\otimes K^2$ is freely generated by four invariants $a_2$, $a_4$, $a_4'$, and $a_6$, having degrees 2, 4, 4, and 6, respectively. In terms of the geometric data, if we write the Jacobian of $C$ as a Weierstrass elliptic curve $y^2=x^3+Ax+B$ on which the points $P$, $P'$, $P''$ lie, then: $a_2$ can be interpreted as the slope of the line connecting $P$, $P'$ and $P''$;\, $a_4$ and $a_4'$ are the $x$-coordinates of the points $P$ and $P'$;\, and $a_6$ is the $y$-coordinate of~$P$.
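The chord interpretation of $a_2$ can be made concrete: three non-identity points of an elliptic curve summing to zero are exactly the intersection of the curve with a line. A quick numerical sanity check on the illustrative curve $y^2 = x^3 + 17$; the curve and points are chosen for convenience and are not tied to any particular hypercube.

```python
from fractions import Fraction as Frac

A, B = Frac(0), Frac(17)                 # illustrative Jacobian y^2 = x^3 + 17
def on_curve(P):
    x, y = P
    return y * y == x**3 + A * x + B

P, Pp = (Frac(-2), Frac(3)), (Frac(-1), Frac(4))
s = (Pp[1] - P[1]) / (Pp[0] - P[0])      # slope of the chord: the analogue of a_2
x3 = s * s - P[0] - Pp[0]                # third intersection of the chord with the curve
Ppp = (x3, s * (x3 - P[0]) + P[1])       # so that P + P' + P'' = O in the group law

assert on_curve(P) and on_curve(Pp) and on_curve(Ppp)
```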
Similarly, line 14 of Table~\ref{table:examples} corresponds to the case of $\mathrm{GL}_3(K)^3$ acting on the space $K^3\otimes K^3\otimes K^3$ of $3\times 3\times3$ cubical matrices over $K$. We prove that the nondegenerate orbits parametrize data consisting of a triple $(C,L,(P,P'))$, where $C$ is a genus one curve over $K$, $L$ is a degree 3 line bundle on $C$ defined over $K$, and $P$ and $P'$ are non-identity points summing to zero in an arithmetic subgroup of the Jacobian of $C$. The three generators $b_6$, $b_9$, and $b_{12}$ of the $\mathrm{SL}_3(K)^3$-invariant ring have degrees 6, 9, and 12, respectively. If we again write the Jacobian of $C$ as a Weierstrass elliptic curve $y^2=x^3+Ax+B$, then $P=(b_6,b_9)$, $P'=(b_6,-b_9)$, and~$A=b_{12}$.
\vspace{.1in}
We briefly describe some forthcoming applications of these parametrizations. In \cite{arulmanjul-bqcount,arulmanjul-tccount}, an implementation of certain geometric counting techniques (building on those in~\cite{manjulcountquintic})
for integral orbits of the representations in lines 1 and 13
of Table~\ref{table:examples}
has led to results on the average sizes of the 2- and 3-Selmer groups of the family of all elliptic curves over $\mathbb{Q}$ (when ordered by height), and corresponding (finite) upper bounds on the average rank of all elliptic curves.
By developing these counting techniques further, so that they may be applied to other cases in Table~\ref{table:examples}, we determine in~\cite{cofreecounting} the average sizes of the 2- and 3-Selmer groups for various families of elliptic curves, e.g., those with marked points. These results lead to corresponding average rank bounds for the curves in these families. For example,
the space of $3\times 3\times 3$ cubes allows us to show that the average size of the 3-Selmer group in the family of elliptic curves
\begin{equation*}
y^2 +a_3 y = x^3 + a_2 x^2 + a_4 x
\end{equation*}
having a marked point at $(0,0)$ is
12. As a result, we show that the (limsup of the) average rank of this family of elliptic curves is finite (indeed, at most $2\frac16$), and that a positive proportion of curves in this family have rank one. Analogous results for average sizes of Selmer groups in families of elliptic curves with one marked point of order 3 or 2 (using lines 6 and 15, respectively, of Table~\ref{table:examples}) and elliptic curves with two general marked points (using line 3, the space of hypercubes) are also obtained.
\vspace{.1in}
\noindent {\em Outline.}
The organization of this paper is as follows. Sections \ref{sec:HCpreview} and \ref{sec:RCpreview} form an extended introduction in which we describe the basic constructions and parametrizations corresponding to $2\times 2\times 2\times 2$ hypercubes and $3\times 3\times 3$ cubes, respectively, and how many of the various other coregular space orbit parametrizations in Table \ref{table:examples} may be (at least heuristically) derived from them.
Section \ref{sec:classical} describes orbit parametrizations for the moduli spaces of genus one curves with degree~$n$ line bundles, for~$2 \leq n \leq 5$. Many of the ideas in this section are classical or well known, at least over algebraically closed fields, but our constructions generalize to other fields and more general base schemes. We also show that the stabilizers of elements in these representations are naturally isomorphic to the automorphism group of a genus one curve with a degree $n$ line bundle, which is related to the so-called Heisenberg theta group. These results about stabilizers play a central role in the works~\cite{arulmanjul-bqcount,arulmanjul-tccount}. The parametrizations of elliptic normal curves of degree~$n$, especially for $n = 2$ and $n = 3$, will also be used extensively in the later parts of the paper.
In Section \ref{sec:hermRC}, we concentrate on the coregular spaces whose orbits are related
to a genus one curve and a degree~$3$ line bundle, possibly with additional data.
We first discuss some of the fundamental cases, such as the aforementioned space of
$3 \times 3 \times 3$ cubical matrices.
In each case, we show that the invariants of the representation are closely related to the geometric data, and in particular,
to the Jacobian of the genus one curve that arises.
We then study spaces of the form $V \otimes J$, where $V$ is a $3$-dimensional vector space
and $J$ is a certain type of cubic Jordan algebra (to be specified), up to
the natural action of a group which we denote by $\mathrm{GL}(V)\times \mathrm{SL}(J)$. In each case, the space
$J$, equipped with the action of the group $(\mathbb G_m\times)\,\mathrm{SL}(J)$, is a prehomogeneous vector space, with
a relative invariant cubic norm form and an
adjoint map. We prove a uniform theorem about the orbits of these types of representations, and
then specialize to specific $J$ to recover a number of the introductory
cases as well as other interesting moduli problems.
Section \ref{sec:hermHC} develops analogous ideas to study orbits that parametrize genus one
curves with degree $2$ line bundles and additional data. We again begin by discussing the
most fundamental representations,
including the space of bidegree $(2,2)$ forms on $\mathbb{P}^1 \times \mathbb{P}^1$
and the aforementioned space of $2 \times 2 \times 2 \times 2$ hypercubical matrices.
We show that the invariants of
each representation are again closely related to the corresponding geometric data.
Analogously to the case of degree~$3$ line bundles, we then study a more general situation.
In particular, we consider the tensor product of a two-dimensional vector space $V$
and a space $\mathscr{C}(J)$ of ``Hermitian cubes'' with respect
to a cubic Jordan algebra $J$, under the action of
a group which we denote by $\mathrm{SL}_2(V)\times \mathrm{SL}_2(J)$; the space $\mathscr{C}(J)$ has a quartic
norm form and a natural adjoint map. Again, we study representations of this kind uniformly, and
then specializing recovers most of the earlier cases as well as several new moduli problems.
In Section \ref{sec:ExcLieAlgs}, we describe how all of the
representations we study are related to exceptional Lie algebras.
In particular, these representations all arise from Vinberg's theory of $\theta$-groups \cite{vinberg};
his constructions give a wide class of coregular spaces.
In his recent Harvard Ph.D.\ thesis, J.\ Thorne~\cite{jthorne-thesis} studies some canonical constructions in invariant
theory arising from Vinberg theory, and it is thus an interesting question how his more representation-theoretic constructions are related to our geometric ones. Finally, we also describe how our spaces are closely related to the representations found in the Deligne-Gross Magic Triangle \cite{delignegross}.
The ``certain arithmetic subgroup'' of the Jacobian of the genus one curve arising in many of our moduli problems is called the {\it period-index subgroup}. Over an algebraically closed field, it coincides with the full group of points of the Jacobian. When working over number fields, it is a finite-index subgroup of the Jacobian, consisting of points that are ``unobstructed'' in relation to the genus one curve. This is described in more detail in Appendix~\ref{appendix:torsors}, which may be of use to those interested in the arithmetic applications. The appendix may also be safely skipped by those readers more interested in the geometric constructions and bijections over an algebraically closed field.
\vspace{.1in}
\noindent {\em Acknowledgments.} We would like to thank Bhargav Bhatt, John Cremona, Benedict Gross, Joe Harris, Abhinav Kumar, and Catherine O'Neil for useful conversations.
The main theorems in \S \ref{sec:g1fromRC} and \S \ref{sec:333} are joint work with Catherine O'Neil.
\section{Main results I: Genus one curves and \texorpdfstring{$2\times 2\times 2\times 2$}{2222} hypercubes} \label{sec:HCpreview}
In this section, we discuss the space of $2\times 2\times 2\times 2$ hypercubical matrices over $K$,
and we describe the various parametrizations of genus one curves with extra data that may be obtained
from this perspective. No proofs or details are given in this section.
Here, we simply describe in an elementary way the constructions
of the genus one curves and extra data from the
orbits of our representations, and state the main theorems related to these cases.
Further details, more functorial descriptions of the constructions, and proofs may be found in Section~\ref{sec:hermHC}.
\subsection{Preliminaries on \texorpdfstring{$2\times 2\times 2$}{222} cubes}\label{sec:cubes}
Before studying $2\times 2\times 2\times 2$ hypercubical matrices, we first review the case of
$2\times 2\times 2$ cubical matrices (see \cite{hcl1} for more details).
Let $K$ be a field with $\mathrm{char}(K)\neq 2$. Let $\mathcal{C}_2(K)$ denote the space $K^2\otimes K^2 \otimes K^2$.
Then each element of $\mathcal{C}_2(K)$ can naturally be represented as a cubical matrix $A = (a_{ijk})$ with entries in $K$, where $i,j,k\in\{1,2\}$:
\vspace{.1in}
\begin{equation}\label{eq:firstcube}
\raisebox{-2\baselineskip}{
\cube {a_{111}} {a_{112}} {a_{121}} {a_{122}} {a_{211}} {a_{212}} {a_{221}} {a_{222}}
}
\qquad .
\end{equation}
If we denote by $\{e_1,e_2\}$ the standard basis of $K^2$, then the element of $\mathcal{C}_2(K)$ described by \eqref{eq:firstcube} is
\[\sum_{i,j,k} a_{ijk}\, e_i\otimes e_j\otimes e_k.\]
As the cubical matrix representation is both more intuitive and more convenient, we shall
identify $\mathcal{C}_2(K)$ with the space of $2\times 2\times 2$ matrices with entries in $K$, or the space of {\it cubes over $K$}.
Now a cube $A$ over $K$ may be naturally sliced into two $2\times 2$ matrices over $K$ in three different ways.
Namely, the cube $A=(a_{ijk})$ given by \eqref{eq:firstcube}
may be partitioned into the two $2\times 2$ matrices $M_\ell$ and $N_{\ell}$, for $\ell=1, 2, 3$, as follows:
\begin{itemize}
\item[{\rm 1)}]
$M_1=(a_{1jk})$ is the front face and $N_1=(a_{2jk})$ is the back face of $A$;
\item[{\rm 2)}]
$M_2=(a_{i1k})$ is the top face and $N_2=(a_{i2k})$ is the bottom face of $A$;
\item[{\rm 3)}]
$M_3=(a_{ij1})$ is the left face and $N_3=(a_{ij2})$ is the right face of $A$.
\end{itemize}
We may define a natural action of $\mathrm{SL}_2(K)^3$ on $\mathcal{C}_2(K)$ so that, for any $\ell \in\{1,2,3\}$, the
element ${\left(\begin{smallmatrix}r&s\\t&u\end{smallmatrix}\right)}$
in the $\ell$th factor of $\mathrm{SL}_2(K)^3$ acts on the cube $A$ by replacing
$(M_\ell,N_\ell)$ by $(rM_\ell+sN_\ell,tM_\ell+uN_\ell)$. The actions of these three
factors of $\mathrm{SL}_2(K)$ in $\mathrm{SL}_2(K)^3$ commute with each other, analogous to the fact that row and column operations on a rectangular matrix commute. Hence we obtain a natural and well-defined action of
$\mathrm{SL}_2(K)^3$ on $\mathcal{C}_2(K)$.
This action turns out to have a single polynomial invariant%
\footnote{We use throughout the standard abuse of terminology ``has a single
polynomial invariant'' (or ``has $k$ polynomial invariants'') to
mean that the corresponding polynomial invariant ring is generated
freely by one element (respectively, $k$ elements).}%
, which we call the {\em discriminant}.
Given a $2\times 2\times 2$ cube $A$ over $K$, for each
$\ell \in\{1,2,3\}$, we obtain a binary quadratic
form
\begin{equation}\label{bqfdet}
Q_\ell(x,y)=\det(M_\ell x + N_\ell y).
\end{equation}
The discriminants of these three
binary quadratic forms all coincide (see \cite[\S 2]{hcl1}), and this
gives the desired invariant, called the
{discriminant} $\disc(A)$ of the cube~$A$. (These triples of binary
quadratic forms with the same discriminant arising from cubes were
used to give a simple description of Gauss composition in
\cite{hcl1}.) This fundamental invariant of degree four on the space \linebreak
$K^2\otimes K^2\otimes K^2$ of cubical matrices over $K$ will play a
key role in understanding the next space $K^2\otimes K^2\otimes
K^2\otimes K^2$ of hypercubical matrices over $K$.
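As a quick illustration (an example of our own, easily verified by hand), consider the cube $A$ with $a_{111}=a_{222}=1$ and all other entries zero. Then for each $\ell\in\{1,2,3\}$ we have $M_\ell={\left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)}$ and $N_\ell={\left(\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right)}$, so that
\begin{equation*}
Q_\ell(x,y)\,=\,\det\!\left(\begin{array}{cc} x\, & 0\\ 0\, & y\end{array}\right)\,=\,xy \qquad (\ell=1,2,3),
\end{equation*}
and, with the convention $\disc(px^2+qxy+ry^2)=q^2-4pr$, all three binary quadratic forms indeed have the same discriminant, namely $\disc(A)=1$.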
\subsection{On \texorpdfstring{$2\times 2\times 2\times 2$}{2222} hypercubes}\label{sec:HCslicing}
Assuming now that $K$ has characteristic not $2$ or $3$, let $\H_2(K)$ denote the space $K^2\otimes K^2 \otimes K^2\otimes K^2$. Then we may identify $\H_2(K)$ with the space of $2\times 2 \times 2\times 2$ hypercubical matrices $H=(h_{ijk\ell})$ over $K$, which we will call the space of {\it hypercubes over $K$}. Such hypercubes are somewhat harder to draw on paper; breaking symmetry, we write our hypercube $H=(h_{ijk\ell})$ thus:
\begin{equation}\label{eq:hyperdraw}
\raisebox{-2\baselineskip}{
\cubea {h_{1111}} {h_{1112}} {h_{1121}} {h_{1122}} {h_{1211}} {h_{1212}} {h_{1221}} {h_{1222}}
\qquad \qquad \qquad \cubea
{h_{2111}} {h_{2112}} {h_{2121}} {h_{2122}} {h_{2211}} {h_{2212}} {h_{2221}} {h_{2222}}
} \qquad .
\end{equation}
Now just as a cube $A$ over $K$ could be partitioned into two $2\times 2$ matrices in three different ways, a hypercube $H$ over $K$ may be partitioned into two $2\times 2\times 2$ matrices in four different ways.
More precisely, the hypercube $H=(h_{ijk\ell})$ can be partitioned into two cubes $A_m$ and $B_m$, for $m\in\{1,2,3,4\}$, as follows:
\begin{itemize}
\item[1)] $A_1=(h_{1jk\ell})$ and $B_1=(h_{2jk\ell})$;
\item[2)] $A_2=(h_{i1k\ell})$ and $B_2=(h_{i2k\ell})$;
\item[3)] $A_3=(h_{ij1\ell})$ and $B_3=(h_{ij2\ell})$;
\item[4)] $A_4=(h_{ijk1})$ and $B_4=(h_{ijk2})$,
\end{itemize}
where the first slicing 1) is depicted in \eqref{eq:hyperdraw}.
We define a natural action of $\mathrm{SL}_2(K)^4$ on the space of hypercubes so that, for any $i\in\{1,2,3,4\}$, an element ${\left(\begin{smallmatrix}r&s\\t&u\end{smallmatrix}\right)}$ in the $i$th factor of $\mathrm{SL}_2(K)$ acts on the hypercube $H$ by replacing
$(A_i,B_i)$ by $(rA_i+sB_i,tA_i+uB_i)$. The actions of these four
factors of $\mathrm{SL}_2(K)$ in $\mathrm{SL}_2(K)^4$ again commute with each other, so we obtain a well-defined action of
$\mathrm{SL}_2(K)^4$ on $\H_2(K)$.
Now recall that a cube over $K$ naturally yields three binary quadratic forms over $K$, through its slicings into pairs of $2\times 2$ matrices over $K$. Namely, for each slicing of a cube $A$ into a pair~$(M_i,N_i)$ of $2\times 2$ matrices, we may construct the form $Q_i(x,y)=\det(M_ix+N_iy)$. As is well-known, the determinant function is the
unique polynomial invariant for the action of $\mathrm{SL}_2(K)^2$ on $2\times 2$ matrices over $K$, and since it is a degree two invariant, we obtain binary quadratic forms.
Analogously, a hypercube over $K$ naturally yields four binary {\em quartic} forms via its slicings into pairs of cubes over $K$. Indeed, we have seen that the action of $\mathrm{SL}_2(K)^3$ on $2\times 2\times 2$ cubes over $K$ has a single polynomial invariant, of degree {four}, given by its discriminant.
In analogy with the case of cubes, given
a hypercube $H\in\H_2(K)$, for each $i\in\{1,2,3,4\}$, we may construct a binary
quartic form
\begin{equation}\label{bqfdef}
f_i(x,y)=\disc(A_i x+B_i y),
\end{equation}
where the $(A_i,B_i)$ denote the four slicings of the hypercube $H$ into pairs of cubes over $K$.
Note that the form $f_1$ is invariant under the action of the
subgroup $\{\mathrm{id}\}\times \mathrm{SL}_2(K)^3\subset \mathrm{SL}_2(K)^4$ on
$H\in\H_2(K)$, because the action of $\mathrm{SL}_2(K)^3$
on the cube $A_1x+B_1y$ of linear forms in $x$ and $y$ fixes
$\disc(A_1x+B_1y)$. The remaining factor of $\mathrm{SL}_2(K)$
in $\mathrm{SL}_2(K)^4$ then acts in the usual way on the binary quartic form
$f_1$, and it is well known that this action has two independent
polynomial invariants, which are traditionally called $I(f_1)$ and
$J(f_1)$ (see \S \ref{sec:binaryquartics} for more details on
binary quartic forms). These invariants have degrees $2$ and $3$,
respectively, in the coefficients of $f_1$. Since they coincide with
the corresponding invariants $I$ and $J$ of $f_2$, $f_3$, and $f_4$
(as an easy calculation shows), this yields well-defined
$\mathrm{SL}_2(K)^4$-invariants $I(H)$ and $J(H)$ for all elements $H\in
\H_2(K)$. The invariants $I(H)$ and $J(H)$ thus have degrees 8 and
12, respectively, in the entries of $H$.
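For concreteness, we record these invariants explicitly in one standard normalization (cf.\ \cite{BSD}; conventions in the literature differ by scaling factors, and \S\ref{sec:binaryquartics} fixes the ones used in this paper): for a binary quartic form $f(x,y)=ax^4+bx^3y+cx^2y^2+dxy^3+ey^4$, one has
\begin{align*}
I(f)\,&=\,12ae-3bd+c^2, \qquad \textrm{and}\\
J(f)\,&=\,72ace+9bcd-27ad^2-27b^2e-2c^3.
\end{align*}
For example, for $f=x^4+y^4$ these formulas give $I(f)=12$ and $J(f)=0$.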
The {\em discriminant} $\disc(f)$ of a binary quartic form $f$ is defined by
\begin{equation}
\disc(f):=4I(f)^3-J(f)^2,\end{equation}
which is nonzero precisely when $f$ has four distinct roots in $\mathbb{P}^1(\overline{K})$; such a binary quartic form is called {\em nondegenerate}. For
a hypercube $H$, since the $I$ and $J$ invariants are the same for all the $f_i$, so are their discriminants.
We may define the {\em discriminant} of $H$ to be
\begin{equation}\disc(H) :=4I(H)^3-J(H)^2,\end{equation}
which is nonzero precisely when any of the binary quartic forms $f_i$ associated to $H$ has four distinct roots in $\mathbb{P}^1(\overline{K})$.
We say the hypercube $H$ is {\it nondegenerate} if its discriminant is nonzero.
We give a conceptual explanation as to why $I(f_i)=I(f_j)$ and $J(f_i)=J(f_j)$ (and thus $\disc(f_i)=\disc(f_j)$) for all $i$ and $j$ in the next subsection.
\subsection{Genus one curves from hypercubes}\label{sec:hypergenusone}
We now explain how a nondegenerate hypercube $H$ gives rise to a number of genus one curves $C_i$ ($1\leq i\leq 4$), $C_{ij}$ ($1\leq i<j\leq 4)$, and $C_{ijk}$ ($1\leq i<j<k\leq 4$). We also discuss how these genus one curves are related to each other, and the resulting description of the nondegenerate orbits of $\mathrm{SL}_2(K)^4$ on the space $K^2\otimes K^2\otimes K^2\otimes K^2$ of hypercubes over $K$.
\subsubsection{Genus one curves mapping to \texorpdfstring{$\mathbb{P}^1$}{P1}}\label{sec:binquargenusone}
Given a nondegenerate binary quartic form $f$ over $K$, we may attach to $f$ a genus one curve $C(f)$ over~$K$, namely the normalization of the projectivization of the affine curve
$y^2 = f(x,1)$.
It is known (see, e.g., \cite{BSD, ankim}) that the Jacobian of the curve $C(f)$ may be written as a Weierstrass elliptic curve with coefficients involving the invariants $I(f)$ and $J(f)$ of $f$, namely as
\begin{equation}\label{eq:BQjac}
E(f): y^2 = x^3 - 27 I(f)\, x - 27 J(f).
\end{equation}
We always take $E(f)$ as our model for the Jacobian of $C(f)$.
Now given a nondegenerate hypercube $H\in \H_2(K)$, we have seen that we naturally obtain four binary quartic forms $f_1$, $f_2$, $f_3$, $f_4$ over $K$ from $H$. Thus each hypercube $H\in\H_2(K)$ yields four corresponding genus one curves $C_1$, $C_2$, $C_3$, $C_4$ over $K$, where $C_i=C(f_i)$.
\subsubsection{Genus one curves in \texorpdfstring{$\mathbb{P}^1\times\mathbb{P}^1$}{P1P1}}\label{p1xp1}
These genus one curves obtained from a nondegenerate hypercube
$H\in\H_2(K)$ may be seen more explicitly in $\mathbb{P}^1\times\mathbb{P}^1$.
Let us first identify $\H_2(K)$ with the space of
quadrilinear forms on $W_1\times W_2\times W_3\times W_4$, where each
$W_i$ ($i\in\{1,2,3,4\}$) is a 2-dimensional $K$-vector space. (In
this identification, when we write $\H_2(K)=K^2\otimes K^2\otimes
K^2\otimes K^2$, then the $i$th factor of $K^2$ here is the
$K$-vector space dual to $W_i$.) Then for any $H\in\H_2(K)$, viewed
as such a quadrilinear form, consider the set
\begin{align*}
C_{12}(K) \,:=\, \bigl\{(w_1, w_2)\in \mathbb{P}(W_1) \times \mathbb{P}(W_2)\,:\, \det(H(w_1,w_2,\,\cdot\,, \,\cdot\,)) = 0\bigr\}\, \subset \mathbb{P}^1\times \mathbb{P}^1,
\end{align*}
where we view $H(w_1,w_2,\,\cdot\,, \,\cdot\,)$ naturally as a
bilinear form on $W_3\times W_4$, whose determinant's vanishing or
nonvanishing is thus well-defined.
By definition, $C_{12}(K)$ consists of the set of $K$-points of a
bidegree $(2,2)$ curve $C_{12}$ in $\mathbb{P}^1\times \mathbb{P}^1$, which is a
smooth genus one curve precisely when $H$ is
nondegenerate. The projections of $C_{12}$ to $\mathbb{P}(W_1)$ and to $\mathbb{P}(W_2)$ then yield the double covers
of $\mathbb{P}^1$ corresponding to $C_1$ and $C_2$, respectively. Indeed,
the points of ramification of the projection $C_{12}\to \mathbb{P}(W_1)$ are
the points $(w_1, w_2)\in C_{12}\subset \mathbb{P}(W_1)\times\mathbb{P}(W_2)$ for
which $\det(H(w_1,w_2,\,\cdot\,,\,\cdot\,))$ has vanishing
discriminant as a quadratic form in $w_2$; this discriminant is
precisely the binary quartic form $f_1$ on $W_1$!
It follows that $C_{12}$ is isomorphic to both $C_1$
and $C_2$, and hence all these genus one curves $C_i$ are isomorphic
to each other: for $1 \leq i < j \leq 4$, we have natural isomorphisms
$$ C_i\cong C_{ij} \cong C_j.$$
It also follows then that all four binary quartic forms $f_i$ must
have the same values for the invariants $I$ and $J$, as claimed at the
end of \S\ref{sec:HCslicing}. (Indeed, all $I(f_i)$ and all $J(f_i)$
must be the same for all forms $f_i$---up to scaling by $c^2$ and
$c^3$, respectively, for some constant $c$---in order for the
Jacobians in \eqref{eq:BQjac} to be isomorphic. But then symmetry
considerations show that $c$ must be~1.)
\subsubsection{Genus one curves in \texorpdfstring{$\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$}{P1P1P1}}
The curve $C_{12}$ can in fact be mapped into $\mathbb{P}(W_1)\times\mathbb{P}(W_2)\times\mathbb{P}(W_3)\cong \mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$, as follows. For a point $(w_1,w_2) \in C_{12}$, since $H(w_1,w_2,\,\cdot\,,\,\cdot\,)$ is singular as a bilinear form on $W_3\times W_4$, the kernel of $H(w_1,w_2,\,\cdot\,,\,\cdot\,)$ in $W_3$ is nonzero; if $H$ is nondegenerate, it
can be shown that the kernel in $W_3$ is one-dimensional. We thus obtain a well-defined element $w_3\in\mathbb{P}(W_3)$ such that $H(w_1,w_2,w_3,\,\cdot\,)\equiv 0$.
Therefore,
\[C_{123}(K) \,:=\, \bigl\{(w_1,w_2,w_3)\in \mathbb{P}(W_1)\times \mathbb{P}(W_2)\times \mathbb{P}(W_3)\,:\, H(w_1,w_2,w_3, \,\cdot\,)\equiv 0\bigr\}\,\subset\,\mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^1\]
gives the set of $K$-points of a genus one curve $C_{123}$ in $\mathbb{P}(W_1)\times\mathbb{P}(W_2)\times\mathbb{P}(W_3)\cong \mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^1$ defined over $K$; moreover, the projection of $C_{123}$ onto $\mathbb{P}(W_1)\times\mathbb{P}(W_2)$ gives an isomorphism onto $C_{12}$. In particular, $C_{123}$ provides us with explicit isomorphisms among the three curves $C_{12}$, $C_{13}$, and $C_{23}$, through projection and un-projection, which all commute with each other.
It is natural to package this information together by keeping track simply of the single curve $C_{123}$ in $\mathbb{P}(W_1)\times\mathbb{P}(W_2)\times\mathbb{P}(W_3)$.
\subsubsection{The fundamental tetrahedron of isomorphisms}
We may thus construct curves $C_{123}$, $C_{124}$, $C_{134}$, and $C_{234}$ in $\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ from a nondegenerate hypercube $H\in\H_2(K)$. These four genus one curves are all isomorphic to each other, as we have already seen. In fact, we may construct explicit and natural isomorphisms between them, as follows. Given a point $(w_1,w_2,w_3)$ on $C_{123}\subset \mathbb{P}(W_1)\times\mathbb{P}(W_2)\times\mathbb{P}(W_3)$, the bilinear form $H(\,\cdot\,,w_2,w_3,\,\cdot\,)$ on $W_1\times W_4$ is singular and of rank 1, so there exists a unique $w_4\in\mathbb{P}(W_4)$ such that $H(\,\cdot\,,w_2,w_3,w_4)\equiv 0$. This implies that $(w_2,w_3,w_4) \in C_{234}(K)$, giving the desired map $$\tau_{123}^{234}: C_{123}\to C_{234},$$ and similarly we obtain the maps $\tau_{123}^{134}$, $\tau_{124}^{123}$, etc. Note that each of these maps is invertible, since clearly $\tau_{234}^{123}=(\tau_{123}^{234})^{-1}$, etc.
We thus obtain a tetrahedron of maps:
\begin{equation}\label{eq:tet}
\xymatrix{
& C_{123} \ar@{<->}[dl] \ar@{<->}[dr] \ar@{<->}[dd] & \\
C_{124} \ar@{<-->}[rr] \ar@{<->}[dr] & & C_{134} \\
& C_{234} \ar@{<->}[ur] &
}
\end{equation}
However, these isomorphisms do not all commute with each other!
For example, starting at $C_{123}$ and tracing around the triangle of isomorphisms
$\tau_{134}^{123}\circ\tau_{124}^{134}\circ\tau_{123}^{124}$ yields a hyperelliptic involution $\iota_1$ on $C_{123}$. The quotient of $C_{123}$ by this involution $\iota_1$ yields the double cover of $\mathbb{P}^1$ given by $C_1:y^2=f_1(x,1)$. Similarly,
the other two triangles of isomorphisms starting at $C_{123}$ yield the involutions $\iota_2$ and $\iota_3$ on $C_{123}$ corresponding to the double covers $C_2$ and $C_3$ of $\mathbb{P}^1$ whose branch points are the roots of $f_2$ and $f_3$, respectively.
If one instead starts at $C_{123}$ and goes around a quadrilateral of isomorphisms, then (viewing the traversal of the quadrilateral as a traversal of two triangles) we obtain the automorphism $\iota_i\circ\iota_j$ of $C_{123}$, which is a translation of $C_{123}$ by a point $P_{ij}$ on the Jacobian of $C_{123}$. We thus obtain three~points $P_{12}$, $P_{23}$, and $P_{31}$ on the Jacobian $E := E(H)$ of $C:=C_{123}$. Since $(\iota_1\circ\iota_2)\circ(\iota_2\circ\iota_3)\circ(\iota_{3}\circ\iota_1)={\rm id}$, we have the relation
\begin{equation}\label{Prel}
P_{12}+P_{23}+P_{31}=0.
\end{equation}
Moreover, these points $P_{ij}$ are nonzero, and they lie in a certain subgroup of $E(K)$ related to the curve $C$,
called the {\em degree $2$ period-index subgroup} and denoted by $\Jac_{C}^2(K)$ (see Appendix~\ref{appendix:torsors} for more details on the period-index subgroup). The difference is that the points of $\Jac_{C}^2(K)$ correspond to the divisor classes of $K$-rational divisors of degree $0$ on $C$, whereas the points of $E(K)$ correspond to the $K$-rational divisor classes of degree $0$ (that is, where the divisor class is $K$-rational, but not necessarily any of the divisors in it). Here, these points $P_{ij}$ arise as differences of actual rational divisors and thus lie in the period-index subgroup. Indeed, if $D_1$, $D_2$, and $D_3$ denote the degree two divisors on $C_{123}$ corresponding to the three projections to $\mathbb{P}(W_1)$, $\mathbb{P}(W_2)$, and $\mathbb{P}(W_3)$, respectively, then for a point $x$ on $C_{123}$ we have $\iota_i(x)=D_i-x$.
Thus $\iota_i\circ \iota_j(x)=x+(D_i-D_j),$ so that $P_{ij}=D_i-D_j$. This also implies the relation (\ref{Prel}).
In summary, we have seen that there is a canonical
elliptic curve $E(H)$ attached to any nondegenerate hypercube $H\in\H_2(K)$, namely,
\begin{equation}\label{jac2}
E(H): y^2 = x^3 - 27I(H)\,x - 27 J(H)
\end{equation}
with
\begin{equation*}
I(H):=I(f_i) \qquad\textrm{and}\qquad J(H):=J(f_i)
\end{equation*}
for $1 \leq i \leq 4$, where $f_1$, $f_2$, $f_3$, $f_4$ are the binary quartic forms naturally arising from $H$.
Furthermore, $E:=E(H)$ is canonically the Jacobian of each of the genus one curves $C_i$ ($1\leq i\leq 4$) as well
as the genus one curves $C_{ij}$ ($1\leq i<j\leq 4$) and $C_{ijk}$ ($1\leq i<j<k\leq 4$) arising from $H$. Finally, there is a natural tetrahedron of isomorphisms \eqref{eq:tet} among the $C_{ijk}$ which does not commute. We thus obtain, in addition to a degree 2 divisor $D_1$ on the curve $C_{123}$, three nonzero rational~points $P_{12}$, $P_{23}$, and $P_{31}$ in the period-index subgroup $\Jac_{C_{123}}^2(K)$ for the curve~$C_{123}$ that sum to zero.
\subsection{Orbit classification of \texorpdfstring{$2\times 2\times 2\times 2$}{2222} hypercubes} \label{sec:HCorbitpreview}
We will show in \S \ref{sec:hypercube} that the data of a genus one curve $C=C_{123}$, the equivalence class of a degree two rational divisor $D=D_1$ on $C$, and three nonzero points $P=P_{12}$, $P'=P_{23}$, and $P''=P_{31}$ (with $P+P'+P''=0$) in the period-index subgroup $\Jac_{C}^2(K)$ of the Jacobian of $C$ is in fact sufficient to recover the orbit of a hypercube $H$. We have:
\begin{thm}\label{hyperpar}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a canonical bijection between
nondegenerate $\mathrm{GL}_2(K)^4$-orbits on the space $K^2 \otimes K^2 \otimes K^2 \otimes K^2$ of hypercubes over $K$ and isomorphism classes of triples $(C,L,(P,P',P''))$, where $C$ is a smooth irreducible genus one curve over $K$, $L$ is a degree $2$ line bundle on~$C$, and $P$, $P'$, $P''$ are nonzero $K$-points that sum to zero in the degree $2$ period-index subgroup $\Jac_C^2(K)$ of the group of $K$-points of the Jacobian of~$C$.
\end{thm}
It is known (see \cite{vinberg,littelmann}) that the ring of polynomial invariants for the action of $\mathrm{SL}_2(K)^4$ on $K^2 \otimes K^2 \otimes K^2 \otimes K^2$ is freely generated by four polynomials $a_2$, $a_4$, $a_4'$, and $a_6$, having degrees $2$, $4$, $4$, and $6$, respectively, in the entries of the hypercube. In terms of the geometric data in Theorem~\ref{hyperpar}, we may write the Jacobian of $C$ as a Weierstrass elliptic curve $y^2=x^3+a_{8}x+a_{12}$,
on which the points $P=(a_4,a_6)$, $P'=(a_4',a_6')$, $P''=(a_4'',a_6'')$ lie, such that $a_2$ can be interpreted
as the slope $\frac{a_6'-a_6}{a_4'-a_4}$ of the line connecting $P$ and $P'$ (and $P''$).
From these four invariants, the invariant $a_6'$ (i.e., the $y$-coordinate of $P'$) may be determined:
$$a_6' = a_6 + a_2 (a_4'-a_4).$$
The coefficients $a_{8}$ and $a_{12}$ of the Weierstrass elliptic curve may also be determined, since there is a unique such elliptic curve passing through the two points $P$ and $P'$. Indeed, we find
\begin{align} \label{eq:a8a12}
a_8 &= a_2(a_6+a_6')-(a_4^2+a_4a_4'+a_4'^2), \qquad \textrm{and} \\
a_{12}&= a_6^2 -a_2a_4(a_6+a_6')+a_4a_4'(a_4+a_4'). \nonumber
\end{align}
Finally, the coordinates of $P''=(a_4'',a_6'')$ may be recovered by finding the third point of intersection of the line $y-a_6=a_2(x-a_4)$ with the elliptic curve $y^2=x^3+a_{8}x+a_{12}$; this yields
\begin{align*}
a_4''&= a_2^2 - a_4-a_4', \qquad \textrm{and} \\
a_6''&= a_2^3-3(a_2a_4-a_6)-a_6-a_6'.
\end{align*}
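As a sanity check of these formulas, consider an example of our own: let the Jacobian be $y^2=x^3-x$, with $P=(0,0)$ and $P'=(1,0)$, so that $a_4=a_6=a_6'=0$, $a_4'=1$, and the slope is $a_2=0$. The formulas above then give
\begin{equation*}
a_8\,=\,-1, \qquad a_{12}\,=\,0, \qquad a_4''\,=\,-1, \qquad a_6''\,=\,0,
\end{equation*}
so that $P''=(-1,0)$: indeed, the three $2$-torsion points $(0,0)$, $(1,0)$, and $(-1,0)$ of $y^2=x^3-x$ lie on the line $y=0$ and sum to zero.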
In conclusion, $a_2$, $a_4$, $a_4'$, $a_4''$, $a_6$, $a_6'$, $a_6''$, $a_{8}=-27I$, and $a_{12}=-27J$ are all fundamental and important polynomial
invariants for the action of $\mathrm{SL}_2(K)^4$ on $K^2\otimes K^2 \otimes K^2\otimes K^2$; they all have key
geometric interpretations and can be expressed as simple polynomials in the four basic invariants
$a_2$, $a_4$, $a_4'$, and~$a_6$.
\subsection{Symmetrization}\label{sec:symHCpreview}
Just as one may identify the binary quadratic form $ax^2+2bxy+cy^2$ with the symmetric $2\times 2$~matrix
\[\left[\begin{array}{cc}
a\, &b \\
b\, & c \end{array}\right], \]
and the binary cubic form $ax^3+3bx^2y+3cxy^2+dy^3$
with the triply symmetric $2\times 2\times 2$ matrix
\begin{equation}\label{sym3cube}
\raisebox{-2\baselineskip}{
\cube a b b c b c c d
}
\end{equation}
\noindent
(see \cite{hcl1}), one may associate the binary quartic form
\begin{equation}\label{symquartic}
a x^4 + 4 b x^3 y + 6 c x^2 y^2 + 4 d x y^3 + e y^4
\end{equation}
with the quadruply-symmetric $2\times2\times2\times2$ matrix
\begin{equation}\label{sym4hyper}
\raisebox{-2\baselineskip}{
\cube a b b c b c c d \qquad \qquad \qquad \cube b c c d c d d e
} \qquad .
\end{equation}
Using $\Sym_4 K^2$ to denote the space of binary quartic forms of this type, the above association of
the binary quartic form \eqref{symquartic} with the hypercube \eqref{sym4hyper} corresponds
to the natural inclusion
\[
\Sym_4 K^2\hookrightarrow K^2\otimes K^2\otimes K^2\otimes K^2
\]
of the space of quadruply-symmetric hypercubes into the space of hypercubes.
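Explicitly, as one checks directly by comparing \eqref{symquartic} with \eqref{sym4hyper}, under this inclusion the entry $h_{ijk\ell}$ of the image hypercube depends only on the number $m$ of its indices that are equal to $2$:
\begin{equation*}
h_{ijk\ell}\,=\,q_m, \qquad \textrm{where } m=\#\{\textrm{indices equal to } 2\} \textrm{ and } (q_0,q_1,q_2,q_3,q_4)=(a,b,c,d,e);
\end{equation*}
the binomial coefficients $1,4,6,4,1$ appearing in \eqref{symquartic} simply count the $\binom{4}{m}$ equal entries of each type.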
Such hypercubes lead to geometric data $(C,L,(P,P',P''))$ as in Theorem~\ref{hyperpar}, but (due to the symmetry) we also have $P=P'=P''$. Since $P+P'+P''=0$, we see that $P$ is a 3-torsion point on the Jacobian
of $C$. Conversely, we will show in \S \ref{sec:4symHC} that this is the only constraint on $P$. Thus we obtain the
following theorem classifying the orbits of $\mathrm{GL}_2(K)$ on $\Sym_4K^2$:
\begin{thm}\label{sympar}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a canonical bijection between nondegenerate $\mathrm{GL}_2(K)$-orbits on the space $\Sym_4K^2$ of binary quartic forms over $K$ and isomorphism classes of triples $(C,L,P)$, where $C$ is a smooth genus one curve over $K$, \;$L$ is a degree $2$ line bundle on~$C$, and $P$ is a nonzero $3$-torsion point on the Jacobian of $C$ defined over $K$.
\end{thm}
We have already noted in \S\ref{sec:binquargenusone} (see \S\ref{sec:binaryquartics} for further details) that certain $\mathrm{GL}_2(K)$-orbits on $\Sym^4K^2$ correspond to pairs $(X,L)$, where $X$ is a genus one curve and $L$ is a degree 2 line bundle on $X$. When $\mathrm{char}(K) \nmid 6$, these two spaces $\Sym_4K^2$ and $\Sym^4K^2$ are naturally identified, so we obtain two ``dual'' moduli interpretations of the space of binary quartic forms in terms of genus one curves.
The two genus one curves coming from a binary quartic are {\it not} the same, however; they are related by a Hessian-type construction (see \S \ref{sec:4symHC}).
\subsection{Triple symmetrization}\label{sec:triplesym}
The orbit description for binary quartic forms in \S\ref{sec:symHCpreview} was obtained
by imposing a symmetry condition on the orbit description for
hypercubes. Rather than imposing a fourfold
symmetry, we may instead impose only a threefold symmetry. This leads
to hypercubes of the form
\begin{equation}\label{sym3hyper}
\raisebox{-2\baselineskip}{
\cube a b b c b c c d
\qquad \qquad \qquad
\cube e f f g f g g h
} \qquad .
\end{equation}
That is, these hypercubes can be sliced (in a certain fixed
direction) into two triply symmetric cubes, and therefore
can naturally be viewed as a pair of binary cubic forms
\begin{equation}\label{pairofbcfs}
(ax^3+3bx^2y+3cxy^2+dy^3,ex^3+3fx^2y+3gxy^2+hy^3).
\end{equation}
The above association of the pair (\ref{pairofbcfs}) of binary cubic forms
with the hypercube
\eqref{sym3hyper} corresponds to the natural inclusion map
\[\jmath:K^2\otimes\Sym_3 K^2\hookrightarrow K^2\otimes K^2\otimes K^2\otimes K^2.\]
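The passage from a triply symmetric hypercube to the pair of binary cubic forms \eqref{pairofbcfs} can be checked directly. The sketch below, with hypothetical integer coefficients, restricts the trilinear form of each symmetric slice to the diagonal and compares the result with \eqref{pairofbcfs}.

```python
from itertools import product

# A triply symmetric hypercube, sliced in the distinguished direction, is a
# pair of symmetric 2x2x2 cubes with entries (a,b,c,d) and (e,f,g,h); each
# entry depends only on the number of 1-indices.  Restricting each cube's
# trilinear form to the diagonal should recover the corresponding binary
# cubic a*x^3 + 3b*x^2*y + 3c*x*y^2 + d*y^3 (and likewise for e,f,g,h).
front = [2, -1, 4, 3]    # (a, b, c, d), arbitrary integers
back  = [1, 5, -2, 7]    # (e, f, g, h)

def cube_entry(slice_coeffs, j, k, l):
    return slice_coeffs[j + k + l]       # symmetric in j, k, l

def trilinear_diag(slice_coeffs, x, y):
    v = (x, y)
    return sum(cube_entry(slice_coeffs, j, k, l) * v[j] * v[k] * v[l]
               for j, k, l in product(range(2), repeat=3))

def cubic(cf, x, y):
    return cf[0]*x**3 + 3*cf[1]*x**2*y + 3*cf[2]*x*y**2 + cf[3]*y**3

samples = [(1, 0), (0, 1), (2, 3), (-1, 4)]
agree = all(trilinear_diag(s, x, y) == cubic(s, x, y)
            for s in (front, back) for (x, y) in samples)
```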
To such nondegenerate triply symmetric hypercubes, we may associate the usual geometric data $(C,L,(P,P',P''))$ as in Theorem~\ref{hyperpar}, and as in the fully symmetric case,
the symmetry implies that $P$ is a 3-torsion point.
We will show in \S \ref{sec:3symHC} that this is again the only constraint on $P$, and so we obtain the
following theorem classifying the orbits of $\mathrm{GL}_2(K)^2$ on $K^2 \otimes \Sym_3K^2$:
\begin{thm}\label{triplesympar}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a canonical bijection between
nondegenerate $\mathrm{GL}_2(K)^2$-orbits on the space $K^2\otimes\Sym_3K^2$ of pairs of binary cubic forms over $K$ and isomorphism classes of triples $(C,L,P)$, where $C$ is a smooth genus one curve over $K$, $L$ is a degree $2$ line bundle on~$C$, and $P$ is a nonzero $3$-torsion point on the Jacobian of $C$ defined over $K$.
\end{thm}
It is interesting that the data parametrized by the orbits on both $\Sym_4K^2$ and $K^2\otimes\Sym_3K^2$ are the same, and our orbit description in fact allows us to determine an explicit linear transformation in
$\mathrm{GL}_2(K)^2$ that takes any given nondegenerate element of $K^2 \otimes \Sym_3 K^2$ to an element of $\Sym_4 K^2$.
\subsection{Double symmetrization}
We may instead impose only a twofold symmetry, leading us to study the space
$K^2\otimes K^2\otimes \Sym_2 K^2$ of $2\times 2$ matrices of binary quadratic forms.
In terms of the geometric data $(C,L,(P,P',P''))$ of Theorem~\ref{hyperpar}, we see that $P'$ and $P''$ coincide, which then determines $P$ by the relation $P+P'+P''=0$. Thus only the information of the point $P'$ needs to be retained. Since $P\neq 0$, the point $P'$ cannot be 2-torsion, and so (writing now $P'$ as $P$) we obtain the following:
\begin{thm}\label{doublesympar}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a canonical bijection between
nondegenerate $\mathrm{GL}_2(K)^3$-orbits on the space $K^2\otimes K^2\otimes\Sym_2K^2$ of $2\times 2$ matrices of binary quadratic forms over $K$ and isomorphism classes of triples $(C,L,P)$, where $C$ is a smooth genus one curve over~$K$, \;$L$ is a degree $2$ line bundle on $C$, and $P$ is a non-$2$-torsion point of the period-index subgroup $\Jac_C^2(K)$ in $\Jac(C)(K)$.
\end{thm}
\subsection{Double-double symmetrization}\label{sec:doubledoublesym}
We may, in fact, ask for the hypercubes to be symmetric under any subgroup of the symmetric group $S_4$. One of the interesting cases arises from the hypercubes fixed under the action of $S_2\times S_2\subset S_4$, which we call double-double symmetrization:
\begin{thm}\label{doubledoublesympar}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a canonical bijection between
nondegenerate $\mathrm{GL}_2(K)^2$-orbits on the space $\Sym_2K^2\otimes \Sym_2K^2$ of symmetric $2\times 2$ matrices of binary~quadratic forms over $K$ and isomorphism classes of triples $(C,L,P)$, where $C$ is a smooth genus~one curve over $K$, \;$L$ is a degree $2$ line bundle on $C$, and $P$ is a non-$2$-torsion point of the period-index subgroup $\Jac_C^2(K)$ in $\Jac(C)(K)$.
\end{thm}
Over a field $K$ whose characteristic does not divide $6$, the space $\Sym^2 K^2 \otimes \Sym^2 K^2$ is isomorphic to the space of doubly-doubly symmetric hypercubes. Analogous to the case discussed in \S\ref{sec:triplesym}, there is a natural ``dual'' interpretation for the orbits of this space, also involving genus one curves~$X$ with degree $2$ line bundles and a point in $\Jac_X^2(K)$ (see \S \ref{sec:bideg22forms}); however, the two genus one curves~$C$ and $X$ obtained from an element of $\Sym^2 K^2 \otimes \Sym^2 K^2$ are not the same, but are again related by a certain Hessian-type construction (see \S\ref{sec:22symHC}).
\subsection{Double skew-symmetrization} \label{sec:2skewHCpreview}
Instead of imposing conditions of symmetry, one may impose
conditions of {\it skew-symmetry} on hypercubes, analogous to those described in \cite[\S 2.6]{hcl1}.
To define these skew-symmetrizations, let us view again our original hypercube
space $K^2\otimes K^2\otimes K^2\otimes K^2$ as the space of $K$-quadrilinear maps
$W_1\times W_2\times W_3\times W_4\rightarrow K$, where $W_1,W_2,W_3,W_4$ are
$K$-vector spaces of dimension 2 (namely, the $K$-duals of the four factors $K^2$
in $K^2\otimes K^2\otimes K^2\otimes K^2$).
Then given such a quadrilinear map
\[\phi:W_1\times W_2\times W_3\times W_4\rightarrow K\]
in $K^2\otimes K^2\otimes K^2\otimes K^2$, one may naturally construct another
$K$-quadrilinear map
\[\bar\phi:W_1 \times W_2\times (W_3\oplus W_4)\times (W_3\oplus W_4)\rightarrow K\]
that is skew-symmetric in the third and fourth variables; this map
$\bar\phi={\rm id}\otimes{\rm id} \otimes \wedge_{2,2}(\phi)$ is given by
\[ \bar\phi\left(r,s,(t,u),(v,w)\right)
= \phi(r,s,t,w) - \phi(r,s,v,u). \]
Thus we have a natural $K$-linear mapping
\begin{equation}\label{doublefusion}
{\rm id}\otimes\wedge_{2,2}:
K^2\otimes K^2\otimes K^2 \otimes K^2\rightarrow K^2\otimes K^2\otimes\wedge^2(K^2\oplus K^2)=
K^2\otimes K^2\otimes\wedge^2K^4
\end{equation}
taking $2\times 2\times 2\times 2$ hypercubes to $2\times 2$ matrices of alternating 2-forms
in four variables. Explicitly, in terms of fixed bases for $W_1,W_2,W_3,W_4$,
the hypercube \eqref{eq:hyperdraw} maps to the $2\times 2$ matrix of skew-symmetric matrices
as follows:
\vspace{-.1in}
\begin{equation}\label{explicit}
\left(\begin{array}{cc}
\left[\begin{array}{cccc}
{} & {} & \,h_{1111} & \,\,\,h_{1112} \\
{} & {} & \,h_{1121} & \,\,\,h_{1122} \\
-h_{1111} & -h_{1121} & & \\
-h_{1112} & -h_{1122} & & \end{array}\right]
&
\left[\begin{array}{cccc}
{} & {} & \,h_{1211} & \,\,\,h_{1212} \\
{} & {} & \,h_{1221} & \,\,\,h_{1222} \\
-h_{1211} & -h_{1221} & & \\
-h_{1212} & -h_{1222} & & \end{array}\right]
\\[.5in]
\left[\begin{array}{cccc}
{} & {} & \,h_{2111} & \,\,\,h_{2112} \\
{} & {} & \,h_{2121} & \,\,\,h_{2122} \\
-h_{2111} & -h_{2121} & & \\
-h_{2112} & -h_{2122} & & \end{array}\right]
&
\left[\begin{array}{cccc}
{} & {} & \,h_{2211} & \,\,\,h_{2212} \\
{} & {} & \,h_{2221} & \,\,\,h_{2222} \\
-h_{2211} & -h_{2221} & & \\
-h_{2212} & -h_{2222} & & \end{array}\right]
\end{array}\right).
\end{equation}
\vspace{-.05in}
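As a sanity check, the defining formula for $\bar\phi$ and the explicit matrix \eqref{explicit} can be compared numerically. The sketch below, with a hypothetical random integer hypercube, verifies that each $4\times 4$ block is skew-symmetric and that its upper-right $2\times 2$ corner consists of the entries $h_{ab11},\dots,h_{ab22}$, as displayed.

```python
from itertools import product
import random

random.seed(0)
# A hypothetical random 2x2x2x2 hypercube, viewed as a quadrilinear form.
h = [[[[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
      for _ in range(2)] for _ in range(2)]

def phi(r, s, t, u):
    """The quadrilinear form of the hypercube on length-2 vectors."""
    return sum(h[i][j][k][l] * r[i] * s[j] * t[k] * u[l]
               for i, j, k, l in product(range(2), repeat=4))

def phibar(r, s, tu, vw):
    """id (x) id (x) wedge_{2,2}: tu, vw lie in W3 + W4 (length 4)."""
    t, u = tu[:2], tu[2:]
    v, w = vw[:2], vw[2:]
    return phi(r, s, t, w) - phi(r, s, v, u)

E2 = [(1, 0), (0, 1)]
E4 = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]

# The 2x2 matrix of 4x4 matrices, as in \eqref{explicit}:
blocks = [[[[phibar(r, s, p, q) for q in E4] for p in E4]
           for s in E2] for r in E2]

# Each block should be skew-symmetric, with upper-right 2x2 corner h_{ab**}.
skew = all(blocks[a][b][p][q] == -blocks[a][b][q][p]
           for a, b in product(range(2), repeat=2)
           for p, q in product(range(4), repeat=2))
corner_ok = all(blocks[a][b][p][2 + q] == h[a][b][p][q]
                for a, b, p, q in product(range(2), repeat=4))
```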
Analogous to the case of double skew-symmetrization of $2\times 2\times 2$ cubes,
where two ideal classes become replaced with a single rank 2 module, in the geometric data
corresponding to a double-skew-symmetric hypercube, two degree 2 line bundles $L_3$ and $L_4$
that come from the curves $C_3$ and $C_4$ are replaced by a single vector bundle $M$ of rank 2 and degree 4.
We thus have the following result (where the nondegeneracy condition on the geometric data is a
mild open condition and will be discussed in \S\ref{sec:deg2moduli}).
\begin{thm}\label{doubleskewpar}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a canonical bijection between
nondegenerate $\mathrm{GL}_2(K)^2 \times \mathrm{GL}_4(K)$-orbits on the space $K^2\otimes K^2\otimes\wedge^2K^4$ of $2\times 2$ matrices of alternating quaternary $2$-forms over $K$ and isomorphism classes of nondegenerate quadruples $(C,L,P,M)$, where $C$ is a smooth genus one curve over $K$, \;$L$ is a degree~$2$ line bundle on $C$, $P$ is a nonzero point of the period-index subgroup $\Jac_C^2(K)$ in $\Jac(C)(K)$, and $M$ is a rank $2$ vector bundle of degree $4$ such that $\det M \cong P \otimes L^{\otimes 2}$.
\end{thm}
\subsection{Triple skew-symmetrization}
We may also impose a triple skew-symmetry on hypercubes. With the same notation as in the previous section, given a quadrilinear map
$$\phi: W_1 \times W_2 \times W_3 \times W_4 \to K,$$
we may construct the $K$-quadrilinear map
$$\bar{\phi}: W_1 \times (W_2 \oplus W_3 \oplus W_4) \times(W_2 \oplus W_3 \oplus W_4) \times (W_2 \oplus W_3 \oplus W_4) \to K,$$
which is alternating in the last three factors and explicitly given by
$$\bar{\phi}(r,(s_1,s_2,s_3),(t_1,t_2,t_3),(u_1,u_2,u_3)) = \sum_{\sigma \in S_3} (-1)^\sigma \phi(r,s_{\sigma(1)},t_{\sigma(2)},u_{\sigma(3)}).$$
Thus, we obtain a natural $K$-linear injection
\begin{equation}\label{triplefusion}
{\rm id}\otimes\wedge_{2,2,2}:
K^2\otimes K^2\otimes K^2 \otimes K^2 \rightarrow K^2\otimes \wedge^3(K^2 \oplus K^2\oplus K^2) = K^2\otimes \wedge^3 K^6
\end{equation}
from the space of hypercubes to the space of pairs of senary alternating $3$-forms. In analyzing the $\mathrm{GL}_2(K) \times \mathrm{GL}_6(K)$-orbits of this larger space, one finds that the degree 2 line bundles $L_2, L_3, L_4$ coming from $C_2,C_3,C_4$ are now replaced by a single rank $3$ vector bundle (which splits into the direct sum of these line bundles for elements in the image of ${\rm id} \otimes \wedge_{2,2,2}$). We thus obtain the following:
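The alternating property underlying $\wedge_{2,2,2}$ can also be illustrated numerically. The sketch below is an interpretive assumption: it reads the displayed sum as the usual antisymmetrization, in which the $W_2$-slot of $\phi$ receives the $W_2$-component of the argument indexed by the permutation, and similarly for $W_3$ and $W_4$; it then checks that $\bar\phi$ is alternating in its last three arguments.

```python
from itertools import product, permutations
import random

random.seed(1)
# A hypothetical random 2x2x2x2 hypercube phi.
h = [[[[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
      for _ in range(2)] for _ in range(2)]

def phi(r, s, t, u):
    return sum(h[i][j][k][l] * r[i] * s[j] * t[k] * u[l]
               for i, j, k, l in product(range(2), repeat=4))

def sgn(p):
    """Sign of a permutation given as a tuple of 0-indexed values."""
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return -1 if inv % 2 else 1

def phibar(r, X, Y, Z):
    """Antisymmetrization in the last three arguments.

    X, Y, Z lie in W2 + W3 + W4, given as triples of length-2 vectors;
    the W2-slot of phi receives the W2-component of the permuted argument,
    and similarly for W3 and W4 (our reading of the displayed formula)."""
    args = (X, Y, Z)
    return sum(sgn(p) * phi(r, args[p[0]][0], args[p[1]][1], args[p[2]][2])
               for p in permutations(range(3)))

random.seed(4)
rv = lambda: tuple(random.randint(-3, 3) for _ in range(2))
tv = lambda: tuple(rv() for _ in range(3))   # a vector in W2 + W3 + W4
r, X, Y, Z = rv(), tv(), tv(), tv()

alternating = (phibar(r, X, Y, Z) == -phibar(r, Y, X, Z) == phibar(r, Y, Z, X)
               and phibar(r, X, X, Z) == 0)
```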
\begin{thm} \label{thm:3skewHCpreview}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a canonical bijection between nondegenerate $\mathrm{GL}_2(K) \times \mathrm{GL}_6(K)$-orbits on the space $K^2 \otimes \wedge^3 K^6$ of pairs of senary alternating $3$-forms over $K$ and isomorphism classes of
nondegenerate triples $(C,L,M)$, where $C$ is a smooth genus one curve over $K$, $L$ is a degree $2$ line bundle on $C$, and $M$ is a rank $3$ vector bundle of degree $6$ such that $\det M \cong L^{\otimes 3}$.
\end{thm}
\subsection{A simultaneous generalization: triply Hermitian hypercubes} \label{sec:3hermHCpreview}
Many of the orbit parametrizations we have discussed in this section can be unified and generalized, via a process we call {\it Hermitianization}. Just as one can consider square matrices that are Hermitian over a quadratic algebra, we may consider cubical matrices that are Hermitian over a cubic algebra. The most convenient notion for this purpose is Springer's definition of a {\it cubic Jordan algebra}, which we discuss in more detail in \S \ref{sec:cubicjordan}. For now, it suffices to say that a cubic Jordan algebra is a generalization of the notion of a cubic field extension, where each element of the cubic Jordan algebra has two other formal conjugates, and there is a well-defined notion of a characteristic polynomial that these conjugate elements all satisfy. Some simple examples of cubic Jordan algebras over a field $K$ include $K^3$, $K^2$, $K$, cubic field extensions of $K$, $K\times \Mat_{2\times2}(K)$, and $\Mat_{3\times 3}(K)$.
A {\em triply Hermitian} {hypercube} for the cubic algebra $J$ over $K$ is one of the form
\begin{equation}\label{hermhyper}
\raisebox{-2\baselineskip}{
\cube a b {b'} {c''} {b''} {c'} c d \qquad \qquad \qquad \cube e f {f'} {g''} {f''} {g'} g h
} \qquad
\end{equation}
where $a,d,e,h\in K$ and $b,c,f,g\in J$, and $b',b''$ and $c',c''$ are formal conjugates of $b$ and $c$, respectively. We denote the space of all triply Hermitian hypercubes for such a cubic Jordan algebra $J/K$ by $\mathscr{C}_2(J)$.
For each such $J$, there is a natural group $\mathrm{SL}_2(J)$ acting by linear transformations but preserving a certain discriminant quartic form on the space $\mathscr{C}_2(J)$. Then we obtain a parametrization for the orbits of $\mathrm{GL}_2(K) \times \mathrm{SL}_2(J)$ on $\mathscr{C}_2(J)$ (a full version of the theorem may be found in \S\ref{sec:deg2moduli}):
\begin{thm}\label{triplehermpar}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, and a cubic Jordan algebra $J$ over $K$, there is a canonical bijection between nondegenerate $\mathrm{GL}_2(K) \times \mathrm{SL}_2(J)$-orbits on the space $\mathscr{C}_2(J)$ of triply Hermitian hypercubes for $J$ over $K$ and isomorphism classes of triples $(C,L,\mathcal{F})$, where $C$ is a smooth genus one curve over $K$, \;$L$ is a degree~$2$ line bundle on~$C$, and $\mathcal{F}$ is a flag of vector bundles on $C$ with additional structure coming from $J$, subject to a relation between $L$ and~$\mathcal{F}$.
\end{thm}
In each case, one of the key components of the construction of the corresponding geometric~data is a flag variety related to the representation of $\mathrm{SL}_2(J)$ on $\mathscr{C}_2(J)$. Each element of $\mathscr{C}_2(J)$ produces a map from a genus one curve $C$ to this flag variety, thereby giving the flag $\mathcal{F}$ on $C$ described in the theorem. See \S\ref{sec:deg2moduli} for full details.
The cases $J=K^3$, $K\times K$, $K$, $K\times \Mat_{2 \times 2}(K)$, and $\Mat_{3\times 3}(K)$ yield regular hypercubes, triply symmetric hypercubes, doubly symmetric hypercubes, doubly skew-symmetric hypercubes, and triply skew-symmetric hypercubes, respectively. From this perspective, one may also obtain moduli descriptions of some more exotic spaces, e.g., $\mathrm{GL}_2(K) \times \mathrm{Sp}_6(K)$-orbits on $K^2 \otimes \wedge^3_0(K^6)$, $\mathrm{GL}_2(K) \times \Spin_{12}(K)$-orbits on $K^2 \otimes K^{32}$, and $\mathrm{GL}_2(K) \times E_7(K)$-orbits on $K^2 \otimes K^{56}$.
\section{Main results II: Genus one curves and \texorpdfstring{$3\times 3\times 3$}{333} Rubik's cubes} \label{sec:RCpreview}
In this section, we discuss the space of $3\times 3\times 3$ cubical
matrices (``Rubik's cubes'') over $K$, and describe the various
parametrizations of genus one curves with extra data that can be obtained from this
perspective. Again, no proofs or details are given in this section.
Further details, more basis-free constructions, and proofs may be found in
Section~\ref{sec:hermRC}.
\subsection{On Rubik's cubes}\label{sec:RCslicing}
Let $K$ be a field of characteristic not dividing $6$. Analogous to
the space of $2 \times 2 \times 2$ cubes from \S \ref{sec:cubes},
we may consider the space of $3 \times 3 \times 3$ cubes, which we call
{\em Rubik's cubes}. Let $\mathcal{C}_3(K)$ denote the space $K^3
\otimes K^3 \otimes K^3$. Then each element of $\mathcal{C}_3(K)$ may be
represented as a $3 \times 3 \times 3$
cubical matrix $B = (b_{ijk})_{1 \leq i,j,k \leq 3}$
with entries in $K$:
\vspace{-.0875in}
\begin{equation} \label{eq:RCpicture}
\raisebox{5\baselineskip}{
\xymatrix@!0@=13pt{
& & & & b_{311} \ar@{-}[rrr] \ar@{-}[lldd] \ar@{--}[dddddd] & & & b_{312} \ar@{-}[rrr] \ar@{-}[lldd] & & & b_{313} \ar@{-}[ddd] \ar@{-}[lldd] \\
& & & & & & & & & & \\
& & b_{211} \ar@{-}[rrr] \ar@{-}[lldd]& & & b_{212} \ar@{-}[rrr] \ar@{-}[lldd]& & & b_{213} \ar@{-}[ddd] \ar@{-}[lldd] & & \\
& & & & & & & & & & b_{323} \ar@{-}[ddd] \ar@{-}[lldd] \\
b_{111} \ar@{-}[rrr] \ar@{-}[ddd] & & & b_{112} \ar@{-}[rrr] \ar@{-}[ddd] & & & b_{113} \ar@{-}[ddd] & & & & \\
& & & & & & & & b_{223} \ar@{-}[ddd] \ar@{-}[lldd] & & \\
& & & & b_{331} \ar@{--}[rrrrrr] \ar@{--}[lllldddd] & & & & & & b_{333} \ar@{-}[lldd] \\
b_{121} \ar@{-}[rrr] \ar@{-}[ddd] & & & b_{122} \ar@{-}[rrr] \ar@{-}[ddd] & & & b_{123} \ar@{-}[ddd] & & & & \\
& & & & & & & & b_{233} \ar@{-}[lldd] & & \\
& & & & & & & & & & \\
b_{131} \ar@{-}[rrr] & & & b_{132} \ar@{-}[rrr] & & & b_{133} & & & &
}
} \qquad .
\end{equation}
If we denote by $\{e_1,e_2,e_3\}$ the standard basis of $K^3$, then
the element of $\mathcal{C}_3(K)$ described by the cubical matrix
$B = (b_{ijk})_{1 \leq i,j,k \leq 3}$ above
is $$\sum_{1 \leq i, j, k \leq 3} b_{ijk}\, e_i \otimes e_j \otimes e_k.$$
Thus we may
identify $\mathcal{C}_3(K)$ with the space of $3\times 3\times 3$ matrices
with entries in $K$ or, simply, the space of {\it Rubik's cubes over $K$}.
As in \S \ref{sec:cubes}, a Rubik's cube can naturally be partitioned
into three $3 \times 3$ matrices in three
distinct ways. Namely, the cube $B=(b_{ijk})$ given by
\eqref{eq:RCpicture} can be sliced into the three $3\times 3$
matrices $M_\ell$, $N_\ell$, and $P_\ell$, for $\ell=1,2,3$, as follows:
\begin{enumerate}
\item[1)] $M_1 = (b_{1jk})$ is the front face, $N_1 = (b_{2jk})$ is the middle slice, and $P_1 = (b_{3jk})$ is the back~face;
\item[2)] $M_2 = (b_{i1k})$ is the top face, $N_2 = (b_{i2k})$ is the middle slice, and $P_2 = (b_{i3k})$ is the bottom~face;
\item[3)] $M_3 = (b_{ij1})$ is the left face, $N_3 = (b_{ij2})$ is the middle slice, and $P_3 = (b_{ij3})$ is the right~face.
\end{enumerate}
There is also a natural action of $\mathrm{SL}_3(K)^3$ on $\mathcal{C}_3(K)$,
analogous to the action of $\mathrm{SL}_2(K)^3$ on the space
$\mathcal{C}_2(K)$ of $2 \times 2 \times 2$ cubes. Namely, for any
$i \in \{1,2,3\}$, the element $g = (g_{\ell m}) \in \mathrm{SL}_3(K)$ in the
$i$th factor acts on the cube $B$ by replacing $(M_i,N_i,P_i)$ by
$(g_{11} M_i + g_{12} N_i + g_{13} P_i, \,g_{21} M_i + g_{22} N_i +
g_{23} P_i, \,g_{31} M_i + g_{32} N_i + g_{33} P_i)$. These three
$\mathrm{SL}_3(K)$-actions commute, giving a well-defined action of
$\mathrm{SL}_3(K)^3$ on $\mathcal{C}_3(K)$.
Recall that a $2 \times 2 \times 2$ cube naturally gave rise to
three binary quadratic forms, by slicing the cube and taking
determinants as in \eqref{bqfdet}.
In the analogous manner, a Rubik's cube $B = (b_{ijk})$
naturally gives rise to three ternary cubic forms
\begin{equation}\label{tcfdet}
f_i(x,y,z) = \det(M_i x + N_i y + P_i z)
\end{equation}
for $1 \leq i \leq 3$. The ternary cubic form $f_1$ is invariant
under the action of the subgroup ${\rm id} \times \mathrm{SL}_3(K)^2
\subset \mathrm{SL}_3(K)^3$ on $B\in \mathcal{C}_3(K)$. The remaining factor
of $\mathrm{SL}_3(K)$ acts in the standard way on the ternary cubic form
$f_1$, and it is well known that this action has two independent
polynomial invariants, which are traditionally called $S(f_1)$ and
$T(f_1)$ (see \S\ref{sec:ternarycubics} for more details on
ternary cubic forms). These invariants have degrees $4$ and $6$,
respectively, in the coefficients of $f_1$. Analogous to the
situation with $2\times 2\times 2\times 2$ hypercubes, one
checks that $f_2$ and $f_3$ also have the same invariants as $f_1$,
and so we have produced well-defined
$\mathrm{SL}_3(K)^3$-invariants $S(B)$ and $T(B)$ for Rubik's cubes $B$,
having degrees $12$ and $18$, respectively.
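The invariance of $f_1$ under the second and third factors of $\mathrm{SL}_3(K)$ can be illustrated numerically. The following sketch, using a hypothetical random integer cube and a unimodular integer matrix (neither taken from the text), slices the cube, forms $f_1$ as in \eqref{tcfdet}, and checks that $f_1$ is unchanged at sample points after acting by $g$ in the second factor.

```python
from itertools import product
import random

random.seed(2)
# A hypothetical random integer Rubik's cube b[i][j][k].
b = [[[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
     for _ in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix, in exact integer arithmetic."""
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def f1(cube, x, y, z):
    """f_1(x,y,z) = det(M_1 x + N_1 y + P_1 z), slicing in direction 1."""
    w = (x, y, z)
    m = [[sum(w[i] * cube[i][j][k] for i in range(3)) for k in range(3)]
         for j in range(3)]
    return det3(m)

def act_factor2(g, cube):
    """g in SL_3 acting in the second factor: mixes direction-2 slices."""
    return [[[sum(g[j][m] * cube[i][m][k] for m in range(3))
              for k in range(3)] for j in range(3)] for i in range(3)]

g = [[1, 2, 0], [0, 1, 0], [3, 5, 1]]    # integer matrix of determinant 1

b2 = act_factor2(g, b)
samples = [(1, 0, 0), (0, 1, 0), (1, 2, 3), (-1, 4, 2), (2, -3, 5)]
# The slice matrix of b2 is g times that of b, so det is unchanged.
invariant = all(f1(b, *w) == f1(b2, *w) for w in samples)
```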
The {\it discriminant} $\disc(f)$ of a ternary cubic form $f$ is defined by
\begin{equation}
\disc(f) = \frac{1}{1728}(S(f)^3 - T(f)^2),
\end{equation}
and the discriminant of the ternary cubic form $f$ is nonzero
precisely when it cuts out a smooth curve in $\mathbb{P}^2$; such a
ternary cubic form is called {\em nondegenerate}. Since the $S$ and
$T$ invariants are common to all the $f_i$, their discriminants are
also all the same, and thus we may naturally define the {\it
discriminant} of a Rubik's cube $B$ to be
\begin{equation}\disc(B) := \frac{1}{1728}(S(B)^3 - T(B)^2),\end{equation}
which is nonzero precisely when the ternary cubic forms $f_i$
associated to $B$ each cut out a smooth curve in $\mathbb{P}^2$.
We say then that the Rubik's cube $B$ is {\it nondegenerate} if its discriminant is nonzero.
We give a conceptual explanation as to why $S(f_i)=S(f_j)$ and $T(f_i)=T(f_j)$ (and thus $\disc(f_i)=\disc(f_j)$) for all $i$ and $j$ in the next subsection.
\subsection{Genus one curves from Rubik's cubes} \label{sec:g1fromRC}
We now explain how a nondegenerate Rubik's cube $B$ naturally gives rise to
a number of genus one curves $C_i$ ($1\leq i\leq 3$) and $C_{ij}$
($1\leq i< j\leq 3$). We also discuss how these genus one
curves are related to each other, and the resulting description of the
nondegenerate orbits of $\mathrm{SL}_3(K)^3$ on the space $K^3\otimes
K^3\otimes K^3$ of Rubik's cubes over $K$.
\subsubsection{Genus one curves in \texorpdfstring{$\mathbb{P}^2$}{P2}}
Given a nondegenerate ternary cubic form $f$ over $K$, we may
attach to $f$ a genus one curve $C(f)$ over $K$, namely the
zero locus of the polynomial $f$ in $\mathbb{P}^2$.
It is known (see, e.g., \cite{ArtinRodriguezVillegasTate,ankim}) that the
Jacobian of the curve $C(f)$ may be written as a Weierstrass
elliptic curve with coefficients expressed in terms of the invariants
$S(f)$ and $T(f)$, namely as
$$E(f): y^2 = x^3 - 27 S(f) x - 54 T(f).$$
We always take $E(f)$ as our model for the Jacobian of $C(f)$.
Now given a nondegenerate Rubik's cube $B\in \mathcal{C}_3(K)$, we have seen that
we naturally obtain three ternary cubic forms $f_1$, $f_2$, $f_3$,
over $K$ from $B$. Thus each Rubik's cube $B\in\mathcal{C}_3(K)$ yields
three corresponding genus one curves $C_1$, $C_2$, $C_3$ over
$K$, where $C_i=C(f_i)$.
\subsubsection{Genus one curves in \texorpdfstring{$\mathbb{P}^2\times\mathbb{P}^2$}{P2P2}}
The genus one curves we have obtained from a nondegenerate Rubik's cube
$B\in\mathcal{C}_3(K)$ may also be embedded naturally in $\mathbb{P}^2 \times \mathbb{P}^2$.
Let us first identify $\mathcal{C}_3(K)$ with the space of trilinear forms
on $W_1 \times W_2 \times W_3$, where each $W_i$ ($i\in\{1,2,3\}$) is
a $3$-dimensional $K$-vector space. (In this identification, when we
write $\mathcal{C}_3(K) = K^3 \otimes K^3 \otimes K^3$, then the $i$th factor of
$K^3$ is the $K$-vector space dual to $W_i$.) Then for any $B
\in \mathcal{C}_3(K)$, viewed as a trilinear form, we consider the set
$$C_{12}(K) := \{(w_1,w_2) \in \mathbb{P}(W_1) \times \mathbb{P}(W_2) : B(w_1,w_2,
\cdot) \equiv 0 \} \subset \mathbb{P}(W_1) \times \mathbb{P}(W_2) \cong \mathbb{P}^2 \times
\mathbb{P}^2.$$
If $B$ is nondegenerate, then $C_{12}$ in fact yields the graph of an
isomorphism between $C_1$ and $C_2$. Indeed, for any point
$w_1 \in C_1$, the form $B(w_1, \cdot, \cdot)$ is singular with a
one-dimensional kernel in $W_2$; this gives a point of $\mathbb{P}(W_2)$ that
then must lie on $C_2$!
As this process is reversible, it follows that $C_{12}$ is isomorphic
to both $C_1$ and $C_2$ via projection, and hence all the curves
$C_i$ are isomorphic to each other: for $1\leq i< j \leq 3$, if we
define $C_{ij}\subset \mathbb{P}^2\times \mathbb{P}^2$ in the analogous manner, then we have
natural isomorphisms
$$ C_i\cong C_{ij} \cong C_j.$$
It also follows (by the same argument as given at the end of \S\ref{p1xp1}) that all three ternary cubic forms $f_i$
have the same values for the invariants $S$ and $T$, as was claimed at the end of \S\ref{sec:RCslicing}.
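The projection argument above can be made concrete. In the sketch below (a hypothetical integer cube, engineered so that $w_1 = (1{:}0{:}0)$ lies on $C_1$ by forcing the front face $M_1$ to be singular), the kernel vector $w_2$ of $B(w_1,\cdot,\cdot)$ in $W_2$ is exhibited and checked to lie on $C_2$.

```python
from itertools import product
import random

random.seed(3)
# A hypothetical integer cube; we then force the front face M_1 = (b[0][j][k])
# to be singular with known left kernel vector w2 = (1, 1, -1), by setting
# row 3 of M_1 equal to row 1 + row 2.
b = [[[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
     for _ in range(3)]
for k in range(3):
    b[0][2][k] = b[0][0][k] + b[0][1][k]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def contract(cube, d, w):
    """The 3x3 matrix obtained by plugging w into slot d of the trilinear form."""
    m = [[0]*3 for _ in range(3)]
    for i, j, k in product(range(3), repeat=3):
        idx = (i, j, k)
        rest = [idx[t] for t in range(3) if t != d]
        m[rest[0]][rest[1]] += w[idx[d]] * cube[i][j][k]
    return m

w1 = (1, 0, 0)                          # on C_1, since det M_1 = 0
M1 = contract(b, 0, w1)                 # the matrix B(w1, ., .)
on_C1 = det3(M1) == 0
w2 = (1, 1, -1)                         # a kernel vector of B(w1, ., .) in W_2
in_kernel = all(sum(w2[j] * M1[j][k] for j in range(3)) == 0
                for k in range(3))
on_C2 = det3(contract(b, 1, w2)) == 0   # w2 indeed lies on C_2
```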
\subsubsection{The fundamental triangle of isomorphisms}
From a nondegenerate Rubik's cube $B$, we have thus constructed three genus
one curves $C_{12}$, $C_{23}$, and $C_{13}$, which are all isomorphic.
In fact, we may construct explicit and natural isomorphisms between
them, e.g.,
$$\tau_{12}^{23}: C_{12} \to C_2 \to C_{23},$$
given by projection and un-projection. More
explicitly, given a point $(w_1, w_2) \in C_{12}$, the bilinear form
$B(\cdot,w_2,\cdot)$ is singular and has rank $2$, so there exists a
unique $w_3 \in \mathbb{P}(W_3)$ such that $B(\cdot, w_2, w_3) \equiv 0$.
Then $(w_2, w_3)$ is a point on $C_{23}$, giving the map
$\tau_{12}^{23}$. Clearly, all such maps are invertible, e.g.,
$\tau_{23}^{12} = (\tau_{12}^{23})^{-1}$.
We thus obtain a triangle of maps
\begin{equation} \label{eq:triangleintro}
\raisebox{2\baselineskip}{
\xymatrix@C=10pt@R=30pt{
& C_{12} \ar@{<->}[rd] \ar@{<->}[ld] & \\
C_{23} \ar@{<->}[rr] && C_{13}
}} \qquad .
\end{equation}
However, this triangle does not commute! Composing the three maps of
this triangle in a clockwise direction, starting from say $C=C_{12}$, yields an automorphism of $C$ given
by translation by a point $P$ on the Jacobian $E=\Jac(C)$ of $C$.
Composing the three maps in the counterclockwise
direction then yields the automorphism of $C$ given by translation by $P'=-P$. As
before, these points $P$, $P'$ are in fact in the period-index subgroup
$\Jac_C^3(K)$. Indeed, if $D_1$ and $D_2$ are the degree~$3$ divisors on $C$ corresponding
to the embeddings into $\mathbb{P}(W_1)$ and $\mathbb{P}(W_2)$, respectively, then we find that the difference $D_2 - D_1$
corresponds to the point $P$ on the Jacobian of $C$.
In summary, from a nondegenerate Rubik's cube $B \in
\mathcal{C}_3(K)$, we obtain an elliptic curve
$$E(B): y^2 = x^3 - 27 S(B)x - 54 T(B),$$
with
\begin{equation*}
S(B) = S(f_i) \qquad \textrm{and} \qquad T(B) = T(f_i)
\end{equation*}
for $1 \leq i \leq 3$, where $f_1, f_2$, and $f_3$ are the ternary
cubic forms naturally arising from $B$. Furthermore, the elliptic
curve $E := E(B)$ is canonically the Jacobian of each of the genus one
curves $C_i$ ($1 \leq i \leq 3$) and $C_{ij}$ ($1 \leq i < j \leq
3$) arising from $B$. Finally, there is a natural triangle
\eqref{eq:triangleintro} of isomorphisms among the $C_{ij}$, which
does not commute.
We thus obtain, in addition to a degree 3 divisor $D_1$ on the curve $C_{12}$, a pair of points
$P$, $P'$ in the period-index subgroup $\Jac^3_{C_{12}}(K)$ for the curve $C_{12}$ that sum to zero.
\subsection{Orbit classification of Rubik's cubes}\label{sec:ocrc}
We will show in \S \ref{sec:333} that the data of a genus one
curve $C=C_{12}$, the equivalence class of a degree $3$ rational divisor $D=D_1$ on $C$, and a
pair $P,P'$ of nonzero points summing to zero in the
period-index subgroup $\Jac^3_{C}(K)$ of the Jacobian of $C$ is in
fact sufficient to recover the orbit of a Rubik's cube $B$. We have:
\begin{thm} \label{thm:RCparam1} For any field $K$ with
$\mathrm{char}(K) \nmid 6$, there is a canonical bijection between
nondegenerate $\mathrm{GL}_3(K)^3$-orbits on the space $K^3 \otimes K^3
\otimes K^3$ of Rubik's cubes over $K$ and isomorphism classes of
triples $(C,L,(P,P'))$, where $C$ is a smooth genus one curve
over~$K$, $L$ is a degree $3$ line bundle on $C$, and $P$ and
$P'$ are nonzero $K$-points that sum to zero in the degree $3$ period-index subgroup
$\Jac^3_C(K)$ of the group of $K$-points of the Jacobian of $C$.
\end{thm}
It is known (see, e.g., \cite{vinberg}) that the ring of polynomial invariants for the action of $\mathrm{SL}_3(K)^3$ on the space $K^3 \otimes K^3 \otimes K^3$ is freely generated by three polynomials
$d_6$, $d_9$, and $d_{12}=-27S$, having degrees $6$, $9$, and $12$, respectively, in the entries of the Rubik's cube.
In terms of the geometric data in Theorem \ref{thm:RCparam1}, these three invariants have geometric meaning. We may write the Jacobian of the curve $C$ in Weierstrass form as
$$E: y^2 = x^3 + d_{12} x + d_{18},$$
where $d_{18}=-54T$ is a degree $18$ polynomial in the entries of the Rubik's cube, and where the points $P$ and $P'$ are given by
$(x,y) = (d_6, \pm d_9)$ on the model $E$.
It is clear then that $d_{18}$ may be expressed in terms of the generators $d_6$, $d_9$, and $d_{12}$: since $P = (d_6, d_9)$ lies on $E$, we have $d_{18} = d_9^2 - d_6^3 - d_{12}\,d_6$.
In conclusion, $d_6$, $d_9$, $d_{12}$, and $d_{18}$ are all fundamental
polynomial invariants for the action of $\mathrm{SL}_3(K)^3$ on $K^3\otimes K^3 \otimes K^3$; they all have key geometric interpretations and can be expressed as simple polynomials in the three basic invariants $d_6$, $d_9$, and $d_{12}$.
\subsection{Triple symmetrization}\label{sec:symRCpreview}
Analogous to the case of symmetric hypercubes discussed in \S \ref{sec:symHCpreview},
we may consider symmetric Rubik's cubes. If
we ask for complete symmetry under the action of $S_3$, the Rubik's
cubes in question will be of the form
\begin{equation}\label{symrubik}
\begin{pmatrix}
a & b & c \\
b & d & e \\
c & e & f
\end{pmatrix}
\qquad
\begin{pmatrix}
b & d & e \\
d & g & h \\
e & h & i
\end{pmatrix}
\qquad
\begin{pmatrix}
c & e & f \\
e & h & i \\
f & i & j \\
\end{pmatrix}
\end{equation}
where each $3 \times 3$ matrix represents one ``slice.''
Just as quadruply-symmetric hypercubes~\eqref{sym4hyper} could be identified with binary
quartic forms \eqref{symquartic}, the triply-symmetric Rubik's cube
(\ref{symrubik}) may be identified with the ternary cubic form
\begin{equation}ax^3+3bx^2y+3cx^2z+3dxy^2+6exyz+3fxz^2+gy^3+3hy^2z+3iyz^2+jz^3.\end{equation}
This identification corresponds to the natural inclusion
$$\Sym_3 (K^3) \hookrightarrow K^3 \otimes K^3 \otimes K^3$$
of the space of triply-symmetric Rubik's cubes into the space of
all Rubik's cubes.
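This identification, too, can be checked by direct computation. The sketch below, with hypothetical integer coefficients, builds the triply symmetric cube of \eqref{symrubik} and verifies that its trilinear form, restricted to the diagonal, recovers the displayed ternary cubic, with multinomial multiplicities $1,3,6$ according to the index multiset.

```python
from itertools import product

# Coefficients (a,b,c,d,e,f,g,h,i,j) of \eqref{symrubik}, keyed by the
# sorted (0-indexed) index multiset of the symmetric cube entry.
coef = {(0,0,0): 2, (0,0,1): -1, (0,0,2): 3, (0,1,1): 4, (0,1,2): 5,
        (0,2,2): -2, (1,1,1): 1, (1,1,2): 7, (1,2,2): -3, (2,2,2): 6}

def entry(i, j, k):
    return coef[tuple(sorted((i, j, k)))]   # fully symmetric cube

def trilinear_diag(x, y, z):
    v = (x, y, z)
    return sum(entry(i, j, k) * v[i] * v[j] * v[k]
               for i, j, k in product(range(3), repeat=3))

def cubic(x, y, z):
    a, b, c, d, e, f, g, h, i, j = (coef[t] for t in
        [(0,0,0), (0,0,1), (0,0,2), (0,1,1), (0,1,2),
         (0,2,2), (1,1,1), (1,1,2), (1,2,2), (2,2,2)])
    return (a*x**3 + 3*b*x**2*y + 3*c*x**2*z + 3*d*x*y**2 + 6*e*x*y*z
            + 3*f*x*z**2 + g*y**3 + 3*h*y**2*z + 3*i*y*z**2 + j*z**3)

samples = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (2, -1, 3)]
agree = all(trilinear_diag(*s) == cubic(*s) for s in samples)
```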
Such Rubik's cubes lead to geometric data $(C,L,(P,P'))$ as in
Theorem~\ref{thm:RCparam1},
but due to the symmetry we also have $P=P'$. Since $P+P'=0$, we see that $P$ is a 2-torsion point on the Jacobian of $C$. Conversely, we will show in \S \ref{sec:3symRC} that this is the only constraint on $P$; thus we obtain the following theorem classifying the orbits of
$\mathrm{GL}_3(K)$ on $\Sym_3 K^3$:
\begin{thm} \label{thm:3symRCpreview}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a
natural bijection between nondegenerate $\mathrm{GL}_3(K)$-orbits on the
space $\Sym_3 K^3$ of ternary cubic forms over $K$ and isomorphism
classes of triples $(C,L,P)$, where $C$ is a smooth genus one curve
over $K$, $L$ is a degree $3$ line bundle on $C$, and $P$ is a
nonzero $2$-torsion point on the Jacobian of $C$ defined over $K$.
\end{thm}
We have already noted (see \S\ref{sec:ternarycubics} for further details) that certain $\mathrm{GL}_3(K)$-orbits on $\Sym^3K^3$ correspond to pairs $(X,L)$, where $X$ is a genus one curve and $L$ is a degree 3 line bundle on $X$. When char$(K)\nmid6$, these two spaces $\Sym_3K^3$ and $\Sym^3K^3$ are naturally identified, so we obtain two ``dual'' moduli interpretations of the space of ternary cubics in terms of genus one curves.
However, as in the case of symmetric hypercubes viewed as binary quartics in \S\ref{sec:symHCpreview}, these two genus one curves arising from a ternary cubic are {\it not} the same; they are related by the classical Hessian construction (see \S \ref{sec:3symRC}).
\subsection{Double symmetrization}
The orbit description for ternary cubic forms in \S\ref{sec:symRCpreview} was obtained by imposing a symmetry condition on the orbit description for Rubik's cubes.
Rather than imposing a threefold symmetry, we can impose only a double symmetry to obtain Rubik's cubes of the form
\begin{equation}\label{dsrc}
\begin{pmatrix}
a_1 & b_1 & c_1 \\
b_1 & d_1 & e_1\\
c_1 & e_1 & f_1
\end{pmatrix}
\qquad
\begin{pmatrix}
a_2 & b_2 & c_2 \\
b_2 & d_2 & e_2 \\
c_2 & e_2 & f_2
\end{pmatrix}
\qquad
\begin{pmatrix}
a_3 & b_3 & c_3 \\
b_3 & d_3 & e_3 \\
c_3 & e_3 & f_3
\end{pmatrix},
\end{equation}
where again, each $3 \times 3$ matrix represents a slice of the
Rubik's cube. Since a symmetric $3 \times 3$ matrix represents a
ternary quadratic form, a doubly symmetric Rubik's cube may be viewed as a triple of ternary quadratic forms
\begin{align}\nonumber
\!\!(a_1 x^2 + 2 b_1 x y + 2 c_1 x z + d_1 y^2 + 2 e_1 y z+ f_1 z^2, a_2 x^2 + 2 b_2 x y + 2 c_2 x z + d_2 y^2 + 2 e_2 y z+ f_2 z^2, \\ \label{ttqfs}
a_3 x^2 + 2 b_3 x y + 2 c_3 x z + d_3 y^2 + 2 e_3 y z+ f_3 z^2). \!\!
\end{align}
The above association of the triple (\ref{ttqfs}) of ternary quadratic forms with the doubly
symmetric Rubik's cube (\ref{dsrc}) corresponds to the natural inclusion
$$K^3 \otimes \Sym_2 K^3 \hookrightarrow K^3 \otimes K^3 \otimes K^3.$$
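The passage from a symmetric slice to a ternary quadratic form can be checked directly: if $M_i$ is the $i$th slice, the associated form is $v^{T} M_i v$, in which every off-diagonal entry contributes twice. The sketch below (hypothetical integer entries) verifies this for one slice; note in particular that the $yz$ cross term carries the coefficient $2e_i$, just like the $xy$ and $xz$ terms.

```python
# One symmetric slice of a doubly symmetric Rubik's cube, with
# (a, b, c, d, e, f) = (4, 1, -2, 3, 5, 7), arbitrary integers.
M = [[4, 1, -2],
     [1, 3, 5],
     [-2, 5, 7]]

def qform(m, x, y, z):
    """v^T M v for v = (x, y, z)."""
    v = (x, y, z)
    return sum(m[p][q] * v[p] * v[q] for p in range(3) for q in range(3))

a, b, c, d, e, f = 4, 1, -2, 3, 5, 7

def expanded(x, y, z):
    # Every off-diagonal entry of the symmetric matrix contributes twice.
    return a*x*x + 2*b*x*y + 2*c*x*z + d*y*y + 2*e*y*z + f*z*z

samples = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (2, -3, 4)]
agree = all(qform(M, *s) == expanded(*s) for s in samples)
```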
To such nondegenerate doubly symmetric Rubik's cubes,
we can associate the usual geometric data $(C,L,(P,P'))$ as in Theorem~\ref{thm:RCparam1}, and as in the fully symmetric case,
the symmetry implies that $P$ is a 2-torsion point.
We will show in \S \ref{sec:3symRC} that this is again the only constraint on $P$, and so we obtain the
following theorem classifying the orbits of $\mathrm{GL}_3(K)^2$ on $K^3 \otimes \Sym_2K^3$:
\begin{thm} \label{thm:2symRCpreview}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a natural bijection between nondegenerate $\mathrm{GL}_3(K)^2$-orbits on the space $K^3 \otimes \Sym_2 K^3$ of triples of ternary quadratic forms over $K$ and isomorphism classes of triples $(C,L,P)$, where $C$ is a smooth genus one curve over $K$, $L$ is a degree $3$ line bundle on $C$, and $P$ is a nonzero $2$-torsion point on the Jacobian of $C$ defined over~$K$.
\end{thm}
Note that the data parametrized by triply symmetric and doubly symmetric Rubik's cubes is the same! The two orbit parametrizations in fact allow us to construct an explicit linear transformation taking any given nondegenerate element of the space $K^3 \otimes \Sym_2 K^3$ to an element of $\Sym_3 K^3$.
\subsection{Double skew-symmetrization}
Instead of imposing conditions of symmetry, one may impose conditions of skew-symmetry on Rubik's cubes.
Let us view again our Rubik's cube space $K^3\otimes K^3\otimes K^3$ as the space of $K$-trilinear maps $W_1 \times W_2 \times W_3 \to K$, where $W_1$, $W_2$, and $W_3$ are $3$-dimensional $K$-vector spaces. Then given such a $K$-trilinear map $\phi$, one may construct another $K$-trilinear map
$$\bar{\phi} : W_1 \times (W_2 \oplus W_3) \times (W_2 \oplus W_3) \to K$$
that is skew-symmetric in the last two variables and is given by
$$\bar{\phi}(r,(s,t),(u,v)) = \phi(r,s,v)-\phi(r,u,t).$$
This gives a natural $K$-linear injection
$${\rm id} \otimes \wedge_{3,3} : K^3 \otimes K^3 \otimes K^3 \to K^3 \otimes \wedge^2(K^6)$$
taking Rubik's cubes to triples of alternating $2$-forms in six variables.
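In coordinates, the map ${\rm id} \otimes \wedge_{3,3}$ is simple: if $M$ is the $3 \times 3$ matrix of the bilinear form $\phi(r,\cdot,\cdot)$ on $W_2 \times W_3$, then $\bar{\phi}(r,\cdot,\cdot)$ is the alternating $6 \times 6$ block matrix with $M$ and $-M^{\mathsf{T}}$ off the diagonal. A small numerical sketch (our own illustration, with arbitrary integer entries):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-5, 5, size=(3, 3))  # matrix of phi(r, ., .) on W2 x W3

# id ⊗ ∧_{3,3}: the alternating 6x6 form of phibar(r, ., .) on W2 ⊕ W3.
Z = np.zeros((3, 3), dtype=int)
Mbar = np.block([[Z, M], [-M.T, Z]])

assert (Mbar == -Mbar.T).all()  # skew-symmetric with zero diagonal: alternating

# phibar(r, (s,t), (u,v)) = phi(r,s,v) - phi(r,u,t)
s, t, u, v = (rng.integers(-5, 5, size=3) for _ in range(4))
lhs = np.concatenate([s, t]) @ Mbar @ np.concatenate([u, v])
rhs = s @ M @ v - u @ M @ t
assert lhs == rhs
```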
In analyzing the $\mathrm{GL}_3(K) \times \mathrm{GL}_6(K)$-orbits of this larger space, one finds that the degree~3 line bundles $L_2$ and $L_3$ coming from $C_2$ and $C_3$ are now replaced by a single rank 2 vector bundle (which splits into the direct sum of these line bundles for elements in the image of ${\rm id}\otimes\wedge_{3,3}$). We thus obtain the following (see \S \ref{sec:deg3special} for details):
\begin{thm} \label{thm:2skewRCpreview}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, there is a natural bijection between nondegenerate $\mathrm{GL}_3(K) \times \mathrm{GL}_6(K)$-orbits on the space $K^3 \otimes \wedge^2 K^6$ of triples of alternating senary $2$-forms over $K$ and isomorphism classes of nondegenerate triples $(C,L,M)$, where $C$ is a smooth genus one curve over $K$, $L$ is a degree $3$ line bundle on $C$, and $M$ is a rank $2$ degree $6$ vector bundle on $C$ with $L^{\otimes 2} \cong \det M$.
\end{thm}
\noindent
The nondegeneracy condition on the geometric data is a mild open condition and will be discussed in \S\ref{sec:deg3moduli}.
\subsection{A simultaneous generalization: doubly Hermitian Rubik's cubes}
Many of the orbit parametrizations we have discussed in this section can be unified and generalized by a Hermitianization process analogous to the one introduced in \S \ref{sec:3hermHCpreview}. Recall that a Rubik's cube
may be seen as a triple of $3 \times 3$ matrices. To define a {\em doubly Hermitian Rubik's cube}, we replace the triple of standard $3 \times 3$ matrices by a triple of $3 \times 3$ matrices that are Hermitian with respect to some quadratic algebra $A$ over $K$. By a quadratic algebra $A$, we mean an algebra $A$ over $K$ in which each element $a$ has a natural conjugate $\bar{a}\in A$ such that both $a$ and~$\bar{a}$ satisfy a common (quadratic) characteristic polynomial over $K$.
A Hermitian $3 \times 3$ matrix $M$ for the quadratic algebra
$A$ over $K$ is one of the form
\begin{equation*}
\begin{pmatrix}
a & d & e \\
\bar{d} & b & f \\
\bar{e} & \bar{f} & c
\end{pmatrix}
\end{equation*}
where $a, b, c \in K$ and $d, e, f \in A$. We denote the space of all doubly Hermitian Rubik's cubes for such a quadratic algebra $A$ by $\mathscr{C}_3(A)$. Examples of suitable quadratic algebras $A$ include $K$ itself, $K^2$, $\Mat_{2 \times 2}(K)$, or general quaternion or octonion algebras over $K$. The various quadratic algebras $A$ used in this paper are discussed in \S \ref{sec:cubicjordan}.
For each such quadratic algebra $A$, there is a natural group $\mathrm{SL}_3(A)$ that acts on $3 \times 3$ Hermitian matrices by linear transformations that preserve the determinant. Then we may classify the orbits of the action of $\mathrm{GL}_3(K) \times \mathrm{SL}_3(A)$ on $\mathscr{C}_3(A)$ as follows (a full version of this theorem may be found in \S \ref{sec:deg3moduli}):
\begin{thm} \label{thm:2hermRCpreview}
For any field $K$ with $\mathrm{char}(K) \nmid 6$, and a quadratic algebra $A$ over $K$, there is a canonical bijection between nondegenerate $\mathrm{GL}_3(K) \times \mathrm{SL}_3(A)$-orbits on the space $\mathscr{C}_3(A)$ of doubly Hermitian Rubik's cubes for $A$ over $K$ and isomorphism classes of triples $(C,L,M)$, where $C$ is a smooth genus one curve over $K$, \;$L$ is a degree~$3$ line bundle on~$C$, and $M$ is a vector bundle of rank equal to the dimension of $A$ over $K$, with a global faithful $A$-action and other structure coming from $A$, subject to a relation between $L$ and $M$.
\end{thm}
In each case, the vector bundle $M$ arises via a natural map from the curve $C$ to the variety of rank one $3 \times 3$ Hermitian matrices up to scaling. The rank one Hermitian matrices form a flag variety in the space of all Hermitian matrices, and the vector bundle
$M$ on $C$ is part of the flag on~$C$ coming from the pullback of the universal flag.
See \S \ref{sec:deg3moduli} for details.
The cases $A = K \times K$, $K$, $\Mat_{2 \times 2}(K)$ yield regular Rubik's cubes, doubly symmetric Rubik's cubes, and doubly skew-symmetric Rubik's cubes, respectively. Other examples include twists of these spaces, as well as the space $K^3 \otimes K^{27}$ under the action of $\mathrm{GL}_3(K) \times \mathrm{E}_6(K)$, which arises when $A$ is an octonion algebra.
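As a sanity check on the case $A = K \times K$: conjugation on $K \times K$ swaps the two coordinates, so the Hermitian condition forces the second components of a $3 \times 3$ matrix over $A$ to form the transpose of the first components; the data is thus an arbitrary $3 \times 3$ matrix over $K$ per slice, i.e., an ordinary Rubik's cube. A hedged sketch of this bookkeeping (the helper names are ours):

```python
import numpy as np

def conj(pair):
    """Conjugation on the split quadratic algebra A = K x K: swap coordinates."""
    u, v = pair
    return (v, u)

def is_hermitian(H):
    """Check that the conjugate transpose of H equals H, entrywise."""
    n = len(H)
    return all(conj(H[j][i]) == H[i][j] for i in range(n) for j in range(n))

# An arbitrary 3x3 matrix m over K, packaged as a matrix over K x K
# whose (i,j) entry is (m[i,j], m[j,i]):
m = np.arange(1, 10).reshape(3, 3)
H = [[(int(m[i, j]), int(m[j, i])) for j in range(3)] for i in range(3)]
assert is_hermitian(H)  # a Hermitian matrix over K x K carries exactly the data of m

# Breaking the transpose relation destroys Hermitianity.
H[0][1] = (H[0][1][0], H[0][1][1] + 1)
assert not is_hermitian(H)
```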
\subsection*{Preliminaries and notation}
Let $K$ be a field not of characteristic $2$ or $3$. We will work over the field $K$ for the
majority of the paper, but many of the results have analogues over a $\mathbb{Z}[\frac{1}{6}]$-scheme as well.
We use the convention that the projectivization of a vector space
parametrizes lines, not hyperplanes. For example, a basepoint-free
line bundle $L$ on a variety $X$ induces a natural map $\phi_L : X \to
\mathbb{P}(H^0(X,L)^\vee)$.
A {\em genus one curve} means a proper, smooth, geometrically
connected curve with arithmetic genus $1$, and an {\em elliptic curve}
is such a genus one curve equipped with a base point.
An isomorphism of sets of data $D_1$ and $D_2$, where $D_i$ consists
of a genus one curve $C_i$ and vector bundles for $i = 1$ or $2$, is
an isomorphism $C_1 \to C_2$ such that the pullbacks of the vector
bundles on $C_2$ are isomorphic to the respective bundles on $C_1$.
If $A$ is an element in a tensor product of vector spaces, we use the notation $A(\cdot,\ldots,\cdot)$ to
denote the multilinear form, where the dots may be replaced by substituting elements of the respective
dual vector spaces. For example, for vector spaces $V_1$, $V_2$, and $V_3$,
if $A \in V_1 \otimes V_2 \otimes V_3$ and $v \in V_1^\vee$,
the notation $A(v,\cdot,\cdot)$ will refer to the evaluation $A \lrcorner \, v$ of the trilinear form $A$ on $v$,
which gives an element of $V_2 \otimes V_3$. By a slight abuse of notation, we will also use this notation to specify whether $A(v,\cdot,\cdot)$ vanishes for $v \in \mathbb{P}(V_1^\vee)$, for example.
\section{Genus one curves with degree \texorpdfstring{$2$, $3$, $4$, or $5$}{2, 3, 4, or 5} line bundles} \label{sec:classical}
In this section, we describe representations $V$ of algebraic groups $G$ whose orbits correspond to genus one curves with degree $d$ line bundles for $2 \leq d \leq 5$. For any field $K$ not of characteristic $2$, $3$, or $5$, there is a natural bijection between nondegenerate orbits in $V(K)/G(K)$ and isomorphism classes of genus one curves defined over $K$ equipped with degree $d$ line bundles.
Most of these quotient descriptions are classical (or at least fairly well known) over an algebraically closed field. For example, for $d = 2$, the space under consideration is that of binary quartic forms, and for $d = 3$, ternary cubic forms. What is new here is that, for each case, we also show that with the right forms of the group $G$ acting on the representation $V$, the stabilizer of a generic element of the representation agrees with the automorphism group of the curve and the line bundle. Thus we obtain an equivalence of moduli stacks, which is important in the arithmetic applications (e.g., in~\cite{arulmanjul-bqcount,arulmanjul-tccount}). We suspect that some of this section is known to the experts, but has not previously been stated explicitly; see also the related work of Cremona, Fisher, and Stoll \cite{cremonafisher, fisher-ternarycubics, cremonafisherstoll, fisher-pfaffianECs}.
\subsection{Binary quartic forms} \label{sec:binaryquartics}
A {\em binary quartic form} over $K$ is a two-dimensional vector space
$V$ over $K$ equipped with an element $q$ of $\Sym^4 V$. With a choice of basis
$\{w_1, w_2\}$ for $V^\vee$, a binary quartic form over $K$ may be represented as a
homogeneous degree $4$ polynomial
\begin{equation} \label{eq:BQpoly}
q(w_1,w_2) = a w_1^4 + b w_1^3 w_2 + c w_1^2 w_2^2 + d w_1 w_2^3 + e w_2^4,
\end{equation}
where $a, b, c, d, e \in K$. The group $\mathrm{GL}(V)$ acts on $\Sym^4 V$ by
acting on $V$ in the standard way. The ring of $\mathrm{SL}(V)$-invariants of
a binary quartic form $q$ as in equation \eqref{eq:BQpoly} is a
polynomial ring, generated by the two invariants
\begin{equation*}
I(q) = 12 a e - 3 b d + c^2 \qquad \textrm{and}\qquad
J(q) = 72 a c e + 9 b c d - 27 a d^2 - 27 e b^2 - 2 c^3.
\end{equation*}
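The $\mathrm{SL}(V)$-invariance of $I$ and $J$ can be checked symbolically; the sketch below (our own, using SymPy) applies the unimodular shear $(w_1, w_2) \mapsto (w_1 + t\,w_2,\, w_2)$, which together with the swap $(w_1,w_2) \mapsto (w_2,-w_1)$ generates $\mathrm{SL}_2$:

```python
import sympy as sp

w1, w2, t = sp.symbols('w1 w2 t')
a, b, c, d, e = sp.symbols('a b c d e')

def invariants(co):
    """The classical invariants I and J of a binary quartic."""
    a, b, c, d, e = co
    I = 12*a*e - 3*b*d + c**2
    J = 72*a*c*e + 9*b*c*d - 27*a*d**2 - 27*e*b**2 - 2*c**3
    return I, J

q = a*w1**4 + b*w1**3*w2 + c*w1**2*w2**2 + d*w1*w2**3 + e*w2**4

# Apply the shear (w1, w2) -> (w1 + t*w2, w2) and re-extract coefficients.
p = sp.Poly(sp.expand(q.subs(w1, w1 + t*w2)), w1, w2)
co = [p.coeff_monomial(w1**(4 - i) * w2**i) for i in range(5)]

I0, J0 = invariants([a, b, c, d, e])
I1, J1 = invariants(co)
assert sp.expand(I1 - I0) == 0  # I is unchanged
assert sp.expand(J1 - J0) == 0  # J is unchanged
```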
The coarse moduli space $\Sym^4 V /\!\!/ \mathrm{SL}(V)$ is thus birational to the
affine plane, with coordinates given by the invariants $I$ and $J$. There is also
a natural notion of the discriminant $\Delta(q) = 4I(q)^3 - J(q)^2$ of a binary quartic form.
The nonvanishing of the discriminant $\Delta(q)$ corresponds to $q$ having four distinct roots
over the separable closure of $K$; we call such binary quartic forms
{\em nondegenerate}.
We may also consider the following twisted action of $g \in \mathrm{GL}(V)$ on $q \in \Sym^4 V$:
\begin{equation}
g \cdot q(w_1,w_2) = (\det g)^{-2} q((w_1,w_2)g)
\end{equation}
for which the $\mathrm{SL}(V)$-invariants described above are preserved as well. This representation
is $\mathrm{GL}(V)$ acting on $\Sym^4 V \otimes (\wedge^2 V)^{-2}$; we will sometimes denote it by
the action of $\mathrm{GL}(V)^{(-2)}$ on binary quartic forms. Note that the stabilizer of this
twisted action contains the diagonal $\mathbb{G}_m$ of $\mathrm{GL}(V)$ (i.e., the scalar matrices), thereby inducing
an action of $\mathrm{PGL}(V)$ on the space of binary quartic forms.
There is one more action we will consider, which is scaling by squares, i.e., $\gamma \in \mathbb{G}_m$ sends
$q \in \Sym^4 V \otimes (\wedge^2 V)^{-2}$ to $\gamma^2 q$. We will denote this as the action of $2 \mathbb{G}_m$
on binary quartics.
A nondegenerate binary quartic form $q$ may be associated to a genus one
curve $C$ in the weighted projective space $\mathbb{P}(1,1,2)$ by the equation
\begin{equation} \label{eq:y2=bq}
y^2 = q(w_1,w_2) = a w_1^4 + b w_1^3 w_2 + c w_1^2 w_2^2 + d w_1 w_2^3 + e w_2^4,
\end{equation}
where $w_1$ and $w_2$ each have degree $1$ and $y$ has degree $2$. Over the algebraic
closure, the four roots of the binary quartic correspond to the
four points of $\mathbb{P}(V^\vee)$ over which $C$ ramifies; in other words, the subscheme of $\mathbb{P}(V^\vee) = \mathbb{P}^1$ cut out by $q$ is the ramification locus of the two-to-one map $C \to \mathbb{P}(V^\vee)$. Nondegeneracy is clearly preserved by all of the group actions above.
From a nondegenerate binary quartic $q$, we thus obtain a smooth irreducible
genus one curve $C$, as well as a degree $2$ line bundle $L$ on $C$, which is the pullback of
$\mathcal{O}_{\mathbb{P}(V^\vee)}(1)$ to $C$. Then the space of sections $H^0(C,L)$ may be identified with the vector space $V$.
The $\mathrm{GL}(V)^{(-2)} \times 2 \mathbb{G}_m$-action on a binary quartic does not change the isomorphism class of the curve $C$ obtained in this way.
The Jacobian of the curve $C$ associated to $q$ has Weierstrass form
$$y^2 = x^3 - 27 I(q) x - 27 J(q)$$
(see, e.g., \cite{ankim, cremonafisher}).
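As a consistency check, the polynomial discriminant of the cubic $x^3 - 27Ix - 27J$ equals $27^3(4I^3 - J^2) = 27^3\,\Delta(q)$, so this Weierstrass curve is smooth exactly when $q$ is nondegenerate. The identity is a one-line symbolic computation (a sketch of ours):

```python
import sympy as sp

x, I, J = sp.symbols('x I J')

# Polynomial discriminant of the Weierstrass cubic x^3 - 27*I*x - 27*J.
disc = sp.discriminant(x**3 - 27*I*x - 27*J, x)

# It equals 27^3 * (4 I^3 - J^2), i.e., 27^3 times the discriminant of q.
assert sp.expand(disc - 27**3 * (4*I**3 - J**2)) == 0
```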
Conversely, given a smooth irreducible genus one curve $C$ over
$K$ and a degree~$2$ line bundle $L$, the hyperelliptic map $\eta:
C \to \mathbb{P}(H^0(C,L)^\vee)$ given by the complete linear series $\left| L \right|$
has a ramification divisor of degree $4$ by the Riemann--Hurwitz Theorem. The branch divisor is a degree $4$
subscheme of $\mathbb{P}(H^0(C,L)^\vee)$ defined over $K$, which recovers a
binary quartic form over $K$, up to scaling. We next compute this scaling factor more precisely,
in order to keep track of the group actions for Theorem \ref{thm:bqorbit} below.
Given the genus one curve $\pi: C \to \Spec K$ and a degree $2$ line bundle $L$ on $C$, we have the
exact sequence
\begin{equation*}
0 \longrightarrow \eta^* \Omega^1_{\mathbb{P}(H^0(C,L)^\vee)} \longrightarrow \Omega^1_C \longrightarrow \Omega^1_{C/\mathbb{P}(H^0(C,L)^\vee)} \longrightarrow 0,
\end{equation*}
and taking the pushforward under $\eta$ gives the exact sequence
\begin{equation} \label{eq:bqdefseq}
0 \longrightarrow \Omega^1_{\mathbb{P}(H^0(C,L)^\vee)} \otimes \eta_* \mathcal{O}_C \longrightarrow \eta_* \Omega^1_C \longrightarrow \eta_* \Omega^1_{C/\mathbb{P}(H^0(C,L)^\vee)} \longrightarrow 0.
\end{equation}
The first two terms of the sequence \eqref{eq:bqdefseq} are rank two bundles whose
determinants have degrees $-6$ and $-2$, respectively. Taking determinants and twisting
gives the map
\begin{equation*}
\mathcal{O} \to (\Omega^1_{\mathbb{P}(H^0(C,L)^\vee)})^{\otimes -2} \otimes \omega_{C}^{\otimes 2}
\end{equation*}
where $\omega_C := \pi_* \Omega^1_C$ is the Hodge bundle for $C$.
The induced map on cohomology is
\begin{equation} \label{eq:bqconstr}
K \longrightarrow \Sym^4 (H^0(C,L)) \otimes (\wedge^2 (H^0(C,L)))^{\otimes (-2)} \otimes \omega_{C}^{\otimes 2}
\end{equation}
and the image of $1 \in K$ is the desired binary quartic form. If we were not
allowing the action of $2\mathbb{G}_m$ on binary quartics as well, then we would need to
specify a differential to pin down the scaling of the binary quartic form.
In other words, there is an isomorphism between the substack of $\Sym^4 V$ of nondegenerate binary quartic forms
and the moduli problem for $(C,L,\phi,\delta)$, where $C$ is a smooth irreducible genus one curve, $L$ is a degree
$2$ line bundle on $C$, $\phi : H^0(C,L) \to V$ is an isomorphism, and $\delta$ is a differential on $C$. By
the computation \eqref{eq:bqconstr}, this isomorphism is $\mathrm{GL}(V) \times \mathbb{G}_m$-equivariant: the group acts on binary quartic forms as described above (namely, as $\mathrm{GL}(V)^{(-2)} \times 2\mathbb{G}_m$), while on the geometric data, $\mathrm{GL}(V)$ goes through the isomorphism $\phi$ to act on $H^0(C,L)$, and $\mathbb{G}_m$ acts by scaling on $\delta$.
Thus, by descending to the quotient by these group actions, we obtain the correspondence of Theorem~\ref{thm:bqorbit} below. The stabilizer of a binary quartic form here is exactly the automorphism group of the geometric data corresponding to the form. In order to describe the stabilizer, we recall the definition
of the Heisenberg group $\Theta_{E,2}$ for an elliptic curve $E$ as the extension given by the following commutative diagram:
\begin{equation} \label{eq:Heisenberggroup}
\xymatrix{
0 \ar[r] & \mathbb{G}_m \ar[r] \ar@{=}[d] & \Theta_{E,2} \ar[r] \ar[d] & E[2] \ar[r] \ar[d] & 0 \\
0 \ar[r] & \mathbb{G}_m \ar[r] & \mathrm{GL}_2 \ar[r] & \mathrm{PGL}_2 \ar[r] & 0.\!\!
}
\end{equation}
More generally, for any $n$, we may define $\Theta_{E,n}$ analogously as an extension of $E[n]$ by $\mathbb{G}_m$.
\begin{thm} \label{thm:bqorbit}
Let $V$ be a $2$-dimensional vector space over $K$. Then nondegenerate $\mathrm{GL}(V)^{(-2)} \times 2 \mathbb{G}_m$-orbits on $\Sym^4 V$ are in bijection with isomorphism classes of pairs $(C,L)$, where $C$ is a smooth irreducible genus one curve over $K$ and $L$ is a degree $2$ line bundle on $C$.
The stabilizer group $($as a $K$-scheme$)$ of a nondegenerate element of $\Sym^4 V$ corresponding to $(C,L)$ is an extension of $\Aut(\Jac(C))$ by $\Theta_{\Jac(C),2}$, where $\Jac(C)$ is the Jacobian of the curve $C$, $\Aut(\Jac(C))$ is its automorphism group as an elliptic curve, and $\Theta_{\Jac(C),2}$ is the Heisenberg group as defined in $(\ref{eq:Heisenberggroup})$.
\end{thm}
\begin{remark}
When the Jacobian of $C$ does not have $j$-invariant $0$ or $1728$, the automorphism group of $(C,L)$ is in fact just the direct product $\Theta_{\Jac(C),2} \times \mathbb{Z}/2\mathbb{Z} \subset \mathrm{GL}(V) \times \mathbb{G}_m$. More generally, the automorphism group scheme described in Theorem \ref{thm:bqorbit} is not necessarily a split extension.
\end{remark}
The isomorphism class of the pair $(C,L)$ is a torsor for $(E, \mathcal{O}(2O))$, where $E$ is the Jacobian of $C$ and $O$ is the identity point of $E$. In the language of \cite{cathy-periodindex, explicitndescentI} (see Appendix~\ref{appendix:torsors}), the pair $(C,L)$ represents an element of the kernel of the obstruction map and
may be identified with an element of $H^1(K,\Theta_{E,2})$, where $E$
is the Jacobian of $C$. Therefore, given an elliptic curve
$$E: y^2 = x^3 - 27 I x - 27 J,$$
the set $H^1(K,\Theta_{E,2})$ is in correspondence with the set of $\mathrm{GL}(V)^{(-2)}$-orbits of binary quartic
forms $q$ having invariants $I(q) = I$ and $J(q) = J$.
\begin{remark} \label{rmk:bqdivisors}
Theorem \ref{thm:bqorbit} may also be rephrased in terms of {\em divisors} on the genus one curves instead of {\em line bundles}; we also replace the group $\mathrm{GL}(V)^{(-2)} \times 2 \mathbb{G}_m$ with $\mathrm{PGL}(V) \times 2 \mathbb{G}_m$. In this case, the stabilizer of a binary quartic corresponding to a pair $(C,[D])$ for a genus one curve $C$ with a degree $2$ $K$-rational divisor $D$ (of equivalence class $[D]$) is the group of $K$-points of the corresponding extension of $\Aut(\Jac(C))$ by $\Jac(C)[2]$. For example, if the $j$-invariant of $\Jac(C)$ is not $0$ or $1728$, then the stabilizer is just $\mathbb{Z}/2\mathbb{Z} \times \Jac(C)(K)[2]$. While the nondegenerate orbits of $\Sym^4 V$ under our original group $\mathrm{GL}(V)^{(-2)} \times 2 \mathbb{G}_m$ and the group $\mathrm{PGL}(V) \times 2 \mathbb{G}_m$ are identical, with the latter group action the stabilizer of an element matches the naturally defined automorphism group of the corresponding pair $(C,[D])$.
This revision of Theorem~\ref{thm:bqorbit}---i.e., the bijection between $\mathrm{PGL}(V) \times 2 \mathbb{G}_m$-orbits of binary quartic forms and isomorphism classes of pairs $(C,[D])$---is used in \cite{arulmanjul-bqcount} to prove that the
average size of $2$-Selmer groups for elliptic curves over $\mathbb{Q}$, ordered
by height, is $3$, which in turn implies that the average rank of elliptic
curves (ordered in the same way) is bounded by $1.5$.
\end{remark}
\begin{remark} \label{rmk:bqM11}
By generalizing the above constructions to base schemes over $\mathbb{Z}[1/6]$ and letting $V$ be a rank $2$ free module over $\mathbb{Z}[1/6]$, we obtain an isomorphism between the nondegenerate substack of the double quotient stack $[2 \mathbb{G}_m \setminus \Sym^4 V \otimes (\wedge^2 V)^{-2} / \mathrm{GL}(V)]$ and the quotient stack $[\mathscr{M}_{1,1} / \Theta_{E^{\mathrm{univ}},2}]$, where $\mathscr{M}_{1,1}$ is the moduli space of elliptic curves and $\Theta_{E^{\mathrm{univ}},2}$ is the theta group scheme for the universal elliptic curve $E^{\mathrm{univ}}$ over $\mathscr{M}_{1,1}$. Here, the $T$-points of the double quotient stack are triples $(\mathcal{V}, L_T, s)$, where $p: \mathcal{V} \to T$ is a rank $2$ vector bundle over $T$, $L_T$ is a line bundle on $T$, and $s$ is a map $L_T^{\otimes 2} \to \Sym^4 (\mathcal{V}) \otimes (\wedge^2 \mathcal{V})^{-2}$.
Analogous statements will be true for all of the other cases discussed in this section; we discuss this further in \S \ref{sec:diffbases}.
\end{remark}
\subsection{Ternary cubic forms} \label{sec:ternarycubics}
A {\em ternary cubic form} over $K$ is a three-dimensional vector
space $V$ and an element $f$ of $\Sym^3 V$; with a choice of basis
$\mathfrak{B} = \{ w_1, w_2, w_3 \}$ for $V^\vee$, such a form may
be represented as a homogeneous degree $3$ polynomial
\begin{align} \label{eq:ternarycubic}
f(w_1,w_2,w_3) = a w_1^3 &+ b w_2^3 + c w_3^3 + a_2 w_1^2 w_2 + a_3 w_1^2 w_3 \\
&+ b_1 w_1 w_2^2 + b_3 w_2^2 w_3 + c_1 w_1 w_3^2 + c_2 w_2 w_3^2 + m w_1 w_2 w_3 \nonumber
\end{align}
with coefficients in $K$.
There is a natural action of $\mathrm{GL}(V)$ on the space of
all ternary cubic forms by the standard action of $\mathrm{GL}(V)$ on $V$. The ring of
$\mathrm{SL}(V)$-invariants of the space of ternary cubic forms is a polynomial ring
generated by a degree $4$ invariant $d_4=S$ and a degree $6$ invariant
$d_6=T$, and they may be computed by classical formulas (see
\cite{fisher-ternarycubics}, for our choice of scaling). Thus, the coarse
moduli space $\Sym^3 V /\!\!/ \mathrm{SL}(V)$ is
birational to the affine plane $\mathbb{A}^2$ with coordinates~$d_4$ and~$d_6$.
For the twisted action of $\mathrm{GL}(V)$ on $\Sym^3 V$ where $g \in \mathrm{GL}(V)$ sends
$f(w_1,w_2,w_3) \in \Sym^3 V$ to $(\det g)^{-1} f((w_1,w_2,w_3)g)$, the $\mathrm{SL}(V)$-invariants
described above are also preserved. This representation
is described more accurately as the action of $\mathrm{GL}(V)$ on $\Sym^3 V \otimes (\wedge^3 V)^{-1}$,
and we will also sometimes denote it by the action of $\mathrm{GL}(V)^{(-1)}$ on $\Sym^3 V$ to indicate the $-1$-twist
of the determinant. Note that this representation has a nontrivial kernel, namely the
diagonal $\mathbb{G}_m$ of $\mathrm{GL}(V)$ (i.e., the scalar matrices), so it induces a natural $\mathrm{PGL}(V)$-action
on the space of ternary cubic forms.
We will also consider the action of $\mathbb{G}_m$ on $\Sym^3 V$ (or on $\Sym^3 V \otimes (\wedge^3 V)^{-1}$)
by scaling. Scaling the form $f$ by $\gamma \in \mathbb{G}_m$ scales each $\mathrm{GL}(V)^{(-1)}$-invariant by $\gamma^d$,
where $d$ is the degree of the invariant.
We claim that the nondegenerate subset of ternary cubic forms,
up to the $\mathrm{GL}(V)^{(-1)} \times \mathbb{G}_m$-action,
parametrizes genus one curves equipped with degree $3$ line
bundles, up to isomorphisms. In particular, a ternary cubic form $f$
defines a curve $\iota: C := \{f = 0\} \hookrightarrow \mathbb{P}(V^\vee)$.
We say that $f$ is a {\em nondegenerate} ternary cubic form if $C$ is smooth,
which occurs if and only if the degree $12$ discriminant $\Delta(f) :=
(d_4^3-d_6^2)/1728$ of $f$ is nonzero. In this case, the curve $C$ has genus one, and
the pullback $\iota^* \mathcal{O}_{\mathbb{P}(V^\vee)}(1)$ is a degree $3$ line
bundle on $C$.
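Smoothness, and hence nondegeneracy, can be tested directly from the common vanishing of $f$ and its partial derivatives. The sketch below (with example cubics chosen by us) contrasts the smooth Fermat cubic with the singular cubic $x^3 + y^3 + z^3 - 3xyz$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Fermat cubic: the partials 3x^2, 3y^2, 3z^2 vanish only at the origin,
# which is not a point of P^2, so the curve {f = 0} is smooth.
fermat = x**3 + y**3 + z**3
sols = sp.solve([fermat.diff(v) for v in (x, y, z)], [x, y, z], dict=True)
assert sols == [{x: 0, y: 0, z: 0}]

# A degenerate ternary cubic: x^3 + y^3 + z^3 - 3xyz vanishes, together
# with all of its partial derivatives, at the projective point [1:1:1].
hesse = x**3 + y**3 + z**3 - 3*x*y*z
P = {x: 1, y: 1, z: 1}
assert all(g.subs(P) == 0 for g in (hesse, hesse.diff(x), hesse.diff(y), hesse.diff(z)))
```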
On the other hand, given a (smooth irreducible) genus one curve $\pi: C \to \Spec K$ and a degree~$3$ line
bundle $L$ on $C$, the embedding of $C$ into $\mathbb{P}(H^0(C,L)^\vee) \cong \mathbb{P}^2$
gives rise to the exact sequence of sheaves
\begin{equation*}
0 \longrightarrow \mathcal{I}_C \longrightarrow \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)} \longrightarrow \mathcal{O}_C \longrightarrow 0
\end{equation*}
on $\mathbb{P}(H^0(C,L)^\vee)$, where $\mathcal{I}_C$ is the ideal defining the curve
$C$. Tensoring with $\mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(3)$, taking cohomology, and tensoring with $H^0(\mathbb{P}(H^0(C,L)^\vee),\mathcal{I}_C(3))^\vee$ gives the map
\begin{equation*}
K \longrightarrow H^0(\mathbb{P}(H^0(C,L)^\vee),\mathcal{O}(3)) \otimes (\wedge^3(H^0(C,L)))^{-1} \otimes \omega_C,
\end{equation*}
where $\omega_C := \pi_* \Omega^1_C$ is the Hodge bundle for the curve $C$. The image of $1 \in K$ is an element of
$\Sym^3(H^0(C,L)) \otimes (\wedge^3(H^0(C,L)))^{-1}$, i.e., a ternary cubic form with $V := H^0(C,L)$. Although
the Hodge bundle $\omega_C$ is trivial over a field, if we did not include the $\mathbb{G}_m$-action here, we would
need to specify a differential to pin down the scaling of the ternary cubic form.
These two functors between ternary cubic forms and pairs $(C,L)$ are inverse to one another.
As in the binary quartic case, there is in fact an isomorphism between the nondegenerate subset of $\Sym^3 V$ and the
moduli problem for $(C,L,\phi,\delta)$, where $C$ is a genus one curve, $L$ is a degree $3$ line bundle, $\phi: H^0(C,L) \to V$ is an isomorphism, and $\delta$ is a differential on $C$. This isomorphism is $\mathrm{GL}(V) \times \mathbb{G}_m$-equivariant, and so we obtain the following:
\begin{thm} \label{thm:tcorbit}
Let $V$ be a $3$-dimensional $K$-vector space. Then nondegenerate $\mathrm{GL}(V)^{(-1)} \times \mathbb{G}_m$-orbits of $\Sym^3 V$
are in bijection with isomorphism classes of pairs $(C,L)$, where $C$ is a smooth irreducible genus one curve over $K$ and
$L$ is a degree $3$ line bundle on $C$.
The stabilizer group $($as a $K$-scheme$)$ of a nondegenerate element of $\Sym^3 V$ corresponding to $(C,L)$ is an extension of $\Aut(\Jac(C))$ by $\Theta_{\Jac(C),3}$, where $\Jac(C)$ is the Jacobian of $C$, $\Aut(\Jac(C))$ is its automorphism group as an elliptic curve, and $\Theta_{\Jac(C),3}$ is the degree $3$ Heisenberg group of $\Jac(C)$.
\end{thm}
\begin{remark}
More precisely, the stabilizer group of a nondegenerate element of $\Sym^3 V$ coincides with the automorphism group of the corresponding pair $(C,L)$. Here, the automorphism group of the pair $(C,L)$ consists of the $K$-points of the group scheme given by a possibly non-split extension of $\Aut(\Jac(C))$ by $\Theta_{\Jac(C),3}$.
For example, if $C$ has a point $O$, and $L$ is the line bundle $\mathcal{O}(3O)$, then the extension is indeed split, since for all automorphisms of $C$ fixing $O$, the pullback of $L$ is isomorphic to $L$. However, in general this extension will not split. For example, if $C$ is a nontrivial torsor of its Jacobian, then the automorphism group of $(C,L)$ is just $\Theta_{\Jac(C),3}$.
Note that the automorphism group of $(C,L)$ as a {\em torsor} for $(\Jac(C), \mathcal{O}(3O))$ is always~$\Theta_{\Jac(C),3}$.
\end{remark}
Given a ternary cubic form $f$, the associated genus one curve $C$ has
Jacobian $E := \Jac(C)$ which is determined by the $\mathrm{SL}(V)$-invariants $d_4 = d_4(f)$ and $d_6 = d_6(f)$ (see \cite{ankim} for details). In
Weierstrass form, the elliptic curve $E$ may be expressed as
\begin{equation} \label{eq:JacTC}
E: y^2 = x^3 - 27 d_4 x - 54 d_6.
\end{equation}
The discriminant $\Delta(E)$ is then given by the formula
$$1728 \Delta(E) = d_4^3 - d_6^2.$$
Note that $d_4$ and $d_6$ are scaled by the usual action of $\mathbb{G}_m$ on $f$, so they are only relative invariants for the action of the group $\mathrm{GL}(V)^{(-1)} \times \mathbb{G}_m$ on the orbit of $f$; however, all elliptic curves in this orbit
are isomorphic, as $E$ is isomorphic to the elliptic curve given by $y^2 = x^3 - 27 \lambda^4 d_4 x - 54 \lambda^6 d_6$
for any $\lambda \in K^*$.
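This isomorphism is visible on the level of the $j$-invariant: for $y^2 = x^3 + px + q$ one has $j = 1728 \cdot 4p^3/(4p^3 + 27q^2)$, and substituting $p = -27\lambda^4 d_4$ and $q = -54\lambda^6 d_6$, the parameter $\lambda$ cancels. A symbolic sketch:

```python
import sympy as sp

lam, d4, d6 = sp.symbols('lambda d4 d6')

# Weierstrass coefficients of y^2 = x^3 - 27*lam^4*d4*x - 54*lam^6*d6.
p = -27 * lam**4 * d4
q = -54 * lam**6 * d6

# j-invariant of y^2 = x^3 + p*x + q.
j = sp.cancel(1728 * 4*p**3 / (4*p**3 + 27*q**2))

assert lam not in j.free_symbols  # the twisting parameter cancels
assert sp.simplify(j - 1728*d4**3 / (d4**3 - d6**2)) == 0
```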
Conversely, given two numbers $d_4, d_6 \in K$ such that $d_4^3 - d_6^2 \neq 0$, the $\mathrm{GL}(V)^{(-1)} \times \mathbb{G}_m$-orbits of $\Sym^3V$
having {\em relative} invariants $d_4$ and $d_6$ are made up of ternary cubic forms having invariants
$\gamma^4 d_4$ and $\gamma^6 d_6$ for $\gamma \in K^*$. Over an algebraically closed field,
these ternary cubic forms comprise exactly one orbit, but over a general field they may break up into many $K$-orbits.
Given invariants $d_4, d_6 \in K$, one may specify an elliptic curve $E$, say
in the form of \eqref{eq:JacTC}. Then the $\mathrm{GL}(V)^{(-1)} \times \mathbb{G}_m$-orbits of ternary cubic forms over $K$
having associated Jacobian $E$ correspond to pairs $(C,L)$ with Jacobian isomorphic to $E$.
As described more carefully in Appendix~\ref{appendix:torsors}, such pairs $(C,L)$
are twists of the elliptic curve $E$ and the standard degree $3$
line bundle $\mathcal{O}(3 \cdot O)$ where $O$ is the identity point on $E$.
The $K$-rational pairs $(C,L)$ with a choice of isomorphism $\Aut^0(C) \stackrel{\cong}{\longrightarrow} E$ are exactly parametrized up to
isomorphism by $H^1(K,\Theta_{E,3})$.
Thus, the $\mathrm{GL}(V)^{(-1)}$-orbits of $\Sym^3V$ with invariants $d_4$ and $d_6$
are in bijection with the pointed set $H^1(K,\Theta_{E,3})$, where $E$ is the elliptic curve in \eqref{eq:JacTC}.
We recover the following proposition, which is Theorem 2.5 of \cite{fisher-ternarycubics}:
\begin{prop} \label{prop:ternarycubictorsors}
Let $E$ be an elliptic curve over $K$ with Weierstrass form
$$y^2 = x^3 - 27 d_4 x - 54 d_6.$$
Then the set $H^1(K,\Theta_{E,3})$ parametrizes
$\mathrm{GL}(V)^{(-1)}$-equivalence classes of elements of $\Sym^3 V$ with invariants $d_4$ and $d_6$.
\end{prop}
\begin{remark}
The difference between the $\mathrm{GL}(V)^{(-1)}$-orbits of $\Sym^3V$ with invariants $d_4$ and $d_6$ and the $\mathrm{GL}(V)^{(-1)} \times \mathbb{G}_m$-orbits with the same relative invariants is subtle. For example, under the first action, a ternary cubic form $f$ and its negative $-f$ are not generally in the same orbit (but they are in the same $\mathrm{GL}(V)^{(-1)} \times \mathbb{G}_m$-orbit). While $f$ and $-f$ cut out the same genus one curve $C$ in the plane, they correspond to inverse elements in $H^1(K, \Theta_{\Jac(C),3})$; this exactly reflects the $\mathbb{Z}/2\mathbb{Z}$ in the automorphism group of any elliptic curve.
\end{remark}
\begin{remark}
Analogously to Remark \ref{rmk:bqdivisors}, we may replace line bundles with equivalence classes of divisors in the statement of Theorem \ref{thm:tcorbit} and the group $\mathrm{GL}(V)^{(-1)} \times \mathbb{G}_m$ with $\mathrm{PGL}(V) \times \mathbb{G}_m$. Then we obtain a bijection between $\mathrm{PGL}(V) \times \mathbb{G}_m$-orbits of $\Sym^3 V$ and isomorphism classes of pairs $(C,[D])$, where $C$ is a genus one curve with a rational degree $3$ divisor $D$. The stabilizer of a nondegenerate element of $\Sym^3 V$ corresponding to $(C,[D])$ is the automorphism group of $(C,[D])$, namely the group of $K$-points of a possibly non-split extension of $\Aut(\Jac(C))$ by $\Jac(C)[3]$.
In \cite{arulmanjul-tccount}, this bijection is used to prove that the average size of $3$-Selmer groups for elliptic curves over $\mathbb{Q}$, ordered by height, is $4$, implying an improved upper bound of $7/6$ for the average rank of elliptic curves over $\mathbb{Q}$.
\end{remark}
\begin{remark}
As in Remark \ref{rmk:bqM11}, we actually have an isomorphism between the nondegenerate substack of the quotient stack $[\Sym^3 V \otimes (\wedge^3 V)^{-1} / \mathrm{GL}(V) \times \mathbb{G}_m]$ and the quotient stack $[\mathscr{M}_{1,1}/\Theta_{E^{\mathrm{univ}},3}]$, where $\Theta_{E^{\mathrm{univ}},3}$ is the theta group scheme for the universal elliptic curve $E^{\mathrm{univ}}$ over $\mathscr{M}_{1,1}$. See \S \ref{sec:diffbases} for more details.
\end{remark}
\subsection{Pairs of quaternary quadratic forms} \label{sec:deg4}
Next, let $V$ and $W$ be vector spaces of dimensions $4$ and $2$, respectively, over $K$. We study the space $W \otimes \Sym^2 V$. With a choice of bases for both $V$ and $W$, any element of $W \otimes \Sym^2 V$ may be represented as a pair of symmetric $4 \times 4$ matrices, say $A$ and $B$. There is a natural action of $\mathrm{GL}(W) \times \mathrm{GL}(V)$ on this space, and the $\mathrm{SL}(W) \times \mathrm{SL}(V)$-invariants for this space form a polynomial ring generated by two invariants $d_8$ and $d_{12}$ of degrees $8$ and $12$, respectively \cite{ankim}. In particular, the $\mathrm{SL}(V)$-invariants of the element represented by the pair $(A,B)$ of symmetric matrices are the coefficients of the binary quartic form $q(x,y) = \det (Ax + By)$ (see, e.g., \cite{ankim,merrimansikseksmart}), and the $\mathrm{SL}(W)$-invariants of this binary quartic form $q(x,y)$ are the polynomials $I(q)$ and $J(q)$ from~\S\ref{sec:binaryquartics}.
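As a toy illustration (the pencil here is our own choice), two diagonal quadrics in $\mathbb{P}^3$ give a binary quartic $\det(Ax + By)$ with four distinct roots, hence nonzero discriminant $\Delta = 4I^3 - J^2$, so the base locus of the pencil is a smooth genus one curve:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A pencil of quadrics on P^3, spanned by two diagonal Gram matrices.
A = sp.diag(1, 1, 1, 1)
B = sp.diag(1, 2, 3, 4)

# The binary quartic of the pencil: here (x+y)(x+2y)(x+3y)(x+4y).
q = sp.expand(sp.det(A*x + B*y))

poly = sp.Poly(q, x, y)
a, b, c, d, e = [poly.coeff_monomial(x**(4 - i) * y**i) for i in range(5)]

I = 12*a*e - 3*b*d + c**2
J = 72*a*c*e + 9*b*c*d - 27*a*d**2 - 27*e*b**2 - 2*c**3
Delta = 4*I**3 - J**2

# Distinct roots, so the base locus is a smooth genus one curve.
assert Delta != 0
```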
We describe briefly how nondegenerate elements of this space naturally give genus one curves with degree $4$ line bundles (see \cite{merrimansikseksmart} for an excellent exposition of the details). An element of $W \otimes \Sym^2 V$ represents a pencil (parametrized by $\mathbb{P}(W^\vee)$) of quadrics in $\mathbb{P}(V^\vee)$. If this pencil is nontrivial, its base locus is a curve $C$, which is of genus one if smooth, by adjunction. The curve $C$ is smooth exactly when the discriminant $\Delta(q) = 4I(q)^3 - J(q)^2$ is nonzero, just as for binary quartic forms, and elements of $W \otimes \Sym^2 V$ giving smooth curves are called {\em nondegenerate}. Pulling back $\mathcal{O}_{\mathbb{P}(V^\vee)}(1)$ to the genus one curve $C$ gives a degree $4$ line bundle. Furthermore, the rulings on these quadrics parametrize a double cover $D$ of $\mathbb{P}(W^\vee)$ ramified at the degree $4$ subscheme given by the binary quartic form $q(x,y)$. This curve $D$ is also a genus one curve, and in fact, as elements of $H^1(K,E)$, the class of $D$ is double that of $C$.
On the other hand, starting with a genus one curve $C$ and a degree $4$ line bundle $L$, the embedding of $C$ into $\mathbb{P}(H^0(C,L)^\vee) \cong \mathbb{P}^3$
gives rise to the exact sequence of sheaves
\begin{equation*}
0 \longrightarrow \mathcal{I}_C \longrightarrow \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)} \longrightarrow \mathcal{O}_C \longrightarrow 0,
\end{equation*}
and twisting by $\mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(2)$ shows that $H^0(\mathbb{P}(H^0(C,L)^\vee),\mathcal{I}_C(2))$ is at least $2$-dimensional. It is easy to check
that when $C$ is a smooth irreducible genus one curve, this space is exactly $2$-dimensional (e.g., by computing the free resolution for $\mathcal{O}_C$ over $\mathbb{P}(H^0(C,L)^\vee) \cong \mathbb{P}^3$). We thus obtain a $2$-dimensional subspace of $H^0(\mathbb{P}(H^0(C,L)^\vee), \mathcal{O}(2))$ (the space of quadratic forms on $\mathbb{P}^3$), as desired.
Therefore, we have an isomorphism between nondegenerate elements of $W \otimes \Sym^2 V$ and the moduli problem for quadruples $(C,L,\phi, \psi)$, where $C$ is a genus one curve, $L$ is a degree $4$ line bundle, and $\phi: H^0(C,L) \to V$ and $\psi: H^0(\mathbb{P}(H^0(C,L)^\vee),\mathcal{I}_C(2)) \to W^\vee$ are isomorphisms. The group $\mathrm{GL}(W) \times \mathrm{GL}(V)$ acts on $W \otimes \Sym^2 V$ in the standard way, and it acts on $\phi$ and $\psi$ through its actions on $V$ and $W^\vee$, respectively. Therefore, taking the quotients of both sides of this isomorphism by $\mathrm{GL}(W) \times \mathrm{GL}(V)$ gives a correspondence between the orbits and the isomorphism classes of pairs $(C,L)$.
\begin{thm} \label{thm:deg4orbit}
Let $W$ and $V$ be $K$-vector spaces of dimensions $2$ and $4$, respectively. There exists a bijection between nondegenerate
$\mathrm{GL}(W) \times \mathrm{GL}(V)$-orbits of $W \otimes \Sym^2 V$ and isomorphism classes of pairs $(C,L)$, where $C$ is a smooth irreducible genus one curve and $L$ is
a degree $4$ line bundle on $C$. The stabilizer group $($as a $K$-scheme$)$ of a nondegenerate element of $W \otimes \Sym^2 V$ corresponding to $(C,L)$ is an extension of $\Aut(\Jac(C))$ by $\Theta_{\Jac(C),4}$, where $\Jac(C)$ is the Jacobian of $C$, $\Aut(\Jac(C))$ is its automorphism group as an elliptic curve and $\Theta_{\Jac(C),4}$ is the degree $4$ Heisenberg group of $\Jac(C)$.
\end{thm}
\begin{remark}
As in the ternary cubic case, the automorphism group of a pair $(C,L)$ is made up of the $K$-points of the group scheme given by a possibly non-split extension of $\Aut(\Jac(C))$ by $\Theta_{\Jac(C),4}$. In this case, part of the extension splits more often, e.g., if the curve $C$ has a degree $2$ line bundle $M$, then the pair $(C, M^{\otimes 2})$ has automorphism group $\Theta_{\Jac(C),4} \rtimes \mathbb{Z}/2\mathbb{Z}$ if $\Jac(C)$ does not have $j$-invariant $0$ or $1728$.
\end{remark}
\begin{remark} \label{rmk:deg4divisors}
Again, in Theorem \ref{thm:deg4orbit}, we may replace the line bundle $L$ with the equivalence class of a rational degree $4$ divisor $D$, and the group $\mathrm{GL}(W) \times \mathrm{GL}(V)$ with its quotient by the kernel of the multiplication map $\mathbb{G}_m \times \mathbb{G}_m \to \mathbb{G}_m$, sending $(\gamma_1, \gamma_2)$ to $\gamma_1 \gamma_2^2$. The stabilizer of a nondegenerate element of $W \otimes \Sym^2 V$ corresponding to $(C,[D])$ then consists of the $K$-points of the group scheme given by the induced extension of $\Aut(\Jac(C))$ by $\Jac(C)[4]$.
This correspondence is used in \cite{arulmanjul-4Sel} to compute the average size of the $4$-Selmer group of elliptic curves over $\mathbb{Q}$.
\end{remark}
As in the previous cases, the Jacobian of the curve $C$ corresponding to an element of $W \otimes \Sym^2 V$ is given by its $\mathrm{SL}(W) \times \mathrm{SL}(V)$-invariants $d_8$ and $d_{12}$:
$$\Jac(C): y^2 = x^3 - 27 d_8 x - 27 d_{12}.$$
This follows easily from the fact that $d_8$ and $d_{12}$ are the $\mathrm{SL}(W)$-invariants $I(q)$ and $J(q)$ of the binary quartic form $q$.
\subsection{Quintuples of \texorpdfstring{$5 \times 5$}{5x5} skew-symmetric matrices} \label{sec:deg5}
The degree $5$ problem was studied extensively by Fisher in \cite{fisher-genus1pf, fisher-invts} and much of the following can be deduced from the work of Buchsbaum-Eisenbud in \cite{buchsbaumeisenbud1}. For completeness, we very briefly remind the reader of the constructions involved. In this section, let $K$ be a field not of characteristic $2$, $3$, or $5$.
Let $V$ and $W$ be $5$-dimensional $K$-vector spaces. We consider the space $V \otimes \wedge^2 W$, whose ring of $\mathrm{SL}(V) \times \mathrm{SL}(W)$-invariants is generated by two invariants $d_{20}$ and $d_{30}$ of degrees $20$ and $30$, respectively. An element of $V \otimes \wedge^2 W$, with a choice of bases for $V$ and $W$, may be thought of as five $5 \times 5$ skew-symmetric matrices, or as a single $5 \times 5$ skew-symmetric matrix of linear forms on $V^\vee$. Generically, the $4 \times 4$ sub-Pfaffians of this matrix intersect in a genus one curve in $\mathbb{P}(V^\vee) = \mathbb{P}^4$, and pulling back $\mathcal{O}_{\mathbb{P}(V^\vee)}(1)$ to this curve gives a degree $5$ line bundle on the curve. The construction of the curve and of the degree $5$ line bundle is clearly $\mathrm{GL}(V) \times \mathrm{GL}(W)$-equivariant. The nondegeneracy required here is the nonvanishing of the degree $60$ discriminant formed in the usual way from the generators of the invariant ring.
To construct an element of $V \otimes \wedge^2 W$ from a smooth irreducible genus one curve $C$ and a degree $5$ line bundle $L$, one identifies $V = H^0(C,L)$ and $W = H^0(\mathbb{P}(V^\vee),\mathcal{I}_C(2))$, where $\mathcal{I}_C$ is the ideal sheaf for $C$. Then one immediately obtains a section of $\mathcal{O}_{\mathbb{P}(V^\vee)}(1) \otimes \wedge^2 W$ from the free resolution
$$0 \to \mathcal{O}(-5) \to \mathcal{O}(-3) \otimes W^\vee \to \mathcal{O}(-2) \otimes W \to \mathcal{O} \to \mathcal{O}_C \to 0$$
of $\mathcal{O}_C$ over $\mathbb{P}(H^0(C,L)^\vee)$.
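The classical linear-algebra fact underlying this construction is that, for a $(2n+1) \times (2n+1)$ skew-symmetric matrix, the vector of signed $2n \times 2n$ sub-Pfaffians lies in the kernel. The following sympy sketch (the helper names and the sample matrix are ours) verifies this for a $5 \times 5$ example; it is this identity, applied to a skew-symmetric matrix of linear forms, that makes the five sub-Pfaffians cut out the curve in the Buchsbaum--Eisenbud complex above.

```python
import sympy as sp

def pf4(M):
    # Pfaffian of a 4x4 skew-symmetric matrix
    return M[0, 1]*M[2, 3] - M[0, 2]*M[1, 3] + M[0, 3]*M[1, 2]

def sub_pfaffians(M):
    # signed 4x4 sub-Pfaffians of a 5x5 skew-symmetric matrix:
    # delete row and column i, take the Pfaffian, attach the sign (-1)^i
    return sp.Matrix([(-1)**i * pf4(M.minor_submatrix(i, i))
                      for i in range(5)])

# a sample skew-symmetric 5x5 matrix
upper = {(0, 1): 1, (0, 2): 2, (0, 3): 3, (0, 4): 5, (1, 2): 7,
         (1, 3): 11, (1, 4): 13, (2, 3): 17, (2, 4): 19, (3, 4): 23}
M = sp.zeros(5, 5)
for (i, j), v in upper.items():
    M[i, j], M[j, i] = v, -v

p = sub_pfaffians(M)
# the signed sub-Pfaffian vector lies in (and here spans) the kernel of M
assert M * p == sp.zeros(5, 1)
```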
Therefore, there is an isomorphism between elements of $V \otimes \wedge^2 W$ and the moduli problem parametrizing $(C,L,\phi,\psi)$, where $C$ is a genus one curve, $L$ is a degree $5$ line bundle, and $\phi: H^0(C,L) \to V$ and $\psi: H^0(\mathbb{P}(H^0(C,L)^\vee), \mathcal{I}_C(2)) \to W$ are isomorphisms. Quotienting both sides by the natural actions of $\mathrm{GL}(V) \times \mathrm{GL}(W)$ gives
\begin{thm} \label{thm:deg5moduli}
Let $V$ and $W$ be $5$-dimensional $K$-vector spaces. Then there is a canonical bijection between nondegenerate $\mathrm{GL}(V) \times \mathrm{GL}(W)$-orbits of $V \otimes \wedge^2 W$ and isomorphism classes of pairs $(C,L)$, where $C$ is a genus one curve and $L$ is a degree $5$ line bundle on $C$. The stabilizer group $($as a $K$-scheme$)$ of a nondegenerate element of $V \otimes \wedge^2 W$ corresponding to $(C,L)$ is an extension of $\Aut(\Jac(C))$ by $\Theta_{\Jac(C),5}$, where $\Jac(C)$ is the Jacobian of $C$, $\Aut(\Jac(C))$ is its automorphism group as an elliptic curve, and $\Theta_{\Jac(C),5}$ is the degree $5$ Heisenberg group of $\Jac(C)$.
\end{thm}
\begin{remark} \label{rmk:deg5divisors}
As in the previous three cases, Theorem \ref{thm:deg5moduli} may be rephrased using isomorphism classes of pairs $(C,[D])$, where $D$ is a degree $5$ rational divisor on $C$, and the group $\mathrm{GL}(V) \times \mathrm{GL}(W)$ is replaced by its quotient by the kernel of the multiplication map $\mathbb{G}_m \times \mathbb{G}_m \to \mathbb{G}_m$ sending $(\gamma_1, \gamma_2)$ to $\gamma_1 \gamma_2^2$. This correspondence is used in \cite{arulmanjul-5Sel} to determine the average size of the $5$-Selmer group of elliptic curves over $\mathbb{Q}$.
\end{remark}
Again, the Jacobian of the curve $C$ associated to a nondegenerate element of $V \otimes \wedge^2 W$ is given by the $\mathrm{SL}(V) \times \mathrm{SL}(W)$-invariants $d_{20}$ and $d_{30}$:
$$\Jac(C) : y^2 = x^3 + d_{20} x + d_{30}.$$
See \cite{fisher-invts} for details and methods for computing these invariants.
\subsection{Some remarks on different base schemes} \label{sec:diffbases}
This short section may be safely skipped by readers interested in the main theorems of this paper over fields $K$ (as opposed to more general base rings or schemes). Here, we simply comment on how one can vary the base scheme when studying the moduli problems of Section \ref{sec:classical}.
All of the constructions discussed in this section may be made precise over arbitrary $\mathbb{Z}[1/30]$-schemes $S$. In particular, let $\mathcal{M}_d$ denote the moduli stack of genus one curves with degree $d$ line bundles. Then for $2 \leq d \leq 5$, we claim that $\mathcal{M}_d$ is isomorphic to an open substack of a certain quotient stack $[V/G]$ for a group $G$ and representation $V$ of $G$. The constructions are straightforward generalizations of those over fields in \S \ref{sec:binaryquartics} through \S \ref{sec:deg5}, e.g., as in Remark \ref{rmk:bqM11}.
For example, the case of ternary cubics is discussed in detail in \cite[\S 2.A.2]{wei-thesis}: the idea is that the $T$-points of the double quotient $[\mathbb{G}_m \setminus \Sym^3 V \otimes (\wedge^3 V)^{-1} / \mathrm{GL}(V)]$ are triples $(\mathcal{V}, L_T, f)$, where $\mathcal{V}$ is a rank $3$ vector bundle over $T$, $L_T$ is a line bundle on $T$, and $f$ is a section of $\Sym^3 \mathcal{V} \otimes (\wedge^3 \mathcal{V})^{-1} \otimes L_T$, and nondegenerate triples correspond to genus one curves over $T$ with degree $3$ line bundles. The curve $C$ is the zero locus of the section $f$ in $\mathbb{P}(\mathcal{V}^\vee)$, and the line bundle $L$ is the pullback of $\mathcal{O}_{\mathbb{P}(\mathcal{V}^\vee)}(1)$ via $C \to \mathbb{P}(\mathcal{V}^\vee)$; conversely, given the curve $\pi: C \to T$ and $L$, the construction gives the vector bundle $\pi_*L$ with $L_T := \pi_* \Omega^1_{C/T}$ and an appropriate section.
In each of these cases $2 \leq d \leq 5$, because the groups $G$ appearing have vanishing Galois cohomology group $H^1(K,G)$, this description of $\mathcal{M}_d$ as an open substack of $[V/G]$ immediately implies that isomorphism classes of objects parametrized by $\mathcal{M}_d(K)$ are in bijection with the nondegenerate elements of $V(K)/G(K)$. The situation is quite different for the moduli space $\mathcal{N}_d$ of genus one curves with degree $d$ rational divisor classes. For $2 \leq d \leq 5$, it is also possible to describe $\mathcal{N}_d$ as an open substack of a quotient stack $[V/G]$; however, there are genus one curves over $K$ with degree $d$ rational divisor classes that cannot be represented by an element of $V(K)$. The reason is that these groups $G$ no longer have connected centers, so the quotient map $V \to \mathcal{N}_d$ is not necessarily surjective on $K$-points.
In the remainder of this paper, even though we restrict our attention to working over a base field $K$ not of characteristic $2$ or $3$, most of the constructions we describe may be generalized to other base schemes. Just as for $\mathcal{M}_d$ here, one may show that the moduli spaces we encounter, of genus one curves with extra data, may be given as open substacks of quotient stacks.
\section{Representations associated to degree \texorpdfstring{$3$}{3} line bundles}
\label{sec:hermRC}
In this section, we study a class of representations whose orbits are related to genus one curves with degree $3$ line bundles. The main results in this section are summarized in Section \ref{sec:RCpreview}.
We begin by studying the space of Rubik's cubes, which is one of the fundamental examples of this paper, and some of its simpler variants. The orbit space for Rubik's cubes (also called $3 \times 3 \times 3$ boxes) is related to the moduli space of genus one curves with two non-isomorphic degree~$3$ line bundles. This may also be identified with the moduli space of genus one curves with a single degree~$3$ line bundle and one nonzero point on the Jacobian (lying in the appropriate period-index subgroup).
In \S \ref{sec:cubicjordan}, we then recall the theory of cubic Jordan algebras, in preparation for the general case. The main general theorem is in \S \ref{sec:deg3moduli}, where we describe how the orbit spaces of these representations, built from cubic Jordan algebras, are moduli spaces for genus one curves with degree $3$ line bundles and additional vector bundles.
In later subsections, we then explain how to recover the earlier cases from the general theorem, and we also describe another special case that gives rank $2$
vector bundles on genus one curves.
Many of the orbit problems described in this section
are used in \cite{cofreecounting}
to determine average sizes of $3$-Selmer groups in certain families of elliptic curves over $\mathbb{Q}$.
\subsection{Rubik's cubes, or \texorpdfstring{$3 \times 3 \times 3$}{3x3x3} boxes} \label{sec:333}
We study an important example of a representation whose orbits naturally produce
genus one curves with degree $3$ line bundles. This representation was studied by K.\;Ng in
\cite{ng} over $\mathbb{C}$, and we extend his results to more general fields $K$.
The main theorems of this section (Theorem \ref{thm:333bij} and Corollary \ref{cor:333CLP})
are joint with C.\;O'Neil.
Let $V_1$, $V_2$, and $V_3$ be three-dimensional $K$-vector spaces, so $G := \mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3)$
acts on the representation $V := V_1 \otimes V_2 \otimes V_3$.
The following theorem, which is proved for $K=\mathbb{C}$ in \cite{ng},
describes the orbits of these ``Rubik's cubes'' over more general fields $K$.
Here, nondegeneracy is equivalent to the nonvanishing of a degree $36$ invariant,
which we will describe in more detail below.
\begin{thm} \label{thm:333bij}
Let $V_1$, $V_2$, and $V_3$ be $3$-dimensional vector spaces over $K$. Then
nondegenerate $G$-orbits of $V_1 \otimes V_2 \otimes V_3$ are in bijection
with isomorphism classes of quadruples $(C,L_1,L_2,L_3)$, where $C$ is a genus one curve over $K$ and $L_1$, $L_2$, and $L_3$
are degree $3$ line bundles on $C$ with $L_1$ not isomorphic to $L_2$ or $L_3$ and satisfying $L_1^{\otimes 2} \cong L_2 \otimes L_3$.
\end{thm}
Note that the action of the group $G$ on $V$ is clearly not faithful: the kernel of the multiplication map
$$\mathbb{G}_m(V_1) \times \mathbb{G}_m(V_2) \times \mathbb{G}_m(V_3) \to \mathbb{G}_m \subset \mathrm{GL}(V),$$
where each $\mathbb{G}_m(V_i)$ is the group of scalar transformations of $V_i$ for $i = 1,2,3$, fixes
every element in $V_1 \otimes V_2 \otimes V_3$. For the sole purpose of classifying the orbits, this does not
matter, since the orbits of $G$ and of the quotient by this kernel on the representation $V$ are identical.
However, this kernel, isomorphic to $\mathbb{G}_m^2$, shows up as part of the automorphism group of the geometric data; in particular, the stabilizer in $G$ of a generic nondegenerate element of $V_1 \otimes V_2 \otimes V_3$ giving the curve $C$ corresponds to the $K$-points of an extension $H$ of the group scheme $\Jac(C)[3]$ by this $\mathbb{G}_m^2$. More generally, the stabilizer of a nondegenerate element consists of the $K$-points of a possibly non-split extension of $\Aut(\Jac(C))$ by the group scheme $H$.
For each nondegenerate element, the stabilizer group coincides exactly with the group of automorphisms of the geometric data (if we also record the isomorphism $L_1^{\otimes 2} \cong L_2 \otimes L_3$ in Theorem \ref{thm:333bij}).
The rest of this section is devoted to describing the construction of the genus one curve and its line bundles from an orbit. In particular, we prove Theorem \ref{thm:333bij}.
\subsubsection{Geometric construction} \label{sec:RCgeom}
We first describe the construction of the genus one curve and the line bundles from a $G$-orbit.
Let $\AA \in V = V_1 \otimes V_2 \otimes V_3$, so $\AA$ induces a linear map from $V_1^\vee$ to $V_2 \otimes V_3$. There is a natural {\em determinantal} cubic hypersurface $Y$ in $\mathbb{P}(V_2 \otimes V_3)$; with a choice of bases for $V_2$ and $V_3$, elements of $V_2 \otimes V_3$ may be viewed as $3 \times 3$ matrices, and $Y$ is cut out by the vanishing of the determinant. Then the intersection of $Y$ with the image of $\mathbb{P}(V_1^\vee) \to \mathbb{P}(V_2 \otimes V_3)$ is generically a cubic curve $C_1$ on the image of $\mathbb{P}(V_1^\vee)$, given as the vanishing of a covariant ternary cubic form in $\Sym^3 V_1$.
In other words, the curve $C_1$ is a determinantal variety, given by the determinant of a matrix of linear forms on $\mathbb{P}(V_1^\vee)$. Explicitly, with a choice of bases for $V_1$, $V_2$, and $V_3$, one may represent $\AA$ as a $3 \times 3 \times 3$ cube $(a_{rst})_{1 \leq r,s,t \leq 3}$ of elements $a_{rst} \in K$. Then this ternary cubic form in $\Sym^3 V_1$ may be written simply as
$$f_1(v) := \det (\AA(v,\cdot,\cdot)),$$
for $v \in V_1^\vee$. One may similarly define cubic curves $C_2 \subset \mathbb{P}(V_2^\vee)$ and $C_3 \subset \mathbb{P}(V_3^\vee)$,
cut out by ternary cubic forms $f_2 \in \Sym^3 V_2$ and $f_3 \in \Sym^3 V_3$.
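The three cubic forms obtained by slicing a cube can be computed directly. The following sympy sketch (the helper names and the sample cube are ours) builds $f_1$, $f_2$, $f_3$ from a $3 \times 3 \times 3$ array; applying a unimodular row operation along one direction leaves the cubics in the other two directions literally unchanged, reflecting their $\mathrm{SL}$-invariance.

```python
import sympy as sp

v1, v2, v3 = sp.symbols('v1 v2 v3')
v = (v1, v2, v3)

# a sample 3x3x3 cube a[r][s][t] of integers
a = [[[((r + 2*s + 3*t) % 5) - 1 for t in range(3)] for s in range(3)]
     for r in range(3)]

def slice_det(a, axis, v):
    # contract the cube with v along the given axis, then take the
    # determinant of the resulting 3x3 matrix of linear forms
    def entry(s, t):
        total = 0
        for k in range(3):
            i = [s, t][:axis] + [k] + [s, t][axis:]
            total += v[k] * a[i[0]][i[1]][i[2]]
        return total
    return sp.expand(sp.Matrix(3, 3, entry).det())

f1 = slice_det(a, 0, v)   # ternary cubic cutting out C_1
f2 = slice_det(a, 1, v)   # C_2
f3 = slice_det(a, 2, v)   # C_3
```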
We call a Rubik's cube $\AA$ {\em nondegenerate} if the variety $C_1$
(equivalently, $C_2$ or $C_3$) thus defined is smooth and one-dimensional, which
corresponds to the nonvanishing of a degree $36$ polynomial in
$a_{rst}$. This polynomial is called the {\em discriminant} of the
Rubik's cube $\AA$, and it coincides with the usual degree $12$
discriminant $\Delta(f_1)$ of the ternary cubic form $f_1$ (which is
equal to $\Delta(f_2)$ and $\Delta(f_3)$); note that the coefficients of $f_1$ are themselves cubic in the $a_{rst}$, so both descriptions give degree $36$.
If $\AA$ is nondegenerate, then the degree $3$ plane curve $C_1$ is smooth
of genus one. For each point $w^\dagger \in C_1$, we claim that
the singular matrix $\AA(w^\dagger,\cdot,\cdot)$ has rank exactly $2$. If
not, then all the $2 \times 2$ minors of $\AA(w^\dagger,\cdot,\cdot)$ would vanish,
and so would all the partial derivatives
\begin{equation*}
\left. \frac{\partial f_1}{\partial w_i} \right|_{w=w^\dagger} = \sum_{s,t=1}^3 a_{ist} A_{st}^*(w^\dagger),
\end{equation*}
where $A_{st}^*(w^\dagger)$ is the $(s,t)$ cofactor (signed $2 \times 2$ minor) of
$\AA(w^\dagger,\cdot,\cdot)$. Thus, since $C_1$ was assumed to be smooth, we see that
the rank of the matrix $\AA(w^\dagger,\cdot,\cdot)$ cannot drop below $2$.
In other words, the nondegeneracy condition is equivalent to the condition that the image of $\mathbb{P}(V_1^\vee)$ in $\mathbb{P}(V_2 \otimes V_3)$
not intersect the image of the Segre variety $\mathbb{P}(V_2) \times \mathbb{P}(V_3) \hookrightarrow \mathbb{P}(V_2 \otimes V_3)$.
In the sequel, we assume that $\AA$ is nondegenerate. (Note that nondegeneracy is preserved
by the group action.)
Given a nondegenerate Rubik's cube $\AA$, define the variety
\begin{equation*}
C_{12} := \{(w,x) \in \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee) : \AA(w,x,\cdot) = 0 \} \subset \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee).
\end{equation*}
Because $\AA$ is a trilinear form, the locus on which
$\AA(w,x,\cdot)$ vanishes in $V_1^\vee \times V_2^\vee$ is invariant under scaling each factor, so this is a
well-defined locus in $\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)$. Since
$\AA$ is nondegenerate, the projection
\begin{equation*}
C_{12} \longrightarrow \mathbb{P}(V_1^\vee)
\end{equation*}
is an isomorphism onto $C_1$. The inverse map takes a point $w \in
C_1 \subset \mathbb{P}(V_1^\vee)$ to the pair $(w,x) \in \mathbb{P}(V_1^\vee) \times
\mathbb{P}(V_2^\vee)$, where $x$ corresponds to the exactly one-dimensional
kernel of the linear map $\AA(w,\cdot,\cdot) \in V_2 \otimes V_3 \cong \Hom
(V_2^\vee,V_3)$. This map $C_1 \to C_{12}$ is algebraic, as this
kernel is given as a regular map by the $2 \times 2$ minors of the
matrix $\AA(w,\cdot,\cdot)$. Moreover, by dimension considerations, the
curve $C_{12}$ is the complete intersection of three bidegree $(1,1)$
forms on $\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee) = \mathbb{P}^2 \times \mathbb{P}^2$.
Similarly, the projection from $C_{12}$ to $\mathbb{P}(V_2^\vee)$ is an
isomorphism onto $C_2$, which shows that $C_1$ and $C_2$ are
isomorphic.
We may also consider the curves
\begin{align*}
C_{13} &:= \{(w,y) \in \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_3^\vee) : \AA(w,\cdot,y) = 0 \} \\
C_{23} &:= \{(x,y) \in \mathbb{P}(V_2^\vee) \times \mathbb{P}(V_3^\vee) : \AA(\cdot, x, y) = 0 \}
\end{align*}
and the analogous maps between $C_i$, $C_3$, and $C_{i3}$ for $i = 1$ or $2$ are also
isomorphisms. Thus, all the curves $C_1$, $C_2$, $C_3$, $C_{12}$,
$C_{13}$, and $C_{23}$ are isomorphic, and the nondegeneracy condition is equivalent
to the smoothness of any or all of these curves. The diagram
\begin{equation} \label{eq:RCcurvediag}
\raisebox{5\baselineskip}{
\xymatrix@R=15pt@C=12pt@M=5pt{
& & \mathbb{P}(V_1^\vee) \\
C_{12} \ar[rr]^{\pi_1^2} \ar[rdd]_{\pi_2^1} & & C_1 \ar@{^{(}->}[u]_{\iota_1} \ar@<0.5ex>[ldd]^{\tau_1^2} \ar@<0.5ex>[rdd]^{\tau_1^3} & & C_{13} \ar[ll]_{\pi_1^3} \ar[ldd]^{\pi_3^1} \\
\\
& C_2 \ar@{^{(}->}[ld]_{\iota_2} \ar@<0.5ex>[ruu]^{\tau_2^1} \ar@<0.5ex>[rr]^{\tau_2^3} & & C_3 \ar@{^{(}->}[rd]^{\iota_3} \ar@<0.5ex>[luu]^{\tau_3^1} \ar@<0.5ex>[ll]^{\tau_3^2} \\
\mathbb{P}(V_2^\vee) & & & & \mathbb{P}(V_3^\vee) \\
& & C_{23} \ar[uul]^{\pi_2^3} \ar[uur]_{\pi_3^2}
}} \end{equation}
summarizes the relationships between these curves. By construction,
the maps $\tau_i^j$ and $\tau_j^i$ are inverses to one another. These
maps from the curve $C_1$ to each projective space give the degree $3$
line bundles
\begin{align*}
L_1 &:= \iota_1^* \mathcal{O}_{\mathbb{P}(V_1^\vee)}(1) \\
L_2 &:= (\iota_2 \circ \tau_1^2)^* \mathcal{O}_{\mathbb{P}(V_2^\vee)}(1) \\
L_3 &:= (\iota_3 \circ \tau_1^3)^* \mathcal{O}_{\mathbb{P}(V_3^\vee)}(1)
\end{align*}
on $C_1$. For $1 \leq i \leq 3$, the full $3$-dimensional space of sections of
the degree $3$ bundle $L_i$ arises from pulling back sections of
$\mathcal{O}_{\mathbb{P}(V_i^\vee)}(1)$.
\begin{lemma} \label{lem:L1notL2orL3}
The degree $3$ line bundle $L_1$ on $C_1$ is not isomorphic to either
of the line bundles $L_2$ or $L_3$.
\end{lemma}
\begin{proof}
It suffices, by symmetry, to show that $L_1$ and $L_2$
are not isomorphic line bundles. If $L_1 \cong L_2$, then the curve
$C_{12}$ would lie on a diagonal of $\mathbb{P}^2 \times \mathbb{P}^2 =
\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)$, and with an identification
of the bases for $V_1$ and $V_2$, we have $\AA(w,w,\cdot) = 0$ for all $w
\in C_1$. Because $C_1$ spans $\mathbb{P}(V_1^\vee)$, we must
have that $\AA(\cdot,\cdot,y)$ is a skew-symmetric $3 \times 3$ matrix for any
$y \in \mathbb{P}(V_3^\vee)$. Since odd-dimensional skew-symmetric matrices
have determinant zero, we would have $C_3 = \mathbb{P}(V_3^\vee)$, which is a
contradiction.
\end{proof}
\begin{lemma} \label{lem:RCrelation}
The line bundles $L_1, L_2, L_3$ on $C_1$ defined above
satisfy the relation
\begin{equation} \label{eq:RCrelation}
L_1 \otimes L_1 \cong L_2 \otimes L_3.
\end{equation}
\end{lemma}
\begin{proof}
For $w \in C_1 \subset \mathbb{P}(V_1^\vee)$, the coordinates of
$\tau_1^2(w) \in \mathbb{P}(V_2^\vee)$ are given by the $2 \times 2$ minors
$A_{ij}^*(w)$ of $\AA(w,\cdot,\cdot)$ for some fixed $j$ for which not all
$A_{ij}^*(w)$ vanish. Let $D_2$ be an effective degree $3$ divisor
on $C_1$ such that $\mathcal{O}(D_2) \cong L_2$. Then the points of $D_2$
(defined over an appropriate extension of $K$) are the preimage on $C_1$
of the intersection of a hyperplane with the image of the curve $C_{12}$
in $\mathbb{P}(V_2^\vee)$; in particular,
we may choose $D_2$, without loss of generality, to be the divisor
defined by the locus where a particular minor, say $A_{11}^*(w)$,
vanishes on the curve $C_1$ but at least one $A_{i1}^*(w)$ is
nonzero. Similarly, we may choose a divisor $D_3$ such that
$\mathcal{O}(D_3) \cong L_3$ to be the points $w \in C_1$ where $A_{11}^*(w)
= 0$ but not all $A_{1j}^*(w)$ vanish. Then
the points of the degree $6$ effective divisor $D_2 +
D_3$ are exactly the intersection of the curve $C_1$ and
$A_{11}^*(w) = 0$, and the line bundle $\mathcal{O}(D_2 + D_3)$ is isomorphic
to the pullback of $\mathcal{O}_{\mathbb{P}(V_1^\vee)}(2)$ to $C_1$.
\end{proof}
The composition maps arising from traversing the inner triangle in \eqref{eq:RCcurvediag}, such as
\begin{equation*}
\alpha_{123} := \tau_3^1 \circ \tau_2^3 \circ \tau_1^2 : C_1 \longrightarrow C_1,
\end{equation*}
are not the identity map. A straightforward calculation
using Lemma \ref{lem:RCrelation} and its symmetric analogues shows that the automorphism
$\alpha_{123}$ of $C_1$ is given by translation by the point $P_{123}$ in $\Jac(C_1)$
corresponding to the degree $0$ line bundle $L_2 \otimes L_1^{-1} \in \Pic^0(C_1)$.
More generally, the analogous three-cycle $\alpha_{ijk}$ is the automorphism of $C_i$ given by translation
by a point $P_{ijk} \in \Jac(C_i)$, where $P_{ijk}$ is the image of the point $\mathrm{sgn}(ijk) P_{123} = \pm P_{123} \in \Jac(C_1)$
under the canonical isomorphism $\Jac(C_1) \to \Jac(C_i)$.
\subsubsection{Bijections}
Because the geometric constructions in the previous section are $G$-invariant,
we have shown that a $G$-orbit of $V$ produces a genus one curve $C$ with three
degree $3$ line bundles $L_1$, $L_2$, $L_3$, such that $L_1^2 \cong L_2 \otimes L_3$
and $L_1$ is not isomorphic to $L_2$ or $L_3$. We show that this data exactly
determines a $G$-orbit of $V$.
\begin{proof}[Proof of Theorem $\ref{thm:333bij}$]
We have already shown that there is a well-defined map $\Phi$ from
$G$-orbits of nondegenerate elements of $V$ to the listed
geometric data. In the other direction, given such a quadruple $(C,
L_1, L_2, L_3)$, we consider the multiplication map (i.e., the cup
product on cohomology)
\begin{equation}
\mu_{12}: H^0(C,L_1) \otimes H^0(C,L_2) \longrightarrow H^0(C,L_1 \otimes L_2).
\end{equation}
A simple case of \cite[Theorem 6]{mumford} shows that $\mu_{12}$ is
surjective.
Thus, by Riemann-Roch, the kernel of $\mu_{12}$ has dimension $9 - 6
= 3$. Now let $V_1 := H^0(C,L_1)$, \,$V_2 := H^0(C,L_2)$, and $V_3 :=
(\ker(\mu_{12}))^\vee$, so the injection
\begin{equation*}\label{kercube}
\ker(\mu_{12}) \hookrightarrow H^0(C,L_1) \otimes H^0(C,L_2)
\end{equation*}
gives an element of $\Hom(\ker(\mu_{12}), H^0(C,L_1) \otimes
H^0(C,L_2)) \cong V_1 \otimes V_2 \otimes V_3$.
If quadruples $(C,L_1,L_2,L_3)$ and $(C',L_1',L_2',L_3')$ are equivalent, then there
is an isomorphism $\sigma: C \to C'$ such
that $\sigma^* L_i' \cong L_i$ for $1 \leq i \leq 3$. The isomorphisms
induced on the spaces of sections, e.g., $H^0(C,L_1)
\stackrel{\cong}{\longrightarrow} H^0(C',L_1')$, commute with the multiplication
maps, so the Rubik's cubes constructed by their kernels differ only
by choices of bases.
We check that the two functors between $G$-orbits of $V$ and
the equivalence classes of quadruples are inverse to one another.
Given a quadruple $(C, L_1, L_2, L_3)$ of the appropriate type,
let the images of the natural embeddings be $C_1 \subset \mathbb{P}(H^0(C,L_1)^\vee)$,
$C_2 \subset \mathbb{P}(H^0(C,L_2)^\vee)$, and $C_{12} \subset \mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee)$.
We construct the trilinear form $\AA \in H^0(C,L_1)
\otimes H^0(C,L_2) \otimes (\ker \mu_{12})^\vee$ as above. Now let
\begin{align*}
C_1' &:= \{w \in \mathbb{P}(H^0(C,L_1)^\vee) : \det \AA(w,\cdot,\cdot) = 0 \} &&\subset \mathbb{P}(H^0(C,L_1)^\vee) \\
C_2' &:= \{x \in \mathbb{P}(H^0(C,L_2)^\vee) : \det \AA(\cdot,x,\cdot) = 0 \} &&\subset \mathbb{P}(H^0(C,L_2)^\vee) \\
C_{12}' &:= \{(w,x) \in \mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee)&&\hspace{-20pt} : \AA(w,x,\cdot) = 0 \} \\
&&&\subset \mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee)
\end{align*}
be the varieties cut out by the trilinear form $\AA(\cdot,\cdot,\cdot)$.
We claim that $C_1 = C_1'$, \,$C_2 = C_2'$, and $C_{12} = C_{12}'$ as sets
and thus as varieties, which implies that the curve $C_1'$ is
isomorphic to $C$ and that the line bundles on $C_1'$ defined as
pullbacks of $\mathcal{O}(1)$ on $\mathbb{P}(H^0(C,L_1)^\vee)$ and
$\mathbb{P}(H^0(C,L_2)^\vee)$ are isomorphic to the pullbacks of $L_1$ and
$L_2$, respectively, via the isomorphism $C \stackrel{\cong}{\longrightarrow}
C_1'$. For all $(w, x) \in C_{12}$, the construction
of $\AA$ as the kernel of $\mu_{12}$ implies that
$\AA(w, x,\cdot) = 0$, so $C_{12}'$ contains $C_{12}$ and
also $C_1' \supset C_1$ and $C_2' \supset C_2$.
Now either the polynomial $\det \AA(w,\cdot,\cdot)$ or $\det \AA(\cdot,x,\cdot)$ is
not identically $0$. If they both were identically $0$, then
$\AA(w,x,\cdot) = 0$ for all $(w,x) \in \mathbb{P}(H^0(C,L_1)^\vee) \times
\mathbb{P}(H^0(C,L_2)^\vee)$, which contradicts the fact that $\AA$ must
have nonzero tensor rank by construction. Without loss of
generality, assume $\det \AA(w,\cdot,\cdot)$ is not identically zero. Then
both $C_1'$ and $C_1$ are given by nonzero degree $3$ polynomials and
thus define the same variety, so $C_1'$ is a smooth irreducible genus
one curve in $\mathbb{P}^2 = \mathbb{P}(H^0(C,L_1)^\vee)$. Because $C_1'$ is
smooth, the trilinear form $\AA$ is nondegenerate, and $C_{12}'$ is
also smooth and irreducible, hence exactly the same variety as $C_{12}$.
It remains to show that the geometric data coming from a Rubik's cube produces the $G$-orbit
of the same cube again. Given nondegenerate $\AA \in V_1 \otimes V_2 \otimes V_3$, where $V_i$ are
$3$-dimensional vector spaces for $1 \leq i \leq 3$, we have
described the associated quadruple $(C, L_1, L_2, L_3)$ as the image
of~$\Phi$. Then the vector spaces $V_i$ and $H^0(C,L_i)$ are
naturally isomorphic for $i = 1$, $2$, and $V_3^\vee$ can be identified
with the kernel of $\mu_{12}$. With
these identifications, the Rubik's cube constructed from this
quadruple is well-defined and $G$-equivalent to the original
$\AA$.
\end{proof}
We may also rewrite the geometric data in Theorem \ref{thm:333bij} in terms of $K$-points
on the Jacobian of~$C$. Indeed, instead of keeping track of the line bundles
$L_2$ and $L_3$, it suffices to record their difference as a point
in the Jacobian. However, not all points $P \in \Jac(C)(K)$ arise
in this way---only those that are the difference of two degree $3$ line bundles. The point $P$ below is exactly
the same point on $\Jac(C)$ by which the curve is translated via the automorphism~$\alpha_{123}$:
\begin{cor} \label{cor:333CLP}
Let $V_1$, $V_2$, and $V_3$ be $3$-dimensional vector spaces over $K$. Then
nondegenerate $G$-orbits of $V_1 \otimes V_2 \otimes V_3$ are in bijection
with isomorphism classes of triples $(C,L,P)$, where $C$ is a genus one curve over $K$, $L$ is a degree
$3$ line bundle on $C$, and $P \neq 0$ is a point of $\Jac(C)(K) = \Pic^0(C)(K)$ that is
the difference of two elements of $\Pic^3(C)(K)$.
\end{cor}
In Appendix~\ref{appendix:torsors}, we show in Proposition \ref{prop:periodindexsubgp} that
such points $P \in \Jac(C)(K)$ are exactly the nonzero points in the period-index subgroup $\Jac_C^3(K)$,
yielding Theorem \ref{thm:RCparam1}.
\subsubsection{Invariant theory} \label{sec:333invthy}
The ring of $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2) \times \mathrm{SL}(V_3)$-invariants of the representation $V_1 \otimes V_2 \otimes V_3$
for three-dimensional vector spaces $V_1$, $V_2$, $V_3$ is freely generated by polynomial
invariants of degrees $6$, $9$, and $12$. These have interpretations
in terms of the geometric data described in the bijection of
Corollary \ref{cor:333CLP}.
\begin{prop} \label{prop:333invthy}
There exists a choice of normalization for the relative invariants $c_6$, $c_9$, $c_{12}$
such that given a nondegenerate tensor in $V_1 \otimes V_2 \otimes V_3$
corresponding to $(C,L,P)$ as in Corollary~$\ref{cor:333CLP}$, the Jacobian of $C$
may be expressed in generalized Weierstrass form as
\begin{equation} \label{eq:333EC}
E : y^2 + c_9 y = x^3 + c_6 x^2 + c_{12} x
\end{equation}
and the point $P$ on $E$ is given by $(0,0)$.
\end{prop}
The $\mathrm{SL}(V_1)$-invariants of the ternary cubic $f = f_1$ given by $(C,L)$ are also clearly invariants
for the whole space $V_1 \otimes V_2 \otimes V_3$, since $f$ is fixed under the action of
$\mathrm{SL}(V_2) \times \mathrm{SL}(V_3)$. The polynomials $d_4(f)$ and $d_6(f)$
have degrees $12$ and $18$ as invariants of $V_1 \otimes V_2 \otimes V_3$. One may check
that with the normalizations above, we have
\begin{eqnarray*}
d_4(f) = 16 c_6^2 - 48 c_{12} & \textrm{and} & d_6(f) = - 64 c_6^3 - 216 c_9^2 + 288 c_6 c_{12}
\end{eqnarray*}
so that $(E,P)$ may be taken by linear changes of variables to the elliptic curve
$$E': y^2 = x^3 - 27 d_4(f) x - 54 d_6(f)$$
with the point $P$ becoming $(12 c_6, 108 c_9)$ on $E'$.
This recovers the interpretation of the invariants described in~\S\ref{sec:ocrc}.
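These identities can be checked by direct substitution: the point $(12c_6, 108c_9)$ satisfies the equation of $E'$ identically in $c_6$, $c_9$, $c_{12}$. A quick numerical sketch in Python (the function name is ours):

```python
import random

def point_on_E_prime(c6, c9, c12):
    """Check that (12*c6, 108*c9) lies on E': y^2 = x^3 - 27*d4*x - 54*d6,
    where d4 and d6 are given by the stated polynomial relations."""
    d4 = 16 * c6**2 - 48 * c12
    d6 = -64 * c6**3 - 216 * c9**2 + 288 * c6 * c12
    x, y = 12 * c6, 108 * c9
    return y**2 == x**3 - 27 * d4 * x - 54 * d6

# The identity holds for all (c6, c9, c12), so random integers suffice.
assert all(point_on_E_prime(random.randint(-1000, 1000),
                            random.randint(-1000, 1000),
                            random.randint(-1000, 1000))
           for _ in range(100))
```

Expanding the right-hand side, the $c_6^3$ terms cancel ($1728 - 5184 + 3456 = 0$), as do the $c_6 c_{12}$ terms, leaving exactly $11664\,c_9^2 = (108 c_9)^2$.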
One proof of the above proposition is obtained by computing the expression for
the point $P$ in terms of the orbit. We omit these computations, as all the invariants
have a very large number of terms!%
\footnote{The degree $9$ invariant is also known as the Strassen invariant and has a simple
closed form expression \cite{sturmfels}. If $\phi$ is represented by three
$3 \times 3$ matrices $M_1$, $M_2$, and $M_3$
by ``slicing'' in any of the three directions with $\det M_2 \neq 0$, then $c_9(\phi)$ may
be given by the expression $\det\!\left(M_2^2\left(M_1 M_2^{-1} M_3 - M_3 M_2^{-1} M_1\right)\right)$.}
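Reading the footnote's expression as $\det\!\big(M_2^2(M_1 M_2^{-1} M_3 - M_3 M_2^{-1} M_1)\big)$ (our reading, which may differ from $c_9$ by a scalar normalization), one can check numerically that it is polynomial in the tensor entries, scales with degree $9$, and vanishes on doubly symmetric cubes. A Python sketch with exact rational arithmetic (all helper names are ours):

```python
from fractions import Fraction
import random

def det3(A):
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

def mul3(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def inv3(A):
    d = Fraction(det3(A))
    # For 3x3 matrices, the cyclic-index formula yields cofactors with signs built in.
    cof = [[A[(i+1)%3][(j+1)%3]*A[(i+2)%3][(j+2)%3]
          - A[(i+1)%3][(j+2)%3]*A[(i+2)%3][(j+1)%3] for j in range(3)] for i in range(3)]
    return [[cof[j][i] / d for j in range(3)] for i in range(3)]  # adjugate / det

def c9(M1, M2, M3):
    # det(M2^2 (M1 M2^{-1} M3 - M3 M2^{-1} M1)); requires det(M2) != 0.
    I2 = inv3(M2)
    B1, B2 = mul3(mul3(M1, I2), M3), mul3(mul3(M3, I2), M1)
    B = [[B1[i][j] - B2[i][j] for j in range(3)] for i in range(3)]
    return det3(mul3(mul3(M2, M2), B))

rnd = lambda: [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
M1, M2, M3 = rnd(), rnd(), rnd()
while det3(M2) == 0:
    M2 = rnd()
v = c9(M1, M2, M3)
assert v.denominator == 1                                # polynomial in the entries
scale = lambda M: [[2*x for x in row] for row in M]
assert c9(scale(M1), scale(M2), scale(M3)) == 2**9 * v   # relative invariant of degree 9
sym = lambda M: [[M[i][j] + M[j][i] for j in range(3)] for i in range(3)]
S1, S2, S3 = sym(M1), sym(M2), sym(M3)
if det3(S2) != 0:
    assert c9(S1, S2, S3) == 0   # symmetric slices give an antisymmetric commutator
```

The vanishing on symmetric slices is immediate: if each $M_i$ is symmetric, then $M_1 M_2^{-1} M_3 - M_3 M_2^{-1} M_1$ is antisymmetric of odd size, hence has determinant zero.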
A more abstract argument runs as follows: any elliptic curve with a non-identity rational point $P$ may be written in the form
\eqref{eq:333EC}, for some $c_6$, $c_9$, $c_{12} \in K$, where $(0,0)$ is the point $P$. Then the numbers $c_6$, $c_9$, $c_{12}$ are algebraic invariants of the geometric data $(C, L_1, L_2, L_3)$ coming from a Rubik's cube, up to scaling by $\lambda^6$, $\lambda^9$, $\lambda^{12}$, respectively, for some $\lambda \in K^*$. Thus, these must be relative $G$-invariants
of the representation $V$.
Given this interpretation of the invariants, we may specialize the correspondence
for a particular elliptic curve $E$ over $K$.
We think of $c_6$, $c_9$, and $c_{12}$
as polynomial maps $V_1 \otimes V_2 \otimes V_3 \to K$ of the corresponding degrees
(which are, of course, $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2) \times \mathrm{SL}(V_3)$-invariant). Let
$d_{12} = 16 c_6^2 - 48 c_{12}$ and $d_{18} = - 64 c_6^3 - 216 c_9^2 + 288 c_6 c_{12}$.
\begin{cor} \label{cor:333torsors}
Let $E$ be an elliptic curve over $K$, given in Weierstrass form as
$$y^2 = x^3 - 27 a_4 x - 54 a_6.$$
Then the subset of triples $(\alpha_1,\alpha_2,\alpha_3) \in H^1(K,\Theta_{E,3})^3$ such that
\begin{enumerate}[{\rm (i)}]
\item the sum of the images of $\alpha_i$ under the natural map
$H^1(K,\Theta_{E,3}) \to H^1(K,E[3])$ is zero,
\item $\alpha_1$ is not equal to $\alpha_2$ or $\alpha_3$, and
\item the images of $\alpha_i$ under $H^1(K,\Theta_{E,3}) \to H^1(K,E[3]) \to H^1(K,E)$ all coincide
\end{enumerate}
are in bijection with the $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3)$-orbits
of $V_1 \otimes V_2 \otimes V_3$ that have representatives $\AA$ with
$d_{12}(\AA) = \lambda^{12}a_4$ and $d_{18}(\AA) = \lambda^{18}a_6$, for some nonzero $\lambda \in K^*$.
\end{cor}
The condition on the invariants merely ensures that both sides of the bijection are restricted
to exactly those Rubik's cubes corresponding to curves with Jacobian $E$.
Given any elliptic curve over $K$, there may be no such $(\alpha_1,\alpha_2,\alpha_3)$, for example,
if the elliptic curve does not have any non-identity rational points. However, given an elliptic curve $E$ over $K$
of the form \eqref{eq:333EC}, there always exists a $G$-orbit of $V$ where $E$ is the Jacobian of the associated genus one curve. In particular, taking
$C$ to simply be the trivial torsor $E$ with the degree $3$ line bundles $\mathcal{O}(3\cdot O)$ and
$\mathcal{O}(2\cdot O + P)$, where $P$ is the point $(0,0)$ on $E$, constructs such an orbit.
\begin{cor} \label{cor:333surjorbits}
The map from nondegenerate orbits $V(K)/G(K)$ to elliptic curves of the form
$$y^2 + c_9 y = x^3 + c_6 x^2 + c_{12} x$$
with $c_6$, $c_9$, $c_{12} \in K$, obtained by
taking the Jacobian of the genus one curve associated to the orbit, is surjective.
\end{cor}
For a global field $K$ (i.e., a number field or the function field of a curve over a finite field), if we restrict to orbits where the curve $C$ is everywhere locally soluble
(meaning that $C$ has a $K_\nu$-point for every place $\nu$ of $K$),
Corollary \ref{cor:333torsors} simplifies significantly
and yields a bijection between certain orbits and elements of the $3$-Selmer group of elliptic curves of the form
\eqref{eq:333EC}. See \cite{cofreecounting} for details and applications.
\subsection{Symmetric Rubik's cubes} \label{sec:sym333}
In this subsection, we study ``symmetrized'' Rubik's cubes. There is a natural $S_3$-action on the space of Rubik's cubes, obtained by
permuting the factors $V_i$ for $i = 1$, $2$, $3$, and we study the subset of Rubik's cubes
that are invariant under the subgroup $S_2\subset S_3$, or under all of $S_3$.
\subsubsection{Doubly symmetric Rubik's cubes} \label{sec:2symRC}
The simplest case is that of doubly symmetric Rubik's cubes, i.e., triples of $3 \times 3$ symmetric matrices.
This is the subrepresentation $V_1 \otimes \Sym_2V_2 \subset V_1 \otimes V_2 \otimes V_2$ of $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$, for three-dimensional $K$-vector spaces
$V_1$ and $V_2$. Away from characteristic $2$, this is the same as the quotient $V_1 \otimes \Sym^2V_2$. We give the orbit parametrization for this space in the following basis-free version of Theorem \ref{thm:2symRCpreview}:
\begin{thm} \label{thm:2symRC}
Let $V_1$ and $V_2$ be $3$-dimensional vector spaces over $K$. Then the nondegenerate
$\mathrm{GL}(V_1)\times\mathrm{GL}(V_2)$-orbits of $V_1 \otimes \Sym_2V_2$ are in bijection with
isomorphism classes of triples $(C,L,P)$, where $C$ is a genus one curve
over $K$, $L$ is a degree $3$ line bundle on $C$, and $P$ is a nonzero $2$-torsion point of $\Jac(C)(K)$.
\end{thm}
\begin{proof}
Given an element $\AA$ of $V_1 \otimes \Sym_2V_2 \subset V_1 \otimes V_2 \otimes V_2$, we construct the genus one
curve $C$ and line bundles $L_1,L_2,L_3$ as before. Because of the symmetry, the line bundles
$L_2$ and $L_3$ coincide. The relation $L_1^{\otimes 2} \cong L_2 \otimes L_3$ and the fact that
$L_1 \not\cong L_2$ shows that the point $P := L_1 \otimes L_2^{-1}$ is a nonzero $2$-torsion point of $\Jac(C)$.
Note that because $3E(K) \subset \Jac_C^3(K) \subset E(K)$, where $E = \Jac(C)$, all $2$-torsion points
of $E(K)$ are contained in $\Jac_C^3(K)$ (indeed, if $2P = 0$ then $P = 3P \in 3E(K)$), so requiring $P$ to be in $\Jac_C^3(K)$ is not an extra condition.
On the other hand, given such $(C,L,P)$ and setting $L' = L \otimes P$, the proof of
Theorem \ref{thm:333bij} shows that
the quadruple $(C,L_1,L_2,L_3) = (C,L,L',L')$ recovers a
$\mathrm{GL}(U_1) \times \mathrm{GL}(U_2) \times \mathrm{GL}(U_3)$-orbit of $U_1 \otimes U_2 \otimes U_3$, where
$U_1 = H^0(C,L_1)$, $U_2 = H^0(C,L_2)$, and $U_3$ is the dual of the three-dimensional
kernel of the multiplication map
$\mu_{12} : U_1 \otimes U_2 \to H^0(C,L_1 \otimes L_2)$.
There exists a natural identification $\psi_{12}: U_1 \to U_2$, taking a
section $s \in U_1$ to the section $t \in U_2$ such that
$s(x) = t(x)$ for any $x \in C$. This identification exists
because the $2$-torsion point $P = 3 P$ induces a linear automorphism of
$\mathbb{P}(H^0(C,L_1)^\vee)$ preserving the image of $C$ as a variety.
The map
$$\mu_{12} \circ (1 \otimes \psi_{12}): U_1 \otimes U_1 \to H^0(C,L_1 \otimes L_2)$$
factors through $\Sym_2U_1$ because for all $s_1, s_2 \in U_1$, the image of
$s_1 \otimes s_2$ and of $s_2 \otimes s_1$ evaluated on $x \in C$
are both equal to $s_1(x) s_2(x).$
\end{proof}
The representation $V_1 \otimes \Sym_2V_2$ has $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2)$-invariant ring
generated by the two previously defined polynomials $c_6$ and $c_{12}$ of degrees $6$ and $12$,
respectively. These have the same interpretation as in \S \ref{sec:333invthy}. That is,
the Jacobian of $C$ may be written in normal form as
$$E: y^2 = x^3 + c_6 x^2 + c_{12} x$$
with nonzero $2$-torsion point $P$ having coordinates $(x,y) = (0,0)$. In other words, a nondegenerate
Rubik's cube is equivalent to a doubly symmetric one precisely when the degree $9$ invariant vanishes.
Nondegeneracy is again given by the same degree $36$ discriminant, which now factors:
$$\Delta = 16 c_{12}^2 (-4 c_{12} + c_6^2).$$
The reduced discriminant has degree $24$, and nondegeneracy is determined by the nonvanishing of $c_{12}$ and of $-4 c_{12} + c_6^2$.
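The factorization of $\Delta$ can be double-checked against the standard discriminant formula for a Weierstrass equation; a small Python sketch (the function name is ours):

```python
import random

def disc_weierstrass(a2, a4):
    """Discriminant of y^2 = x^3 + a2*x^2 + a4*x (so a1 = a3 = a6 = 0),
    via the standard b-invariants b2, b4, b6, b8."""
    b2, b4, b6, b8 = 4*a2, 2*a4, 0, -a4**2
    return -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

for _ in range(100):
    c6, c12 = random.randint(-100, 100), random.randint(-100, 100)
    assert disc_weierstrass(c6, c12) == 16 * c12**2 * (-4*c12 + c6**2)
```

Expanding symbolically gives $\Delta = 16 a_4^2 (a_2^2 - 4 a_4)$, matching the displayed factorization with $a_2 = c_6$ and $a_4 = c_{12}$.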
\subsubsection{Triply symmetric Rubik's cubes, or ternary cubic forms again} \label{sec:3symRC}
We now consider triply symmetric Rubik's cubes, i.e., orbits of $\mathrm{GL}(V)$ on $\Sym_3V$ for a three-dimensional vector space $V$ over $K$.
Although this space, away from characteristic $3$, is isomorphic to the space
of ternary cubic forms $\Sym^3V$ discussed in \S \ref{sec:ternarycubics},%
\footnote{More precisely, these representations are dual to one another if $V$ is self-dual.}
we treat this space as a subspace of $V \otimes V \otimes V$, instead of a quotient, to obtain a different moduli interpretation.
Because $\Sym_3 V \hookrightarrow V \otimes \Sym_2 V$, for
each tensor $\AA \in \Sym_3 V$ we may construct the associated genus one curve $C$ and
line bundles on $C$.
We then find that the orbit spaces of these two representations in fact
correspond to the same moduli problem! The following is a basis-free version of Theorem \ref{thm:3symRCpreview}:
\begin{thm} \label{thm:3symRC}
Let $V$ be a three-dimensional vector space over $K$. Then nondegenerate $\mathrm{GL}(V)$-orbits of
$\Sym_3V$ are in bijection with isomorphism classes of triples $(C,L,P)$, where $C$ is a genus one curve
over $K$, $L$ is a degree $3$ line bundle on $C$, and $P$ is a nonzero $2$-torsion point of $\Jac(C)(K)$.
\end{thm}
\begin{proof}
We simply strengthen the proof of Theorem \ref{thm:2symRC} with the observation that all the vector spaces in question may be naturally identified. As in the proof of Theorem \ref{thm:2symRC}, given a triple $(C,L,P)$, we let $U_1 = H^0(C,L)$ and $U_2 = U_3 = H^0(C,L \otimes P)$. Using the identifications $\psi_{12}$ and $\psi_{13}$, and by taking corresponding identified bases for $U_1$, $U_2$, and $U_3$, we obtain a triply symmetric
element of $U_1 \otimes U_2 \otimes U_3$.
\end{proof}
This curve $C$ is not the same as the curve $Z$ associated to the
ternary cubic via \S \ref{sec:ternarycubics}. The curve $C$ has
a degree $36$ discriminant, while $Z$ has a discriminant of degree
$12$. In fact, it is easy to check that the curve $C$ is the zero locus of
the Hessian of the ternary cubic form defining $Z$! (Recall that the Hessian of a ternary cubic form is given by the determinant of the matrix of second partial
derivatives, which yields another ternary cubic form.) Therefore, we have
another proof of the following:
\begin{prop}
Given a ternary cubic form $f$ over $K$, let $H(f)$ denote the Hessian ternary cubic form. If the discriminant of $H(f)$ is nonzero,
then the Jacobian of the genus one curve cut out by $H(f)$ is an elliptic curve with a nonzero rational $2$-torsion point.
\end{prop}
\noindent
This fact is classically shown by constructing a fixed-point free involution on the curve cut out by $H(f)$; see \cite[Chapter 3]{dolgachevCAG}.
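As a concrete illustration (our computation, not taken from the references), for a cubic in Hesse normal form $f = x^3 + y^3 + z^3 - 3mxyz$, the Hessian is again a cubic of the same shape:

```latex
H(f) \;=\; \det\begin{pmatrix} 6x & -6mz & -6my \\ -6mz & 6y & -6mx \\ -6my & -6mx & 6z \end{pmatrix}
      \;=\; 216\left( -m^{2}\,(x^{3}+y^{3}+z^{3}) + (1-2m^{3})\,xyz \right).
```

For generic $m$ the curve $H(f) = 0$ is thus a smooth plane cubic in the same pencil as $f$, consistent with the proposition above.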
To describe the invariant theory and the relationship with this curve $Z$, let $\AA$ be a nondegenerate element of $\Sym_3 V$. Recall that the generators $d_4$ and $d_6$ of the $\mathrm{SL}(V)$-invariant ring of this representation are of degrees $4$ and $6$; we maintain the normalization from \S \ref{sec:ternarycubics}. Then $\AA$, viewed as a ternary cubic form as in \S \ref{sec:ternarycubics}, gives rise to a genus one curve $Z$ in $\mathbb{P}(V^\vee)$ whose Jacobian is given by \eqref{eq:JacTC}.
On the other hand, let $C$ denote the genus one curve obtained from viewing $\AA$ as a (symmetric) Rubik's cube. Then an easy computation shows that the Jacobian of $C$ may be given in Weierstrass form as
\begin{equation} \label{eq:3symRCJac}
E: y^2 = x^3 - 72 d_6 x^2 + 1296 d_4^3 x,
\end{equation}
where the point $(0,0)$ is the nonzero $2$-torsion point on $E$. The discriminant of $E$ factors as a rational multiple of
$$d_4^6 (d_4^3 - d_6^2),$$
so the reduced discriminant has degree $4 + 12 = 16$. Note that the factor $d_4^3 - d_6^2$ is a scalar multiple of the discriminant of the elliptic curve $\Jac(Z)$. Thus, requiring nondegeneracy of $\AA$ as a Rubik's cube implies that both the genus one curve $Z$ and the curve $C$ are smooth. Also, observe that even without the parametrization of Theorem \ref{thm:3symRC}, we know that all elliptic curves over $K$ with a rational $2$-torsion point may be written in the form \eqref{eq:3symRCJac}, since such an elliptic curve may clearly be written as
$$y^2 = x^3 + a x^2 + b x$$
for some $a, b \in K$ with nonzero discriminant (i.e., $b (a^2 - 4 b) \neq 0$). Taking $d_4 = 1296 / b$ and $d_6 = -23328 a / b^2$ gives \eqref{eq:3symRCJac}.
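The final substitution can be verified coefficient by coefficient: with $d_4 = 1296/b$ and $d_6 = -23328a/b^2$, the change of variables $(x,y) \mapsto (\lambda^2 x, \lambda^3 y)$ with $\lambda = 1296/b$ carries \eqref{eq:3symRCJac} to $y^2 = x^3 + ax^2 + bx$. A Python sketch with exact rationals (the function name is ours):

```python
from fractions import Fraction
import random

def matches(a, b):
    """Check that y^2 = x^3 - 72*d6*x^2 + 1296*d4^3*x, with d4 = 1296/b and
    d6 = -23328*a/b^2, is carried to y^2 = x^3 + a*x^2 + b*x by
    (x, y) -> (lam^2 x, lam^3 y) with lam = 1296/b; equivalently,
    -72*d6 = a*lam^2 and 1296*d4^3 = b*lam^4."""
    d4 = Fraction(1296, b)
    d6 = Fraction(-23328 * a, b * b)
    lam = Fraction(1296, b)
    return (-72 * d6 == a * lam**2) and (1296 * d4**3 == b * lam**4)

assert all(matches(random.randint(-50, 50), b) for b in range(1, 51))
assert all(matches(random.randint(-50, 50), -b) for b in range(1, 51))
```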
\subsection{Cubic Jordan algebras} \label{sec:cubicjordan}
A {\em Jordan algebra} over $K$ is a $K$-algebra with a commutative
bilinear product $\bullet$ satisfying the Jordan identity $(x^2
\bullet y) \bullet x = x^2 \bullet (y \bullet x)$. In this section,
we introduce the ``cubic Jordan algebras'' that will be relevant for the more general degree
$3$ moduli problem. Some of their connections with geometry
and representation theory will prove vital for describing and proving the
related orbit parametrizations in \S \ref{sec:deg3moduli}.
\subsubsection{Jordan cubic forms and Springer's construction} \label{sec:springer}
Following
\cite{mccrimmon}, we first briefly describe Springer's
construction of cubic Jordan algebras from cubic forms.
Let $U$ be a finite-dimensional vector space over $K$, and let $\Norm$ be
a cubic form on $U$ such that $\Norm(e) = 1$ for some basepoint $e$.
Then there are naturally associated spur and trace forms, and their
(bi)linearizations, denoted by
\begin{align} \label{eq:jordanalgforms}
\Spur(x) &:= \Norm(x,x,e) & \Spur(x,y) &:= \Norm(x,y,e) = \Spur(x+y) - \Spur(x) - \Spur(y) \\
\Tr(x) &:= \Norm(e,e,x) & \Tr(x,y) &:= \Tr(x)\Tr(y) - \Spur(x,y). \nonumber
\end{align}
In general, an {\em adjoint map} $\sharp$ for the cubic form
$(\Norm,e)$ is a quadratic map $\sharp: U \to U$ satisfying
\begin{align}
\Tr(x^\sharp,y) &= \Norm(x,x,y) \label{eq:tracesharpformula} \\
(x^\sharp)^\sharp &= \Norm(x) x \label{eq:adjointidentity}\\
e \times x &= \Tr(x)e - x \label{eq:csharp}
\end{align}
where $\times$ denotes the bilinearization
\begin{equation} \label{eq:bilinearsharp}
x \times y := (x+y)^\sharp - x^\sharp - y^\sharp.
\end{equation}
An {\em admissible} cubic form is a cubic form $\Norm$ with basepoint $e$ and an
associated adjoint map~$\sharp$. Such a form gives rise to a natural
Jordan algebra structure on $U$ with unit element $e$ and product given by
\begin{equation*}
x \bullet y := \frac{1}{2}\left(x \times y + \Tr(x)y + \Tr(y) x - \Spur(x,y)e \right).
\end{equation*}
We also have the identity
$$x^3 - \Tr(x) x^2 + \Spur(x) x - \Norm(x) e=0,$$
i.e., the ``characteristic polynomial of $x$'', evaluated at $x$, vanishes for all $x \in U$ (Cayley--Hamilton Theorem).
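For the motivating example $U = \Mat_{3 \times 3}(K)$ with $\Norm = \det$, $e$ the identity matrix, and $\sharp$ the adjugate, both \eqref{eq:adjointidentity} and the characteristic-polynomial identity can be verified directly; a Python sketch with integer matrices (helper names are ours):

```python
import random

def det3(A):
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

def mul3(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def adj3(A):
    # sharp map: the adjugate, via the cyclic-index cofactor formula
    cof = [[A[(i+1)%3][(j+1)%3]*A[(i+2)%3][(j+2)%3]
          - A[(i+1)%3][(j+2)%3]*A[(i+2)%3][(j+1)%3] for j in range(3)] for i in range(3)]
    return [[cof[j][i] for j in range(3)] for i in range(3)]

def trace(A):
    return A[0][0] + A[1][1] + A[2][2]

def spur(A):
    return trace(adj3(A))  # sum of the principal 2x2 minors

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
X = [[random.randint(-9, 9) for _ in range(3)] for _ in range(3)]
X2 = mul3(X, X)
X3 = mul3(X2, X)
t, s, n = trace(X), spur(X), det3(X)

# Characteristic-polynomial identity: x^3 - Tr(x) x^2 + Spur(x) x - Norm(x) e = 0.
assert all(X3[i][j] - t*X2[i][j] + s*X[i][j] - n*I3[i][j] == 0
           for i in range(3) for j in range(3))
# Adjoint identity (x^sharp)^sharp = Norm(x) x for 3x3 matrices.
assert adj3(adj3(X)) == [[n*X[i][j] for j in range(3)] for i in range(3)]
```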
For cubic forms, there is often a natural choice of an adjoint map.
In particular, a cubic form is called {\em nondegenerate} if its
associated bilinear trace form $\Tr(\cdot,\cdot)$ is nondegenerate, in
which case $U$ and its dual $U^\vee$ may be canonically identified.
In other words, for fixed $x$, the linear functional $y \mapsto \Norm(x,x,y)$
corresponds to an element $x^\sharp \in U$, giving a quadratic map
\begin{equation} \label{eq:sharpmap} \xymatrix@R=0pt@C=18pt{
\sharp:& U \ar[r] & U^\vee \ar[r]^\cong & U \\
&x \ar@{|->}[r] & \Norm(x,x,\cdot) \ar@{|->}[r] & x^\sharp.
}\end{equation}
This map $\sharp$ by definition satisfies
\eqref{eq:tracesharpformula}; if $\sharp$ also satisfies
\eqref{eq:adjointidentity}, then it is easy to check that
\eqref{eq:csharp} holds as well and thus $\sharp$ is an adjoint map
\cite[\S 4.3]{mccrimmon}. Therefore, given a nondegenerate cubic form $\Norm$
on $U$ with basepoint $e$, there is a natural Jordan algebra structure
on $U$.
By an abuse of notation, sometimes the map $\sharp$ will refer to
just the first map in \eqref{eq:sharpmap}, that is, the map from $x \in U$
to the linear functional $y \mapsto \Norm(x,x,y)$ in $U^\vee$,
since $U$ and $U^\vee$ are naturally identified.
\subsubsection{Composition algebras and Hermitian matrices} \label{sec:compalgs}
We now describe a class of cubic Jordan algebras that will play a crucial role in
the representations we study. We begin with some remarks about composition algebras,
which are used to construct these Jordan algebras.
A {\em composition algebra} $A$ over a field $K$ is a $K$-algebra $A$
with identity element $e$ and a nondegenerate quadratic norm form $q$
on $A$ that satisfies
\begin{eqnarray*}
q(e) = 1 & \textrm{and} & q(ab) = q(a)q(b)
\end{eqnarray*}
for any elements $a,b \in A$.
By a theorem of Hurwitz, such an algebra $A$ is
either $K$ itself, a quadratic \'{e}tale $K$-algebra, a quaternion
algebra over $K$, or a Cayley algebra over $K$.
\begin{example} \label{ex:compalgsKbar}
If $K = \overline{K}$ is algebraically closed, then the only
composition algebras are the four {\em split} composition algebras
of dimensions $1$, $2$, $4$, and $8$, namely the split unarions
$\mathscr{U}(K) := K$, the split binarions $\mathscr{B}(K) \cong K
\times K$, the split quaternions $\mathscr{Q}(K) \cong \Mat_{2
\times 2}(K)$, and the split octonions $\mathscr{O}(K)$ with the
natural norm forms.
\end{example}
\begin{example}
Over $K = \mathbb{R}$, in addition to the split composition
algebras, there exist the usual division algebras of dimensions $2$,
$4$, and $8$: the complex numbers $\mathbb{C}$, the Hamiltonian
quaternions $\mathbb{H}$, and the Cayley octonions $\mathbb{O}$,
respectively.
\end{example}
If the quadratic form $q$ on the composition algebra $A$ is
nondegenerate, then $A$ is alternative separable of degree $2$ and has
an involution $\star$ sending $a \in A$ to $a^{\star} = \overline{a}$
\cite[Prop.~33.9]{bookofinvolutions}. Any element $a \in A$ hence
satisfies the equation
\begin{equation*}
a^2 - \tr_A(a) a + \norm_A(a) e = 0,
\end{equation*}
where the trace and norm are defined in terms of the quadratic norm
form $q$ as
\begin{eqnarray*}
\tr_A(a) := q(a+e) - q(a) - q(e) &\textrm{and} & \norm_A(a):= q(a).
\end{eqnarray*}
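These identities are elementary to check for the split quaternions $\mathscr{Q}(K) \cong \Mat_{2 \times 2}(K)$ with norm form $q = \det$; a Python sketch (helper names are ours):

```python
import random

def det2(a):
    return a[0][0]*a[1][1] - a[0][1]*a[1][0]

def mul2(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

e = [[1, 0], [0, 1]]
q = det2  # norm form of the split quaternions
rnd = lambda: [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
a, b = rnd(), rnd()

# Composition algebra axioms: q(e) = 1 and q(ab) = q(a) q(b).
assert q(e) == 1 and q(mul2(a, b)) == q(a) * q(b)
# tr_A(a) = q(a + e) - q(a) - q(e) recovers the matrix trace.
tr = q([[a[i][j] + e[i][j] for j in range(2)] for i in range(2)]) - q(a) - 1
assert tr == a[0][0] + a[1][1]
# Degree-2 identity: a^2 - tr_A(a) a + norm_A(a) e = 0 (2x2 Cayley--Hamilton).
a2 = mul2(a, a)
assert all(a2[i][j] - tr*a[i][j] + q(a)*e[i][j] == 0
           for i in range(2) for j in range(2))
```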
For a composition algebra $A$ over $K$ as above, the Hermitian matrix
algebra $\mathscr{H}_n(A)$ consists of the $n \times n$ matrices $M =
(m_{ij}) \in \Mat_{n \times n}(A)$ with $M = \overline{M}^t$, or
equivalently, $m_{ji} = m_{ij}^\star$ for $1 \leq i, j \leq n$. The
multiplicative structure of the algebra $\mathscr{H}_n(A)$ is
commutative but not associative, given by
\begin{equation} \label{eq:hermmultiplication}
M \bullet M' := \frac{1}{2}(M \cdot M' + M' \cdot M)
\end{equation}
where $\cdot$ denotes usual matrix multiplication. Under this algebra structure, the
Hermitian matrices $\mathscr{H}_n(A)$ form a Jordan algebra.
We now restrict to the case of cubic Jordan algebras given as a Hermitian matrix algebra for a composition algebra $A$. One Jordan algebra structure on $\mathscr{H}_3(A)$ is inherited from the multiplication law
\eqref{eq:hermmultiplication}. Also, on $\mathscr{H}_3(A)$, we may
define a natural admissible cubic form $(\Norm, e, \sharp)$:
\begin{align}
\Norm \!\begin{pmatrix} c_1 & a_3 & a_2^\star \\ a_3^\star & c_2 & a_1 \\ a_2 & a_1^\star & c_3 \end{pmatrix}
&:= c_1 c_2 c_3 - c_1 \norm_A(a_1) - c_2 \norm_A(a_2) - c_3 \norm_A(a_3) + \tr_A(a_1 a_2 a_3) \label{eq:normJA} \\[.02in]
\!e := \!\begin{pmatrix} 1 & & \\ & 1 & \\ & & 1 \end{pmatrix}\:\! & \,\nonumber \\[.0165in]
\sharp: \!\begin{pmatrix} c_1 & a_3 & a_2^\star \\ a_3^\star & c_2 & a_1 \\ a_2 & a_1^\star & c_3 \end{pmatrix}\!\!\!
&\,\,\mapsto \!\begin{pmatrix}
c_2 c_3 - \norm_A(a_1) & a_2^\star a_1^\star - c_3 a_3 & a_3 a_1 - c_2 a_2^\star \\
a_1 a_2 - c_3 a_3^\star & c_1 c_3 - \norm_A(a_2) & a_3^\star a_2^\star - c_1 a_1 \\
a_1^\star a_3^\star - c_2 a_2 & a_2 a_3 - c_1 a_1^\star & c_1 c_2 - \norm_A(a_3)
\end{pmatrix} \label{eq:sharpJA}
\end{align}
for $c_1, c_2, c_3 \in K$ and $a_1, a_2, a_3 \in A$. For example, if
$A$ is commutative, then the norm form $\Norm$ is the usual
determinant of the matrix and the map $\sharp$ coincides with the
usual adjoint map for $3 \times 3$ matrices. Springer's construction
then gives a Jordan algebra structure on $\mathscr{H}_3(A)$ using this
admissible cubic form.
\begin{prop}[\mbox{\cite[\S 4.4]{mccrimmon}}]
The two Jordan algebra structures on $\mathscr{H}_3(A)$, as
defined above, are the same.
\end{prop}
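For $A = K$, where $\mathscr{H}_3(A)$ is the algebra of symmetric $3 \times 3$ matrices with $\Norm = \det$ and $\sharp$ the adjugate, the agreement of the two products can be tested numerically; a Python sketch with exact arithmetic (helper names are ours):

```python
from fractions import Fraction
import random

def mul3(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def adj3(A):  # sharp map for A = K: the classical adjugate
    cof = [[A[(i+1)%3][(j+1)%3]*A[(i+2)%3][(j+2)%3]
          - A[(i+1)%3][(j+2)%3]*A[(i+2)%3][(j+1)%3] for j in range(3)] for i in range(3)]
    return [[cof[j][i] for j in range(3)] for i in range(3)]

def trace(A):
    return A[0][0] + A[1][1] + A[2][2]

def spur(A):
    return trace(adj3(A))  # sum of the principal 2x2 minors

def add(A, B, s=1):
    return [[A[i][j] + s*B[i][j] for j in range(3)] for i in range(3)]

def jordan(x, y):
    # x . y = (1/2)(x X y + Tr(x) y + Tr(y) x - Spur(x,y) e),
    # where X and Spur(.,.) are the bilinearizations of sharp and Spur.
    cross = add(adj3(add(x, y)), add(adj3(x), adj3(y)), -1)
    spxy = spur(add(x, y)) - spur(x) - spur(y)
    e = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
    return [[Fraction(cross[i][j] + trace(x)*y[i][j] + trace(y)*x[i][j] - spxy*e[i][j], 2)
             for j in range(3)] for i in range(3)]

def sym_rand():
    M = [[random.randint(-9, 9) for _ in range(3)] for _ in range(3)]
    return [[M[i][j] + M[j][i] for j in range(3)] for i in range(3)]

x, y = sym_rand(), sym_rand()
# Springer's product agrees with the symmetrized matrix product (1/2)(xy + yx).
half_anticomm = [[Fraction(v, 2) for v in row] for row in add(mul3(x, y), mul3(y, x))]
assert jordan(x, y) == half_anticomm
```

The check reduces to the identity $x^\sharp = x^2 - \Tr(x)x + \Spur(x)e$ for $3 \times 3$ matrices, whose bilinearization is exactly the displayed product formula.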
\subsubsection{Hermitian tensor spaces}
In this section, we describe in a basis-free manner the algebra $\mathscr{H}_n(A)$
for {\em associative} composition algebras $A$ over
$K$, i.e., composition algebras of dimensions $1$, $2$, or $4$ over $K$.
For such $A$, we introduce the notion of a {\em
Hermitian tensor space} of an $A$-bimodule $\mathfrak{M}$. Just as
for symmetric and alternating tensor products, the idea is to
construct a subspace (or a quotient space) of a tensor space that
corresponds to Hermitian matrices.
If $\mathfrak{M}$ is an $A$-bimodule, then we define
$\mathfrak{M}^\star$ to be its twist by the involution $\star$ on $A$.
In other words, there is a $K$-vector space (but not $A$-module)
isomorphism
\begin{align*}
\mathfrak{M} &\longrightarrow \mathfrak{M}^\star \\
m &\longmapsto m^\star
\end{align*}
but the left and right $A$-actions on $\mathfrak{M}^\star$ are given by
\begin{align*}
a(m^\star) = ((m)a^\star)^\star && \textrm{and} && (m^\star)a = (a^\star(m))^\star
\end{align*}
for all $a \in A$ and $m \in \mathfrak{M}$. For any two $A$-bimodules,
the tensor product is again an $A$-bimodule; in our case, we have that
$\mathfrak{M} \otimes_A \mathfrak{M}^\star$ is an $A$-bimodule with left
and right $A$-actions described by
\begin{align*}
a(m_1 \otimes m_2^\star) = a(m_1) \otimes m_2^\star && \textrm{and} && (m_1 \otimes m_2^\star)a = m_1 \otimes (m_2^\star)a = m_1 \otimes (a^\star(m_2))^\star
\end{align*}
for all $a \in A$ and $m_1, m_2 \in \mathfrak{M}$. Note that elements
of $A$ can ``pass through'' the tensor product, i.e., the relation
$(m_1)a \otimes m_2^\star = m_1 \otimes a(m_2^\star)$ holds for all $a \in
A$ and $m_1, m_2 \in \mathfrak{M}$. There is a natural involution
$\tau$ on elements of $\mathfrak{M} \otimes_A \mathfrak{M}^\star$ sending
$$m_1 \otimes m_2^\star \mapsto m_2 \otimes m_1^\star.$$
\begin{defn} \label{def:hermtensor}
The {\em Hermitian tensor space} $\Herm^2(\mathfrak{M})$ of
$\mathfrak{M}$ is the sub-bimodule of $\mathfrak{M}~\otimes_A~\mathfrak{M}^\star$
consisting of elements $M$ satisfying $\tau(M) = M$.
\end{defn}
\begin{remark}
One could also define a similar Hermitian tensor space as a
quotient
$$\mathfrak{M} \otimes_A \mathfrak{M}^\star / \mathcal{I}$$
where $\mathcal{I}$ is the submodule generated by all tensors
of the form $m_1 \otimes m_2^\star - m_2 \otimes m_1^\star$ for $m_1,
m_2 \in \mathfrak{M}$. Over a field of characteristic not $2$,
these notions are the same (just like for symmetric tensors).
For our purposes, the subspace defined in Definition
\ref{def:hermtensor} is more useful.
\end{remark}
For any $\mathfrak{M}$, there is a Segre-like map
\begin{align} \label{eq:definesegre}
\Seg: \mathfrak{M} &\longrightarrow \Herm^2(\mathfrak{M}) \subset \mathfrak{M} \otimes_A \mathfrak{M}^\star \\
m &\longmapsto m \otimes m^\star \nonumber
\end{align}
whose image consists precisely of the ``rank one'' tensors in $\mathfrak{M}
\otimes_A \mathfrak{M}^\star$. This is, of course, not a linear map. In
fact, the right $A$-action on $\mathfrak{M}$ scales the image of
$\Seg$ by elements of the field $K$; in particular,
\begin{equation} \label{eq:SegAaction}
\Seg((m)a) = (m)a \otimes ((m)a)^\star = (m)a \otimes a^\star(m^\star) = \norm_A(a) (m \otimes m^\star).
\end{equation}
If $\mathfrak{M}$ is a free rank $r$ $A$-bimodule with a choice of
basis, then the Hermitian tensor space $\Herm^2(\mathfrak{M})$ is
visibly just the space $\mathscr{H}_r(A)$ of $r \times r$ Hermitian
matrices over $A$, as the involution $\tau$ corresponds to taking the
conjugate transpose of a matrix.
In the sequel, let $\mathfrak{M}$ denote a free self-dual rank $3$
$A$-bimodule, and let ${J(A)} := \Herm^2(\mathfrak{M})$. Then ${J(A)}$ also
has the structure of a cubic Jordan algebra as in \S
\ref{sec:compalgs}. When we refer to ${J(A)}$ for associative $A$, the module
$\mathfrak{M}$ will be assumed. For an octonion algebra $A$ over $K$, the notation
${J(A)}$ will refer to the exceptional Jordan algebra $\mathscr{H}_3(A)$.
\begin{remark}
Because $\mathfrak{M}$ is assumed to be self-dual,
we may view elements of the tensor space $\mathfrak{M} \otimes_A
\mathfrak{M}^\star$ as maps from $\mathfrak{M}^\vee \cong
\mathfrak{M}$ to $\mathfrak{M}^\star$. So there are also
basis-free definitions of the norm, trace, and spur forms, as
well as the adjoint map $\sharp$ in \eqref{eq:sharpmap}, for ${J(A)}$.
\end{remark}
\subsubsection{Rank} \label{sec:rank}
Elements of ${J(A)}$ inherit the notion of {\em rank} from the ambient
tensor space $\mathfrak{M} \otimes_A \mathfrak{M}^\star$. In this section, we discuss the stratification of
${J(A)}$ by rank, which is closely related to Severi varieties and their
tangent and secant varieties.
By forgetting the Jordan algebra and the $A$-module structure on
${J(A)}$, it makes sense to think of ${J(A)}$ as a $K$-vector space of
dimension $3 \dim(A) + 3$. The projective space $\mathbb{P}({J(A)})$ will denote
the space of $K$-lines in ${J(A)}$ as a $K$-vector space and thus has
dimension $3 \dim A + 2$ over $K$.
Let $X_A \subset \mathbb{P}({J(A)})$ correspond to the rank one elements of ${J(A)}$,
so $X_A$ is cut out by quadrics in $\mathbb{P}({J(A)})$. Let $Y_A \subset \mathbb{P}({J(A)})$
consist of the elements of ${J(A)}$ having rank at most two. Then $Y_A$ is visibly the secant
variety of $X_A$ in $\mathbb{P}({J(A)})$. On the other hand, the variety $Y_A$
is defined by the cubic norm form $\Norm$ on ${J(A)}$, so it is a cubic
hypersurface in $\mathbb{P}({J(A)})$.
If the composition algebra $A$ has dimension $d$ over $K$, then $X_A$,
$Y_A$, and $\mathbb{P}({J(A)})$ have dimensions $2d$, $3d+1$, and $3d+2$,
respectively. Since $d = 1$, $2$, $4$, or $8$, in all these cases the
secant variety $Y_A$ of $X_A$ is defective: its dimension $3d+1$ is strictly
less than the expected dimension $\min(2 \cdot 2d + 1,\, 3d+2) = 3d+2$. By a theorem of
Zak \cite{zak-book}, the secant variety is also the tangent variety.
\begin{example}
Over an algebraically closed field of characteristic $0$, the varieties $X_A \subset \mathbb{P}({J(A)})$ for the
four composition algebras $A$ (see Example \ref{ex:compalgsKbar})
are exactly the four Severi varieties \cite{zak-severivarieties}:
\begin{enumerate}[(i)]
\item the Veronese surface $\mathbb{P}^2 \subset \mathbb{P}^5$,
\item the Segre fourfold $\mathbb{P}^2 \times \mathbb{P}^2 \subset \mathbb{P}^8$,
\item the Grassmannian $\Gr(2,6) \subset \mathbb{P}^{14}$, and
\item the $16$-dimensional variety $E^{16} \subset \mathbb{P}^{26}$ discovered by Lazarsfeld~\cite{lazarsfeldE16}.
\end{enumerate}
Over a general field $K$, the varieties that arise are twisted forms of these.
\end{example}
From the geometric perspective, the adjoint map $\sharp: {J(A)} \to
{J(A)}^\vee$ is essentially (up to scaling) the birational map
\begin{equation} \label{eq:beta}
\beta_A: \mathbb{P}({J(A)}) \dashrightarrow \mathbb{P}({J(A)}^\vee)
\end{equation}
given by the linear system of quadrics passing through $X_A$, or in
other words, the partial derivatives of the norm form $\Norm$ on ${J(A)}$
(see \cite{ein-shepherdbarron} or \cite{zak-severivarieties}). By definition, the map $\beta_A$
blows down $Y_A$ to $X_A$, and the inverse blows up $X_A$ to $Y_A$, so $X_A$ is
naturally isomorphic to the dual variety of $Y_A$ and vice versa.
These varieties $X_A$ have a simple moduli interpretation, based on
their definition as rank one elements of ${J(A)}$ up to $K$-scaling.
\begin{lemma}\label{compar}
For any composition algebra $A$ of dimension $1$, $2$, or $4$ over
$K$, the variety $X_A$ parametrizes elements of $\mathfrak{M}$
up to right $A$-scaling.
\end{lemma}
\begin{proof}
The map $\Seg$ defined in \eqref{eq:definesegre} descends to a
well-defined map
\begin{equation*}
(\mathfrak{M} \setminus \{0\}) / A \longrightarrow \mathbb{P}({J(A)})
\end{equation*}
by the computation in \eqref{eq:SegAaction}. (Here
$(\mathfrak{M} \setminus \{0\}) / A$ denotes nonzero elements
of the module $\mathfrak{M}$ up to right $A$-scaling.) The
image of this map is by definition $X_A$, and it is easy to
check that this map is injective.
\end{proof}
In fact, the variety $X_A$ for all composition algebras $A$ (including
those of dimension $8$) over an algebraically closed field of
characteristic $0$ is often considered an embedding of the projective
plane $\mathbb{P}^2(A)$ over $A$ into $\mathbb{P}({J(A)})$ \cite{zak-severivarieties}.
Here, we work over a more general base field $K$, but the above
argument only holds for associative composition algebras. For
octonion algebras $A$ over $K$, an explicit computation shows that the
variety $X_A$ here shares the same points as the usual definition of
an octonionic projective plane \cite[Chapter 12]{conway-quatoct},
which can also be described as right (or left) $A$-lines in $A^3$.
\subsubsection{Linear transformations of Jordan algebras}
Let $\mathrm{SL}(J)$ be the group of norm-preserving $K$-linear automorphisms
of a Jordan algebra $J$. In this way, the Jordan algebra $J$ may
be thought of as a representation of the group $\mathrm{SL}(J)$. In the case $J=\mathscr{H}_3(A)$,
we also denote $\mathrm{SL}(J)$ by $\mathrm{SL}_3(A)$.
Over an algebraically closed field $K = \overline{K}$, the cubic
Jordan algebras ${J(A)}$ built from the four split composition algebras
$A$ correspond to the following groups and representations:
\begin{center}
\begin{tabular}{c|c|c}
$A$ & $\mathrm{SL}({J(A)})$ &${J(A)}$ \\
\hline
$\mathscr{U}(K) = K$ & $\mathrm{SL}_3(K)$ & $\Sym^2(3)$ \\
$\mathscr{B}(K) \cong K \times K$ & $\mathrm{SL}_3(K) \times \mathrm{SL}_3(K)$ & $3 \otimes 3$\\
$\mathscr{Q}(K) \cong \Mat_{2 \times 2}(K)$ & $\mathrm{SL}_6(K)$ & $\wedge^2(6)$ \\
$\mathscr{O}(K)$ & $E_6(K)$ & $27$ \\
\end{tabular}
\end{center}
Over a general field, we may consider various forms of these groups and the corresponding representations.
For the cubic Jordan algebras ${J(A)}$ described in \S
\ref{sec:rank}, the cone on $X_A$ --- that is, the set of rank one
tensors --- in ${J(A)}$ is exactly the orbit of the highest weight vector
of the representation ${J(A)}$ of $\mathrm{SL}({J(A)})$ (see \cite[Chapter
3]{zak-book} for an explanation over an algebraically closed field).
Over the algebraic closure, it is the unique closed orbit of the action of $\mathrm{SL}({J(A)})$ on
$\mathbb{P}({J(A)})$. Thus, the variety $X_A \subset \mathbb{P}({J(A)})$ is isomorphic
to the flag variety $\mathrm{SL}({J(A)})/P$, where $P$ is the parabolic subgroup of $\mathrm{SL}({J(A)})$
that stabilizes the highest weight line.
More generally, over the algebraic closure, the orbits of the action of $\mathrm{SL}({J(A)})$ on ${J(A)}$
give a stratification of ${J(A)}$ by rank. We obtain another description of the rank two
tensors of the representation ${J(A)}$, which together form another orbit of $\mathrm{SL}({J(A)})$ on $\mathbb{P}({J(A)})$.
Note that this description of $X_A$ automatically gives a
moduli interpretation of $X_A$, since it is a generalized flag variety;
the moduli interpretation given in Lemma~\ref{compar} is slightly
stronger, since it also includes the action of the composition algebra $A$ on the
vector bundle.
\subsection{Doubly Hermitian Rubik's cubes} \label{sec:deg3moduli}
Our goal in this section is to study, in a uniform way, the orbits of
the representations $V \otimes {J(A)}$ of the group $G_{J(A)} := \mathrm{GL}(V) \times
\mathrm{SL}({J(A)})$, where $V$ is a three-dimensional vector space over the
field $K$ and $J(A)$ is the cubic Jordan algebra obtained from a composition algebra $A$ as
defined in \S \ref{sec:cubicjordan}.
We will restrict ourselves to nondegenerate elements of
the tensor space, as these will correspond exactly to nonsingular curves.
\begin{defn}
An element $\phi \in V \otimes {J(A)}$ is called {\em nondegenerate}
if the induced composition map
$$\sharp \circ \phi : V^\vee \to {J(A)} \to {J(A)}^\vee$$
is injective.
\end{defn}
We will show that nondegenerate elements of $V \otimes {J(A)}$ correspond to
genus one curves with extra data, including what we call an
$A$-line on the curve. Intuitively, an $A$-line on a variety $Z$ is
like a rank one (left or right) $A$-module, if the notion of rank were
well defined for noncommutative rings.
\begin{defn}
Let $A$ be a dimension $d$ composition algebra over $K$ and
let $Z$ be a variety defined over $K$. Then an {\em $A$-line}
over $Z$ is a rank $d$ vector bundle $E$ over $Z$ with a
global faithful right $A$-action.
\end{defn}
We say that an $A$-line $E$ on $Z$ has {\em size $s$} if $E$ is a subbundle
of the trivial rank $ds$ bundle $B$ over $Z$, such that $B$ has
a global faithful right $A$-action that restricts on $E$ to
the given $A$-action on $E$. A size $3$ $A$-line $E$ on $Z$ is {\em very ample} if there
is an immersion $\kappa: Z \to X_A$ such that the pullback of the
universal $A$-line on $X_A$ is isomorphic to $E$. Finally, for a very ample size $3$
$A$-line $E$ on $Z$, we denote by $\lin E$ the line bundle on $Z$ given by pulling back
$\mathcal{O}_{\mathbb{P}({J(A)})}(1)$ to $Z$ via the composition $Z \stackrel{\kappa}{\longrightarrow} X_A \hookrightarrow \mathbb{P}({J(A)})$.
We will show that $\lin E$ is closely related to the determinant bundle $\det E$ (of $E$ as a
vector bundle), in each case.
From an element of $V \otimes {J(A)}$, we will construct a genus one curve with a degree~$3$ line bundle and an $A$-line. This construction will be automatically invariant under the action of $G_{J(A)}$. In fact, this is
exactly the data that determines a $G_{J(A)}$-orbit!
\begin{thm} \label{thm:hermRC}
The nondegenerate $G_{J(A)}$-orbits of $V \otimes {J(A)}$ are in
bijection with isomorphism classes of nondegenerate triples $(C,L,E)$, where
$C$ is a smooth genus one curve over $K$, $L$ is a degree~$3$
line bundle on $C$, and $E$ is a very ample size $3$
$A$-line over $C$ satisfying $\lin E \cong L^{\otimes 2}$.
\end{thm}
The nondegeneracy condition for a triple $(C,L,E)$ will be discussed more in the proof; it is an open condition, so the theorem
may be rephrased as a bijection between the orbits of $V \otimes {J(A)}$ and the $K$-points of an open substack of the
moduli space of such triples (with the isomorphism between $\lin E$ and $L^{\otimes 2}$). For some choices of $A$, we will work out a relatively simple interpretation of this condition.
\subsubsection{Geometric construction}
We now show that a nondegenerate element $\phi$ of $V
\otimes {J(A)}$ naturally gives rise to the geometric data of a genus one
curve $C$, a degree $3$ line bundle $L$ on $C$, and an $A$-line $E$ on $C$.
Let $d$ be the dimension of the composition algebra $A$ over $K$.
Given $\phi \in V \otimes {J(A)}$, we may instead think of $\phi$ as a
linear map in $\Hom(V^\vee,{J(A)})$. Nondegeneracy of $\phi$ immediately
implies that this map is injective, so we obtain a linear map
\begin{equation*}
\mathbb{P}(\phi) : \mathbb{P}(V^\vee) \longrightarrow \mathbb{P}({J(A)}).
\end{equation*}
Let $W$ be the image of $\mathbb{P}(\phi)$ in $\mathbb{P}({J(A)})$. Then the secant
variety $Y_A$ of $X_A$ cuts out a cubic plane curve $C$ on $W$. In
other words, the curve $\iota: C \hookrightarrow \mathbb{P}({J(A)})$ is defined by the vanishing of the cubic
norm form $\Norm$ on the plane $W$. Let $L$ be the pullback of
$\mathcal{O}(1)$ from the projective plane $W$ to $C$, so $L$ is a degree $3$
line bundle on $C$.
We claim that for $\phi$ nondegenerate, this curve $C$ is smooth and
irreducible, and thus of genus one. Nondegeneracy of $\phi$ implies
that $W$ does not intersect the variety $X_A$, which is the base locus
for the rational map $\beta_A$ in \eqref{eq:beta}, or equivalently,
that the partials of the norm form $\Norm$ do not simultaneously
vanish. In this case, the curve $C$ is nonsingular.
\begin{lemma}
The image $C^\sharp$ of the curve $C \subset Y_A$ under the
adjoint map $\beta_A$ is isomorphic to $C$, and hence is
a smooth irreducible genus one curve in $X_A \subset \mathbb{P}({J(A)}^\vee)$.
\end{lemma}
\begin{proof}
Recall that ${J(A)}$ and its dual may be naturally identified, and
from \S \ref{sec:rank}, the adjoint map
$\beta_A$ is a birational map from $\mathbb{P}({J(A)})$ to $\mathbb{P}({J(A)}^\vee)$,
	which blows down $Y_A \setminus X_A$ to $X_A$. The restriction of
	$\beta_A$ to the plane $W$ is birational onto its image, since $\beta_A$ is injective on $W \setminus C$. In other words, the generic
	fiber of $\beta_A$ restricted to $W$ is connected, so by
Zariski's connectedness theorem, all the special fibers are
connected; since there are no contractible curves in $W \cong
\mathbb{P}^2$, each fiber is a single point. Thus, the image
$C^\sharp$ of $C$ is also a smooth irreducible genus one
curve, and it is contained in $X_A$.
\end{proof}
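In the split case $A = K \times K$, the adjoint map $\beta_A$ is induced by the adjugate of a $3 \times 3$ matrix, and the fact that $\beta_A \circ \beta_A$ is the identity off $Y_A$ amounts to the classical identity $\mathrm{adj}(\mathrm{adj}(M)) = \det(M)\, M$. A quick symbolic check (ours, purely illustrative):

```python
# Verify adj(adj(M)) = det(M) * M for a generic 3x3 matrix: this is the
# statement that beta_A o beta_A is the identity on P(J(A)) \ Y_A when
# A = K x K and the sharp map is the classical adjugate.
import sympy as sp

M = sp.Matrix(3, 3, sp.symbols('m:3:3'))       # generic symbolic 3x3 matrix
lhs = M.adjugate().adjugate()
assert (lhs - M.det() * M).expand() == sp.zeros(3, 3)
```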
By the moduli interpretation of the variety $X_A$, the closed immersion
$C^\sharp \hookrightarrow X_A$ is equivalent to the data of a
very ample size $3$ $A$-line on $C^\sharp$, which pulls back to a very ample
size $3$ $A$-line $E$ on $C$. In other words, we have
produced the triple $(C,L,E)$ as desired.
It is clear that the isomorphism class of the triple $(C,L,E)$ is preserved under
the action of $G_{{J(A)}}$, where two such triples $(C,L,E)$ and $(C',L',E')$ are isomorphic
if there is an isomorphism $\sigma: C \to C'$ such that $\sigma^* L' = L$ and $\sigma^* E' = E$
and the $A$-actions on $E$ and $E'$ are related by $\sigma$.
The groups $\mathrm{GL}(V)$ and $\mathrm{SL}({J(A)})$ act by linear transformations on $V$ and on ${J(A)}$, respectively,
and the action of $\mathrm{SL}({J(A)})$ preserves the varieties $X_A$ and $Y_A$. Thus, neither action
affects the geometric data, up to isomorphism.
Now we show that the triple $(C,L,E)$ satisfies the last condition of the theorem.
\begin{lemma}
There is an isomorphism
$$\lin E \cong L^{\otimes 2}$$
of line bundles on the curve $C$.
\end{lemma}
\begin{proof}
This relation follows from the fact that the map $\beta_A$ is given by quadratic polynomials.
The line bundle $\lin E$ on $C$ is the pullback of $\mathcal{O}_{\mathbb{P}({J(A)})}(1)$ via
\begin{equation*}
C \stackrel{\beta_A}{\longrightarrow} C^\sharp \hookrightarrow X_A \hookrightarrow \mathbb{P}({J(A)}^\vee) \cong \mathbb{P}({J(A)}),
\end{equation*}
which is isomorphic to $\iota^*\mathcal{O}_{\mathbb{P}({J(A)})}(2)$. On the other hand, the
line bundle $L$ is defined as the pullback of $\mathcal{O}_W(1)$ to $C$,
and since $W$ lies linearly in $\mathbb{P}({J(A)})$, in fact $L$ is isomorphic
to $\iota^*\mathcal{O}_{\mathbb{P}({J(A)})}(1)$.
\end{proof}
This line bundle $\lin E$ on $C$ is closely related to the determinant bundle of $E$.
\begin{lemma}
For a very ample size $3$ $A$-line $E$ on a projective variety $Z$, where $A$ is a composition
algebra over $K$ of dimension $d$, if $\dim A = 1$, then
$$(\det E)^{\otimes 2} \cong \lin E$$
and if $\dim A = 2$, $4$, or $8$, then
$$\det E \cong (\lin E)^{\otimes (d/2)}.$$
\end{lemma}
\begin{proof}
It suffices to prove this lemma for the variety $X_A$ itself. Recall that $X_A$
is a homogeneous variety in $\mathbb{P}({J(A)})$ given as the projectivization of the orbit
of the highest weight vector of the representation ${J(A)}$ of $\mathrm{SL}({J(A)})$. It is well known
that the pullback of $\mathcal{O}_{\mathbb{P}({J(A)})}(1)$ to $X_A$ is the product of the determinants of
the vector bundles in the universal flag on $X_A$ \cite[Chapter 9]{fulton-youngtableaux}.
Comparing the universal $A$-line $E$ on $X_A$ to the vector bundles in the universal flag
under the typical moduli interpretation gives the lemma. (For example, when $A$ is the split
	quaternions over $K$, our $A$-line $E$ is a rank $4$ vector bundle on $X_A = \Gr(2,6)$ and is the second
	tensor power of the usual universal rank $2$ bundle on the Grassmannian.)
\end{proof}
\subsubsection{Bijection}
The geometric construction described above gives one direction of the bijection.
We now prove Theorem \ref{thm:hermRC} (and its weaker version, Theorem \ref{thm:2hermRCpreview}).
\begin{proof}[Proof of Theorem $\ref{thm:hermRC}$]
Suppose $(C,L,E)$ is a triple satisfying the conditions of the theorem. We wish to recover
a plane $W$ in $\mathbb{P}({J(A)})$ such that $C$ is isomorphic to the curve cut out by $W$ and
the cubic hypersurface $Y_A$.
The degree $3$ line bundle $L$ on $C$ gives a closed immersion $\eta: C \hookrightarrow \mathbb{P}(H^0(C,L)^\vee)
\cong \mathbb{P}^2$. The $A$-line $E$ on $C$ implies that there is a closed immersion
\begin{equation} \label{eq:kappa}
\kappa: C \to X_A \hookrightarrow \mathbb{P}({J(A)}) \cong \mathbb{P}({J(A)}^\vee)
\end{equation}
given by the sections of $\lin E$. More precisely, since $\lin E \cong \kappa^* \mathcal{O}_{\mathbb{P}({J(A)})}(1)$
and $\deg (\lin E) = 6$ by assumption, the image of $\kappa$ lies in a
$\mathbb{P}^5$ lying linearly in $\mathbb{P}({J(A)})$, namely the image of the map $\lambda: \mathbb{P}(H^0(C,\lin E)^\vee) \hookrightarrow \mathbb{P}({J(A)})$.
Recall that we have an isomorphism $L^{\otimes 2} \cong \lin E$. The multiplication map
$$H^0(C,L) \otimes H^0(C,L) \longrightarrow H^0(C,L^{\otimes 2}) \cong H^0(C,\lin E)$$
is surjective and factors through $\Sym^2 H^0(C,L)$, so a dimension count shows that
the spaces $H^0(C,\lin E)$ and $\Sym^2 H^0(C,L)$ are naturally isomorphic. Thus,
we have a natural quadratic map
\begin{equation}
\rho: \mathbb{P}(H^0(C,L)^\vee) \longrightarrow \mathbb{P}(\Sym^2 H^0(C,L)^\vee) \stackrel{\cong}{\longrightarrow} \mathbb{P}(H^0(C,\lin E)^\vee).
\end{equation}
Pulling back the line bundle $\mathcal{O}_{\mathbb{P}({J(A)})}(1)$
to the curve $C$ via the composition $\lambda \circ \rho \circ \eta$ and via $\kappa$
give isomorphic line bundles on $C$, so under an appropriate change of basis, the
images of $C$ in $\mathbb{P}({J(A)})$ via these two maps coincide. In other words, the following diagram commutes:
\begin{equation*}
\raisebox{2\baselineskip}{ \xymatrix@C=0.6in{
C \ar[d]^{\eta} \ar[r] & X_A \ar[d] \\
\mathbb{P}(H^0(C,L)^\vee) \ar[r]^-{\lambda \circ \rho}_-{\mathrm{quadratic}} & \mathbb{P}({J(A)})
}} \qquad .\end{equation*}
The image of $\mathbb{P}(H^0(C,L)^\vee)$ under $\rho$ is a Veronese surface $\mathscr{V}$
in $\mathbb{P}(H^0(C,\lin E)^\vee) \subset \mathbb{P}({J(A)})$. The nondegeneracy condition we require
is that this Veronese $\mathscr{V}$ is not contained in $Y_A$. Under the inverse of the adjoint
map $\beta_A$, we claim that $\mathscr{V}$ gives a $\mathbb{P}^2$ lying linearly in $\mathbb{P}({J(A)})$.
Recall that $\beta_A \circ \beta_A$ is the identity on $\mathbb{P}({J(A)}) \setminus Y_A$.
By assumption, outside of the image of $C$ on $\mathscr{V}$, the map $\beta_A$ is birational,
so by the valuative criterion of properness, there is a well-defined map from
all of the surface $\mathscr{V}$ to $\mathbb{P}({J(A)})$, whose image is a linear subspace of $\mathbb{P}({J(A)})$.
This plane may be identified with $\mathbb{P}(H^0(C,L)^\vee)$ under some choice of basis.
Note that the nondegeneracy condition on the triple $(C,L,E)$ is satisfied when constructed from
an element of $V \otimes {J(A)}$. Finally, the constructions in each direction are inverse to one another,
since $\beta_A \circ \beta_A$ is the identity on an open set of $\mathbb{P}({J(A)})$.
\end{proof}
\subsection{Specializations} \label{sec:deg3special}
For particular choices of $A$, Theorem \ref{thm:hermRC} recovers some of the spaces considered earlier,
while some other choices of $A$ give new parametrization theorems. In this section, we describe
some of the cases where $A$ is split.
For example, the case where $A = K \times K$ and $d = 2$ recovers the space of Rubik's cubes studied
in \S \ref{sec:333}. For $A = K \times K$, it is easy to check that the Jordan algebra ${J(A)}$ is
isomorphic to the Jordan algebra $\Mat_{3 \times 3}(K)$ of $3 \times 3$ matrices
over $K$, where the norm, spur, and trace forms come from the characteristic polynomial in the standard
way. The variety $X_A$ is isomorphic to the Segre fourfold $\mathbb{P}^2 \times \mathbb{P}^2 \hookrightarrow \mathbb{P}^8$, and its secant
variety $Y_A$ is the cubic hypersurface given by the vanishing of the determinant. Then the curve obtained as the intersection of a plane with
$Y_A$ in $\mathbb{P}({J(A)})$ may be thought of as a determinantal variety, just as before.
The specialization of Theorem \ref{thm:hermRC} to $A = K$ and $d = 1$ gives the space of doubly symmetric
Rubik's cubes, which we studied in \S \ref{sec:2symRC}. Here, the variety $X_A$
is the Veronese surface $\mathbb{P}^2 \hookrightarrow \mathbb{P}^5$. The $A$-line $E$ is another degree $3$ line bundle,
given as the pullback of $\mathcal{O}(1)$ from the Veronese surface to the curve. The squares of the
line bundles $L$ and $E$ are isomorphic, and thus their difference is a $2$-torsion point on the Jacobian of the curve,
as described in Theorem \ref{thm:2symRC}.
We next study Theorem \ref{thm:hermRC} in the case where $d = 4$ and the composition algebra $A$ is the algebra of
split quaternions over $K$, i.e., the algebra $\Mat_{2 \times 2}(K)$ of $2 \times 2$ matrices over $K$, in order
to recover Theorem \ref{thm:2skewRCpreview}.
In this case, the algebra ${J(A)}$ is isomorphic to the algebra of $6 \times 6$ skew-symmetric matrices
over $K$, where the cubic form is the degree $3$ Pfaffian of such a matrix. The moduli problem
becomes one of determining so-called {\em Pfaffian representations}, which have been studied
over an algebraically closed field in \cite{buckley-pfaff}.
\begin{thm}
Let $V$ and $W$ be $K$-vector spaces of dimensions $3$ and $6$, respectively. Then nondegenerate
$\mathrm{GL}(V) \times \mathrm{SL}(W)$-orbits of $V \otimes \wedge^2 W$ are in bijection with isomorphism classes
of nondegenerate triples $(C,L,F)$, where $C$ is a genus one curve over $K$, $L$ is a degree $3$ line bundle on $C$,
and $F$ is a rank $2$ vector bundle on $C$, with $\det F \cong L^{\otimes 2}$.
\end{thm}
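The cubic norm in this specialization can be made completely explicit. A short sketch (ours) computes the Pfaffian of a generic $6 \times 6$ skew-symmetric matrix by expansion along the first row and checks that it is a cubic form whose square is the determinant, so that $Y_A = \{\mathrm{Pf} = 0\}$ is indeed a cubic hypersurface:

```python
# Compute the Pfaffian of a 6x6 skew-symmetric matrix recursively (expansion
# along the first row) and check the classical identity Pf(M)^2 = det(M);
# the Pfaffian is the degree 3 "norm" form in this specialization.
import sympy as sp

def pfaffian(M):
    """Pfaffian of an even-sized skew-symmetric sympy Matrix."""
    n = M.shape[0]
    if n == 0:
        return sp.Integer(1)
    total = sp.Integer(0)
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * M[0, j] * pfaffian(M[keep, keep])
    return total

a = sp.symbols('a:15')                         # 15 entries above the diagonal
pairs = [(i, j) for i in range(6) for j in range(i + 1, 6)]
M = sp.zeros(6, 6)
for (i, j), s in zip(pairs, a):
    M[i, j], M[j, i] = s, -s

pf = sp.expand(pfaffian(M))
assert sp.Poly(pf, *a).total_degree() == 3     # a cubic form in the entries
assert sp.expand(pf**2 - M.det()) == 0         # Pf^2 = det
```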
The interpretation of $X_A$ as a moduli space of rank $2$ vector bundles is straightforward.
Here $X_A \hookrightarrow \mathbb{P}({J(A)})$ is the Pl\"{u}cker embedding of the Grassmannian $\Gr(2,W)$ in
$\mathbb{P}(\wedge^2(W)) \cong \mathbb{P}^{14}$. The $A$-line $E$ coming from Theorem \ref{thm:hermRC} is a
rank $4$ --- not rank $2$! --- vector bundle over $C$, but there is an equivalence of categories
between modules $E$ over rank $4$ Azumaya algebras and rank $2$ vector bundles $F$. This
phenomenon is special to the case of the split quaternion algebra; for nonsplit quaternion algebras, the minimal rank vector
bundle recovered from this data will have rank $4$.
We may also set $A$ to be the split octonion algebra $\mathscr{O}$ over $K$, although the data is certainly less familiar
to most people! Then the algebra ${J(A)}$ is the exceptional Jordan algebra, and the variety $X_A$ is the fourth Severi variety
found by Lazarsfeld \cite{lazarsfeldE16, zak-severivarieties}. The variety $X_A$ is $16$-dimensional, and
it is often interpreted as a projective plane $\mathbb{P}^2(\mathscr{O})$ over $\mathscr{O}$ \cite{compprojplanes}. This interpretation
is exactly how we recover an $\mathscr{O}$-line on a curve $C$ from a map $C \to \mathbb{P}^2(\mathscr{O})$.
\begin{thm}
Let $V$ and $W$ be $K$-vector spaces of dimensions $3$ and $27$, respectively,
with the split algebraic group $E_6$ acting on $W$.
Then nondegenerate $\mathrm{GL}(V) \times E_6$-orbits of $V \otimes W$ are in bijection with isomorphism classes of
nondegenerate quadruples $(C,L,\xi, E)$, where $C$ is a genus one curve over $K$,
$L$ is a degree $3$ line bundle on $C$, $\xi: C \to \mathbb{P}^2(\mathscr{O}) \subset \mathbb{P}(W)$
is a map, and $E$ is the rank $8$ vector bundle on $C$ obtained from pulling back the universal bundle on $\mathbb{P}^2(\mathscr{O})$,
and $\det E \cong L^{\otimes 8}$.
\end{thm}
One may also take $A$ to be a nonsplit quadratic algebra over $K$. We plan to discuss some applications of these cases in future work.
\section{Representations associated to degree \texorpdfstring{$2$}{2} line bundles}
\label{sec:hermHC}
In this section, analogously to Section \ref{sec:hermRC},
we study a class of representations whose orbits are related to genus one curves with {degree $2$} line bundles. The main theorems in this section are summarized in Section~\ref{sec:HCpreview}.
We begin in \S\ref{sec:bideg22forms} by considering the space of bidegree $(2,2)$ forms on $\mathbb{P}^1 \times \mathbb{P}^1$.
We show that the orbits here
correspond to genus one curves equipped with a degree $2$ line bundle
and a nonzero point on the Jacobian. (This is analogous to the interpretation of orbits for the space of Rubik's cubes!)
In~\S\ref{sec:hypercube}--\ref{sec:symhypercubes}, we then examine the space of hypercubes ($2 \times 2 \times 2 \times 2$ matrices) and some of its simpler variants; this space of hypercubes is the fundamental space for many degree $2$ moduli problems, just as the
space of Rubik's cubes, considered in the previous section, was the fundamental space for many degree $3$ moduli problems.
In preparation for the general case, in \S\ref{subsec:Hcubes}
we then introduce the notion of ``triply Hermitian cubes'' with respect to a cubic Jordan algebra $J$, which form a vector space $\mathscr{C}=\mathscr{C}(J)$, and we describe a flag variety inside
the projectivization of this space. In~\S\ref{sec:deg2moduli}, given an element of the space $V \otimes \mathscr{C}$, where $V$ is a two-dimensional vector space, we then
construct genus one curves with a projection to that flag variety. This yields bijections between orbits on $V\otimes\mathscr{C}$ (i.e., the space of ``triply Hermitian hypercubes'') and isomorphism classes of genus one curves equipped with degree 2 line bundles and additional vector bundles.
After uniformly treating the bijections for such spaces,
we then specialize to several of the split cases, for which the geometric data becomes easier to describe; many of these are related to interesting arithmetic structures.
Many of the orbit problems described in
this section are used in \cite{cofreecounting} to determine average sizes of $2$-Selmer groups for certain families of elliptic curves over $\mathbb{Q}$.
\subsection{Bidegree \texorpdfstring{$(2,2)$}{(2,2)} forms} \label{sec:bideg22forms}
Let $V_1$ and $V_2$ be two-dimensional vector spaces over $K$. A {\em
$(2,2)$ form} $f$ over $K$ is an element of $\Sym^2 V_1 \otimes \Sym^2
V_2$. With a choice of bases $\{w_1,w_2\}$ and $\{x_1,x_2\}$ of $V_1$ and $V_2$,
respectively, such a form $f$ may be represented as a polynomial
\begin{align} \label{eq:22form}
f(w_1,w_2,x_1,x_2) = a_{22} w_1^2 x_1^2 &+ a_{32} w_1 w_2 x_1^2 + a_{42} w_2^2 x_1^2
+ a_{23} w_1^2 x_1 x_2 + a_{33} w_1 w_2 x_1 x_2 \\ &+ a_{43} w_2^2 x_1 x_2
+ a_{24} w_1^2 x_2^2 + a_{34} w_1 w_2 x_2^2 + a_{44} w_2^2 x_2^2 . \nonumber
\end{align}
The group $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$ acts on the space
of $(2,2)$ forms by the standard action on each factor. We will also consider a
twisted action of $(g_1,g_2) \in \mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$:
\begin{equation*}
(g_1,g_2) f(w,x) = \det(g_1)^{-1} \det(g_2)^{-1} f(g_1(w), g_2(x)).
\end{equation*}
This is the representation $\Sym^2 V_1 \otimes \Sym^2 V_2 \otimes (\wedge^2 V_1 \otimes \wedge^2 V_2)^{-1}$
of $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$; by abuse of notation, we will refer to this as the twisted
action on $(2,2)$ forms. This twisted action is not faithful; for example, the scalars $\mathbb{G}_m(V_i)$ of $\mathrm{GL}(V_i)$
act trivially on all $(2,2)$ forms. Finally, the standard
scaling action of $\mathbb{G}_m$ on such bidegree $(2,2)$ forms $f$
will be relevant in the sequel. The group $G$ for the moduli problem here will be the product of the scaling $\mathbb{G}_m$ and $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$ acting by the twisted action.
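As a sanity check on the claim that scalars act trivially, note that for $g_1 = \lambda\,\mathrm{id}_{V_1}$ the factor $\det(g_1)^{-1} = \lambda^{-2}$ exactly cancels the $\lambda^2$ produced by substituting $\lambda w$ into the quadratic $w$-variables. A symbolic sketch (ours, illustrative):

```python
# Check that scalar matrices act trivially under the twisted action: for
# g1 = lam * id and g2 = id, det(g1)^{-1} f(g1(w), x) = lam^{-2} * lam^2 * f = f.
import sympy as sp

w1, w2, x1, x2, lam = sp.symbols('w1 w2 x1 x2 lam')
a = {(i, j): sp.Symbol(f'a{i}{j}') for i in (2, 3, 4) for j in (2, 3, 4)}
wmon = {2: w1**2, 3: w1*w2, 4: w2**2}
xmon = {2: x1**2, 3: x1*x2, 4: x2**2}
f = sum(a[i, j] * wmon[i] * xmon[j] for (i, j) in a)   # generic (2,2) form

twisted = lam**(-2) * f.subs({w1: lam*w1, w2: lam*w2}, simultaneous=True)
assert sp.expand(twisted - f) == 0
```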
\subsubsection{Geometric construction and bijection}
The $(2,2)$ form $f$ cuts out a bidegree $(2,2)$ curve $C$ in
$\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)$. If the curve $C$ is smooth, then a
standard computation shows that $C$ has genus $(2-1)(2-1) =
1$. Pulling back line bundles via the embedding $\iota: C
\hookrightarrow \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)$ gives two degree
$2$ line bundles on $C$, namely
\begin{align*}
L_1 := \iota^* \mathcal{O}_{\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)}(1,0) && \textrm{and} &&
L_2 := \iota^* \mathcal{O}_{\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)}(0,1).
\end{align*}
Each of the projection maps $\mathrm{pr}_i : C \to \mathbb{P}(V_i^\vee)$, for
$i = 1$ or $2$, is a degree $2$ cover of $\mathbb{P}(V_i^\vee)$, ramified at
four points over a separable closure of $K$. A binary quartic $q_1$
on $V_1$ associated to the ramification locus in $\mathbb{P}(V_1^\vee)$
may be computed by taking the discriminant of $f$ as a quadratic polynomial on $V_2$:
\begin{align} \label{eq:bqof22form}
q_1(w_1,w_2) := \disc (f(x_1,x_2)) = &(a_{23} w_1^2 + a_{33} w_1 w_2 + a_{43} w_2^2)^2 \\
&- 4(a_{22} w_1^2 + a_{32} w_1 w_2 + a_{42} w_2^2)(a_{24} w_1^2 + a_{34} w_1 w_2 + a_{44} w_2^2), \nonumber
\end{align}
and similarly for $q_2(x_1, x_2)$ as a binary quartic form on $V_2$.
The nonsingular genus one curve obtained from each of these binary quartics,
as in \S \ref{sec:binaryquartics}, is isomorphic to the curve $C$.
Via those isomorphisms, the line bundles $L_1$ and $L_2$ coincide with
the natural degree $2$ line bundles
on the genus one curves coming from these binary quartics.
We call a $(2,2)$ form $f$ or its associated curve $C$ {\em
nondegenerate} if both of the associated binary quartics are
nondegenerate, i.e., have four distinct roots over a separable closure.
For each of the binary quartics, this condition is given by the
nonvanishing of the discriminant $\Delta(q_i)$. As the binary quartic
$q_i$ is invariant under the action of $\mathrm{SL}(V_j)$ on $f$ for $j \neq i$, and
the discriminant of a binary quartic is an $\mathrm{SL}_2$-invariant, the
discriminant $\Delta(q_i)$ is a degree $12$ $\mathrm{SL}(V_1) \times
\mathrm{SL}(V_2)$-invariant of $f$. Moreover, it is easy to check that
$I(q_1) = I(q_2)$ and $J(q_1) = J(q_2)$, so $\Delta(q_1) =
\Delta(q_2)$. Thus, the polynomials $I(f) := I(q_i)$ and $J(f) :=
J(q_i)$ for $i=1$ or $2$ are
$\mathrm{SL}(V_1) \times
\mathrm{SL}(V_2)$-invariants of $f$ having degrees $4$ and $6$, respectively. The {\em discriminant}
$\Delta(f) = \Delta(q_i)$ of the $(2,2)$ form $f$ is a degree $12$
invariant, and a {\em nondegenerate} $(2,2)$ form is one with nonzero
discriminant.%
\footnote{Neither $J(f)$ nor $\Delta(f)$ is a generator for the ring of
$\mathrm{SL}(V_1) \times \mathrm{SL}(V_2)$-invariants of $\Sym^2 V_1 \otimes \Sym^2 V_2$. The invariant
ring is a polynomial ring with generators in degrees $2$, $3$, and $4$, and will be discussed
more carefully in \S \ref{sec:bideg22invtheory}.}
The nonvanishing of this discriminant is also equivalent to the condition that
the curve $C$ cut out by $f$ is nonsingular.
Thus, from a nondegenerate $(2,2)$ form $f$, we have constructed a
genus one curve in $\Pone \times \Pone$, and the $G$-action preserves
the isomorphism class of this curve and the line bundles. Conversely, given a genus one curve $C$ and two
degree $2$ line bundles $L_1$ and $L_2$ on $C$, there are natural
degree $2$ maps $\eta_i : C \to \mathbb{P}(H^0(C,L_i)^\vee) = \mathbb{P}^1$ and the product map
\begin{equation*}
\xymatrix{
(\eta_1,\eta_2) : C \ar[r]& \mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee)
}.
\end{equation*}
If $L_1 \cong L_2$, then $(\eta_1,\eta_2)$ is a degree $2$ cover of a diagonal
in $\Pone \times \Pone$, i.e., the image of this map is isomorphic to $\mathbb{P}^1$.
Otherwise, we claim that this map is a closed immersion.
\begin{lemma} \label{lem:ijembedding}
For a smooth irreducible genus one curve $C$ and
non-isomorphic degree $2$ line bundles $L_1$ and $L_2$ on $C$,
the composition
\begin{equation*}
\xymatrix{
\kappa: C \ar[rr]^-{(\eta_1,\eta_2)}&& \mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee)
\ar @{^{(}->}[r]^-{\mathrm{Segre}} & \mathbb{P}(H^0(C,L_1)^\vee \otimes H^0(C,L_2)^\vee)
}
\end{equation*}
is a closed immersion.
\end{lemma}
\begin{proof}
By Riemann-Roch, the spaces of sections $H^0(C,L_1),
H^0(C,L_2)$, and $H^0(C,L_1 \otimes L_2)$ have dimensions $2$, $2$,
and $4$, respectively. The multiplication map
$$\mu_{12}: H^0(C,L_1) \otimes H^0(C,L_2) \longrightarrow H^0(C,L_1 \otimes L_2)$$
is an isomorphism, by Castelnuovo's basepoint-free pencil trick
(see \cite[p.\ 126]{acgh}).
Since $\deg (L_1 \otimes L_2) = 4$, the curve $C$ is isomorphic to its image
in $\mathbb{P}(H^0(C,L_1 \otimes L_2)^\vee) = \mathbb{P}^3$. Since $\kappa$ is the composition
of this map to $\mathbb{P}(H^0(C,L_1 \otimes L_2)^\vee)$ with the isomorphism $\mathbb{P}(\mu_{12}^\vee)$,
it is a closed immersion.
\end{proof}
The image $C_{12}$ of the curve $C$ in $\mathbb{P}(H^0(C,L_1)^\vee) \times
\mathbb{P}(H^0(C,L_2)^\vee)$ is cut out by a $(2,2)$ form, via the exact
sequence
$$0 \to \mathcal{I}_{C} \to \mathcal{O}_{\mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee)} \to \mathcal{O}_{C} \to 0$$
where $\mathcal{I}_{C}$ is the ideal sheaf defining $C_{12}$. Tensoring with $\mathcal{O}(2,2)$, taking cohomology,
and tensoring by the dual of $H^0(\mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee),\mathcal{I}_{C}(2,2))$
gives a map from $K$ to
\begin{align*}
H^0(\mathbb{P}(H^0(C,L_1)^\vee) &\times \mathbb{P}(H^0(C,L_2)^\vee),\mathcal{O}(2,2)) \\
& \qquad \otimes (\wedge^2(H^0(C,L_1)))^{-1} \otimes (\wedge^2(H^0(C,L_2)))^{-1} \otimes \omega_C,
\end{align*}
where $\omega_C$ is the usual Hodge bundle for $C$. We thus obtain a bidegree $(2,2)$ form, i.e.,
an element of $\Sym^2 (H^0(C,L_1)) \otimes \Sym^2 (H^0(C,L_2)) \otimes (\wedge^2(H^0(C,L_1)) \otimes \wedge^2(H^0(C,L_2)))^{-1}$. The factor $\omega_C$ fixes the scaling, just as in the cases in Section \ref{sec:classical}.
Thus, a genus one curve and two nonisomorphic degree $2$ line bundles $L_1$ and $L_2$ give rise to a $(2,2)$ form.
\begin{thm} \label{thm:22curvesbij}
The nondegenerate $G$-orbits of $\Sym^2 V_1 \otimes \Sym^2 V_2$ for two-dimensional vector spaces $V_1$ and $V_2$ are in bijection
with isomorphism classes of triples $(C,L_1,L_2)$, where $C$ is a genus one curve and $L_1$
and $L_2$ are nonisomorphic degree $2$ line bundles on $C$.
\end{thm}
The stabilizer of the $G$-action corresponds to the automorphism group of the triple $(C,L_1,L_2)$, which
for a generic nondegenerate $(2,2)$ form consists of the $K$-points of an extension of $\Jac(C)[2]$ by $\mathbb{G}_m^2$.
In general, the stabilizer consists of the $K$-points of a possibly non-split extension of this group
scheme by $\Aut(\Jac(C))$.
By the same argument as in Corollary \ref{cor:333CLP}, we may keep track of the difference $L_2 \otimes L_1^{-1}$
instead of $L_2$. The difference corresponds to a point on the Jacobian of $C$, and in fact, in the
period-index subgroup $\Jac^2_C(K)$.
\begin{cor} \label{cor:22formsCLP}
The nondegenerate $G$-orbits of $\Sym^2 V_1 \otimes \Sym^2 V_2$
for two-dimensional vector spaces $V_1$ and $V_2$ are in bijection
with isomorphism classes of triples $(C,L,P)$, where $C$ is a genus one curve, $L$
is a degree $2$ line bundle on $C$, and $0 \neq P \in \Jac^2_C(K)$.
\end{cor}
\subsubsection{Invariant theory} \label{sec:bideg22invtheory}
The $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2)$-invariants of the representation $\Sym^2 V_1 \otimes \Sym^2 V_2$ form
a polynomial ring generated by invariants $\delta_2$, $\delta_3$, and $\delta_4$ of degrees $2$, $3$, and $4$
(see \cite{vinberg}, for example). These same polynomials are relative invariants under the standard
action of $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$, but are invariant under the twisted action of $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$ described above.
When $f$ is a $(2,2)$ form given by \eqref{eq:22form}, we may take the generators to be
\begin{align*}
\delta_2 &= a_{33}^2 - 4 a_{32} a_{34} + 8 a_{24} a_{42} - 4 a_{23} a_{43} + 8 a_{22} a_{44} \\
\delta_3 &= a_{24} a_{33} a_{42} - a_{23} a_{34} a_{42} - a_{24} a_{32} a_{43} + a_{22} a_{34} a_{43} + a_{23} a_{32} a_{44} - a_{22} a_{33} a_{44} \\
\delta_4 &= I(f)
\end{align*}
although any linear combination of $I(f)$ and $\delta_2^2$ is a degree $4$ generator of the invariant ring.
Given a $(2,2)$ form $f$, these invariants are essentially described by the triple $(C,L,P)$ of Corollary \ref{cor:22formsCLP},
or more precisely, by the Jacobian $E$ of $C$ and the coordinates of $P$ on a Weierstrass model of $E$.
Recall that $I(f)$ and $J(f)$ are $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2)$-invariants of degrees $4$ and $6$,
obtained from the binary quartics $q_i$ for $i = 1$ or $2$. The Jacobian of the curve $C$
given by $f = 0$ is the Jacobian of the genus one curve associated to $q_i$, and it can be written
in Weierstrass form as
\begin{equation*}
y^2 = x^3 - 27 I(f) x - 27 J(f).
\end{equation*}
The nonzero point $P$ has coordinates $(x(f),y(f))$ satisfying this Weierstrass equation, and these coordinates are $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2)$-invariants
of degrees $2$ and $3$, respectively, i.e., scalar multiples of the generators $\delta_2$ and $\delta_3$ of the invariant ring! The relation
$$(108 \delta_3)^2 = (3 \delta_2)^3 - 27 I(f) (3 \delta_2) - 27 J(f)$$
shows that $(x(f),y(f)) = (3 \delta_2, 108 \delta_3)$.
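This relation can be verified symbolically from the formulas above. The sketch below (ours) assumes the normalizations $I = 12ae - 3bd + c^2$ and $J = 72ace + 9bcd - 27ad^2 - 27b^2e - 2c^3$ for a binary quartic $ax^4 + bx^3y + cx^2y^2 + dxy^3 + ey^4$, which we take to be the conventions of \S\ref{sec:binaryquartics}:

```python
# Symbolically verify (108 d3)^2 = (3 d2)^3 - 27 I (3 d2) - 27 J for a generic
# (2,2) form, with q1 the discriminant of f in the x-variables and I, J the
# classical invariants of the binary quartic q1.
import sympy as sp

w1, w2 = sp.symbols('w1 w2')
a = {(i, j): sp.Symbol(f'a{i}{j}') for i in (2, 3, 4) for j in (2, 3, 4)}

# f = A(w) x1^2 + B(w) x1 x2 + C(w) x2^2, coefficients indexed as in the text
A = a[2, 2]*w1**2 + a[3, 2]*w1*w2 + a[4, 2]*w2**2
B = a[2, 3]*w1**2 + a[3, 3]*w1*w2 + a[4, 3]*w2**2
C = a[2, 4]*w1**2 + a[3, 4]*w1*w2 + a[4, 4]*w2**2
q1 = sp.expand(B**2 - 4*A*C)                   # binary quartic in (w1, w2)

p = [sp.Poly(q1, w1, w2).coeff_monomial(w1**(4 - k) * w2**k) for k in range(5)]
Iq = 12*p[0]*p[4] - 3*p[1]*p[3] + p[2]**2
Jq = (72*p[0]*p[2]*p[4] + 9*p[1]*p[2]*p[3] - 27*p[0]*p[3]**2
      - 27*p[1]**2*p[4] - 2*p[2]**3)

d2 = (a[3, 3]**2 - 4*a[3, 2]*a[3, 4] + 8*a[2, 4]*a[4, 2]
      - 4*a[2, 3]*a[4, 3] + 8*a[2, 2]*a[4, 4])
d3 = (a[2, 4]*a[3, 3]*a[4, 2] - a[2, 3]*a[3, 4]*a[4, 2]
      - a[2, 4]*a[3, 2]*a[4, 3] + a[2, 2]*a[3, 4]*a[4, 3]
      + a[2, 3]*a[3, 2]*a[4, 4] - a[2, 2]*a[3, 3]*a[4, 4])

assert sp.expand((108*d3)**2 - ((3*d2)**3 - 27*Iq*(3*d2) - 27*Jq)) == 0
```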
We may also write the Jacobian of the genus one curve $C$ in generalized Weierstrass form as
\begin{equation} \label{eq:weierstrass234}
y^2 + a_3 y = x^3 + a_2 x^2 + a_4 x
\end{equation}
where the coefficients satisfy
\begin{align*}
a_2 = 9 \delta_2, &&
a_3 = 216 \delta_3, &&
a_4 = 27 \delta_2^2 - 27 \delta_4,
\end{align*}
and are different generators of the invariant ring (since we are working over a field $K$ not of characteristic $2$ or $3$).
\subsection{Hypercubes} \label{sec:hypercube}
We now consider $2 \times 2 \times 2 \times 2$ boxes, or hypercubes.
This space is
the fundamental representation for the degree $2$ cases,
and we will study a number of variants and generalizations in the subsections that follow.
The representation in question is
$V := V_1 \otimes V_2 \otimes V_3 \otimes V_4$, where each $V_i$ is a $2$-dimensional $K$-vector space,
with the natural action by $G := \mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3) \times \mathrm{GL}(V_4)$.
We prove the following theorem, where nondegeneracy corresponds to the nonvanishing of
a certain degree $24$ invariant, described in more detail below.
\begin{thm} \label{thm:hypercube}
Let $V_1$, $V_2$, $V_3$, $V_4$ be $2$-dimensional vector spaces over $K$.
Then nondegenerate $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3) \times \mathrm{GL}(V_4)$-orbits
of $V_1 \otimes V_2 \otimes V_3 \otimes V_4$ are in bijection
with isomorphism classes of quintuples $(C, L_1, L_2, L_3, L_4)$, where
$C$ is a genus one curve over $K$, and the $L_i$ are degree $2$
line bundles on $C$, satisfying $L_1 \otimes L_2 \cong L_3 \otimes
L_4$ and $L_i \not\cong L_j$ for $i \neq j, 1 \leq i \leq 2, 1 \leq j \leq 4$.
\end{thm}
The stabilizer of a nondegenerate hypercube giving the genus one curve $C$ is exactly the automorphism
group of the quintuple, provided that we record the isomorphism $L_1 \otimes L_2 \cong L_3 \otimes L_4$. In particular,
let $H$ be the extension of $\Jac(C)[2]$ by the kernel of the multiplication map
$\mathbb{G}_m(V_1) \times \mathbb{G}_m(V_2) \times \mathbb{G}_m(V_3) \times \mathbb{G}_m(V_4) \to \mathbb{G}_m$,
where each $\mathbb{G}_m(V_i)$ is the set of scalar transformations of $V_i$ for $1 \leq i \leq 4$. Then the
stabilizer consists of the $K$-points of a possibly non-split extension of $\Aut(\Jac(C))$ by this group scheme $H$.
For example, if the $j$-invariant of $\Jac(C)$ is not $0$ or $1728$, and $C$ has a rational point $O$ and
full rational $2$-torsion with respect to $O$, generated by $P_1$ and $P_2$, then the hypercube corresponding to
the quintuple $(C,\mathcal{O}(2O),\mathcal{O}(O+P_1), \mathcal{O}(O+P_2), \mathcal{O}(O+P_1 + P_2))$
will have stabilizer group $H \times \mathbb{Z}/2\mathbb{Z}$, that is, an extension of $(\mathbb{Z}/2\mathbb{Z})^3$ by $\mathbb{G}_m^3$.
\subsubsection{Geometric construction} \label{sec:HCgeom}
We describe how to construct the genus one curve and degree $2$ line bundles from a hypercube;
any $G$-equivalent hypercube will produce an isomorphic curve and line bundles.
Let $\AA \in V_1 \otimes V_2 \otimes V_3 \otimes V_4$, so $\AA$ induces a linear map from
$V_1^\vee \otimes V_2^\vee$ to $V_3 \otimes V_4$ and thus a linear map
$$\mathbb{P}(V_1^\vee \otimes V_2^\vee) \to \mathbb{P}(V_3 \otimes V_4).$$
There is a natural determinantal quadric in $\mathbb{P}(V_3 \otimes V_4)$; with
choices of bases for $V_3$ and $V_4$, it consists of nonzero $2 \times 2$ matrices,
up to scaling, which have determinant $0$. Let $C_{12}$ be the preimage of this
quadric in $\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)$ under the composition of the Segre map
$\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee) \to \mathbb{P}(V_1^\vee \otimes V_2^\vee)$
with the linear map given by $\AA$. Then $C_{12}$ is generically
a curve.
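Since $C_{12}$ is cut out by a bidegree $(2,2)$ form, a standard genus computation applies when it is smooth: by the adjunction formula on $\Pone \times \Pone$, a smooth curve of bidegree $(d_1,d_2)$ has genus
\begin{equation*}
g = (d_1 - 1)(d_2 - 1), \qquad \textrm{so here} \quad g = (2-1)(2-1) = 1,
\end{equation*}
which is why a smooth $C_{12}$ is a genus one curve.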
More explicitly, the curve $C_{12}$ is a determinantal variety, given
by the vanishing of the determinant of a $2 \times 2$ matrix of bidegree $(1,1)$
forms on $\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee) = \Pone \times \Pone$. With choices of
bases for all the vector spaces $V_i$, the hypercube may be written as
a $2 \times 2 \times 2 \times 2$ array $(a_{rstu})_{1 \leq r, s, t, u \leq 2}$ with
$a_{rstu} \in K$. Then the curve $C_{12}$ is given as the vanishing of
the determinant of $\AA(w,x,\cdot,\cdot)$, which may be represented as the matrix
%
\begin{small}
$$\begin{pmatrix} a_{1111} w_1 x_1 + a_{1211} w_1 x_2 + a_{2111} w_2 x_1 + a_{2211} w_2 x_2 &
a_{1112} w_1 x_1 + a_{1212} w_1 x_2 + a_{2112} w_2 x_1 + a_{2212} w_2 x_2 \\
a_{1121} w_1 x_1 + a_{1221} w_1 x_2 + a_{2121} w_2 x_1 + a_{2221} w_2 x_2 &
a_{1122} w_1 x_1 + a_{1222} w_1 x_2 + a_{2122} w_2 x_1 + a_{2222} w_2 x_2 \end{pmatrix},$$
\end{small}%
where $\{w_1, w_2 \}$ and $\{x_1, x_2\}$ are the bases for $V_1^\vee$ and $V_2^\vee$, respectively. This
determinant $f_{12}(w,x)$ is a $(2,2)$ form, i.e., an element of $\Sym^2 V_1 \otimes \Sym^2 V_2$, and it is invariant
under the action of $\mathrm{SL}(V_3) \times \mathrm{SL}(V_4)$. One may similarly define the varieties $C_{ij}$ for
any $1 \leq i \neq j \leq 4$. (We identify $C_{ij}$ and $C_{ji}$.)
We call a hypercube {\em nondegenerate} if the variety $C_{12}$ is smooth and one-dimensional, which
by \S \ref{sec:bideg22forms} is equivalent to the nonvanishing of an $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2)$-invariant
of degree $12$ in the coefficients of $f_{12}$, and thus of degree $24$ in the hypercube. This degree $24$
polynomial is invariant under all $\mathrm{SL}(V_i)$ for $1 \leq i \leq 4$. By symmetry (or explicit computation),
this polynomial is the discriminant for all the $f_{ij}$, and we call it the {\em discriminant} of the
hypercube $\AA$ itself. Nondegeneracy is preserved by the action of $G$. In the sequel, we will only
work with nondegenerate hypercubes.
If $\AA$ is nondegenerate, then for all points $(w,x) \in C_{12}$, the matrix $\AA(w,x,\cdot,\cdot)$
is not the zero matrix and thus has rank exactly $1$. (If it were the zero matrix, then all the partial
derivatives would vanish at the point $(w,x)$, so $C_{12}$ would not be smooth.) So for all $(w,x) \in C_{12}$,
there is exactly one dimension of covectors $y \in V_3^\vee$ such that $\AA(w,x,y,\cdot) = 0$ (and similarly, one
dimension of covectors $z \in V_4^\vee$ with $\AA(w,x,\cdot,z) = 0$).
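As a toy illustration of this rank one kernel condition (the specific matrix below is our own example, not derived from a particular hypercube), suppose that in chosen bases
\begin{equation*}
\AA(w,x,\cdot,\cdot) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \in V_3 \otimes V_4.
\end{equation*}
This matrix has determinant $0$ and rank $1$; the unique point $y \in \mathbb{P}(V_3^\vee)$ with $\AA(w,x,y,\cdot) = 0$ is $y = [0:1]$, and likewise the unique point $z \in \mathbb{P}(V_4^\vee)$ with $\AA(w,x,\cdot,z) = 0$ is $z = [0:1]$.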
Given a nondegenerate hypercube $\AA$, it turns out that all of the resulting curves $C_{ij}$ are isomorphic!
To see this, define the variety
$$C_{123} := \{(w,x,y) \in \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee) \times \mathbb{P}(V_3^\vee) : \AA(w,x,y,\cdot) = 0 \}.$$
The projection $\pi_{123}^{12}$ of the variety $C_{123}$ onto $C_{12}$ is an isomorphism; its inverse $\rho_{12}^{123}$ sends a point $(w,x)$ to $(w,x,y)$, where $y \in \mathbb{P}(V_3^\vee)$ corresponds to the one-dimensional kernel of $\AA(w,x,\cdot,\cdot)$.
By symmetry, the curves $C_{13}$ and $C_{23}$ are also isomorphic to $C_{123}$. We may similarly define the curves
$C_{ijk} \subset \mathbb{P}(V_i^\vee) \times \mathbb{P}(V_j^\vee) \times \mathbb{P}(V_k^\vee)$ for any $\{i,j,k\} \subset \{1,2,3,4\}$,
and they are all isomorphic to their projections to any two factors.
The $C_{ij}$ are all smooth irreducible isomorphic genus one curves, related by natural isomorphisms
$$\tau_{ij}^{jk} : C_{ij} \stackrel{\rho_{ij}^{ijk}}{\longrightarrow} C_{ijk} \stackrel{\pi_{ijk}^{ik}}{\longrightarrow} C_{jk}$$
for $\{i,j,k\} \subset \{1,2,3,4\}$, where each $\pi$ is the projection and $\rho$ is the natural inverse ``un-projection'' map (where each map has domain given by the subscript and target given by the superscript). We also obtain isomorphisms of the form
$$\tau_{ijk}^{jkl} : C_{ijk} \to C_{jk} \to C_{jkl}$$
for $\{i,j,k,l\} = \{1,2,3,4\}$. Finally, by the definition of these curves, we have natural projection maps
\begin{align*}
\pi_{ij}^i : C_{ij} \to \mathbb{P}(V_i^\vee) && \textrm{and}
&& \pi_{ijk}^i : C_{ijk} \to \mathbb{P}(V_i^\vee).
\end{align*}
It is clear that $\tau_{ijk}^{jkl}$ and $\tau_{jkl}^{ijk}$ are inverse
maps, as are $\tau_{ij}^{jk}$ and $\tau_{jk}^{ij}$. However, composing more than two such
maps in sequence will not always give identity maps on these curves.
There are two types of interesting cycles we can obtain from composing the $\tau_{ijk}^{jkl}$ maps. These are best exemplified
by arranging the curves $C_{ijk}$ in a tetrahedron as in \eqref{eq:tet}: there are four triangles (the faces
of the tetrahedron) and three four-cycles.
\begin{lemma} \label{lem:threecycle}
The triangle of maps on each of the faces of the tetrahedron \eqref{eq:tet}
is not the identity map, but composing it twice gives the identity map. In particular,
the composition $C_{ijk} \to C_{ikl} \to C_{ijl} \to C_{ijk}$ is the hyperelliptic involution for $C_{ijk} \to \mathbb{P}(V_i^\vee)$,
sending any point $(w,x_1,y_1)$ to the point $(w,x_2,y_2)$, where
$\{x_1, x_2\}$ and $\{y_1, y_2\}$ are the $($not necessarily distinct$)$ solutions to
$\det (\AA \lrcorner \, (w \otimes x)) = 0$ and $\det (\AA \lrcorner \, (w \otimes y)) = 0$, respectively.
\end{lemma}
\begin{proof}
Given a point $w \in \mathbb{P}(V_1^\vee)$ not in the ramification locus of
the projection from $C_{12}$, there are two distinct points $x_1, x_2 \in
\mathbb{P}(V_2^\vee)$ such that $\det \AA(w,x_i,\cdot,\cdot) = 0$. For $i = 1$ or $2$, we have $\AA(w, x_i, y_i,\cdot) = 0$
for exactly one point $y_i \in \mathbb{P}(V_3^\vee)$. Since $\AA(w, x_2, y_1, z) = 0$
for some $z \in \mathbb{P}(V_4^\vee)$, the linear form $\AA(w, \cdot, y_1 ,z)$
vanishes when evaluated at both $x_1$ and $x_2$, so it is identically
$0$; similarly, the linear form $\AA(w, x_2, \cdot, z)$ is identically zero. Thus, we have the composition
\begin{equation*}
\raisebox{\baselineskip}{
\xymatrix @R=0pt{
C_{123} \ar[r]^{\tau_{123}^{134}} & C_{134} \ar[r]^{\tau_{134}^{124}} & C_{124} \ar[r]^{\tau_{124}^{123}} & C_{123} \\
(w, x_1,y_1) \ar@{|->}[r] & (w,y_1,z) \ar@{|->}[r] & (w, x_2, z) \ar@{|->}[r] & (w, x_2, y_2).
}
}
\qedhere
\end{equation*}
\end{proof}
Four-cycles of maps $\tau_{ijk}^{jkl}$ are also not the identity; we will show this by proving a relation among
degree $2$ line bundles defined on each of the curves. For simplicity of notation, choose one curve, say $C_{12}$, to be the
primary curve under consideration. This choice matters in the
definitions and constructions we will make in the sequel, but all
choices are equivalent.
Define four line bundles $L_i$ on $C_{12}$ by pulling back the line
bundle $\mathcal{O}(1)$ from each $\mathbb{P}(V_i^\vee)$. Of course, it matters
through which maps we choose to pull back the bundle:
\begin{align} \label{eq:C12linebundles}
L_1 &:= (\pi_{12}^1)^* \mathcal{O}_{\mathbb{P}(V_1^\vee)}(1) \nonumber\\
L_2 &:= (\pi_{12}^2)^* \mathcal{O}_{\mathbb{P}(V_2^\vee)}(1) \\
L_3 &:= (\pi_{123}^3 \circ \rho_{12}^{123})^* \mathcal{O}_{\mathbb{P}(V_3^\vee)}(1) \nonumber\\
L_4 &:= (\pi_{124}^4 \circ \rho_{12}^{124})^* \mathcal{O}_{\mathbb{P}(V_4^\vee)}(1). \nonumber
\end{align}
That is, $L_1$ and $L_2$ come directly from the maps $C_{12} \to
\mathbb{P}(V_1^\vee)$ and $C_{12} \to \mathbb{P}(V_2^\vee)$, and for $i = 3$ or $4$, the
line bundle $L_i$ is pulled back via the simplest maps $C_{12} \to C_{12i} \to C_{2i} \to
\mathbb{P}(V_i^\vee)$. Since all the curves $C_{ij}$ are defined
by bidegree $(2,2)$ equations, each of these line bundles on $C_{12}$
has degree $2$.
By Lemma \ref{lem:ijembedding}, the line bundles $L_1$ and $L_2$ are not isomorphic, since $C_{12}$
is a smooth irreducible genus one curve given by a nondegenerate $(2,2)$ form.
Similarly, since $C_{ij}$ is also a smooth irreducible genus one curve for
$i = 1$ or $2$ and $j=3$ or $4$, the line bundles
$(\tau_{ij}^{12})^* L_i = (\pi_{ij}^i)^* \mathcal{O}_{\mathbb{P}(V_i^\vee)}(1)$ and
$(\tau_{ij}^{12})^* L_j = (\pi_{ij}^j)^* \mathcal{O}_{\mathbb{P}(V_j^\vee)}(1)$ on $C_{ij}$ are not isomorphic,
so $L_i$ and $L_j$ are not isomorphic bundles on $C_{12}$. Thus, the four line
bundles defined in \eqref{eq:C12linebundles} are all pairwise nonisomorphic, except
possibly $L_3$ and $L_4$.
\begin{lemma} \label{lem:HCrelation}
For the line bundles on $C_{12}$ defined above, we have the relation
\begin{equation} \label{eq:HCrelation}
L_1 \otimes L_2 \cong L_3 \otimes L_4.
\end{equation}
\end{lemma}
\begin{proof}
With a choice of a basis for $V_i$, points of the projective spaces
$\mathbb{P}(V_i^\vee)$ may be represented as $[a:b]$. Let
$\mathcal{D}(L)$ be the linear equivalence class of divisors
corresponding to a line bundle $L$. A representative $D_3$ of
$\mathcal{D}(L_3)$ is (the formal sum of the points in) the preimage of a fixed point, say
$[1:0]$, in $\mathbb{P}(V_3^\vee)$, and similarly, we may choose a
divisor $D_4$ in the class of $\mathcal{D}(L_4)$ as the
preimage of $[1:0] \in \mathbb{P}(V_4^\vee)$. Let $\AA(w,x,\cdot,\cdot)$ be
denoted by the matrix
\begin{equation*}
\begin{pmatrix} \AA_{11}(w,x) & \AA_{12}(w,x) \\ \AA_{21}(w,x) & \AA_{22}(w,x) \end{pmatrix} \in V_3 \otimes V_4.
\end{equation*}
Then $D_3 + D_4$ is the sum of the four points that are
solutions (counted up to multiplicity) of the system
\begin{equation*}
\left\{ \begin{matrix}
\AA_{11}(w,x) = 0 \\
\det \AA(w,x,\cdot,\cdot) = 0
\end{matrix} \right\}.
\end{equation*}
Interpreted in another way, these four points of intersection
are exactly the points of intersection of $C_{12}$ and the
bidegree $(1,1)$ curve given by $\AA_{11}$ in $\mathbb{P}(V_1^\vee)
\times \mathbb{P}(V_2^\vee)$. Thus, the line bundle corresponding to
the sum of these four points is just the pullback of
$\mathcal{O}_{\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)}(1,1)$ to $C_{12}$;
that is, $\mathcal{O}(D_3 + D_4) \cong L_1 \otimes L_2$, which is
the desired relation.
\end{proof}
Using this relation among the line bundles, a computation shows that the
four-cycles in the tetrahedron \eqref{eq:tet} are not commutative. For example, the composition map
$$\tau_{124}^{123} \circ \tau_{134}^{124} \circ \tau_{234}^{134} \circ \tau_{123}^{234} : C_{123} \longrightarrow C_{123}$$
is a nontrivial automorphism, given as a translation by the point
$$P := L_3 \otimes L_1^{-1} = L_2 \otimes L_4^{-1} \in \Pic^0(C_{12}) \cong \Jac(C_{12}) \cong \Jac(C_{123}),$$
where the two isomorphisms are entirely canonical, so $P$ can be thought of as a point on $\Jac(C_{123})$.
The reverse four-cycle is the inverse map and is given by translation by
$-P$. Similarly, the other four-cycles are given by translation by the points
$L_4 \otimes L_1^{-1}$ and $L_2 \otimes L_1^{-1}$ (up to sign). Because of the relation in \eqref{eq:HCrelation},
these points (with the correct choice of sign) add up to $0$ on the Jacobian! This may also be seen directly
from the tetrahedron picture, using the facts that each four-cycle decomposes as two consecutive three-cycles and
that each three-cycle composed with itself is the identity (by Lemma \ref{lem:threecycle}).
We summarize these results in the following proposition:
\begin{prop} \label{prop:fourcycles}
Given a nondegenerate hypercube $\AA$, we have the following statements,
for any permutation $(i,j,k,l)$ of $(1,2,3,4)$:
\begin{enumerate}
\item[{\rm (i)}]
The composition map
\begin{equation*}
\alpha_{ijkl} := \tau_{ijk}^{jkl} \circ \tau_{ijl}^{ijk} \circ \tau_{ikl}^{ijl} \circ \tau_{jkl}^{ikl} : C_{jkl} \longrightarrow C_{jkl}
\end{equation*}
is the automorphism of $C_{jkl}$ given by translation by the point
$$P_{ijkl} := M_l \otimes M_j^{-1} \in \Pic^0(C_{jl}) \cong \Jac(C_{jl}) \cong \Jac(C_{jkl})$$
where $M_j = (\pi_{jl}^j)^*\mathcal{O}_{\mathbb{P}(V_j^\vee)}(1)$ and $M_l = (\pi_{jl}^l)^* \mathcal{O}_{\mathbb{P}(V_l^\vee)}(1)$ are degree $2$
line bundles on $C_{jl}$.
\item[{\rm (ii)}] We have $P_{ijkl} = - P_{ilkj}$, as $\alpha_{ijkl} \circ \alpha_{ilkj}$ is the identity map on $C_{jkl}$.
\item[{\rm (iii)}]
The points $P_{ijkl}$, $P_{iklj}$, and $P_{iljk}$ sum to $0$ on the Jacobian of $C_{jkl}$, so the composition
of the automorphisms $\alpha_{ijkl}$, $\alpha_{iklj}$, and $\alpha_{iljk}$ in any order is the identity map
on $C_{jkl}$.
\end{enumerate}
\end{prop}
\subsubsection{Bijections}
Because the geometric constructions of the previous section are entirely $G$-invariant, we have already seen
that the $G$-orbit of a nondegenerate hypercube gives rise to a genus one curve and four line bundles (with a relation), up to isomorphism.
In fact, we may construct a nondegenerate hypercube from such a curve, along with the
line bundles, which will prove Theorem \ref{thm:hypercube}.
\begin{proof}[Proof of Theorem $\ref{thm:hypercube}$]
Let $C$ be a genus one curve, and let $L_1$, $L_2$, $L_3$, $L_4$ be degree $2$ line bundles
on $C$ as in the statement of the theorem. We first show how to construct a hypercube from this
data.
\begin{lemma} \label{lem:musurjective}
Given a genus one curve $C$ and three non-isomorphic degree
$2$ line bundles $L_1$, $L_2$, $L_3$ on $C$, the multiplication map (i.e., the cup product
on cohomology)
\begin{equation*}
\mu_{123}: H^0(C,L_1) \otimes H^0(C,L_2) \otimes H^0(C,L_3) \longrightarrow H^0(C,L_1 \otimes L_2 \otimes L_3)
\end{equation*}
is surjective, and its kernel may be naturally identified with the space of global
sections $H^0(C,L_i^{-1} \otimes L_j \otimes L_k)$ for any permutation $\{i,j,k\}$ of $\{1,2,3\}$.
\end{lemma}
\begin{proof}
Recall from the proof of Lemma \ref{lem:ijembedding} that the multiplication map
$$\mu_{ij}: H^0(C,L_i) \otimes H^0(C,L_j) \longrightarrow H^0(C,L_i \otimes L_j)$$
for two such line bundles is an isomorphism, due to the basepoint-free pencil trick.
We apply the same trick again here: for any permutation $\{i,j,k\}$ of $\{1,2,3\}$,
we tensor the sequence $0 \to L_i^{-1} \to H^0(C,L_i) \otimes \mathcal{O}_C \to L_i \to 0$
with $L_j \otimes L_k$ and take cohomology to obtain the exact sequence
\begin{align} \label{eq:baseptfreeHC}
0 &\to H^0(C,L_i^{-1} \otimes L_j \otimes L_k) \to H^0(C,L_i) \otimes H^0(C,L_j \otimes L_k)
\to H^0(C,L_i \otimes L_j \otimes L_k) \nonumber \\
&\to H^1(C,L_i^{-1} \otimes L_j \otimes L_k) = 0.
\end{align}
As the map $\mu_{123}$ factors through the isomorphism
$$(\mathrm{id},\mu_{jk}) : H^0(C,L_i) \otimes H^0(C,L_j \otimes L_k) \to H^0(C,L_i \otimes L_j \otimes L_k),$$
the sequence \eqref{eq:baseptfreeHC} shows that $\mu_{123}$ is surjective and its kernel may be naturally identified with
$H^0(C,L_i^{-1} \otimes L_j \otimes L_k)$.
\end{proof}
Given $C$, $L_1$, $L_2$, $L_3$ as in the lemma, by Riemann-Roch, the kernel of $\mu_{123}$ has dimension $2$, and
we may use the inclusion of this kernel into the domain to specify a hypercube
\begin{align*}
\AA &\in \Hom(\ker \mu_{123},H^0(C,L_1) \otimes H^0(C,L_2) \otimes H^0(C,L_3)) \\
&= H^0(C,L_1) \otimes H^0(C,L_2) \otimes H^0(C,L_3) \otimes (\ker \mu_{123})^\vee,
\end{align*}
where $V_i = H^0(C,L_i)$ for $1 \leq i \leq 3$ and $V_4 = (\ker \mu_{123})^\vee$.
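The dimension count here is a direct application of Riemann--Roch, which on a genus one curve gives $h^0(C,L) = \deg L$ for any line bundle $L$ of positive degree:
\begin{align*}
\dim \left( H^0(C,L_1) \otimes H^0(C,L_2) \otimes H^0(C,L_3) \right) &= 2 \cdot 2 \cdot 2 = 8, \\
\dim H^0(C, L_1 \otimes L_2 \otimes L_3) &= \deg (L_1 \otimes L_2 \otimes L_3) = 6, \\
\dim \ker \mu_{123} &= 8 - 6 = 2,
\end{align*}
in agreement with Lemma \ref{lem:musurjective}, which identifies $\ker \mu_{123}$ with the space of sections of the degree $2$ line bundle $L_i^{-1} \otimes L_j \otimes L_k$.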
We will show below that the hypercube $\AA$ thus constructed is nondegenerate and that
the geometric construction from $\AA$ gives a tuple isomorphic to the original $(C, L_1, L_2, L_3, L_4)$.
Let $C_{ij}'$ be the image of $C$
via the natural immersion into $\mathbb{P}(H^0(C,L_i)^\vee) \times \mathbb{P}(H^0(C,L_j)^\vee)$.
Let $C_{ij}$ be constructed from $\AA$ by
$$C_{ij} := \{ (w,x) \in \mathbb{P}(V_i^\vee) \times \mathbb{P}(V_j^\vee) :
\det (\AA \lrcorner \, (w \otimes x)) = 0 \} \subset \mathbb{P}(V_i^\vee) \times \mathbb{P}(V_j^\vee).$$
We will show that these two varieties are the same for all $i \neq j$, but first for {\em some} $i \neq j$.
\begin{claim} \label{claim:CijCijprime} For some $1 \leq i \neq j \leq 3$, we have $C_{ij} = C_{ij}'$ as sets.
\end{claim}
\begin{proof}
For all $i \neq j$, the inclusion $C_{ij}' \subseteq C_{ij}$ is easy: for each $1 \leq k \leq 3$,
let $\{r_{k1},r_{k2}\}$ be a basis of $H^0(C,L_k)$. Then the definition of $\AA$ implies that
\begin{equation*}
\AA \lrcorner \, \left( [r_{i1}(p):r_{i2}(p)] \otimes [r_{j1}(p):r_{j2}(p)] \otimes [r_{k1}(p):r_{k2}(p)] \right) = 0
\end{equation*}
for all points $p \in C$, so $\left( [r_{i1}(p):r_{i2}(p)], [r_{j1}(p):r_{j2}(p)] \right)$ lies in $C_{ij}$.
Since $C_{ij}$ is defined by a bidegree $(2,2)$ equation $f_{ij}$ in $\mathbb{P}(V_i^\vee) \times \mathbb{P}(V_j^\vee)$,
if we show that $f_{ij}$ is nonzero and irreducible, then we find that $C_{ij}$ is a smooth irreducible genus one curve and thus $C_{ij} = C_{ij}'$.
A curve defined by a bidegree $(d_1,d_2)$ form on $\Pone \times \Pone$ has arithmetic genus $(d_1-1)(d_2-1)$.
So if $f_{ij}$ is nonzero and factors nontrivially, then no irreducible component can be a smooth irreducible
genus one curve. However, since $C_{ij}$ contains the smooth irreducible genus
one curve $C_{ij}'$, the polynomial $f_{ij}$ must be either zero
or irreducible for each pair $i \neq j$. If $f_{ij} = 0$ identically, then
$C_{ij}$ is all of $\mathbb{P}(V_i^\vee) \times \mathbb{P}(V_j^\vee)$. The projection of
\begin{equation*}
C_{123} := \{(w,x,y) \in \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee) \times \mathbb{P}(V_3^\vee) : \AA(w,x,y,\cdot) = 0 \}
\end{equation*}
to any $\mathbb{P}(V_i^\vee) \times \mathbb{P}(V_j^\vee)$ is exactly $C_{ij}$
by definition, and we will show that at least one of these projections is not two-dimensional.
Let $f$ and $g$ be the two tridegree $(1,1,1)$ equations defining $C_{123}$. Because $\AA$
is defined by two linearly independent elements of $\ker \mu_{123}$, we have that $f$
and $g$ are nonzero and not multiples of one another. If $\gcd(f,g) = 1$, then
$C_{123}$ is a complete intersection and thus a one-dimensional variety. Otherwise,
suppose without loss of generality that $\gcd(f,g)$ has tridegree $(1,1,0)$ or $(1,0,0)$.
In either case, the projection to $\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)$ is still one-dimensional.
Therefore, there exists {\em some} $i \neq j$ such that $C_{ij}$ is not two-dimensional,
and thus must be exactly $C_{ij}'$.
\end{proof}
Since $f_{ij}$ cuts out a smooth irreducible genus one curve, we have
$\disc f_{ij} \neq 0$. Thus, the hypercube $\AA$ has nonzero discriminant and is nondegenerate.
As $\disc(\AA) \neq 0$, the polynomials $f_{kl}$ do not vanish for any $k \neq l$,
and the $C_{kl}$ are smooth irreducible genus one curves.
It follows from the proof of Claim \ref{claim:CijCijprime} that
all of the $C_{kl}$ are in fact set-theoretically equal to $C_{kl}'$. Moreover,
$C_{123}$ is set-theoretically equal to the image $C_{123}'$ of the
embedding of $C$ into the triple product space
$\mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee) \times \mathbb{P}(H^0(C,L_3)^\vee)$.
Because there is a canonical isomorphism $C_{123}' \to C_{123}$,
for $1 \leq i \leq 3$, the pullbacks of $\mathcal{O}_{\mathbb{P}(H^0(C,L_i)^\vee)}(1)$ to $C_{123}$ and then to $C$
are exactly the line bundles $L_i$.
From a genus one curve and three nonisomorphic degree $2$ line bundles on this curve,
we have constructed a nondegenerate hypercube. This
hypercube, in turn, produces---via the constructions of \S \ref{sec:HCgeom}---an
isomorphic curve and the same line bundles. Similarly, by the definitions of these
maps, going from $G$-orbits of nondegenerate hypercubes to quintuples $(C,L_1,L_2,L_3,L_4)$
as in the theorem, and then back to $G$-orbits of hypercubes, is the identity map.
\end{proof}
We now rewrite the basic bijection of Theorem \ref{thm:hypercube}, by describing the
geometric data in a slightly different way. This is analogous to the corollary following Theorem
\ref{thm:333bij}. We simply replace the data of the line bundles $L_2$, $L_3$, $L_4$ by the differences
between each of them and $L_1$, each of which is a point on $\Pic^0(C) \cong \Jac(C)$.
Since the sum of these differences (up to sign) is zero, it suffices
to keep track of two such points on the Jacobian of the curve, say $P := L_2 \otimes L_1^{-1}$ and
$P' := L_3 \otimes L_1^{-1}$. Recall (or see Appendix~\ref{appendix:torsors})
that the difference of two line bundles of the same degree on a genus one curve is a rational point
in the appropriate period-index subgroup of its Jacobian.
\begin{cor} \label{cor:HCbijP}
Let $V_1$, $V_2$, $V_3$, $V_4$ be $2$-dimensional $K$-vector spaces. Then the nondegenerate
$\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3) \times \mathrm{GL}(V_4)$-orbits of $V_1 \otimes V_2 \otimes V_3 \otimes V_4$
are in bijection with isomorphism classes of quadruples $(C,L,P,P')$, where $C$ is a genus one curve
over $K$, $L$ is a degree $2$ line bundle on $C$, and $P$ and $P'$ are distinct nonzero points in $\Jac^2_C(K) \subset \Jac(C)(K)$.
\end{cor}
This also concludes the proof of Theorem \ref{hyperpar}, where we take $P''$ to represent the difference between $L_1$ and $L_3$.
\subsubsection{Invariant theory}
The $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2) \times \mathrm{SL}(V_3) \times \mathrm{SL}(V_4)$-invariants of an element
of $V_1 \otimes V_2 \otimes V_3 \otimes V_4$, for two-dimensional vector spaces $V_i$, form a polynomial
ring generated freely by $a_2$, $a_4$, $a_4'$, $a_6$ of degrees $2$, $4$, $4$, and $6$,
respectively \cite{littelmann}. Just as in the previous cases considered, these invariants have
several interpretations in terms of each orbit's geometric data consisting of a genus one curve $C$,
a degree $2$ line bundle $L$, and nonzero points $P$, $P'$, and $P''$ in $\Jac^2_C(K)$ that sum to $0$.
One geometric interpretation of the generators of the invariant ring was discussed in \S \ref{sec:HCorbitpreview}: the Jacobian
of the genus one curve is given by
\begin{equation} \label{eq:EC812coeffs}
E: y^2 = x^3 + a_8 x + a_{12},
\end{equation}
where we have formulas for $a_8$ and $a_{12}$ in terms of $a_2$, $a_4$, $a_4'$, and $a_6$; then
$a_2$ is the slope of the line on which $P$, $P'$, $P''$ lie (on $E$); $(a_4, a_6)$ are the coordinates for the point $P$ on $E$;
and $a_4'$ is the $x$-coordinate for $P'$.
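This description is consistent with the group law on the Weierstrass model \eqref{eq:EC812coeffs}: three points of $E$ $($counted with multiplicity$)$ sum to the identity precisely when they are collinear, so
\begin{equation*}
P + P' + P'' = 0 \quad \Longleftrightarrow \quad P,\, P',\, P'' \ \textrm{lie on a single line},
\end{equation*}
and the slope $a_2$ of that line, together with the coordinates of $P$ and the $x$-coordinate of $P'$, determines the whole configuration.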
Another interpretation gives a model for the Jacobian elliptic curve with fixed points corresponding to $P$ and $P'$:
\begin{prop} \label{prop:HCinvsEC}
There exists a choice of normalization for the
$\mathrm{SL}(V_1) \times \mathrm{SL}(V_2) \times \mathrm{SL}(V_3) \times \mathrm{SL}(V_4)$-invariants $\delta_2$, $\delta_4$, $\delta_4'$, $\delta_6$
such that given a nondegenerate tensor in $V_1 \otimes V_2 \otimes V_3 \otimes V_4$ corresponding to $(C,L,P,P')$ as in
Corollary~$\ref{cor:HCbijP}$, the Jacobian of $C$ may be given in normal form as
\begin{equation} \label{eq:HCEC}
E: y^2 + \delta_4' y = x^4 + \delta_2 x^3 + \delta_4 x^2 + \delta_6 x
\end{equation}
with identity point $(0,0)$, and the points $P$ and $P'$ correspond to the two points at infinity on this quartic model.
\end{prop}
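The two points at infinity in Proposition \ref{prop:HCinvsEC} can be made explicit (a standard computation for quartic models; recall that $K$ does not have characteristic $2$). Completing the square in \eqref{eq:HCEC} gives
\begin{equation*}
\left( y + \tfrac{1}{2}\delta_4' \right)^2 = x^4 + \delta_2 x^3 + \delta_4 x^2 + \delta_6 x + \tfrac{1}{4}\delta_4'^2,
\end{equation*}
so along the two branches at infinity we have $y \sim \pm x^2$; these two branches give the two rational points at infinity on the smooth projective model.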
Just as for Proposition \ref{prop:333invthy}, straightforward proofs of both of these results are computational. In this case, the invariants
are very reasonable to work with explicitly, and it is easy to show that the elliptic curve in \eqref{eq:HCEC} is isomorphic to the
Jacobian of the genus one curves constructed from the hypercube and that the points at infinity give the translations $\alpha_{ijk}$. An
abstract proof of Proposition \ref{prop:HCinvsEC}, again like in Proposition \ref{prop:333invthy}, relies on the fact
that elliptic curves with two distinct non-identity points may be written in the form \eqref{eq:HCEC},
so the coefficients $\delta_2$, $\delta_4$, $\delta_4'$, $\delta_6$ are relative
$\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3) \times \mathrm{GL}(V_4)$-invariants of $V_1 \otimes V_2 \otimes V_3 \otimes V_4$.
For any elliptic curve $E$ over $K$ of the form \eqref{eq:HCEC}, there always exists a $G$-orbit of $V$ for which $E$ is the Jacobian of the associated genus one curve, giving the statement analogous to Corollary \ref{cor:333surjorbits}:
\begin{cor} \label{cor:HCsurjorbits}
The map from nondegenerate orbits $V(K)/G(K)$ to elliptic curves of the form
$$E: y^2 + \delta_4' y = x^4 + \delta_2 x^3 + \delta_4 x^2 + \delta_6 x$$
with $\delta_2$, $\delta_4$, $\delta_4'$, $\delta_6 \in K$, by
taking the Jacobian of the genus one curve associated to the orbit, is surjective.
\end{cor}
\subsection{Symmetric hypercubes} \label{sec:symhypercubes}
In this section, we study ``symmetrized'' hypercubes, as discussed in \S \ref{sec:symHCpreview}--\ref{sec:doubledoublesym}. We show that nondegenerate orbits of the different types of symmetric hypercubes correspond to genus one curves with different numbers of degree $2$ line bundles. Nondegeneracy for symmetric hypercubes is determined by the nonvanishing of the same degree~$24$ discriminant as for hypercubes, although this discriminant factors differently in each case.
\subsubsection{Doubly symmetric hypercubes} \label{sec:2symHC}
The simplest case is that of doubly symmetric hypercubes, i.e., elements of the representation $V_1 \otimes V_2 \otimes \Sym_2V_3 \subset V_1 \otimes V_2 \otimes V_3 \otimes V_3$ of $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3)$, where the $V_i$ are $2$-dimensional $K$-vector spaces. With choices of bases for each vector space, the elements may be viewed as doubly symmetric hypercubes or as $2 \times 2$ matrices of binary quadratic forms. Away from characteristic $2$, this is the same as the quotient representation $V_1 \otimes V_2 \otimes \Sym^2V_3$.
\begin{thm} \label{thm:2symHCL}
Let $V_1$, $V_2$, $V_3$ be $2$-dimensional vector spaces over $K$. Then
nondegenerate $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3)$-orbits of $V_1
\otimes V_2 \otimes \Sym_2V_3$ are in bijection with isomorphism classes
of quadruples $(C,L_1,L_2,L_3)$, where $C$ is a genus one curve over
$K$, and $L_1$, $L_2$, $L_3$ are degree $2$ line bundles on $C$ satisfying
$L_1 \otimes L_2 \cong L_3^{\otimes 2}$ and $L_3$ not isomorphic to
$L_1$ or $L_2$.
\end{thm}
\begin{proof}
Given a nondegenerate element $\AA \in V_1 \otimes V_2 \otimes \Sym_2V_3 \subset V_1 \otimes V_2 \otimes V_3 \otimes V_3$, we construct the genus one curve $C \subset \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)$ and four degree $2$ line bundles $L_1$, $L_2$, $L_3$, $L_4$ on $C$ in the same way as in \S \ref{sec:hypercube}. The symmetry implies that the line bundles $L_3$ and $L_4$ may be naturally identified, so we have from before that $L_1 \otimes L_2 \cong L_3^{\otimes 2}$ and $L_3$ is not isomorphic to $L_1$ or $L_2$.
Conversely, given such $(C,L_1,L_2,L_3)$ as in the theorem, we may set $L_4 = L_3$ and use the construction in the proof of Theorem \ref{thm:hypercube} to obtain a nondegenerate orbit of a hypercube $\AA \in V_1 \otimes V_2 \otimes V_3 \otimes V_4$, where $V_i = H^0(C,L_i)$ for $i = 1, 2, 3$ and $V_4$ is the dual of the kernel of the multiplication map $\mu_{123}$. From the proof of Theorem \ref{thm:hypercube}, we see that $V_4$ is also identified with $H^0(C,L_4)$, so the spaces $V_3$ and $V_4$ are naturally identified, say by $\psi_{43}: V_4 \to V_3$. The maps from $C \cong C_{12} \subset \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee)$ to $\mathbb{P}(V_3^\vee)$ and $\mathbb{P}(V_4^\vee)$ are both given by sections of the same line bundle $L_3$ and thus identical (after applying the identification $\psi_{43}$).
Now given a rank one element $B \in V \otimes V$ for a $2$-dimensional $K$-vector space $V$, if $B(x, \cdot) = 0$ and $B(\cdot, x) = 0$ for some nonzero $x \in V^\vee$, then $B$ is in the subspace $\Sym_2 V$ of $V \otimes V$. Therefore, for any $(x,y) \in C_{12}$, the map $1 \otimes \psi_{43}$ on $V_3 \otimes V_4$ takes $\AA(x,y,\cdot,\cdot)$ to an element of $\Sym_2 V_3 \subset V_3 \otimes V_3$. And since $C_{12}$ spans $\mathbb{P}(H^0(C,L_1)^\vee) \times \mathbb{P}(H^0(C,L_2)^\vee)$, the map $1 \otimes 1 \otimes 1 \otimes \psi_{43}$ on $V_1 \otimes V_2 \otimes V_3 \otimes V_4$ sends $\AA$ to an element of $V_1 \otimes V_2 \otimes \Sym_2 V_3 \subset V_1 \otimes V_2 \otimes V_3 \otimes V_3$.
\end{proof}
The two points $P$ and $P'$ referred to in Corollary \ref{cor:HCbijP} are now related; in particular, we have $P = 2 P'$. Therefore, it suffices to keep track of only a single point $P'$, giving the following basis-free formulation of Theorem \ref{doublesympar}:
\begin{cor} \label{cor:2symHCP}
Let $V_1$, $V_2$, $V_3$ be $2$-dimensional vector spaces over $K$. Then
nondegenerate $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3)$-orbits of $V_1
\otimes V_2 \otimes \Sym_2V_3$ are in bijection with isomorphism classes
of triples $(C,L,P')$, where $C$ is a genus one curve over
$K$, $L$ is a degree $2$ line bundle on $C$, and $P'$ is a nonzero, non-$2$-torsion point
in $\Jac^2_C(K)$.
\end{cor}
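The relation $P = 2P'$ can be verified directly from the line bundle identities, using the normalization $P := L_2 \otimes L_1^{-1}$ and $P' := L_3 \otimes L_1^{-1}$ on $\Pic^0(C) \cong \Jac(C)$:
\begin{equation*}
2P' = L_3^{\otimes 2} \otimes L_1^{-2} \cong (L_1 \otimes L_2) \otimes L_1^{-2} = L_2 \otimes L_1^{-1} = P,
\end{equation*}
where the middle isomorphism is the relation $L_1 \otimes L_2 \cong L_3^{\otimes 2}$ from Theorem \ref{thm:2symHCL}.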
The ring of $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2) \times \mathrm{SL}(V_3)$-invariants of $V_1 \otimes V_2 \otimes \Sym_2 V_3$ is a polynomial ring generated in degrees $2$, $4$, and $6$ \cite{littelmann}. Again, we may find several related geometric interpretations of these invariants.
Recall that there is a choice of rational generators $a_2$, $a_4$, $a_4'$, $a_6$ for the invariant ring of hypercubes such that the Jacobian of the genus one curve is given by \eqref{eq:EC812coeffs}, where $a_8$ and $a_{12}$ are given as in \eqref{eq:a8a12} and the two points on the Jacobian are $P = (a_4, a_6)$ and $P' = (a_4', a_6')$. For the doubly symmetric hypercube, as we know from Corollary \ref{cor:2symHCP}, because we have $P = 2P'$, we compute that $2 a_4' = 9 a_2^2 - a_4$. The expressions for $a_8$, $a_{12}$, and the discriminant~$\Delta$ also simplify significantly, when written in terms of $a_2$, $a_4'$, $a_6'$:
\begin{align}
a_8 &= -3 a_4'^2 + 2 a_2 a_6', \nonumber \\
a_{12} &= 2 a_4'^3 - 2 a_2 a_4' a_6' + a_6'^2,\nonumber \\ \label{discstar}
\Delta &= -16 a_6'^2 (-36 a_2^2 a_4'^2 + 108 a_4'^3 + 32 a_2^3 a_6' - 108 a_2 a_4' a_6' + 27 a_6'^2).
\end{align}
Note that because of the factorization of $\Delta$, the nondegeneracy condition that we require is now actually the nonvanishing of a degree $6 + 12 = 18$ invariant.
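As a consistency check on the grading (our own bookkeeping): the invariants $a_2$, $a_4'$, $a_6'$ have degrees $2$, $4$, $6$ in the coefficients of the doubly symmetric hypercube, so every monomial in the second factor of \eqref{discstar} has degree $12$,
\begin{equation*}
\deg(a_2^2 a_4'^2) = \deg(a_4'^3) = \deg(a_2^3 a_6') = \deg(a_2 a_4' a_6') = \deg(a_6'^2) = 12.
\end{equation*}
Thus $\Delta = -16\, a_6'^2 \cdot (\textrm{degree } 12 \textrm{ factor})$ has degree $24$, and the degree $18$ invariant controlling nondegeneracy is the product of $a_6'$ with the degree $12$ factor.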
A little bit of algebra shows that the Jacobian is isomorphic to the elliptic curve
\begin{equation} \label{eq:Jacfor2symHC}
E: y^2 + 2 a_2 x y + 2 a_6' y = x^3 + (3 a_4' - a_2^2) x^2
\end{equation}
where the point $P'$ is now at $(x,y) = (0,0)$.
\begin{prop}
There is a choice of normalization for the relative invariants $b_2$, $b_4$, $b_6$ for the space of doubly symmetric hypercubes such that
given a nondegenerate element of $V_1 \otimes V_2 \otimes \Sym_2 V_3$ corresponding to $(C,L,P')$ as in Corollary~$\ref{cor:2symHCP}$, the Jacobian of $C$ may be given in generalized Weierstrass form as
\begin{equation} \label{eq:2symHCJacnormal}
E: y^2 + b_2 x y + b_6 y = x^3 + b_4 x^2
\end{equation}
with the point $P'$ being $(x,y) = (0,0)$. Furthermore, the map from nondegenerate $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3)$-orbits of this
representation to elliptic curves of the form \eqref{eq:2symHCJacnormal}, given by taking the Jacobian of the associated genus one curve, is
surjective.
\end{prop}
\subsubsection{Triply symmetric hypercubes} \label{sec:3symHC}
Next, we study the space of triply symmetric hypercubes. We may use the same methods as before to obtain a parametrization of the $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$-orbits of the space $V_1 \otimes \Sym_3 V_2 \subset V_1 \otimes V_2 \otimes V_2 \otimes V_2$, for $2$-dimensional $K$-vector spaces $V_1$ and $V_2$. As $K$ does not have characteristic $2$ or $3$ by assumption, this space is isomorphic to the quotient space $V_1 \otimes \Sym^3 V_2$, and it may also be thought of as pairs of binary cubic forms. The following is a basis-free version of Theorem \ref{triplesympar}:
\begin{thm} \label{thm:3symHCL}
Let $V_1$ and $V_2$ be $2$-dimensional vector spaces over $K$. Then
nondegenerate $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$-orbits of $V_1 \otimes \Sym_3V_2$
are in bijection with isomorphism classes of triples $(C,L,P)$,
where $C$ is a genus one curve over $K$, $L$ is a degree $2$ line bundle on $C$,
and $P$ is a nonzero $3$-torsion point of $\Jac(C)(K)$.
\end{thm}
\begin{proof}
From a nondegenerate element of $V_1 \otimes \Sym_3V_2 \subset V_1 \otimes V_2 \otimes V_2 \otimes V_2$, we obtain a genus one curve $C$ and four line bundles $L_1$, $L_2$, $L_3$, $L_4$ such that $L_1 \otimes L_2 \cong L_3 \otimes L_4$, just as for the usual hypercube. For ease of exposition, we will sometimes refer to the second and third copies of $V_2$ as $V_3$ and $V_4$, respectively. The symmetry clearly indicates that $L_3$ and $L_4$ are isomorphic. While it is tempting to conclude that the symmetry also implies that $L_2$ is isomorphic to $L_3$ and $L_4$, recall that this cannot be so even in the usual hypercube case! We instead have that the map from the curve $C$ to $\mathbb{P}(V_1^\vee) \times \mathbb{P}(V_2^\vee) \times \mathbb{P}(V_3^\vee)$ is invariant under switching the latter two factors (where $V_2 = V_3$), so there must be an isomorphism $L_2 \otimes L_3 \cong L_1^{\otimes 2}$. Thus, combining these relations and setting $P := L_2 \otimes L_1^{-1}$ as a point on $\Jac(C)$, we have that $3P = 0$. Note that because $2 \Jac(C)(K) \subset \Jac_C^2(K)$, all $3$-torsion points are in the degree $2$ period-index subgroup. Therefore, from a nondegenerate triply symmetric hypercube, we obtain a genus one curve, a degree $2$ line bundle $L_1$, and a nonzero $3$-torsion point in $\Jac(C)(K)$.
Conversely, given such a triple $(C,L,P)$, we claim that we may construct a triply symmetric hypercube. We may define the line bundles $L_1 = L$, $L_2 = L \otimes P$ and $L_3 = L_4 = L \otimes P \otimes P$, and the usual construction produces a nondegenerate hypercube $\AA \in V_1 \otimes V_2 \otimes V_3 \otimes V_4$ from $(C,L_1,L_2,L_3,L_4)$, where $V_1 = H^0(C,L_1)$, $V_2 = H^0(C,L_2)$, $V_3 = H^0(C,L_3)$, and $V_4$ is dual to the kernel of the multiplication map $\mu_{123}$. By the same argument as in Theorem \ref{thm:2symHCL}, we may choose an appropriate identification of $V_3$ and $V_4$ such that $\AA$ lies in $V_1 \otimes V_2 \otimes \Sym_2 V_3$; in other words, $\AA$ is invariant under the transposition $(34)$ acting on the indices of the vector spaces $V_i$ for $i = 1$, $2$, $3$, $4$.
In fact, we may identify $V_2$ and $V_3$ as well. Our $\AA$ gives rise again to a genus one curve isomorphic to $C$, but we may choose different line bundles to reconstruct the hypercube. That is, by focusing on $C \hookrightarrow C_{14} \subset \mathbb{P}(V_1^\vee) \times \mathbb{P}(V_4^\vee)$, we have line bundles $L_1$ and $L_4$, as before, which are the pullbacks of $\mathcal{O}_{\mathbb{P}(V_1^\vee)}(1)$ and $\mathcal{O}_{\mathbb{P}(V_4^\vee)}(1)$, respectively, to $C_{14}$. The pullbacks of $\mathcal{O}_{\mathbb{P}(V_2^\vee)}(1)$ and $\mathcal{O}_{\mathbb{P}(V_3^\vee)}(1)$ to $C_{14}$ via $\rho_{14}^{124}$ and $\rho_{14}^{134}$, respectively, are now both isomorphic to $L_1 \otimes P$. Using the multiplication map $\mu_{124}$ to reconstruct the same hypercube $\AA$ gives a natural identification of $V_2$ and $V_3$ where the maps from $C_{14}$ to $\mathbb{P}(V_2^\vee)$ and to $\mathbb{P}(V_3^\vee)$ are identical. Thus, we obtain an identification of $V_2$ and~$V_3$ such that the hypercube $\AA$ remains invariant under the transposition $(23)$.
Therefore, because $\AA$ is fixed under the transpositions $(23)$ and $(34)$, it is a triply symmetric hypercube in $V_1 \otimes \Sym_3 V_2$, as desired.
\end{proof}
The $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2)$-invariants for the space $V_1 \otimes \Sym_3 V_2$ form a polynomial ring, generated by two polynomials $a_2$ and $a_6$ of degrees $2$ and $6$, respectively. We may use our understanding of the invariant theory of normal hypercubes and of doubly symmetric hypercubes to explain these invariants geometrically. In particular, the two degree $4$ invariants for hypercubes (and the one for doubly symmetric hypercubes) are now just $a_2^2/3$. Substituting this relation into (\ref{discstar}) then gives that the Jacobian of the associated genus one curve $C$ has discriminant
$$\Delta = 16 (4 a_2^3 - 27 a_6) a_6^3.$$
Thus, the condition that a triply symmetric hypercube is nondegenerate is given by the nonvanishing of a polynomial of degree $6 + 6 = 12$.
The Jacobian of $C$ may also be written in the form
$$E: y^2 + 2 a_2 x y + 2 a_6 y= x^3,$$
where $P$ is the $3$-torsion point at $(x,y) = (0,0)$.
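Both claims can be checked directly (a sketch of ours, using only the displayed model): the Weierstrass discriminant of $y^2 + 2a_2 xy + 2a_6 y = x^3$ is $16(4a_2^3 - 27a_6)a_6^3$, and the tangent line to $E$ at $(0,0)$ is $y = 0$, which meets $E$ only at $x = 0$ with multiplicity three, so $(0,0)$ is $3$-torsion:

```python
# Check the discriminant formula and the 3-torsion tangency at (0,0).
from sympy import symbols, expand, diff

x, y, a2, a6 = symbols("x y a2 a6")
F = y**2 + 2*a2*x*y + 2*a6*y - x**3

# Discriminant via the standard b-invariants (a1 = 2*a2, a3 = 2*a6, rest 0)
a1, a3 = 2*a2, 2*a6
b2, b4, b6, b8 = a1**2, a1*a3, a3**2, 0
Delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
assert expand(Delta - 16*(4*a2**3 - 27*a6)*a6**3) == 0

# Tangency at P = (0,0): F_x(0,0) = 0 while F_y(0,0) = 2*a6 != 0,
# so the tangent line is y = 0; substituting gives -x^3 (a triple zero),
# hence 2P = -P and 3P = O.
assert diff(F, x).subs({x: 0, y: 0}) == 0
assert expand(F.subs(y, 0)) == -x**3
```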
\begin{prop}
There is a choice of normalization for the relative invariants $b_2$ and $b_6$ for the space of triply symmetric hypercubes such that
given a nondegenerate element of $V_1 \otimes \Sym_3 V_2$ corresponding to $(C,L,P)$ as in Theorem $\ref{thm:3symHCL}$, the Jacobian
of $C$ may be given in generalized Weierstrass form as
\begin{equation} \label{eq:3symHCJacnormal}
E: y^2 + b_2 x y + b_6 y = x^3
\end{equation}
with the $3$-torsion point $P$ being $(x,y) = (0,0)$. Furthermore, the map from nondegenerate $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$-orbits of this
representation to elliptic curves of the form \eqref{eq:3symHCJacnormal}, given by taking the Jacobian of the associated genus one curve, is
surjective.
\end{prop}
\subsubsection{Doubly doubly symmetric hypercubes, or bidegree \texorpdfstring{$(2,2)$}{(2,2)} forms again} \label{sec:22symHC}
We now study the subrepresentation $\Sym_2V_1 \otimes \Sym_2V_2 \subset V_1 \otimes V_1 \otimes V_2 \otimes V_2$ of $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$, where $V_1$ and $V_2$ are $2$-dimensional $K$-vector spaces. We call these doubly doubly symmetric hypercubes. Away from characteristic $2$, this space is isomorphic to the representation $\Sym^2V_1 \otimes \Sym^2V_2$ of bidegree $(2,2)$ forms, which we examined in \S \ref{sec:bideg22forms} with a different interpretation.
\begin{thm} \label{thm:22symHC}
Let $V_1$ and $V_2$ be $2$-dimensional vector spaces over $K$. Then nondegenerate $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2)$-orbits of $\Sym_2V_1 \otimes \Sym_2V_2$ are in bijection with isomorphism classes of triples $(C,L,P)$, where $C$ is a genus one curve over $K$, $L$ is a degree $2$ line bundle on $C$, and $P$ is a nonzero non-$2$-torsion point on $\Jac(C)(K)$.
\end{thm}
This statement is a basis-free version of Theorem \ref{doubledoublesympar}. Note that the moduli problem for doubly doubly symmetric hypercubes is identical to that for doubly symmetric hypercubes!
\begin{proof}
Starting from an element of $\Sym_2V_1 \otimes \Sym_2 V_2 \subset V_1 \otimes V_1 \otimes \Sym_2V_2$,
Corollary~\ref{cor:2symHCP} gives the triple $(C,L,P)$ as desired.
Conversely, given such $(C,L,P)$, recall from the proof of Theorem \ref{thm:2symHCL} that we may construct a hypercube $\AA \in U_1 \otimes U_2 \otimes \Sym_2U_3$, where $U_i := H^0(C,L_i)$ for
\begin{align*}
L_1 := L && L_2 := L \otimes P \otimes P && L_3 := L \otimes P.
\end{align*}
From this hypercube $\AA$, we may create the usual curves $C_{ijk}$ in $\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1$; for example, we have $C_{123} \subset \mathbb{P}(U_1^\vee) \times \mathbb{P}(U_2^\vee) \times \mathbb{P}(U_3^\vee)$ with $L_i$ the pullback of $\mathcal{O}_{\mathbb{P}(U_i^\vee)}(1)$ to $C_{123}$. Under the composition
$$C_{123} \to C_{13} \to C_{134} \to C_{34} \to C_{234} \to \mathbb{P}(U_2^\vee),$$
the line bundle $\mathcal{O}_{\mathbb{P}(U_2^\vee)}(1)$ pulled back to $C_{123}$ is isomorphic to $L_1$ (using the relations of the form in Lemma \ref{lem:HCrelation}). In particular, this gives a natural identification of the vector spaces $U_1 = H^0(C,L_1)$ with $U_2$! Moreover, the two maps
\begin{align*}
\pi_{134}^1 \circ \rho_{34}^{134}: C_{34} \to C_{134} \to \mathbb{P}(U_1^\vee) \\
\pi_{234}^2 \circ \rho_{34}^{234}: C_{34} \to C_{234} \to \mathbb{P}(U_2^\vee)
\end{align*}
are the same after the identification of $U_1$ and $U_2$. Therefore, there exists a choice of basis for $U_1$ and for $U_2$ such that $\AA$ is actually
invariant when switching the first and second factor; in other words, it is in the orbit of an element in $\Sym_2U_1 \otimes \Sym_2U_3$.
\end{proof}
There also exists a straightforward computational proof, by exhibiting a linear transformation in $\mathrm{GL}(V_1)$ taking an element of $V_1 \otimes V_1 \otimes \Sym_2V_2$ to an element of $\Sym_2V_1 \otimes \Sym_2V_2$. This linear transformation has entries that are degree $3$ in the coefficients of the
original element; its determinant is nonzero for nondegenerate doubly symmetric hypercubes, as it is exactly the degree~$6$ invariant~$a_6'$ from \S \ref{sec:2symHC}, which appears as a factor of the discriminant for doubly symmetric hypercubes. If we choose bases $\{u_1,u_2\}$ and $\{v_1,v_2\}$ for $V_1$ and $V_2$, respectively, we may represent a doubly symmetric hypercube as
\begin{equation} \label{eq:2symHCex}
\sum_{i,j=1}^2 (r_{ij} v_1^2 + s_{ij} v_1 v_2 + t_{ij} v_2^2) u_i u_j.
\end{equation}
Then the linear transformation $(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}) \in \mathrm{GL}(V_1)$ with
\begin{align*}
a &= -r_{22} s_{21} t_{11} + r_{21} s_{22} t_{11} + r_{22} s_{11} t_{21} - r_{11} s_{22} t_{21} - r_{21} s_{11} t_{22} + r_{11} s_{21} t_{22} \\
b &= -r_{21} s_{12} t_{11} + r_{12} s_{21} t_{11} + r_{21} s_{11} t_{12} - r_{11} s_{21} t_{12} - r_{12} s_{11} t_{21} + r_{11} s_{12} t_{21} \\
c &= -r_{22} s_{21} t_{12} + r_{21} s_{22} t_{12} + r_{22} s_{12} t_{21} - r_{12} s_{22} t_{21} - r_{21} s_{12} t_{22} + r_{12} s_{21} t_{22} \\
d &= -r_{22} s_{12} t_{11} + r_{12} s_{22} t_{11} + r_{22} s_{11} t_{12} - r_{11} s_{22} t_{12} - r_{12} s_{11} t_{22} + r_{11} s_{12} t_{22}
\end{align*}
will send the doubly symmetric hypercube \eqref{eq:2symHCex} to a doubly doubly symmetric hypercube.
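One can verify the displayed transformation symbolically. The index convention below is our assumption (the matrix acts on the first $V_1$-factor by $T'_{ij} = \sum_k g_{ik} T_{kj}$, writing $T_{ij} = (r_{ij}, s_{ij}, t_{ij})$); with it, the symmetry $T'_{12} = T'_{21}$ of the transformed hypercube reduces to a Cramer-type identity among four vectors in $K^3$:

```python
# Check that a*T_12 + b*T_22 = c*T_11 + d*T_21 componentwise, which makes the
# transformed hypercube symmetric in its two V_1-factors (under our convention).
from sympy import symbols, expand

r11, r12, r21, r22, s11, s12, s21, s22, t11, t12, t21, t22 = symbols(
    "r11 r12 r21 r22 s11 s12 s21 s22 t11 t12 t21 t22")

# The entries of the transformation, exactly as displayed above
a = -r22*s21*t11 + r21*s22*t11 + r22*s11*t21 - r11*s22*t21 - r21*s11*t22 + r11*s21*t22
b = -r21*s12*t11 + r12*s21*t11 + r21*s11*t12 - r11*s21*t12 - r12*s11*t21 + r11*s12*t21
c = -r22*s21*t12 + r21*s22*t12 + r22*s12*t21 - r12*s22*t21 - r21*s12*t22 + r12*s21*t22
d = -r22*s12*t11 + r12*s22*t11 + r22*s11*t12 - r11*s22*t12 - r12*s11*t22 + r11*s12*t22

# Symmetry amounts to a*T_12 + b*T_22 - c*T_11 - d*T_21 = 0 in each of r, s, t.
for (u11, u12, u21, u22) in [(r11, r12, r21, r22),
                             (s11, s12, s21, s22),
                             (t11, t12, t21, t22)]:
    assert expand(a*u12 + b*u22 - c*u11 - d*u21) == 0
```

The entries $a$, $b$, $c$, $d$ are the four $3 \times 3$ minors of the $4 \times 3$ matrix with rows $T_{11}$, $T_{12}$, $T_{21}$, $T_{22}$, so the identity is the standard Cramer relation among four vectors in a $3$-dimensional space.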
Note that this moduli interpretation for the orbits is very similar to
the one in Corollary \ref{cor:22formsCLP}. However, the curve $X$
obtained in that corollary (and the previous Theorem
\ref{thm:22curvesbij}) has discriminant of degree $12$. The curve $C$
here, from Theorem \ref{thm:22symHC}, has discriminant of degree $24$,
so they are clearly not the same --- but they are closely related. In
particular, the curve $C$ is the (generalized) Hessian of the curve
$X$! Here, we define the {\em Hessian} of a bidegree $(2,2)$ form
$f(w_1,w_2,x_1,x_2)$ in $\Sym^2 V_1 \otimes \Sym^2 V_2$ (and by abuse of
terminology, the Hessian of the corresponding curve) as the curve cut
out by the determinant of the matrix
$$\left(\frac{\partial^2 f}{\partial w_i \partial x_j}\right)_{1 \leq i,j \leq 2}$$
which is also a bidegree $(2,2)$ form on $\Sym^2 V_1 \otimes \Sym^2 V_2$.
It is a small computation to check that the genus one curve $C$ is the
Hessian of $X$.
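The structural part of this definition is easy to confirm (our sketch, with generic coefficients of our own naming): the Hessian determinant of a bidegree $(2,2)$ form is again a bidegree $(2,2)$ form.

```python
# The 2x2 determinant of mixed second partials of a generic bidegree (2,2)
# form is again bidegree (2,2): each entry is bidegree (1,1), so the
# determinant is bidegree (2,2).
from sympy import symbols, Matrix, expand, Poly, diff

w1, w2, x1, x2 = symbols("w1 w2 x1 x2")
cs = symbols("c0:9")  # 9 generic coefficients (our labels)
monW = [w1**2, w1*w2, w2**2]
monX = [x1**2, x1*x2, x2**2]
f = sum(c*mw*mx for c, (mw, mx) in
        zip(cs, [(mw, mx) for mw in monW for mx in monX]))

H = Matrix(2, 2, lambda i, j: diff(f, [w1, w2][i], [x1, x2][j])).det()

# Every monomial of H has degree 2 in (w1, w2) and degree 2 in (x1, x2).
pw = Poly(expand(H), w1, w2)
px = Poly(expand(H), x1, x2)
assert all(m[0] + m[1] == 2 for m in pw.monoms())
assert all(m[0] + m[1] == 2 for m in px.monoms())
```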
The $\mathrm{SL}(V_1) \times \mathrm{SL}(V_2)$-invariant ring of $\Sym_2V_1 \otimes
\Sym_2V_2$ is generated by three invariants $\delta_2$, $\delta_3$,
$\delta_4$ of degrees $2$, $3$, and $4$, respectively, as we know from
\S \ref{sec:bideg22invtheory}. We may apply our understanding of
the invariants in the doubly symmetric hypercube case; in particular,
we find that
\begin{align*}
\delta_2 &= - \frac{8}{3} a_2, &
\delta_3^2 &= \frac{4}{27}a_6', &
\delta_4 &= \frac{64}{9} a_2^2 - 16 a_4',
\end{align*}
where $a_2$, $a_4'$, and $a_6'$ are the polynomials from \S \ref{sec:2symHC}. We can substitute these formulas into
\eqref{eq:Jacfor2symHC} to obtain a formula for the Jacobian of the
curve $C$ arising from an element $f$ of $\Sym_2V_1 \otimes
\Sym_2V_2$ via Theorem \ref{thm:22symHC}. Thus, there are
rational generators $b_2$, $b_3$, $b_4$ for the invariants such that
$\Jac(C)$ may be written in generalized Weierstrass form as
\begin{equation} \label{eq:22symHCJac}
y^2 + b_2 x y + 6 b_3^2 y = x^3 + b_4 x^2,
\end{equation}
and the point $P$ is at $(x,y) = (0,0)$. The discriminant of the
Jacobian of $C$ factors as a rational multiple of
$$\delta_3^4 (-64 a_2^2 a_4'^2 + 192 a_4'^3 + 384 a_2^3 \delta_3^2 - 1296 a_2 a_4' \delta_3^2 + 2187 \delta_3^4),$$
where the second factor (up to a scalar) is the discriminant of the Jacobian of the
genus one curve $X$ cut out by $f$ directly.
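This factorization can be confirmed symbolically (our sketch): substituting $a_6' = \frac{27}{4}\delta_3^2$ into the discriminant \eqref{discstar} recovers the displayed expression up to a rational constant.

```python
# Substitute a6' = (27/4)*delta3^2 into the doubly symmetric discriminant and
# compare with the displayed factorization.
from sympy import symbols, cancel, Rational

a2, a4p, d3 = symbols("a2 a4p delta3")

a6p = Rational(27, 4)*d3**2  # from delta3^2 = (4/27)*a6'
Delta = -16*a6p**2*(-36*a2**2*a4p**2 + 108*a4p**3
                    + 32*a2**3*a6p - 108*a2*a4p*a6p + 27*a6p**2)

displayed = d3**4*(-64*a2**2*a4p**2 + 192*a4p**3 + 384*a2**3*d3**2
                   - 1296*a2*a4p*d3**2 + 2187*d3**4)

ratio = cancel(Delta/displayed)
assert ratio.free_symbols == set()        # the ratio is a constant
assert ratio == Rational(-6561, 16)       # and in fact equals -6561/16
```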
A priori, it may not be obvious that any elliptic curve over $K$ with a nonzero
non-$2$-torsion point can be expressed in the form (\ref{eq:22symHCJac}). To see this, recall that any such curve can be expressed in the form
$$y^2 + a x y + b y = x^3 + c x^2$$
where the discriminant is nonzero (and thus $b \neq 0$). We then note that the latter elliptic
curve is actually isomorphic to one of the form \eqref{eq:22symHCJac} by taking
$b_2 = 6a/b$, $b_3 = 6/b$, and $b_4 = 36c/b^2$.
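The rescaling can be verified directly (our sketch): substituting $(x,y) \mapsto (u^2 x, u^3 y)$ with $u = b/6$ and clearing $u^6$ produces the coefficients $6a/b$, $216/b^2 = 6(6/b)^2$, and $36c/b^2$.

```python
# Rescale y^2 + a*x*y + b*y = x^3 + c*x^2 by (x, y) -> (u^2*x, u^3*y), u = b/6.
from sympy import symbols, expand, simplify

x, y, a, b, c = symbols("x y a b c")
u = b/6

lhs = (u**3*y)**2 + a*(u**2*x)*(u**3*y) + b*(u**3*y)
rhs = (u**2*x)**3 + c*(u**2*x)**2
new_curve = expand((lhs - rhs)/u**6)

# The y-coefficient 216/b^2 is 6*(6/b)^2, as required by the normal form.
target = y**2 + (6*a/b)*x*y + 6*(6/b)**2*y - x**3 - (36*c/b**2)*x**2
assert simplify(new_curve - target) == 0
```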
\subsubsection{Quadruply symmetric hypercubes, or binary quartic forms again} \label{sec:4symHC}
The last symmetrization that we study is the case of fully symmetric hypercubes, i.e., elements of $\Sym_4 (V) \subset V^{\otimes 4}$ for a $2$-dimensional
$K$-vector space $V$. Since our field $K$ is not of characteristic $2$ or $3$, this space is isomorphic to the quotient space $\Sym^4 V$ of binary quartic forms. Here, we take the standard $\mathrm{GL}(V)$-action on $\Sym_4(V)$, which is slightly different from the action considered on binary quartic forms in \S \ref{sec:binaryquartics}. The following is a basis-free version of Theorem \ref{sympar}:
\begin{thm} \label{thm:4symHC}
For a $2$-dimensional $K$-vector space $V$, nondegenerate $\mathrm{GL}(V)$-orbits of $\Sym_4 (V)$ are in bijection with isomorphism classes of triples $(C,L,P)$, where $C$ is a genus one curve over~$K$, $L$ is a degree $2$ line bundle on $C$, and $P$ is a nonzero $3$-torsion point of $\Jac(C)(K)$.
\end{thm}
This moduli problem is identical to that of triply symmetric hypercubes!
\begin{proof}
Given an element of $\Sym_4(V) \subset V \otimes \Sym_3V$, we may apply Theorem \ref{thm:3symHCL} to obtain a genus one curve $C$ with a degree $2$ line bundle $L$ and a nonzero $3$-torsion point $P$ of $\Jac(C)(K)$.
Conversely, given such a triple $(C,L,P)$, we may use Theorem \ref{thm:3symHCL} to construct a hypercube $\AA$ in $U_1 \otimes U_2 \otimes U_3 \otimes U_4$, where $U_1 = H^0(C,L)$ and $U_2 = H^0(C, L \otimes P)$ has a natural identification with $U_3$ and $U_4$ such that $\AA$ is invariant under permutations of $U_2$, $U_3$, and $U_4$. An almost identical argument to the one in Theorem \ref{thm:22symHC} shows that we may in fact identify $U_1$ with $U_2$ and that $\AA$ is invariant under switching $U_1$ and $U_2$. In other words, the hypercube $\AA$ coming from $(C,L,P)$ is in the orbit of a quadruply symmetric hypercube.
\end{proof}
A simple computational proof, just as for Theorem \ref{thm:22symHC}, is also possible; we only need to specify an element of $\mathrm{GL}(V)$ that acts on
a given nondegenerate element of $V \otimes \Sym_3 (V)$ (via the first factor only) to produce an element of $\Sym_4(V)$. With a choice of basis $\{v_1, v_2\}$ for $V$, we may represent an element of $V \otimes \Sym_3V$ as
\begin{equation} \label{eq:3symHCex}
\sum_{i=1}^2 v_i \otimes (r_i v_1^3 + s_i v_1^2 v_2 + t_i v_1 v_2^2 + u_i v_2^3).
\end{equation}
Then applying the linear transformation $(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}) \in \mathrm{GL}(V)$,
with
\begin{align*}
a &= -s_2^2 t_1 + s_1 s_2 t_2 + r_2 t_1 t_2 - r_1 t_2^2 - r_2 s_1 u_2 + r_1 s_2 u_2 \\
b &= s_1 s_2 t_1 - r_2 t_1^2 - s_1^2 t_2 + r_1 t_1 t_2 + r_2 s_1 u_1 - r_1 s_2 u_1 \\
c &= s_2 t_1 t_2 - s_1 t_2^2 - s_2^2 u_1 + r_2 t_2 u_1 + s_1 s_2 u_2 - r_2 t_1 u_2 \\
d &= - s_2 t_1^2 + s_1 t_1 t_2 + s_1 s_2 u_1 - r_1 t_2 u_1 - s_1^2 u_2 + r_1 t_1 u_2,
\end{align*}
to the first tensor factor $V$, gives a quadruply symmetric hypercube. The determinant of this transformation is just the degree $6$ invariant $a_6$
for triply symmetric hypercubes (from \S \ref{sec:3symHC}), and since it is a factor of the discriminant, it is nonzero for nondegenerate
hypercubes.
Just as for the triply symmetric Rubik's cubes and the doubly doubly symmetric hypercubes, this parametrization of binary quartic forms is related to the other ``dual'' parametrization for the same space, namely the one described in \S \ref{sec:binaryquartics}. In Theorem~\ref{thm:bqorbit}, the genus one curve $X$ arising from a binary quartic form has discriminant of degree $6$. Here, in Theorem~\ref{thm:4symHC}, we produce a genus one curve $C$ whose discriminant has degree $24$. But these two curves $X$ and $C$ are again related by the Hessian construction, i.e., $C$ is the Hessian of $X$! More precisely, let $q(w_1,w_2)$ be a binary quartic form in $\Sym^4 V$. Then the {\em Hessian} of $q$ (or of the genus one curve $X$ given as $y^2 = q(w_1,w_2)$) is the binary quartic
$$H(q)(w_1,w_2) := \disc \left( \frac{\partial^3 q}{\partial w_i \partial w_j \partial w_k} \right)_{1 \leq i, j, k \leq 2},$$
i.e., the discriminant of the three-dimensional matrix of triple derivatives. (Recall from \S \ref{sec:cubes} that the discriminant is the unique polynomial invariant of a $2 \times 2 \times 2$ cube.) By abuse of terminology, we also say that the genus one curve $C$ associated to $H(q)$ is the Hessian of $q$ and of $X$. This curve $C$ is the one obtained from Theorem \ref{thm:4symHC}. In particular, we obtain a proof of the following:
\begin{cor}
Given a binary quartic form $q$ of $\Sym^4 V$, where $V$ is a $2$-dimensional $K$-vector space, let $H(q)$ denote the Hessian binary quartic form. Then the Jacobian of the genus one curve given by the equation $y^2 = H(q)$ has a nonzero $3$-torsion point defined over $K$.
\end{cor}
Recall from \S \ref{sec:binaryquartics} that the $\mathrm{SL}(V)$-invariants of $\Sym^4 V$ are generated by two invariants $I$ and $J$ of degrees $2$ and $3$, respectively. These invariants appear in the coefficients for the Jacobian of the curve $C$ arising from an element $\AA \in \Sym_4 (V)$ via Theorem \ref{thm:4symHC}. In particular, because the space $\Sym_4 (V)$ is contained in both $V \otimes \Sym_3 V$ and $\Sym_2 V \otimes \Sym_2 V$, we may use the geometric interpretations of the invariants of triply symmetric hypercubes and of doubly doubly symmetric hypercubes to easily understand the invariants in this case! The Jacobian of $C$ is isomorphic to
\begin{equation} \label{eq:4symHCJac}
E: y^2 + 2 a_2 x y + 216 a_3^2 y = x^3,
\end{equation}
where $a_2$ is defined as for hypercubes and $a_3$ is the primitive integral degree $3$ invariant generator. In terms of the invariants $I$ and $J$, because $\AA$ viewed as a binary quartic form has coefficients with factors of $4$ and $6$, the polynomials $I(\AA)$ and $J(\AA)$ are not primitive: $I(\AA) = -4 a_2$ and $J(\AA) = -432 a_3$. The discriminant of the Jacobian \eqref{eq:4symHCJac} factors as a rational multiple of
$$a_3^6 (a_2^3 + 729 a_3^2),$$
and the latter factor is just the discriminant of the Jacobian of $X$ (or of $\AA$ as a binary quartic form).
Finally, just as before, we observe that any elliptic curve over $K$ with a nonzero $3$-torsion point defined over $K$ can be expressed in the form \eqref{eq:4symHCJac}. Namely, if we have an elliptic curve of the form
$$y^2 + a x y + b y = x^3$$
with nonzero discriminant (implying $b \neq 0$),
then setting $a_2 = 108 a / b$ and $a_3 = 216 / b$ gives an isomorphic elliptic curve of the form (\ref{eq:4symHCJac}).
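Again this substitution can be checked symbolically (our sketch): with $u = b/216$, rescaling $(x,y) \mapsto (u^2 x, u^3 y)$ and clearing $u^6$ yields the coefficients $2(108a/b)$ and $216(216/b)^2$, matching $a_2 = 108a/b$ and $a_3 = 216/b$.

```python
# Rescale y^2 + a*x*y + b*y = x^3 by (x, y) -> (u^2*x, u^3*y), u = b/216.
from sympy import symbols, expand, simplify

x, y, a, b = symbols("x y a b")
u = b/216

new_curve = expand(((u**3*y)**2 + a*(u**2*x)*(u**3*y)
                    + b*(u**3*y) - (u**2*x)**3)/u**6)

a2, a3 = 108*a/b, 216/b
target = y**2 + 2*a2*x*y + 216*a3**2*y - x**3
assert simplify(new_curve - target) == 0
```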
\subsection{Hermitian cubes}\label{subsec:Hcubes}
In this section, we discuss spaces $\mathscr{C}$, sometimes called ``Freudenthal algebras,''
or ``Freudenthal triple systems,'' that have a quartic norm form, originally introduced in \cite{freudenthal-magic}. These vector spaces are related to the space of $2 \times 2 \times 2$ cubes---that is, the tensor space $V_1 \otimes V_2 \otimes V_3$
where $V_i$ is a two-dimensional vector space over $K$---in the same way that the spaces of Hermitian matrices
over composition algebras are related to the usual matrix algebras. Our goal here, then, is to
``triply Hermitianize'' the space of $2 \times 2 \times 2$ cubes with respect to a cubic algebra.
The natural cubic algebras to use are cubic Jordan algebras $J$. In fact, this process will work for any
cubic Jordan algebra with a nondegenerate trace form (see \S \ref{sec:springer} for general constructions).
Some of the properties and formulas we use below for
Hermitian cubes are explained in more detail in \cite{krutelevich} and also rely on ideas from \cite{faulkner, clerc}.
Analogously to \S \ref{sec:cubicjordan}, we will describe spaces $\mathscr{C}$ of Hermitian $2\times 2\times 2$ cubical matrices and their properties, including
a norm form, a trace form, an adjoint map, and a notion of rank. We will then be interested in the rank one
loci of such spaces and their moduli descriptions.
We will use such moduli
interpretations to obtain vector bundles on varieties mapping to these rank one loci.
This will allow us in \S \ref{sec:deg2moduli} to uniformly study representations of the form $V\otimes \mathscr{C}$, where $V$ is a $K$-vector space of dimension~$2$, in terms of genus one curves.
\subsubsection{Definitions and invariants}
\begin{defn}
A {\em Hermitian cube space} ${\mathscr{C}(J)}$ over a cubic Jordan algebra $J$ is a vector space of elements of the form
\begin{equation*}
\xymatrix@!0{
& b' \ar@{-}[rr]\ar@{-}'[d][dd] & & c'' \ar@{-}[dd] \\
a \ar@{-}[ur]\ar@{-}[rr] \ar@{-}[dd] & & b \ar@{-}[ur]\ar@{-}[dd] \\
& c \ar@{-}'[r][rr] & & d \\
b'' \ar@{-}[rr]\ar@{-}[ur] & & c' \ar@{-}[ur]
} \end{equation*}
where $a, d \in K$ and $(b,b',b'')$ and $(c,c',c'')$ are conjugates in $J$. Addition and multiplication
by elements of $K$ occur componentwise. We will
abbreviate elements, or {\em Hermitian cubes}, in ${\mathscr{C}(J)}$ as $(a,b,c,d)$.
\end{defn}
\begin{example} \ \begin{enumerate}[(i)]
\item
If $J$ is the field $K$ itself with cubic norm form $\Norm(x) = x^3$ for $x \in K$, the conjugates of
any $x \in K$ are both just $x$, so the space ${\mathscr{C}(J)}$ is isomorphic to $\Sym^3K^2$.
\item
If $J$ is the split algebra $K \times K \times K$ with norm form $\Norm((x_1,x_2,x_3)) = x_1 x_2 x_3$, then
${\mathscr{C}(J)}$ is isomorphic to $K^2 \otimes K^2 \otimes K^2$, or the space of $2 \times 2 \times 2$ cubes.
\end{enumerate} \end{example}
In general, if $J$ has dimension $d$ over $K$, then ${\mathscr{C}(J)}$ is a $K$-vector space of dimension $2d + 2$.
Although there is a very weak algebra structure on ${\mathscr{C}(J)}$, we will not use it in the sequel. We will only
use the structure of ${\mathscr{C}(J)}$ as a representation of a certain reductive group $G_{\mathscr{C}(J)}$, which will be
a prehomogeneous vector space with a relative invariant of degree $4$. This invariant will be
the norm form for Hermitian cubes.
Recall that a cubic Jordan algebra $J$ comes equipped with a trace form $\Tr$ and a cubic norm form $\Norm$,
as well as their (bi)linearizations (see equation \eqref{eq:jordanalgforms} in \S \ref{sec:springer}).
We restrict our attention to cubic Jordan algebras for which the trace form $\Tr$ is nondegenerate,
in which case we obtain a natural adjoint map $\sharp$, as in equation \eqref{eq:sharpmap}.
We will use these properties of $J$ to construct a trace form $\Tr_{\mathscr{C}(J)}$ and quartic norm
form $\Norm_{\mathscr{C}(J)}$ for ${\mathscr{C}(J)}$, given the basepoint $\varepsilon := (1,0,0,1)$.
For a $3 \times 3$ Hermitian matrix over a composition algebra $A$, the cubic norm form is a generalization
of the determinant of the matrix; here, we generalize the quartic discriminant of a $2 \times 2 \times 2$ cube,
which is the generator of the ring of $\mathrm{SL}_2^3$-invariants for $2 \times 2 \times 2$ cubes.
\begin{defn}
The {\em discriminant} of a Hermitian cube $A = (a,b,c,d)$ is given by
\begin{equation} \label{eq:discHermcube}
\disc(A) = (ad - \Tr(b,c))^2 - 4\Tr(b^\sharp,c^\sharp) + 4a\Norm(c) + 4d\Norm(b).
\end{equation}
The {\em norm form} $\Norm_{\mathscr{C}(J)}$ is the complete linearization
of the discriminant form, so it is a symmetric quadrilinear map
$$\Norm_{\mathscr{C}(J)}: {\mathscr{C}(J)} \times {\mathscr{C}(J)} \times {\mathscr{C}(J)} \times {\mathscr{C}(J)} \longrightarrow K$$
with $\Norm_{\mathscr{C}(J)}(A,A,A,A) = \disc(A)$. The linear trace form $\Tr_{\mathscr{C}(J)}$ is defined as
$$\Tr_{\mathscr{C}(J)}(A) := \Norm_{\mathscr{C}(J)}(A,\varepsilon,\varepsilon,\varepsilon)$$
and there is an alternating bilinear trace form
$$ \langle A,A' \rangle = ad' - \Tr(b,c') + \Tr(b',c) - a'd.$$
\end{defn}
Since the trace form $\Tr$ for the Jordan algebra $J$ is nondegenerate,
the bilinear trace form $\langle \cdot,\cdot \rangle$ is also nondegenerate,
and hence the space ${\mathscr{C}(J)}$ is self-dual. There is a natural cubic ``adjoint'' map $\flat$ associated to ${\mathscr{C}(J)}$:
\begin{equation} \label{eq:flatmap} \xymatrix@R=0pt@C=18pt{
\flat:& {\mathscr{C}(J)} \ar[r]& {\mathscr{C}(J)}^\vee \ar[r]^{\cong} & {\mathscr{C}(J)} \\
&A \ar@{|->}[r] & \Norm_{\mathscr{C}(J)}(A,A,A,\cdot) \ar@{|->}[r] & A^\flat.
}\end{equation}
This map is the analogue of the adjoint map $\sharp$ for cubic Jordan algebras
(defined in equation \eqref{eq:sharpmap}) and satisfies the properties
\begin{eqnarray} \label{eq:adjadj}
\Norm_{\mathscr{C}(J)}(A,A,A,B) = \langle A^\flat, B \rangle & \mathrm{and} & (A^\flat)^\flat = - \disc(A)^2 A.
\end{eqnarray}
Explicitly, for a Hermitian cube $A = (a,b,c,d)$, the adjoint $A^\flat = (a^\flat, b^\flat, c^\flat, d^\flat)$ is given by the formulas
\begin{align} a^\flat &= a^2 d - a \Tr(b,c) + 2 \Norm(b) \nonumber \\
b^\flat &= 2 c \times b^\sharp - 2 a c^\sharp + (a d - \Tr(b,c)) b \\
c^\flat &= -2 b \times c^\sharp + 2 d b^\sharp - (a d - \Tr(b,c))c \nonumber \\
d^\flat &= -a d^2 + d \Tr(b,c) - 2 \Norm(c) \nonumber
\end{align}
where $\times$ denotes the bilinearization of $\sharp$ defined in \eqref{eq:bilinearsharp}.
As in the cubic case, by an abuse of notation, the map $\flat$ will sometimes
just refer to the first map in \eqref{eq:flatmap} from ${\mathscr{C}(J)}$ to ${\mathscr{C}(J)}^\vee$.
\begin{example} \ \begin{enumerate}[(i)]
\item
If $J = K$, the discriminant of $A$ is the usual quartic discriminant
of a binary cubic form in $\Sym^3K^2$. That is, if $A = (a,b,c,d)$
represents the binary cubic $a X^3 + 3 b X^2 Y + 3 c X Y^2 + d Y^3$, then
$$\disc(A) = a^2 d^2 - 3 b^2 c^2 - 6 a b c d + 4 a c^3 + 4 b^3 d$$
and the adjoint is the covariant given by the determinant of the Jacobian
matrix of the cubic and its Hessian, namely
$$A^\flat = (2 b^3 - 3 a b c + a^2 d,b^2 c - 2 a c^2 + a b d,
- b c^2 + 2 b^2 d - a c d, - 2 c^3 + 3 b c d - a d^2).$$
\item
If $J = K \times K \times K$, then the discriminant of
the Hermitian cube $A = (a,(b_1,b_2,b_3),(c_1,c_2,c_3),d)$ is the usual quartic discriminant
of a $2 \times 2 \times 2$ cube (see \cite[\S 2.1]{hcl1}):
\begin{align*}
\disc(A) &= a^2 d^2 + b_1^2 c_1^2 + b_2^2 c_2^2 + b_3^2 c_3^2 + 4(ac_1c_2c_3 + b_1 b_2 b_3 d) \\
&\quad - 2(ab_1c_1d + ab_2c_2d + ab_3c_3d + b_1 b_2 c_1 c_2 + b_1 b_3 c_1 c_3 + b_2 b_3 c_2 c_3).
\end{align*}
The adjoint $A^\flat := (a^\flat,(b_1^\flat,b_2^\flat,b_3^\flat),(c_1^\flat,c_2^\flat,c_3^\flat),d^\flat)$
is easy to compute, with coordinates
\begin{align*}
a^\flat &= 2 b_1 b_2 b_3 - a (b_1 c_1 + b_2 c_2 + b_3 c_3) + a^2 d \\
d^\flat &= -2 c_1 c_2 c_3 + d (b_1 c_1 + b_2 c_2 + b_3 c_3) - a d^2 \\
b_i^\flat &= (-b_i c_i + b_j c_j + b_k c_k) b_i - 2 a c_j c_k + a d b_i \\
c_i^\flat &= - (- b_i c_i + b_j c_j + b_k c_k) c_i + 2 d b_j b_k - a d c_i
\end{align*}
where $\{i,j,k\} = \{1,2,3\}$.
\end{enumerate} \end{example}
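Both displayed formulas in the case $J = K$ can be double-checked symbolically (our sketch): $\disc(A)$ is $-\tfrac{1}{27}$ times the classical discriminant of the binary cubic, and the displayed adjoint satisfies the biduality relation $(A^\flat)^\flat = -\disc(A)^2 A$ from \eqref{eq:adjadj}.

```python
# Checks for the J = K example: disc(A) versus the classical cubic
# discriminant, and the biduality of the adjoint map.
from sympy import symbols, expand, discriminant

a, b, c, d, X = symbols("a b c d X")

def disc(A):
    a, b, c, d = A
    return a**2*d**2 - 3*b**2*c**2 - 6*a*b*c*d + 4*a*c**3 + 4*b**3*d

def flat(A):
    a, b, c, d = A
    return (2*b**3 - 3*a*b*c + a**2*d,
            b**2*c - 2*a*c**2 + a*b*d,
            -b*c**2 + 2*b**2*d - a*c*d,
            -2*c**3 + 3*b*c*d - a*d**2)

A = (a, b, c, d)

# disc(A) is -1/27 times the classical discriminant of a*X^3 + 3b*X^2 + 3c*X + d.
assert expand(disc(A) + discriminant(a*X**3 + 3*b*X**2 + 3*c*X + d, X)/27) == 0

# (A^flat)^flat = -disc(A)^2 * A, componentwise.
B = flat(flat(A))
assert all(expand(B[i] + disc(A)**2*A[i]) == 0 for i in range(4))
```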
\subsubsection{Rank and linear transformations}
We may also define an analogue of rank for Hermitian cube spaces that agrees with
the natural notion of rank for a tensor space in the simplest cases.
\begin{defn} \label{defn:rankHermitiancubes}
A nonzero Hermitian cube $A \in {\mathscr{C}(J)}$ has {\em rank one}
if it is a scalar multiple of a Hermitian cube of the form
\begin{equation} \label{eq:rankonecubes}
(a,b,c,d) = (\Norm(\alpha),\alpha^\sharp \bullet \beta, \beta^\sharp \bullet \alpha, \Norm(\beta))
\end{equation}
for any $\alpha, \beta \in J$.
A Hermitian cube $A \in {\mathscr{C}(J)}$ has {\em rank $\leq 2$} if its adjoint $A^\flat$ is $0$.
It has {\em rank $\leq 3$} if its discriminant $\disc(A)$ is $0$. Finally, it has {\em rank four} if its discriminant is nonzero.
\end{defn}
Note that the condition on scalar multiples for rank one cubes is necessary: if $k \in K$
is not in the image of the map $\Norm$, then $(k,0,0,0)$ is not of the form in
\eqref{eq:rankonecubes}, but intuitively we would still like this cube
to have rank one. With this definition, the rank one (and zero) cubes are,
up to $K$-scaling, given by elements of $J^2$. More precisely, there
is a map
\begin{align*}
J \oplus J &\longrightarrow {\mathscr{C}(J)} \\
(\alpha,\beta) &\longmapsto (\Norm(\alpha),\alpha^\sharp \bullet \beta, \beta^\sharp \bullet \alpha, \Norm(\beta))
\end{align*}
that descends to a rational map
\begin{equation} \label{eq:PJtoPCJ}
\tau: \mathbb{P}(J \oplus J) \dashrightarrow \mathbb{P}({\mathscr{C}(J)})
\end{equation}
where $\mathbb{P}(J \oplus J)$ and $\mathbb{P}({\mathscr{C}(J)})$ denote $K$-lines in the
$K$-vector spaces $J \oplus J$ and ${\mathscr{C}(J)}$, respectively. Let $X_{\mathscr{C}(J)}$
denote the image of the map $\tau$. Intuitively, the variety $X_{\mathscr{C}(J)}$
is like the projective line over $J$, with $\tau$ being analogous to
the embedding of the twisted cubic. (This comparison is not entirely
accurate, of course, since scaling $(\alpha,\beta)$ by elements of $J$
does not usually fix its image under $\tau$.)
Let $\mathrm{SL}_2(J)$ denote the group of discriminant-preserving $K$-linear
transformations of the space ${\mathscr{C}(J)}$.\footnote{This group is denoted by $\mathrm{Inv}({\mathscr{C}(J)})$ in \cite{krutelevich}.}
Some examples include
\begin{center}
\begin{tabular}{c|c|c} \label{table:SL2J}
$J$ & ${\mathscr{C}(J)}$ & $\mathrm{SL}_2(J)$ \\
\hline
$K$ & $\Sym^3(2)$ & $\mathrm{SL}_2(K)$ \\
$K^{3}$ & $2 \otimes 2 \otimes 2$ & $\mathrm{SL}_2(K)^3$ \\
$\mathscr{H}_3(K)$ & $\wedge^3_0(6)$ & $\mathrm{Sp}_6(K)$ \\
$\mathscr{H}_3(K \times K)$ & $\wedge^3(6)$ & $\mathrm{SL}_6(K)$ \\
$\mathscr{H}_3(\mathscr{Q})$ & $S^+(32)$ & $\Spin_{12}$ \\
$\mathscr{H}_3(\mathscr{O})$ & $56$ & $E_7$
\end{tabular}
\end{center}
where the last two may also range over different quaternion and octonion algebras
over $K$ (and thus are associated with different forms of the corresponding groups).
Thus, the space ${\mathscr{C}(J)}$ may be thought of as a representation of
$\mathrm{SL}_2(J)$. In fact, the variety $X_{\mathscr{C}(J)}$ is the projectivization
of the orbit of the highest weight vector of the representation of $\mathrm{SL}_2(J)$ on ${\mathscr{C}(J)}$,
and is isomorphic to the homogeneous space given by this representation! That is, the variety
$X_{\mathscr{C}(J)}$ is isomorphic to $\mathrm{SL}_2(J) / P_{\mathscr{C}(J)}$, where $P_{\mathscr{C}(J)}$ is the
parabolic subgroup associated to the representation ${\mathscr{C}(J)}$, and it has
a moduli interpretation as a flag variety.
In fact, the rank of an element of ${\mathscr{C}(J)}$ is preserved under the action of $\mathrm{SL}_2(J)$, and over an
algebraically closed field, the group $\mathrm{SL}_2(J)$ acts transitively on the set of rank $r$ elements \cite[Lemma 21 \& Thm 2]{krutelevich}.
In other words, the space ${\mathscr{C}(J)}$ is stratified by rank into orbits of $\mathrm{SL}_2(J)$ when $K$ is algebraically closed.
Thus, many computations involving elements of ${\mathscr{C}(J)}$ may simply be checked on representatives of
the appropriate rank; see, e.g., \cite[eqs.~(58)-(61)]{krutelevich} for some simple choices of representatives.%
\footnote{For example, the definition of ``rank one'' cubes in \cite{krutelevich} differs from Definition \ref{defn:rankHermitiancubes},
but it is easy to check that they are equivalent by this method.}
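To make the first row of the table above concrete: for $J = K$, the space ${\mathscr{C}(J)} = \Sym^3(2)$ consists of binary cubic forms, the discriminant is the classical cubic discriminant, and $\mathrm{SL}_2(K)$ acts by linear substitution. A minimal numerical sketch of this invariance (the function and variable names are our own choices):

```python
# Discriminant of the binary cubic a*x^3 + b*x^2*y + c*x*y^2 + d*y^3
# (the quartic invariant in the first row of the table).
def disc3(a, b, c, d):
    return b*b*c*c - 4*a*c**3 - 4*b**3*d - 27*a*a*d*d + 18*a*b*c*d

def mul_linear(poly, u, v):
    # Multiply a homogeneous form in (x, y), stored as a coefficient list
    # [x^n, x^(n-1)*y, ..., y^n], by the linear form u*x + v*y.
    out = [0] * (len(poly) + 1)
    for k, ck in enumerate(poly):
        out[k] += ck * u
        out[k + 1] += ck * v
    return out

def substitute(coeffs, p, q, r, s):
    # Substitute (x, y) -> (p*x + q*y, r*x + s*y) into the cubic.
    a, b, c, d = coeffs
    new = [0, 0, 0, 0]
    for coef, i in [(a, 3), (b, 2), (c, 1), (d, 0)]:
        poly = [1]
        for _ in range(i):
            poly = mul_linear(poly, p, q)
        for _ in range(3 - i):
            poly = mul_linear(poly, r, s)
        for k, ck in enumerate(poly):
            new[k] += coef * ck
    return tuple(new)

f = (1, 0, -1, 0)                # x^3 - x*y^2, discriminant 4
assert disc3(*f) == 4
g = substitute(f, 1, 1, 0, 1)    # act by the SL_2 element [[1, 1], [0, 1]]
assert disc3(*g) == disc3(*f)    # the discriminant is SL_2-invariant
```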
In the case where $J$ is $\mathscr{H}_3(\mathbb{C} \times \mathbb{C})$ and
$X_{\mathscr{C}(J)}$ is the Grassmannian $\Gr(3,6)$, these varieties and some
of the geometric constructions in the sequel are studied in Donagi's
work \cite{donagi-grassmannians}.
Furthermore, in each of the cases in the above table over an algebraically closed field,
the secant variety of $X_{\mathscr{C}(J)}$ is the entire space $\mathbb{P}({\mathscr{C}(J)})$, and the tangent variety of $X_{\mathscr{C}(J)}$ is the quartic hypersurface
$Y_{\mathscr{C}(J)}$ given by the vanishing of the discriminant \cite[Chap.~III]{zak-book}; that is,
$Y_{\mathscr{C}(J)}$ is made up of the rank $\leq 3$ elements of ${\mathscr{C}(J)}$ (up to $K$-scaling).
In fact, Zak shows (and it is easy to check on representatives of the appropriate ranks):
\begin{lemma} \label{lem:ptonsecant}
A general point of $\mathbb{P}({\mathscr{C}(J)}) \setminus Y_{\mathscr{C}(J)}$ lies on exactly one secant
line of $X_{\mathscr{C}(J)}$. Each point of $Y_{\mathscr{C}(J)} \setminus X_{\mathscr{C}(J)}$ lies on exactly one tangent
line of $X_{\mathscr{C}(J)}$.
\end{lemma}
Therefore, we find that the adjoint map $\flat$ induces a birational map
$$\beta_J: \mathbb{P}({\mathscr{C}(J)}) \dashrightarrow \mathbb{P}({\mathscr{C}(J)})$$
whose reduced base locus is the variety $X_{\mathscr{C}(J)}$. Under $\beta_J$, the quartic hypersurface $Y_{\mathscr{C}(J)}$
is blown down to $X_{\mathscr{C}(J)}$, as the adjoint of a rank $\leq 3$ element in ${\mathscr{C}(J)}$ has rank $\leq 1$. By \eqref{eq:adjadj}, applying $\beta_J$ twice is the identity away from $Y_{\mathscr{C}(J)}$.
Furthermore, the adjoint map $\beta_J$ preserves each secant line. This is easy to check by computation, e.g., by showing that
the adjoint of the sum of two rank one cubes is in their span. For example, the adjoint of the sum of $A = (1,0,0,0)$ and $B = (\Norm(\alpha),\alpha^\sharp \bullet \beta, \beta^\sharp \bullet \alpha, \Norm(\beta))$ is $\Norm(\beta) A - \Norm(\beta) B$.
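For instance, for $J = K$, the adjoint map on binary cubic forms is (up to normalization) the classical cubic covariant, and the displayed identity with $\alpha = 0$, $\beta = 1$ recovers the classical fact that the cubic covariant of $x^3 + y^3$ is proportional to $x^3 - y^3$, which indeed lies in the span of the two rank one cubes $x^3$ and $y^3$.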
\subsection{Triply Hermitian hypercubes} \label{sec:deg2moduli}
As in \S \ref{sec:deg3moduli}, we would like to study the orbits of a class of ``Hermitianized''
representations uniformly. Namely, for $V$ a two-dimensional $K$-vector space and $J$ a cubic
Jordan algebra, we study the representation of $\mathrm{GL}(V) \times \mathrm{SL}_2(J)$-orbits on
$V \otimes {\mathscr{C}(J)}$. We find that the orbits correspond to isomorphism classes of genus one curves with degree $2$ line bundles,
along with bundles related to $X_{\mathscr{C}(J)}$. We consider only nondegenerate elements of the tensor space, which will correspond
to smooth curves.
\begin{defn}
An element $\phi \in V \otimes {\mathscr{C}(J)}$ is called {\em nondegenerate} if
the induced composition map
$$\flat \circ \phi: V^\vee \to {\mathscr{C}(J)} \to {\mathscr{C}(J)}^\vee$$
is injective.
Note that nondegeneracy implies that the elements in the image of $\phi$ do not have rank one, and that the
discriminant is nonzero at all but four points (over $\overline{K}$, counted with multiplicity) in the image of $\phi$.
\end{defn}
The following theorem---a more precise version of Theorem \ref{triplehermpar}---states that the orbits of such nondegenerate elements of $V \otimes {\mathscr{C}(J)}$ are in correspondence with genus one curves with certain vector bundles:
\begin{thm} \label{thm:hermHC}
The nondegenerate $\mathrm{GL}_2(K) \times \mathrm{SL}_2(J)$-orbits of $V \otimes {\mathscr{C}(J)}$
are in bijection with isomorphism classes of nondegenerate quadruples
$(C,L,\mathcal{F}, \kappa)$, where $C$ is a genus one curve over~$K$; \,$L$ is a degree $2$ line bundle on $C$; \;$\mathcal{F} = (E_i)$
is the flag of vector bundles $E_i$ given by pulling back the
universal flag via the map $\kappa: C \to X_{\mathscr{C}(J)}$; \,and $\kappa^*
\mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1) \cong L^{\otimes 3}$.
\end{thm}
As in \S \ref{sec:deg3moduli}, we will discuss the nondegeneracy condition
on quadruples $(C,L,\mathcal{F},\kappa)$ in the proof. It is again an open
condition, so the statement of the theorem may be rephrased as giving
a bijection between orbits of $V \otimes {\mathscr{C}(J)}$ and the $K$-points of an open
substack of the moduli space of $(C,L,\mathcal{F},\kappa)$ (with the
isomorphism from $\kappa^*\mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1)$ to $L^{\otimes 3}$). Again,
in many cases, this nondegeneracy condition will be a simple condition
on the bundles.
The rest of this subsection contains the proof of Theorem \ref{thm:hermHC}.
First, from a nondegenerate element $\phi$ in $V \otimes {\mathscr{C}(J)}$, we explain how to
naturally construct the genus one curve and associated data described in Theorem
\ref{thm:hermHC}. The construction will be invariant under the group
action. Although many features of this construction are similar to
the one described in \S \ref{sec:deg3moduli}, here the curve will
not necessarily be immersed in $\mathbb{P}({\mathscr{C}(J)})$ but instead have a degree
$2$ map to a line in $\mathbb{P}({\mathscr{C}(J)})$. In addition, the adjoint map plays a
different role here and is not logically necessary for describing
the genus one curve and other geometric data from
the multilinear object. For the remainder of this subsection, we fix the cubic
Jordan algebra $J$ and set $X := X_{\mathscr{C}(J)}$ and $Y := Y_{\mathscr{C}(J)}$.
We will work with the Hilbert scheme $\mathcal{H}\mathrm{ilb}_2(X)$ of two points on $X$. Let
$\zeta: \mathcal{Z}^{\mathrm{univ}} \to \mathcal{H}\mathrm{ilb}_2(X)$ denote the universal degree $2$ subscheme over $\mathcal{H}\mathrm{ilb}_2(X)$, so there is also a natural map $\epsilon: \mathcal{Z}^{\mathrm{univ}} \to X$.
Let $\mathcal{L}^{\mathrm{univ}}$ denote the universal line over $\mathcal{H}\mathrm{ilb}_2(X)$ pulled back from
the universal line over the Hilbert scheme $\mathcal{H}\mathrm{ilb}_2(\mathbb{P}({\mathscr{C}(J)}))$ of two points in $\mathbb{P}({\mathscr{C}(J)})$.
That is, to a zero-dimensional degree $2$ subscheme in $X \subset \mathbb{P}({\mathscr{C}(J)})$,
we associate the unique line passing through it; a nonreduced subscheme gives a point and a tangent direction,
and thus also a unique line. We then have the diagram
\begin{equation} \label{eq:ZLuniv} \xymatrix{
\mathcal{Z}^{\mathrm{univ}} \ar[rd]_-{\zeta} \ar@{^{(}->}[r]^{\iota} & \mathcal{L}^{\mathrm{univ}} \ar[d]^-{\mathbb{P}^1 \textrm{-bundle}} \ar[r]^-{b} & \mathbb{P}({\mathscr{C}(J)}) \\
& \mathcal{H}\mathrm{ilb}_2(X)
} \end{equation}
where the map $b$ on the right comes from the construction of $\mathcal{L}^{\mathrm{univ}}$.
A straightforward computation (e.g., as found in \cite[\S 2.1]{vermeire}) gives
\begin{equation} \label{eq:Luniv}
\mathcal{L}^{\mathrm{univ}} \cong \mathbb{P}(\zeta_* \epsilon^* \mathcal{O}_X(1)),
\end{equation}
where $\mathcal{O}_X(1)$ denotes the pullback of $\mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1)$ to $X \subset \mathbb{P}({\mathscr{C}(J)})$.
We first give the construction of a genus one curve and the appropriate bundles from an orbit of $V \otimes {\mathscr{C}(J)}$.
\begin{construction} \label{cnstr:hermHCtoCurve}
Given $\phi \in V \otimes {\mathscr{C}(J)}$, we view $\phi$ as a linear map
in $\Hom(V^\vee,{\mathscr{C}(J)})$. Nondegeneracy implies that there is a map
$$\mathbb{P}(\phi): \mathbb{P}(V^\vee) \longrightarrow \mathbb{P}({\mathscr{C}(J)})$$
whose image does not intersect $X$.
Lemma \ref{lem:ptonsecant} implies that the general points of $\mathbb{P}(V^\vee)$ each lie
in exactly one secant line. The idea is that the curve will be made up
of the ``pivot'' points of these secant lines and thus be a double cover
of $\mathbb{P}(V^\vee)$.
More precisely, because the image of the map $\mathbb{P}(\phi): \mathbb{P}(V^\vee) \to \mathbb{P}({\mathscr{C}(J)})$ is not completely contained in $X$, the map $\mathbb{P}(\phi)$ lifts uniquely to $\mathcal{L}^\mathrm{univ}$ by the valuative criterion, as $b$ is birational. We thus have
\begin{equation} \label{eq:ZLunivP1} \xymatrix{
& & \mathbb{P}(V^\vee) \ar[d]^{\mathbb{P}(\phi)} \ar@{..>}[ld]_{\widetilde{\mathbb{P}(\phi)}} \\
\mathcal{Z}^{\mathrm{univ}} \ar[rd]_-{\zeta} \ar@{^{(}->}[r]^{\iota} & \mathcal{L}^{\mathrm{univ}} \ar[d]^-{p} \ar[r]^b & \mathbb{P}({\mathscr{C}(J)}) \\
& \mathcal{H}\mathrm{ilb}_2(X)
} \end{equation}
If $\widetilde{\mathbb{P}(\phi)}$ factors through a fiber of $p$, then the image of $\mathbb{P}(\phi)$
would itself be a secant line to $X$, which contradicts the nondegeneracy assumption.
Thus, the composite map $p \circ \widetilde{\mathbb{P}(\phi)}$ is a finite map; because two
lines in $\mathbb{P}({\mathscr{C}(J)})$ intersect in at most one point, this composite map is actually an
isomorphism onto its image in $\mathcal{H}\mathrm{ilb}_2(X)$.
Pulling back the bottom left triangle of \eqref{eq:ZLunivP1}
via $p \circ \widetilde{\mathbb{P}(\phi)}: \mathbb{P}(V^\vee) \to \mathcal{H}\mathrm{ilb}_2(X)$, we obtain the diagram
\begin{equation*} \xymatrix{
C \ar[rd]_{\eta} \ar@{^{(}->}[r]^{\iota} & \Sigma \ar[d]^{p} \\
& \mathbb{P}(V^\vee)
} \end{equation*}
where $\Sigma$ is a ruled surface and $C$ is a degree $2$ cover of $\mathbb{P}(V^\vee)$.
Recall that $\mathcal{H}\mathrm{ilb}_2(X)$ is the blowup of $\Sym^2(X)$ along the diagonal.
The $2$-to-$1$ map $C \to \mathbb{P}(V^\vee)$ ramifies exactly where $\mathbb{P}(V^\vee)$ meets the locus of nonreduced subschemes in $\mathcal{H}\mathrm{ilb}_2(X)$,
namely the pullback of the diagonal of $\Sym^2(X)$ via the natural birational map $\mathcal{H}\mathrm{ilb}_2(X) \dashrightarrow \Sym^2(X)$.
This ramification locus is the intersection of $\Im(\mathbb{P}(\phi)) \subset \mathbb{P}({\mathscr{C}(J)})$ and the
tangent variety $Y$. Since $Y$ is a quartic hypersurface in $\mathbb{P}({\mathscr{C}(J)})$, the ramification locus is a zero-dimensional subscheme
of degree $4$, and by the nondegeneracy assumption, the four points of ramification over $\overline{K}$ are distinct. By Riemann-Hurwitz, the curve $C$, possibly after normalization, has genus one.
This construction also produces an explicit equation for the curve $C$, e.g., for $v \in V^\vee$, it is the double cover of $\mathbb{P}(V^\vee)$ given by $y^2 = \disc(\phi(v)).$
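To illustrate in the hypercube case $J = K \times K \times K$: the discriminant of a $2 \times 2 \times 2$ cube is Cayley's hyperdeterminant, which may be computed as the discriminant of the binary quadratic form $\det(x M_1 + y M_2)$ in the two $2 \times 2$ slices $M_1, M_2$ of the cube. The sketch below (the two sample cubes are arbitrary choices of ours) evaluates the binary quartic $q(t) = \disc(\phi(t))$ for a pencil of cubes, so that $y^2 = q(t)$ is the double cover above:

```python
# Cayley's hyperdeterminant of a 2x2x2 cube, given as two 2x2 slices;
# it equals disc_{x,y} det(x*M1 + y*M2).
def hyperdet(M1, M2):
    (a, b), (c, d) = M1
    (e, f), (g, h) = M2
    A = a*d - b*c                 # det(x*M1 + y*M2) = A x^2 + B x y + C y^2
    B = a*h + d*e - b*g - c*f
    C = e*h - f*g
    return B*B - 4*A*C

# A pencil phi(t) = t*Cube1 + Cube2 of cubes (entries chosen arbitrarily);
# q(t) = disc(phi(t)) is a binary quartic, and y^2 = q(t) is the curve C.
Cube1 = (((1, 0), (0, 1)), ((0, 1), (1, 0)))
Cube2 = (((0, 1), (1, 0)), ((1, 0), (0, -1)))

def q(t):
    slices = []
    for S1, S2 in zip(Cube1, Cube2):
        slices.append(tuple(tuple(t*x + y for x, y in zip(r1, r2))
                            for r1, r2 in zip(S1, S2)))
    return hyperdet(*slices)

assert q(0) == -4 and q(1) == 4
# q has degree at most 4 in t: its fifth finite difference vanishes.
vals = [q(t) for t in range(6)]
for _ in range(5):
    vals = [b - a for a, b in zip(vals, vals[1:])]
assert vals == [0]
```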
The bundles on the curve $C$ may also be found immediately from the construction.
First, the pullback of $\mathcal{O}_{\mathbb{P}(V^\vee)}(1)$ via the degree $2$ map
$\eta$ is the desired degree $2$ line bundle $L$. The map
\begin{equation*}
\kappa: C \longrightarrow \mathcal{Z}^{\mathrm{univ}} \stackrel{\epsilon}{\longrightarrow} X \hookrightarrow \mathbb{P}({\mathscr{C}(J)})
\end{equation*}
and the moduli interpretation of $X$ as a flag variety in
$\mathbb{P}({\mathscr{C}(J)})$ together give a flag of bundles $\mathcal{F} = (E_i)$ on
the curve $C$.
Finally, we determine a relation between the line bundles $L$ and $\kappa^* \mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1)$ on $C$. It is easiest to
describe geometrically. Take a hyperplane $H$ in $\mathbb{P}({\mathscr{C}(J)})$
containing the image of $\mathbb{P}(\phi)$ but not its (cubic) image under the adjoint map $\beta_J$. Then for
any point of $C$ intersecting $H$, its conjugate (under the map $\eta$) also lies on $H$, since the
secant line containing these two conjugate points intersects $\Im{\mathbb{P}(\phi)}$ by the construction of $C$.
Thus, the line bundle $\kappa^* \mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1)$ is a power of $L$.
To show that this power is $3$, recall that the adjoint map $\beta_J$ preserves secant lines. In particular, for a general
point in $\mathbb{P}({\mathscr{C}(J)}) \setminus X$, the line spanned by itself and its image under $\beta_J$ is a secant line of $X$.
Now applying the adjoint map $\beta_J$ to $\Im{\mathbb{P}(\phi)}$ gives a cubic rational curve in $\mathbb{P}({\mathscr{C}(J)})$, i.e., a curve whose intersection with $H$ is degree $3$. Therefore, for a general choice of $H$, there are exactly three secant lines that contain points of $\Im{\mathbb{P}(\phi)}$ and are contained in $H$; these give rise to exactly three pairs of conjugate points on the curve $C$ contained in $H$.
As noted earlier, the data of the curve $C$, the line bundle $L$, the flag $\mathcal{F}$, and the map $\kappa$
are all clearly preserved (up to isomorphism) under the action of the group
$\mathrm{GL}(V) \times \mathrm{SL}_2(J)$, since each factor acts by linear
transformations on its respective projective space.
\end{construction}
We have described the map from $\mathrm{GL}(V) \times \mathrm{SL}_2(J)$-orbits of $V
\otimes {\mathscr{C}(J)}$ to isomorphism classes of quadruples
$(C,L,\mathcal{F},\kappa)$. We now describe the reverse map.
It will be clear that these two constructions are inverse to one another.
\begin{construction} \label{cnstr:curvetohermHC}
The general idea of the reverse map is as follows: starting with the
geometric data of $(C,L,\mathcal{F},\kappa)$, we would like to pick
out a linear $\mathbb{P}^1$ in $\mathbb{P}({\mathscr{C}(J)})$. For each degree $2$ divisor on
the curve in the linear series $\left| L \right|$, we find the
``average'' point of the images of its support in $X$, and all such points
together form the desired $\mathbb{P}^1$.
More precisely, given a quadruple $(C,L,\mathcal{F},\kappa)$ as in
the theorem, we have a natural degree $2$ map $\eta: C \to
\mathbb{P}(H^0(C,L)^\vee) \cong \mathbb{P}^1$. Using the hyperelliptic involution $\iota$ on $C$ given by $\eta$
and the map $\kappa: C \to X$, we obtain the commutative diagram
\begin{equation*} \xymatrix{
C \ar[r]^{(\kappa, \kappa \circ \iota)} \ar[d]_{\eta} & X \times X \ar[d]^{S_2-\textrm{quotient}} \\
\mathbb{P}(H^0(C,L)^\vee) \ar[r]_-{h'} & \Sym^2(X).
} \end{equation*}
The map $h'$ may be lifted to a map $h: \mathbb{P}(H^0(C,L)^\vee) \to \mathcal{H}\mathrm{ilb}_2(X)$,
since the image of $h'$ does not lie completely in the diagonal of $\Sym^2(X)$. We thus have
the commutative diagram
\begin{equation*} \xymatrix{
C \ar[r] \ar[d]_{\eta} & \mathcal{Z}^{\mathrm{univ}} \ar[d]_-{\zeta} \ar[r] & X \times X \ar[d]^{S_2-\textrm{quotient}} \\
\mathbb{P}(H^0(C,L)^\vee) \ar[r]_-{h} & \mathcal{H}\mathrm{ilb}_2(X) \ar[r] & \Sym^2(X).
} \end{equation*}
Recall from diagram \eqref{eq:ZLuniv} that $\mathcal{L}^{\mathrm{univ}}$ is a $\mathbb{P}^1$-bundle
over $\mathcal{H}\mathrm{ilb}_2(X)$. Define $\Sigma := h^* \mathcal{L}^{\mathrm{univ}}$,
so $p: \Sigma \to \mathbb{P}(H^0(C,L)^\vee)$ gives a ruled surface, specifically a $\mathbb{P}^1$-bundle
over $\mathbb{P}(H^0(C,L)^\vee)$.
In fact, using the relation between $L$ and $\kappa^*\mathcal{O}_X(1)$ and \eqref{eq:Luniv},
along with the projection formula, we see
that $$\Sigma = \mathbb{P}(\eta_* \mathcal{O}_C \otimes \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(3)).$$
We would like to pick out a section $s: \mathbb{P}(H^0(C,L)^\vee) \to \Sigma$
such that the image of the composite $b \circ h \circ s$ is linear in $\mathbb{P}({\mathscr{C}(J)})$.
First, we claim there is a unique section $s: \mathbb{P}(H^0(C,L)^\vee) \to \Sigma$ classifying
$\eta_* \mathcal{O}_C \otimes \mathcal{O}(3) \to \mathcal{O}(1)$.
That is, a cohomology computation shows that there is an exact sequence
$$0 \to \mathcal{O}(3) \to \eta_* \mathcal{O}_C \otimes \mathcal{O}(3) \to \mathcal{O}(1) \to 0$$
on $\mathbb{P}(H^0(C,L)^\vee)$, and because $\Hom(\mathcal{O}(3),\mathcal{O}(1)) = 0$,
any such map $\eta_* \mathcal{O}_C \otimes \mathcal{O}(3) \to \mathcal{O}(1)$ is this one, up to scalars.
Therefore, we obtain a map from $\mathbb{P}(H^0(C,L)^\vee)$ to $\mathcal{L}^{\mathrm{univ}}$,
and thus to $\mathbb{P}({\mathscr{C}(J)})$. The nondegeneracy condition on the geometric data that we require
is exactly that the image of this map does not intersect $X$; it is clear that it is satisfied
by the data in Construction \ref{cnstr:hermHCtoCurve} by assumption.
We now check that this map $\mathbb{P}(H^0(C,L)^\vee) \to \mathbb{P}({\mathscr{C}(J)})$ is linear,
i.e., the pullback of $\mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1)$ to $\mathbb{P}(H^0(C,L)^\vee)$
from $\mathbb{P}({\mathscr{C}(J)})$ (via $b \circ h \circ s$) is isomorphic to $\mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(1)$.
Since $\Sigma$ is a ruled surface,
there exist integers $a_1, a_2$ such that
\begin{equation} \label{eq:linearityHerm}
(b \circ h)^* \mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1) = \mathcal{O}_p(a_1) \otimes p^* \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(a_2).
\end{equation}
It is easy to see that $a_1 = 1$ because the fibers of $\Sigma$ map to lines in $\mathbb{P}({\mathscr{C}(J)})$.
To compute $a_2$, we pull back \eqref{eq:linearityHerm} to $C$ via the natural inclusion $j: C \to \Sigma$:
\begin{align} \label{eq:linearityHerm2}
j^* (b \circ h)^* \mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1)
&= j^* \mathcal{O}_p(1) \otimes \eta^* \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(a_2) \\
&= \eta^* \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(3) \otimes \eta^* \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(a_2). \nonumber
\end{align}
Since the left-hand side of \eqref{eq:linearityHerm2}
is just $\eta^* \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(3)$ by the
assumed relation, we must have $a_2 = 0$.
Thus, we obtain
$$s^* (b \circ h)^* \mathcal{O}_{\mathbb{P}({\mathscr{C}(J)})}(1) = s^* (\mathcal{O}_p(1)) = \mathcal{O}_{\mathbb{P}(H^0(C,L)^\vee)}(1),$$
as desired.
\end{construction}
\subsection{Specializations} \label{sec:hermHCcases}
Allowing the cubic Jordan algebra $J$ in Theorem \ref{thm:hermHC} to vary gives many special cases, as highlighted in Table \ref{table:examples}. For certain choices of $J$, we recover some of the previously considered spaces related to hypercubes and the corresponding parametrization theorems, while for others, we obtain moduli spaces of genus one curves with higher rank vector bundles.
For example, for $J = K \times K \times K$, we recover the case of standard hypercubes from \S \ref{sec:hypercube}. In this case,
the homogeneous variety $X_{\mathscr{C}(J)}$ is just the Segre embedding of $\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1$.
If we instead let $J = K$, with norm form $\Norm(x) = x^3$ for an element $x \in K$, then the space ${\mathscr{C}(J)}$ coincides with the space of triply symmetric hypercubes studied in \S \ref{sec:3symHC}, and $X_{\mathscr{C}(J)}$ is the twisted cubic in $\mathbb{P}^3$. Also, the space of doubly symmetric hypercubes (see \S \ref{sec:2symHC}) may be obtained by taking $J = K \times K$, with the norm form $\Norm(x_1, x_2) = x_1 x_2^2$ for $(x_1,x_2) \in K \times K$ and $X_{\mathscr{C}(J)}$ the image of $\mathbb{P}^1 \times \mathbb{P}^1$ embedded in $\mathbb{P}^5$ via $\mathcal{O}(1,2)$.
We describe some of the new moduli problems below. In each case, we describe the bijections more carefully when the algebra $J$ involves split algebras, e.g., matrix algebras rather than more general central simple algebras. Taking other forms of these algebras then gives twisted versions of the geometric data on the genus one curve.
\subsubsection{Doubly skew-symmetrized hypercubes} \label{sec:2skewHC}
A new case arises by choosing the cubic Jordan algebra $J$ to be $K \times \Mat_{2 \times 2}(K)$. We first describe the structure of $J$ and the space of Hermitian cubes ${\mathscr{C}(J)}$ for this choice of $J$.
The norm of an element $(x, M) \in K \times \Mat_{2 \times 2}(K)$ is $\Norm(x,M) = x \det(M)$, and for Springer's construction, we take the basepoint $e$ to be $(1, \mathrm{Id})$. Composition in this algebra is component-wise, with the usual Jordan structure on $\Mat_{2 \times 2}(K)$: for elements $(x_1, M_1), (x_2, M_2) \in K \times \Mat_{2 \times 2}(K)$, we have $(x_1, M_1) \bullet (x_2, M_2) = (x_1 x_2, (M_1 \cdot M_2 + M_2 \cdot M_1)/2)$.
We claim that there is a natural isomorphism between ${\mathscr{C}(J)}$ and the space $W := K^2 \otimes \wedge^2 K^4$, under which the action of $\mathrm{SL}_2(J)$ on ${\mathscr{C}(J)}$ corresponds to the natural action of $\mathrm{SL}_2(K) \times \mathrm{SL}_4(K)$ on $W$. We represent elements of $W$ as pairs of $4 \times 4$ skew-symmetric matrices. Let $a, b, c, d, b_{ij}, c_{ij} \in K$ for $1 \leq i, j \leq 2$. Then this isomorphism sends
$(a, (b, (b_{ij})), (c, (c_{ij})), d) \in {\mathscr{C}(J)}$ to the pair
\begin{equation*}
{\begin{pmatrix}
0 & a & -b_{12} & b_{11} \\
-a & 0 & -b_{22} & b_{21} \\
b_{12} & b_{22} & 0 & c \\
-b_{11} & -b_{21} & -c & 0
\end{pmatrix}}
\qquad \qquad
{\begin{pmatrix}
0 & b & -c_{12} & c_{11} \\
-b & 0 & -c_{22} & c_{21} \\
c_{12} & c_{22} & 0 & d \\
-c_{11} & -c_{21} & -d & 0
\end{pmatrix}}
\end{equation*}
in $W$. It is a slightly tedious but straightforward computation to check that the two group actions align.
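As a quick sanity check on this identification (a direct computation from the pair of matrices displayed above, with helper names of our own): the Pfaffian of the first matrix, which is the quadric cutting out $\Gr(2,4) \subset \mathbb{P}(\wedge^2 K^4)$, equals $ac - \det(b_{ij})$:

```python
# Pfaffian of a 4x4 skew-symmetric matrix m: Pf = m01*m23 - m02*m13 + m03*m12.
def pfaffian4(m):
    return m[0][1]*m[2][3] - m[0][2]*m[1][3] + m[0][3]*m[1][2]

def det2(b11, b12, b21, b22):
    return b11*b22 - b12*b21

def first_slice(a, b, c):
    # The first matrix of the displayed pair, with b = (b11, b12, b21, b22).
    b11, b12, b21, b22 = b
    return [[0, a, -b12, b11],
            [-a, 0, -b22, b21],
            [b12, b22, 0, c],
            [-b11, -b21, -c, 0]]

# Spot check the identity Pf = a*c - det(b_ij) on arbitrary integer entries.
assert pfaffian4(first_slice(2, (1, 4, 5, 6), 3)) == 2*3 - det2(1, 4, 5, 6)
```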
The homogeneous variety $X_{\mathscr{C}(J)}$ is $\mathbb{P}^1 \times \Gr(2,4) \hookrightarrow \mathbb{P}^1 \times \mathbb{P}(\wedge^2 K^4) \hookrightarrow \mathbb{P}(W^\vee)$, where the first inclusion is by the Pl\"{u}cker map and the second is the Segre embedding. A map from a scheme $T$ to $X$ thus gives a degree $2$ line bundle and a rank $2$ vector bundle on $T$. Theorem \ref{thm:hermHC} with this choice of $J$ gives the following moduli description for pairs of Hermitian cubes up to equivalence:
\begin{thm}
Let $V_1$, $V_2$, and $V_3$ be $K$-vector spaces of dimensions $2$, $2$, and $4$,
respectively. Then nondegenerate $\mathrm{GL}(V_1) \times \mathrm{GL}(V_2) \times \mathrm{GL}(V_3)$-orbits of
$V_1 \otimes V_2 \otimes \wedge^2(V_3)$ are in bijection with isomorphism classes of nondegenerate
triples $(C,L_1,L_2,E)$, where $C$ is a genus one curve over $K$, $L_1$ and $L_2$ are non-isomorphic
degree $2$ line bundles on $C$, and $E$ is a rank $2$ vector bundle
on $C$ with $\det E \cong L_1 \otimes L_2$.
\end{thm}
If we record the point $P := L_2 \otimes L_1^{-1}$ instead of the line bundle $L_2$, we recover Theorem~\ref{doubleskewpar}.
We may also replace $J$ with $K \times Q$, for any quaternion algebra $Q$, in which case the homogeneous variety $X$ is a product of $\mathbb{P}^1$ and a twisted form of the Grassmannian $\Gr(2,4)$.
\subsubsection{Triply skew-symmetrized hypercubes} \label{sec:3skewHC}
Now let $J = \mathscr{H}_3(K \times K)$, i.e., the space of $3 \times 3$ Hermitian matrices over the quadratic algebra
$K \times K$. It is easy to check that $J$ is isomorphic to the space $\Mat_{3 \times 3}(K)$ of $3 \times 3$ matrices over $K$,
with composition given by $(M_1 \cdot M_2 + M_2 \cdot M_1)/2$ for $M_1, M_2 \in \Mat_{3 \times 3}(K)$.
Then the space ${\mathscr{C}(J)}$ of Hermitian cubes is isomorphic to $W := \wedge^3 K^6$, with the action of $\mathrm{SL}_2(J)$ matching the natural action of $\mathrm{SL}_6(K)$ on $W$. We will write down the isomorphism, following \cite[Example 19]{krutelevich}. Let $a, b_{ij}, c_{ij}, d \in K$ for $1 \leq i, j \leq 3$. Let $\{e_1, e_2, e_3, f_1, f_2, f_3 \}$ be a basis for $K^6$, and let $e_j^* = e_{j+1} \wedge e_{j+2}$ and $f_j^* = f_{j+1} \wedge f_{j+2}$ in $\wedge^2 K^6$, where the indices are taken cyclically. Then the element $(a,(b_{ij}),(c_{ij}),d)$ is sent to
$$a e_1 \wedge e_2 \wedge e_3 + \sum_{i,j = 1}^3 b_{ij} e_i \wedge f_j^* + \sum_{i,j = 1}^3 c_{ij} f_i \wedge e_j^* + d f_1 \wedge f_2 \wedge f_3$$
in $W = \wedge^3 K^6$.
Here, the homogeneous variety $X_{\mathscr{C}(J)}$ is the Grassmannian $\Gr(3,6)$, which lies in $\mathbb{P}(W^\vee)$ via the Pl\"{u}cker map.
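For example, under this identification the rank one cube $(1,0,0,0)$ maps to the decomposable trivector $e_1 \wedge e_2 \wedge e_3$, that is, to the point of $\Gr(3,6)$ corresponding to the subspace $\langle e_1, e_2, e_3 \rangle \subset K^6$; this is consistent with the description of $X_{\mathscr{C}(J)}$ as the locus of rank one cubes up to $K$-scaling.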
Specializing Theorem \ref{thm:hermHC} gives the following basis-free version of Theorem \ref{thm:3skewHCpreview}:
\begin{thm} \label{thm:3skewHC}
Let $V_1$ and $V_2$ be $K$-vector spaces of dimensions $2$ and $6$,
respectively. Then nondegenerate $\mathrm{GL}(V_1) \times \mathrm{SL}(V_2)$-orbits of
$V_1 \otimes \wedge^3(V_2)$ are in bijection with isomorphism classes of
nondegenerate triples $(C,L,E)$, where $C$ is a genus one curve over $K$, $L$ is a
degree $2$ line bundle on $C$, and $E$ is a rank $3$ vector bundle
on $C$ with $\det E \cong L^{\otimes 3}$.
\end{thm}
\subsubsection{Some more exceptional representations} \label{sec:excHC}
For $J = \mathscr{H}_3(\mathscr{Q})$, where $\mathscr{Q}$ denotes the split quaternion algebra over $K$ (isomorphic to $\Mat_{2 \times 2}(K)$), we obtain a more exceptional representation and theorem.
\begin{thm}
Let $V_1$ and $V_2$ be $K$-vector spaces of dimensions $2$ and $32$, respectively, where $V_2$
is the half-spin representation of $\Spin_{12}$. Let $X$ be the homogeneous space for this action in
$\mathbb{P}(V_2^\vee)$.
Then nondegenerate $\mathrm{GL}(V_1) \times \Spin_{12}$-orbits of $V_1 \otimes V_2$
are in bijection with the $K$-points of an open substack
of the moduli space of nondegenerate tuples $(C,L,\kappa, \psi)$, where $C$ is a genus one curve, $L$ is a degree $2$
line bundle on $C$, and $\kappa$ is a map from $C$ to $X \subset \mathbb{P}(V_2^\vee)$, along
with an isomorphism $\psi$ from $L^{\otimes 3}$ to the pullback of $\mathcal{O}_{\mathbb{P}(V_2^\vee)}(1)$ to $C$ via $\kappa$.
\end{thm}
In Table \ref{table:examples}, we referred to this choice of $X_{\mathscr{C}(J)}$ as the projective line
over the cubic algebra $J$; this is due to our interpretation of $X_{\mathscr{C}(J)}$ as the rank one cubes
that are Hermitian over $J$.
In addition, analogous theorems hold when $\mathscr{Q}$ is replaced by a non-split quaternion algebra; the group $\Spin_{12}$ is then replaced by the appropriate twisted form.
For $J = \mathscr{H}_3(\mathscr{O})$, where $\mathscr{O}$ denotes the split octonion algebra over $K$, we have
a similar statement.
\begin{thm}
Let $V_1$ and $V_2$ be $K$-vector spaces of dimensions $2$ and $56$, respectively, where $V_2$ is the minuscule
representation of the group $E_7$. Let $X$ be the homogeneous space for this action in $\mathbb{P}(V_2^\vee)$. Then
nondegenerate $\mathrm{GL}(V_1) \times E_7$-orbits of $V_1 \otimes V_2$ are in bijection with the $K$-points of an open substack
of the moduli space of nondegenerate tuples $(C,L,\kappa,\psi)$, where $C$ is a genus one curve, $L$ is a degree $2$
line bundle on $C$, and $\kappa$ is a map from $C$ to $X \subset \mathbb{P}(V_2^\vee)$, along
with an isomorphism $\psi$ from $L^{\otimes 3}$ to the pullback of $\mathcal{O}_{\mathbb{P}(V_2^\vee)}(1)$ to $C$ via $\kappa$.
\end{thm}
Again, taking different octonion algebras over $K$ gives similar theorems, where the split $E_7$ is replaced by twisted forms
of $E_7$.
\section{Connections with exceptional Lie groups and Lie algebras} \label{sec:ExcLieAlgs}
In this section, we describe two ways in which the coregular representations we have considered
in this paper are related to exceptional Lie groups and Lie algebras.
These still-mysterious connections suggest further
moduli problems and directions for investigation.
\subsection{Vinberg's \texorpdfstring{$\theta$}{theta}-groups}\label{vinberg}
All of the representations we have considered in this paper are {\em $\theta$-groups} in the sense
of Vinberg \cite{vinberg}, when viewed as representations of complex Lie groups.
Vinberg's idea was to extend the concept of a Weyl group and a Cartan
subspace to graded Lie algebras. Then the invariants of the representation correspond exactly
to the invariants of the Cartan subspace under the action of the Weyl group. Since the Weyl group
here is generated by complex reflections, its ring of invariants is free, so the representations
obtained in this way are coregular. Moreover, this construction gives a description of the invariants
(including their degrees) in terms of the Lie theory.
Let $\mathfrak{g}$ be a $\mathbb{Z}/m\mathbb{Z}$-graded (or $\mathbb{Z}$-graded) Lie algebra for some integer $m \geq 1$ (or
respectively, $m = \infty$). Then for $m$ finite, we may write
$$\mathfrak{g} = \mathfrak{g}_0 + \mathfrak{g}_1 + \cdots + \mathfrak{g}_{m-1}$$
with $[\mathfrak{g}_i,\mathfrak{g}_j] \subset \mathfrak{g}_{i+j}$ for $i,j \in \mathbb{Z}/m\mathbb{Z}$.
To each such graded Lie algebra, one associates an automorphism $\theta$ (or, for $m$ infinite, a one-parameter
family of such $\theta$) of $\mathfrak{g}$; e.g., for $m$ finite, we have $\theta(x) = \omega^k x$ for
$x \in \mathfrak{g}_k$ and $\omega = e^{2\pi i/m}$.
Given a graded Lie algebra $\mathfrak{g}$, let $G$ be any connected group having $\mathfrak{g}$ as its Lie algebra,
and let $G_0$ be the connected subgroup of $G$ with $\mathfrak{g}_0$ as its Lie algebra. Then a
{\em $\theta$-group} corresponding to $\mathfrak{g}$ is the representation of $G_0$ on $\mathfrak{g}_1$.
(The name ``group'' makes sense by thinking of $G_0$ as a subgroup of $\mathrm{GL}(\mathfrak{g}_1)$.)
Vinberg showed that the $G_0$-invariants of $\mathfrak{g}_1$ form a polynomial ring, and in fact,
the elements of $\mathfrak{g}_1$ with the same $G_0$-invariants comprise a finite number of orbits
over $\mathbb{C}$ \cite{vinberg}. Moreover, Kac showed that most such representations arise in this way
\cite{kac-nilpotent}.
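As a minimal example (our illustration, with $m = 2$): let $\mathfrak{g} = \mathfrak{sl}_2(\mathbb{C})$ with the $\mathbb{Z}/2\mathbb{Z}$-grading induced by conjugation by $\mathrm{diag}(1,-1)$, so that $\mathfrak{g}_0$ is the diagonal torus and $\mathfrak{g}_1$ consists of the off-diagonal matrices $\left(\begin{smallmatrix} 0 & x \\ y & 0 \end{smallmatrix}\right)$. Then $G_0 \cong \mathbb{G}_m$ acts on $\mathfrak{g}_1 \cong \mathbb{C}^2$ by $(x,y) \mapsto (t^2 x, t^{-2} y)$, and the ring of $G_0$-invariants is the polynomial ring $\mathbb{C}[xy]$, generated by the single degree $2$ invariant $xy = -\det$.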
These observations give a heuristic reason for looking at these particular representations if
we want to find parametrizations of genus one curves with data such as line bundles and
points on the Jacobian. The coarse moduli space of such objects is often a weighted projective space,
or a generically finite cover thereof, e.g., in many of the cases in this paper, the orbit spaces over $\mathbb{C}$ parametrize elliptic
curves in some family determined by the invariants. In contrast, arithmetic objects such as rings and ideal
classes, whose coarse moduli spaces are just a finite number of points, are often parametrized by
orbits of prehomogeneous vector spaces.
The $\theta$-groups may be read off directly from subdiagrams of Dynkin diagrams or
affine Dynkin diagrams \cite{vinberg,kac-nilpotent}. For subdiagrams of Dynkin
diagrams, the $\theta$-groups are all prehomogeneous vector spaces. All other
irreducible $\theta$-groups are listed in \cite[Table III]{kac-nilpotent}.
These are given by removing a single node from the affine Dynkin diagram.
We have indicated the affine Dynkin diagram that gives rise to each representation
in Table \ref{table:examples}. (Note that in Table~\ref{table:examples} we only
list the semisimple part of the $\theta$-group.) We have $m=2$ for lines 1--12 of Table~\ref{table:examples}, $m=3$ for lines 13--18, $m=4$ for line 19, and $m=5$ for line 20. Thus the value of $m$ corresponds to the degree of the associated line bundles in the geometric data!
The connection between these moduli problems and Vinberg theory, especially in the special case $m=2$, is investigated in the beautiful work of Thorne~\cite{jthorne-thesis}. It is an interesting question as to how Thorne's representation-theoretic constructions of families of curves when $m=2$, obtained via Vinberg theory and the deformation theory of simple singularities, are related to our more direct geometric constructions of these families.
\subsection{The Magic Triangle of Deligne and Gross}
In~\cite{delignegross}, Deligne and Gross observed that many of the remarkable properties of the
adjoint representations of the groups in the exceptional series
\begin{equation}\label{es}
1\subset A_1 \subset A_2 \subset G_2 \subset D_4 \subset F_4 \subset E_6 \subset E_7\subset E_8
\end{equation}
(as observed in~\cite{deligne-exceptionalseries}) persist for certain other natural sequences of representations.
Namely, for each pair $H \subset K$ of distinct subgroups in (\ref{es}), we may consider the centralizer $Z(H,K)$ of $H$ in $K$. In this way, we obtain a triangle of subgroups of $E_8$, as shown in Table~\ref{dgtable}, where the rows are
indexed by $H=E_7,\ldots,A_1$ from top to bottom, and the columns are indexed by $K=A_2,\ldots,E_8$ from left to right, and where we have
ignored all semidirect products with, and quotients by, finite groups.
Deligne and Gross show that each of the groups in this triangle is naturally equipped with a certain
{\it preferred} representation of that group, obtained from the action of $Z(H,K) \times H$ on ${\rm Lie}(K)$
(see~\cite{delignegross} for details).
In Table~\ref{dgtable}, we have included also the dimensions of these preferred representations.
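For instance, taking $H=G_2$ and $K=E_8$, the centralizer is $Z(G_2,E_8)=F_4$, and the adjoint representation of $E_8$ decomposes under $G_2\times F_4$ as
\begin{equation*}
{\rm Lie}(E_8) \;\cong\; {\rm Lie}(G_2)\,\oplus\,{\rm Lie}(F_4)\,\oplus\,\bigl(V_7\otimes V_{26}\bigr),
\qquad 248 \;=\; 14+52+7\cdot 26,
\end{equation*}
so the preferred representation of $F_4$ here is its $26$-dimensional representation, in agreement with the entry $(F_4,26)$ of the triangle. (We state this standard decomposition only to illustrate the construction.)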
\begin{table}
$$\begin{array}{cccccccc}
 & & & & & & & (A_1,2) \\[.1in]
 & & & & & &(\mathbb{G}_m,1) & (A_2,3) \\[.1in]
 & & & & &(\mu_3,1) &(A_1,3) & (G_2,7) \\[.1in]
 & & & &(\mu_2^2,1) & (\mathbb{G}_m^2,2)& (A_1^3,4)& (D_4,8) \\[.1in]
 & & &(\mu_2^2,2) & (A_1,5)& (A_2,8) & (C_3,14) & (F_4,26) \\[.1in]
 & & (\mu_3,1)&(\mathbb{G}_m^2,3) & (A_2,6)& (A_2^2,9)& (A_5,15)& (E_6,27)\\[.1in]
 & (\mathbb{G}_m,2)& (A_1,4)& (A_1^3,8)& (C_3,14)& (A_5,20)& (D_6,32)& (E_7,56) \\[.1in]
\end{array}
$$
\caption{Deligne and Gross's Magic Triangle of group representations}
\label{dgtable}
\end{table}
Note that the groups in the bottom right $4\times 4$ square in Table~\ref{dgtable} correspond to the
entries of ``Freudenthal's Magic Square'' of Lie algebras \cite{freudenthal-magic}.
If one takes the last row of representations $(Z(H,K),V)$, where $H=A_1$, and considers instead $(Z(H,K)\times H, V\otimes 2)$, one obtains many of the representations that we used in Sections~\ref{sec:HCpreview} and
\ref{sec:hermHC} to understand degree 2 line bundles on genus one curves. Similarly, if one takes the second-to-last row of representations $(Z(H,K),V)$, where $H=A_2$, and considers instead $(Z(H,K)\times H,V\otimes 3)$, one obtains many of the representations that we used in Sections~\ref{sec:RCpreview} and \ref{sec:hermRC} to understand degree~3 line bundles on genus one curves.
As with \S\ref{vinberg}, we suspect that much more lies behind this connection.
\section{Introduction}
The mobility of a single spherical particle immersed in a solvent determines
the velocity of the particle in response to an applied external force. For
small Reynolds numbers Stokes' law yields the famous expression
$\mu_0^{-1}=3\pi\eta a$ in terms of the sphere diameter $a$ and the solvent
viscosity $\eta$ in thermal equilibrium. This free mobility $\mu_0$ is
intimately related to spontaneous solvent fluctuations through the Einstein
relation. For a suspension of interacting particles, even without hydrodynamic
coupling, the mobility $\mu$ of a single tagged particle is reduced. This
reflects the fact that work is necessary to displace neighboring particles in
order for the tagged particle to move, leading to larger dissipation. Still,
in equilibrium the Einstein relation
\begin{equation}
\label{eq:er}
D = k_\text{B}T\mu
\end{equation}
equates the effective, long-time diffusion coefficient $D$ obtained from
measuring the mean square displacement of a single tagged particle with its
mobility through the solvent temperature $T$, where $k_\text{B}$ is the
Boltzmann constant.
The Einstein relation~(\ref{eq:er}) is one out of many fluctuation-dissipation
relations valid in the \textit{linear response regime} for small perturbations
of the equilibrium state~\cite{kubo}. It is crucial to realize that
nonequilibrium steady states also allow for a linear response. However, when the
suspension is driven beyond the linear response regime, fluctuation-dissipation
relations such as the Einstein relation~\eqref{eq:er} need to be generalized
to nonequilibrium. There are two basic strategies discussed in the
literature. The first strategy is to introduce an additive correction taking
on the form of another correlation function~\cite{cris03,diez05,spec06}. This
correlation function involves another observable that can be related either to
entropy production~\cite{seif10} or 'dynamical activity'~\cite{baie09}. Such
an approach has been demonstrated experimentally for a single driven colloidal
particle~\cite{blic07,gome09,mehl10}. The second strategy introduces a
multiplicative correction through an effective
temperature~\cite{cugl97a,kurc05} replacing $T$ in
Eq.~\eqref{eq:er}. Originally developed in the context of aging, glassy
dynamics and weakly driven systems, effective temperatures have also been
investigated in shear driven supercooled liquids~\cite{bert02a}.
Self-diffusion in sheared interacting suspensions has been studied extensively
in computer simulations~\cite{heye86,cumm91,rast96,foss00,bert02a} and
experiments~\cite{qiu88,bess07} as well as
analytically~\cite{indr95,morr96,szam04}. A large body of publications
studies supercooled conditions in relation with the glass
transition~\cite{bert02a,bess07,krue10}. Most of these works focus on the
self-diffusion coefficient as this quantity is easily obtained from
experiments and simulations. The mobility of a tagged particle has been
addressed somewhat less prominently and mostly in analytic
calculations~\cite{blaw95,szam04}. In this Letter we determine numerically
both the full time-dependent velocity response and autocorrelation functions
for a tagged particle of a hard-core Yukawa suspension driven into a
nonequilibrium steady state through simple shear flow. In contrast to previous
work, explicit knowledge of the response function allows us to calculate and
discuss the mobility as a function of density and strain rate. We use a novel
method to efficiently obtain the time-dependent response function of the
velocity with respect to a small force applied to a single particle. Similar
methods to extract the response of a system using only unperturbed steady
state trajectories have been discussed in Refs.~\cite{chat04,bert07}.
\section{Sheared hard-core Yukawa fluid}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{flow}
\caption{a) Simple shear flow with strain rate $\shr$. b) Pair distribution
function $g(\x)$ in the $xy$ plane for volume fraction $\phi=0.3$ and
strain rate $\shr=1$. Centered on any particle, the function $g(\x)$
quantifies the probability to find another particle at $\x$. While
isotropic in equilibrium, the pair distribution function is distorted
through external flow.}
\label{fig:flow}
\end{figure}
The $N$ colloidal particles interact through the purely repulsive Yukawa
potential
\begin{equation}
\label{eq:yuk}
u(r) =
\begin{cases}
\varepsilon\frac{e^{-\kappa(r-1)}}{r} & (r\geqslant 1) \\
\infty & (r<1)
\end{cases}
\end{equation}
with hard core exclusion. The two potential parameters are the energy $\varepsilon$
at contact and the screening length $\kappa^{-1}$ determining the range of
interactions. Changing $\kappa$ interpolates between hard-sphere (large $\kappa$)
and Coulombic (small $\kappa$) interactions. Throughout the paper we employ
dimensionless units and measure lengths in units of the particle diameter $a$
and energies in units of $k_\text{B}T$. The time scale $3\pi a^3\eta/k_\text{B}T$ corresponds to
the time a free particle needs to diffuse a distance equal to its diameter. In particular,
employing these units the mobility and diffusion coefficient of a free
particle reduce to unity, $D_0=\mu_0=1$. We set $\kappa^{-1}=0.2$ and choose
$\varepsilon=8$ such that the liquid is stable for a large pressure
range~\cite{azha00}. We explore the liquid phase at four volume fractions
$\phi=0.1,0.2,0.3,0.4$, where $\phi\equiv\pi N(a/L)^3/6$ with $L$ the side
length of the cubic simulation box. The highest density $\phi=0.4$ is close to
the equilibrium freezing transition, which occurs at a dimensionless pressure of
$28.9$~\cite{azha00} (for $\phi=0.4$ the measured pressure in our simulation
is $27.6$). For comparison, the freezing transition in a hard sphere
suspension occurs at $\phi\simeq0.494$~\cite{ande02}.
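For concreteness, the pair interaction~\eqref{eq:yuk} with the parameters used here ($\varepsilon=8$, $\kappa^{-1}=0.2$) can be sketched in a few lines of Python (an illustrative helper, not part of the simulation code):

```python
import math

def yukawa(r, eps=8.0, kappa=5.0):
    """Hard-core Yukawa pair potential, Eq. (2), in units of k_B T and the
    particle diameter a (the hard core therefore sits at r = 1)."""
    if r < 1.0:
        return math.inf                               # hard-core exclusion
    return eps * math.exp(-kappa * (r - 1.0)) / r
```

With the screening length $\kappa^{-1}=0.2$ the potential at contact is $u(1)=\varepsilon=8\,k_\text{B}T$, and it has decayed below $10^{-2}\,k_\text{B}T$ by $r\approx2.2$.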
We employ Brownian dynamics simulations, for details see the appendix. The
suspension is driven into a nonequilibrium steady state through simple shear
flow $\vec u(\x)=\shr y\vec e_x$ with strain rate $\shr$ (which equals the
P\'eclet number in our units), see Fig.~\ref{fig:flow}a). The equations of
motion are $\dot\x_k=\vec v^0_k$ and
\begin{equation}
\label{eq:lang}
\dot{\vec v}^0_k = -\nabla_k U - [\vec v^0_k -\vec u(\x_k)]
+ \vec f_k + \boldsymbol\xi_k,
\end{equation}
where the dimensionless mass is set to one. Physically, this choice implies
that momenta relax on the diffusive time scale. Besides the forces due to the
potential energy $U\equiv\sum_{k<k'}u(|\x_k-\x_{k'}|)$ we allow for direct
forces $\vec f_k$. The stochastic noise $\boldsymbol\xi_k$ modeling the
interactions with the solvent has zero mean and correlations
$\mean{\xi_{ki}(t)\xi_{k'j}(t')}=2\delta_{ij}\delta_{kk'}\delta(t-t')$, where
$i,j=x,y,z$ is the vector component. In Eq.~(\ref{eq:lang}), we neglect
hydrodynamic coupling between different particles due to the solvent.
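To make the dynamics~\eqref{eq:lang} concrete, the following sketch advances a single, non-interacting particle by one explicit Euler--Maruyama step. The actual simulations use a stochastic velocity Verlet integrator (see the appendix); the function below and its simplifications are our own.

```python
import numpy as np

def step(x, v, dt, shear_rate, rng, force=np.zeros(3)):
    """One Euler-Maruyama step of Eq. (3) for a single particle (U = 0).
    Units: mass = 1, friction = 1; the noise obeys <xi_i xi_j> =
    2 delta_ij delta(t - t'), i.e. a per-step variance of 2/dt."""
    u = np.array([shear_rate * x[1], 0.0, 0.0])       # flow u(x) = gdot * y * e_x
    xi = rng.normal(0.0, np.sqrt(2.0 / dt), size=3)   # discretized white noise
    v_new = v + dt * (-(v - u) + force + xi)
    x_new = x + dt * v_new
    return x_new, v_new
```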
When the shear flow is turned on, we correct the particle velocities $\{\vec
v^0_k\}$ for the external flow and investigate the relative velocities
$\vec v_k=\vec v^0_k-\vec u(\x_k)$. We are interested in the dynamics
of a single tagged particle interacting with the remaining $N-1$ particles
in the suspension. Since all particles are identical we designate particle 1
as the tagged particle and drop the subscript; in the following $\x$ and
$\vec v$ are the position and relative velocity of the tagged particle,
respectively. We define the response of this velocity
\begin{equation}
\label{eq:R}
R_{ij}(t-t';\shr) \equiv \fd{\mean{v_i(t)}}{f_j(t')}
\end{equation}
with respect to an \textit{additional} small force $\vec f$. In addition, we
define the relative velocity autocorrelation matrix
\begin{equation}
\label{eq:C}
C_{ij}(t-t';\shr) \equiv \mean{v_i(t)v_j(t')}_0.
\end{equation}
The brackets $\mean{\cdots}_0$ refer to an average with respect to the
unperturbed steady state whereas $\mean{\cdot}$ is the average with the
external force applied. In equilibrium ($\shr=0$) the fluctuation-dissipation
theorem $R_{ij}(t)=C_{ij}(t)$ holds.
\section{Response function}
Sampling the correlation function~\eqref{eq:C} is straightforward. The direct
way to obtain the response matrix~\eqref{eq:R} from simulations would be
through a step perturbation of the force and the subsequent recording of the
tagged particle's velocity. Such a protocol has to be repeated many times and
the corresponding response function follows as the time-derivative of the mean
velocity. Here, we follow another route and determine the response function
through the path integral representation of the fluctuation-dissipation
theorem (FDT) for nonequilibrium steady states. The FDT in its general form
reads
\begin{equation*}
R_{ij}(t-t';\shr) = \mean{ v_i(t) B_j(t')}_0
\end{equation*}
with observable $B_j(t)=\delta\ln P/\delta f_j(t)|_{\vec f=0}$ conjugate to
the perturbation force $\vec f$ acting on the tagged particle. The stochastic
path weight is
\begin{equation*}
P[\{\boldsymbol\xi_k(t)\};\vec f(t)] = \mathcal{N} \exp\left\{ -\frac{1}{4}
\sum_{k=1}^N\Int{t}\boldsymbol\xi^2_k(t) \right\}
\end{equation*}
with normalization constant $\mathcal N$. From Eq.~\eqref{eq:lang} we see that
a perturbation of $\vec f$ is equivalent to a perturbation of $\boldsymbol\xi$. With
$\delta\xi_i(t)/\delta f_j(t')=-\delta_{ij}\delta(t-t')$ we immediately obtain
$B_j=\xi_j/2$ and therefore
\begin{equation}
\label{eq:R:nois}
R_{ij}(t-t';\shr) = \frac{1}{2}\mean{v_i(t)\xi_j(t')}_0.
\end{equation}
Since in a computer simulation we have direct access to the noise we can
exploit such an expression to obtain the response function through a steady
state correlation function. While Eq.~\eqref{eq:R:nois} has been known
before~\cite{cala05,spec06}, to the best of our knowledge it has so far not
been exploited to obtain the response function numerically.
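In practice, Eq.~\eqref{eq:R:nois} amounts to correlating the stored velocities with the stored noise values of the unperturbed run. A minimal Python estimator (our own sketch; discretization factors for the $\delta$-correlated noise are glossed over) reads:

```python
import numpy as np

def response_from_noise(v, xi, max_lag):
    """Estimate R_ij(t) = (1/2) <v_i(t0 + t) xi_j(t0)>_0, Eq. (6), from an
    unperturbed steady-state trajectory. v, xi: arrays of shape (T, 3)
    sampled at equally spaced times. Returns an array (max_lag, 3, 3)."""
    T = v.shape[0]
    R = np.empty((max_lag, 3, 3))
    for lag in range(max_lag):
        # average v_i at time t0 + lag against xi_j at time t0
        R[lag] = 0.5 * np.einsum('ti,tj->ij', v[lag:], xi[:T - lag]) / (T - lag)
    return R
```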
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{offdiag1}
\caption{Comparison of the off-diagonal components $R_{xy}(t)$ and
$R_{yx}(t)$ for strain rate $\shr=1.0$ and the four simulated volume
fractions: a)~$\phi=0.1$, b)~$\phi=0.2$, c)~$\phi=0.3$, and
d)~$\phi=0.4$. Increasing the density, the two curves approach each other
until for the highest density they almost lie on top of each other. Note
the changing time scale.}
\label{fig:offdiag}
\end{figure}
To understand the influence of the shear flow on the particle motion it is
instructive to look at the off-diagonal components $R_{xy}$ and $R_{yx}$
plotted in Fig.~\ref{fig:offdiag}. The component $R_{yx}$ describes the mean
behavior of a tagged particle when we apply a force parallel to the shear flow
and measure its velocity perpendicular to it, in the $y$ direction. The behavior of
$R_{yx}$ can be explained by looking at the pair distribution function in
Fig.~\ref{fig:flow}b), which is deformed compared to its equilibrium isotropic
shape. In order to move faster, at short times the particle moves up ($R_{yx}$
is positive), circumventing its neighbors through a region where the probability
of encountering another particle is lower. At a later time the particle is pushed back
($R_{yx}$ is negative) due to interactions with other particles which become
more pronounced at higher densities. Interchanging the $x$ and $y$ directions, the
same arguments hold for the component $R_{xy}$. However, since we pull the
particle up it enters a region where the surrounding fluid moves faster due to
the shear flow. Hence, the particle is accelerated and the response of the
relative velocity is negative for small times. With increasing density the
particle cannot move far in $y$-direction, making the velocity differences
smaller. The qualitative difference between the two curves diminishes and for
$\phi=0.4$ both almost lie on top of each other. Hence, the effect of the
shear flow on a single particle is more and more symmetric as particle motion
becomes correlated at higher densities.
\section{Diffusion and mobility}
\begin{figure}[b!]
\centering
\includegraphics[width=\linewidth]{diffusion}
\caption{Diffusion coefficients $D_\parallel$ parallel and $D_\perp$
perpendicular to the shear flow \textit{vs}. strain rate $\shr$ for the
four different volume fractions $\phi$. For a free particle
$D_\perp=D_\parallel=1$.}
\label{fig:diff}
\end{figure}
We now turn to the nonequilibrium diffusion coefficients and mobilities,
\begin{equation}
\label{eq:int}
D_{ij} \equiv \IInt{t}{0}{\infty} C_{ij}(t), \quad
\mu_{ij} \equiv \pd{\mean{v_i}}{f_j} = \IInt{t}{0}{\infty}R_{ij}(t),
\end{equation}
which are obtained through integrating the velocity autocorrelation and
response matrix, respectively. The mobility is defined as the velocity change
in response to a small force applied to the tagged particle, perturbing the
steady state reached through shearing the solvent. The diffusion coefficients
are related to the velocity autocorrelation through a Green-Kubo-type
relation. They are plotted in Fig.~\ref{fig:diff} and increase with increasing
strain rate. From our data we find that we can distinguish diffusion parallel
to the shear flow with $D_\parallel=D_{xx}$ and diffusion perpendicular with
$D_\perp=D_{yy}=D_{zz}$. In shear flow the tagged particle moves between
layers of different flow velocity, effectively leading to larger fluctuations,
allowing the particle to explore phase space faster. At low density the
diffusion $D_\parallel$ parallel to the shear flow is substantially enhanced
compared to $D_\perp$ even though we subtract out the external flow. However,
the difference between the two diffusion coefficients vanishes with increasing
density.
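Numerically, the integrals in Eq.~\eqref{eq:int} are evaluated from the sampled correlation and response functions, truncated once they have decayed; a simple trapezoidal version (our own sketch) is:

```python
import numpy as np

def transport_coefficient(corr, dt):
    """Trapezoidal time integral of a sampled correlation (or response)
    function, Eq. (7): D_ij = int_0^inf C_ij(t) dt and
    mu_ij = int_0^inf R_ij(t) dt. corr[k] = C(k * dt) along axis 0;
    the integral is truncated at the last sample."""
    corr = np.asarray(corr, dtype=float)
    return dt * (corr.sum(axis=0) - 0.5 * (corr[0] + corr[-1]))
```

As a sanity check, a free particle has $C_{ii}(t)=e^{-t}$ in our units, and the integral correctly returns $D_0=1$.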
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{mobility}
\caption{Reduced mobilities $\mu/\mu_\text{eq}$ \textit{vs}. strain rate
$\shr$ for the four different volume fractions $\phi$. The lines are fits
to Eq.~\eqref{eq:mu}.}
\label{fig:mob}
\end{figure}
In Fig.~\ref{fig:mob} we plot the reduced mobility
$\mu(\phi,\shr)/\mu_\text{eq}(\phi)$ versus strain rate $\shr$ for the four
simulated densities with equilibrium ($\shr=0$) mobility $\mu_\text{eq}$. We
find that the diagonal components of the mobility matrix are equal within
error bars and we obtain the shown mobilities through averaging over the three
directions. The off-diagonal components are somewhat harder to obtain due to
large statistical errors but are clearly much smaller than their diagonal
counterparts (data not shown). The dependence of the absolute value of the
mobility on the strain rate is rather weak and for the lowest density
$\phi=0.1$ it is even constant. Such a weak dependence suggests that the
ability of the solvent to reorganize in response to dragging the tagged
particle out of the 'cage' formed by neighboring particles is only slightly
affected by the presence of the shear flow. Going to supercooled conditions,
this is likely to break down~\cite{habd04}.
To explain the dependence of the mobility on the strain rate we consider the
mean velocity of the tagged particle from Eq.~(\ref{eq:lang}),
\begin{equation*}
\mean{\vec v} = \mean{\vec F^{(1)}} + \vec f = -(N/L)^3\Int{\x}
g(\x)\nabla u(\x) + \vec f.
\end{equation*}
Here, $g(\x;\phi,\shr,\vec f)$ is the pair distribution function to find a
second particle at $\x$ if there is a particle at the origin, see
Fig.~\ref{fig:flow}b), and $\vec F^{(1)}\equiv-\nabla_1U$ is the force exerted
by neighboring particles on the tagged particle. The effects of the shear flow
and the force $\vec f$ enter only through the structure information encoded in
the pair distribution function. We can expand $g$ into a Taylor series for
small forces $\vec f$. On the other hand, it is well known that the structure
of the suspension in the presence of shear flow is singularly perturbed from
its isotropic equilibrium form~\cite{dhont,russel}, requiring an asymptotic
expansion in powers of $\shr^{1/2}$. Such an expansion in lowest order leads
to
\begin{equation}
\label{eq:mu}
\mu(\phi,\shr)\approx\mu_\text{eq}(\phi)\left[1+\chi(\phi)\shr^{1/2}\right].
\end{equation}
In principle the coefficients $\chi(\phi)$ could be obtained from the
knowledge of the perturbed pair distribution function $g(\x)$. Here, we
determine them through fitting the mobility, see the lines in
Fig.~\ref{fig:mob}. The fits show a good agreement with the simulated data for
all strain rates and densities even though we have only retained the lowest
order of the expansion~(\ref{eq:mu}).
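The fits in Fig.~\ref{fig:mob} reduce to a one-parameter linear least-squares problem in the variable $\shr^{1/2}$; schematically (our own helper, not the production fitting code):

```python
import numpy as np

def fit_chi(shear_rates, reduced_mobility):
    """Least-squares fit of the lowest-order expansion Eq. (8),
    mu / mu_eq = 1 + chi * gdot^{1/2}, for the single coefficient chi."""
    x = np.sqrt(np.asarray(shear_rates, dtype=float))
    y = np.asarray(reduced_mobility, dtype=float) - 1.0
    return float(x @ y / (x @ x))
```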
\section{Einstein relation}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{teff}
\caption{The response $R_{xx}(t)$ (dashed) and correlation $C_{xx}(t)$
(dotted) functions for volume fractions a) $\phi=0.1$, b) $\phi=0.2$, and
c) $\phi=0.3$. The correlation functions are scaled by a constant factor
$1/\theta_x$ to match the initial decay of the response function. The
insets show these factors as a function of strain rate for the three
directions, where the lines are fits to Eq.~\eqref{eq:teff}.}
\label{fig:teff}
\end{figure*}
In Fig.~\ref{fig:teff} we plot $R_{xx}(t)$ and $C_{xx}(t)$ as functions of
time for different volume fractions. The correlation functions have been
scaled by a constant factor $1/\theta_x$ to match the initial decay of the
response functions. This procedure reveals a rather good agreement between
response and correlation function even for longer times (this holds also for
the $yy$ and $zz$ components). We approximate the ratio
\begin{equation}
\label{eq:teff}
\frac{C_{ii}(t;\phi,\shr)}{R_{ii}(t;\phi,\shr)} \approx \theta_i(\phi,\shr)
\approx 1 + \alpha_i(\phi)\shr^2
\end{equation}
by these constant factors. Using $R_{ii}(0)=1$ we can interpret
$\theta_i\approx\mean{v_i^2}_0$ as effective temperatures since they equal
approximately the velocity fluctuations of the tagged particle. We have also
checked the distribution functions of the velocity which are Gaussian with
width $\sqrt{\theta_i}$ as expected.
In Eq.~(\ref{eq:teff}) we expand the correlation function to second order in
the strain rate (due to the symmetry of simple shear flow there is no first
order) with fit parameters $\alpha_i$. This expansion becomes exact in the
case of interacting particles with linear forces~\cite{spec09}. In the insets
of Fig.~\ref{fig:teff} the factors $\theta_i$ for the three directions $x$,
$y$, and $z$ are shown to follow this quadratic prediction. Similar to the
diffusion coefficients we can distinguish a factor $\theta_\parallel=\theta_x$
parallel, and a factor $\theta_\perp=\theta_y=\theta_z$ perpendicular to the
shear flow. Again, increasing the density the difference between the two
directions vanishes.
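Since $R_{ii}(0)=1$ in our units, the scale factors can be read off directly from the equal-time velocity fluctuations of the unperturbed run; a minimal estimator (our sketch) is:

```python
import numpy as np

def effective_temperatures(v):
    """theta_i ~ <v_i^2>_0: equal-time velocity fluctuations of the tagged
    particle, which set the factor between C_ii(t) and R_ii(t) because
    R_ii(0) = 1 in the units used here. v: array of shape (T, 3)."""
    v = np.asarray(v, dtype=float)
    return (v ** 2).mean(axis=0)     # (theta_x, theta_y, theta_z)
```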
Two points are noteworthy. First, no simple proportionality can be found
between the off-diagonal components of the response and the correlation
matrix. Both have a qualitatively different shape (data not shown). In
particular, the off-diagonal response components are strictly zero at $t=0$
whereas, in nonequilibrium, the off-diagonal velocity correlations are
different from zero. Second, there is a fundamental difference compared to the
effective temperature discussed for non-stationary, glassy dynamics out of
equilibrium. There, fluctuation and dissipation are related by an effective
temperature at low frequencies (i.e., on long time scales) while the initial
decay of response and correlations is governed by the bath
temperature~\cite{cugl97a}. Moreover, the effective temperature evolves slowly
as the system approaches equilibrium. In contrast, in our case already the
initial decay is governed by a temperature $\theta_i>1$. The effect of this
temperature extends into the tails of response and correlation functions. It
is only for high densities that we observe a deviation in the tails as can be
seen in Fig.~\ref{fig:teff}c).
We can finally write down a simple generalized Einstein relation
\begin{equation}
\label{eq:er:mod}
D_{ii} = \theta_i\mu
\end{equation}
for the diagonal components of the diffusion matrix. Inserting the expansions
for mobility [Eq.~\eqref{eq:mu}] and effective temperature
[Eq.~\eqref{eq:teff}] we find that $D_{ii}-\mu_\text{eq}\sim\shr^{1/2}$ to
lowest order. Such a dependence was also found in molecular dynamics
simulations for a Lennard-Jones fluid~\cite{heye86,cumm91}. Due to the small
$\chi/\alpha_i$ ratios we cannot resolve this $\shr^{1/2}$ dependence here.
In Fig.~\ref{fig:er} we test the putative effective temperature by comparing
$\theta_\perp\mu$ to the numerically obtained diffusion coefficient $D_\perp$
perpendicular to the shear flow. The mobility for $\phi=0.1$ is independent of
strain rate and the diffusion follows the quadratic prediction
$D_\perp\propto\shr^2$. While we find a good agreement for the two lowest
densities, the effective temperature underestimates the diffusion coefficient
at intermediate strain rates and high densities since the diffusion
coefficient qualitatively changes and approaches a linear function
$D_\perp\propto\shr$ at high densities. This indicates that the differences in
the tails of response and velocity autocorrelation functions become more
important. Also, higher order terms might be required in the expansion of
mobility and effective temperature.
\begin{figure}[b!]
\centering
\includegraphics[width=\linewidth]{er}
\caption{Test of the Einstein relation $D_\perp=\theta_\perp\mu$ with
effective temperature $\theta_\perp$ from Eq.~(\ref{eq:teff}) and mobility
from Eq.~(\ref{eq:mu}). The curves show a very good agreement for the
lowest two densities but start to deviate at higher densities.}
\label{fig:er}
\end{figure}
\section{Experimental realization}
We briefly discuss how our findings could be tested in experiments. Of course,
the route via Eq.~\eqref{eq:R:nois} to obtain the velocity response matrix
through the explicit knowledge of the noise is not available in experiments.
Moreover, the direct route, i.e., perturbing only a single particle within a
suspension and measuring its time-dependent mean velocity, is experimentally
challenging and, as we find in our simulations, also statistically more
demanding.
Despite the difficulties it is still interesting to obtain this response
function since it immediately yields the nonequilibrium mobility. We now
assume that the tagged particle undergoes overdamped motion which is certainly
the relevant limit for experiments. The Langevin equation for the tagged
particle reads
\begin{equation}
\label{eq:lang:od}
\dot\x - \shr y\vec e_x = \vec F^{(1)} + \vec f + \boldsymbol\xi.
\end{equation}
The replacement of the noise in Eq.~\eqref{eq:R:nois} by
Eq.~\eqref{eq:lang:od} is permissible since the Jacobian arising due to the
change of variables is independent of $\vec f$. We then obtain an
experimentally accessible expression for the response function
\begin{equation}
\label{eq:ex}
R_{ij}(t) = \frac{1}{2}\left[ C_{ij}(t)-\mean{v_i(t)F^{(1)}_j(0)}_0 \right]
\end{equation}
for components $i,j=y,z$ perpendicular to the shear flow. Let us assume that
we record the particle position $\x_k$ at times $t_k\equiv k\tau$ with time
resolution $\tau$, e.g., through video microscopy. The velocity is then
approximated through the finite difference $\vec v_k=(\x_k-\x_{k-1})/\tau$. In
principle, the force $\vec F^{(1)}$ on the tagged particle can be calculated
from the knowledge of the pair potential and the positions of all neighboring
particles.
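Combining the finite-difference velocities with Eq.~\eqref{eq:ex} gives a directly computable estimator; a sketch (the array shapes and the alignment of force samples with velocity samples are our own choices) is:

```python
import numpy as np

def response_experimental(x, F, tau, max_lag):
    """Sketch of Eq. (11): estimate R_ij(t) from recorded positions x
    (shape (T, 3), sampled every tau) and interaction forces F on the
    tagged particle (shape (T, 3)), all in the unperturbed steady state."""
    v = (x[1:] - x[:-1]) / tau        # finite-difference velocities v_k
    F = F[1:]                         # align force samples with the velocities
    T = v.shape[0]
    R = np.empty((max_lag, 3, 3))
    for lag in range(max_lag):
        C = np.einsum('ti,tj->ij', v[lag:], v[:T - lag]) / (T - lag)
        vF = np.einsum('ti,tj->ij', v[lag:], F[:T - lag]) / (T - lag)
        R[lag] = 0.5 * (C - vF)
    return R                          # only i, j = y, z are to be trusted
```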
\section{Conclusions}
We have studied a hard-core Yukawa colloidal system at different densities
driven into a nonequilibrium steady state through shear flow. In particular,
we investigate diffusion and mobility of a single tagged particle for four
volume fractions $\phi$ and intermediate strain rates $\shr\leqslant1$. The
self-diffusion coefficient is calculated through the Green-Kubo relation from
the single particle's velocity autocorrelation function. The mobility is
obtained from the particle's response function through integration. For
systems governed by stochastic dynamics, this response function can be
obtained efficiently from the correlation function Eq.~(\ref{eq:R:nois})
measured in the unperturbed steady state. While for low densities we can
clearly distinguish quantities measured parallel and perpendicular to the shear
flow, this difference vanishes for high densities.
Surprisingly, the diagonal components of the response (i.e., the response is
measured in the direction of the applied force) and correlation matrix can be
matched over a large time range. The resulting proportionality factor can be
interpreted as an effective temperature, effectively restoring the Einstein
relation connecting diffusion and mobility. Moreover, this proportionality
factor is well approximated by a quadratic expansion in the strain rate. It
will be important to study how general such a simple effective temperature is
for driven interacting colloidal suspensions and whether it extends to other
observables. We believe that the methodology presented here will lead to new
insights in the numerical study of dense colloidal suspensions, e.g., for
microscopic stress fluctuations~\cite{spec09,zaus09}. Finally, the influence
of hydrodynamic interactions on our results remains to be investigated.
\acknowledgments
TS gratefully acknowledges financial support by the Alexander-von-Humboldt
foundation and by the Director, Office of Science, Office of Basic Energy
Sciences, Materials Sciences and Engineering Division and Chemical Sciences,
Geosciences, and Biosciences Division of the U.S. Department of Energy under
Contract No.~DE-AC02-05CH11231. Financial support by the DFG through SE 1119/3
is also acknowledged.
\section{Appendix: Simulation details}
The simulated systems consist of $N=1728$ particles in a cubic simulation
box. Since we are interested in the bulk behavior of the suspension we employ
periodic boundary conditions and account for the shear flow through
Lees-Edwards sliding bricks. The equations of motion are integrated by a
stochastic version of the velocity Verlet algorithm~\cite{frenkelsmit}, where
the velocity appearing in the force term at the right hand side of
Eq.~\eqref{eq:lang} is taken from the mid-step velocity. The time step is set
to $\Delta t=0.0005\ll(\kappa\sqrt{\mean{v_i^2}})^{-1}\sim 0.08\dots0.2$. We
equilibrate the suspension and then slowly increase the strain rate to the
final value. Correlation functions have been obtained by averaging over 400
particle trajectories with 300,000 time steps each. These trajectories have
been determined in two independent runs from randomly chosen particles.
To implement the hard-core repulsion and prevent particles from overlapping,
the following simple algorithm is employed (see also
Refs.~\cite{stra99,foss00} and references therein). After every particle has
been moved, but before new forces are calculated, we store all overlapping
particle pairs and remove these overlaps as follows. For each pair both of the
particles are moved backwards in time along their respective velocity vector
up to the point where they collided. This time $0 < s < \Delta t$ is
stored. Knowing the positions and velocities at the impact, we compute the
connection vector $\vec e$ between both particles. We decompose the velocities
into $\vec{v}_{1,2}^{\parallel}=\vec{e}\vec{e}^T\vec{v}_{1,2}$ and
$\vec{v}_{1,2}^{\perp}=(\mathbf 1-\vec{e}\vec{e}^T)\vec{v}_{1,2}$. Only the parts
parallel to $\vec{e}$ can change. Using the usual elastic collision rule
preserving momentum and kinetic energy of the particles we obtain the
after-collision velocities
$\vec{v}_1'=\vec{v}_1^{\perp}+\vec{v}_2^{\parallel}$ and
$\vec{v}_2'=\vec{v}_2^{\perp}+\vec{v}_1^{\parallel}$. From the positions of
their collision the particles are then propagated forward with time step $s$
along the new velocity vectors. This procedure is repeated as long as
overlapping pairs exist.
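The overlap-removal step described above can be sketched for a single pair of unit-diameter, equal-mass spheres as follows (an illustrative re-implementation, not the production code):

```python
import numpy as np

def resolve_pair(x1, v1, x2, v2, dt):
    """Back-track an overlapping pair to contact (|x1 - x2| = 1), apply the
    elastic collision rule (exchange of the velocity components along the
    line of centres), and propagate forward over the back-tracked time."""
    dx, dv = x1 - x2, v1 - v2
    # solve |dx - s*dv| = 1 for the back-track time s (positive root)
    a, b, c = dv @ dv, -2.0 * (dx @ dv), dx @ dx - 1.0
    s = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    assert 0.0 < s < dt, "contact must lie within the last time step"
    x1c, x2c = x1 - s * v1, x2 - s * v2
    e = (x1c - x2c) / np.linalg.norm(x1c - x2c)   # unit connection vector
    v1_par, v2_par = (v1 @ e) * e, (v2 @ e) * e   # parallel components
    v1n = v1 - v1_par + v2_par                    # elastic exchange conserves
    v2n = v2 - v2_par + v1_par                    # momentum and kinetic energy
    return x1c + s * v1n, v1n, x2c + s * v2n, v2n, s
```

For a symmetric head-on collision the two velocities are simply exchanged and the pair separates again, as expected.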
\section{Introduction}
The new generation of large-area survey programs has contributed invaluable data to the discovery of
new supernovae (SNe). In order to detect such transients, two different strategies are typically applied.
The first is the pointed-survey approach, in which a fixed catalogue of galaxies is observed independently with
a cadence of a few days. In the second method, a specific area of the sky is surveyed very frequently
and transients are identified using the image subtraction technique. Some important SNe search programs
based on these methods are: the Lick Observatory Supernova Search \citep[LOSS;][]{2000AIPC..522..103L},
the Panoramic Survey Telescope \& Rapid Response System \citep[Pan-STARRS;][]{2002SPIE.4836..154K},
the Dark Energy Survey \citep[DES;][]{2005astro.ph.10346T},
the Canada-France-Hawaii Telescope\,--\,Legacy Survey \citep[CFHT\,--\,LS;][]{2006A&A...447...31A},
the Carnegie Supernova Project \citep[CSP;][]{2006PASP..118....2H},
the Equation of State: SupErNovae trace Cosmic Expansion \citep[ESSENCE;][]{2007ApJ...666..674M,2007ApJ...666..694W},
the Sloan Digital Sky Survey \citep[SDSS;][]{2008AJ....135..338F,2008AJ....135..348S},
the Catalina Real-Time Transient Survey \citep[CRTS;][]{2009ApJ...696..870D},
the Palomar Transient Factory \citep[PTF;][]{2009PASP..121.1395L},
the All-Sky Automated Survey for SuperNovae \citep[ASAS-SN;][]{2014ApJ...788...48S} and the upcoming
Zwicky Transient Facility \citep[ZTF;][]{2014htu..conf...27B}.
It is notable that, despite their great contribution to the supernova (SN) search, such projects are
observationally expensive, requiring many hours of valuable telescope time to complete, and are also
limited in depth and cadence.
Furthermore, the majority of them do not observe the same strip of
sky on a regular basis every night (however, see the ASAS-SN survey, \url{http://www.astronomy.ohio-state.edu/asassn/index.shtml}).
The unconventional Liquid Mirror Telescopes \citep[LMTs,][]{1985PASP...97..454B,1992ApJ...393..829B,
1993PASP..105..501H} may provide a unique way to overcome some of these issues.
For SNe studies, LMT observations are useful over the generic facilities in several
aspects:
\begin{itemize}
\item Unbiased imaging:
Most nearby SNe are discovered by repeated imaging of catalogued galaxies \citep[][]{2001ASPC..246..121F},
which introduces a possible bias towards metal-rich galaxies.
Although ongoing programs such as the ASAS-SN survey have improved the situation, and much is expected
from the upcoming ZTF facility, an LMT will image the same strip of sky passing over it without any
selection bias.
\item Inexpensive technology:
The cost of constructing a moderate-aperture (4-m diameter) LMT is roughly 1/50 of that of a conventional
telescope of the same class \citep{Borra_1a,2003A&A...404...47B}.
\item Continuous data flow:
There will be no loss of precious observing time because an LMT will observe continuously during the night,
except in case of bad weather or technical problems, and will thus produce a large amount of scientific data.
\item Easy image pre-processing:
Unlike conventional imaging, image pre-processing is comparatively easy here. For example, the flat-field
correction is performed by dividing each column by a one-dimensional flat field, which can be derived
directly from the scientific data themselves.
\item Deeper imaging:
Since the same sky strip will be captured by the telescope each night, data from consecutive nights
can be co-added to produce deeper images.
\end{itemize}
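As an illustration of the column-wise flat-fielding mentioned above, the following minimal sketch shows how a one-dimensional flat can be derived from the science frames themselves and divided out. The data here are synthetic, not actual ILMT frames:

```python
import numpy as np

# Toy model of a TDI frame: every detector column sees the same strip of
# sky, so column-to-column sensitivity variations can be estimated with a
# median along the drift direction and divided out. Synthetic data only.
rng = np.random.default_rng(0)
flat_1d = 1.0 + 0.05 * rng.standard_normal(1024)   # column sensitivities
sky = 100.0 * np.ones((512, 1024))                 # idealized sky strip
frame = sky * flat_1d                              # raw TDI frame

flat_est = np.median(frame, axis=0)                # 1-D flat from the data
flat_est /= flat_est.mean()                        # normalize
reduced = frame / flat_est                         # flat-fielded frame
```

On real frames a sigma-clipped statistic would be used instead of a plain median, to reject stars and cosmic rays before estimating the flat.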
\begin{figure*}
\centering
\includegraphics[scale=0.135]{ilmt_dome2.eps}
\includegraphics[scale=0.11]{ilmt_st1.eps}
\caption{Left panel: Panoramic view of the ILMT site. The 1.3-m \textit{DFOT} and the 4-m ILMT are
in the middle and right side, respectively in this image. Right panel: Major components of
the ILMT. Here, the container is gray, the air bearing is red, the three-point mount (white) sits
below the air bearing and the vertical steel frames (white) hold the corrector and the CCD camera at
the top. The tentative size and other parameters of the telescope are listed in Table~\ref{ilmt_lim}.
Note the nice view of the Himalayan chain in the background of the left photograph.}
\label{ILMT_pan}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.27]{ILMT_present_status.eps}
\caption{Fish eye view of the present status of the ILMT.
To protect them from dust, the air bearing and the three-point mount are covered with a wooden box
(blue colour). Four safety pillars (yellow colour) are also visible near the parabolic container
to prevent any mercury spill.}
\label{ILMT_pan2}
\end{figure}
LMTs of different sizes have already been built and operated by several groups \citep[e.g.][]
{1989ApJ...346L..41B,1994ApJ...436L.201H} and the 6-m diameter Large Zenith Telescope
\citep{2007PASP..119..444H} was the largest one in its class. The scientific contributions
of these facilities were mainly limited by the lack of an appropriate TDI corrector
and/or a large CCD camera and/or their location (i.e. poor sky conditions). Therefore, a full-time
LMT entirely dedicated to astronomical observations was proposed, and the idea of building
the International Liquid Mirror Telescope (ILMT\footnote{\url{http://www.ilmt.ulg.ac.be}})
was born. SNe related study is one of the major scientific interests behind the ILMT project
\citep{2006SPIE.6267E...4S}.
The link between the star formation history and the cosmic supernova rate has remained an open question.
It is generally believed that Type Ia SNe originate from intermediate-mass stars (3\,--\,8 M$_{\odot}$)
belonging to both young and old stellar populations \citep*[see][for recent reviews]{2012NewAR..56..122W,
2014ARA&A..52..107M}. Such explosions are supposed to be the consequences of
thermonuclear disruptions when a carbon-oxygen white dwarf reaches the critical Chandrasekhar mass
limit \citep[$\simeq$1.4 M$_{\odot}$,][]{1931ApJ....74...81C} by accreting matter from its evolving
binary companion \citep*{1960ApJ...132..565H,1973ApJ...186.1007W,1986ApJ...301..601W,2000ARA&A..38..191H}.
The observational features of these events show homogeneity and, due to their high luminosity near
maximum light, they are detectable at high redshift \citep{2005ASSL..332...97F}. Type Ia SNe are
considered to be reliable standard candles \citep{1992ARA&A..30..359B,1993ApJ...413L.105P} and play
an important role to constrain the geometry of the Universe \citep[e.g.][]{1998AJ....116.1009R,
1999ApJ...517..565P,2007ApJ...659...98R}.
Contrary to the progenitors of Type Ia SNe, massive stars ($M \ge\, $8 M$_{\odot}$) follow a different
evolutionary path \citep[e.g.][]{1991ComAp..15..221B,2003ApJ...591..288H,Smartt2009,2012ARA&A..50..107L}.
At the final stage, when their nuclear fuel is exhausted, the gravitational collapse of the stellar core
triggers a catastrophic death that appears in the form of a core-collapse supernova (CCSN).
Their spectro-photometric features have led to their classification into several types \citep[c.f. IIP, IIL,
IIn, IIb, Ib, Ic and Ic-BL; see][for a review]{1941PASP...53..224M,1997ARA&A..35..309F}.
CCSNe show diverse observational properties.
For example, their absolute magnitude distribution peaks roughly 1.5 mag fainter than that of SNe Ia and covers a
range of more than 5 mag \citep{2002AJ....123..745R,2006AJ....131.2233R}. Similarly, a wide dispersion is
seen in the ejecta mass, kinetic energy, radiated energy and the amount of synthesized radioactive materials
created in the explosion. This indicates that different physical mechanisms possibly play an important role
during the evolutionary phases of the progenitors, such as stellar winds \citep*{2008A&ARv..16..209P}, mass
transfer in a binary system \citep*{1985ApJ...294L..17W,1992ApJ...391..246P,2010ApJ...725..940Y,
2012Sci...337..444S} and mass loss \citep{2006ApJ...645L..45S,2014ARA&A..52..487S}, etc.
Along with astrophysical and cosmological implications, SNe are also primarily responsible for the
chemical enrichment of galaxies through their heavy elements and dust \citep[e.g.][]{1986A&A...154..279M,
2001MNRAS.325..726T,2007MNRAS.378..973B}. Furthermore, the expanding shock waves produced during the
explosion sweep up, compress and heat the surrounding interstellar medium, which can finally trigger the
star formation process \citep[e.g.][and references therein]{1977ApJ...217..473H,1998ASPC..148..150E}.
Unbiased and large sample studies of SNe may provide answers to some of the underlying questions
related to the star formation history, progenitor evolution scenario and parameters causing the
diversity in their observed properties. In this context the ILMT deep imaging survey along
with complementary observations from other existing observational facilities will be advantageous.
\section{The ILMT project}\label{project}
The ILMT project is a scientific collaboration between four countries: Belgium, India, Canada
and Poland. The main participating institutions are: the Li\`ege Institute of Astrophysics and
Geophysics (University of Li\`ege, Belgium), the Aryabhatta Research Institute of Observational
Sciences (ARIES, India), the Royal Observatory of Belgium, several Canadian universities (British
Columbia, Laval, Montr\'eal, Toronto, Victoria and York) and the Observatory of Pozna\'n (Poland).
The AMOS (Advanced Mechanical and Optical Systems) company in Belgium has participated in the
manufacturing of the telescope.
The ILMT is being installed at the Devasthal (meaning `Abode of God') mountain peak, in the
central Himalayan range in India. This is a newly developed observatory under ARIES. A panoramic
view of the site is illustrated in Fig.~\ref{ILMT_pan}. The Devasthal observatory is situated at
an altitude of $\sim$2450 m, with longitude $79^{\circ}$ $41^{\prime}$ $04^{\prime\prime}$ East
and latitude $+29^{\circ}$ $21^{\prime}$ $40^{\prime\prime}$ \citep[c.f.][]{Sagar2011Csi...101...8.25,
2012ASInC...4..173S}.
It is important to highlight that in view of the site advantages, apart from the upcoming
4-m ILMT, the other existing astronomical observing facilities at Devasthal are the 1.3-m
{\it DFOT}\footnote{Devasthal Fast Optical Telescope} and 3.6-m {\it DOT}\footnote{Devasthal
Optical Telescope}. The major scientific prospects of these telescopes can be found in
\citet[and references therein]{2016arXiv160706455S}.
A sketch of the ILMT structure is shown in Fig.~\ref{ILMT_pan} (right panel). It consists of three
major parts, namely the air bearing, the container and the vertical structure which will hold the
corrector and CCD camera. The primary mirror is a 4-m diameter epoxy-carbon-fiber structure that
has a smooth parabolic upper surface produced by spin casting \citep{Magette2010,Kumar2014},
see Fig.~\ref{ILMT_pan2}.
The dish will support a thin layer (approximately 2\,--\,3 mm thick) of liquid mercury that will produce
the reflecting surface. When the mirror is rotated uniformly about its vertical axis, the combination
of gravity and centrifugal force will produce an equilibrium surface that is parabolic to high accuracy.
A detailed explanation can be found in \citet{1982JRASC..76..245B}.
Although mercury vapour is harmful, it is greatly suppressed by a thin transparent layer of oxide
that forms soon after emplacement. Moreover, a thin film of mylar\footnote{It is a scientific grade
polyester film (thickness $<$\,12 $\mu$m). Optical quality tests of such films for the LMTs are
discussed in \citet{1992PASP..104.1239B,2007PASP..119..456H}.}, co-rotating with the mirror, will
contain any remaining vapour. This film is required to prevent vortices, produced in the boundary
layer above the mirror due to its rotation, from disturbing the liquid surface.
The ILMT is a zenith-pointing rotating telescope. It cannot track stellar objects like conventional
glass mirror telescopes. Therefore, images are secured by electronically stepping the CCD charges.
The charge transfer rate is matched to the rate at which targets drift across the detector (i.e. the
sidereal rate at the observatory). This specific technique is known as Time Delayed Integration (TDI)
or drift-scanning \citep[see][and references therein]{1992MNRAS.258..543G}. Advantages of the TDI mode of observations
can be found in \citet[][and references therein]{Kumar2014}. Because the primary mirror is parabolic, a
glass corrector will be used to obtain a good image quality over a field of view of 27 arcmin in diameter
including TDI correction \citep[see][]{1998PASP..110.1081H,2002A&A...388..712V}.
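As a rough illustration of the drift-scanning geometry, the TDI line-transfer period follows from the pixel scale (Table~\ref{ilmt_lim}) and the sidereal drift rate at the site latitude. This is a toy calculation; the exact ILMT value also depends on the corrector optics:

```python
import math

SIDEREAL_RATE = 15.041  # arcsec of RA per second of time, on the equator

def tdi_line_period(pixel_scale=0.4, dec_deg=29.36):
    """Seconds between CCD row shifts for a zenith telescope.
    A star at declination dec drifts at the sidereal rate scaled by
    cos(dec); the pixel scale (arcsec/pixel) is from Table 1."""
    drift = SIDEREAL_RATE * math.cos(math.radians(dec_deg))  # arcsec/s
    return pixel_scale / drift
```

With these numbers the row-shift period comes out to roughly 30 ms per CCD line.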
A CCD detector ($4096\times4096$ pixels) manufactured by `Spectral Instruments' will be positioned at
the prime focus, located about 8 m above the mirror. The ILMT observations will be mainly performed with
the $i'$ filter (although additional $g'$ and $r'$ filters are available). This will be advantageous
during a maximum number of nights because the spectral range covered by the $i'$ filter is less sensitive
to the bright phases of the Moon. The ILMT project will initially run for 5 years, which will allow us to
collect a large sample of stellar objects, including transients.
More detailed information about its instruments and science cases can be found
elsewhere \citep[e.g.][and references therein]{2006SPIE.6267E...4S,Jean_bina,Magette2010,
2012IAUS..285..394P,Finet2013,Kumar2014,2015ASInC..12..149K,Kumar_bina}.
During the last few years, several experiments have been performed to sort out technical difficulties
related to the ILMT. In continuation of such activities we have also performed TDI mode observations
from the Devasthal observatory using the 1.3m DFOT. These images have been used to test the ILMT data
reduction pipeline and preliminary results are presented in \citet{Pradhan_bina}. The installation
process of the telescope began in 2017 March and is now in its final stage.
The metallic structure (to hold the CCD camera and corrector), the safety pillars and the air bearing
have already been erected. To ensure optimal and safe operation of the air bearing, two parallel air
supply systems have been installed. In addition, several components/instruments such as pneumatic valves,
air dryers, air filters and sensors (pressure, temperature, humidity and dew-point) have also been installed.
First light of the ILMT is expected before the beginning of the 2018 {\it Monsoon} season.
For the present status of the ILMT project, see \citet{Jean_bina}.
A fish eye view of the installation is shown in Fig.~\ref{ILMT_pan2}.
\section{ILMT limiting magnitudes and accessible sky area}\label{Estimation1}
The scientific performance of an instrument depends on the maximization of its throughput.
Considering various parameters (e.g. transmission coefficients from the mirrors, filters,
CCD glass, sky, extinction and quantum efficiency of the CCD chip), the expected counts ($N_{\rm e}$)
from a star of certain magnitude (m) can be estimated using the following formula
\citep{1989ecaa.book.....M,1991JApA...12..319M}.
\begin{equation}
N_{\rm e} = 3.95 \times 10^{11} ~ D^{2} ~ \lambda_{\rm n} ~ \Delta\lambda_{\rm n} ~ F_{0}^{\rm n} ~ 10^{-0.4{\rm m}}
A_{\rm F} ~ \eta
\end{equation}
where D is the diameter of the telescope, $\lambda_{\rm n}$ and $\Delta\lambda_{\rm n}$ are
the effective wavelength and bandwidth of the filters, $F_{0}^{\rm n}$ is the flux density
(per wavelength) from a star of magnitude 0 at the wavelength $\lambda_{\rm n}$ above
the Earth atmosphere, $A_{\rm F}$ is the fractional reflecting area of the mirror surface and
$\eta$ is the efficiency of the system (mirror + filter + CCD).
Assuming that each optical photon is capable of producing a corresponding photo-electron,
the required full well capacity of the CCD pixels can be estimated by assuming a certain
integration time for a star of known brightness.
Furthermore, if the sky brightness is known for a given CCD, we can also calculate the sky
counts and the underlying noise:
\begin{equation}
N = \sqrt{(N_{\rm e} ~ e_{\rm t} + S_{\rm e} ~ e_{\rm t} ~ n_{\rm p} + D_{\rm c} ~ e_{\rm t} ~ n_{\rm p}
+ R_{\rm n}^2 ~ n_{\rm p})}
\end{equation}
Here,
$N_{\rm e}$ indicates the number of electrons (per sec), $e_{\rm t}$ the exposure time (sec),
$S_{\rm e}$ the sky brightness (in electrons), $n_{\rm p}$ the number of pixels in the image of the
observed star, $D_{\rm c}$ the dark current (e$^-$/pix/sec) and $R_{\rm n}$ the read out noise.
The signal-to-noise ratio (SNR) can also be calculated for stars with different brightness
\citep{1989ecaa.book.....M}.
\begin{equation}
{\rm SNR} = \frac{N_{\rm e} \times e_{\rm t}}{N}.
\end{equation}
The CCD readout noise is Gaussian, while the star counts and dark counts are Poisson in nature.
A circular aperture is used to measure the star light. For the present calculations,
the FWHM is taken as 1.5 arcsec, nearly equal to the median seeing at Devasthal.
The optimal aperture radius is taken as 1 $\times$ FWHM \citep[see][]{1989PASP..101..616H,Howell}.
We can also estimate the corresponding error in the magnitude estimation by knowing
the value of the SNR \citep{2011A&A...531A.151D}
\begin{equation}
\sigma_{\rm mag} = 2.5 \times \log_{10}\left[1+1/{\rm SNR}\right].
\end{equation}
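The noise, SNR and magnitude-error expressions above can be sketched as follows. The default dark current and read noise are the Table~\ref{ilmt_lim} values, but the worked numbers below are illustrative inputs, not ILMT predictions:

```python
import math

def snr_and_error(Ne, t, sky_e, n_pix, dark=0.00083, ron=5.0):
    """Ne: source counts (e-/s); t: exposure (s); sky_e: sky (e-/pix/s);
    n_pix: pixels inside the aperture; dark: e-/pix/s; ron: read noise (e-).
    Dark current and read noise defaults are taken from Table 1."""
    noise = math.sqrt(Ne * t + sky_e * t * n_pix
                      + dark * t * n_pix + ron**2 * n_pix)
    snr = Ne * t / noise                              # signal-to-noise ratio
    sigma_mag = 2.5 * math.log10(1.0 + 1.0 / snr)     # magnitude error
    return snr, sigma_mag
```

For example, a source giving 100 e$^-$/s in a 102 s scan, measured over a 44-pixel aperture (roughly a 1.5 arcsec radius at 0.4 arcsec/pixel) with 10 e$^-$/pix/s of sky, yields SNR $\approx$ 43 and $\sigma_{\rm mag} \approx 0.025$.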
\begin{table}
\centering
\small
\caption{Different parameters used to calculate the ILMT limiting magnitude.
See also \citet{Finet2013}. \label{ilmt_lim}}
\begin{tabular}{ll}
\hline
Diameter & 4.0-m \\
Fraction of reflecting area & 0.95 \\
Reflectivity & 0.77 \\
Mylar transmission & 0.80 \\
Corrector transmission & 0.85 \\
FWHM & 1.5\arcsec \\
CCD pixel size & 0.4\arcsec/pixel \\
CCD dark noise & 0.00083 e$^{-}$/pixel/sec \\
CCD readout noise & 5.0 e$^{-}$ \\
CCD gain & 4.0 e$^{-}$/ADU \\
Wavelength ($g', r', i'$) & 4750, 6250, 7630 \AA \\
Wavelength FWHM ($g', r', i'$) & 1450, 1500, 1500 \AA \\
Extinction ($\sim$$g'$,$\sim$$r'$,$\sim$$i'$) & 0.21, 0.13, 0.08 mag \\
Sky mag ($\sim$$g'$,$\sim$$r'$,$\sim$$i'$) & 21.3, 20.5, 18.9 mag/arcsec$^{2}$\\
CCD quantum efficiency ($g', r', i'$)& 0.70, 0.91, 0.91 \\
Filter transmission ($g', r', i'$) & 0.92, 0.95, 0.95 \\
System efficiency, $\eta$ ($g', r', i'$) & 0.55, 0.74, 0.74 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig_lim.eps}
\caption{
The ILMT limiting magnitudes for the $g'$, $r'$ and $i'$ filters are shown with different symbols.
The X-axis represents the limiting magnitude and the Y-axis represents the SNR and the corresponding
error in magnitude. The filled and open symbols represent the results for the exposure of a single
scan (102 sec) and three scans (306 sec), respectively (see Section~\ref{Estimation1} for details).
The dotted horizontal line is indicative for a SNR of 10 and an uncertainty of 0.1 mag.
Approximately 0.5 mag is gained once we stack images taken over three nights in any single filter.}
\label{ilmtlimit}
\end{figure}
We have estimated the limiting magnitudes of the ILMT for different filters ($g'$, $r'$ and $i'$)
using the above equations. The various parameters used for these estimations are listed in
Table~\ref{ilmt_lim}. The limiting magnitudes estimated using the above methods for different
filters are plotted in Fig.~\ref{ilmtlimit} with different symbols. It is obvious from this figure
that with an exposure time of 102 sec, the limiting magnitudes are $\sim$22.8, $\sim$22.3 and
$\sim$21.4 mag for the $g'$, $r'$ and $i'$ filters, respectively. Furthermore, since the same
strip of sky will pass over the telescope each night (except for a 4-min shift in
right ascension), images from successive nights can be co-added. This will yield a
longer effective integration time. Therefore, we have also estimated the limiting magnitudes
for a 306 sec (3-night) exposure time, using the same parameters. The estimated magnitude
limits improve to $\sim$23.4, $\sim$22.8 and $\sim$22.0 mag for the $g'$, $r'$ and $i'$ filters,
respectively.
The co-addition technique is not limited to 3 nights; it can be applied to imaging data from many
more nights as well. Consequently, we may reach very faint magnitude levels \citep[see also][]{Borra_1a,
Borra_1b, 2003A&A...404...47B}.
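In the background-limited regime the depth gain from stacking scales with the square root of the number of co-added scans, which is consistent with the $\sim$0.5\,--\,0.6 mag improvement quoted above for three nights:

```python
import math

def depth_gain(n_nights):
    """Limiting-magnitude improvement from co-adding n equal-length scans,
    assuming background-limited noise so that SNR grows as sqrt(n)."""
    return 2.5 * math.log10(math.sqrt(n_nights))
```

This gives depth_gain(3) $\approx$ 0.6 mag and depth_gain(6) $\approx$ 0.97 mag, a sketch that ignores read-noise and systematic floors.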
Pointing towards the zenith, the ILMT field of view (FOV) is centred on the declination equal to the
Devasthal observatory latitude, i.e. 29.36$^\circ$ N. The ILMT FOV is approximately 27\arcmin\ by 27\arcmin.
The total sky area accessible to the ILMT is therefore 141.2 square degrees, of which only a one-third
strip ($\sim$47 square degrees) can be monitored each night.
At high galactic latitudes ($\mid$b$\mid$ $>$ 30$^\circ$) the detection of fainter and more distant
objects (e.g. SNe, galaxies, quasars ...) will be possible \citep[see][]{2006SPIE.6267E...4S,
Magette2010,Finet2013,Kumar2014}.
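The quoted survey area follows directly from the geometry of a 27 arcmin wide declination strip at the site latitude; the sketch below reproduces the 141.2 square degree figure:

```python
import math

def strip_area(width_arcmin=27.0, lat_deg=29.36):
    """Area (deg^2) of a full declination strip of the given width,
    centred on the site latitude: 360 deg * width * cos(latitude)."""
    return 360.0 * (width_arcmin / 60.0) * math.cos(math.radians(lat_deg))
```

strip_area() returns $\approx$141.2, and one third of it ($\sim$47 deg$^2$) is the portion scanned during a single night.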
\begin{figure}
\centering
\includegraphics[scale=0.85]{SNRates_cc_g-band_rv1.eps}
\includegraphics[scale=0.85]{SNRates_cc_r-band_rv1.eps}
\includegraphics[scale=0.85]{SNRates_cc_i-band_rv1.eps}
\caption{Detection rate of CCSN as a function of redshift.
The dashed (green colour) and dotted (brown colour) curves, respectively indicate the cosmic CCSN rate
without and with dust extinction consideration. Possible number of CCSNe to be detected with the ILMT
in different bands ($g', r'$ and $i'$) and for different magnitude limits (c.f. stacking of
consecutive night images, see Section~\ref{Estimation1}) are also shown.}
\label{fig_snr_cc}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.85]{SNRates_1a_g-band_rv1.eps}
\includegraphics[scale=0.85]{SNRates_1a_r-band_rv1.eps}
\includegraphics[scale=0.85]{SNRates_1a_i-band_rv1.eps}
\caption{Detection rate of Type Ia SN as a function of redshift. The curves are similar to
those in Fig.~\ref{fig_snr_cc} but for Type Ia SNe.}
\label{fig_snr_1a}
\end{figure}
\section{Supernova rate and ILMT}\label{sn_rate}
The astronomical community is deeply interested in understanding the nature of the different kinds of
SNe and of their evolution with redshift. The CCSN rate is expected to track the star-formation rate,
increasing with redshift as (1\,+\,$z$)$^{\beta}$ (for $z$$\,\approx$\,0.5) where $\beta$ is in the range
2.5 to 3.9 \citep[see][]{2004ApJ...615..209H,2005ApJ...632..169L, 2005ApJ...619L..47S,2006ApJ...651..142H,
2010ApJ...718.1171R,2012A&A...539A..31C}.
The Type Ia SN rate rises more slowly with redshift, as $\sim$(1 + $z$)$^{\beta}$ \citep[see][and references
therein]{2008ApJ...683L..25P,2012AJ....144...59P}, where $\beta = 2.11 \pm 0.28$ up to $z \sim 1$.
In order to search for a possible correlation between the star formation and SN rates in the local
universe, several studies have been performed \citep[see, e.g.][and references therein]{2004ApJ...613..189D,
2006AJ....132.1126N,2008ApJ...682..262D,2011MNRAS.417..916G,2014ApJ...792..135T,2015A&A...584A..62C,
2017A&A...598A..50B}.
In the framework of supernova studies with liquid mirror telescopes, \citet{Borra_1a,Borra_1b,
2003A&A...404...47B} has described the cosmological implications of SNe and estimated the number of
SNe for a strip of sky using the expected rate of SNe given in \citet{1996ApJ...473..356P}. In the
following, we perform a detailed calculation of the expected number of SN events that can be
detected with the ILMT. We calculate the detection rates for core-collapse and Type Ia SNe for all
the three proposed bands of the ILMT (see Figs.~\ref{fig_snr_cc} and \ref{fig_snr_1a}). For the
calculations we follow the prescription given in \citet{2009JCAP...01..047L}. In the following we
present a brief description of the steps and the quantities involved in the calculations.
The supernova detection rate per unit redshift per unit solid angle in a filter band $x$
can be expressed as follows,
\begin{equation}
\frac{dN_{SN,obs,x}}{dt_{obs}dzd\Omega} = R_{\rm SN}(z) \ f_{detect}(z;m_{lim, x}^{\rm SN}) \
\frac{r(z)^2}{1+z} \frac{dr}{dz}
\end{equation}
where $r(z)$ is the co-moving distance, $R_{SN}(z)$ is the cosmic SN rate which can be
written as
\begin{equation}
R_{\rm SN}(z) = \frac{X_{\rm SN}}{\left<m\right>_{\rm SN}} \ \dot\rho_\star(z)
\end{equation}
where $X_{SN}$ is the fraction of stellar mass which results in supernovae
(=$\int_{SN} M \xi(M) dM \bigg/ \int M \ \xi(M) dM)$, $\left<m\right>_{\rm SN}$ is the average supernova
progenitor mass (=$\int_{SN} M \ \xi(M) dM \bigg/ \int \xi(M) dM)$ and $\dot\rho_\star(z)$ is the star
formation rate. $\xi(M)$ represents the stellar initial mass function (IMF).
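To make the IMF-weighted quantities concrete, the sketch below evaluates $X_{\rm SN}$ and $\left<m\right>_{\rm SN}$ for a simple Salpeter power law, $\xi(M) \propto M^{-2.35}$, between assumed limits of 0.1 and 125 M$_{\odot}$; the actual calculation follows the IMF choices of \citet{2009JCAP...01..047L}, so these numbers are only indicative:

```python
def power_int(p, a, b):
    """Integral of M**p dM from a to b (valid for p != -1)."""
    return (b ** (p + 1) - a ** (p + 1)) / (p + 1)

def imf_fractions(m_lo=8.0, m_hi=50.0, m_min=0.1, m_max=125.0, alpha=2.35):
    """X_SN: fraction of formed stellar mass ending in CCSNe;
    mean_m: mean progenitor mass, with the number integral taken over
    the SN progenitor range. A Salpeter power-law IMF is assumed."""
    mass_sn = power_int(1.0 - alpha, m_lo, m_hi)    # int_SN M xi(M) dM
    mass_all = power_int(1.0 - alpha, m_min, m_max)  # int M xi(M) dM
    num_sn = power_int(-alpha, m_lo, m_hi)           # int_SN xi(M) dM
    return mass_sn / mass_all, mass_sn / num_sn
```

Under these assumptions the 8\,--\,50 M$_{\odot}$ CCSN range gives $X_{\rm SN} \approx 0.11$ and $\left<m\right>_{\rm SN} \approx 16$ M$_{\odot}$.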
The quantity $f_{detect}(z)$ is the fraction of SNe which can be detected by the instrument and
it depends on the characteristics of the instrument and the SN type ($i$),
\begin{equation}
f_{detect}(z) = f_{dust}(z) \times \frac{1}{N} \sum_{i=1}^{N} \int^{m_{lim,x}}_{-\infty} \phi_i(m, z) \ dm
\end{equation}
Here, $i$ denotes the different SN types: in the case of CCSNe we consider all four types,
Ibc (both Ib and Ic), IIL, IIP and IIn (i.e. $N$ = 4), whereas for Type Ia SNe, $N$ = 1.
$\phi_i(m,z)$ is the supernova luminosity function or magnitude distribution function, which is assumed
to be a normal distribution with the mean magnitude $\tilde{m}_i$ and variance $\sigma_i$,
where $\tilde{m}_i = \tilde{m}_i^{abs} + 5 \log[{d_L(z)}/10\, \text{pc}] + K_{ix}(z) + \eta_{ixB}
+ A(z)$; $d_L(z) $ is the luminosity distance, $K_{ix}(z)$ is the $K$-correction, $\eta_{ixB}$ is the color
correction and $A(z)$ is the dust correction \citep[for details see,][]{2009JCAP...01..047L}.
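For a Gaussian luminosity function, the magnitude integral entering $f_{detect}$ reduces to the normal cumulative distribution evaluated at the limiting magnitude. A minimal sketch with illustrative numbers, ignoring the 2.5$\sigma$ truncation applied later:

```python
import math

def detectable_fraction(m_lim, m_mean, sigma):
    """Fraction of a Gaussian magnitude distribution N(m_mean, sigma)
    brighter than the survey limit m_lim (i.e. the Gaussian CDF)."""
    z = (m_lim - m_mean) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))
```

For instance, a population whose apparent mean magnitude equals the survey limit is 50 per cent detectable, while one 4$\sigma$ fainter is essentially lost.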
The $K$-correction and the color correction can be computed as follows,
\begin{eqnarray}
K_{ix}(z) &=& 2.5\log(1+z) \ + \ 2.5\log \frac{\int F_i(\lambda) \ S_x(\lambda) d\lambda}{\int F_i(\lambda/(1+z)) \ S_x(\lambda) d\lambda} \\
\eta_{ixB} &=& -2.5\log \frac{\int F_i(\lambda) \ S_x(\lambda) d\lambda}{\int F_i(\lambda) \ S_B(\lambda) d\lambda} + {\rm zeropoint \ correction}
\end{eqnarray}
here, $F(\lambda)$ is the intrinsic spectral distribution of the supernova, $S_x(\lambda)$ and $S_B(\lambda)$ are
the response functions of the filter $x$ (ILMT filter bands) and $B$ (filter band used to estimate the absolute
magnitude distribution), respectively \citep[for more details see][]{2009JCAP...01..047L}.
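The structure of the $K$-correction can be illustrated numerically for the Type Ia blackbody approximation and an idealized top-hat filter. This midpoint-rule sketch omits the UV-blanketing cut and real filter curves, so it only shows the shape of the calculation:

```python
import math

def planck(lam_ang, T=15000.0):
    """Blackbody F_lambda (arbitrary units) at wavelength lam_ang (Angstrom)."""
    h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # CGS constants
    lam = lam_ang * 1e-8                        # Angstrom -> cm
    return 1.0 / (lam ** 5 * (math.exp(h * c / (lam * k * T)) - 1.0))

def k_correction(z, lam_lo=6880.0, lam_hi=8380.0, n=200):
    """K-correction for a top-hat filter over [lam_lo, lam_hi] Angstrom."""
    step = (lam_hi - lam_lo) / n
    grid = [lam_lo + (i + 0.5) * step for i in range(n)]
    num = sum(planck(lam) for lam in grid)              # observed-frame SED
    den = sum(planck(lam / (1.0 + z)) for lam in grid)  # blueshifted SED
    return 2.5 * math.log10(1.0 + z) + 2.5 * math.log10(num / den)
```

By construction the correction vanishes at $z = 0$; at moderate redshift it is negative here, because the band then samples rest wavelengths closer to the blackbody peak.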
We have considered a flat cosmology with $\Omega_m = 0.31$ and $h = 0.68$, which is consistent with the
recent Planck results \citep[Planck Collaboration XIII;][]{2016A&A...594A..13P}. The absolute magnitude
distribution of the SNe is taken from \citet{2014AJ....147..118R} and has been adjusted for the Hubble
parameter value $h = 0.68$. We also put a conservative cut-off at 2.5$\sigma$ in the absolute magnitude
distribution as there are no data available beyond 2.5$\sigma$ in the sample of \citet{2014AJ....147..118R}.
We take the same IMF, $\dot\rho_\star(z)$ and the dust correction as given by \citet{2009JCAP...01..047L}.
The progenitor mass range for the Type Ia SNe is taken to be 3\,--\,8 M$_{\odot}$ and for CCSNe it is taken
to be 8\,--\,50 M$_{\odot}$ (mass range for all four CCSNe types i.e. Ibc, IIL, IIP and IIn).
Further, for CCSNe the form $F(\lambda)$ is taken as the one listed in \citet{2009JCAP...01..047L},
and for Type Ia SNe we choose $F(\lambda)$ as a 15000 K blackbody spectrum with cut-off
due to UV blanketing at $\lambda < 4000 \ \mathring{A}$ \citep{1999A&A...350..349D,2011ApJ...729...55F}.
For the calculation of magnitude limited detection rate we have used the expected magnitude limits
($m_{lim,x}$) for the ILMT in the different spectral bands (see Section~\ref{Estimation1}).
In Figs.~\ref{fig_snr_cc} and \ref{fig_snr_1a}, the dotted-brown and the dashed-green lines show the
cosmic supernova rate with and without dust correction, respectively. The solid-red lines correspond
to the expected magnitude-limited SN detection rate observed with the ILMT. The three
magnitude limits correspond to integration times of 1, 3 and 6 nights, respectively
(see Section~\ref{Estimation1}).
The cut-off in the absolute magnitude distribution has a very strong effect on the high-redshift
cut-off seen in the magnitude-limited detection rate plots. Changing the cut-off from 2.5$\sigma$
to 3.0$\sigma$ drastically increases the cut-off redshift, though it does not change the total
supernova counts by much and has little effect on the redshift corresponding to
the maximum detection rate, as also pointed out by \citet{2009JCAP...01..047L}.
Setting a cut-off in the absolute magnitude distribution at 2.5$\sigma$ gives a good,
yet conservative, estimate for the detection rate.
The sharp cut-off seen in the case of Type Ia SNe (Fig.~\ref{fig_snr_1a}, $g'$-band) is due to the UV
blanketing effect that suppresses the spectral energy distribution at rest wavelengths below 4000~\AA,
i.e. at redshifts $z \gtrsim \lambda/4000 - 1$ (with $\lambda$ in \AA). This makes the $K$-correction
very large beyond this redshift (see Fig.~\ref{fig_Kcor}; $g'$-band SN~Ia), so that
$\tilde{m}_{\rm \scriptscriptstyle Ia} \gg m_{lim,g'}$.
As a result we get a sharp cut-off at a comparatively lower redshift in the $g'$-band for Type Ia SNe.
Ground-based observing facilities are sometimes affected by local weather, and the ILMT site is
no exception. The wet and humid conditions during the {\it Monsoon} season necessitate the
closure of all observing facilities situated at the site from June to September every year.
Therefore, ILMT observations will also be suspended during that period (i.e. 4 months). Previous
observing experience suggests that, in general, the Devasthal site has $\sim$210 clear nights in a year,
of which $\sim$160 nights are of photometric quality \citep[see][]{2000A&AS..144..349S}.
Taking into account the number of photometric nights at the site and the area covered by the ILMT
each night, we have estimated the total number of SNe that could be detected with this facility.
These numbers are listed in Table~\ref{ILMT_sn_det} (see also Table~\ref{ILMT_sn_det_z}).
Obviously, the above-estimated SN numbers will vary during real observations once night-time
limitations and technical difficulties/maintenance of the instruments are taken into account.
The uncertainties in the above estimates are therefore not discussed further.
\begin{table}
\centering
\small
\caption{SN detection rates with the ILMT.
1$_{\rm N}$, 3$_{\rm N}$ and 6$_{\rm N}$ indicate the number of SNe for the limiting magnitudes of
single, and co-added images of 3 and 6 nights, respectively.
Total number of SNe (columns 6, 7 and 8) are the redshift integrated
events in a year (only 160 photometric nights of the site and an average 8 hours of observing
time each night have been accounted for).} \label{ILMT_sn_det}
\begin{tabular}{cccccccc}
\hline
SN &Filter & \multicolumn{3}{c}{SNe/deg$^{2}$/year} & \multicolumn{3}{c}{Total SNe in a year}\\
Type & & 1$_{\rm N}$ & 3$_{\rm N}$ & 6$_{\rm N}$ & 1$_{\rm N}$ & 3$_{\rm N}$ & 6$_{\rm N}$ \\ \hline
Ia & $g'$ &63 & 89 & 115 & 1299 & 1835 & 2371 \\
& $r'$ &155 & 274 & 426 & 3196 & 5649 & 8783 \\
& $i'$ &28 & 71 & 174 & 577 & 1464 & 3588 \\\\
CC & $g'$ &50 & 97 & 177 & 1031 & 2000 & 3649 \\
& $r'$ &20 & 43 & 87 & 412 & 887 & 1794 \\
& $i'$ &3 & 8 & 19 & 62 & 165 & 392 \\
\hline
\end{tabular}
\end{table}
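As a consistency check on the yearly totals, under the assumptions stated in the table caption (the full 141.2 deg$^2$ strip, 160 photometric nights and 8 hours per night), the totals follow from the per-square-degree rates by a simple duty-cycle scaling:

```python
def yearly_total(rate_per_deg2, area=141.2, nights=160, hours=8.0):
    """Scale a redshift-integrated SN rate (deg^-2 yr^-1) by the full
    accessible strip area and the observed fraction of the year
    (160 nights x 8 h out of 365 x 24 h)."""
    duty = nights * hours / (365.0 * 24.0)
    return rate_per_deg2 * area * duty
```

For example, the Type Ia $g'$-band single-night rate of 63 deg$^{-2}$ yr$^{-1}$ gives $\approx$1300 events per year, matching the tabulated 1299.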
\subsection{Detection of supernova candidates}
The ILMT pointing is fixed towards the best seeing and atmospheric transparency position (i.e. at zenith).
This allows one to obtain images with optimal quality. During each clear night, the same strip
of sky will be scanned by the telescope. To detect SNe, previous night images or a good reference image
will be subtracted from the search night images. We plan to run an automated real-time data reduction
pipeline based on the Optimal Image Subtraction (OIS) technique \citep{1998ApJ...503..325A,
2000A&AS..144..363A}.
In order to detect SNe and classify their type, utmost care is needed. Miscellaneous astrophysical
and/or non-astrophysical contaminating sources such as variable stars, quasars, active galaxies,
asteroids, cosmic rays, etc. may appear on the acquired images. These unwanted sources must be removed
accurately to avoid false detection. Various catalogues for different types of variable sources can be
used to cross-match newly discovered transient candidates.
Some catalogues are: VERONCAT\footnote{\url{https://heasarc.gsfc.nasa.gov/w3browse/all/veroncat.html}}
\citep{2010A&A...518A..10V} for quasars and AGN, the Minor Planet Checker\footnote{\url{http://www.minorplanetcenter.net}}
for minor planets \& comets, and SIMBAD\footnote{\url{http://simbad.u-strasbg.fr/simbad/}} \citep{2000A&AS..143....9W}
for variable stars. Following the detection of a possible SN candidate, information will be communicated
to the astronomical community (e.g. through ATEL and CBET), and also it will be available on the ILMT
webpage.
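A toy version of the catalogue cross-match step is sketched below; the coordinates are made-up placeholders, not real VERONCAT or SIMBAD entries:

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation (deg) between two sky positions given in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_sep))))

def is_known(ra, dec, catalogue, radius_arcsec=2.0):
    """True if (ra, dec) matches a catalogued source within radius_arcsec."""
    return any(ang_sep_deg(ra, dec, r, d) * 3600.0 <= radius_arcsec
               for r, d in catalogue)
```

Candidates failing all catalogue matches would then be flagged as potential transients for further vetting.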
The appearance of characteristic spectral lines is mandatory for the classification and confirmation
of a supernova. The spectrum is also useful to estimate the redshift and age (time after explosion) of
SNe. However, distant SNe may be too faint for spectroscopy. Moreover, spectroscopic follow-up of every
candidate is virtually impossible considering the large number of transients detected in surveys.
Therefore, new techniques have been developed for the photometric classification of SNe.
These are primarily based on some form of light curve fitting models, e.g.
MLCS/MLCS2k2\footnote{Multicolour light-curve shape} \citep{1995ApJ...438L..17R,2007ApJ...659..122J}
and SALT/SALT2\footnote{Spectral adaptive light curve template} \citep{2005A&A...443....1G,2007A&A...466...2G}.
The observed data points are fitted to the templates and the most likely type is determined
\citep[see, e.g.][]{2002PASP..114..833P,2007AJ....134.1285P,2004PASP..116..597G,2006AJ....131..960S,
2006AJ....132..756J,2007ApJ...659..530K,2007PhRvD..75j3508K,2009ApJ...707.1064R,2010ApJ...709.1420G,
2010ApJ...723..398F,2011ApJ...738..162S}.
The above cited models along with colour cuts and colour evolution based techniques \citep[e.g.][]{2002PASP..114..284D,
2006AJ....132..756J} can be applied for the type determination of the ILMT discovered SNe.
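As an illustration of the template-fitting idea behind these photometric classifiers, the toy sketch below simply picks the template with the smallest chi-square. The template shapes and labels are invented for illustration; real codes such as MLCS2k2 or SALT2 additionally fit stretch, colour and distance parameters.

```python
import numpy as np

def classify_by_template(t_obs, m_obs, sigma, templates):
    """Assign the template (SN type) with the smallest chi-square.

    templates: dict mapping a type label to a function m(t) giving the
    template magnitude at epoch t.  This sketch fits no nuisance
    parameters (stretch, colour, distance), unlike real classifiers.
    """
    best_type, best_chi2 = None, np.inf
    for label, model in templates.items():
        chi2 = np.sum(((m_obs - model(t_obs)) / sigma) ** 2)
        if chi2 < best_chi2:
            best_type, best_chi2 = label, chi2
    return best_type, best_chi2

# Toy templates: parabolic light curves with different decline rates.
templates = {
    "Ia":  lambda t: 19.0 + 0.005 * t**2,   # faster decline
    "IIP": lambda t: 19.5 + 0.0005 * t**2,  # plateau-like
}
t = np.linspace(-5, 30, 15)
m = 19.0 + 0.005 * t**2 + np.random.default_rng(0).normal(0, 0.02, t.size)
label, chi2 = classify_by_template(t, m, 0.02, templates)
```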
Quick and dense monitoring of SNe (particularly during the phase from just after the explosion to a few
weeks later) will help to identify the type more accurately. Furthermore, at the
moment of shock break-out, some SNe are expected to emit a short burst of high energy radiation
\citep{2010ApJ...725..904N}. Thereafter, the shock break-out cooling may create an early peak in the
optical passband \citep{2013ApJ...769...67P}. The early phase observations are therefore crucial
to constrain the progenitor system \citep{2012ApJ...757...31B,2015A&A...574A..60T}.
Here we emphasize that the ILMT will work in a continuous data acquisition mode.
Accordingly, while it will contribute data on a nightly basis,
it will not be possible to observe a particular sky patch again on the same night once it has passed
over the ILMT field of view. Additionally, due to the limitations of the ILMT filter system it will
be difficult to obtain precise colour and light curve information for SN candidates.
Therefore, complementary observations with conventional glass mirror telescopes will be very useful.
Fortunately, the ARIES observing facilities host three modern glass telescopes with different
apertures (the 1.04-m Sampurnanad Telescope ({\it ST}), the 1.3-m {\it DFOT} and the 3.6-m {\it DOT}).
Depending upon the brightness and peculiarity of the newly discovered objects these telescopes along with
other existing observing facilities in India and worldwide can be triggered for follow-up observations.
A detailed follow-up scheme is presented in \citet{Kumar_bina}.
It is important to mention that, along with the SN studies, the ILMT has other scientific interests.
These include surveys for multiply imaged quasars, determination of trigonometric parallaxes of faint
nearby objects, detection of high stellar proper motions and short to long term photometric variability
studies of stellar objects \citep[][]{2006SPIE.6267E...4S,Jean_bina}. Therefore, a reasonable
balance between the different filters is a must. The $i'$ filter can be used around the bright moon
phases, while during the remaining nights a combination of the $g'$, $r'$ and $i'$ filters can be used
alternately. Such an observing strategy should be equally useful for SN candidate detection and the
other science cases.
\section{Summary and conclusions}\label{concl}
The redshift-integrated supernova rate may turn out to be very large
\citep[$\simeq$5 -- 15 events/sec,][]{1998MNRAS.297L..17M}. Considering their random occurrence
in the Universe, it is not feasible to detect and observe each event. Monitoring all of them is
also almost impossible as it will require a significant amount of telescope time.
On the other hand, regular imaging of the same strip of sky with the ILMT will be
advantageous for applying the image subtraction technique to detect transients such as
SNe. Moreover, once a SN is detected in the ILMT images, it will by default provide densely sampled
(successive-night) light curves in different filters.
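A minimal sketch of the image subtraction idea is given below. It assumes already aligned, PSF-matched frames (real pipelines perform astrometric registration and kernel matching, e.g. of the Alard-Lupton type, first), and all array values are synthetic.

```python
import numpy as np

def difference_image(science, reference, threshold=5.0):
    """Toy difference imaging: subtract a reference co-add from a new
    science frame and flag pixels exceeding `threshold` sigma.

    Assumes the two frames are already registered and PSF-matched;
    both steps are skipped in this sketch.
    """
    diff = science - reference
    noise = np.std(reference)          # crude background-noise estimate
    detections = np.argwhere(diff > threshold * noise)
    return diff, detections

rng = np.random.default_rng(1)
ref = rng.normal(100.0, 1.0, (64, 64))       # static sky
sci = ref + rng.normal(0.0, 1.0, (64, 64))   # same sky, new noise
sci[40, 21] += 50.0                          # a new transient
_, hits = difference_image(sci, ref)
```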
The single-scan ILMT limiting magnitudes are $\sim$22.8, $\sim$22.3 and $\sim$21.4 mag in the $g'$, $r'$
and $i'$ filters, respectively; these limits can be pushed deeper by co-adding images taken on different nights.
In this way, the ILMT survey should play an important role in SNe detection up to reasonably
fainter limits with a precise and unbiased imaging of a strip of sky at a declination equal
to the latitude of Devasthal.
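For background-limited imaging, the depth gain from co-adding $N$ frames can be estimated as $2.5\log_{10}\sqrt{N}=1.25\log_{10}N$ mag. The helper below is an idealized sketch of this scaling and ignores systematics, which reduce the gain in practice.

```python
import math

def coadd_limit(m_single, n_frames):
    """Idealized limiting magnitude after co-adding n background-limited
    frames: S/N grows as sqrt(n), so the limit deepens by 1.25*log10(n).
    Real gains are smaller once systematics or confusion dominate.
    """
    return m_single + 1.25 * math.log10(n_frames)

# A single-scan g' limit of ~22.8 mag deepens to ~23.55 after 4 nights.
g4 = coadd_limit(22.8, 4)
```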
We expect to detect hundreds of Type Ia as well as core-collapse SNe up to intermediate
redshifts thanks to the ILMT survey (cf. Table~\ref{ILMT_sn_det}).
The multi-band and well sampled observations should enable the photometric type determination
(by template fitting and colour information) of SNe more accurately. The large SN samples
expected from the ILMT may also increase the number of representative objects of each type, improving
the statistics. Furthermore, the ILMT will provide an untargeted search with plenty of anonymous
galaxies in each night's images, which may allow us to construct a SN sample without host-galaxy
biases in a given limited patch of sky.
The observational properties and theoretical modelling indicate that supernova light curves are
mainly powered by a combination of two energy sources, i.e. shock generated energy deposition and
radioactive decay $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe, synthesized in the
explosion \citep[see][]{1960ApJ...132..565H,1969ApJ...157..623C,1982ApJ...253..785A}. In some of the
SNe, circumstellar interactions \citep{1978MmSAI..49..389R,1991MNRAS.250..513C,1994ApJ...420..268C}
and rapidly rotating magnetars \citep{2010ApJ...717..245K,2010ApJ...719L.204W,2012MNRAS.426L..76D}
can also supply energy.
The light curves of different types of SNe exhibit diverse characteristics,
e.g. Type II SNe: \citep{2012ApJ...756L..30A,2014ApJ...786...67A,2015ApJ...799..208S},
stripped envelope SNe\footnote{In these events, the outer envelopes of hydrogen and/or helium
of their progenitors are partially or completely removed before the explosion (e.g. Type IIb, Ib,
Ic, and Ic-BL).}: \citep{2011ApJ...741...97D,2014ApJS..213...19B,2015A&A...574A..60T,
2016MNRAS.457..328L,2016MNRAS.458.2973P} and Type Ia SNe: \citep[][and references
therein]{2010ApJ...712..350H,2016MNRAS.460.3529A,2017ApJ...846...58H}.
The high quality light curves can provide an opportunity to robustly determine various parameters
and empirical relations including the `rise time', the light curve decline rate parameter
\citep[$\Delta$m$_{15}$,][]{1993ApJ...413L.105P}, and the colour evolution of different types
of SNe. This will also allow us to use theoretical models for a robust determination of the explosion
parameters (e.g. the $^{56}$Ni mass synthesized, the ejected mass and explosion energy).
That will in turn shed some light on the explosion mechanisms of different SNe and evolutionary
stages of their progenitors.
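As a simple illustration, the decline-rate parameter $\Delta m_{15}$ can be estimated from a sampled light curve as follows (the light curve here is a toy parabola; real measurements fit a smooth model through the photometry):

```python
import numpy as np

def delta_m15(t, m):
    """Light-curve decline-rate parameter of Phillips (1993):
    Delta m_15 = m(t_peak + 15 d) - m(t_peak), estimated here by linear
    interpolation of a sampled light curve (t in days, m in magnitudes;
    brighter = numerically smaller).
    """
    i_peak = np.argmin(m)                 # epoch of maximum light
    t_peak = t[i_peak]
    m15 = np.interp(t_peak + 15.0, t, m)  # magnitude 15 days later
    return m15 - m[i_peak]

# Toy light curve peaking at t = 0 with m = 18 + 0.005 t^2,
# for which Delta m_15 = 0.005 * 15^2 = 1.125 mag.
t = np.linspace(-10.0, 40.0, 501)
m = 18.0 + 0.005 * t**2
dm15 = delta_m15(t, m)
```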
It is noteworthy that the spectra of SNe provide crucial information about the composition and
distribution of elements in the ejecta. Therefore, the spectroscopic monitoring of peculiar events
will also be very valuable. In this context, a guaranteed-time allocation strategy on 3.6-m {\it DOT}
to follow-up newly discovered objects will fulfill our needs.
Because of the tight link between SNe and star formation, the ILMT with complementary observations and
along with other sky surveys (e.g. Large Synoptic Survey Telescope (LSST), ZTF, etc.)
may provide better measurements of the moderate redshift history of the cosmic star-formation rate.
New SNe discoveries and precise investigation of their light curves could improve our knowledge on a variety
of problems including cosmology and SN physics.
\section*{Acknowledgments}
The authors thank the referee for his/her useful comments that substantially improved this paper.
We are grateful to the members of the ILMT team who provided their sincere efforts for this project
that is now in its final stage of installation. Although the list of ILMT contributors is long, we
specially thank J. P. Swings, Ram Sagar, Hum Chand, Ludovic Delchambre, Serge Habraken and Bikram Pradhan
for their precious support in this project.
Active involvement of Anna Pospieszalska is highly acknowledged.
BK and KLP also thank Amy Lien for fruitful discussions during the preparation of this manuscript.
BK acknowledges the Science and Engineering Research Board (SERB) under the Department of Science
\& Technology, Govt. of India, for financial assistance in the form of National Post-Doctoral
Fellowship (Ref. no. PDF/2016/001563).
SBP acknowledges BRICS grant DST/IMRCD/BRICS/Pilotcall/ProFCheap/2017(G) for the present work.
This research has also been supported by R\'{e}gion Wallonne (Belgium) under the convention 516238,
F.R.S.--FNRS (Belgium), the Li\`{e}ge University and the Natural Sciences and Engineering Research
Council of Canada. JS wishes to express his special thanks to Ir. Alain Gillin, Director,
for his comprehension in renewing the Convention no 516238 `TML4M' during many years and
Prof. Govind Swarup, former Chair of the ARIES Governing Council, for his constant and very important
support. Part of this work was initiated during the doctoral thesis of Brajesh Kumar.
\subsection*{\bf WIMP-nucleus scattering}
Since the relative speed $v\approx 300 \mbox{km/s} \approx 10^{-3} c$,
the process can be treated non-relativistically. The center of mass
momentum is given in terms of the reduced WIMP-nucleus mass as $ k =
\mu_i v $ and is $ \lesssim A_i \, {\rm MeV}/c $ since $\mu_i \le m_i $. The
corresponding de Broglie wavelength is $\gtrsim 200\, {\rm fm}/A_i$, and
can be smaller than the size of heavy target nuclei, in which case
nuclear form factors are important. In the laboratory frame, the
nucleus recoils with momentum $q = 2 k \sin(\theta_{\rm cm}/2) $ and
energy $\nu = q^2/2m_i$. Here $\theta_{\rm cm}$ is the
center-of-mass scattering angle. The 4-momentum transfer is very
small, $Q^2 \lesssim A_i^2 \, 10^{-6} \,{\rm GeV}^2/c^2 $ (compare with a
typical deep inelastic $Q^2 \gtrsim 1\,{\rm GeV}^2/c^2 $).
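These kinematic relations are straightforward to evaluate numerically. The sketch below uses illustrative numbers (a 100 GeV WIMP on germanium) and natural units with $\hbar c = 197.327$ MeV fm.

```python
import math

HBARC = 197.327  # MeV fm

def recoil_kinematics(m_chi, m_N, v=1e-3, theta_cm=math.pi):
    """Non-relativistic WIMP-nucleus kinematics (masses in MeV/c^2,
    v in units of c).  Returns the c.m. momentum k = mu*v, the recoil
    momentum q = 2 k sin(theta_cm/2), the recoil energy nu = q^2/(2 m_N)
    in MeV, and the reduced de Broglie wavelength hbar/k in fm.
    """
    mu = m_chi * m_N / (m_chi + m_N)       # reduced mass
    k = mu * v                             # c.m. momentum, MeV/c
    q = 2.0 * k * math.sin(theta_cm / 2.0)
    nu = q * q / (2.0 * m_N)
    lam = HBARC / k                        # de Broglie wavelength, fm
    return k, q, nu, lam

# 100 GeV WIMP on germanium (A = 73), head-on scattering:
k, q, nu, lam = recoil_kinematics(100e3, 73 * 931.5)
```

With these numbers $k \approx 40$ MeV$/c$ (indeed $\lesssim A_i$ MeV$/c$), the wavelength is a few fm ($\gtrsim 200\,{\rm fm}/A_i$), and the maximum recoil energy is a few tens of keV.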
The differential scattering rate per unit recoil energy and unit
target mass is formally
\begin{equation}
{{\rm d} R \over {\rm d} \nu} = {\rho_\chi \over m_\chi} \sum_i f_i \eta_i(q)
{\overline{\left\vert T_i(q^2) \right\vert^2} \over 2 \pi \hbar^4 }.
\end{equation}
The sum is over the nuclear isotopes in the target, $T_i(q^2)$ is the
scattering matrix element at momentum transfer squared $q^2 = 2 m_i
\nu$, and $f_i$ is the mass fraction of isotope $i$. A sum over final
and average over initial polarizations is understood in
$\overline{|T_i(q^2)|^2}$. The factor
\begin{equation}
\eta_i(q) = \int_{q/2\mu_i}^\infty {f_\chi(v) \over v} {\rm d}^3 v,
\end{equation}
with units of inverse velocity, incorporates the $\chi$ velocity
distribution $f_\chi(v)$. For a Maxwellian distribution with velocity
dispersion $v_{\rm rms}$, seen by an observer moving
at speed $v_O$,
\begin{equation}
\eta_i(q) = {1\over 2v_O} \left[ {\rm erf}\left({v_q+v_O \over \sqrt{2}
v_{\rm rms}}\right)-{\rm erf}\left({v_q-v_O \over \sqrt{2}
v_{\rm rms}}\right) \right],
\end{equation}
with $v_q=q/2\mu_i$. For standard halo parameters, $\eta_i(q)$ is
approximately exponential in the deposited energy $\nu$. The
previously-mentioned modulations enter the rate through $\eta_i(q)$.
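The Maxwellian expression for $\eta_i(q)$ is easy to evaluate with the error function; the halo numbers below are illustrative only.

```python
import math

def eta(v_q, v_O, v_rms):
    """Velocity integral eta_i(q) for a Maxwellian halo seen by an
    observer moving at speed v_O (all speeds in the same units; the
    result has units of inverse velocity).  v_q = q/(2 mu_i) is the
    minimum WIMP speed able to produce recoil momentum q.
    """
    s = math.sqrt(2.0) * v_rms
    return (math.erf((v_q + v_O) / s)
            - math.erf((v_q - v_O) / s)) / (2.0 * v_O)

# Illustrative halo numbers (km/s): dispersion 270, observer speed 230.
vals = [eta(vq, 230.0, 270.0) for vq in (0.0, 200.0, 400.0, 600.0)]
```

As expected, $\eta_i$ falls off monotonically with $v_q$, i.e. roughly exponentially in the deposited energy $\nu$.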
The scattering matrix element $T(q^2)$ can be written as the Fourier
transform
\begin{equation}
T(q^2) = \int \bra{\rm f} V({\vec r}) \ket{\rm i} {\rm e}^{i {\vec
q} \cdot {\vec r}/\hbar} {\rm d}{\vec r}
\end{equation}
of a non-relativistic WIMP-nucleus potential
\begin{equation}
V({\vec r}) = \sum_{ {\rm pointlike} \atop {{\rm nucleons} \atop
n={\rm p,n}} }
\left( G^n_s + G^n_a {\vec \sigma}_\chi {\vec \sigma}_n \right)
\delta({\vec r}-{\vec r}_n) .
\end{equation}
The constants $G^n_s$ and $G^n_a$ are effective four-fermion coupling
constants for nucleon-WIMP interactions, and are analogous to Fermi's
constant $G_F$. $G_s^n$ represents scalar\footnote{Associated to
scalar and axial vectors under 3d rotations.} or spin-independent
interactions, $G_a^n$ axial$^1$ or spin-dependent interactions. Both
terms are coherent in the quantum-mechanical sense when $q R_{\rm
nucleus} \ll \hbar$, {\it i.e.\ } when the nucleus can be treated as pointlike
and $T(q^2)$ can be taken as $T(0)$. At larger $q$, which can occur
with heavy target nuclei, both terms are incoherent. Nuclear form
factors $F(q^2)$, conventionally defined by $T(q^2) = T(0) F(q^2)$,
should then be introduced. The scalar and spin form factors are in
general different, reflecting the difference in the mass and spin
distributions inside the nucleus.
The task of a theoretician is to provide a theoretical estimate of
$T(q^2)$ starting from a particle-physics model. We accomplish this
by stages, successively finding the WIMP-quark, the WIMP-nucleon and
the WIMP-nucleus effective lagrangians. Step 1, finding the effective
WIMP-quark lagrangian at small $q^2$, is analogous to going from the
Standard Model to four-fermion interactions. Step 2 requires knowledge of
the quark content of the nucleon, {\it i.e.\ } the contributions of different
quarks to the nucleon mass and spin. Step 3 needs a nuclear model to
describe how protons and neutrons are distributed in a nucleus.
This procedure is now illustrated for a Dirac neutrino and for a
Majorana particle, an example of which is the neutralino.
\subsection*{\bf Dirac neutrino}
Step 1: a Dirac neutrino $\nu$ interacts with a quark q through the
diagram in Fig.~1a. At $q^2 \ll m^2_{\rm Z}$, the Z propagator reduces to
$ig^{\mu\nu}/m^2_{\rm Z}$, and the four-fermion amplitude reads
\begin{equation}
\sqrt{2} G_F \; \bar{\nu} ( v_{\nu} - a_{\nu} \gamma_5 ) \gamma_\mu
\nu \; \bar{\rm q} ( v_{\rm q} - a_{\rm q} \gamma_5 ) \gamma^\mu
{\rm q} ,
\end{equation}
with $v_{\nu} = a_{\nu} = \frac{1}{2}$, $a_{\rm q} = T_{3\rm q}$ and
$v_{\rm q} = T_{3\rm q} - 2 e_{\rm q} \sin^2 \theta_W$. Here $\sin^2
\theta_W \simeq 0.23 $ and $e_q$ and $T_{3q}$ are the electric charge
and the third component of the weak isospin of quark q.
For a non-relativistic
neutrino, only the time component of the vector current and the space
components of the axial current survive. The first is spin-independent
($ \bar{\nu} \gamma_0 \nu \propto \nu^\dagger\nu$) and the second
spin-dependent ($ \bar{\nu} \vec{\gamma} \gamma_5 \nu \propto
\nu^\dagger \vec{\sigma} \nu$).
Step 2 for the vector part
\begin{equation}
\sqrt{2} G_F \, v_{\nu} v_{\rm q} \; \bar{\nu} \gamma_\mu \nu \; \bar{\rm
q} \gamma^\mu {\rm q} ,
\end{equation}
because of vector current conservation, simply amounts to
summing $T_{3q}$ and $e_q$ of the constituent quarks. For protons and
neutrons one obtains respectively
\begin{eqnarray}
G_s^{\rm p} & = & {G_F\over\sqrt{2}} (1-4\sin^2 \theta_W) v_{\nu} \\
G_s^{\rm n} & = & - {G_F\over\sqrt{2}} v_{\nu} .
\end{eqnarray}
The interaction is mainly with the neutrons since $1-4\sin^2
\theta_W\approx 0$.
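A one-line numerical check of this suppression, with $\sin^2\theta_W = 0.23$ and illustrative $Z$, $N$ for germanium:

```python
def dirac_nu_coherent_factor(Z, N, sin2_thw=0.23):
    """Coherent scalar amplitude |Z G_s^p + N G_s^n| for a Dirac
    neutrino, in units of G_F v_nu / sqrt(2): |Z (1 - 4 sin^2 theta_W) - N|.
    Because 1 - 4 sin^2 theta_W ~ 0.08, the neutrons dominate.
    """
    return abs(Z * (1.0 - 4.0 * sin2_thw) - N)

ge = dirac_nu_coherent_factor(32, 41)   # 73Ge: Z = 32, N = 41
```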
Step 2 for the axial part
\begin{equation}
\sqrt{2} G_F \, a_{\nu} a_{\rm q} \; \bar{\nu} \gamma_\mu \gamma_5 \nu \;
\bar{\rm q} \gamma^\mu \gamma_5 {\rm q} ,
\end{equation}
leads to the four-fermion coupling constants
\begin{eqnarray}
G_a^{\rm p} = \sqrt{2} G_F a_{\nu} \left( a_{\rm u} \Delta {\rm u} + a_{\rm d}
\Delta {\rm d} + a_{\rm s} \Delta {\rm s} \right), \\
G_a^{\rm n} = \sqrt{2} G_F a_{\nu} \left( a_{\rm u} \Delta {\rm d} + a_{\rm d}
\Delta {\rm u} + a_{\rm s} \Delta {\rm s} \right).
\end{eqnarray}
Here $\Delta{\rm q}$ is the fraction of the proton spin carried by
quark q, $\frac{1}{2} \bra{\rm p} \bar{\rm q} \gamma_\mu \gamma_5 {\rm
q} \ket{\rm p} = \Delta{\rm q} s_\mu$. It can be
obtained\cite{EllisKarliner} from data on neutron and hyperon
$\beta$-decay, which give $\Delta{\rm u} - \Delta{\rm d} =
1.2573\pm0.0028$ and $\Delta{\rm u} + \Delta{\rm d} -2\Delta{\rm s} =
0.59\pm0.03$, respectively. The contribution of the strange quark is
$\Delta{\rm s}=0$ in the naive quark model, $\Delta{\rm s} =
-0.11\pm0.03\pm\cdots$ from deep inelastic data, and $\Delta{\rm
s}=-0.15\pm 0.09$ from elastic $\nu{\rm p}\to\nu{\rm p}$ data.
Step 3 for the spin-independent part introduces the nuclear mass form factor
$F_{\rm mass}(q^2)$, and results in
\begin{equation}
\overline{\Bigl| T_s(q^2) \Bigr|^2} = \Bigl| Z G_s^{\rm p} + N G_s^{\rm
n} \Bigr|^2 \, \Bigl| F_{\rm mass}(q^2) \Bigr|^2,
\label{Ts}
\end{equation}
where $N$ ($Z$) is the number of neutrons (protons) in the nucleus.
Neutron scattering off nuclei suggests that $F_{\rm mass}(q^2) \simeq
F_{\rm e.m.}(q^2)$, the electromagnetic form factor. The electric
charge distribution is well-described by a Fermi or Woods-Saxon
form,\cite{Hofstadter} whose Fourier transform is indistinguishable
from the convenient analytic expression\cite{Helm}
\begin{equation}
F_{\rm mass}(q^2) \simeq {3 j_1(qR) \over qR} {\rm
e}^{-\frac{1}{2}(qs)^2} .
\label{Fem}
\end{equation}
The electromagnetic radius $R$ and the surface thickness $s$ can be
obtained by fitting electron scattering data,\cite{Hofstadter} or can
be roughly approximated by $R \approx A^{1/3}$ fm and $s \approx $ 1
fm.\cite{EngelReview} $ F_{\rm e.m.}(q^2)$ presents diffraction zeros when
the spherical Bessel function $j_1(qR)=0$, the first of which occurs at
$qR \simeq 4.5$. In electron scattering, these diffraction zeros are
filled in, because due to the long-range Coulomb attraction the
electron wave function is distorted from a simple plane wave and the
form factor is not simply the Fourier transform of the charge
density. The short-range nature of WIMP-nucleus interactions makes us
expect no wave function distortion, and the diffraction zeros
remain.\footnote{The first corrections are at a level of $10^{-6}$ and
come from neglected higher powers of the incoming WIMP velocity.} The
first diffraction zero is important in assessing bounds from some
present-day detectors.\cite{Bottino}
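The form factor (\ref{Fem}) is simple to evaluate. The sketch below uses the rough $R \approx A^{1/3}$ fm, $s \approx 1$ fm parametrization and checks the $q \to 0$ limit and the first diffraction zero, which sits at the first root of $j_1$, $x \approx 4.4934$.

```python
import math

def helm_form_factor(q, R, s=1.0):
    """Helm mass form factor F(q^2) = 3 j1(qR)/(qR) * exp(-(q s)^2/2),
    with q in fm^-1 and R, s in fm; j1 is the spherical Bessel function
    j1(x) = sin(x)/x^2 - cos(x)/x.
    """
    x = q * R
    if x < 1e-8:                      # q -> 0 limit: 3 j1(x)/x -> 1
        return math.exp(-0.5 * (q * s) ** 2)
    j1 = math.sin(x) / x**2 - math.cos(x) / x
    return 3.0 * j1 / x * math.exp(-0.5 * (q * s) ** 2)

R = 73 ** (1.0 / 3.0)                    # ~4.2 fm for A = 73
F0 = helm_form_factor(0.0, R)
Fzero = helm_form_factor(4.4934 / R, R)  # first zero of j1
```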
\begin{figure}[t!]
\centering
\epsfxsize=\hsize
\epsfbox{moriond1.eps}
\caption[]{\small \it Examples of WIMP--quark scattering.}
\end{figure}
Step 3 for the spin-dependent part requires the expectation values of
the total spin of protons $ \langle S_{\rm p} \rangle $ and neutrons
$ \langle S_{\rm n} \rangle $ separately. At $q=0$,
\begin{equation}
\overline{\Bigl\vert T_a(0) \Bigr\vert^2} = {4 (J+1)\over J} \Bigl\vert G_a^{\rm
p} \langle S_{\rm p} \rangle + G_a^{\rm n} \langle S_{\rm n} \rangle
\Bigr\vert ^2 ,
\end{equation}
where $J$ is the nuclear spin. Even-even nuclei, with even numbers of
protons and of neutrons, do not have spin, and for them $T_a(0)=0$. For
even-odd nuclei with $J\ne 0$, a nuclear model is needed to estimate $
\langle S_{\rm p} \rangle $ and $ \langle S_{\rm n} \rangle $. For
instance, ${}^{73}{\rm Ge}$ is an odd-neutron nucleus with
$J=\frac{9}{2}$. The single-particle shell
model\cite{GoodmanWitten,EllisFlores} gives
\begin{equation}
\langle S_{\rm n} \rangle = {j\over2} \left[ 1 + { \frac{3}{4} -l(l+1)
\over j(j+1) } \right] = 0.50, \qquad \langle S_{\rm p} \rangle = 0 ;
\end{equation}
the odd-group model,\cite{EngelVogel} in which the odd-nucleon spin is
related to the nuclear magnetic moment $\mu$ and gyromagnetic factors
$g^{L,S}_{\rm n,p}$, gives
\begin{equation}
\langle S_{\rm n} \rangle = { \mu - g^L_{\rm n} J \over g^S_{\rm n} -
g^L_{\rm n} } = 0.23, \qquad \langle S_{\rm p} \rangle = 0 ;
\end{equation}
a more sophisticated interacting shell model\cite{TedRessell} gives
\begin{equation}
\langle S_{\rm n} \rangle = 0.468, \qquad \langle S_{\rm p} \rangle
=0.011.
\end{equation}
The proton might have a small but non-zero contribution to the cross
section, which might change the relative merits of different nuclei
for dark matter searches.
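The single-particle estimate can be checked numerically. The sketch below uses the equivalent closed form $\langle S \rangle = [j(j+1)-l(l+1)+\frac34]/[2(j+1)]$, which gives $+\frac12$ for $j=l+\frac12$ and $-j/[2(j+1)]$ for $j=l-\frac12$.

```python
def single_particle_spin(l, j):
    """Odd-nucleon spin expectation in the single-particle shell model:
    <S> = [j(j+1) - l(l+1) + 3/4] / (2 (j+1)).
    """
    return (j * (j + 1) - l * (l + 1) + 0.75) / (2.0 * (j + 1))

# 73Ge: odd neutron in the 1g9/2 level (l = 4, j = 9/2) -> <S_n> = 0.5.
s_ge = single_particle_spin(4, 4.5)
```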
At $q\ne 0$, nuclear spin form factors are needed. The neutron and
proton contributions differ, and at present only complex
calculations\cite{TedRessell,Fspin} for specific nuclei provide an
estimate of the isoscalar and isovector spin form factors $F_{\rm
spin}^0(q^2)$ and $F_{\rm spin}^1(q^2)$, in terms of which
\begin{equation}
\overline{\Bigl|T_a(q^2)\Bigr|^2} = {J+1\over J} \left| ( G_a^{\rm
p}\!+\!G_a^{\rm n}) \, \langle S_{\rm p}\!+\!S_{\rm n} \rangle \,
F_{\rm spin}^0(q^2) \, + \, ( G_a^{\rm p}\!-\!G_a^{\rm n}) \, \langle S_{\rm
p}\!-\!S_{\rm n} \rangle \, F_{\rm spin}^1(q^2) \right|^2.
\end{equation}
The results of these calculations can be conveniently summarized by the
approximate expressions
\begin{equation}
F_{\rm spin}^0(q^2) \simeq
\exp\left(-\frac{r_0^2q^2}{\hbar^2}\right), \qquad\qquad
F_{\rm spin}^1(q^2) \simeq \exp\left(-\frac{r_1^2q^2}{\hbar^2}+i\frac{cq}{\hbar}\right) ,
\end{equation}
with parameters given in the following table for selected nuclei:
\bigskip
\centerline{\vbox{
{ \tabskip=1.1pc \halign{#\hfil &\hfil#\hfil & \hfil#\hfil & \hfil#\hfil
& \hfil#\hfil & \hfil#\hfil & \hfil#\hfil & \hfil#\hfil & \hfil#\hfil \cr
\noalign{\hrule\kern2pt}
& $J$ & $\nu_{\rm max}$/keV & $\langle S_{\rm p} \rangle$ & $\langle
S_{\rm n}
\rangle$ & $r_0$/fm & $r_1$/fm & $c$/fm &
${\rm valid~for} \atop \nu{\rm /keV}<$ \cr
\noalign{\kern2pt\hrule\kern2pt}
${}^{73}$Ge & ${9\over2}$ & 540 & \phantom{-}0.011\phantom{0} & 0.468 & 1.971 & 2.146 &
-0.246 & 55 \cr
\noalign{\kern -3pt}
${}^{29}$Si & ${1\over2}$ & 216 & -0.0019 & 0.133 & 1.302 & 1.548 &
-0.320 & 145 \cr
\noalign{\kern -3pt}
${}^{27}$Al & ${5\over2}$ & 100 & \phantom{-}0.3430 & 0.269 & 1.378 & 1.600 &
\phantom{-}0.196 & $\nu_{\rm max}$ \cr
\noalign{\kern -3pt}
${}^{39}$K & ${3\over2}$ & 145 & -0.184\phantom{0} & 0.054 & 1.746 & 1.847 &
\phantom{-}0.371 & $\nu_{\rm max}$ \cr
\noalign{\kern2pt\hrule}
}}}}
\bigskip
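Evaluating the Gaussian fits with the tabulated ${}^{73}$Ge parameters is straightforward; the recoil energy below and the mass conversion $m_N \approx 931.5\,A$ MeV are illustrative.

```python
import math

HBARC = 197.327  # MeV fm

def spin_form_factors(nu_keV, A, r0, r1):
    """Gaussian fits to the spin form factors,
    F0 = exp(-(r0 q / hbar)^2) and |F1| = exp(-(r1 q / hbar)^2),
    at recoil energy nu (keV) for mass number A; r0, r1 in fm.
    The phase exp(i c q / hbar) does not affect |F1| and is dropped.
    """
    m_N = A * 931.5                              # nuclear mass, MeV
    q = math.sqrt(2.0 * m_N * nu_keV * 1e-3)     # MeV/c
    F0 = math.exp(-(r0 * q / HBARC) ** 2)
    F1 = math.exp(-(r1 * q / HBARC) ** 2)
    return F0, F1

# 73Ge parameters from the table: r0 = 1.971 fm, r1 = 2.146 fm.
F0, F1 = spin_form_factors(20.0, 73, 1.971, 2.146)
```

At a 20 keV recoil, both form factors are already suppressed by $\sim$25\%, showing why the $q$-dependence matters for heavy targets.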
\subsection*{\bf Majorana fermion}
A Majorana fermion is a spin-$\frac{1}{2}$ particle that coincides
with its antiparticle. It carries no conserved quantum number. It has
neither vector nor tensor currents. Of the remaining pseudoscalar,
scalar and axial currents, only the last two have a non-vanishing
non-relativistic limit, spin-independent the first ($\bar{\chi} \chi
\propto \chi^\dagger\chi$) and spin-dependent the second ($\bar{\chi} {\vec
\gamma} \gamma_5 \chi \propto \chi^\dagger {\vec \sigma} \chi$).
Axial currents may arise from exchange of a Z boson as in fig.~1b, and
the analysis is then analogous to that in the previous section, with
the obvious replacement of $a_{\nu}$ with $a_\chi$.
Scalar currents originate from exchange of a scalar particle $\varphi$, {\it e.g.\ } as in
Fig.~1c. At small $q^2$, the $\varphi$ propagator reduces to $-i/m^2_\varphi$ and
the four-fermion amplitude reads
\begin{equation}
- { g_{\varphi\chi\chi} g_{\varphi \rm qq} \over m^2_\varphi } \; \bar{\chi} \chi \; \bar{\rm
q} {\rm q} .
\end{equation}
For a nucleon $n={\rm p,n}$ one then obtains
\begin{equation}
G_s^n = - {g_{\varphi\chi\chi} \over m^2_\varphi} \, \sum_{\rm q}
g_{\varphi \rm qq} \bra{n}
\bar{\rm q} {\rm q} \ket{n} .
\label{Gs}
\end{equation}
For example, in the case of the neutralino with exchange of the lightest
supersymmetric Higgs boson, the sum over quarks is explicitly
\begin{equation}
{g \over 2 m_{\rm W}} \, \left[ {\cos\alpha\over\sin\beta} \langle m_{\rm
u}\bar{\rm u}{\rm u} + m_{\rm c}\bar{\rm c}{\rm c} + m_{\rm t}\bar{\rm
t}{\rm t} \rangle - {\sin\alpha\over\cos\beta} \langle m_{\rm
d}\bar{\rm d}{\rm d} + m_{\rm s}\bar{\rm s}{\rm s} + m_{\rm b}\bar{\rm
b}{\rm b} \rangle \right] .
\end{equation}
The scalar quark content of the nucleon $\bra{n} \bar{\rm q} {\rm
q} \ket{n}$ can be extracted from data with
the help of chiral perturbation theory, $\pi$-nucleon scattering and
heavy quark expansion.\cite{Cheng} The result is
\begin{eqnarray}
\langle m_{\rm u} \bar{\rm u} {\rm u} \rangle \simeq
\langle m_{\rm d} \bar{\rm d} {\rm d} \rangle \simeq 30 \,{\rm MeV}/c^2,
\qquad\qquad\quad\!
\langle m_{\rm s} \bar{\rm s} {\rm s} \rangle \simeq 60\hbox{--}120 \,{\rm MeV}/c^2, \\
\langle m_{\rm c} \bar{\rm c} {\rm c} \rangle =
\langle m_{\rm b} \bar{\rm b} {\rm b} \rangle =
\langle m_{\rm t} \bar{\rm t} {\rm t} \rangle =
\frac{2}{27} \left( m_{\rm p} - \sum_{\rm q=u,d,s}
\langle m_{\rm q} \bar{\rm q} {\rm q} \rangle \right) \simeq
60 \,{\rm MeV}/c^2 .
\end{eqnarray}
The strange quark contribution is uncertain by a factor of 2. Step 3 is
analogous to the Dirac neutrino case, and leads to eq.~(\ref{Ts}) with
four-fermion couplings given by (\ref{Gs}).
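The heavy-quark relation can be checked numerically with rough values for the light-quark terms (proton mass and light-quark contributions in MeV$/c^2$; the strange value, taken here as 90, is the one uncertain by a factor of two).

```python
def heavy_quark_content(m_p=938.3, light=(30.0, 30.0, 90.0)):
    """Heavy-quark expansion for the scalar nucleon matrix element:
    <m_Q qbar q> = (2/27) (m_p - sum of light-quark contributions),
    the same for Q = c, b, t.  All numbers in MeV/c^2.
    """
    return (2.0 / 27.0) * (m_p - sum(light))

mQ = heavy_quark_content()   # ~58 MeV/c^2, consistent with ~60 above
```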
\subsection*{\bf Neutralino}
\begin{figure}[t]
\centering
\leavevmode
\epsfxsize=240pt
\epsfysize=240pt
\epsfbox[51 260 506 631]{moriond2a.ps}
\leavevmode
\epsfxsize=240pt
\epsfysize=230pt
\rotate[l]{\epsfbox[140 400 470 690]{moriond2b.ps}}
\caption[]{\small\it Scattering rate versus mass for neutralinos: (a)
phenomenological approach,\cite{Bergstrom} (b) grand-unified
approach.\cite{Berezinsky}}
\end{figure}
Supersymmetry and the neutralino have been presented by Jungman at
this conference. The neutralino has both spin-dependent and
spin-independent interactions with nuclei, the former mediated by Z
boson and squarks, the latter by Higgs bosons and squarks. The general
formalism of the preceding sections can be used. In the limit of heavy
squarks $\tilde{\rm q}_k$, the effective four-fermion constants are given
by
\begin{equation}
G_s^{\rm p} \simeq G_s^{\rm n} = \sum_{{\rm q=u,d,s,c,b,t}} \langle
\bar{\rm q} {\rm q} \rangle \left( - \sum_{h={\rm H}_1,{\rm H}_2} {
g_{h\chi\chi} g_{h\rm qq} \over m_h^2 } + {1\over 2} \sum_{k=1}^6 {
g_{L\tilde{\rm q}_k\chi {\rm q}} g_{R\tilde{\rm q}_k\chi {\rm q}}
\over m^2_{\tilde{\rm q}_k} } \right),
\end{equation}
\begin{equation}
G_a^{\rm p} = \sum_{{\rm q=u,d,s}} \Delta {\rm q} \left( { g_{{\rm
Z}\chi\chi} g_{\rm Zqq} \over m_{\rm Z}^2 } + {1\over 8}
\sum_{k=1}^6 { g_{L\tilde{\rm q}_k\chi {\rm q}}^2 + g_{R\tilde{\rm
q}_k\chi {\rm q}}^2 \over m^2_{\tilde{\rm q}_k} } \right) , \qquad
G_a^{\rm n} = G_a^{\rm p}(\Delta{\rm u} \leftrightarrow\Delta{\rm
d}) .
\end{equation}
Expressions for the elementary vertices $g_{ijk}$ can be found in
ref.~18.
Predictions in supersymmetric models suffer from the presence of many
unknown parameters. Two extreme attitudes are a
phenomenological approach in which what is not excluded is allowed,
and a grand-unified approach in which coupling constants and masses
are unified at some high energy scale. Fig.~2 shows examples of
calculated event rates in ${}^{76}{\rm Ge}$, each point representing a
choice of model parameters: ``predictions'' may well span 10 orders of
magnitude in a phenomenological approach\cite{Bergstrom} and 2 orders
of magnitude in a more restricted scenario.\cite{Berezinsky}
\subsection*{\bf Underabundant dark matter relics}
Given a particle-physics model, the relic density of a species, a WIMP
$\chi$ in particular, is a calculable and definite quantity. Often it
happens that the computed relic density $\Omega_\chi$ is (much)
smaller than the dark matter density in the Universe. For this reason,
some authors simply neglect this case. But even if these WIMPs
constitute only a fraction of the dark matter, they generally have
quite high scattering cross sections off nuclei, because of an
approximate inverse proportionality of the $\chi$ relic density and
the $\chi$--nucleus cross section. However, the scattering rate also
includes the $\chi$ {\it halo} density $\rho_\chi$. It is reasonable
to assume that $\rho_\chi$ is only a fraction of the local dark matter density
$\rho_{\rm DM}$, but the precise fraction depends on the model
for galaxy formation. If both the main and the $\chi$ components of
dark matter are cold, we expect them to behave similarly under
gravitation, so that the halo fraction $f_\chi$ might be equal to the
universal fraction $\Omega_\chi/\Omega_{\rm DM}$. Unfortunately,
$\Omega_{\rm DM}$ is poorly known: it can range from $\approx 0.01$
for dark matter associated with galactic halos to $\approx 1$ for a
smooth universal component. In fig.~3, the suppression of scattering
rates due to rescaling of the neutralino halo density by a universal
fraction with $\Omega_{\rm DM} h^2 = 0.025$ is apparent to the left of
the dashed line. This suppression must be included for consistency
when setting bounds on particle-physics models.
\begin{figure}[ht]
\centering
\leavevmode
\epsfxsize=250pt
\epsfbox[54 260 502 600]{moriond3.ps}
\caption[]{\small \it Scattering rate versus relic density for
neutralinos (from ref.~18).}
\end{figure}
\section{\setcounter{equation}{0}\oldsection}
\renewcommand\thesection{\arabic{section}}
\renewcommand\theequation{\thesection.\arabic{equation}}
\newtheorem{claim}{\indent Claim}[section]
\newtheorem{theorem}{\indent Theorem}[section]
\newtheorem{lemma}{\indent Lemma}[section]
\newtheorem{proposition}{\indent Proposition}[section]
\newtheorem{definition}{\indent Definition}[section]
\newtheorem{remark}{\indent Remark}[section]
\newtheorem{corollary}{\indent Corollary}[section]
\newtheorem{example}{\indent Example}[section]
\title{\Large \bf
The Paneitz-Sobolev constant of a closed Riemannian\\ manifold
and an application to the nonlocal \\$\mathbf{Q}$-curvature flow}
\author{Xuezhang Chen\thanks{ X. Chen: xuezhangchen@nju.edu.cn}
\\
\small
$^\ast$Department of Mathematics \& IMS, Nanjing University, Nanjing
210093, P. R. China
}
\date{}
\maketitle
\begin{abstract}
In this paper, we establish the following: if a closed Riemannian manifold $(M^n,g_0)$ of dimension $\geq 8$ is not locally conformally flat, then the Paneitz-Sobolev constant of $M^n$ satisfies $q(g_0)<q(S^n)$. The analogue of this result was obtained by T. Aubin in 1976 and was used to solve the Yamabe problem on closed manifolds. As an application, the above result can be used to recover the sequential convergence of the nonlocal Q-curvature flow on closed manifolds recently introduced by Gursky-Malchiodi.
\end{abstract}
\section{Introduction and the main result}
\indent \indent Similar to the Yamabe problem, or more generally, to the
problem of prescribing the scalar curvature on $S^n$, a natural problem on closed manifolds involving the fourth-order
conformally covariant operator can be posed as follows:\\
On a closed manifold $(M^n,g_0)$ of dimension $\geq 5$,
does there exist a conformal metric $g=u^{4 \over n-4}g_0$ with constant $Q$-curvature $Q_g$? \\
The above problem on $M^n$ with $n \geq 5$ is reduced to the
solvability of the following equation
\begin{equation}\label{prescribed_Q-curvature}
P_{g_0}( u)=\tfrac{n-4}{2} c u^{n+4 \over n-4} \text{~~and~~} u>0 \hbox{~~on~~} M^n,
\end{equation}
where $c$ is a constant and $P_{g_0}$ is the fourth-order conformally covariant operator on $(M^n,g_0)$, which will be defined below. Recently, under the assumptions that $Q_{g_0}$ is semi-positive and the scalar curvature $R_{g_0}$ is nonnegative, an affirmative answer to the above problem was given by Gursky-Malchiodi \cite{gur_mal}. For more background on the above mentioned problem, one may refer to \cite{gur_mal} and the references therein. \\
\indent Let $(M^n,g)$ be a smooth Riemannian manifold of dimension
larger than four, and $R_g, \text{Ric}_g$ be the scalar curvature,
Ricci curvature of metric $g$, respectively. The following
conformally covariant operator of order four on $(M^n,g)$ is
discovered by T. Branson \cite{branson} and Fefferman-Graham
\cite{feffgra}, concretely,
\begin{eqnarray}
P_g u &=& \Delta_g^2 u+\hbox{div}_g\{(4A_g-(n-2)\sigma_1(A_g)g)(\nabla u, \cdot)\}+\frac{n-4}{2}Q_g u \label{P_g}
\end{eqnarray}
and the $Q$-curvature $Q_g$ of metric $g$ is defined by
\begin{eqnarray}\label{def_Q-curv}
Q_g=-{1 \over 2(n-1)}\Delta_g R_g+{n^3-4n^2+16n-16 \over 8(n-1)^2(n-2)^2} R_g^2-{2 \over (n-2)^2}|\text{Ric}_g|^2,
\end{eqnarray}
with the Schouten tensor $A_g=\frac{1}{n-2}(\hbox{Ric}_g-\frac{R_g}{2(n-1)}g)$.
Under conformal change $\bar{g}=u^{4 \over n-4}g$, there holds
\begin{equation}\label{conformal-invariant}
P_g(\varphi u)=u^{n+4 \over n-4}P_{\bar{g}}(\varphi)
\end{equation}
for all $\varphi \in C^\infty(M^n)$.
We first state the main result of this paper, which is analogous to T. Aubin's result \cite{aubin} in the Yamabe problem.
\begin{theorem}\label{Thm1}
On a closed Riemannian manifold $(M^n,g_0)$ of dimension $\geq 8$, suppose there exists $p \in M^n$ such that the Weyl tensor $W_{g_0}(p) \neq 0$, then $q(g_0)<q(S^n)$.
\end{theorem}
We should point out here that some ideas in the proof of Theorem \ref{Thm1} come from \cite{er}.
The structure of this paper is as follows: In section \ref{sec2}, some results standard to experts in this field are presented. In section \ref{sec3}, we establish the main result of this paper, Theorem \ref{Thm1}. In section \ref{sec4}, Theorem \ref{Thm1} is applied to recover the sequential convergence of the nonlocal $\mathbf{Q}$-curvature flow in \cite{gur_mal}.
\begin{remark}
From the comments of Fr\'{e}d\'{e}ric Robert, the two formulae (21) and (22) on page 511 in \cite{er} may also yield Theorem \ref{Thm1}. For the readers' convenience, one may compare their computations with ours in the proof of Theorem \ref{Thm1}.
\end{remark}
{\bf Acknowledgments:}
The author is partially supported through NSFC (No.11201223) and the Program for New Century Excellent Talents in University (NCET-13-0271). He would like to thank Dr. Yalong Shi for many stimulating discussions, especially on Lemma 3.1, and Professor Xingwang Xu for helpful comments and precious advice on the nonlocal $\mathbf{Q}$-curvature flow.
\section{Preliminaries}\label{sec2}
\indent \indent The following results in this section are standard for experts in this field. Let $(M^n,g_0)$ be a closed Riemannian manifold. Define an energy functional on $C^\infty(M^n)$ by
$$E_{g_0}[u]=\int_{M^n} u P_{g_0}(u)d\mu_{g_0},$$
where $P_{g_0}$ is defined as (\ref{P_g}).
\begin{lemma}\label{conf_inv_P_g}
For any metric $g$
in the conformal class of $g_0$, the Paneitz-Sobolev constant
$$q(g)=q(M^n, g)\equiv\inf\left\{\frac{\int_{M^n}w P_g(w)d\mu_g}
{(\int_{M^n}w^{2n \over n-4}d\mu_g)^{n-4 \over n}}; w \in
C^\infty(M^n)\setminus \{0\}\right\}$$ is independent of the choice of
metric in the conformal class of $g_0$.
\end{lemma}
\begin{proof} For any $g_1,g_2$ in the conformal class of $g_0$, there exists some positive smooth function $\varphi$
such that $g_2=\varphi^{4 \over n-4}g_1$. Thus by the definition of
$q(g_1)$ and (\ref{conformal-invariant}), for any $w \in
C^\infty(M^n)\setminus \{0\}$, there holds
\begin{eqnarray*}
q(g_1) \leq \frac{\int_{M^n}w \varphi P_{g_1}(w
\varphi) d\mu_{g_1}}{(\int_{M^n} (w \varphi)^{2n \over
n-4}d\mu_{g_1})^{n-4 \over n}}=\frac{\int_{M^n}w
P_{g_2}(w)d\mu_{g_2}}{(\int_{M^n}w^{2n \over n-4}d\mu_{g_2})^{n-4
\over n}}.
\end{eqnarray*}
Taking the infimum over all $w \in
C^\infty(M^n)\setminus \{0\}$ in the above inequality shows $q(g_1)\leq q(g_2)$. Exchanging $g_1$ and $g_2$, we also obtain $q(g_2) \leq q(g_1)$. Thus $q(g_1)=q(g_2)$ holds for any conformal metrics $g_1, g_2$ of $g_0$.
\end{proof}
Next we establish the so-called ``Kazdan-Warner'' condition for the prescribed $Q$-curvature problem on $S^n$. \\
\indent Let $g=u^{4 \over n-4} g_{S^n}$ with $0<u \in
C^\infty(S^n)$, in particular set $\varphi=1$ in \eqref{conformal-invariant}, there holds
$$P_{S^n}u={n-4 \over 2} Q_g u^{n+4 \over n-4} \text{~~on~~} S^n.$$
Let $\phi$ be a conformal transformation on $S^n$, and define the companion of $u$ related
to $\phi$ by
$$v=(u\circ \phi)|\det d\phi|^{n-4 \over 2n}, \text{~~and~~}\phi^\ast(g_{S^n})=|\det d\phi|^{2/n}g_{S^n}.$$
\begin{lemma}\label{kazdan_warner_con}
For any conformal vector field $X$ on $(S^n,g_{S^n})$, there hold $E_{g_{S^n}}[v]=E_{g_{S^n}}[u]$ and
$$\int_{S^n}\langle X,\nabla Q_g\rangle_{S^n} u^{2n \over n-4} d\mu_{S^n}=0,$$
where $Q_g$ is the $Q$-curvature of the conformal metric $g=u^{4
\over n-4}g_{S^n}$.
\end{lemma}
\begin{proof} By \eqref{conformal-invariant} and variable
change formula, a direct computation yields
\begin{eqnarray*}
E_{g_{S^n}}[v]&=&\int_{S^n}v P_{S^n}v d\mu_{S^n}\\
&=&\int_{S^n} (u\circ \phi)|\det d\phi|^{n-4 \over 2n}
P_{S^n}( (u\circ \phi)|\det d\phi|^{n-4 \over 2n})d\mu_{S^n}\\
&=&\int_{S^n}(u\circ \phi) P_{|\det d\phi|^{2/n}g_{S^n}} (u\circ \phi) (\det d\phi) d\mu_{S^n}\\
&=&\int_{S^n}(u\circ \phi) P_{\phi^\ast(g_{S^n})}(u\circ \phi) d\mu_{\phi^\ast(g_{S^n})}\\
&=&\int_{S^n}u P_{S^n}(u)d\mu_{S^n}=E_{g_{S^n}}[u].
\end{eqnarray*}
For the second assertion, denote by $\phi(t)$ the family of smooth
conformal transformations on $S^n$ induced by the vector field $X$
with $\phi(0)=\text{id}$, and set its corresponding conformal vector
field $\xi(t)=(d\phi(t))^\ast {d \phi \over dt}$. In particular,
$X=\xi(0)$. Define the companion of $u$ relating to $\phi(t)$ by
$$w(t)=(u\circ\phi(t))|\det d\phi(t)|^{n-4 \over 2n} \text{~~or~~} w(t)^{4 \over n-4}g_{S^n}=\phi(t)^\ast(g).$$
From conformal covariance (\ref{conformal-invariant}) of $P_{S^n}$,
$w$ solves
\begin{equation}\label{Q-curvature_w}
P_{S^n}(w)={n-4 \over 2} (Q_g \circ \phi) w^{n+4 \over n-4}
\text{~~on~~} S^n.
\end{equation}
Differentiating (\ref{Q-curvature_w}) with respect to $t$ yields
\begin{equation}\label{eqn_w_t}
P_{S^n}(w_t)={n-4 \over 2}\Big[\xi \cdot d(Q_g \circ \phi)w^{n+4 \over
n-4}+{n+4 \over n-4}(Q_g \circ \phi)w^{8 \over n-4}w_t\Big].
\end{equation}
Since the volume
$$\int_{S^n}w(t)^{2n \over n-4}d\mu_{S^n}=\int_{S^n}u^{2n
\over n-4}d\mu_{S^n}$$
is preserved, it follows that
\begin{equation}\label{vol_t}
\int_{S^n}w^{n+4 \over n-4} w_t d\mu_{S^n}=0.
\end{equation}
Moreover, since $P_{S^n}$ is self-adjoint, the first assertion of this lemma together with
(\ref{Q-curvature_w}) gives
$$0={d \over dt}E_{g_{S^n}}[w(t)]=2\int_{S^n}w_t P_{S^n}(w)\, d\mu_{S^n}
=(n-4)\int_{S^n}(Q_g \circ \phi)w^{n+4 \over n-4}w_t\, d\mu_{S^n}.$$
Combining this with (\ref{eqn_w_t}), we obtain
\begin{eqnarray*}
0&=&{d \over dt}E_{g_{S^n}}[w(t)]=2\int_{S^n}P_{S^n}(w_t)w\, d\mu_{S^n}\\
&=& (n-4) \int_{S^n}\xi \cdot d(Q_g \circ \phi)w^{2n \over n-4}d\mu_{S^n}
+(n+4)\int_{S^n}(Q_g \circ \phi)w^{n+4 \over n-4}w_t\, d\mu_{S^n}\\
&=&(n-4) \int_{S^n}\xi \cdot d(Q_g \circ \phi) w^{2n \over
n-4}d\mu_{S^n}.
\end{eqnarray*}
Therefore, the desired assertion follows from the above identity
evaluated at $t=0$.
\end{proof}
\section{Proof of the main result}\label{sec3}
\indent \indent In order to prove the main Theorem \ref{Thm1}, we first need an elementary result.
\begin{lemma}\label{lem1}
For any fixed $\epsilon>0$, there exist constants $C_1(n,\epsilon)>0$ and $C_2(n,\epsilon)>0$ such that, as $\alpha \to 0^+$,
\begin{eqnarray*}
&&\int_{0}^{\frac{\epsilon}{\alpha}}\Big[1-\tfrac{(n-4)(n^2-4n+8)}{n(n-2)}\tfrac{\sigma^4}{(1+\sigma^2)^2}\Big](1+\sigma^2)^{4-n}\sigma^{n-1}d\sigma\\
&=&\left\{ \begin{array}{ll}
-C_1(n,\epsilon)+o(1), \quad &\hbox{~~if~~} n>8;\\
C_2(n,\epsilon)\log \alpha+O(1), \quad &\hbox{~~if~~} n=8.
\end{array}
\right.
\end{eqnarray*}
\end{lemma}
\begin{proof}
By a direct computation, one has
\begin{eqnarray*}
&&\int \sigma^{n-1}(1+\sigma^2)^{4-n}d\sigma \\
&=& {1\over n}\sigma^n(1+\sigma^2)^{4-n}+{2(n-4)\over n}\int \sigma^{n+1}(1+\sigma^2)^{3-n}d\sigma\\
&=& {1\over n}\sigma^n(1+\sigma^2)^{4-n}+{2(n-4) \over n(n+2)} \sigma^{n+2}(1+\sigma^2)^{3-n}\\
& & +{4(n-4)(n-3)\over n(n+2)}\int \sigma^{n+3} (1+\sigma^2)^{2-n} d\sigma.
\end{eqnarray*}
Thus we conclude that
\begin{eqnarray*}
&&\int_{0}^{\frac{\epsilon}{\alpha}}\Big[1-\tfrac{(n-4)(n^2-4n+8)}{n(n-2)}\tfrac{\sigma^4}{(1+\sigma^2)^2}\Big](1+\sigma^2)^{4-n}\sigma^{n-1}d\sigma\\
&=& {1\over n}(\tfrac{\epsilon}{\alpha})^n(1+(\tfrac{\epsilon}{\alpha})^2)^{4-n}+\tfrac{2(n-4)}{n(n+2)} (\tfrac{\epsilon}{\alpha})^{n+2}(1+(\tfrac{\epsilon}{\alpha})^2)^{3-n}\\
& & +\tfrac{n-4}{ n}\Big(\tfrac{4(n-3)}{n+2}-\tfrac{n^2-4n+8}{n-2}\Big)\int_0^{\tfrac{\epsilon}{\alpha}} \sigma^{n+3} (1+\sigma^2)^{2-n} d\sigma\\
&=& \tfrac{1}{n}(\tfrac{\epsilon}{\alpha})^n(1+(\tfrac{\epsilon}{\alpha})^2)^{4-n}+\tfrac{2(n-4)}{n(n+2)} (\tfrac{\epsilon}{\alpha})^{n+2}(1+(\tfrac{\epsilon}{\alpha})^2)^{3-n}\\
& & -\tfrac{n-4}{n(n+2)(n-2)}\big[(n-8)(n^2+2n+36)+280\big]\int_0^{\epsilon\over\alpha} \sigma^{n+3} (1+\sigma^2)^{2-n} d\sigma.
\end{eqnarray*}
Note that when $n>8$, the first two terms on the right-hand side of the last identity tend to $0$ as $\alpha\to 0$, while the last term has a negative limit depending only on $n$. When $n=8$, the first two terms have finite limits, but the last term diverges to $-\infty$ like a positive constant times $\log \alpha$. Therefore, the proof is complete.
\end{proof}
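The asymptotics of Lemma \ref{lem1} can be confirmed numerically (a hypothetical pure-Python sketch, not part of the original argument; here \texttt{T} stands for $\epsilon/\alpha$, a midpoint rule replaces the exact integrals, and \texttt{I\_closed} encodes the right-hand side obtained in the proof):

```python
import math

def I(n, T, steps=200000):
    # midpoint-rule value of the integral in the lemma over [0, T], T = eps/alpha
    c = (n - 4) * (n**2 - 4*n + 8) / (n * (n - 2))
    h = T / steps
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) * h
        total += (1 - c * s**4 / (1 + s**2)**2) * (1 + s**2)**(4 - n) * s**(n - 1)
    return total * h

def I_closed(n, T, steps=200000):
    # expression from the proof: two boundary terms minus a positive
    # multiple (for n >= 8) of the remaining integral
    h = T / steps
    tail = sum(((k + 0.5) * h)**(n + 3) * (1 + ((k + 0.5) * h)**2)**(2 - n)
               for k in range(steps)) * h
    coef = (n - 4) * ((n - 8) * (n**2 + 2*n + 36) + 280) / (n * (n + 2) * (n - 2))
    return (T**n * (1 + T**2)**(4 - n) / n
            + 2 * (n - 4) / (n * (n + 2)) * T**(n + 2) * (1 + T**2)**(3 - n)
            - coef * tail)
```

For $n>8$ the two expressions agree up to discretization error and stabilize at a negative constant as $T\to\infty$; for $n=8$ the value is negative and grows in absolute value like $\log T$.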
\noindent{\bf The proof of Theorem \ref{Thm1}.~~}
By Theorem 5.1 in \cite{lp}, for any fixed $N \geq 5$ there exists some conformal metric $g$ in the conformal class of $g_0$ admitting conformal normal coordinates at $p$, such that in a chart near $p$ there hold
\begin{eqnarray*}
&&\det g=1+O(r^N), \quad r=|x|=\hbox{dist}(x,p),\\
&&R_g=O(r^2), \quad -\Delta_g R_g(p)=\frac{1}{6}|W(p)|^2.
\end{eqnarray*}
Using the above facts and (\ref{def_Q-curv}), one obtains
$$Q_g=\frac{|W(p)|^2}{12(n-1)}+O(r).$$
With its corresponding polar coordinates $\{(r, \varphi); \varphi \in S^{n-1} \}$ and polar metric $\tilde{g}$, there hold
$$\tilde{g}_{rr}=1, \quad \tilde{g}_{r\varphi}=0.$$
Thus we obtain
\begin{eqnarray*}
\sqrt{\det g} ~dx^1 \wedge \cdots \wedge dx^n=r^{n-1}\sqrt{\det \tilde{g}} ~dr \wedge d\Omega_{S^{n-1}},
\end{eqnarray*}
where $d\Omega_{S^{n-1}}$ denotes the volume form on $S^{n-1}$. As shown in \cite{gur_mal}, for any radial function $u$, one has
$$\Delta_g u=\Delta_0 u+O(r^{N-1})u',$$
where $\Delta_0$ denotes the Euclidean Laplacian.
Let $\eta_\epsilon \in C_c^\infty(\mathbb{R}^n)$ be a cutoff function for $\epsilon>0$, $\eta_\epsilon=1$ in $B_\epsilon(0)$ and $\eta_\epsilon=0$ outside of $B_{2\epsilon}(0)$. Let
$$u_\alpha(x)=\Big(\frac{2 \alpha}{\alpha^2+|x|^2}\Big)^{\frac{n-4}{2}}, \hbox{~~for any~~} \alpha>0,$$
then,
$$q(S^n)=\frac{n-4}{2}Q_{S^n}\omega_n^{\frac{4}{n}}=\frac{\int_{\mathbb{R}^n}|\Delta_0 u_\alpha|^2 dx}{\Big(\int_{\mathbb{R}^n}u_\alpha^{\frac{2n}{n-4}}dx\Big)^{\frac{n-4}{n}}},$$
where $\omega_n=\hbox{vol}(S^n,g_{S^n})$ denotes the volume of the standard sphere $S^n$. For the Paneitz-Sobolev constant $q(S^n)$, one may refer to Proposition 1.1 in \cite{dhl}.
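The stated value of $q(S^n)$ can be checked numerically for $n=8$ by evaluating the Euclidean quotient on the bubble $u_\alpha$ directly (a hypothetical sketch: the Laplacian is obtained by central finite differences rather than any closed form, the radial integrals by a midpoint rule on $[0,150]$, and $\alpha=1$ is chosen for concreteness):

```python
import math

n, a = 8, 1.0

def u(r):
    # the extremal "bubble" u_alpha with alpha = 1
    return (2 * a / (a * a + r * r)) ** ((n - 4) / 2)

def lap(r, h=1e-4):
    # radial Euclidean Laplacian u'' + (n-1)/r u', via central differences
    d1 = (u(r + h) - u(r - h)) / (2 * h)
    d2 = (u(r + h) - 2 * u(r) + u(r - h)) / (h * h)
    return d2 + (n - 1) / r * d1

R, steps = 150.0, 300000
h = R / steps
num = den = 0.0
for k in range(steps):
    r = (k + 0.5) * h
    w = r ** (n - 1) * h                     # radial volume element
    num += lap(r) ** 2 * w
    den += u(r) ** (2 * n / (n - 4)) * w
omega7 = 2 * math.pi ** (n / 2) / math.gamma(n / 2)              # vol(S^{n-1})
ratio = omega7 * num / (omega7 * den) ** ((n - 4) / n)

omega8 = 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)  # vol(S^n)
target = (n - 4) / 2 * (n * (n * n - 4) / 8) * omega8 ** (4 / n)
```

As a by-product, $\int_{\mathbb{R}^n}u_\alpha^{2n/(n-4)}dx$ reproduces $\omega_n$, since $u_\alpha^{4/(n-4)}\delta$ is the round metric pulled back by stereographic projection.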
Notice that
\begin{eqnarray*}
&&u_\alpha'=-(n-4)\frac{r}{\alpha^2+r^2}u_\alpha,\quad u_\alpha''=\frac{(n-4)[(n-3)r^2-\alpha^2]}{(\alpha^2+r^2)^2}u_\alpha,\\
&&\Delta_0 u_\alpha=u_\alpha''+\frac{n-1}{r}u_\alpha'=-\frac{n-4}{(\alpha^2+r^2)^2}u_\alpha[2r^2+n\alpha^2].
\end{eqnarray*}
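These closed-form identities are easy to spot-check by finite differences (a hypothetical snippet with arbitrary sample values $n=10$, $\alpha=0.3$, $r=0.7$; the expected values are $u_\alpha''=\tfrac{(n-4)[(n-3)r^2-\alpha^2]}{(\alpha^2+r^2)^2}u_\alpha$ and $\Delta_0 u_\alpha=-\tfrac{(n-4)(2r^2+n\alpha^2)}{(\alpha^2+r^2)^2}u_\alpha$):

```python
n, a, r, h = 10, 0.3, 0.7, 1e-5  # arbitrary sample values and FD step

def u(s):
    # the bubble u_alpha as a function of the radius s
    return (2 * a / (a * a + s * s)) ** ((n - 4) / 2)

# closed forms for u', u'' and the radial Laplacian
d1_cf = -(n - 4) * r / (a * a + r * r) * u(r)
d2_cf = (n - 4) * ((n - 3) * r**2 - a * a) / (a * a + r * r) ** 2 * u(r)
lap_cf = -(n - 4) * (2 * r**2 + n * a * a) / (a * a + r * r) ** 2 * u(r)

# central finite differences
d1_fd = (u(r + h) - u(r - h)) / (2 * h)
d2_fd = (u(r + h) - 2 * u(r) + u(r - h)) / (h * h)
lap_fd = d2_fd + (n - 1) / r * d1_fd
```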
For any fixed $\epsilon>0$, choose
$$\varphi_\alpha(x)=\eta_\epsilon(x)u_\alpha(x),$$
then
\begin{eqnarray*}
&&\int_{M^n}\varphi_\alpha P_g \varphi_\alpha d\mu_g\\
&=&\int_{M^n}\Big[ |\Delta_g \varphi_\alpha|^2-4A_g(\nabla \varphi_\alpha,\nabla \varphi_\alpha)+(n-2)\sigma_1(A_g)|\nabla \varphi_\alpha|_g^2+\frac{n-4}{2}Q_g \varphi_\alpha^2 \Big]d\mu_g.
\end{eqnarray*}
We start to compute term by term on the right hand side of the above identity.
(i) To estimate
\begin{eqnarray*}
&&\int_{M^n}|\Delta_g \varphi_\alpha|^2 d\mu_g\\
&=&\int_{B_\epsilon(0)}|\Delta_0 u_\alpha+O(r^{N-1})u_\alpha'|^2 (1+O(r^N))dx+\int_{A_\epsilon}|\Delta_g \varphi_\alpha|^2 d\mu_g,
\end{eqnarray*}
where $A_\epsilon=B_{2\epsilon}\setminus B_\epsilon(0)$.
It is not hard to check that
\begin{eqnarray*}
\int_{\mathbb{R}^n \setminus B_\epsilon(0)}|\Delta_0 u_\alpha|^2 dx&=&O\Big(\Big(\frac{\alpha}{\epsilon}\Big)^{n-4}\Big);\\
\int_{A_\epsilon}|\Delta_0 \varphi_\alpha|^2 dx&=&\int_{A_\epsilon}|\Delta_0 \eta_\epsilon u_\alpha + 2 \eta_\epsilon' u_\alpha'+\eta_\epsilon \Delta_0 u_\alpha|^2 dx\\
&\leq& 2 \int_{A_\epsilon}[|\Delta_0 \eta_\epsilon u_\alpha|^2+|\Delta_0 u_\alpha|^2]dx\\
&=&O(\epsilon^{8-n}\alpha^{n-4})+O(\epsilon^{4-n}\alpha^{n-4});\\
\int_{A_\epsilon} |\Delta_g \varphi_\alpha |^2 d\mu_g&=&\int_{A_\epsilon}|\Delta_0 \varphi_\alpha+O(r^{N-1})\varphi_\alpha'|^2 (1+O(r^N)) dx\\
&=&O(\epsilon^{4-n}\alpha^{n-4})+O(\epsilon^{2N-n+4}\alpha^{n-4}).
\end{eqnarray*}
Thus, we obtain
$$\int_{M^n}|\Delta_g \varphi_\alpha|^2 d\mu_g=\int_{\mathbb{R}^n}|\Delta_0 u_\alpha|^2 dx+O_\epsilon(\alpha^{n-4}).$$
(ii) To estimate
\begin{eqnarray*}
&&\int_{M^n}Q_g \varphi_\alpha^2 d\mu_g\\
&=&\int_{B_\epsilon(0)}\big(\tfrac{|W(p)|^2}{12(n-1)}+O(r)\big)u_\alpha^2(1+O(r^N))dx+\int_{A_\epsilon}\big(\tfrac{|W(p)|^2}{12(n-1)}+O(r)\big)\varphi_\alpha^2 (1+O(r^N))dx.
\end{eqnarray*}
Thus one has
\begin{eqnarray*}
&&\frac{n-4}{2}\int_{M^n}Q_g \varphi_\alpha^2 d\mu_g\\
&=&\frac{n-4}{24(n-1)}|W(p)|^2\int_{B_\epsilon(0)}u_\alpha^2 dx+\int_{B_{\epsilon}(0)}O(r) u_\alpha^2 dx+\int_{A_\epsilon}O(1)u_\alpha^2 dx\\
&=&\frac{n-4}{24(n-1)}|W(p)|^2\int_{B_\epsilon(0)}u_\alpha^2 dx+O_\epsilon(\alpha^{n-4}).
\end{eqnarray*}
(iii) To estimate
\begin{eqnarray*}
&&-4\int_{B_\epsilon(0)}A_g(\nabla u_\alpha, \nabla u_\alpha)d\mu_g\\
&=&-4\int_{B_\epsilon(0)}\big(A_{ij}(p)+A_{ij,k}(p)x^k+\tfrac{1}{2}A_{ij,kl}(p)x^kx^l+O(r^3)\big)x^ix^j r^{-2}|u_\alpha'|^2(1+O(r^N))dx\\
&=&-2\int_{B_\epsilon(0)}\big(A_{ij,kl}(p)x^kx^l+O(r^3)\big)x^ix^jr^{-2}|u_\alpha'|^2 dx\\
&=&-\frac{|W(p)|^2}{6n(n-1)(n-2)}\int_{B_\epsilon(0)}r^2 |u_\alpha'|^2 dx+\int_{B_\epsilon(0)}O(r^3)|u_\alpha'|^2 dx.
\end{eqnarray*}
Observe that
\begin{eqnarray*}
&&\int_{B_\epsilon(0)}O(r^3)|u_\alpha'|^2dx\\
&=& \int_0^{\epsilon}O(r^3)\frac{r^2}{(\alpha^2+r^2)^2}\Big(\frac{2\alpha}{\alpha^2+r^2}\Big)^{n-4}r^{n-1}dr\\
&\stackrel{\sigma=\tfrac{r}{\alpha}}{=}&O(\alpha^5)\Big[O(1)+\int_1^{\tfrac{\epsilon}{\alpha}}\sigma^{8-n}d\sigma\Big]\\
&=&\left\{\begin{array}{lll}
O_\epsilon(\alpha^4), \quad & \hbox{~~if~~} n=8;\\
O_\epsilon(\alpha^5 \log \alpha^{-1}), \quad &\hbox{~~if~~} n=9;\\
O_\epsilon(\alpha^5), \quad &\hbox{~~if~~} n\geq 10.
\end{array}
\right.
\end{eqnarray*}
and
\begin{eqnarray*}
&&\int_{A_\epsilon}A_g(\nabla \varphi_\alpha, \nabla \varphi_\alpha)d\mu_g\\
&=&O_\epsilon (1) \int_{A_\epsilon} \big[|u_\alpha'|^2+|u_\alpha|^2\big]dx\\
&=&O_\epsilon(\alpha^{n-4}).
\end{eqnarray*}
Thus, we obtain
\begin{eqnarray*}
&&-4\int_{B_\epsilon(0)}A_g(\nabla u_\alpha, \nabla u_\alpha)d\mu_g\\
&=&-\frac{|W(p)|^2}{6n(n-1)(n-2)}\int_{B_\epsilon(0)}r^2 |u_\alpha'|^2 dx+\left\{\begin{array}{lll}
O_\epsilon(\alpha^4), \quad & \hbox{~~if~~} n=8;\\
O_\epsilon(\alpha^5 \log \alpha^{-1}), \quad &\hbox{~~if~~} n=9;\\
O_\epsilon(\alpha^5), \quad &\hbox{~~if~~} n\geq 10.
\end{array}
\right.
\end{eqnarray*}
(iv) To estimate
\begin{eqnarray*}
&&(n-2)\int_{B_\epsilon(0)}\sigma_1(A_g)|u_\alpha'|^2 d\mu_g\\
&=&\frac{n-2}{2(n-1)}\int_{B_\epsilon(0)}R_g |u_\alpha'|^2 (1+O(r^N))dx\\
&=&-\frac{n-2}{24n(n-1)}|W(p)|^2 \int_{B_\epsilon(0)}r^2|u_\alpha'|^2 dx+\int_{B_\epsilon(0)}O(r^3)|u_\alpha'|^2 dx.
\end{eqnarray*}
Together with some computations of (iii), we obtain
\begin{eqnarray*}
&& (n-2)\int_{M^n}\sigma_1(A_g)|u_\alpha'|^2 d\mu_g\\
&=&-\frac{n-2}{24n(n-1)}|W(p)|^2 \int_{B_\epsilon(0)}r^2|u_\alpha'|^2 dx
+\left\{\begin{array}{lll}
O_\epsilon(\alpha^4), \quad & \hbox{~~if~~} n=8;\\
O_\epsilon(\alpha^5 \log \alpha^{-1}), \quad &\hbox{~~if~~} n=9;\\
O_\epsilon(\alpha^5), \quad &\hbox{~~if~~} n\geq 10.
\end{array}
\right.
\end{eqnarray*}
Consequently, we are in a position to compute the coefficient of $|W(p)|^2$ by Lemma \ref{lem1}:
\begin{eqnarray*}
&&\frac{n-4}{24(n-1)}\int_{B_\epsilon(0)}u_\alpha^2 dx-\frac{n^2-4n+8}{24n(n-1)(n-2)}\int_{B_\epsilon(0)}r^2 |u_\alpha'|^2 dx\\
&=&\frac{n-4}{24(n-1)}\int_{B_\epsilon(0)}u_\alpha^2\Big[1-\frac{(n-4)(n^2-4n+8)}{n(n-2)}\frac{r^4}{(\alpha^2+r^2)^2}\Big]dx\\
&\stackrel{\sigma=\tfrac{r}{\alpha}}{=}&\frac{(n-4)2^{n-4}\omega_{n-1}}{24(n-1)}\alpha^4 \int_{0}^{\frac{\epsilon}{\alpha}}\Big[1-\tfrac{(n-4)(n^2-4n+8)}{n(n-2)}\tfrac{\sigma^4}{(1+\sigma^2)^2}\Big](1+\sigma^2)^{4-n}\sigma^{n-1}d\sigma\\
&=&-\left\{ \begin{array}{ll}
O_\epsilon(\alpha^4), \quad &\hbox{~~if~~} n>8;\\
O_\epsilon(\alpha^4 \log \alpha^{-1}), \quad &\hbox{~~if~~} n=8.
\end{array}
\right.
\end{eqnarray*}
(v) To estimate
\begin{eqnarray*}
&&\int_{M^n}\varphi_\alpha^{\frac{2n}{n-4}}d\mu_g\\
&=&\int_{\mathbb{R}^n}u_\alpha^{\frac{2n}{n-4}}dx+\Big[-\int_{\mathbb{R}^n\setminus B_\epsilon(0)}u_\alpha^{\frac{2n}{n-4}}dx+\int_{A_\epsilon}\varphi_\alpha^{\frac{2n}{n-4}}(1+O(r^N))dx+\int_{B_\epsilon(0)}u_\alpha^{\frac{2n}{n-4}}O(r^N)dx\Big]\\
&=&\int_{\mathbb{R}^n}u_\alpha^{\frac{2n}{n-4}}dx+O((\tfrac{\alpha}{\epsilon})^n).
\end{eqnarray*}
Thus, we obtain
$$\Big(\int_{M^n}\varphi_\alpha^{\frac{2n}{n-4}}d\mu_g\Big)^{\frac{n-4}{n}}=\Big(\int_{\mathbb{R}^n}u_\alpha^{\frac{2n}{n-4}}dx\Big)^{\frac{n-4}{n}}+O((\tfrac{\alpha}{\epsilon})^n).$$
Therefore, putting these above facts together, we conclude that
\begin{eqnarray*}
q(g)&\leq&\frac{\int_{M^n}\varphi_\alpha P_g \varphi_\alpha d\mu_g}{\Big(\int_{M^n}\varphi_\alpha^{\frac{2n}{n-4}}d\mu_g\Big)^{\frac{n-4}{n}}}\\
&=&q(S^n)- \left\{\begin{array}{ll}
O_\epsilon(\alpha^4)|W(p)|^2+O_\epsilon(\alpha^5), &\hbox{~~if~~} n\geq10\\
O_\epsilon(\alpha^4)|W(p)|^2+O_\epsilon(\alpha^5 \log \alpha^{-1}), &\hbox{~~if~~} n=9\\
O_\epsilon(\alpha^4 \log \alpha^{-1})|W(p)|^2+O_\epsilon(\alpha^4), &\hbox{~~if~~} n=8
\end{array}
\right.\\
&<&q(S^n)
\end{eqnarray*}
for all $n \geq 8$ by choosing $\alpha$ sufficiently small. \hfill $\Box$
\section{An application to the convergence of the nonlocal $\mathbf{Q}$-curvature flow}\label{sec4}
\indent\indent Recently, Gursky and Malchiodi \cite{gur_mal} introduced a nonlocal $Q$-curvature flow on a closed Riemannian manifold $(M^n,g_0)$ of dimension $n \geq 5$:
\begin{eqnarray}
\frac{\partial u}{\partial t}&=&-u+\mu P_{g_0}^{-1}\big(|u|^{\tfrac{n+4}{n-4}}\big)\label{gm_Qflow}\\
u(0,x)&=&u_0\label{initial data}
\end{eqnarray}
for some initial data $u_0 \in C^\infty_\ast$, where
$$\mu(t)=\frac{\int_{M^n}uP_{g_0} u\, d\mu_{g_0}}{\Big(\int_{M^n}u^{\tfrac{2n}{n-4}}d\mu_{g_0}\Big)^{\tfrac{n-4}{n}}}$$
and
$$C^\infty_\ast\equiv\{w \in C^\infty(M^n,g_0); w>0, P_{g_0}w \geq 0\}.$$
Assume that $Q_{g_0}$ is semi-positive and that the scalar curvature $R_{g_0}$ is nonnegative; these assumptions imply that the Paneitz-Sobolev constant $q(g_0)=q(M^n,g_0)$ is positive by Proposition 2.3 in \cite{gur_mal}. Moreover, the positivity of $u$ is preserved along the nonlocal $Q$-curvature flow, and the long time existence of the above nonlocal $\mathbf{Q}$-curvature flow is established in \cite{gur_mal}. From now on, we adopt the above assumptions in this section. The flow metric satisfies the $Q$-curvature equation
$$P_{g_0}u=\frac{n-4}{2}Q_g u^{\tfrac{n+4}{n-4}}, \qquad u>0 \hbox{~~on~~} M^n,$$
where $Q_g$ is the $Q$-curvature of the flow metric $g(t)=u(t)^{\frac{4}{n-4}}g_0$.
From Lemma 3.3 in \cite{gur_mal}, $\mu(t)$ is non-increasing and uniformly bounded above and below by two positive constants, as is the volume $\int_{M^n}d\mu_{g(t)}$ of the flow metric; it then follows that
$$\lim_{t \to \infty}\mu(t)=\mu_\infty>0.$$
For brevity, set
$$\varphi=-u+\mu P_{g_0}^{-1}\big(|u|^{\tfrac{n+4}{n-4}}\big)$$
and
$$F_2(t)=\int_{M^n}\varphi P_{g_0} \varphi d\mu_{g_0}.$$
We can further determine the asymptotic behavior of $F_2(t)$ as $t \to \infty$.
\begin{lemma}
There holds
$$\lim\limits_{t \to \infty}F_2(t)=0.$$
\end{lemma}
\begin{proof}
By \eqref{gm_Qflow}, a direct computation yields
\begin{eqnarray*}
\frac{1}{2}\frac{d}{dt}F_2(t)&=&\int_{M^n}\varphi P_{g_0}\varphi_t d\mu_{g_0}\\
&=&-\int_{M^n}\varphi P_{g_0}\varphi d\mu_{g_0}+\mu_t\int_{M^n}\varphi u^{\tfrac{n+4}{n-4}}d\mu_{g_0}
+\frac{n+4}{n-4}\mu \int_{M^n}u^{\tfrac{8}{n-4}}\varphi^2 d\mu_{g_0}.
\end{eqnarray*}
By Lemma 3.3 in \cite{gur_mal} and H\"{o}lder's inequality, one has
\begin{eqnarray*}
\Big|\mu_t\int_{M^n}\varphi u^{\tfrac{n+4}{n-4}}d\mu_{g_0}\Big|&\leq& C F_2(t)\Big(\int_{M^n}\varphi^{2n \over n-4}d\mu_{g_0}\Big)^{n-4 \over 2n}\Big(\int_{M^n} u^{2n \over n-4}d \mu_{g_0}\Big)^{n+4 \over 2n}\\
&\leq& C q(g_0)^{-{1 \over 2}} F_2^{3 \over 2}(t).
\end{eqnarray*}
By H\"{o}lder's inequality, we estimate
\begin{eqnarray*}
\Big|\tfrac{n+4}{n-4}\mu\int_{M^n} u^{8 \over n-4} \varphi^2 d\mu_{g_0}\Big|&\leq& C\Big(\int_{M^n}u^{2n \over n-4}d\mu_{g_0}\Big)^{4 \over n}\Big(\int_{M^n}\varphi^{2n \over n-4}d\mu_{g_0}\Big)^{n-4 \over n}\\
&\leq& C q(g_0)^{-1} F_2(t).
\end{eqnarray*}
Thus, we obtain
\begin{equation}\label{diff_ineq1}
\frac{d}{dt} F_2(t) \leq CF_2(t)(1+F_2(t)^{1 \over 2}).
\end{equation}
From the estimate $\int_0^\infty F_2(t)dt< \infty$, there exists a sequence $\{t_j\}$ with $t_j \to \infty$ as $j \to \infty$, such that
$$\lim_{j \to \infty}F_2(t_j)=0.$$
Set
$$H(t)=\int_0^{F_2(t)}\frac{ds}{1+s^{1 \over 2}}=2F_2(t)^{\tfrac{1}{2}}-2\log\big(1+F_2(t)^{1 \over 2}\big),$$
then, by the same argument as in \cite{chxu}, we assert that there exists some uniform constant $C_0>0$ such that
$$H(t) \geq C_0 F_2(t)$$
for sufficiently large $t \geq 0$. Using
$\lim\limits_{j \to \infty}H(t_j)=0$ and \eqref{diff_ineq1}, for any $t \geq t_j$, we obtain
$$H(t) \leq H(t_j)+C \int_{t_j}^t F_2(\tau)d\tau.$$
Then, we conclude that
$$\lim\limits_{t \to \infty}F_2(t) \leq C_0^{-1}\lim\limits_{t \to \infty}H(t)=0.$$
This completes the proof.
\end{proof}
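The closed form for $H(t)$ used in the proof, and the comparison $H \geq C_0 F_2$ for bounded $F_2$, can be confirmed numerically (a hypothetical sketch; a midpoint rule replaces the defining integral):

```python
import math

def H_closed(F):
    # closed form claimed in the proof: H = 2*sqrt(F) - 2*log(1 + sqrt(F))
    return 2 * math.sqrt(F) - 2 * math.log(1 + math.sqrt(F))

def H_numeric(F, steps=200000):
    # midpoint-rule value of the defining integral of 1/(1 + sqrt(s)) over [0, F]
    h = F / steps
    return sum(h / (1 + math.sqrt((k + 0.5) * h)) for k in range(steps))
```

Since the integrand decreases from $1$, one also sees $H(F)/F \to 1$ as $F \to 0^+$, which is the source of the lower bound $H \geq C_0 F_2$ when $F_2$ stays bounded.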
As a direct consequence, we apply Theorem \ref{Thm1} to recover the sequential convergence of the above $Q$-curvature flow in a special case of \cite{gur_mal}.
\begin{corollary}
Let $(M^n,g_0)$ be a closed Riemannian manifold of dimension $n \geq 8$. Suppose $M^n$ is not locally conformally flat and that Gursky-Malchiodi's assumptions hold true, that is, $Q_{g_0} \geq 0$ and is positive somewhere, and $R_{g_0} \geq 0$. Then the nonlocal $Q$-curvature flow \eqref{gm_Qflow}-\eqref{initial data} is sequentially convergent as $t \to \infty$.
\end{corollary}
\begin{proof}
As in the proof of Theorem \ref{Thm1}, based on the test functions in the proof of Theorem \ref{Thm1} or in \cite{er}, Gursky-Malchiodi set up a scheme to modify them into a sequence $\{\hat{u}^0_n\}$ of positive functions serving as initial data for the flow. Then, by adopting the same argument as in Theorem 6.1 of \cite{gur_mal}, the sequential convergence of the nonlocal $Q$-curvature flow \eqref{gm_Qflow}-\eqref{initial data} follows.
\end{proof}
\section{Introduction}
\label{Intro}
This paper presents a factor ordering of the canonically quantized
Yang-Mills Hamiltonian operator, and a corresponding gauge-invariant
candidate ground state in what might be called the Schr\"{o}dinger
representation. \ As usual in a canonically quantized gauge theory, the
``position'' variable is the vector potential $A$ of the gauge connection. \
The corresponding momentum $E$ is given by the (negative) Yang-Mills
electric field variable. \ These variables satisfy the Poisson bracket
relations
\begin{eqnarray*}
\Bigl\{ A_{i}^{I}\left( x\right) ,A_{j}^{J}\left( y\right) \Bigr\}
= 0 = \left\{ E_{I}^{i}\left( x\right), E_{J}^{j}\left( y\right) \right\},
\ \
\left\{ E_{J}^{j}\left( y\right) ,A_{i}^{I}\left( x\right)
\right\} =\delta ^{3}\left( x,y\right) \delta _{i}^{j}\delta _{J}^{I}
\end{eqnarray*}
and can be promoted to quantum operators as%
\begin{equation}
\begin{array}{l}
\hat{A}_{i}^{I}(x):\psi (A)\rightarrow A_{i}^{I}(x)\psi (A) \\
\hat{E}_{I}^{i}(x):\psi (A)\rightarrow -i\frac{\delta }{\delta A_{i}^{I}(x)}%
\psi (A).%
\end{array}
\label{Sch}
\end{equation}%
The commutators of these quantum operators mirror the classical Poisson
brackets, as required:%
\begin{eqnarray*}
\left[ \hat{A}_{i}^{I}(x),\hat{A}_{j}^{J}(y)\right] =0=\left[ \hat{E}%
_{I}^{i}(x),\hat{E}_{J}^{j}(y)\right] ,\left[ \hat{E}_{J}^{j}(y),\hat{A}%
_{i}^{I}(x)\right] =-i\delta ^{3}\left( x,y\right) \delta _{i}^{j}\delta
_{J}^{I}.
\end{eqnarray*}%
where we have set Planck's constant $\hbar $ equal to 1.
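The functional-derivative realization of $\hat{E}$ can be illustrated on a finite truncation (a toy, hypothetical Python check, not part of the text: the field is replaced by finitely many real modes $A_k$, the functional derivative by a partial derivative computed via central differences, and since every operator here carries a single factor of $-i$, only the real coefficient multiplying $-i$ is tracked):

```python
import math

def psi(A):
    # an arbitrary sample wave functional on the truncated configuration space
    return math.exp(-sum(a * a for a in A)) + A[0] * A[1]

def d_dA(f, A, k, h=1e-6):
    # coefficient g in  E_hat_k f = -i g, with g = df/dA_k (central difference)
    Ap, Am = list(A), list(A)
    Ap[k] += h
    Am[k] -= h
    return (f(Ap) - f(Am)) / (2 * h)

def commutator_coeff(A, k, j):
    # [E_k, A_j] psi = -i * ( d/dA_k (A_j psi) - A_j d/dA_k psi )
    g1 = d_dA(lambda B: B[j] * psi(B), A, k)
    g2 = A[j] * d_dA(psi, A, k)
    return g1 - g2
```

The computed coefficient is $\delta_{kj}\psi(A)$, matching $[\hat{E}_k,\hat{A}_j]=-i\,\delta_{kj}$ at coincident modes.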
The motivation for choosing a canonical quantization lies in the hope of
addressing physical questions relating to ground states, and ultimately
measures on the space of field configurations, for quantized gauge theories.
\ In a canonical approach, gauge invariance is implemented at the quantum
level by means of Dirac constraints. \ The ground state presented here is
in fact automatically gauge-invariant (as well as spatially rotation and
translation invariant) by construction. \ Ideally, full Poincar\'{e}
invariance of the ground state would result from promoting all Poincar\'{e}
generators to quantum conserved quantities and verifying that they annihilate
the ground state as well as exhibit the appropriate commutators.
Formulated in terms of the Ashtekar variables, general relativity is also a
gauge theory, and under a canonical quantization, full diffeomorphism invariance is similarly imposed as a quantum constraint operator (see e.g. \cite{Smolin}).
Yang-Mills theory is a valuable testing ground for ideas to be applied to canonical quantum general relativity. \ For instance the Kodama state, at present the only known candidate ground state for gravity in the Ashtekar variables (see \cite{Smolin}), arose as a generalization of the Chern-Simons state in Yang-Mills theory. \ While exhibiting many positive features, the Kodama state as usually constructed seems to inherit unphysical properties of the Chern-Simons state. \ Alternative candidate ground states for canonical quantum general relativity may aid the ongoing search for a physical inner product. \ Since the Kodama state is a generalization from the Chern-Simons ground state in Yang-Mills theory, a
reasonable effort towards the construction of a normalizable ground state for
quantum general relativity would be to search for a well-behaved ground state
in Yang-Mills theory.
Such being the aim of the current project, we do not attempt to conform to the usual ideals of quantized Yang-Mills theory per se, as envisioned for instance in the formulation of the Clay Millennium Prize Problem \cite{JaffeWitten}. \ That is, we do not seek a quantization likely to yield a mass gap, since in quantum gravity a mass gap is not expected.
In the Schr\"{o}dinger representation (\ref{Sch}), a ground state $\Omega \left(
A\right) $ for the Hamiltonian operator is the first step toward finding a
state space of the form $L^{2}\left( \mathcal{A},d\mu \right) ,$ for some
measure $d\mu $ on the space of connections $\mathcal{A}$. \ Heuristically
speaking, the first candidate for the measure $d\mu $ would be something like%
\begin{equation}
``\left[ \Omega \left( A\right) \right] ^{2}dA,"
\label{YMmeasure}
\end{equation}%
where $``dA"$ is a naive ``Lebesgue'' measure on $\mathcal{A}$. \ Of course,
the usual way (e.g. \cite{GlimmJaffe}) to make sense of such expressions
will encounter resistance on two fronts: \ first, $\Omega \left( A\right) $
will be non-Gaussian for a nonabelian gauge theory, and secondly the space $%
\mathcal{A}$ should in fact be composed of equivalence classes of
connections modulo gauge transformations $\mathcal{A}/\mathcal{G}$, and
hence is not a linear space. \ Similar difficulties are at least partly
addressed within the literature on Yang-Mills path integrals (\cite%
{Rivasseau}, \cite{almmt}); however, arriving at a full rigorous
understanding of a measure such as (\ref{YMmeasure}) is obviously highly
nontrivial. \ In the meantime, however, it would be nice at least to know a
well-behaved candidate ground state for Yang-Mills theory. \ By
well-behaved, we mean that the ground state should decay rapidly for
connections which are ``large'' in some suitable sense -- e.g. of large $%
L^{2}$ norm -- so as to be a normalizable ground state with respect to some
measure on $\mathcal{A}$. \ This is what the Chern-Simons state%
\begin{eqnarray*}
\Psi _{CS}\left( A\right) =\exp \left[ \int_{\Sigma }tr\left( A\wedge dA-%
\frac{2}{3}A\wedge A\wedge A\right)\right]
\end{eqnarray*}%
fails to do, since the Chern-Simons form in the exponent changes sign under
parity (for a good discussion of the Chern-Simons state's problems, see \cite%
{Witten}). \ For the abelian case of free Maxwell theory, a well-behaved
zero-energy ground state has already been written by Wheeler \cite%
{Geometrodynamics} in closed form as%
\begin{equation}
\Omega (A)= \mathcal{N}\exp \left( -\frac{1}{4\pi ^{2}}\int\limits_{%
\mathbb{R}^{3}}\int\limits_{\mathbb{R}^{3}}\frac{\left( \nabla \times
A(x)\right) \cdot \left( \nabla \times A(y)\right) }{\left| x-y\right| ^{2}}%
d^{3}x d^{3}y\right) ,
\label{Wheeler}
\end{equation}%
and in fact for linearized general relativity, Kuchar \cite{Kuchar} derived
the strongly analogous ground state wave functional%
\begin{eqnarray*}
\Psi \left( h\right) =\mathcal{N}\exp \left( -\frac{1}{8\pi ^{2}}%
\int\limits_{\mathbb{R}^{3}}\int\limits_{\mathbb{R}^{3}}\frac{\left(
h_{ik,l}^{TT}(x)\right) \cdot \left( h_{ik,l}^{TT}(y)\right) }{\left|
x-y\right| ^{2}} d^{3}x d^{3}y\right)
\end{eqnarray*}%
in terms of the linearized metric tensor%
\begin{eqnarray*}
h_{ik}=g_{ik}-\eta _{ik}
\end{eqnarray*}%
in transverse traceless gauge (denoted $h_{ik}^{TT}$).
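Heuristically, both Gaussian functionals above are products, over transverse Fourier modes, of harmonic-oscillator ground states with frequency $|k|$; the nonlocal kernel $|x-y|^{-2}$ is just the position-space expression of that mode-by-mode Gaussian. The elementary single-mode statement behind this can be checked by finite differences (a hypothetical snippet with an arbitrary sample frequency $\omega$ standing in for $|k|$, and $\hbar=1$): $\psi=e^{-\omega x^2/2}$ satisfies $(-\tfrac12 \partial_x^2+\tfrac12\omega^2x^2)\psi=\tfrac{\omega}{2}\psi$.

```python
import math

omega = 1.7  # arbitrary sample mode frequency |k|

def psi(x):
    # candidate single-mode ground state exp(-omega x^2 / 2)
    return math.exp(-omega * x * x / 2)

def H_psi(x, h=1e-4):
    # (-1/2 d^2/dx^2 + 1/2 omega^2 x^2) psi, second derivative by central difference
    d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / (h * h)
    return -0.5 * d2 + 0.5 * omega**2 * x * x * psi(x)
```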
The explicit construction of such ground states, however, relies on integral
kernel methods available only for linear theories. \ To deal with the
nonlinearities displayed by full nonabelian Yang-Mills theory or general
relativity, we need a new and more indirect means of finding well-behaved
ground states, reducing in free cases to these known examples.
Thus motivated, we generalize a method developed by Moncrief and Ryan (\cite%
{Moncrief}, \cite{MonRy}, \cite{Ryan}) in nonlinear quantum mechanical
settings, using classical Hamilton-Jacobi theory to derive an expression for
an exact quantum state which is a zero-energy ground state with respect to a
particular ordering of the Hamiltonian operator. \ Encouragement for the prospect of extending to general relativity comes from the fact that in \cite{MonRy}, Moncrief and Ryan present an explicit solution for such a ground state in the vacuum Bianchi IX cosmology.
As explained in Sect.~\ref{Ordering},\ the ground state we seek is essentially the exponential of Hamilton's principal function for the corresponding Euclidean problem, so in order to find the principal function (or rather functional), we must solve the Dirichlet problem for Yang-Mills theory. \ For a compact base manifold, this has been collectively achieved by Uhlenbeck \cite{Uhlenbeck}, Sedlacek \cite{Sedlacek}, and Marini \cite{Marini}, using the direct method in the
calculus of variations. \ We follow a similar technique but generalize to
the case of a noncompact manifold, since we are interested in Yang-Mills
theory on Minkowski space. \ Some preliminaries necessary to solving the
Dirichlet problem on a Riemannian manifold are presented in Sect.~\ref{Prelim},
and the solution to the Riemannian Yang-Mills Dirichlet problem is presented
in Sect.~\ref{YMDirichlet}. \ Finally, in Sect.~\ref{Gauge}, we conclude gauge
invariance of the ground state and discuss partial results and ongoing work
to test Poincar\'{e} invariance.
\section{Preliminaries}
\label{Prelim}
To solve the Yang-Mills Dirichlet problem for a compact manifold $M$, Marini %
\cite{Marini} introduces a terminology for coverings of $M$ by geodesic
balls and half-balls; these are described respectively as neighborhoods of
type 1 and type 2. \ Let $M$ be a smooth $n$-dimensional manifold
equipped with a Riemannian metric $g$, and let $\partial M$ be its boundary.
\ Then neighborhoods of type 1, in the manifold's interior, are denoted%
\begin{eqnarray*}
U^{\left( 1\right) }\equiv \left\{ x=\left( x^{0},...,x^{n-1}\right) :\left|
x\right| <1\right\}
\end{eqnarray*}%
while neighborhoods of type 2, centered around points in $\partial M$, are
of the form%
\begin{eqnarray*}
U^{\left( 2\right) }\equiv \left\{ x=\left( x^{0},...,x^{n-1}\right) :\left|
x\right| <1,x^{0}\geq 0\right\} ,
\end{eqnarray*}%
where the coordinate $x^{0}$ parametrizes unit-speed geodesics orthogonal to
$\partial M=\left\{ x^{0}=0\right\} $. \ The boundary of a type 2
neighborhood divides into
\begin{eqnarray*}
\partial _{1}U &=&\left\{ x\in \partial U^{\left( 2\right) }:x^{0}=0\right\}
, \\
\partial _{2}U &=&\left\{ x\in \partial U^{\left( 2\right) }:\left| x\right|
=1\right\} .
\end{eqnarray*}
In our problem, the manifold of interest is $\mathbb{R}_{+}\times
\mathbb{R}^{3}=\left\{ \left( x^{0},x^{1},x^{2},x^{3}\right) :x^{0}\geq
0\right\} $ with the Euclidean metric; however we solve the Yang-Mills
Dirichlet problem for a general smooth 4-dimensional Riemannian manifold
with boundary, generalizing Marini's procedure to the non-compact case (Sect.~\ref{YMDirichlet}). \ Certain results used are also valid in general
dimension $n$; such distinctions are clearly noted in the statements. \ We
return to consider the importance of dimension more thoroughly in Sect.~\ref%
{YMDirichlet}.
The main ingredient in a Yang-Mills theory is the structure group; this is a
compact Lie group $G \subset SO \left( l \right)$ with Lie algebra $\mathfrak{g}$. \ For $P$ a principal
$G$-bundle over $M$, the Yang-Mills field is a connection $A\in \Lambda
^{1}P\otimes \mathfrak{g}$. \ Given a local section $\sigma _{\alpha
}:U_{\alpha }\rightarrow P$ for some neighborhood $U_{\alpha }\subset M$,
the connection 1-form $A$ pulls back to a $\mathfrak{g}$-valued 1-form $%
A_{\alpha }=\sigma _{\alpha }^{\ast }A$ on $U_{\alpha }$; the transformation
of $A$ on overlapping neighborhoods $U_{\alpha }$ and $U_{\beta }$ is given
by the transition function $\tau _{\alpha \beta }:U_{\alpha }\cap U_{\beta
}\rightarrow G$ defined by $\sigma _{\beta }\left( x\right) =\sigma _{\alpha
}\left( x\right) \tau _{\alpha \beta }(x)$:%
\begin{eqnarray*}
A_{\alpha }(x)=\tau _{\alpha \beta }(x)^{-1}d\tau _{\alpha \beta }(x)+\tau
_{\alpha \beta }(x)^{-1}A_{\beta }\left( x\right) \tau _{\alpha \beta }(x),\
\ x\in U_{\alpha }\cap U_{\beta }.
\end{eqnarray*}%
The important quantity for Yang-Mills theory is the curvature $F\in \Lambda
^{2}P\otimes \mathfrak{g}$ of the connection $A$, given by $F=d_{P}A+\frac{1%
}{2}\left[ A,A\right] $ where the bracket $\left[ \cdot ,\cdot \right] $
denotes the graded commutator on forms, so that $\left[ A,A\right] =2\left(
A\wedge A\right) .$ \ In terms of a local section $\sigma _{\alpha
}:U_{\alpha }\rightarrow P$, $F$ pulls back to a $\mathfrak{g}$-valued
2-form $F_{\alpha }=\sigma _{\alpha }^{\ast }F$ on $U_{\alpha }$, given by $%
F_{\alpha }=d_{M}A_{\alpha }+\frac{1}{2}\left[ A_{\alpha },A_{\alpha }\right]
$, transforming as%
\begin{eqnarray*}
F_{\alpha }(x)=\tau _{\alpha \beta }(x)^{-1}F_{\beta }\left( x\right) \tau
_{\alpha \beta }\left( x\right) ,\ \ x\in U_{\alpha }\cap U_{\beta }
\end{eqnarray*}%
for $\tau _{\alpha \beta }$ as given above. \ In local coordinates, $F$
reduces to%
\begin{eqnarray*}
F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }+\left[ A_{\mu
},A_{\nu }\right] .
\end{eqnarray*}%
(here $\left[ \cdot ,\cdot \right] $ is the ordinary commutator in $%
\mathfrak{g}$).%
In order to describe the Yang-Mills action, we use the local expressions of $%
F$ as a $\mathfrak{g}$-valued 2-form on neighborhoods of $M$; however all
definitions are gauge-invariant and therefore do not depend on the
particular section used to pull back $F$.
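
The transformation laws above can be exercised numerically in a minimal setting (our illustration, not part of the argument): for constant $\mathfrak{su}(2)$-valued connection components the derivative terms drop, so $F_{\mu \nu }=\left[ A_{\mu },A_{\nu }\right] $, and a constant transition function $\tau \in SU(2)$ (with $d\tau =0$) should conjugate the curvature, $F^{\prime }=\tau ^{-1}F\tau $. All names below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# i * (Pauli matrices) span su(2), realized as anti-Hermitian traceless 2x2 matrices
SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def random_su2():
    """A random element of the Lie algebra su(2)."""
    return 1j * np.tensordot(rng.standard_normal(3), SIGMA, axes=1)

def comm(X, Y):
    return X @ Y - Y @ X

# Constant connection components A_mu: derivative terms vanish, F_{mu nu} = [A_mu, A_nu]
A = [random_su2() for _ in range(4)]
F = [[comm(A[m], A[n]) for n in range(4)] for m in range(4)]

# A constant transition function tau = exp(i theta n.sigma) in SU(2); d tau = 0,
# so the connection transforms by pure conjugation: A' = tau^{-1} A tau
theta = 0.7
n_vec = np.array([1.0, 2.0, 2.0]) / 3.0              # unit vector
tau = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * np.tensordot(n_vec, SIGMA, axes=1)
tau_inv = tau.conj().T                               # tau is unitary
A_new = [tau_inv @ Am @ tau for Am in A]
F_new = [[comm(A_new[m], A_new[n]) for n in range(4)] for m in range(4)]
```

The curvature computed from the transformed connection agrees with the conjugated curvature, mirroring $F_{\alpha }=\tau _{\alpha \beta }^{-1}F_{\beta }\tau _{\alpha \beta }$.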
The Yang-Mills action can be conveniently couched in terms of the inner
product for $\mathfrak{g}$-valued $k$-forms on the manifold $M$:%
\begin{equation}
\left\langle \eta ,\theta \right\rangle _{2}=\int_{M}tr\left( \eta \wedge
\ast \theta \right) ,
\label{FormProd}
\end{equation}%
where $\ast $ denotes the Hodge dual with respect to the metric $g$ on $M$.
\ We occasionally also write $\left\langle \eta ,\theta \right\rangle $ for
the pointwise inner product $\left\langle \eta ,\theta \right\rangle
=tr\left( \eta \wedge \ast \theta \right) $. \ The inner product (\ref%
{FormProd}) in turn allows us to define $L^{p}$ and Sobolev spaces of forms,
using the norm%
\begin{eqnarray*}
\left\| \eta \right\| _{p}=\left( \int_{M}\left| \eta \right|
^{p}\right) ^{\frac{1}{p}}=\left( \int_{M}\left\langle \eta ,\eta \right\rangle
^{p/2}\right) ^{\frac{1}{p}}=\left( \int_{M}\left[ tr\left( \eta \wedge
\ast \eta \right) \right] ^{p/2}\right) ^{\frac{1}{p}}.
\end{eqnarray*}%
In terms of local coordinates $\left\{ x^{\mu} \right\}$ on $M$ and a basis $\left\{ e_{I} \right\}$ for the Lie algebra $\mathfrak{g}$, membership of $\eta$ in the Sobolev space of forms is equivalent to each component $\eta^{I}_{\mu_1 ... \mu_k}$ being Sobolev in the ordinary sense of functions. \ Using this notation, the Yang-Mills action can be given as%
\begin{eqnarray*}
I(A)=\frac{1}{2}\left\| F\right\| _{2}^{2}=\frac{1}{2}\int_{M}tr\left(
F\wedge \ast F\right) =\frac{1}{4}\int_{M}trF_{\mu \nu }F^{\mu \nu }\sqrt{g}%
dx^{1}\cdot \cdot \cdot dx^{n},
\end{eqnarray*}%
where integration is done using the local form of $F$ on a neighborhood
(gauge invariance of the form $tr\left( F\wedge \ast F\right) $ negates any
ambiguity due to choice of local trivialization).
The above manner of formulating the Yang-Mills action functional also offers
an easy proof of lower semicontinuity, necessary in using the direct method
to find a minimizer:
\begin{theorem}
The Yang-Mills functional on a manifold $M$ of dimension 4 is lower
semicontinuous with respect to the weak topology on $W_{loc}^{1,2}\left(
M\right) .$
\end{theorem}
\begin{proof}
It suffices to prove that on any open bounded set $U\subset M$, if $%
A_{i}\rightharpoonup A$ in $W^{1,2}\left( U\right) $, then $I\left( A\right)
\leq \lim \inf_{i\rightarrow \infty }I\left( A_{i}\right) $. \ Locally we
can write%
\begin{eqnarray*}
F_{i}=dA_{i}+\frac{1}{2}\left[ A_{i},A_{i}\right] .
\end{eqnarray*}%
Using the same reasoning as Sedlacek's in Lemma 3.6 of \cite{Sedlacek}, weak
convergence of $\left\{ A_{i}\right\} $ to $A$ in $W^{1,2}\left( U\right) $
implies weak convergence of $dA_{i}$ to $dA$ in $L^{2}\left( U\right) .$ \
The continuity of the imbedding $W^{1,2}\hookrightarrow L^{4}$ and of the
multiplication $L^{4}\times L^{4}\rightarrow L^{2}$, along with boundedness
of $\left\{ \left\| A_{i}\right\| _{1,2} \right\} $, implies that
$\left\{ \left\| \left[ A_{i},A_{i}\right] \right\|
_{2}\right\} $ is bounded. \ This, together with a.e. pointwise convergence
(after passing to a subsequence, via the compact imbedding $W^{1,2}\hookrightarrow L^{2}$), yields $\left[ A_{i},A_{i}\right] \rightharpoonup \left[ A,A\right] $, so
that $F_{i}\rightharpoonup F$ in $L^{2}\left( U;P\right) $. \ Finally, lower
semicontinuity of the $\left\| \cdot \right\| _{2}$ norm under weak
convergence implies lower semicontinuity of the Yang-Mills functional.
\end{proof}
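
The last step of the proof can be illustrated numerically (our sketch, with hypothetical names): on $\left( 0,2\pi \right) $ the oscillating sequence $\sin (nx)$ converges weakly to $0$ in $L^{2}$, yet every member has norm $\sqrt{\pi }$, so the norm of the weak limit sits strictly below the $\liminf $ of the norms.

```python
import numpy as np

M = 40001
x = np.linspace(0.0, 2.0 * np.pi, M)
dx = x[1] - x[0]

def trap(y):
    """Composite trapezoid rule on the uniform grid x."""
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

phi = np.exp(-x)   # a fixed L^2 test function

# pairings <sin(n x), phi> decay toward 0 (weak convergence to 0),
# while the norms ||sin(n x)||_2 stay pinned at sqrt(pi)
pairings = [abs(trap(np.sin(n * x) * phi)) for n in (1, 10, 100)]
norms = [np.sqrt(trap(np.sin(n * x) ** 2)) for n in (1, 10, 100)]
```

The norm of the weak limit ($0$) is strictly smaller than $\lim \inf $ of the norms ($\sqrt{\pi }$), which is exactly the inequality lower semicontinuity permits.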
The Yang-Mills field equations are%
\begin{eqnarray*}
d_{D}\ast F=0
\end{eqnarray*}%
where $d_{D}=d+\left[ A,\cdot \right] $; solutions to this system correspond
exactly to critical points of the Yang-Mills action $I(A)$. \ To see this,
vary $I(A)$ by varying $A$ as $A+\lambda h$, where $h$ vanishes at $t=0$ and
is supported on some compact subset $N$ (dependent on $h$) of $M$:%
\begin{eqnarray*}
\delta _{h}\left( I\right) \left( A\right) &=&\int_{N}\left\langle
d_{D}h,F\right\rangle =\int_{N}tr\left( d_{D}h\wedge \ast F\right) \\
&=&\int_{\partial N}tr\left( h\wedge \ast F\right) -\int_{N}tr\left( h\wedge
d_{D}\ast F\right) .
\end{eqnarray*}%
It is evident that $\delta _{h}\left( I\right) \left( A\right) $ vanishes
for all variations $h$ precisely when $d_{D}\ast F$ is identically 0.
As described in Sect.~\ref{Intro}, this paper will deal with the canonical
quantization ansatz; therefore we must derive the canonical variables. \ To
make the transformation from a Lagrangian to a Hamiltonian formulation, we
specialize to the case of interest $M=\mathbb{R}_{+}\times \mathbb{R}%
^{3}$. \ Since $M$ is contractible, every bundle over $M$ is trivial and
therefore admits a global section. \ We can then drop the distinction
between $A$ and its local coordinate representation. \ Because the
Lagrangian is independent of $A_{0}$, the Legendre transformation breaks
down for an arbitrary gauge (see e.g. \cite{Jackiw}), and we must choose to
work in the Weyl gauge $A_{0}=0.$ \ Thus our canonical position variable is $%
A_{i}^{I}$, where $i$ runs over the three spatial parameters and $I$ over
the basis of the Lie algebra $\mathfrak{g}$, and the canonical momentum is $%
E_{I}^{i}=\dot{A}_{i}^{I}$ (this is the negative of the ``electric field''
variable).
With respect to these variables we obtain the Hamiltonian%
\begin{equation}
H=\frac{1}{2}\int_{\mathbb{R}^{3}}tr\left( E^{2}+B^{2}\right) \ ,
\label{YMHam}
\end{equation}%
where $B^{i}=\frac{1}{2}\varepsilon ^{ijk}F_{jk}$. \ Hamilton's equations
follow by writing the integral $J=\int_{0}^{\infty }H$ $dt$ as $%
\int_{0}^{\infty }\int_{\mathbb{R}^{3}}\mathcal{H}$ \ $d^{3}x$ $%
dt=-I+\int_{0}^{\infty }\int_{\mathbb{R}^{3}}E\dot{A}$ \ $d^{3}x$ $dt$. \
Varying both expressions with respect to a one-parameter family $A_{\lambda }
$ (where the variation has compact support in $\mathbb{R}_{+}\times
\mathbb{R}^{3}$ and vanishes for $t=0$), we arrive at the equality%
\begin{eqnarray*}
\frac{dJ}{d\lambda } &=&\int_{0}^{\infty }\int_{\mathbb{R}^{3}}\frac{\delta H%
}{\delta A}\delta A+\frac{\delta H}{\delta E}\delta E \\
&=&\int_{0}^{\infty }\int_{\mathbb{R}^{3}}\left[ E\delta \dot{A}+\dot{A}%
\delta E\right] -\frac{dI}{d\lambda } \\
&=&\int_{0}^{\infty }\int_{\mathbb{R}^{3}}\left[ -\dot{E}\delta A+\dot{A}%
\delta E\right] -\frac{dI}{d\lambda },
\end{eqnarray*}%
using integration by parts. \ In order for equality to hold between the
first and last lines for all variations, Hamilton's equations%
\begin{eqnarray*}
\dot{E} &=&-\frac{\delta H}{\delta A} \\
\dot{A} &=&\frac{\delta H}{\delta E}
\end{eqnarray*}%
must be equivalent to the vanishing of the variation $\frac{\delta I}{\delta
A}.$
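
The Hamiltonian system above can be exercised on a toy abelian reduction (our sketch, not the source's system): a single field component on a periodic 1D lattice, with $H=\frac{1}{2}\sum_{i}\left( E_{i}^{2}+B_{i}^{2}\right) $ and $B_{i}=A_{i+1}-A_{i}$, so Hamilton's equations read $\dot{A}_{i}=E_{i}$, $\dot{E}_{i}=A_{i+1}-2A_{i}+A_{i-1}$. A symplectic (leapfrog) integrator should conserve $H$.

```python
import numpy as np

N, dt, steps = 64, 0.01, 2000

A = np.sin(2.0 * np.pi * np.arange(N) / N)  # smooth initial data, lowest lattice mode
E = np.zeros(N)                             # conjugate momentum

def laplacian(A):
    """Periodic lattice Laplacian A_{i+1} - 2 A_i + A_{i-1}."""
    return np.roll(A, -1) - 2.0 * A + np.roll(A, 1)

def energy(A, E):
    B = np.roll(A, -1) - A                  # lattice analogue of the magnetic field
    return 0.5 * np.sum(E**2 + B**2)

H0 = energy(A, E)
for _ in range(steps):                      # leapfrog: half-kick, drift, half-kick
    E += 0.5 * dt * laplacian(A)            # Edot = -dH/dA
    A += dt * E                             # Adot = +dH/dE
    E += 0.5 * dt * laplacian(A)
H1 = energy(A, E)
```

The energy after integration matches the initial energy to high relative accuracy, as expected for Hamilton's equations integrated symplectically.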
Notice that in taking the Weyl gauge $A_0 = 0$, we have lost the $A_{0}$ field equation, which generates gauge transformations; therefore in the quantized theory, the Gauss law constraint
\begin{eqnarray*}
D_{i}E^{i} = 0
\end{eqnarray*}%
must be dealt with separately, either by promoting it to a quantum operator and verifying that it annihilates the ground state, or by other means. For the ground state we construct here, gauge invariance in fact turns out to be directly verifiable (see Sect.~\ref{Gauge}).
\section{Nonlinear normal ordering}
\label{Ordering}
For nonlinear quantum mechanical situations, Moncrief \cite{Moncrief} and
Ryan \cite{Ryan} present a ``normal'' ordering scheme for the Hamiltonian
operator, yielding a well-behaved associated ground state. \ Consider a
nonlinear quantum mechanical Hamiltonian of the form%
\begin{eqnarray*}
H=\frac{1}{2}\left| p\right| ^{2}+V(x).
\end{eqnarray*}%
In a linear system, the function $V(x)$ would be a quadratic form $\frac{1}{2}%
\left\langle x,Mx\right\rangle $, $M$ a positive self-adjoint operator. \
The normal ordering would proceed by factoring $M$ as $T^{2}$, in terms of
its unique positive self-adjoint square root $T$, and defining creation and
annihilation operators $a^{\ast }=\frac{1}{\sqrt{2}}\left( T\hat{x}-i\hat{p}%
\right) $ and $a=\frac{1}{\sqrt{2}}\left( T\hat{x}+i\hat{p}\right) $. \
Under the usual assignment of canonical quantum operators $\hat{x}^{i}:\psi
(x)\rightarrow x^{i}\psi (x)$, $\hat{p}^{i}:\psi (x)\rightarrow -i\frac{%
\partial }{\partial x^{i}}\psi (x)$, this immediately yields the ground
state $\psi (x)=\mathcal{N}\exp \left( -\frac{\left\langle x\ ,%
Tx\right\rangle }{2}\right) $ with energy $\frac{tr\ T}{2}$, for $H$
expressed as a quantum operator $\hat{H}=a^{\ast }a+\frac{tr\ T}{2}I.$
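
In one dimension the linear scheme can be checked symbolically (our illustration, names hypothetical): with potential $V(x)=\frac{1}{2}\omega ^{2}x^{2}$, so that $T=\omega $, the state $\psi =\exp \left( -\omega x^{2}/2\right) $ should satisfy $\hat{H}\psi =\frac{\omega }{2}\psi $.

```python
import sympy as sp

x, w = sp.symbols('x omega', positive=True)

# linear normal ordering: T = omega, candidate ground state exp(-<x, T x>/2)
psi = sp.exp(-w * x**2 / 2)

# H psi = -(1/2) psi'' + (1/2) omega^2 x^2 psi
H_psi = -sp.diff(psi, x, 2) / 2 + w**2 * x**2 * psi / 2

ratio = sp.simplify(H_psi / psi)   # should be the ground-state energy omega/2
```

The ratio $\hat{H}\psi /\psi $ simplifies to the constant $\omega /2$, the energy $\frac{tr\ T}{2}$ of the normal-ordered Hamiltonian.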
The idea of the nonlinear normal ordering is to factorize the function $V(x) \geq 0$ by solving the imaginary-time zero-energy Hamilton-Jacobi equation%
\begin{equation}
\frac{1}{2}\sum\limits_{i}\left( \frac{\partial S}{\partial x^{i}}\right)
^{2}-V(x)=0.
\label{HJE}
\end{equation}%
We can then order the quantum Hamiltonian operator as%
\begin{equation}
\hat{H}=\frac{1}{2}\sum\limits_{i}\left( \widehat{\frac{\partial S}{\partial
x^{i}}}-i\hat{p}^{i}\right) \left( \widehat{\frac{\partial S}{\partial x^{i}}%
}+i\hat{p}^{i}\right) ,
\label{NNO}
\end{equation}%
admitting the zero-energy ground state%
\begin{eqnarray*}
\mathcal{N}\exp \left( -S(x)\right) .
\end{eqnarray*}%
This factorization can be illustrated with the anharmonic oscillator
Hamiltonian%
\begin{equation}
H=\frac{1}{2}p^{2}+\frac{1}{2}x^{2}+\frac{1}{4}\lambda x^{4}\ ,
\label{NO}
\end{equation}%
in which case the Hamilton-Jacobi equation $\frac{1}{2}\left( \frac{dS}{dx}%
\right) ^{2}=\frac{1}{2}x^{2}+\frac{1}{4}\lambda x^{4}\ $is easily
integrated to yield the solution$\ S(x)=\frac{2}{3\lambda }\left( 1+\frac{%
\lambda }{2}x^{2}\right) ^{3/2}-\frac{2}{3\lambda }.$ \ While the resulting
ground state is not the usual anharmonic oscillator ground state obtained
from the factor ordering $\hat{H}=\frac{1}{2m}\hat{p}^{2}+\frac{m\omega ^{2}%
}{2}\hat{q}^{2}+\frac{1}{4}\lambda \hat{q}^{4}$, it is the correct
zero-energy ground state for nonlinear normal ordering (\ref{NNO}). \
Finding a ground state with zero energy is not a priority for an ordinary
quantum mechanical system like the anharmonic oscillator, but in the realm
of relativistic field theories, a quantum ground state must have zero energy
to be invariant under infinitesimal time-translations. \ Hence full Poincar\'{e}
invariance in a canonically quantized relativistic field theory requires a
zero energy ground state, suggesting that the nonlinear normal ordering
approach may yield well-behaved candidate ground states for a canonically
quantized nonlinear field theory.
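
The closed form for $S(x)$ above, and the fact that $\exp (-S)$ is annihilated by the right-hand factor of the ordering (\ref{NNO}), can both be verified symbolically (our check, with hypothetical names):

```python
import sympy as sp

x = sp.symbols('x', real=True)
lam = sp.symbols('lambda', positive=True)

# Hamilton's characteristic function for V(x) = x^2/2 + lam*x^4/4
S = sp.Rational(2, 3) / lam * (1 + lam * x**2 / 2) ** sp.Rational(3, 2) \
    - sp.Rational(2, 3) / lam
V = x**2 / 2 + lam * x**4 / 4

# S solves the zero-energy Hamilton-Jacobi equation (1/2)(dS/dx)^2 = V(x)
hje_residual = sp.simplify(sp.diff(S, x) ** 2 / 2 - V)

# psi = exp(-S) is killed by the annihilation factor (dS/dx + d/dx),
# so the normal-ordered Hamiltonian gives H psi = 0 exactly
psi = sp.exp(-S)
annihilated = sp.simplify(sp.diff(S, x) * psi + sp.diff(psi, x))
```

Both residuals vanish identically, confirming that $\mathcal{N}\exp (-S)$ is a zero-energy ground state for the ordering (\ref{NNO}).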
Following this line of reasoning for Yang-Mills theory, we set up the
imaginary-time zero-energy Hamilton-Jacobi equation for the Yang-Mills
Hamiltonian (\ref{YMHam}):
\begin{equation}
\int_{\mathbb{R}^{3}}tr\left( \frac{\delta S}{\delta A}\right) ^{2}=\int_{%
\mathbb{R}^{3}}trB^{2},
\label{YMHJE}
\end{equation}%
whose solution $S\left( A\right) $ will yield the nonlinear normal ordering%
\begin{eqnarray*}
\hat{H}=\frac{1}{2}\int_{\mathbb{R}^{3}}tr\left( \widehat{\frac{\delta S}{%
\delta A}}-i\hat{E}\right) \left( \widehat{\frac{\delta S}{\delta A}}+i%
\hat{E}\right)
\end{eqnarray*}%
and associated zero-energy ground state%
\begin{eqnarray*}
\mathcal{N}\exp \left( -S(A)\right) .
\end{eqnarray*}%
However, the Hamilton-Jacobi equation (\ref{YMHJE}) is not as easily solved
as in the anharmonic oscillator problem. \ Fortunately, the classical
Hamilton-Jacobi theory provides us with a means of constructing the
solution, essentially as Hamilton's principal function for the
imaginary-time problem. \ With the transformation to imaginary time $%
t\rightarrow it$, the chain rule yields $A\rightarrow A$, $\dot{A}%
\rightarrow -i\dot{A}$, and $E\rightarrow -iE$, so that the imaginary-time
Lagrangian and Hamiltonian are%
\begin{eqnarray*}
\tilde{L} &=&\frac{1}{2}\int_{\mathbb{R}^{3}}\left( -\dot{A}%
^{2}-B^{2}\right) \ d^{3}x \\
\tilde{H} &=&\frac{1}{2}\int_{\mathbb{R}^{3}}\left( -E^{2}+B^{2}\right) \ d^{3}x.
\end{eqnarray*}%
The full Hamilton-Jacobi equation is%
\begin{equation}
\frac{\partial S}{\partial t}+\tilde{H}\left( A,\frac{\delta S}{\delta A}%
,t\right) =0;
\label{tHJE}
\end{equation}%
a time-independent solution $S(A)$ to this equation will be the solution we
seek for (\ref{YMHJE}). \ In fact, the solution we construct will be
time-independent, but for the moment we assume that an explicit time
dependence is possible, using the definition of functional derivatives to
write%
\begin{eqnarray*}
\frac{dS}{dt}=\frac{\partial S}{\partial t}+\int_{\mathbb{R}^{3}}\frac{%
\delta S}{\delta A_{t}}\frac{\partial A_{t}}{\partial t}\ d^{3}x.
\end{eqnarray*}%
Substituting from (\ref{tHJE}) we obtain%
\begin{eqnarray*}
\frac{dS}{dt} &=&-\tilde{H}\left( A_{t},\frac{\delta S}{\delta A_{t}}\right)
+\int_{\mathbb{R}^{3}}\frac{\delta S}{\delta A_{t}}\frac{\partial A_{t}}{%
\partial t}\ d^{3}x \\
&=&\int_{\mathbb{R}^{3}}\frac{\delta S}{\delta A_{t}}\frac{\partial A_{t}}{%
\partial t}-\tilde{\mathcal{H}}\left( A_{t},\frac{\delta S}{\delta A_{t}}%
\right) \ d^{3}x.
\end{eqnarray*}%
For $A_{t}$ the solution to%
\begin{equation}
\frac{\partial A_{t}}{\partial t}=\left. \frac{\delta \tilde{H}}{\delta E}%
\right| _{E=\frac{\delta S}{\delta A_{t}}}
\end{equation}%
with initial data $A_{t=0}=A$, we get%
\begin{eqnarray*}
\int_{\mathbb{R}^{3}}\frac{\delta S}{\delta A_{t}}\frac{\partial A_{t}}{%
\partial t}-\tilde{\mathcal{H}}\left( A_{t},\frac{\delta S}{\delta A_{t}}%
\right) \ d^{3}x &=&\int_{\mathbb{R}^{3}}E\dot{A}_{t}-\tilde{\mathcal{H}}%
\left( A_{t},E\right) \ d^{3}x \\
&=&\tilde{L}\left( A_{t},\dot{A}_{t}\right) \\
&\Rightarrow &S\left( A_{t_{0}}\right) -S\left( A\right) =\int_{0}^{t_{0}}%
\tilde{L}\left( A_{t},\dot{A}_{t}\right) \ dt.
\end{eqnarray*}
Taking $S\left( A\right) =-\int_{0}^{\infty }\tilde{L}\left( A_{t},\dot{A}%
_{t}\right) $ $dt$ clearly satisfies this relation, since we will then have $%
S\left( A_{t_{0}}\right) =-\int_{t_{0}}^{\infty }\tilde{L}\left( A_{t},\dot{A%
}_{t}\right) $ $dt$. \ The exponential $\exp \left( -S(A)\right) $ will peak
about the field configuration $A=0$. \ To prove that the functional $S(A)$
exists, we need only prove the existence of a solution $A_{t}$ of the
Euclidean Yang-Mills equations, with initial data $A$.
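
As a consistency check (our addition, valid in the abelian case only): for the free Maxwell field the Euclidean flow is explicit. A transverse initial datum evolves as $\hat{A}(t,k)=\hat{A}_{T}(k)e^{-\left| k\right| t}$, and

```latex
\begin{eqnarray*}
S(A) &=&-\int_{0}^{\infty }\tilde{L}\left( A_{t},\dot{A}_{t}\right) \ dt=%
\frac{1}{2}\int_{0}^{\infty }\int_{\mathbb{R}^{3}}\left( \dot{A}%
^{2}+B^{2}\right) \ d^{3}x\ dt \\
&=&\frac{1}{2}\int d^{3}k\ \left| k\right| \left| \hat{A}_{T}(k)\right| ^{2},
\end{eqnarray*}
```

since each of $\dot{A}^{2}$ and $B^{2}$ contributes $\frac{1}{2}\int d^{3}k\ \left| k\right| \left| \hat{A}_{T}\right| ^{2}$ after the $t$-integration. The resulting $\exp \left( -S(A)\right) $ is the familiar free-photon vacuum wavefunctional, supporting the ansatz.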
\section{Solving the Yang-Mills Dirichlet problem}
\label{YMDirichlet}
For a compact manifold $M$ with boundary $\partial M$, the method of
Uhlenbeck \cite{Uhlenbeck}, Sedlacek \cite{Sedlacek}, and Marini \cite%
{Marini} begins with a localizing theorem, proving that given a sequence of
connections with a uniform global bound on the Yang-Mills action, there
exists a cover for $M$ (possibly missing a finite collection of points) such
that on neighborhoods of the cover, the Yang-Mills action for connections in
the sequence eventually becomes lower than an arbitrary pre-set bound $%
\varepsilon $. \ This result depends on compactness as proved in \cite%
{Sedlacek} (see Proposition 3.3, or in \cite{Marini} Theorem 3.1). \ We
reprove the result here in a manner independent of compactness (Theorem \ref%
{goodcover}), so that the overall argument now applies to noncompact
manifolds with boundary as well.
Note that another possible solution to the problem would be to transform $M=%
\mathbb{R}_{+}\times \mathbb{R}^{3}$ into a compact manifold with
boundary, using inversion in the sphere (this suggestion is due to T.
Damour). \ In this approach, one considers the unit sphere centered at the
origin of $\mathbb{R}^{4}$, imbedding $\mathbb{R}_{+}\times \mathbb{R}%
^{3}$ into $\mathbb{R}^{4}$ as the set $\left\{ x:x^{0}\geq 2\right\} $. \
The inversion mapping is%
\begin{eqnarray*}
y^{i}=\frac{x^{i}}{r^{2}}\ ,
\end{eqnarray*}%
where $r^{2}=\left( x^{0}\right) ^{2}+\left( x^{1}\right) ^{2}+\left(
x^{2}\right) ^{2}+\left( x^{3}\right) ^{2}$. \ Under this transformation,
the hyperplane $x^{0}=2$ maps to a sphere $S_{1/4}$ of radius $\frac{1}{4}$,
with its south pole (the image of all points at infinity) at the origin. \
The half-space $\left\{ x:x^{0}>2\right\} $ maps to the interior of $S_{1/4}$.
Since the mapping is conformal, the Yang-Mills action remains invariant, and
the problem of interest on $\mathbb{R}_{+}\times \mathbb{R}^{3}$ has
been effectively mapped to a compact problem, to which the arguments of \cite%
{Uhlenbeck}, \cite{Sedlacek}, and \cite{Marini} should apply directly. \ We
do not pursue this approach here, since the crucial result using compactness
can be shown to generalize (Theorem \ref{goodcover}); however we note its
potential usefulness to future work, such as the investigation of uniqueness
of solution to the Yang-Mills Dirichlet problem (see Sect.~\ref{Fun}). \ An
issue pertinent to the conformal mapping approach is the behavior of initial
data at the south pole of $S_{1/4}$, the image of points at infinity.
Returning to our sketch of the Yang-Mills Dirichlet problem's solution,
local control over the Yang-Mills action is used to prove existence and
regularity of a minimizer. \ From this point on, the proofs in \cite%
{Uhlenbeck}, \cite{Sedlacek}, and \cite{Marini} are purely local and hold
unchanged in the noncompact case; proofs are thus not repeated here. \
Locally, the argument for existence of a Yang-Mills minimizer consists in
finding a Sobolev-bounded minimizing sequence satisfying the boundary
conditions; this sequence then has a weakly convergent subsequence, which
proves to be a solution to the original Dirichlet problem. \ Local solutions
are related by transition functions on overlapping neighborhoods.
Gauge freedom turns out to be a help as well as a hindrance. \ Of course it
forces the necessity of working locally and proving compatibility on
overlaps, but at the same time gauge freedom offers an elegant solution to
the regularity problem. \ A judicious choice of gauge -- the ``Hodge gauge''
-- complements the Yang-Mills equation in such a way as to yield an elliptic
system. \ In the Yang-Mills equation%
\begin{eqnarray*}
d\ast dA+\left[ A,dA\right] +\left[ A,\left[ A,A\right] \right] =0,
\end{eqnarray*}%
the highest order term is related to the first term of the Laplace-de Rham
operator $\Delta =\delta d+d\delta $, where $\delta $ is the codifferential $%
\delta =\left( -1\right) ^{n(k+1)+1}\ast d\ast $ ($k$ being the degree of
the differential form operated upon). \ Choosing the Hodge gauge, in which $%
d\ast A=0$, ensures that every solution of our system in this gauge is also
a solution of the elliptic system $\Delta A+\ast \left( \left[ A,dA\right] +%
\left[ A,\left[ A,A\right] \right] \right) =0$, and therefore enjoys the
regularity properties of such solutions. \ (Additional work is needed to
establish boundary regularity; Marini accomplishes this in \cite{Marini}
using the technique of local doubling.)
In the physical problem, we are interested in Yang-Mills theory over a $4$%
-dimensional manifold with boundary. \ However many theorems which follow
are also valid over any smooth $n$-dimensional Riemannian manifold with
boundary, and we retain this level of generality in stating and proving
results. \ The caveat lies in stringing together the individual theorems
into a complete argument for existence and regularity of a solution to the
Yang-Mills Dirichlet problem; to accomplish this, the dimension must be 4
(see the remarks in \cite{Marini} following Theorem 3.1). \ The ``good
cover'' theorem (Theorem \ref{goodcover} here, or Theorem 3.1 in \cite{Marini}%
) guarantees a cover of $M\backslash \left\{ x_{1},...,x_{k}\right\} $ on
whose neighborhoods the local Yang-Mills action for the connections in the
sequence is eventually bounded by an arbitrary pre-set bound $\varepsilon $.
\ However, the condition for existence of a Hodge gauge solution to the
Yang-Mills Dirichlet problem is a bound on the local $L^{n/2}$ norm of the
Yang-Mills field strength $F$, which except in dimension $4$ is not the same
as a local bound on the Yang-Mills action.
Without further ado, we give the precise statements of all theorems needed
for the existence and regularity of a Yang-Mills minimizer on a $4$%
-dimensional manifold with boundary. \ On a neighborhood $U$ of type 1 or 2,
the condition for local existence of a gauge satisfying $d\ast A=0$ is an $%
L^{n/2}$ bound on Yang-Mills field strength. \ Consider the sets%
\begin{eqnarray*}
\mathfrak{A}_{K}^{1,p}(U) &=&\left\{ D=d+A:\ \ A\in W^{1,p}(U), \ \ \left\|F_{A}\right\| _{L^{n/2}(U)}<K\right\} \\
\mathfrak{B}_{K}^{1,p}(U) &=&\left\{ D=d+A:\ \
\begin{array}{l}
A\in W^{1,p}(U) \\
A_{\tau }\in W^{1,p}(\partial _{1}U)%
\end{array}%
\ ,\ \
\begin{array}{l}
\left\| F_{A}\right\| _{n/2}<K \\
\left\| F_{A_{\tau }}\right\| _{L^{n/2}(\partial _{1}U)}<K%
\end{array}%
\right\}
\end{eqnarray*}%
describing connections with field strength locally $L^{n/2}$-bounded on a
neighborhood $U$ of type 1 and type 2, respectively. \ (All norms are
defined on $U$, unless otherwise specified.) \ As proven in \cite{Uhlenbeck}
(Thm 2.1) for interior neighborhoods and in \cite{Marini} (Thms 3.2 and 3.3)
for boundary neighborhoods, a good choice of gauge exists for connections
belonging to $\mathfrak{A}_{K}^{1,p}(U)$ or $\mathfrak{B}_{K}^{1,p}(U).$ \
More precisely,
\begin{theorem}
\label{localgauge}For $\frac{n}{2}\leq p<n$, there exists $K\equiv K(n)>0$
and $c\equiv c(n)$ such that every connection $D=d+A\in \mathfrak{A}_{K}^{1,p}\left( U\right) $ ($\mathfrak{B}_{K}^{1,p}\left( U\right) $) is
gauge equivalent to a connection $d+\hat{A},$ $\hat{A}\in W^{1,p}(U),$ satisfying
\\
$%
\begin{array}{l}
\begin{array}{ll}
(a)\ \ d\ast \hat{A}=0 \ \ & (a^{\prime })\ \ \left(
\begin{array}{l}
d\ast \hat{A}=0 \\
d_{\tau }\ast \hat{A}_{\tau }=0\ \ on\ \partial _{1}U%
\end{array}%
\right) \\
(b)\ \ \hat{A}_{\nu }=0 \ on \ \partial U \ \ & (b^{\prime })\ \ \hat{A}_{\nu }=0 \ on \ \partial _{2}U%
\end{array}
\\
(c=c^{\prime }) \ \ \left\| \hat{A}\right\| _{1,n/2}<c(n)\left\| F_{\hat{A}}\right\| _{n/2} \\
(d=d^{\prime }) \ \ \left\| \hat{A}\right\| _{1,p}<c(n)\left\| F_{\hat{A}}\right\| _{p}%
\end{array}%
$ \\
(Unprimed conditions $(a)$-$(d)$ refer to $\mathfrak{A}_{K}^{1,p}(U)$;
primed conditions $(a^{\prime })$-$(d^{\prime })$ to $\mathfrak{B}_{K}^{1,p}(U)$). \ Moreover, the gauge transformation $s$ satisfying $\hat{A}=s^{-1}ds+s^{-1}As$ can be taken in $W^{2,n/2}(U)$ ($s$ will in fact always be one degree smoother than $A$; see Lemma 1.2 in \cite{Uhlenbeck}).
\end{theorem}%
\begin{proof}
See \cite{Uhlenbeck}, \cite{Marini}.
\end{proof}
As noted in \cite{Marini}, the condition $\left\| F_{A}\right\| _{n/2}<K$ is
conformally invariant, while the norm $\left\| F_{A_{\tau }}\right\|
_{L^{n/2}(\partial _{1}U)}$ picks up a factor of $r$ under the dilation $%
x^{\prime }=rx$, so that the simultaneous conditions $\left\| F_{A}\right\|
_{n/2}<K$, $\left\| F_{A_{\tau }}\right\| _{L^{n/2}(\partial _{1}U)}<K$ on a
neighborhood $U$ of type 2 can always be achieved by applying an appropriate
dilation (the Dirichlet boundary data is prescribed to be smooth, so $%
\left\| F_{A_{\tau }}\right\| _{L^{n/2}(\partial _{1}U)}$ already satisfies
some bound).
To find a regular minimizer of the Yang-Mills action on a 4-dimensional
manifold $M$, we must find a cover $\left\{ U_{\alpha }\right\} $ of $M$ and
a minimizing sequence $\left\{ A_{i}\right\} $ whose members satisfy%
\begin{eqnarray*}
S_{YM}(\left. A_{i}\right| _{U_{\alpha }})=\int_{U_{\alpha }}\left|
F_{A_{i}}\right| ^{2} dx<K\ \ \forall \ \alpha ,i,
\end{eqnarray*}%
where $K\equiv K(4)$ is as given in Theorem \ref{localgauge}. \ For a compact
manifold this is proved in \cite{Sedlacek} using a counting argument. \ Here
we use dilations of the neighborhoods in a cover to construct a proof valid
for any smooth Riemannian manifold.
\begin{theorem}
\label{goodcover}
Let $\left\{ A(i)\right\} $ be a sequence of connections in
$G$-bundles $P_{i}$ over $M$, with uniformly bounded action $\int_{M}\left|
F(i)\right| ^{2}$ $dx<B$ \ $\forall $ $i$. \ For any $\varepsilon >0$, there
exists a countable collection $\left\{ U_{\alpha }\right\} $ of
neighborhoods of type 1 and 2, a collection of indices $I_{\alpha }$, a
subsequence $\left\{ A(i)\right\} _{\mathcal{I}^{\prime }}\subset \left\{
A(i)\right\} _{\mathcal{I}}$, and at most a finite number of points $\left\{
x_{1},...,x_{k}\right\} \in M$ such that%
\begin{eqnarray*}
\bigcup U_{\alpha } &\supset &\left. M\right\backslash \left\{
x_{1},...,x_{k}\right\} \\
\int_{U_{\alpha }}\left| F(i)\right| ^{2} dx &<&\varepsilon \ \ \forall i\in \mathcal{I}^{\prime }, \ \ i>I_{\alpha }.
\end{eqnarray*}
\end{theorem}
\begin{proof}
For each $n\in \mathbb{N},$ consider the cover $\left\{ B_{n}(x):x\in
M\right\} $ of $M$ given by geodesic balls of radius $\frac{1}{n}$ centered
at each point $x\in M$ (for $x\in \partial M$, the geodesic ``ball'' $%
B_{n}(x)$ will actually be a half-ball, a fact which makes no difference in
the proof). \ By separability, each such cover has a countable subcover $%
C_{n}=\left\{ B_{n}(x_{n,m}):m\in \mathbb{N}\right\} $.
On any ball $B_{n}(x_{n,m})$, we have the uniform bound $%
\int_{B_{n}(x_{n,m})}\left| F(i)\right| ^{2}$ $dx<B$ \ $\forall $ $i$. \
Therefore for the ball $B_{n}(x_{n,1})$ in a given cover $C_{n}$, there
exists a subsequence of $\left\{ A(i)\right\} $ for which the corresponding
subsequence of $\left\{ \int_{B_{n}(x_{n,1})}\left| F(i)\right|
^{2}dx\right\} $ converges. \ Of this subsequence, there exists a further
subsequence such that the corresponding subsequence of $\left\{
\int_{B_{n}(x_{n,2})}\left| F(i)\right| ^{2}dx\right\} $ converges, and so
on, for every $m$. \ Diagonalizing\footnote{%
Diagonalizing over a list of sequences $\left\{ a_{j}\left( i\right)
\right\} $ such as $%
\begin{array}{llll}
a_{1}\left( 1\right) & a_{1}\left( 2\right) & a_{1}\left( 3\right) &
\ldots \\
a_{2}\left( 1\right) & a_{2}\left( 2\right) & a_{2}\left( 3\right) &
\ldots \\
a_{3}\left( 1\right) & a_{3}\left( 2\right) & a_{3}\left( 3\right) &
\ldots \\
\vdots & \vdots & \vdots & \ddots
\end{array}%
$ selects out the new sequence $\left\{ a_{i}\left( i\right) \right\} $. \
In the case important for this proof, each row represents a subsequence of
the previous row, so that for any $j$, the diagonalized sequence $\left\{
a_{k}\left( k\right) \right\} $ is a subsequence of $\left\{ a_{j}\left(
i\right) \right\} $ for $k\geq j$.} over these nested subsequences, we
obtain a subsequence of $\left\{ A(i)\right\} $ such that the corresponding
subsequence of $\left\{ \int_{B_{n}(x_{n,m})}\left| F(i)\right|
^{2}dx\right\} $ converges for every $m\in \mathbb{N}$.
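
The diagonal extraction can be mimicked concretely (an illustrative sketch, names ours): represent each row as a strictly increasing list of indices, with each row a subsequence of the previous one; the diagonal entries then form a sequence that is eventually a subsequence of every row.

```python
# Build nested rows: row j+1 keeps every other entry of row j,
# mirroring "each row is a subsequence of the previous row".
rows = [list(range(4096))]
for _ in range(8):
    rows.append(rows[-1][::2])

# The diagonal sequence a_k(k): the k-th entry of the k-th row.
diag = [rows[k][k] for k in range(len(rows))]
```

Because $rows[k]\subset rows[j]$ for $k\geq j$, every tail $\left\{ diag[k]:k\geq j\right\} $ lies inside row $j$, which is the property used in the proof.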
Performing a similar diagonalization over all covers $C_{n}$, there exists a
subsequence $\left\{ A(i)\right\} _{\mathcal{I}^{\prime }}\subset \left\{
A(i)\right\} _{\mathcal{I}}$ such that for every ball in every cover, the
sequence $\left\{ \int_{B_{n}(x_{n,m})}\left| F(i)\right| ^{2}dx\right\} _{%
\mathcal{I}^{\prime }}$ converges. \ For each $C_{n}$, consider the
collection of balls $\left\{ B_{n}(y_{n,m})\right\} $, $\left\{
y_{n,m}\right\} \subset \left\{ x_{n,m}\right\} $, for which $\left\{
\int_{B_{n}(y_{n,m})}\left| F(i)\right| ^{2}dx\right\} _{\mathcal{I}^{\prime
}}$ converges to a value greater than or equal to $\varepsilon $. \ Note
that for any $i\in \mathcal{I}$, there is an upper bound on the number $%
N_{i,n}$ of disjoint balls of radius $\frac{1}{n}$ for which $\int_{B_{n}(y_{n,m})}%
\left| F(i)\right| ^{2}dx\geq \varepsilon :$%
\begin{eqnarray*}
B\geq N_{i,n}\varepsilon.
\end{eqnarray*}%
Thus the upper bound $\frac{B}{\varepsilon }$ limits the number of
disjoint balls in the set $\left\{ B_{n}(y_{n,m})\right\} .$
Choose a maximal disjoint set $\left\{ B_{n}(y_{n,m_{j}})\right\} _{j=1}^{J}$
of balls in $\left\{ B_{n}(y_{n,m})\right\} $, and consider the set $\left\{
B_{n}^{\ast }(y_{n,m_{j}})\right\} _{j=1}^{J}$ of balls centered at the
points $y_{n,m_{j}}$ but having radius $\frac{3}{n}$. \ Then we have $%
\bigcup_{\left\{ y_{n,m}\right\} }B_{n}(y_{n,m})\subset
\bigcup_{j=1}^{J}B_{n}^{\ast }(y_{n,m_{j}}).$ \ This shows that if we
discard the balls $\left\{ B_{n}(y_{n,m})\right\} $ from the cover $C_{n}$,
we will only have discarded a set which was contained in $J\leq \frac{B}{%
\varepsilon }$ balls of radius $\frac{3}{n}$. \ We can then safely discard
the balls $\left\{ B_{n}(y_{n,m})\right\} $ from each cover $C_{n}$, and
form the union $C=\bigcup_{n\in \mathbb{N}}C_{n}\backslash \left\{
B_{n}(y_{n,m})\right\} $ to obtain a cover $C$ of $M\backslash \left\{
x_{1},...,x_{k}\right\} $, where $k\leq \frac{B}{\varepsilon }$, and each ball $B_{n}\left( x_{n,m} \right) \in C$ satisfies $\int_{B_{n}\left( x_{n,m} \right)} \left| F \left( i \right) \right|^{2}dx < \varepsilon$ for all sufficiently large $i\in \mathcal{I}^{\prime }$.
\end{proof}
Since a minimizing sequence $\left\{ A(i)\right\} _{i\in \mathcal{I}}$ by
definition admits a uniform bound on the action, we can use Theorem \ref%
{goodcover} to select a subsequence $\left\{ A(i)\right\} _{i\in \mathcal{I}%
^{\prime }}$ of the minimizing sequence and a cover $\left\{ U_{\alpha
}\right\} $ satisfying $\int_{U_{\alpha }}\left| F(i)\right| ^{2}$ $dx<K(4)$
$\ \ \forall $ $i\in \mathcal{I}^{\prime }$, $i>I_{\alpha }$. \ On any
neighborhood $U_{\alpha }$ in the cover, Theorem \ref{localgauge} implies
that each member $A_{\alpha }(i)$ of the subsequence is gauge-equivalent to
a connection $\hat{A}_{\alpha }(i)$ in the Hodge gauge, satisfying a uniform
$W^{1,2}(U)$ bound on $\hat{A}_{\alpha }(i)$. \ Weak compactness of Sobolev
spaces now yields a further subsequence of $\left\{ \hat{A}_{\alpha
}(i)\right\} $, weakly convergent in $W^{1,2}$ to some $\hat{A}_{\alpha }$.
\ It only remains to show that $\hat{A}_{\alpha }$ retains the desired
regularity properties and boundary data, and that the set $\left\{ \hat{A}%
_{\alpha }\right\} $ can be patched to a global connection on $M.$ \ These
objectives are accomplished by Theorem 3.4 in \cite{Marini} (generalizing
Theorem 3.6 in \cite{Uhlenbeck} and Theorem 3.1 in \cite{Sedlacek}). \ Their
results are paraphrased below; the proof is by weak compactness of Sobolev
spaces:
\begin{theorem}
\label{goodlimit}
Let $\left\{ A(i)\right\} _{i\in \mathcal{I}}$ be a
sequence of $G$-connections with uniformly bounded action as described in
Theorem \ref{goodcover}, and with prescribed smooth tangential boundary
components $\left( A\left( i\right) \right) _{\tau }=a_{\tau }$ on $\partial
M$. \ Let $\varepsilon =K(4)$, where $K(4)$ is the constant from Theorem \ref%
{localgauge}. \ Then, for the subsequence $\left\{ A\left( i\right) \right\}
_{i\in \mathcal{I}^{\prime }}$ found in Theorem \ref{goodcover} and cover $%
\left\{ U_{\alpha }\right\} $, there exists a further subsequence $\left\{
A\left( i\right) \right\} _{i\in \mathcal{I}^{\prime \prime }}$, sections $%
\sigma _{\alpha }\left( i\right) :U_{\alpha }\rightarrow P_{i}$ ($i\in
\mathcal{I}^{\prime \prime }$) and connections $A_{\alpha }$ on $U_{\alpha }$ such
that \\ \\
\begin{tabular}{ll}
(e) & $\sigma _{\alpha }^{\ast }\left( i\right) \left( A_{i}\right) \equiv
A_{\alpha }\left( i\right) \rightharpoonup A_{\alpha }$ \ in $W^{1,2}\left(
U_{\alpha }\right) $ \\
(f) & $F\left( A_{\alpha }\left( i\right) \right) \equiv F_{\alpha }\left(
i\right) \rightharpoonup F_{\alpha }$ in $L^{2}\left( U_{\alpha }\right) $
\\
(g) & $s_{\alpha \beta }\left( i\right) \rightharpoonup s_{\alpha \beta }$
in $W^{2,2}\left( U_{\alpha }\cap U_{\beta }\right) $ \\
(h) & $\left. \left( A_{\alpha }\right) _{\tau } \right|
_{\partial _{1}U_{\alpha }}\sim \left. a_{\tau } \right|
_{\partial _{1}U_{\alpha }}$ by a smooth gauge transformation \\
(i) & $\left(
\begin{array}{c}
d\ast A_{\alpha }=0\ \text{on}\ U_{\alpha } \\
d_{\tau }\ast \left( A_{\alpha }\right) _{\tau }=0\ \text{on}\ \partial _{1}U_{\alpha }
\end{array}%
\right) $ \\
(j) & $A_{\alpha }\equiv s_{\alpha \beta }^{-1}A_{\beta }s_{\alpha \beta
}+s_{\alpha \beta }^{-1}ds_{\alpha \beta }$%
\end{tabular}%
\\ \\
Here $s_{\alpha \beta }(i)$ is the transition function $A_{\beta }\left(
i\right) \rightarrow A_{\alpha }\left( i\right) $; i.e.,
\begin{eqnarray*}
A_{\alpha }\left( i\right) \equiv s_{\alpha \beta }^{-1}\left( i\right)
A_{\beta }\left( i\right) s_{\alpha \beta }\left( i\right) +s_{\alpha \beta
}^{-1}\left( i\right) ds_{\alpha \beta }\left( i\right) .
\end{eqnarray*}
\end{theorem}
\begin{proof}
See \cite{Sedlacek}, \cite{Marini}. \ Note that the result follows by weak
compactness of Sobolev spaces, after applying diagonalization (as in the
proof of Theorem \ref{goodcover}) over the countable cover $\left\{
U_{\alpha }\right\} $.
\end{proof}
Lower semicontinuity of the Yang-Mills functional now implies that the value
of the action on the limiting connection $A$ of the sequence described in
Theorem \ref{goodlimit} is in fact $m\left( a_{\tau }\right) \equiv
\min\limits_{\mathcal{A}}I\left( A\right) $ where $\mathcal{A}$ is the set
of connections on $G$-bundles on $M$ such that $\left. A_{\tau } \right| _{\partial M}=a_{\tau }$. \ In Theorem 3.5 in \cite{Marini} and Theorem 4.1 in \cite%
{Sedlacek}, it is proved by contradiction that $A$ in fact satisfies the
Yang-Mills equations. \ These proofs are completely local, and hold
unchanged in our case.
The proofs of regularity of the connection in Hodge gauge are also local and
hold unchanged. \ Regularity except for at the points $\left\{
x_{1},...,x_{k}\right\} $ from Theorem \ref{goodcover} is a consequence of
the ellipticity of the Yang-Mills equations in Hodge gauge. \ At the points $%
\left\{ x_{1},...,x_{k}\right\} $, the limiting connection may not be
defined, so removable singularity theorems are needed to extend $A$ to these
points. \ The case of interior points is covered by Theorem 4.1 in \cite%
{UhlRS}, and that of boundary points by Theorem 4.6 in \cite{Marini}, so that the connection $A$ extends to a smooth connection (provided the Dirichlet boundary data is
smooth). \ More precisely, we have
\begin{theorem}
\label{RS}
Let $U^{\left( 1\right) }$ ($U^{\left( 2\right) }$) be a
neighborhood of type 1 (2); let $U_{\ast }^{\left( i\right) }=U^{\left(
i\right) }\backslash \left\{ 0\right\} $. \ Let $A$ be a connection in a
bundle $P$ over $U_{\ast }^{\left( i\right) }$, $\left\| F_{A}\right\|
_{L^{2}\left( U\right) }<B<\infty $. \ Then%
\\
\begin{tabular}{ll}
(Type 1) & If $A$ is Yang-Mills on $\left. P\right| U_{\ast }^{\left(
1\right) }$, there exists a $C^{\infty }$ connection $A_{0}$ \\
& defined on $U^{\left( 1\right) }$ such that $A_{0}$ is gauge-equivalent to
$A$ on $U_{\ast }^{\left( 1\right) }$. \\
(Type 2) & If $A$ is Yang-Mills and $C^{\infty }$ on $\left. P\right|
U_{\ast }^{\left( 2\right) }$, there exists a $C^{\infty }$ connec- \\
& tion $A_{0}$ defined on $U^{\left( 2\right) }$ such that $A_{0}$ is
gauge-equivalent to $A$ on \\
& $U_{\ast }^{\left( 2\right) }$, by a gauge transformation in $C^{\infty
}\left( U_{\ast }\right) $.%
\end{tabular}%
\end{theorem}
\begin{proof}
See \cite{UhlRS}, \cite{Marini}.
\end{proof}
\section{The Euclidean Yang-Mills Hamilton-Jacobi functional}
\label{Fun}
Having shown the existence of an absolute minimizer $A_{t}$ for the
Euclidean Yang-Mills action given prescribed smooth initial tangential
components $A=a_{\tau }$, we can now define the Hamilton-Jacobi functional%
\footnote{Note that for $S\left( A\right) $ to be finite, we are implicitly making the assumption that for all initial data $A$ of physical interest, there exists at least one trajectory $A_{s}$ ($A_{s=0}=A$) such that $-\tilde{I}\left(
A_{s}\right) <\infty $. \ This constraint defines the set of physical
fields, since for any $A$ on $\mathbb{R}^{3}$ for which no such $A_{s}$ can
be found, allowing $S\left( A\right) $ to take an infinite value implies
that evaluated on this $A$, the ground state $\Omega \left( A\right) $ is
zero.}%
\begin{eqnarray*}
S\left( A\right) =\int_{\mathbb{R}_{+}\times \mathbb{R}^{3}}tr\left(
F_{A_{t}}\wedge \ast F_{A_{t}}\right) dt,
\end{eqnarray*}%
where $\ast $ indicates the Hodge star operator in the Euclidean metric.
The values of this functional are well-defined even allowing for the
possible existence of more than one gauge-equivalence class of minimizers
for the given initial data; in principle we can simply choose a minimizer
starting from the field configuration $A=a_{\tau }.$ \ However, while still
an open question, there exist partial results toward establishing uniqueness
of a minimizer for given initial data in the compact case. \ In \cite{Isobe}%
, Isobe has shown that for flat boundary values, the Dirichlet problem on a
star-shaped bounded domain in $\mathbb{R}^{n}$\ can only have a flat
solution. \ Non-uniqueness results are proven by Isobe and Marini \cite%
{IsMar} for Yang-Mills connections in bundles over $B^{4}$, but the
solutions are topologically distinct, belonging to differing Chern classes.
\ On the domain $M=\mathbb{R}_{+}\times \mathbb{R}^{3}$, it seems
likely that the given initial data determine a minimizer that is unique up to gauge
transformation. \ Future work will aim to settle this question; one possible
means of approach is a conformal transformation to the compact case, as
described in Sect.~\ref{YMDirichlet}.
In order to make the claim that $S(A)$ solves the imaginary-time zero-energy
Yang-Mills Hamilton-Jacobi equation, we must also verify its functional
differentiability. \ This can be done using the same integration by parts
argument as in the derivation of the Euler-Lagrange equation. \ However, we
must first write the solution to the Euclidean Dirichlet problem in a global
gauge which is smooth and decays sufficiently rapidly at spatial and
temporal infinity.
First, Theorem \ref{RS} implies that the solution $A$ to the Yang-Mills
Dirichlet problem extends to a smooth connection on a smooth bundle over all
of $M=\mathbb{R}_{+}\times \mathbb{R}^{3}$. \ Since every bundle
over a contractible base manifold is trivial (see e.g. \cite{Nakahara}), $A$ is also a connection on the trivial bundle $P\cong M\times
G $. \ Therefore we can write $A$ in terms of a smooth global section $%
\sigma :M\rightarrow G$. \ Using this trivialization, $D=d+A$ is smoothly
defined over all of $M$.
The following lemma controls the growth of $A$ and $F,$ for a good choice of
gauge. \ Part \textit{(a)} is a version of Uhlenbeck's Corollary 4.2 \cite%
{UhlRS} for our base manifold $M=\mathbb{R}_{+}\times \mathbb{R}^{3}$;
part \textit{(b)} extends the same principle to bound the growth of the
connection 1-form $A$.
\begin{lemma}
\label{Decay}
Let $D=d+A$ be a connection in a bundle $P$ over an exterior
region $\mathcal{V}=\left\{ y\in \mathbb{R}_{+}\times \mathbb{R}%
^{3}:\left| y\right| \geq N\right\} $ satisfying $\int_{\mathcal{V}}\left|
F\right| ^{2}<\infty $. \ Then
\\
\begin{tabular}{ll}
(a) & $\left| F\right| \leq C\left| y\right| ^{-4}$ for some constant $C$
(not uniform); \\
(b) & There exists a gauge in which $D=d+\tilde{A}$ satisfies $\left|
\tilde{A}\right| \leq K\left| y\right| ^{-2}$.%
\end{tabular}%
\end{lemma}
\begin{proof}
\textit{(a)} \ Following the reasoning in \cite{UhlRS}, we define the
conformal mapping
\begin{eqnarray*}
f:U_{\ast }\rightarrow \mathcal{V} \\
y=f\left( x\right) =N\frac{x}{\left| x\right| ^{2}},%
\end{eqnarray*}%
where $U_{\ast }=\left\{ x\in \mathbb{R}_{+}\times \mathbb{R}%
^{3}:0<\left| x\right| \leq 1\right\} $. \ By conformal invariance of the
Yang-Mills action, we have%
\begin{eqnarray*}
\int_{U_{\ast }}\left| f^{\ast }F\right| ^{2}=\int_{U_{\ast }}\left| F\left(
f^{\ast }D\right) \right| ^{2}=\int_{\mathcal{V}}\left| F\right| ^{2}.
\end{eqnarray*}%
Applying the Type 2 case of Theorem \ref{RS} to the pullback $f^{\ast }D$ of $D$
under $f$, there exists a gauge transformation $\sigma :U_{\ast }\rightarrow
G$ in which $f^{\ast }D$ extends smoothly to $U$. \ Thus using the
transformation law for $2$-forms, we have the following
\begin{eqnarray*}
\left| F\left( y\right) \right| &=&\left| f^{\ast }F(x)\right| \left|
df\left( x\right) \right| ^{-2} \\
&\leq &\max_{x\in U}\left| f^{\ast }F(x)\right| \cdot \left( N/\left|
x\right| ^{2}\right) ^{-2} \\
&=&C^{\prime }N^{2}\left| y\right| ^{-4}.
\end{eqnarray*}
\textit{(b)} \ Define the gauge transformation $s=\sigma \circ f^{-1}:%
\mathcal{V}\rightarrow G$. \ Denoting $A^{s}=s^{-1}ds+s^{-1}As$ by $\tilde{A}
$ and $\left( f^{\ast }A\right) ^{\sigma }=\sigma ^{-1}d\sigma +\sigma
^{-1}\left( f^{\ast }A\right) \sigma $ by $\widetilde{f^{\ast }A}$, we have $%
f^{\ast }\tilde{A}=\widetilde{f^{\ast }A}$. \ Thus again applying the Type 2
case of Theorem \ref{RS} and using the transformation law for $1$-forms,%
\begin{eqnarray*}
\left| \tilde{A}\left( y\right) \right| &=&\left| f^{\ast }\tilde{A}%
(x)\right| \left| df\left( x\right) \right| ^{-1} \\
&\leq &\max_{x\in U}\left| \widetilde{f^{\ast }A}(x)\right| \cdot \left(
N/\left| x\right| ^{2}\right) ^{-1} \\
&=&C^{\prime \prime }N\left| y\right| ^{-2}.
\end{eqnarray*}%
\end{proof}
We are now ready to prove differentiability of our Hamilton-Jacobi
functional. \ Thanks are due to V. Moncrief for suggesting the form of this
argument.
\begin{theorem}
The functional%
\begin{eqnarray*}
S\left( A\right) =-\tilde{I}\left( A\right) =\int_{\mathbb{R}%
_{+}\times \mathbb{R}^{3}}tr\left( F_{A_{t}}\wedge \ast F_{A_{t}}\right) dt
\end{eqnarray*}%
is functionally differentiable, and $\frac{\delta S}{\delta A}=E=\dot{A}_{t=0}$.
\end{theorem}
\begin{proof}
To find the functional derivative of $S(A)=-\tilde{I}\left( A_{t}\right) $
at a given connection $A_{0}$ on the slice $x^{0}=0$, consider the
1-parameter family $A_{0}+\lambda h$, constructing%
\begin{eqnarray*}
\left. \frac{d}{d\lambda }\left[ S\left( A_{0}+\lambda h\right) \right]
\right| _{\lambda =0} &=&\lim\limits_{\lambda \rightarrow 0}\frac{S\left(
A_{\lambda }\right) -S(A_{0})}{\lambda } \\
&=&\lim\limits_{\lambda \rightarrow 0}\frac{-\tilde{I}\left( A_{\lambda
,t}\right) -\left( -\tilde{I}(A_{0,t})\right) }{\lambda } \\
&=&-\lim\limits_{\lambda \rightarrow 0}\frac{1}{\lambda }\left[ \tilde{I}%
\left( A_{\lambda ,t}\right) -\tilde{I}(A_{0,t})\right]
\end{eqnarray*}%
where for each $A_{\lambda }=A_{0}+\lambda h$, $A_{\lambda ,t}$ denotes the
absolute minimizer of $-\tilde{I}$ given initial data $A_{\lambda }$. \ For
any given value $\lambda _{0}$, the difference $\tilde{I}\left( A_{\lambda
_{0},t}\right) -\tilde{I}(A_{0,t})$ can be expressed in terms of a Taylor
series, as follows. \ First, use the parameter $\lambda $ to interpolate
between $A_{0,t}$ and $A_{\lambda _{0},t}$, describing a 1-parameter family $%
X_{\lambda ,t}$,%
\begin{eqnarray*}
X_{\lambda ,t}\equiv \frac{\lambda }{\lambda _{0}}A_{\lambda _{0},t}+\left(
1-\frac{\lambda }{\lambda _{0}}\right) A_{0,t},
\end{eqnarray*}%
so that $X_{\lambda ,0}=A_{\lambda }$. \ The standard Taylor series
expansion of $\tilde{I}\left( X_{\lambda ,t}\right) $ as a function of $%
\lambda $ then gives%
\begin{equation}
\tilde{I}\left( A_{\lambda _{0},t}\right) -\tilde{I}(A_{0,t})=\lambda
_{0}\left( \frac{\partial \tilde{I}}{\partial \lambda }\right) _{\lambda
=0}+O\left( \lambda _{0}^{2}\right) .
\label{Idiff}
\end{equation}%
Let $h_{t}=\frac{1}{\lambda _{0}}\left( A_{\lambda _{0},t}-A_{0,t}\right) $,
so that $X_{\lambda ,t}=A_{0,t}+\lambda h_{t}$. \ Then%
\begin{eqnarray*}
\left. \frac{\partial \tilde{I}}{\partial \lambda }\right| _{\lambda =0}
&=&\left. \frac{\partial }{\partial \lambda }\left[ \int_{\mathbb{R}%
_{+}\times \mathbb{R}^{3}}\left\langle F_{X_{\lambda ,t}},F_{X_{\lambda
,t}}\right\rangle \right] \right| _{\lambda =0} \\
&=&2\int_{\mathbb{R}_{+}\times \mathbb{R}^{3}}\left\langle dh_{t}+%
\left[ A_{0,t},h_{t}\right] ,F_{A_{0,t}}\right\rangle \\
&=&2\lim_{R\rightarrow \infty }\left( \int_{\partial _{1}}\left\langle
h,F_{A_{0}}\right\rangle +\int_{\partial _{2}}\left\langle
h_{t},F_{A_{0,t}}\right\rangle -\int_{0\leq \left| x\right| <R}\left\langle
h_{t},D^{\ast }F_{A_{0,t}}\right\rangle \right)
\end{eqnarray*}%
where $\partial _{1}=\left\{ \left| x\right| <R,x^{0}=0\right\} ,$ $\partial
_{2}=\left\{ \left| x\right| =R,x^{0}>0\right\} $.
The last term on the right-hand side vanishes because $A_{0,t}$ satisfies
the Yang-Mills equations, $D^{\ast }F_{A_{0,t}}=0$. \ Working with $%
A_{\lambda _{0},t}$ and $A_{0,t}$ both in the gauge guaranteed by Lemma \ref%
{Decay} (for some fixed $N$ which $R$ eventually surpasses), the middle term
also approaches zero as $R$ approaches infinity, since%
\begin{eqnarray*}
\left\langle h_{t},F_{A_{0,t}}\right\rangle &\leq &\left| h_{t}\right|
\left| F_{A_{0,t}}\right| \\
&\leq &\frac{1}{\lambda _{0}}\left( \left| A_{\lambda _{0},t}\right| +\left|
A_{0,t}\right| \right) \left| F_{A_{0,t}}\right| \\
&\leq &\frac{1}{\lambda _{0}}\left( K_{\lambda _{0}}+K_{0}\right) C_{0}\cdot
R^{-6}.
\end{eqnarray*}%
Since the area element on $\partial _{2}$ contributes only a factor of $R^{2}
$, the middle term is easily seen to vanish. \ Thus we are left with only
the first term, so that%
\begin{eqnarray*}
\left. \frac{\partial \tilde{I}}{\partial \lambda }\right| _{\lambda
=0}=\int_{\mathbb{R}^{3}}\left\langle h,F_{A_{0}}\right\rangle ,
\end{eqnarray*}%
and the definition of functional derivative implies that%
\begin{eqnarray*}
\frac{\delta S}{\delta A}=E=\dot{A}_{t=0}.
\end{eqnarray*}
\end{proof}
\section{Gauge and Poincar\'{e} invariance}
\label{Gauge}
In order for the candidate ground state wave functional%
\begin{eqnarray*}
\Omega (A)= \mathcal{N} \exp \left( -S\left( A\right) \right)
\end{eqnarray*}%
to be physical, it must remain invariant under the action of gauge
transformations $g\left( x\right) $, $x\in \mathbb{R}^{3}$, on the
connection $A(x):$%
\begin{eqnarray*}
S\left( g^{-1}dg+g^{-1}Ag\right) =S\left( A\right) ,
\end{eqnarray*}%
so that $S$ is in fact a functional on the physical configuration space $%
\mathcal{A/G}$ of connections modulo gauge transformations, rather than the
kinematical configuration space $\mathcal{A}$. \ Gauge invariance of $S$
follows immediately from its form%
\begin{eqnarray*}
S(A)=-\int_{0}^{\infty }\tilde{L}\left( A_{t},\dot{A}_{t}\right) dt=\int_{\mathbb{R}_{+}\times \mathbb{R}^{3}}tr\left( F_{A_{t}}\wedge
\ast F_{A_{t}}\right)
\end{eqnarray*}%
where $\ast $ denotes the Hodge star operator in the Euclidean metric on $%
\mathbb{R}_{+}\times \mathbb{R}^{3}$. \ The gauge transformation $%
g\left( x\right) $, $x\in \mathbb{R}^{3}$, can simply be extended to $\mathbb{R}_{+}\times \mathbb{R}^{3}$ by taking $g\left( t,x\right) =g\left(
x\right) $ constant over $\mathbb{R}_{+}$, and the cyclic property of
the trace implies%
\begin{eqnarray*}
S\left( g^{-1}dg+g^{-1}Ag\right) &=&\int_{\mathbb{R}_{+}\times
\mathbb{R}^{3}}tr\left( F_{g\cdot A_{t}}\wedge \ast F_{g\cdot A_{t}}\right)
\\
&=&\int_{\mathbb{R}_{+}\times \mathbb{R}^{3}}tr\left( F_{A_{t}}\wedge
\ast F_{A_{t}}\right) =S\left( A\right) .
\end{eqnarray*}
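The covariance of the curvature underlying this step is the standard computation

```latex
\begin{eqnarray*}
F_{g\cdot A} &=&d\left( g^{-1}dg+g^{-1}Ag\right) +\left(
g^{-1}dg+g^{-1}Ag\right) \wedge \left( g^{-1}dg+g^{-1}Ag\right) \\
&=&g^{-1}\left( dA+A\wedge A\right) g\ =\ g^{-1}F_{A}g,
\end{eqnarray*}
```

so that the integrand $tr\left( F_{g\cdot A_{t}}\wedge \ast F_{g\cdot A_{t}}\right) $ transforms by conjugation inside the trace, which the cyclic property leaves invariant.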
Similarly, rotations and translations applied to $\mathbb{R}^{3}$ do not
affect the value of $S\left( A\right) $, because we can extend them
constantly through time over $\mathbb{R}_{+}\times \mathbb{R}^{3}$,
and by a change of coordinates the value of the integral defining $S\left(
A\right) $ is unchanged. \ The only remaining Poincar\'{e} transformations are
boosts, which cannot be verified directly in our canonical framework. \ The
conserved quantity generating an infinitesimal boost in the $x^{i}$
direction is
\begin{equation}
C_{B(i)}=\int_{\mathbb{R}^{3}}\left( x^{0}\delta^{i}_{\mu} + x^{i}\delta^{0}_{\mu} \right) T^{\mu 0}d^{3}x,
\label{boost}
\end{equation}%
where $T^{\mu \nu }=-\frac{1}{4\pi } tr \left\{ F_{\ \ \alpha }^{\mu
}F^{\nu \alpha }-\frac{1}{4}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta
}\right\} $ is the stress-energy tensor of Yang-Mills theory. \ This
infinitesimal generator must be promoted to a quantum operator which annihilates our candidate ground state. \ A test case in which this can be done is the
abelian case of $U\left( 1\right) $ gauge theory (free Maxwell theory),
using Wheeler's zero-energy ground state (\ref{Wheeler}) as in Sect.~\ref{Intro}%
. \ This is alternatively attainable as%
\begin{eqnarray*}
\Omega (A)=\mathcal{N}\exp \left( -\frac{\left\langle A, \left( \ast d \right)\triangle ^{-1/2}\left( \ast d\right) A\right\rangle _{2}}{2%
}\right)
\end{eqnarray*}%
by using the normal ordering as described in Sect.~\ref{Ordering} to write the
unique positive square root of the operator $\ast d\ast d$ in the form $%
\mathbf{(}\ast d)\triangle ^{-1/2}\left( \ast d\right) $. \ Writing the
operator $\triangle ^{-1/2}$ in terms of its integral kernel shows the
equality of this state with (\ref{Wheeler}). \ Invariance under
infinitesimal boosts follows by expressing (\ref{boost}) as the sum of a
translation generator $\frac{x^{0}}{4\pi }\int_{\Sigma }\varepsilon
^{ijk}E_{j}B_{k}d^{3}x$ plus the term $\frac{1}{8\pi }\int_{\Sigma
}x^{i}\left( \left| E\right| ^{2}+\left| B\right| ^{2}\right) d^{3}x$. \
Translation invariance being already established, it only remains to verify
that (\ref{Wheeler}) is annihilated by the remaining term under our
ordering. \ Indeed, the functional $S(A)$ in the exponent of (\ref{Wheeler})
can be directly shown to satisfy%
\begin{eqnarray*}
\int_{\mathbb{R}^{3}}x^{i}\left| \frac{\delta S}{\delta A}\right|
^{2}d^{3}x=\int_{\mathbb{R}^{3}}x^{i}\left| B\right| ^{2}d^{3}x,
\end{eqnarray*}%
allowing the extra term to be ordered in the same way as the Hamiltonian. \
Using the abelian case as a model, we hope to extend invariance under boosts
to the nonabelian case in future work.
\\ \\
\textit{ \textbf{Acknowledgements.} I am very grateful to my advisor, Vincent Moncrief, for suggesting this project, and for help and guidance throughout its development. \ For kind hospitality, I offer my heartfelt thanks to the University of Amsterdam's
Korteweg-de Vries Institute for Mathematics, in particular Eric Opdam and
Jan Wiegerinck.}
\pagebreak
Cosmology is undergoing a {\it fin-de-millennium} flowering brought on by
the hothouse of recent technological progress. Cosmography, one of the
roots of cosmology, is thriving too---maps of the distribution of
luminous objects are covering larger volumes, reaching higher redshifts
and encompassing a wider variety of sources than ever before. New ways
of interpreting these observations are yielding a richer and more
detailed picture of the structure and evolution of the universe, and of
the underlying cosmology.
The 2dF Galaxy Redshift Survey at the Anglo-Australian Telescope aims to
map the optically-luminous galaxies over a statistically-representative
volume of the universe in order to characterise as fully as possible the
large-scale structure of the galaxy distribution and the interplay
between this structure and the properties of the galaxies themselves. In
doing so, the survey will address a variety of fundamental problems in
galaxy formation and cosmology. The major scientific goals include:
\begin{enumerate}
\item
Measuring the power spectrum of the galaxy distribution on scales up to
a few hundred Mpc, filling the gap in our knowledge of the power
spectrum between scales less than 100\mbox{\,h$^{-1}$\,Mpc}\ derived from earlier galaxy
redshift surveys and scales greater than 1000\mbox{\,h$^{-1}$\,Mpc}\ probed by microwave
background anisotropy observations. The shape of the power spectrum on
these large scales provides a strong constraint on the nature of the
dark matter (i.e.\ whether it is hot or cold) and the total mass density
$\Omega$.
\item
Measuring the distortion of the large-scale spatial clustering pattern
in redshift space due to the peculiar velocity field produced by the
mass distribution. This distortion depends on both the mass density
parameter $\Omega$ and the bias factor $b$ of the galaxy distribution
with respect to the mass distribution, and its measurement constrains
the combination $\beta \approx \Omega^{0.6}/b$.
\item
Investigating the morphology of galaxy clustering and the statistical
properties of the fluctuations: on large scales, to determine whether
the density distribution is Gaussian as predicted by most inflationary
models of the early universe; on small scales, to probe the non-linear
evolution of the density field.
\item
Determining the variations in the spatial and velocity distributions of
galaxies as a function of galaxy properties, including luminosity and
spectral type. This provides a detailed picture of the link between the
masses and star-formation histories of galaxies and their local
environment within the large-scale distribution. Such information will
constrain models of galaxy formation and evolution.
\item
Tracking the galaxy luminosity function, clustering amplitude and mean
star formation rate out to a redshift of $z$$\sim$0.5, in order to test
models for the evolution of the galaxies' stellar populations and
large-scale distribution.
\item
Defining large, homogeneous samples of groups and clusters of galaxies
and other large-scale structures. Redshifts for many galaxies in a
representative sample of clusters and groups will give a snapshot of the
dynamical evolution of the most massive bound objects in the universe at
the present epoch. Mapping the infall patterns around clusters will also
yield dynamical estimates of cluster masses at large radii.
\end{enumerate}
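Goal 2 above can be made concrete with a small numerical sketch. The linear-theory (Kaiser) formula for the angle-averaged boost of redshift-space clustering over its real-space value is standard; the sample values of $\Omega$ and $b$ below are purely illustrative assumptions, not survey results.

```python
# Illustrative sketch of goal 2: the redshift-space distortion parameter
# beta = Omega**0.6 / b, and the Kaiser linear-theory boost of the
# angle-averaged redshift-space clustering over its real-space value,
#   P_s / P_r = 1 + (2/3)*beta + (1/5)*beta**2.
# The (Omega, b) pairs below are hypothetical, chosen only to show sensitivity.
def beta(omega, b):
    return omega ** 0.6 / b

def kaiser_boost(beta_val):
    return 1.0 + (2.0 / 3.0) * beta_val + 0.2 * beta_val ** 2

for omega, b in [(1.0, 1.0), (0.3, 1.0), (0.3, 1.5)]:
    bv = beta(omega, b)
    print(f"Omega={omega}, b={b}: beta={bv:.2f}, boost={kaiser_boost(bv):.2f}")
```

Distinguishing, say, $\Omega$=1 from $\Omega$=0.3 at fixed bias changes the boost at the tens-of-percent level, which is the kind of signal the survey's clustering measurements target.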
The redshift survey will also provide a massive database for use in
conjunction with other surveys or as a source for follow-up programs.
Various interesting subsamples of galaxies (such as AGN or cDs) can be
defined using the positions, luminosities and spectral types provided by
the survey. Other rich veins of information can be tapped by correlating
the galaxies in this survey with sources found at other wavelengths
(X-ray, infrared, radio) by existing or planned satellite or
ground-based surveys (ROSAT, IRAS, WIRE, FIRST, etc.).
\section{Survey design}
The survey design seeks to achieve the above goals using the
capabilities of the 2dF multi-fibre spectrograph on the Anglo-Australian
Telescope (AAT) in approximately 100 nights of telescope time. A full
description of the 2dF facility is available on the WWW at {\tt
http://www.aao.gov.au/2df/}. However for the purposes of the survey, the
essential features of 2dF are its 2~degree diameter field of view
covered by 400 fibres and that it is attached to a 4m telescope. With
both the goals of the survey and the capabilities of 2dF in mind, and
with the limitation that the survey should not require more than about
100 nights of AAT time, we have arrived at the following survey design.
\subsection{Source catalogue}
The source catalogue for the survey is a revised and extended version of
the APM galaxy catalogue (Maddox et~al.\ 1990a,b,c). This catalogue is
based on Automated Plate Measuring machine (APM) scans of 390 plates
from the UK Schmidt Telescope (UKST) Southern Sky Survey. The magnitude
system for the Southern Sky Survey is defined by the response of Kodak
IIIaJ emulsion in combination with a GG395 filter, zeropointed by CCD
photometry to the Johnson B band. The extended version of the APM
catalogue includes over 5 million galaxies down to b$_J$=20.5 in both
north and south Galactic hemispheres over a region of almost
10$^4$~sq.deg (bounded approximately by declination $\delta$$\leq$+3 and
Galactic latitude $b$$\gs$20). The fields included in the catalogue are
shown as the red squares in Figure~\ref{fig:skyplot}. The astrometry for
the galaxies in the catalogue has been significantly improved, so that
the rms error is now 0.25~arcsec for galaxies with b$_J$=17--19.5. Such
precision is required in order to minimize light losses with the
2~arcsec diameter fibres of 2dF. The photometry of the catalogue is
calibrated with numerous CCD sequences and has a precision of
approximately 0.2~mag for galaxies with b$_J$=17--19.5.
\begin{figure}
\centering
\vspace*{5pt}
\parbox{\textwidth}{\epsfxsize=\textwidth \epsfbox{skyplot.ps}}
\vspace*{5pt}
\caption{The 2dF survey fields (small circles) superimposed on the APM
survey area (dotted outlines of Sky Survey plates). There are
approximately 140,000 galaxies in the 75$^\circ$$\times$15$^\circ$
South Galactic Hemisphere strip centred on the South Galactic
Pole, 70,000 galaxies in the 75$^\circ$$\times$7.5$^\circ$ North
Galactic Hemisphere equatorial strip, and 40,000 galaxies in the
100 random 2dF fields scattered over the entire southern region of the
APM galaxy survey.
\label{fig:skyplot}}
\end{figure}
\subsection{Survey geometry}
The geometry of the survey was chosen to be an effective compromise
between the desire to sparsely sample the largest possible volume in
order to determine the power spectrum on very large scales and the
desire to fully sample a representative but compact volume in order to
investigate the redshift space distortions and the topology of the
galaxy distribution. There is also the observational requirement to
spread the survey over a wide R.A.\ range to permit efficient use of
telescope time. The survey geometry adopted is shown in
Figure~\ref{fig:skyplot}. It consists of two contiguous declination
strips plus 100 random 2-degree fields. One strip is in the southern
Galactic hemisphere and covers approximately
75$^\circ$$\times$15$^\circ$ centred close to the South Galactic Pole at
($\alpha$,$\delta$)=($01^h$,$-30$); the other strip is in the northern
Galactic hemisphere and covers 75$^\circ$$\times$7.5$^\circ$ centred at
($\alpha$,$\delta$)=($12.5^h$,$+00$). The 100 random fields are spread
uniformly over the 7000~sq.deg region of the APM catalogue in the
southern Galactic hemisphere. At the mean redshift of the survey
($\bar{z}\approx0.11$), 100\mbox{\,h$^{-1}$\,Mpc}\ subtends about 20~degrees, so the two
strips are 375\mbox{\,h$^{-1}$\,Mpc}\ long and have widths of 75\mbox{\,h$^{-1}$\,Mpc}\ (south) and
37.5\mbox{\,h$^{-1}$\,Mpc}\ (north). The volume directly sampled by the survey (out to
$z$=0.2) is 3$\times$10$^7$\mbox{\,h$^{-3}$\,Mpc$^3$}; the volume sparsely sampled
including the random fields is 1$\times$10$^8$\mbox{\,h$^{-3}$\,Mpc$^3$}.
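These comoving sizes can be checked with a rough low-redshift estimate. The sketch below assumes a simple Hubble-law distance $d \approx cz/H_0$, which ignores the cosmology-dependent corrections behind the survey's own figures, so agreement with the quoted numbers is only at the $\sim$15\% level.

```python
import math

# Rough low-redshift comoving distance in h^-1 Mpc: d ~ c*z/H0,
# with c/H0 = 2998 h^-1 Mpc. Valid only for z << 1; the survey's own
# numbers assume a particular cosmology, so expect ~15% differences.
C_OVER_H0 = 2998.0  # h^-1 Mpc

def comoving_distance(z):
    return C_OVER_H0 * z

d = comoving_distance(0.11)             # ~330 h^-1 Mpc at the mean redshift
angle_100mpc = math.degrees(100.0 / d)  # ~17 deg, vs. "about 20 degrees"
print(f"d ~ {d:.0f} h^-1 Mpc; 100 h^-1 Mpc subtends ~{angle_100mpc:.0f} deg")
```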
\subsection{Sample selection}
The sample is chosen to be magnitude-limited at b$_J$=19.5 after
extinction-correcting all the magnitudes in the APM catalogue using the
extinction maps of Schlegel et~al.\ (1998). This limit was chosen
because the mean number of galaxies per square degree at b$_J$=19.5 is
well-matched to the density of fibres available with 2dF. Due to
clustering, however, the number in a given field varies considerably. To
make efficient use of the instrument we employ a sophisticated tiling
algorithm to cover the survey area with the minimum number of 2dF
fields. With this algorithm we are able to achieve a 93\% sampling rate
with on average fewer than 5\% wasted fibres per field. Over the whole
area of the survey there are in excess of 250,000 galaxies. The mean
redshift of this b$_J$=19.5 magnitude-selected sample is $\bar{z}$=0.11.
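As a back-of-envelope check that the surface density at b$_J$=19.5 is indeed well matched to the fibre density (the density below is inferred from the strip totals quoted in the caption of Figure~\ref{fig:skyplot}, not stated directly in the text):

```python
import math

# Does the b_J <= 19.5 surface density match 400 fibres per 2-degree field?
# ~140,000 galaxies over the 75 x 15 deg southern strip (figure caption).
surface_density = 140_000 / (75 * 15)  # ~124 galaxies per sq. deg
field_area = math.pi * 1.0 ** 2        # 2-deg diameter -> 1-deg radius, sq. deg
mean_per_field = surface_density * field_area
print(f"~{surface_density:.0f} per sq.deg -> ~{mean_per_field:.0f} "
      "galaxies per 2dF field, vs. 400 fibres")
```

The mean field is thus slightly under-subscribed, and the tiling algorithm absorbs the clustering-driven fluctuations about this mean.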
\subsection{The faint survey}
The `bright' survey described above will be supplemented by a `faint'
survey of 10,000 galaxies down to R=21. These are drawn from APM scans
of deep UK Schmidt Telescope films taken on Kodak TechPan emulsion. The
faint survey is limited to selected fields in the two survey strips and
is carried out as an over-ride on the bright survey in the best
observing conditions. The mean redshift of the faint survey is
$\bar{z}$=0.3, and thus extends the bright survey sample by a factor of
3 in depth and a factor of 10 in luminosity at the cost of a 10\%
increase in total observing time.
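The quoted depth and luminosity factors can be roughly recovered from the mean redshifts alone. The sketch below uses a crude Euclidean scaling and ignores K-corrections and the change of band from b$_J$ to R, so it only checks orders of magnitude.

```python
# Crude consistency check of the quoted factors, using the ratio of mean
# redshifts as a proxy for depth and inverse-square (Euclidean) scaling
# for luminosity at fixed flux. K-corrections and the b_J -> R band
# change are ignored, so this is order-of-magnitude only.
depth_factor = 0.3 / 0.11                  # ~2.7, quoted as "a factor of 3"
lum_factor_fixed_flux = depth_factor ** 2  # ~7.4; the fainter flux limit
                                           # pushes this toward the quoted 10
print(f"depth x{depth_factor:.1f}, luminosity x{lum_factor_fixed_flux:.1f}")
```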
\section{Survey observations and status}
The total integration time on a given field has a lower limit set by the
time required for 2dF's robotic fibre positioner to configure the
off-line field-plate. Other than this hardware limitation, the
integration time is determined by the requirement that we can obtain
precise and reliable redshifts and spectral type classifications, which
is comfortably met with integrations of 60~min. The spectra we obtain
cover the range 3800--8000\AA\ with a two-pixel resolution of 8.6\AA,
and have a minimum S/N of about 10 per pixel at 5000\AA; most spectra
will easily exceed this target. Two example spectra from the survey are
shown in Figure~\ref{fig:specs}. One is a b$_J$=19.2 emission-line
galaxy at $z$=0.067 and the other is a b$_J$=19.3 absorption-line galaxy
at $z$=0.246.
\begin{figure}
\centering
\vspace*{5pt}
\parbox{\textwidth}{\epsfxsize=0.7\textwidth \epsfbox{specs.ps}}
\vspace*{5pt}
\caption{Example spectra from the survey: a b$_J$=19.2 emission-line
galaxy at $z$=0.067 and a b$_J$=19.3 absorption-line galaxy at
$z$=0.246.
\label{fig:specs}}
\end{figure}
This minimum S/N permits very reliable automatic spectral classification
and redshift measurement. Employing both a standard cross-correlation
and line-fitting code and a new code which uses principal component
analysis and $\chi^2$-fitting to simultaneously classify the spectrum
and measure its redshift (Glazebrook et~al.\ 1998), we find we achieve a
very high level of reliability. A comparison of the redshifts obtained
from these codes with redshifts determined via visual inspection shows a
very low level of failures in the automatic algorithms. The success rate
in identifying redshifts for survey galaxies is currently 90--95\%; the
goal is to achieve an overall success rate in excess of 95\%.
The first data for the survey was taken at the start of 2dF's
commissioning in November 1996. The first survey observations with all
400 fibres were obtained in October 1997. As of this writing (March
1998) we have obtained $\sim$8000 redshifts for the survey in 43
different fields (many of which were observed with only 200 fibres and
most of which were shared with the QSO survey---see Boyle, these
proceedings). At present the fibre positioner is taking 110~min to
configure a typical field, limiting the survey to 4--5 fields per night.
However it is confidently expected that fine-tuning of the robot's
hardware and software will reduce this configuration time to about
60~min within the next few months, doubling the rate at which fields are
done. We therefore currently expect to complete the survey before the
end of 2000.
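The arithmetic behind these field rates can be sketched with a simplified model: with two field plates, one plate is configured while the other is observed, so the per-field cycle time is the slower of configuration and integration. The $\sim$10-hour night is an assumed value and per-field overheads are neglected, which is why the estimate sits at the top of the quoted 4--5 fields per night.

```python
# Simplified field-rate model: with two field plates, one is configured
# while the other is observed, so the cycle time per field is the slower
# of (configuration time) and (integration time). A ~10-hour night is an
# assumed value, and per-field overheads are neglected.
def fields_per_night(config_min, integrate_min=60, night_hours=10):
    cycle = max(config_min, integrate_min)
    return night_hours * 60 / cycle

print(f"110-min configuration: ~{fields_per_night(110):.1f} fields/night")
print(f" 60-min configuration: ~{fields_per_night(60):.1f} fields/night")
```

Once configuration no longer exceeds the integration time, the rate is set by the exposures themselves, hence the expected doubling.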
\begin{figure}
\centering
\vspace*{5pt}
\parbox{\textwidth}{(a)\epsfxsize=0.95\textwidth \epsfbox{2dFzcone.ps}}\\
\vspace*{5pt}
\parbox{\textwidth}{(b)\epsfxsize=0.95\textwidth \epsfbox{mock_cdm.ps}}
\vspace*{5pt}
\caption{(a)~A redshift slice for the galaxies observed to date,
combining northern and southern strips and including $\sim$8000 galaxies
($\sim$3\% of the full sample). (b)~A cone plot for a mock 2dF redshift
survey from Cole et~al.\ (1998).
\label{fig:zslice}}
\end{figure}
\section{Preliminary Results}
The survey is still in its infancy, with less than 3\% of the total
number of redshifts measured to date. Moreover the fields observed so
far are scattered over the survey volume, making the analysis of the
large-scale structure of the distribution problematic. Nonetheless, the
preliminary results presented here do provide some hints of the power
and scope of the full survey.
Figure~\ref{fig:zslice}a is a cone plot showing the distribution of the
8000 galaxies observed so far. Note that fields at all declinations are
projected onto this right ascension slice through the galaxy
distribution, which combines both the northern and southern strips. Even
with the highly incomplete filling of the slice by the fields obtained
to date, there are clear glimpses of significant large-scale structures.
To provide a visual reference for comparison, Figure~\ref{fig:zslice}b
shows a $90^\circ\times3^\circ$ slice through a mock 2dF survey
based on a CDM N-body simulation and a recipe for galaxy biasing (see
Cole et~al.\ 1998).
\begin{figure}
\centering
\vspace*{5pt}
\parbox{\textwidth}{\epsfxsize=0.8\textwidth \epsfbox{allnz.ps}}
\vspace*{5pt}
\caption{A preliminary redshift distribution from the 2dF survey. The
smooth curve is the predicted redshift distribution neglecting
clustering.
\label{fig:nz}}
\end{figure}
Figure~\ref{fig:nz} shows the combined redshift distribution for all
fields in comparison to the predicted distribution in a homogeneous
universe. The mean redshift is 0.11, as expected, although the clear
signature of clustering shows that we are still far from having a
representative volume of the universe, in which deviations from the
predicted redshift distribution would approach the Poisson limit.
\begin{figure}
\centering
\vspace*{5pt}
\parbox{\textwidth}
{\hspace*{0.1\textwidth}(a)~\epsfxsize=0.67\textwidth\epsfbox{xi_sigpi.ps}}\\
\vspace*{5pt}
\parbox{\textwidth}
{\hspace*{0.1\textwidth}(b)~\epsfxsize=0.67\textwidth\epsfbox{xi_r.ps}}
\vspace*{5pt}
\caption{(a)~The correlation function in redshift space
($\xi_s(\sigma,\pi)$) as a function of separation in the plane of the
sky ($\sigma$) and along the line-of-sight ($\pi$). The spatial
resolution of this contour plot is 0.5\,$h^{-1}$\,Mpc, and the contours
are $-$0.1, 0, 0.1, 0.2, 0.5, 1, 2, 5, 10, 30 and 50. (b)~A preliminary
estimate of the real-space galaxy correlation function. $\Xi(r)$ is the
projection of $\xi_s(\sigma,\pi)$ in the plane of the sky (i.e.\ onto
the $\sigma$ axis); $\Xi(r)/r \propto \xi_r(r)$ for a power-law
real-space correlation function $\xi_r(r)=(r/r_0)^{-\gamma}$. Solid and
open points indicate positive and negative values respectively. The
solid line shows the prediction from the deprojected power spectrum of
Baugh and Efstathiou (1993).
\label{fig:xiplots}}
\end{figure}
Preliminary measurements of the galaxy correlation function are shown in
Figure~\ref{fig:xiplots}. The redshift-space distortion of the
distribution is illustrated in Figure~\ref{fig:xiplots}a, which shows
$\xi_s(\sigma,\pi)$, the correlation function in redshift-space as a
function of separation in the plane of the sky ($\sigma$) and along the
line-of-sight ($\pi$). At small separations we see that the correlation
function is flattened along the line of sight due to the finger-of-god
effect. At sufficiently large separations (i.e.\ in the linear regime)
we expect a flattening in the plane of the sky that depends on
$\beta=\Omega^{0.6}/b$, although the preliminary determination of
$\xi_s(\sigma,\pi)$ presented here is too noisy on large scales to
permit an estimate of $\beta$. We can project $\xi_s(\sigma,\pi)$ in the
plane of the sky (i.e.\ onto the $\sigma$ axis) to obtain the projected
real-space correlation function $\Xi(r)$. Figure~\ref{fig:xiplots}b
shows $\Xi(r)/r$, since $\Xi(r)/r \propto \xi_r(r)$ (the deprojected
real-space correlation function) in the case of a power-law,
$\xi_r(r)=(r/r_0)^{-\gamma}$. For $r$$<$10\mbox{\,h$^{-1}$\,Mpc}\ we find that $\Xi(r)/r
\propto r^{-1.7}$ in agreement with most previous studies and notably
with the prediction by Baugh and Efstathiou (1993) from the deprojection
of the angular correlation function for the full APM galaxy survey. On
larger scales $\Xi(r)/r$ is as yet poorly determined due to the small
number of fields and their irregular geometry.
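The projection relation used above can be illustrated numerically. The standalone Python sketch below integrates a power-law $\xi_r$ along the line of sight and recovers the expected $\Xi(\sigma)/\sigma \propto \sigma^{-\gamma}$ scaling; the correlation length $r_0 = 5\,h^{-1}$\,Mpc and the integration grid are illustrative assumptions, not the survey's fitted parameters.

```python
import math

# Illustrative power-law real-space correlation function; r0 is an assumed
# demonstration value, not the survey's fitted correlation length.
def xi_r(r, r0=5.0, gamma=1.7):
    return (r / r0) ** (-gamma)

def projected_xi(sigma, r0=5.0, gamma=1.7, t_max=1.0e3, steps=50_000):
    """Xi(sigma) = 2 * integral_0^inf xi_r(sqrt(sigma^2 + pi^2)) d(pi),
    computed with the substitution pi = sigma*t and the trapezoidal rule."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * xi_r(sigma * math.sqrt(1.0 + t * t), r0, gamma)
    return 2.0 * sigma * h * total

# Log-slope of Xi(sigma)/sigma between two separations; for a power law
# it should equal -gamma.
s1, s2 = 2.0, 8.0
slope = math.log((projected_xi(s2) / s2) / (projected_xi(s1) / s1)) \
    / math.log(s2 / s1)
```

The substitution $\pi=\sigma t$ makes the $\sigma^{-\gamma}$ scaling of $\Xi(\sigma)/\sigma$ explicit, so the measured slope matches $-\gamma$ independently of the truncation of the integral.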
\begin{figure}
\centering
\vspace*{5pt}
\parbox{\textwidth}{\epsfxsize=\textwidth \epsfbox{2dFzlf.ps}}
\vspace*{5pt}
\caption{A preliminary galaxy luminosity function from the 2dF survey
using mean K-corrections and normalised to the APM number counts. The
points are the $1/V_{max}$ LF, the solid curve is the $\chi^2$ fit of a
Schechter function to the points and the dotted curve is the STY fit of
a Schechter function. The parameters of these fits are given on the
figure. The inset shows the 1, 2 and 3-$\sigma$ contours of the $\chi^2$
fit in $M^*$ and $\alpha$; the cross marks the $M^*$ and $\alpha$
obtained from the STY fit.
\label{fig:phi}}
\end{figure}
Although so far we have observed too few fields to effectively address
questions of large-scale structure, we do have a sufficiently large
sample of redshifts to begin to look at the properties of local
galaxies. Figure~\ref{fig:phi} shows the galaxy luminosity function for
the entire sample. This preliminary determination uses a single global
K-correction and is normalized by the number counts from the APM input
catalogue, so that both the shape and normalization may change in
the final analysis. Note however that we are achieving a good
determination of the luminosity function 5 magnitudes below $L^*$ even
with this small subset of the full survey. The LF is generally well
represented by a Schechter function with $M^* \approx -19.5$, $\alpha
\approx -1.1$ and $\phi^* \approx 0.02$.
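For reference, the quoted fit can be evaluated directly. The Python sketch below implements the standard magnitude form of the Schechter function with the approximate parameters given above; the function name and default arguments are ours, for illustration only.

```python
import math

def schechter(M, M_star=-19.5, alpha=-1.1, phi_star=0.02):
    """Schechter luminosity function per unit absolute magnitude:
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)
```

With $\alpha < -1$ the faint end rises toward fainter magnitudes, while the bright end is exponentially suppressed, consistent with the behaviour of the fitted curves.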
\begin{figure}
\centering
\vspace*{5pt}
\parbox{\textwidth}{\epsfxsize=\textwidth \epsfbox{means.ps}}
\vspace*{5pt}
\caption{The mean spectra corresponding to the five spectral types
defined from the principal component analysis of 3000 spectra.
\label{fig:means}}
\end{figure}
The crucial missing element in this analysis is a spectral
classification scheme, needed both in order to determine K-corrections
for individual galaxies and also in order to investigate the variation
in the luminosity function for different galaxy populations. As a first
step towards this goal, we have determined spectral types from a
principal component analysis for a subset of 3000 galaxies (Folkes
et~al.\ 1996). The galaxies were split into five spectral types based on
their projection in the plane of the first two principal components
(Folkes 1998). The mean spectra corresponding to each of these five
spectral types are shown in Figure~\ref{fig:means}. Type~1 corresponds
to a purely absorption-line galaxy spectrum and type~5 corresponds to a
strongly emission-line dominated spectrum, with the other types
intermediate.
\begin{figure}
\centering
\vspace*{5pt}
\parbox{\textwidth}{\epsfxsize=\textwidth \epsfbox{lumfns.ps}}
\vspace*{5pt}
\caption{The luminosity functions for the subsample of 3000 galaxies
with spectral types (and hence K-corrections) and for each of the
spectral classes individually.
\label{fig:lumfn}}
\end{figure}
The luminosity function for these 3000 galaxies with spectral types, now
with appropriate K-corrections, is shown in the top left panel of
Figure~\ref{fig:lumfn}. The remaining panels of the figure show the
luminosity functions for each type separately, and reveal a trend
towards fainter characteristic magnitudes $M^\ast$ and steeper faint end
slopes $\alpha$ as we move from `early' types (with absorption-line
spectra) to `late' types (with emission-line spectra). With the full
survey sample we will be able to refine this analysis in exquisite
detail, determining the variations in the luminosity function with both
spectral type and environment (i.e.\ local density) simultaneously.
\vspace*{10pt}
Previous descriptions of the 2dF galaxy redshift survey and the 2dF
facility can be found in Colless (1997), Maddox (1998) and Colless and
Boyle (1998). Updates are posted on the WWW at {\tt
http://msowww.anu.edu.au/$\sim$colless/2dF/} and {\tt
http://www.ast.cam.ac.uk/$\sim$2dFgg/}.
\thebibliography{}
\item
Baugh C.M., Efstathiou G., 1993, MNRAS, 265, 145
\item
Cole S., Hatton S., Weinberg D.H., Frenk C.S., 1998, MNRAS, in press
\item
Colless M.M., 1997, Wide Field Spectroscopy and the Universe, {\it Wide
Field Spectroscopy}, eds Kontizas M., Kontizas E., Kluwer, pp.227--240
\item
Colless M.M., Boyle B.J., 1998, in `Highlights of Astronomy', Vol.11, in
press
\item
Folkes S.R., 1998, PhD thesis, University of Cambridge
\item
Folkes S.R., Lahav O., Maddox S.J., 1996, MNRAS, 283, 651
\item
Glazebrook K., Offer A.R., Deeley K., 1998, ApJ, 492, 98
\item
Maddox S.J., 1998, in `Large Scale Structure: Tracks and Traces', World
Scientific, in press
\item
Maddox S.J., Efstathiou G., Sutherland W.J., Loveday J., 1990a, MNRAS,
242, 43{\sc p}
\item
Maddox S.J., Sutherland W.J., Efstathiou G., Loveday J., 1990b, MNRAS,
243, 692
\item
Maddox S.J., Efstathiou G., Sutherland W.J., 1990c, MNRAS, 246, 433
\item
Schlegel D.J., Finkbeiner D.P., Davis M., 1998, ApJ, 499, in press
\endthebibliography
\end{document}
\section{Introduction} \label{sec:Introduction}
Multigrid methods are especially well-known for their efficiency in solving linear systems arising from the discretization of partial differential equations (PDEs) \cite{Bra77,Bra82,BHM00,RDF06,Hac85,TOS01,Yav06}. Over many years, multigrid methods have been developed and their scope greatly expanded, and numerous multigrid algorithms now exist. One aspect that seems to have received relatively limited attention in these developments is the recursive structure of the multigrid cycle. Although adaptive multigrid cycles have certainly been studied (e.g., in \cite{Bra73,Bra82,Rue93}), in the vast majority of applications a fixed recursive strategy is employed: most commonly the so-called $V$-cycle, and often $W$-cycles or $F$-cycles. Other cycling strategies have been employed over the years for specific purposes, but they are far less common. Indeed, multigrid textbooks typically introduce the standard cycle, with the {\em cycle index}, typically denoted by $\gamma$, as a parameter that determines the number of recursive calls to the cycle routine at each level of the multigrid hierarchy, and from then on they refer only to $\gamma = 1$ or $\gamma = 2$ (corresponding to the $V$ and $W$ cycles, respectively). The $F$-cycle, which in a sense mixes $\gamma = 1$ and $\gamma = 2$, is also mentioned because it is useful in many cases. For example, the classical introductory textbook \cite{BHM00} presents the multigrid cycle with the cycle index, denoted there by $\mu$, and then mentions that, in practice, only $\mu = 1$ and $\mu = 2$ are used, corresponding to the $V$ and $W$ cycles, respectively. As in other textbooks, alternative fixed cycling strategies are not considered.
To formally define the classical multigrid cycles, we consider the solution of a linear system of the form
$$
A^1u^1 = f^1 \, ,
$$
\noindent
where the $1$ superscript indicates that this problem corresponds to the finest level of the multigrid hierarchy. The standard family of multigrid cycles with cycle index $\gamma$ and $n$ levels is defined recursively in Algorithm \ref{alg:gamma_cycle} (see, e.g., \cite{BHM00}).
\SetNlSkip{-0.5em}
\begin{algorithm2e}[h]
\DontPrintSemicolon
\label{alg:gamma_cycle}
\caption{The classical $\gamma$-cycle}
{$v^\ell \leftarrow \gamma\mbox{-}cycle(v^\ell,f^\ell,A^\ell,\ell,n,\gamma)$}\;
\Indp
\nl
If $\ell == n$ (coarsest level), solve $A^\ell v^\ell = f^\ell$ and return.\;
\nl
{\em Relax} $\nu_1$ times on $A^\ell u^\ell = f^\ell$ with initial guess $v^\ell$.\;
\nl
$f^{\ell+1} \leftarrow Restrict(f^\ell - A^\ell v^\ell)$.\;
\nl
$v^{\ell+1} \leftarrow 0$. \;
\nl
Repeat $\gamma$ times:\;
$v^{\ell+1} \leftarrow \gamma\mbox{-}cycle(v^{\ell+1},f^{\ell+1},A^{\ell+1},\ell+1,n,\gamma)$.\;
\nl
$v^\ell \leftarrow v^\ell + Prolong(v^{\ell+1})$.\;
\nl
{\em Relax} $\nu_2$ times on $A^\ell u^\ell = f^\ell$ with initial guess $v^\ell$.\;
\end{algorithm2e}
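The recursive structure of the $\gamma$-cycle can be mimicked in a few lines to count how often the cycle routine is invoked on each level. The Python sketch below (bookkeeping only, not part of any solver) reproduces the familiar counts: one call per level for $\gamma = 1$ and $2^{\ell-1}$ calls on level $\ell$ for $\gamma = 2$.

```python
def gamma_cycle_calls(level, n, gamma, counts):
    """Mirror the recursion of the gamma-cycle: gamma recursive calls
    per level (line 5), none on the coarsest level."""
    counts[level - 1] += 1
    if level == n:  # coarsest level: direct solve, no recursion
        return
    for _ in range(gamma):
        gamma_cycle_calls(level + 1, n, gamma, counts)

def calls_per_level(n, gamma):
    counts = [0] * n
    gamma_cycle_calls(1, n, gamma, counts)
    return counts
```

For example, `calls_per_level(5, 2)` returns `[1, 2, 4, 8, 16]`, summing to $2^5-1$ calls per $W$-cycle.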
As noted above, with $\gamma = 1$ (i.e., a single recursive call per cycle) we get the classical $V$-cycle, while $\gamma = 2$ (two recursive calls) yields the $W$-cycle. The third commonly used cycle, the $F$-cycle, cannot be described in this form. Rather awkwardly, once we have defined the $\gamma$-cycle, the $F$-cycle
is defined in Algorithm \ref{alg:F_cycle}. Here, we have a single recursive call to the $F$-cycle, followed by a call to the $V$-cycle (i.e., the $\gamma$-cycle with $\gamma = 1$).
We remark that, when $\ell = n-1$, the first recursive call in line 5 of Algorithm \ref{alg:gamma_cycle} returns the exact solution of the coarsest-level problem, so the execution of subsequent recursive calls when $\gamma > 1$ has no effect. Similarly, line 6 of Algorithm \ref{alg:F_cycle} has no effect when $\ell = n-1$. Nevertheless, we adopt this standard form for all levels. This choice typically has a negligible effect on the overall cost of the cycle, and furthermore, the additional call to the coarsest level does have an effect when the problem on line 1 is only solved approximately, which is not uncommon in real applications.
\begin{algorithm2e}[h]
\DontPrintSemicolon
\label{alg:F_cycle}
\caption{The $F$-cycle}
{$v^\ell \leftarrow \mbox{$F$-}cycle(v^\ell,f^\ell,A^\ell,\ell,n)$}\;
\Indp
\nl
If $\ell == n$ (coarsest level), solve $A^\ell v^\ell = f^\ell$ and return.\;
\nl
{\em Relax} $\nu_1$ times on $A^\ell u^\ell = f^\ell$ with initial guess $v^\ell$.\;
\nl
$f^{\ell+1} \leftarrow Restrict(f^\ell - A^\ell v^\ell)$.\;
\nl
$v^{\ell+1} \leftarrow 0$. \;
\nl
$v^{\ell+1} \leftarrow \mbox{$F$-}cycle(v^{\ell+1},f^{\ell+1},A^{\ell+1},\ell+1,n)$.\;
\nl
$v^{\ell+1} \leftarrow \gamma\mbox{-}cycle(v^{\ell+1},f^{\ell+1},A^{\ell+1},\ell+1,n,1)$.\;
\nl
$v^\ell \leftarrow v^\ell + Prolong(v^{\ell+1})$.\;
\nl
{\em Relax} $\nu_2$ times on $A^\ell u^\ell = f^\ell$ with initial guess $v^\ell$.\;
\end{algorithm2e}
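The same bookkeeping applies to the $F$-cycle. The sketch below (an illustration, not solver code) mirrors the two calls on lines 5 and 6 and shows that the cycle routine is invoked $\ell$ times on level $\ell$, for a total of $n(n+1)/2$ calls per $F$-cycle.

```python
def v_calls(level, n, counts):
    """Call counts for a V-cycle: one recursive call per level."""
    counts[level - 1] += 1
    if level < n:
        v_calls(level + 1, n, counts)

def f_calls(level, n, counts):
    """Mirror the F-cycle: one recursive F-cycle call (line 5) followed
    by one V-cycle call (line 6) on every level but the coarsest."""
    counts[level - 1] += 1
    if level == n:
        return
    f_calls(level + 1, n, counts)
    v_calls(level + 1, n, counts)

def f_cycle_counts(n):
    counts = [0] * n
    f_calls(1, n, counts)
    return counts
```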
The reason for the casual treatment of the recursive structure of multigrid algorithms is probably that there is often no practical need for alternative strategies. If the simplest and cheapest variant---the $V$-cycle---does the job, as it often does, then there seems to be no reason to look any further. If a stronger cycle is required, then we can try $\gamma = 2$, i.e., the $W$-cycle, or compromise with the $F$-cycle if that is efficient. Nevertheless, in this paper we argue that, although these three cycle types
may have been sufficient in the days of more limited machines and moderate-sized problems, it is worthwhile to reconsider this choice on modern-day and future computing platforms. Such platforms may have a very high throughput for big problems, but are much less efficient for a large number of sequential small problems. This observation is true for the GPU platforms we consider in this paper. More generally, efficient parallel implementation of algebraic multigrid (AMG) for unstructured problems on modern high-performance computers is known to be challenging \cite{baker2012preparing}, requiring specialized handling. Examples of this include explicit sparsification of coarse-grid operators \cite{BFG16}, gathering and redistributing coarse-grid data to reduce communication costs \cite{GGJ13}, and non-Galerkin coarsening \cite{FS14,TY15}. Even structured multigrid solvers require special redistribution techniques on coarse levels to obtain good scaling properties for very large problems \cite{ROM18}.
Suppose that we apply the standard multigrid algorithm with $n$ levels, numbered $1$ to $n$ from finest to coarsest. In a single $V$-cycle, implemented recursively, the cycle routine is called only once per level---a total of $n$ calls. However, in the $W$-cycle it is called $2^{\ell-1}$ times on level $\ell$, $\ell = 1, \ldots \, , n$, totaling $2^n-1$ calls to the cycle routine in one complete $W$-cycle. For example, in a 2D problem with standard coarsening, the cycle routine is called once per iteration at the finest level, where there are $N$ variables. At the next level the $W$-cycle is called twice and the number of variables is $N/4$. Each such $W$-cycle makes two calls at the next-coarser level, with the number of variables divided again by 4. Thus, the number of activations of the cycle is doubled with each coarsening, even as the size of the problem is divided by 4. The upshot is that, if the number of operations per level depends linearly on the number of variables, then the total number of operations remains linear in $N$. However, the calls to the cycle routine are performed sequentially, and their number is evidently exponential in the number of levels. For instance, for such a problem of size $2^{10} \times 2^{10}$, about one million fine-level variables, a $W$-cycle using ten levels performs over 1000 calls to the cycle routine, compared to just ten calls for a $V$-cycle, whereas the number of operations performed by the $W$-cycle is only $50\%$ more compared to the $V$-cycle.
This ``exponential gap'' between $V$ and $W$-cycles in the number of routine calls may be significant, especially when $(a)$ big problems are attacked; $(b)$ the total cost of coarse-grid work is relatively high because of complexity growth on coarser grids, as may happen in AMG methods which provide limited explicit control over the complexity of the operators on the coarse grids and over the rate of coarsening; $(c)$ parallel processing efficiency is significantly reduced in the $W$-cycle, because the coarse-grid problems are small and the $2^{n}-1$ cycle routine calls are performed sequentially, and not in parallel.
Finally, on a more conceptual level, we ask: why define a general parameter---the cycle index $\gamma$---if only the values 1 or 2 are used in practice? Moreover, why is the popular $F$-cycle not describable in the standard cycling-scheme framework?
These aspects of the standard multigrid cycle seem to be somewhat at odds with the spirit of Occam's Razor, which gives preference to simplicity of representation.
In the next section we replace the {\em cycle index} $\gamma$ by another positive integer, $\kappa$, which we dub the {\em cycle counter} in order to distinguish it from the standard cycle index. With this we define a family of multigrid cycles whose recursive structure is determined by $\kappa$. For certain choices of $\kappa$, we obtain the three common cycles, $V$, $W$ and $F$, but for other choices we get other cycles that are stronger than the $V$ and $F$ cycles, and yet retain the property that the total number of cycle routine calls over all levels\footnote{Although the algorithms are presented in recursive form, our arguments hold also for non-recursive implementations, which are readily available. The higher computational cost resulting from increasing $\kappa$ is due to the additional sequential traversing across levels, rather than to the recursion per se.} is polynomial in $n$, rather than exponential as in the $W$-cycle.
The remainder of this paper is organized as follows. In \cref{sec:TheKappaCycle} we introduce the $\kappa$-cycle and present theoretical and practical complexity properties. In \cref{sec:timePrediction} we introduce and verify in practice a simple model for predicting the run-time of $\kappa$-cycles. In \cref{sec:NumericalTests} we test the $\kappa$-cycle performance on GPU processors, and we summarize and conclude in \cref{sec:Conclusions}.
\section{The \texorpdfstring{{$\kappa$}}{k}-cycle} \label{sec:TheKappaCycle}
\subsection{\texorpdfstring{{$\kappa$}}{k}-cycle definition} \label{subsec:RecursiveStructure}
Given a positive integer $\kappa$, the $\kappa$-cycle, defined in Algorithm \ref{alg:kappa_cycle}, performs one recursive call inheriting the same counter $\kappa$, followed by a second call with the counter reduced by one. The latter call is performed only if the counter for this call remains positive.
\begin{algorithm2e}[h]
\DontPrintSemicolon
\label{alg:kappa_cycle}
\caption{The $\kappa$-cycle}
{$v^\ell \leftarrow \kappa\mbox{-}cycle(v^\ell,f^\ell,A^\ell,\ell,n,\kappa)$}\;
\Indp
\nl
If $\ell == n$ (coarsest level), solve $A^\ell v^\ell = f^\ell$ and return.\;
\nl
{\em Relax} $\nu_1$ times on $A^\ell u^\ell = f^\ell$ with initial guess $v^\ell$.\;
\nl
$f^{\ell+1} \leftarrow Restrict(f^\ell - A^\ell v^\ell)$.\;
\nl
$v^{\ell+1} \leftarrow 0$. \;
\nl
$v^{\ell+1} \leftarrow \kappa\mbox{-}cycle(v^{\ell+1},f^{\ell+1},A^{\ell+1},\ell+1,n,\kappa)$.\;
\nl
If $\kappa > 1$\;
$v^{\ell+1} \leftarrow \kappa\mbox{-}cycle(v^{\ell+1},f^{\ell+1},A^{\ell+1},\ell+1,n,\kappa - 1)$.\;
\nl
$v^\ell \leftarrow v^\ell + Prolong(v^{\ell+1})$.\;
\nl
{\em Relax} $\nu_2$ times on $A^\ell u^\ell = f^\ell$ with initial guess $v^\ell$.\;
\end{algorithm2e}
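The call structure of the $\kappa$-cycle can likewise be unrolled in a few lines of bookkeeping code. The Python sketch below mirrors lines 5 and 6 of Algorithm \ref{alg:kappa_cycle}: one recursive call with the same counter, then a second call with the counter reduced by one, performed only if it remains positive.

```python
def kappa_calls(level, n, kappa, counts):
    """Mirror the kappa-cycle: one recursive call with the same counter
    (line 5), then one with the counter reduced by 1, only if the reduced
    counter is still positive (line 6)."""
    counts[level - 1] += 1
    if level == n:  # coarsest level: direct solve, no recursion
        return
    kappa_calls(level + 1, n, kappa, counts)
    if kappa > 1:
        kappa_calls(level + 1, n, kappa - 1, counts)

def kappa_cycle_counts(n, kappa):
    counts = [0] * n
    kappa_calls(1, n, kappa, counts)
    return counts
```

For six levels, `kappa_cycle_counts(6, 1)`, `kappa_cycle_counts(6, 2)` and `kappa_cycle_counts(6, 6)` yield the $V$-, $F$- and $W$-cycle per-level counts `[1, 1, 1, 1, 1, 1]`, `[1, 2, 3, 4, 5, 6]` and `[1, 2, 4, 8, 16, 32]`, respectively.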
\noindent \cref{fig:cycle_illustrations} provides an illustration of $\kappa$-cycles with 5 levels. Circles represent relaxation, corresponding to lines 2 and 8 in Algorithm \ref{alg:kappa_cycle}, while each red X represents a recursive call.
The following proposition relates the $\kappa$-cycle to the three classical multigrid cycles.
\begin{proposition} \label{prop:VFW}
$ $
\begin{itemize}
\item
For $\kappa = 1$, the $\kappa$-cycle is identical to the standard $V$-cycle.
\item
For $\kappa = 2$, the $\kappa$-cycle is identical to the standard $F$-cycle.
\item
For $\kappa \geq n$, the $\kappa$-cycle is identical to the standard $W$-cycle.
\end{itemize}
\end{proposition}
\begin{proof}
For $\kappa = 1$, Algorithm \ref{alg:kappa_cycle} evidently performs just a single recursive call on each level, so it is indeed identical to the $V$-cycle. Given this, observe that for $\kappa = 2$, Algorithm \ref{alg:kappa_cycle} performs a recursive call with $\kappa = 2$, followed by a $V$-cycle, so it is indeed identical to the $F$-cycle of Algorithm \ref{alg:F_cycle}. For the third claim, observe that the smallest cycle counter that appears in a cycle routine call at level $\ell>1$ is smaller by one than the smallest cycle counter that appears in a cycle routine call at the next-finer level $\ell-1$. If the finest-level $\kappa$ is at least $n$, this implies that $\kappa$ is positive in the entire cycle, implying two recursive calls on each level but the coarsest. Therefore, for $\kappa \geq n$, the $\kappa$-cycle is equivalent to the $\gamma$-cycle of Algorithm \ref{alg:gamma_cycle} with $\gamma = 2$, i.e., the $W$-cycle.
\end{proof}
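Beyond counting calls, the equivalences in the proposition above can be checked mechanically by comparing the exact order in which levels are visited. The Python sketch below records such a visit trace for the $\kappa$-cycle, the classical $\gamma$-cycle, and the $F$-cycle, each transcribed directly from the corresponding algorithm; for equal numbers of levels the traces coincide as claimed.

```python
def kappa_trace(level, n, kappa, trace):
    """Record the order in which levels are processed by the kappa-cycle
    (one entry at pre-smoothing, one after the coarse-grid correction)."""
    trace.append(level)
    if level == n:
        return
    kappa_trace(level + 1, n, kappa, trace)
    if kappa > 1:
        kappa_trace(level + 1, n, kappa - 1, trace)
    trace.append(level)

def classical_trace(level, n, gamma, trace):
    """Same bookkeeping for the classical gamma-cycle."""
    trace.append(level)
    if level == n:
        return
    for _ in range(gamma):
        classical_trace(level + 1, n, gamma, trace)
    trace.append(level)

def f_trace(level, n, trace):
    """Same bookkeeping for the F-cycle: a recursive F-cycle call
    followed by a V-cycle call on each level but the coarsest."""
    trace.append(level)
    if level == n:
        return
    f_trace(level + 1, n, trace)
    classical_trace(level + 1, n, 1, trace)
    trace.append(level)

def trace_of(fn, *args):
    trace = []
    fn(*args, trace)
    return trace
```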
\begin{figure}
\centering
\subfigure[1-cycle.] {\includegraphics[scale=0.4]{section2/k1.png}}
\subfigure[2-cycle.] {\includegraphics[scale=0.4]{section2/k2.png}}
\subfigure[3-cycle.] {\includegraphics[scale=0.4]{section2/k3.png}}
\subfigure[5-cycle.] {\includegraphics[scale=0.4]{section2/k5.png}}
\caption{The $\kappa$-cycle is illustrated with 5 levels and $\kappa=1,2,3,5$. Circles denote relaxations, corresponding to lines 2 and 8 of Algorithm \ref{alg:kappa_cycle}, while red X's denote recursive calls.}
\label{fig:cycle_illustrations}
\end{figure}
In the next subsection we study properties of this family of cycles.
We denote by $levelCalls(\kappa, \ell)$ the number of times that the $\kappa$-cycle routine is called in a single cycle on level $\ell$ when employing cycle counter $\kappa$, and $totalCalls(\kappa, n)$ denotes the total number of calls to the cycle routine in a single cycle of $n$ levels and cycle counter $\kappa$. In the proofs we commonly employ the well-known identities:
\begin{equation} \label{eq:BinomIdentity}
\binom{a-1}{b}+\binom{a-1}{b-1}=\binom{a}{b} ~~\mbox{and}~~ \sum_{j=0}^{a} \binom{a}{j} = 2^a \, .
\end{equation}
\subsection{Theoretical properties} \label{subsec:TheoreticalProperties}
We begin by computing the number of recursive calls at each level of the $\kappa$-cycle. As we will see in \cref{cor:levelCalls}, the number of calls per level is a monotonically increasing function of $\kappa$ for $1\leq\kappa\leq \ell$. For $\kappa\geq \ell$, the number of calls per level remains constant, because the $\kappa$-cycle is equivalent to a $W$-cycle in this regime.
\begin{proposition}\label{prop:levelCalls}
The number of calls to the cycle routine on level $\ell$ of a single $\kappa$-cycle with $\kappa \geq 1$ is given by
$$
levelCalls(\kappa, \ell)=\sum_{j=0}^{\min(\kappa-1,\ell-1)}\binom{\ell-1}{j} \, .
$$
\end{proposition}
\begin{proof}
Note first that for $\kappa = 0$ this formula yields $levelCalls(0,\ell) \equiv 0$, as it should, because the cycle routine is never called recursively with a non-positive cycle counter in Algorithm \ref{alg:kappa_cycle}. Next, we apply induction over the levels.
For $\ell=1$:
$$
levelCalls(\kappa, \ell = 1) = \sum_{j=0}^{\min(\kappa-1,\ell-1)} \binom{\ell-1}{j} = \sum_{j=0}^{0} \binom{0}{j} = \binom{0}{0} = 1 \, ,
$$
\noindent which is correct, as there is always one call on the first level of a single cycle. Now, we assume by induction that this claim is true for level $\ell=p$ and prove it for $\ell=p+1$. To this end, we need to relate the number of calls on level $p+1$ to the number of calls on level $p$. Observe that in the processing on the finest level $1$ in Algorithm \ref{alg:kappa_cycle} we make two recursive calls to $\kappa$-cycles on level $2$: one with cycle counter $\kappa$ and one with cycle counter $\kappa - 1$. Observe also that level $p+1$ of the original cycle is level $p$ of the two cycles beginning on level $2$. This implies
$$
levelCalls(\kappa, p+1) = levelCalls(\kappa, p) + levelCalls(\kappa-1, p) \, .
$$
\noindent
Therefore, it remains to be proved that
$$
\sum_{j=0}^{\min(\kappa-1,p)}\binom{p}{j} = \sum_{j=0}^{\min(\kappa-1,p-1)}\binom{p-1}{j} + \sum_{j=0}^{\min(\kappa-2,p-1)}\binom{p-1}{j}\, .
$$
\noindent
We distinguish between two cases:
\begin{enumerate}
\item $\kappa \leq p$.
It follows that $\kappa-2 < \kappa-1 \leq p-1$, and therefore
\begin{eqnarray*}
\sum_{j=0}^{\min(\kappa-1,p-1)} \binom{p-1}{j} + \sum_{j=0}^{\min(\kappa-2,p-1)} \binom{p-1}{j} & = & \sum_{j=0}^{\kappa-1}\binom{p-1}{j} + \sum_{j=0}^{\kappa-2}\binom{p-1}{j} \\
& = & \binom{p-1}{0} + \sum_{j=1}^{\kappa-1} \left( \binom{p-1}{j} + \binom{p-1}{j-1} \right) \\ & = & \binom{p-1}{0} + \sum_{j=1}^{\kappa-1} \binom{p}{j} = \sum_{j=0}^{\kappa-1}\binom{p}{j} \\ & = & \sum_{j=0}^{\min(\kappa-1,p)} \binom{p}{j} \, .
\end{eqnarray*}
\item $\kappa\geq p+1$.
It follows that $\kappa-1 > \kappa-2 \geq p-1$, and therefore
\begin{eqnarray*}
\sum_{j=0}^{\min(\kappa-1,p-1)} \binom{p-1}{j} + \sum_{j=0}^{\min(\kappa-2,p-1)} \binom{p-1}{j} & = & \sum_{j=0}^{p-1} \binom{p-1}{j} + \sum_{j=0}^{p-1} \binom{p-1}{j} \\ & = & 2^{p-1} + 2^{p-1} = 2^p = \sum_{j=0}^{p}\binom{p}{j} \, .
\end{eqnarray*}
\noindent
But $\kappa \geq p+1$ implies that $\min(\kappa-1, p)=p$, hence,
$$
\sum_{j=0}^{p} \binom{p}{j} = \sum_{j=0}^{\min(\kappa-1,p)} \binom{p}{j} \, .
$$
\end{enumerate}
\noindent
We find that in both cases we obtain $levelCalls(\kappa, p+1)=\sum_{j=0}^{\min(\kappa-1,p)}\binom{p}{j}$.
\end{proof}
\begin{corollary} \label{cor:levelCalls}
\mbox{ }
\begin{enumerate}
\item
For $\ell > \kappa$ we have:
$$
levelCalls(\kappa,\ell)=\sum_{j=0}^{\min(\kappa-1,\ell-1)}\binom{\ell-1}{j}=\sum_{j=0}^{\kappa-1}\binom{\ell-1}{j} \, .
$$
\noindent
Hence, the number of calls per level grows monotonically with the cycle counter for $\ell > \kappa$.
\item
For $\ell \leq \kappa$ we have:
$$
levelCalls(\kappa,\ell)=\sum_{j=0}^{\min(\kappa-1,\ell-1)}\binom{\ell-1}{j}=\sum_{j=0}^{\ell-1}\binom{\ell-1}{j}=2^{\ell-1} \, .
$$
\noindent
Hence, the number of calls per level is independent of the cycle counter for $\ell \leq \kappa$.
\end{enumerate}
\end{corollary}
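The closed-form expression for $levelCalls$ can be cross-checked against a brute-force unrolling of the $\kappa$-cycle recursion. The Python sketch below compares the binomial-sum formula with per-level counts obtained by direct simulation.

```python
from math import comb

def level_calls_formula(kappa, ell):
    """levelCalls(kappa, ell) = sum_{j=0}^{min(kappa-1, ell-1)} C(ell-1, j)."""
    return sum(comb(ell - 1, j) for j in range(min(kappa - 1, ell - 1) + 1))

def level_calls_by_recursion(n, kappa):
    """Brute-force per-level call counts obtained by unrolling the
    kappa-cycle recursion (lines 5 and 6 of the algorithm)."""
    counts = [0] * n

    def rec(level, k):
        counts[level - 1] += 1
        if level == n:
            return
        rec(level + 1, k)
        if k > 1:
            rec(level + 1, k - 1)

    rec(1, kappa)
    return counts
```

The plateau for $\ell \leq \kappa$ is visible directly: `level_calls_formula(3, 3)` and `level_calls_formula(9, 3)` both give $2^{3-1} = 4$.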
We next sum up the number of recursive calls over all the levels.
\begin{proposition} \label{prop:totalCalls}
The total number of calls to the cycle routine in a single $\kappa$-cycle with $\kappa \geq 1$ is given by
$$
totalCalls(\kappa, n) = \sum_{j=1}^{\min(\kappa,n)} \binom{n}{j} \, .
$$
\end{proposition}
\begin{proof}
We apply induction over the number of levels $n$. In the case of a single level, $n=1$, we have just one cycle call, and indeed
$$
totalCalls(\kappa,n=1) = \sum_{j=1}^{\min(\kappa,n)} \binom{n}{j} = \sum_{j=1}^{1}\binom{1}{j}=\binom{1}{1}=1 \, .
$$
\noindent
We assume that the claim is true for $n = p$ and prove it by induction for $n = p+1$.
Note that $totalCalls$ is nothing but the sum of $levelCalls$ over all levels, hence,
$$
totalCalls(\kappa,p+1) = totalCalls(\kappa,p) + levelCalls(\kappa,p+1) \, .
$$
\noindent
By the induction hypothesis and \cref{prop:levelCalls}, this yields
$$
totalCalls(\kappa,p+1) = \sum_{j=1}^{\min(\kappa,p)}\binom{p}{j} + \sum_{j=0}^{\min(\kappa-1,p)}\binom{p}{j} \, .
$$
\noindent
We distinguish between two cases:
\begin{enumerate}
\item
$\kappa \leq p$. Hence,
\begin{eqnarray*}
totalCalls(\kappa,p+1) & = & \sum_{j=1}^{\min(\kappa,p)} \binom{p}{j} + \sum_{j=0}^{\min(\kappa-1,p)} \binom{p}{j} = \sum_{j=1}^{\kappa} \binom{p}{j} + \sum_{j=0}^{\kappa-1} \binom{p}{j} \\ & = & \sum_{j=1}^{\kappa} \binom{p}{j} + \sum_{j=1}^{\kappa} \binom{p}{j-1} = \sum_{j=1}^{\kappa} \left( \binom{p}{j}+\binom{p}{j-1} \right) \\
& = & \sum_{j=1}^{\kappa} \binom{p+1}{j} = \sum_{j=1}^{\min(\kappa,p+1)} \binom{p+1}{j}
\end{eqnarray*}
\item
$\kappa \geq p+1$. Hence,
\begin{eqnarray*}
totalCalls(\kappa,p+1)& = & \sum_{j=1}^{\min(\kappa,p)} \binom{p}{j} + \sum_{j=0}^{\min(\kappa-1,p)} \binom{p}{j} = \sum_{j=1}^{p} \binom{p}{j} + \sum_{j=0}^{p} \binom{p}{j} \\ & = & 2^p-1+2^p=2^{p+1}-1 = \sum_{j=1}^{p+1} \binom{p+1}{j} = \sum_{j=1}^{\min(\kappa,p+1)} \binom{p+1}{j} \, .
\end{eqnarray*}
\end{enumerate}
\noindent
In both cases the claim is satisfied.
\end{proof}
\begin{corollary} \label{cor:CycleCallComplexity}
\mbox{ }
\begin{enumerate}
\item
For $\kappa \geq n$ we have $totalCalls(\kappa, n)=\sum_{j=1}^{n}\binom{n}{j}=2^n-1$, i.e., exponential in $n$ (as we observed in \cref{prop:VFW}).
\item
For $\kappa < n$ we have $totalCalls(\kappa, n) = \sum_{j=1}^{\kappa}\binom{n}{j}$. We conclude that when the number of levels $n$ is greater than the cycle counter $\kappa$, $totalCalls(\kappa, n)$ is a $\kappa$-degree polynomial in the number of levels $n$, because
$$
\binom{n}{j} = \frac{n!}{j!(n-j)!} = \frac{1}{j!} \left( n(n-1)(n-2) \ldots (n-j+1) \right)\, ,
$$
\noindent
a $j$-degree polynomial in $n$. Hence,
$$
totalCalls(\kappa,n) = \sum_{j=1}^{\kappa}\binom{n}{j} \, ,
$$
\noindent
is a polynomial in $n$ whose degree is $\kappa$.
\end{enumerate}
\end{corollary}
\noindent Evidently, the number of calls, and correspondingly the computational cost of the cycle, grows as $\kappa$ is increased until $\kappa = n$. On the other hand, as we shall demonstrate in our numerical tests, the convergence rate generally improves as $\kappa$ is increased. Our aim will therefore be to choose $\kappa$ that yields the best trade-off in terms of overall run time.
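As with the per-level counts, the closed form for $totalCalls$ can be verified against a brute-force unrolling of the recursion. The Python sketch below compares the two and evaluates the special cases discussed above.

```python
from math import comb

def total_calls_formula(kappa, n):
    """totalCalls(kappa, n) = sum_{j=1}^{min(kappa, n)} C(n, j)."""
    return sum(comb(n, j) for j in range(1, min(kappa, n) + 1))

def total_calls_by_recursion(n, kappa):
    """Brute-force total call count obtained by unrolling the
    kappa-cycle recursion."""
    def rec(level, k):
        if level == n:
            return 1
        return 1 + rec(level + 1, k) + (rec(level + 1, k - 1) if k > 1 else 0)
    return rec(1, kappa)
```

For ten levels this gives 10 calls for the $V$-cycle ($\kappa=1$), 55 for the $F$-cycle ($\kappa=2$, a quadratic polynomial $n(n+1)/2$), and $2^{10}-1 = 1023$ for the $W$-cycle ($\kappa \geq 10$).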
\Cref{cor:CycleCallComplexity} leads us to the next formal observation on the linear complexity of the $\kappa$-cycle under suitable assumptions.
\begin{proposition} \label{prop:LinearComplexity}
Assume that the number of operations on the first (finest) level of the $\kappa$-cycle is linear in the number of variables $N$, that is, bounded from above by $CN$ as $N \rightarrow \infty$ for some constant $C$. Assume also that the number of operations per level is a monotonically decreasing function of the level $\ell$, and in fact the number of operations on any level $\ell$ is bounded from above by $c$ times the number of operations on the next-finer level $\ell-1$, for $\ell = 2, \ldots , n$, where $c < 1$ is a constant. Finally, assume that the number of operations per call to the coarsest level is a constant. Then, for any fixed cycle counter $\kappa$, the total number of operations per $\kappa$-cycle is $O(N)$ as $N \rightarrow \infty$.
\end{proposition}
Note that this is in contrast to the $W$-cycle where, for example, even for $c = 0.5$ the total number of operations may be as high as $CN \log_2 N$. In particular, this occurs when the $W$-cycle is applied in geometric multigrid employing semi-coarsening for 2D problems.
\begin{proof}[Sketch of Proof]
The total number of operations in a single cycle is bounded by
$$
totalOps \leq CN \left( \sum_{\ell = 1}^n c^{\ell-1}levelCalls(\kappa,\ell)\right) \, .
$$
\noindent
Assuming $n > \kappa$ (else the result is obvious), we obtain by \cref{prop:levelCalls}
\begin{eqnarray*}
totalOps & \leq & CN \left(\sum_{\ell = 1}^{\kappa} c^{\ell-1} \sum_{j = 0}^{\ell-1} \binom{\ell-1}{j} + \sum_{\ell = \kappa + 1}^{n} c^{\ell-1} \sum_{j = 0}^{\kappa-1} \binom{\ell-1}{j}\right) \\ & \leq & CN \left( \sum_{\ell = 1}^{\kappa} (2c)^{\ell-1} + \sum_{\ell = \kappa + 1}^{\infty} c^{\ell-1} Poly_{\kappa-1}(\ell) \right) \, ,
\end{eqnarray*}
\noindent where $Poly_{\kappa-1}$ is a polynomial of degree $\kappa - 1$. Observe that both terms in the final brackets are equal to some constants independent of $N$ or $n$, and therefore $totalOps$ is indeed linear in $N$. The final term can be bounded, e.g., by dominating it with an appropriate integral and performing multiple integration by parts. We omit the remaining details.
\end{proof}
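The contrast with the $W$-cycle can also be seen numerically. Under the assumptions of \cref{prop:LinearComplexity}, the total work per cycle is bounded by $CN$ times the factor $\sum_{\ell=1}^{n} c^{\ell-1} levelCalls(\kappa,\ell)$. The following Python sketch (our own naming) shows that for fixed $\kappa$ this factor converges as $n$ grows, even at $c=0.5$, whereas taking $\kappa=n$ (the $W$-cycle) makes it grow linearly in $n$:

```python
from math import comb

def level_calls(kappa, ell):
    """Calls to level ell (1 = finest here) per cycle: sum_{j=0}^{min(kappa-1,ell-1)} binom(ell-1, j)."""
    return sum(comb(ell - 1, j) for j in range(min(kappa - 1, ell - 1) + 1))

def work_factor(kappa, c, n):
    """totalOps is bounded by C * N * work_factor(kappa, c, n)."""
    return sum(c**(ell - 1) * level_calls(kappa, ell) for ell in range(1, n + 1))

# Fixed kappa, c = 0.5: the factor converges, so the cycle cost is O(N).
vals = [work_factor(3, 0.5, n) for n in (10, 20, 40, 80)]
assert vals[3] - vals[2] < 1e-6

# W-cycle (kappa = n) at c = 0.5: the factor equals n, i.e., O(N log N) total cost.
assert [work_factor(n, 0.5, n) for n in (10, 20, 40)] == [10, 20, 40]
```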
\subsection{Number of operations in a geometric multigrid \texorpdfstring{{$\kappa$}}{k}-cycle}
\cref{prop:LinearComplexity} is mainly of academic interest, as the undetermined constants might be very large in some cases. In this subsection we focus on the practical determination of the number of operations performed in a $\kappa$-cycle for the case of geometric multigrid, where the coarsening factor is (exactly or nearly) a level-independent constant. This result will provide us with a practical tool in the next section. Because we will be using induction over levels, starting from the coarsest, in this subsection we reverse our convention and number the levels 1 to $n$ from coarsest to finest.
Consider a $\kappa$-cycle with $n$ levels, where the number of variables on the coarsest level is a constant $N_1$ and the coarsening factor is a constant $c\in(0,1)$. Thus, the number of variables on the second-coarsest level is $N_2=c^{-1}N_1$, and so on until the finest level where the number of variables is $N_n=c^{1-n}N_1$. Assume further that the number of operations performed at any level but the coarsest when the cycle routine is called is $C$ times the number of variables on that level, that is, $CN_n$ operations on the finest level, $CN_{n-1}=cCN_n$ on the second finest level, and so on until the second coarsest level, whereas on the coarsest level the problem is solved directly at a cost of $\tilde{C}N_1$ operations for some given constant $\tilde{C}$ which may be different from $C$. Then, the following proposition provides us with an exact formula for the total number of operations in a single $\kappa$-cycle.
\begin{proposition}\label{prop:n_ops}
The total number of operations in a single $\kappa$-cycle with $n$ levels is given by
\begin{equation} \label{eq:n_ops}
N^n_{ops}(\kappa, c)=f(\kappa, c)CN_n+P_{\kappa,c}(n) \, ,
\end{equation}
where
\begin{equation} \label{eq:n_ops_f}
f(\kappa, c)=\left\{
\begin{split}
& \frac{1}{1-2c}\left[1-\left(\frac{c}{1-c}\right)^\kappa\right], \, & \text{if } c \neq 0.5, \\
& 2\kappa , \, & \text{if } c=0.5,
\end{split}
\right.
\end{equation}
with (corresponding to a $W$-cycle) $f(\infty, c)=\frac{1}{1-2c}$ if $c<0.5$, while for $c\geq0.5$, $f(\infty, c)$ is undefined and $N^n_{ops}(\infty, c)$ has superlinear complexity. In (\ref{eq:n_ops}),
\begin{eqnarray*}
P_{\kappa,c}(n) & = & \sum_{j=0}^{\min(\kappa-1,n-1)}[\tilde{C}-f(\kappa-j,c)C]N_1\binom{n-1}{j} \\
& \leq & \sum_{j=0}^{\min(\kappa-1,n-1)} \left[ \tilde{C}-f(1,c)C \right] N_1\binom{n-1}{j} \\ & = & \left(\tilde{C}-\frac{C}{1-c}\right)N_1\sum_{j=0}^{\min(\kappa-1,n-1)}\binom{n-1}{j}.
\end{eqnarray*}
\end{proposition}
Note that $P_{\kappa,c}(n)$ is not necessarily positive. Note also that for $0<c<0.5$ we have $\frac{C}{1-c}=f(1,c)C \leq f(\kappa,c)C \leq f(\infty,c)C=\frac{C}{1-2c}$, so $P_{\kappa,c}(n)$ is bounded from above and below as follows.
$$
\left(\tilde{C}-\frac{C}{1-2c}\right)N_1\sum_{j=0}^{\min(\kappa-1,n-1)}\binom{n-1}{j}\leq P_{\kappa,c}(n)\leq\left(\tilde{C}-\frac{C}{1-c}\right)N_1\sum_{j=0}^{\min(\kappa-1,n-1)}\binom{n-1}{j}.
$$
We prove this proposition with the aid of two lemmas. The first is a simplification of \cref{prop:n_ops}, where the cost of the coarsest level solution is modified in a way that leaves only the linear term in (\ref{eq:n_ops}). The second lemma yields the correction required for the actual cost of the coarsest level solution in \cref{prop:n_ops}.
\begin{lemma} \label{prop:simple_n_ops}
Consider the $\kappa$-cycle of \cref{prop:n_ops}, modified such that the cost of the coarsest level solution is given by $f(\kappa, c)CN_1$ instead of $\tilde{C}N_1$, where the argument $\kappa$ is the cycle counter appearing in the call to the routine on the coarsest level. Then, the claim of \cref{prop:n_ops} holds with the second term eliminated, i.e.,
\begin{equation} \label{eq:simple_n_ops}
N^n_{ops}(\kappa, c)=f(\kappa, c)CN_n \, ,
\end{equation}
with $f(\kappa, c)$ as defined in (\ref{eq:n_ops_f}).
\end{lemma}
\noindent Note that the assumption on the modified cost of the coarsest level solve implies that it is not constant but rather varies within the $\kappa$-cycle.
\begin{proof}
We employ induction over the cycle counter $\kappa$, with the induction step itself proved by an inner induction over the number of levels in the cycle. Note that the modified cost of the coarsest level solution ensures that (\ref{eq:simple_n_ops}) is automatically satisfied on the coarsest level for any $\kappa$. Therefore, when executing the inner induction over levels it only remains to prove the induction step.
\noindent
$\kappa=1$ \emph{inner induction step}: assume that the induction hypothesis holds for a 1-cycle (i.e., $\kappa$-cycle with $\kappa=1$), with $n\geq1$ levels, that is, $N^n_{ops}(1,c)=f(1,c)CN_n=\frac{1}{1-c}CN_n$. Then, by the definition of the $\kappa$-cycle, the induction hypothesis, and the given constant coarsening factor $N_n=cN_{n+1}$, the number of operations in a 1-cycle of $n+1$ levels satisfies
\begin{eqnarray*}
N^{n+1}_{ops}(1, c) & = & CN_{n+1}+N^{n}_{ops}(1,c)
= CN_{n+1}+f(1,c)CN_n \\
& = & CN_{n+1} \left( 1+c\frac{1}{1-c} \right)
= \frac{1}{1-c}CN_{n+1} = f(1,c)CN_{n+1} \, .
\end{eqnarray*}
We conclude that the claim holds for $\kappa=1$ and any number of levels.
\noindent
\emph{Outer induction step}: assume that the induction hypothesis holds for a $\kappa$-cycle with given $\kappa \geq 1$ and any number of levels, that is, $N^{n}_{ops}(\kappa,c)=f(\kappa,c)CN_{n}$. For the ($\kappa+1$)-cycle with $n=1$ the claim holds automatically as noted above, by the assumption on the modified cost of the coarsest level solution. Assume then that the claim holds for the ($\kappa+1$)-cycle and $n\geq1$ levels. Then, by the definition of the $\kappa$-cycle, the induction hypothesis, and the given constant coarsening factor $N_n=cN_{n+1}$, the number of operations in a ($\kappa+1$)-cycle of $n+1$ levels satisfies for $c \neq 0.5$,
\begin{eqnarray*}
N^{n+1}_{ops}(\kappa+1, c) & = & CN_{n+1} + N^{n}_{ops} (\kappa+1,c) + N^{n}_{ops}(\kappa,c) \\
& = & CN_{n+1} + f(\kappa+1,c) CN_{n} + f(\kappa,c)CN_{n} \\
& = & CN_{n+1} \left(1+\frac{c}{1-2c} \left[1 - \left(\frac{c}{1-c} \right)^{\kappa+1} + 1 - \left( \frac{c}{1-c} \right)^\kappa \right] \right) \\
& = & CN_{n+1} \left( 1 + \frac{2c}{1-2c} - \frac{c}{1-2c} \left[ \left( \frac{c}{1-c} \right)^{\kappa+1} + \left( \frac{c}{1-c} \right)^\kappa \right] \right) \\
& = & CN_{n+1} \frac{1}{1-2c} \left(1-c \left( \frac{c}{1-c} \right)^{\kappa+1} \left( 1+\frac{1-c}{c} \right) \right) \\
& = & CN_{n+1} \frac{1}{1-2c} \left(1 - \left( \frac{c}{1-c} \right)^{\kappa+1} \right) = f(\kappa+1,c)CN_{n+1}.
\end{eqnarray*}
\noindent
For $c=0.5$ we have
\begin{eqnarray*}
N^{n+1}_{ops}(\kappa+1, c) & = & CN_{n+1} + N^{n}_{ops} (\kappa+1,c) + N^{n}_{ops}(\kappa,c) \\
& = & CN_{n+1} + 2(\kappa+1)CN_{n} + 2\kappa CN_{n} \\
& = & CN_{n+1} \left[1 + 0.5(4\kappa+2) \right] \\ & = & 2(\kappa+1) CN_{n+1} = f(\kappa+1,c)CN_{n+1},
\end{eqnarray*}
completing the proof.
\end{proof}
To complete the proof of \cref{prop:n_ops}, we need to subtract off the modified coarsest level costs of Lemma \ref{prop:simple_n_ops}, and add the corresponding coarsest level costs of the proposition. For this we need to derive how many times the routine is called at the coarsest level with each value of the cycle counter. This is done in the following lemma.
\begin{lemma} \label{lem:CoarseLevelCorrection}
In a single complete $\kappa$-cycle with $n$ levels, the routine is called $\binom{n-1}{j}$ times at the coarsest level with cycle counter $\kappa-j$, for $j=0,\ldots,\min(\kappa-1,n-1)$.
\end{lemma}
\noindent
Note that the total number of calls at the coarsest level agrees with the result of Proposition \ref{prop:levelCalls}, but there we did not track the cycle counter at each call.
\begin{proof}[Sketch of Proof]
For $\kappa=1$ the result is obvious: the routine is called just once on the coarsest level, with cycle counter 1. For $\kappa>1$ we apply induction over the number of levels. For $n=2$ the routine is called on the coarsest level once with cycle counter $\kappa$ and once with $\kappa-1$, corresponding to $j=0,1$, respectively, so the claim is satisfied. Assume now that the claim is satisfied for a given $\kappa\geq2$ and $n\geq2$ levels. By the definition of the $\kappa$-cycle, at level $n+1$ we call the routine recursively at level $n$, once with cycle counter $\kappa$ and once with $\kappa-1$. The result follows from the induction hypothesis and the fact that for $j>0$ we have
$$
\binom{n}{j}=\binom{n-1}{j}+\binom{n-1}{j-1},
$$
whereas for $j=0$
$$
\binom{n}{j}=\binom{n-1}{j}=1,
$$
leading to the stated result.
\end{proof}
These two lemmas prove \cref{prop:n_ops}, with the final inequality in this proposition resulting from the fact that $f(\kappa,c)$ is positive and monotonically increasing for any positive $\kappa$ and $c \in (0,1)$.
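Formula (\ref{eq:n_ops}) can be validated numerically by simulating the cycle's cost recursively and comparing with the closed form. The following Python sketch is our own illustration under the assumptions of \cref{prop:n_ops}; the constants $C$, $\tilde{C}$, $N_1$ below are arbitrary test values:

```python
from math import comb

def f(kappa, c):
    """Per-variable work factor (eq:n_ops_f)."""
    if c == 0.5:
        return 2.0 * kappa
    r = c / (1.0 - c)
    return (1.0 - r**kappa) / (1.0 - 2.0 * c)

def n_ops_recursive(kappa, n, c, C, C_tilde, N1):
    """Simulate the cycle cost directly; levels numbered 1 (coarsest) to n (finest)."""
    if n == 1:
        return C_tilde * N1                         # direct coarsest-level solve
    ops = C * (N1 * c**(1 - n))                     # work on the current level
    ops += n_ops_recursive(kappa, n - 1, c, C, C_tilde, N1)
    if kappa > 1:
        ops += n_ops_recursive(kappa - 1, n - 1, c, C, C_tilde, N1)
    return ops

def n_ops_formula(kappa, n, c, C, C_tilde, N1):
    """Closed form (eq:n_ops): f(kappa,c)*C*N_n + P_{kappa,c}(n)."""
    P = sum((C_tilde - f(kappa - j, c) * C) * N1 * comb(n - 1, j)
            for j in range(min(kappa - 1, n - 1) + 1))
    return f(kappa, c) * C * (N1 * c**(1 - n)) + P

for kappa in (1, 2, 3, 5):
    for n in (1, 2, 4, 8):
        a = n_ops_recursive(kappa, n, 0.25, 1.0, 3.0, 1.0)
        b = n_ops_formula(kappa, n, 0.25, 1.0, 3.0, 1.0)
        assert abs(a - b) <= 1e-8 * max(1.0, abs(a))
```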
We remark, finally, that in our tests in \cref{sec:timePrediction} $c=0.25$, and the coarsest level solution is very cheap, so $P_{\kappa,c}$ is negligible compared to $f(\kappa,c)CN_n$ except for very small problems. Therefore, the formula of Lemma \ref{prop:simple_n_ops} provides us with an accurate value for $N_{ops}$.
\section{Predicting \texorpdfstring{{$\kappa$}}{k}-cycle run-time on GPU processors} \label{sec:timePrediction}
In this section we introduce a very simple model for predicting the approximate run-time of a single $\kappa$-cycle on GPU processors. When using GPUs for the computation, the CPU launches GPU-specific functions for execution on the GPU. These are called GPU kernels. As discussed in \cite{DBLP:conf/ics/VernerSS11}, the throughput of GPU kernels is in general not linear with the input size, and the accurate prediction of a GPU kernel runtime is quite complicated. However, we show that in our particular case the run-time per cycle can easily be calculated in advance fairly accurately, and therefore this simple predictive tool may be useful in choosing the best cycle for a given problem on a given system.
The model bases the run-time prediction on two system-dependent factors, and thus also gives an indication of the relative efficiency of different systems with respect to $\kappa$-cycles. All computations in our tests are done on GPU processors. The comparison includes the standard $V$, $F$ and $W$-cycles as the special cases $\kappa=1,2$ and $\kappa=\infty$, respectively. In addition, the cases of $\kappa=3$ and $\kappa=4$, which do not correspond to any previously known multigrid cycle, are included.
In our code, each relaxation, restriction, prolongation, etc., is implemented as a GPU kernel, and the number of GPU kernel calls can be calculated in advance. Before calling the cycle routines, the CPU prepares the problem. The data are then copied once from the CPU memory to the GPU memory, and after that all the data reside in the GPU memory and all the computations are done by the GPU.
\subsection{The model}
Here, a simple model for the run-time of a single cycle is presented. The model predicts the approximate run-time of one cycle, given the number of unknowns on the finest level and the cycle counter $\kappa$. The model assumes that the run-time for one cycle is a linear combination of the number of GPU kernel launches and the number of operations done by the kernels (which may include memory accesses, as well as arithmetic calculations), i.e.,
\begin{equation} \label{eq:LinearCombination}
T=\alpha\cdot N_{gpuCalls}+\beta\cdot N_{ops}.
\end{equation}
\noindent
Here, $\alpha$ and $\beta$ depend on the system (specific CPU, GPU, OS, etc.), and on the particular problem and cycle properties such as the discrete operator, coarsening factor, relaxation type, number of relaxations, prolongation and restriction operations, etc. They do not depend on the number of levels or on $\kappa$, as these dependencies are included in $N_{gpuCalls}$ and in $N_{ops}$. Once the parameters of the cycle are fixed, $\alpha$ and $\beta$ will have the same approximate values for any number of levels $n$ and any cycle counter $\kappa$.
$N_{gpuCalls}$ is the number of GPU kernel launches per cycle, which we can calculate. In our implementation, there is one GPU kernel launch in each routine call at the coarsest level, and a fixed number of GPU kernel launches in each routine call on the finer levels, which depends on the number of relaxations. Using this knowledge together with the $totalCalls(\kappa, n)$ value from \cref{sec:TheKappaCycle}, we can predict the total number of GPU kernel launches. $N_{ops}$ is a measure for the total amount of GPU operations, such as memory reads/writes and arithmetic operations. We show how to compute this value in our particular example in (\ref{eq:n_ops_2D}). A summary of all the formulas is given in \cref{table:notations}.
This model is based on a simplified assumption that each GPU kernel has an overhead which takes a constant time per launch, and that the time required for the kernel itself is in direct proportion to the amount of operations it does, dominated by memory reads and writes in our case, except for very small problems. Because the cycle routine on each level but the coarsest has the same GPU kernel launches (same number and same GPU functions), the total memory reads and writes are in direct proportion to the number of variables in that level, and therefore the time spent by GPU kernels is also proportional to this number. CPU time is not considered in the model, because the CPU execution is parallel to GPU execution and CPU time is much smaller. Note that in practice these operations are affected by caching, operation latencies and other considerations.
Below, we present results of numerical tests performed in order to assess this model, and we show that our model is able to usefully predict the actual run-time per cycle. We therefore conclude that our simplifications are reasonable. Note that our tests show that the bottleneck of our kernels is the memory transfers and not the computations themselves, but the amount of memory transferred and the amount of computations are both in direct proportion to the number of unknowns, so the same model would be valid either way.
In the tests, the problem of 2D rotated anisotropic diffusion with constant coefficients is solved. The problem is discretized on a square grid employing a nine-point finite-difference stencil. More details are provided in the next section. Only a stand-alone $\kappa$-cycle is considered in this section. We use standard coarsening, so the coarsening factor is approximately $c=0.25$. Two pre-relaxations and two post-relaxations are employed in this section, and we use a maximum of 13 levels, which implies $(2^{13}-1)^2 = 8191^2 = 67,092,481$ unknowns on the finest level. We use double precision floats, which are 8 bytes each. For each level there are 3 pre-allocated arrays: current estimation, right-hand side, and temporary values.
\Cref{fig:cycle_times}(a) shows the mean measured times for one cycle with $\kappa=1,2,3,4$ and $\kappa=\infty$, for 4-13 levels. The vertical lines in the graph show where the operation times become equal to the overhead times, referred to as ``turning points'' below.
\subsection{Estimating the model parameters}
In order to approximate the constants $\alpha$ and $\beta$ in our system \eqref{eq:LinearCombination}, the simple least-squares problem \eqref{eq:MinProblem} is solved for $\alpha$ and $\beta$.
\begin{equation} \label{eq:MinProblem}
(\alpha,\beta) = \operatorname*{argmin}_{\tilde{\alpha},\tilde{\beta}} \sum_{\kappa=1,2,3,4,\infty}\sum_{n=4}^{13}(\tilde{\alpha}\cdot N_{gpuCalls}(\kappa,n)+\tilde{\beta}\cdot N_{ops}(\kappa,n)-T(\kappa,n))^2,
\end{equation}
\noindent
where $T(\kappa,n)$ is the measured time for one $\kappa$-cycle (averaged over 200 runs), $N_{gpuCalls}$ is the number of GPU kernel launches in one cycle, and $N_{ops}(\kappa,n)$ is proportional to the total number of memory accesses. (The exact number depends on the number and type of the relaxations and other factors, which we assume to be constant and represented in $\beta$). They, in turn, are proportional to the number of unknowns. $N_{gpuCalls}$ and $N_{ops}$ are computed according to \cref{table:notations}. In our main test system, the values of $\alpha$ and $\beta$ were found to be $\alpha=2.48\cdot 10^{-3}$ ms, $\beta=1.18\cdot 10^{-6}$ ms. These two model parameters suffice to predict fairly accurate run-times for any $\kappa$ and problem size, as seen in \cref{fig:cycle_times}(b-f).
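The fit itself is elementary: with only two parameters, \eqref{eq:MinProblem} reduces to a $2\times 2$ system of normal equations. The following Python sketch (our own code, with the $\kappa=\infty$ samples omitted for simplicity) demonstrates the mechanics; the measured times $T(\kappa,n)$ are replaced here by synthetic data generated from the fitted values above, so the known coefficients are recovered exactly:

```python
from math import comb

NU = 4  # two pre- plus two post-relaxations, as in the experiments

def total_calls(kappa, n):
    return sum(comb(n, j) for j in range(1, min(kappa, n) + 1))

def n_gpu_calls(kappa, n):
    # (5 + nu) launches per call on fine levels, one launch per coarsest-level call
    fine = total_calls(kappa, n - 1)
    coarsest = total_calls(kappa, n) - fine
    return (5 + NU) * fine + coarsest

def n_ops(kappa, n):
    return (2**n - 1)**2 * 2.0 * (1.0 - 3.0**(-kappa))

def fit_alpha_beta(samples):
    """Solve the 2x2 normal equations of the least-squares problem (eq:MinProblem)."""
    sxx = sxy = syy = sxt = syt = 0.0
    for x, y, t in samples:
        sxx += x * x; sxy += x * y; syy += y * y
        sxt += x * t; syt += y * t
    det = sxx * syy - sxy * sxy
    alpha = (sxt * syy - syt * sxy) / det
    beta = (sxx * syt - sxy * sxt) / det
    return alpha, beta

# Recover known coefficients from synthetic, noise-free "measurements":
true_alpha, true_beta = 2.48e-3, 1.18e-6
data = [(n_gpu_calls(k, n), n_ops(k, n),
         true_alpha * n_gpu_calls(k, n) + true_beta * n_ops(k, n))
        for k in (1, 2, 3, 4) for n in range(4, 14)]
alpha, beta = fit_alpha_beta(data)
assert abs(alpha - true_alpha) < 1e-6 and abs(beta - true_beta) < 1e-9
```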
Actual mean run-times for a single cycle with $\kappa=1,2,3,4,\infty$, are shown in black in \cref{fig:cycle_times}(b-f). The times predicted by \eqref{eq:LinearCombination} are plotted in blue. All graphs use the values of $\alpha$ and $\beta$ as stated above. The red curves show the call cost component of \eqref{eq:LinearCombination}, while the green curve shows the computation cost component, hence, the blue curve is the sum of the red and the green curves. As the number of levels $n$ increases, the cost of operations grows faster than the cost of calls, so for larger $\kappa$'s the operation costs catch up with the overhead at larger problems (at the so-called turning point, where the green and red curves cross in \cref{fig:cycle_times}). This is due to the fact that larger $\kappa$ implies more calls at coarse levels.
\begin{figure}[hpbt!]
\centering
\subfigure[Measured times for $\kappa$-cycles on the GPU.]{
\includegraphics[scale=0.8]{section3_4relax/measured_times.pdf}}
\subfigure[$V$-cycle times and predictions.] {\includegraphics[scale=0.8]{section3_4relax/k1.pdf}}
\subfigure[$F$-cycle times and predictions.] {\includegraphics[scale=0.8]{section3_4relax/k2.pdf}}
\subfigure[$\kappa$-cycle times and predictions for $\kappa=3$.] {\includegraphics[scale=0.8]{section3_4relax/k3.pdf}}
\subfigure[$\kappa$-cycle times and predictions for $\kappa=4$.] {\includegraphics[scale=0.8]{section3_4relax/k4.pdf}}
\subfigure[$W$-cycle times and predictions.] {\includegraphics[scale=0.8]{section3_4relax/kn.pdf}}
\caption{Measured and predicted run-times per cycle for $\kappa=1,2,3,4,\infty$. The vertical lines in panel (a) show the turning points, where red and green curves cross in panels (b)-(f).}
\label{fig:cycle_times}
\end{figure}
\subsection{Values for \texorpdfstring{{$N_{gpuCalls}$ and $N_{ops}$}}{NgpuCalls and Nops} in our implementation}
\label{sec:n_comp}
There are $5+\nu$ GPU kernel calls per cycle routine call, where $\nu=\nu_1+\nu_2$ is the total number of relaxations, except at the coarsest level, where there is one GPU kernel launch, as seen in Algorithm \ref{alg:launches}. The number of GPU kernel launches per cycle is calculated as shown in \cref{table:notations}.
\begin{algorithm2e} \label{alg:launches}
\DontPrintSemicolon
\label{alg:kappa_cycle_kernels}
\caption{Kernel launches in the $\kappa$-cycle}
{$v^\ell \leftarrow \kappa\mbox{-}cycle(v^\ell,f^\ell,A^\ell,\ell,n,\kappa)$}\;
\Indp
\nl
If $\ell == n$ (coarsest level), solve $A^\ell v^\ell = f^\ell$ and return. - \textbf{1 kernel launch on coarsest level}\;
\nl
{\em Relax} $\nu_1$ times on $A^\ell u^\ell = f^\ell$ with initial guess $v^\ell$. - \textbf{$\nu_1$ kernel launches}\;
\nl
$f^{\ell+1} \leftarrow Restrict(f^\ell - A^\ell v^\ell)$. - \textbf{$2$ kernel launches}: 1 residual + 1 restrict\;
\nl
$v^{\ell+1} \leftarrow 0$. - \textbf{$1$ kernel launch}: set the vector to zero\;
\nl
$v^{\ell+1} \leftarrow \kappa\mbox{-}cycle(v^{\ell+1},f^{\ell+1},A^{\ell+1},\ell+1,n,\kappa)$.\;
\nl
If $\kappa > 1$\;
$v^{\ell+1} \leftarrow \kappa\mbox{-}cycle(v^{\ell+1},f^{\ell+1},A^{\ell+1},\ell+1,n,\kappa - 1)$.\;
\nl
$v^\ell \leftarrow v^\ell + Prolong(v^{\ell+1})$. - \textbf{$2$ kernel launches}: 1 prolong + 1 vector addition\;
\nl
{\em Relax} $\nu_2$ times on $A^\ell u^\ell = f^\ell$ with initial guess $v^\ell$. - \textbf{$\nu_2$ kernel launches}\;
\end{algorithm2e}
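Counting the launches in Algorithm \ref{alg:launches} yields the $N_{gpuCalls}$ formula of \cref{table:notations}: every routine call on a non-coarsest level issues $5+\nu$ launches, and every coarsest-level call issues one. A short Python sketch of this bookkeeping (our own naming):

```python
from math import comb

def total_calls(kappa, n):
    """Routine calls per kappa-cycle with n levels: sum_{j=1}^{min(kappa,n)} binom(n, j)."""
    return sum(comb(n, j) for j in range(1, min(kappa, n) + 1))

def n_gpu_calls(kappa, n, nu):
    """Kernel launches per cycle; nu = nu1 + nu2 relaxations per routine call."""
    fine_calls = total_calls(kappa, n - 1)               # calls on the n-1 finer levels
    coarsest_calls = total_calls(kappa, n) - fine_calls  # calls on the coarsest level
    return (5 + nu) * fine_calls + coarsest_calls

# V-cycle with 13 levels, 4 relaxations: 9 launches on each of 12 fine levels + 1 coarse solve.
assert n_gpu_calls(1, 13, 4) == 9 * 12 + 1
# W-cycle with 13 levels: 2^12 - 1 fine-level calls and 2^12 coarsest-level calls.
assert n_gpu_calls(13, 13, 4) == 9 * (2**12 - 1) + 2**12
```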
Regarding $N_{ops}$, the last level in our tests has only one unknown ($N_1=1$). Therefore, $P_{\kappa,c}(n)$ is negligible compared to $f(\kappa, c)CN_n$ in (\ref{eq:n_ops}), and we can use (\ref{eq:simple_n_ops}). Also, when using standard coarsening for 2D problems, the coarsening factor is very close to $c=0.25$. Using $c=0.25$ in (\ref{eq:simple_n_ops}) yields:
\begin{eqnarray}
\label{eq:n_ops_2D}
N^n_{ops}(\kappa, 0.25) & = & f(\kappa, 0.25)CN_n = \\ & & \frac{1}{1-2\cdot0.25}\left[1-\left(\frac{0.25}{1-0.25}\right)^\kappa\right]CN_n = 2\left(1-\frac{1}{3^\kappa}\right)CN_n \, . \nonumber
\end{eqnarray}
\subsection{Calculating the turning point}
The problem size at the turning point, where the overhead time and computation time are equal, is denoted $N_{tp}$. This notion is relevant because the number of operations grows linearly with the number of unknowns, hence, much faster than the growth in the number of calls (compare the slopes of the red and green curves in \cref{fig:cycle_times}). The turning point, where the green and red curves cross each other, provides an indication of the approximate problem size for which the $\kappa$-cycle enjoys its maximal relative advantage in terms of run-time, compared to the cycle obtained when $\kappa$ is increased by one.
Evidently, the turning point does not depend on $\alpha$ and $\beta$ separately, but rather only on their ratio:
\begin{equation} \label{eq:equalCosts}
\alpha\cdot N_{gpuCalls}=\beta\cdot N_{ops} \;\Rightarrow\; N_{ops}=\frac{\alpha}{\beta}\cdot N_{gpuCalls} \, .
\end{equation}
\noindent
For the values $\alpha=2.48\cdot 10^{-3}$, $\beta=1.18\cdot 10^{-6}$ of our system, $N_{tp}$ and corresponding (unrounded) finest levels for various $\kappa$'s are shown in \cref{table:equal_times}. Details on how we calculate these numbers are shown in the appendix.
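A sketch of this calculation is given below (our own code, not the appendix procedure): it bisects \eqref{eq:equalCosts} over a continuous level count $n$, extending the binomial coefficients to real arguments via falling factorials. It reproduces the tabulated values only approximately, since the precise counting details are those given in the appendix:

```python
from math import factorial

ALPHA, BETA = 2.48e-3, 1.18e-6   # fitted model parameters (ms)
NU = 4                           # relaxations per routine call

def binom_real(n, j):
    """Binomial coefficient extended to real n via the falling factorial."""
    prod = 1.0
    for i in range(j):
        prod *= n - i
    return prod / factorial(j)

def total_calls(kappa, n):       # assumes n > kappa
    return sum(binom_real(n, j) for j in range(1, kappa + 1))

def overhead_cost(kappa, n):     # alpha * N_gpuCalls
    fine = total_calls(kappa, n - 1)
    return ALPHA * ((5 + NU) * fine + total_calls(kappa, n) - fine)

def ops_cost(kappa, n):          # beta * N_ops, with N = (2^n - 1)^2
    return BETA * (2.0**n - 1.0)**2 * 2.0 * (1.0 - 3.0**(-kappa))

def turning_level(kappa, lo=5.0, hi=14.0):
    """Bisect for the (unrounded) level count n_tp where both costs are equal."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ops_cost(kappa, mid) < overhead_cost(kappa, mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n1, n2, n3 = (turning_level(k) for k in (1, 2, 3))
assert 7.5 < n1 < n2 < n3 < 11.5   # compare the tabulated 8.2, 9.1, 10.0
```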
\begin{table}
\centering
\caption{The approximate turning point and corresponding unrounded finest level}
\begin{tabular}{ |c|c|c| }
\hline
$\kappa$ & $n_{tp}$ & $N_{tp}$ \\
\hline
1 ($V$-cycle) & 8.2 & 80,000 \\
\hline
2 ($F$-cycle) & 9.1 & 320,000 \\
\hline
3 & 10.0 & 1,000,000 \\
\hline
4 & 10.7 & 2,700,000 \\
\hline
$\infty$ ($W$-cycle) & 12.4 & 28,000,000 \\
\hline
\end{tabular}
\label{table:equal_times}
\end{table}
We find that when there are up to $10^7$ unknowns (12 levels or less), most of the run-time of a $W$-cycle is spent on overhead. In a $V$-cycle, in contrast, about $10^5$ unknowns (9 levels) are sufficient for the calculation time to exceed the overhead time.
\subsection{Relative prediction errors} \cref{table:relative_errors} shows the relative errors of the model when compared to actual mean run-times. It can be seen that the relative prediction errors are just a few percent, and the prediction is especially accurate for large run-times. We explain the somewhat lower accuracy for shorter runs by noticing that for seven levels or less the required data fit entirely into the GPU cache: the GPU in our main system has an L2 cache of 1.5MB, and the amount of memory required is about $3\cdot 8\cdot 4^{N_{levels}}$ bytes (about $4^{N_{levels}}$ doubles per array, 3 arrays per level, 8 bytes per double). This causes the execution of the kernels to become very fast, and the required time is dominated by the kernel overhead. The proposed model does not account for this phenomenon, but for large problems this has only a small effect on the overall time.
\begin{table}
\centering
\caption{Relative run-time prediction errors}
\label{table:relative_errors}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
$\kappa$ $\setminus$ n & 8 & 9 & 10 & 11 & 12 & 13 \\
\hline
1 & -4.28\% & -1.68\% & -3.64\% & -1.96\% & 1.55\% & 1.42\% \\
\hline
2 & -0.80\% & 0.05\% & -2.19\% & -5.14\% & -0.50\% & 0.90\% \\
\hline
3 & -2.13\% & -1.35\% & -2.33\% & -5.94\% & -2.18\% & -0.12\% \\
\hline
4 & -0.49\% & 0.08\% & -0.72\% & -5.28\% & -4.38\% & -1.02\% \\
\hline
$\infty$ & 3.61\% & 3.21\% & 1.07\% & -0.62\% & 0.12\% & 0.30\% \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Notation and formulas}
\begin{tabular}{ |c|c|c|}
\hline
Symbol & Meaning & Formula \\
\hline
$n$ / $N_{levels}$ & number of levels & \\
\hline
$N$ / $N_{unknowns}$ & number of unknowns on the finest level & $(2^{N_{levels}}-1)^2$\\
\hline
$\kappa$ & {\em cycle counter} & \\
\hline
$levelCalls(\kappa, \ell)$ & number of routine calls on level $\ell$ per cycle & $\sum_{j=0}^{\min(\kappa-1,\ell-1)}\binom{\ell-1}{j}$ \\
\hline
$totalCalls(\kappa, n)$ & number of routine calls per cycle & $\sum_{j=1}^{\min(\kappa,n)} \binom{n}{j}$ \\
\hline
$C(\kappa)$ & see formula & $2\left(1-\frac{1}{3^\kappa}\right)$ \\
\hline
$\nu$ & number of relaxations per routine call & $\nu_1 + \nu_2$\\
\hline
$N_{gpuCalls}$ & number of GPU kernel & $(5+\nu)\cdot totalCalls(\kappa,n-1)+$ \\
& launches per cycle & $totalCalls(\kappa, n)-totalCalls(\kappa, n-1)$ \\
\hline
$N_{ops}$ & measure for the total amount of operations & $N_{unknowns}\cdot2\left(1-\frac{1}{3^\kappa}\right)$ \\
& done by the GPU in one cycle (see formula) & \\
\hline
\end{tabular}
\label{table:notations}
\end{table}
\section{Numerical results} \label{sec:NumericalTests}
In this section we report some numerical tests and results in greater detail.
The run-time per cycle is monotonically increasing with $\kappa$ (until $\kappa = n$, as from there on the $\kappa$-cycle coincides with the $W$-cycle). However, a larger $\kappa$ typically results in faster convergence per cycle. Thus, there is a trade-off in selecting $\kappa$, and the optimal choice is problem-dependent and system-dependent. To demonstrate the potential of the $\kappa$-cycle, we test our algorithms on a 2D rotated anisotropic diffusion problem, on a square domain $\Omega$ and with Dirichlet boundary conditions:
\begin{align*}
-\epsilon u_{xx} - u_{yy} &= f(x,y) \, , \, (x, y) \in \Omega \, , \\
u &= g(x, y) \, , \, (x, y) \in \partial \Omega \, ,
\end{align*}
\noindent discretized on a square equi-spaced grid, where $0< \epsilon \leq 1$ is a parameter, and the coordinates $(x,y)$ form an angle $\phi$ with the grid-lines.
In the tests we use a right-hand side and boundary conditions of zero, hence the exact solution is zero, and the starting guess is random (but it is the same in all tests). This allows us to check the asymptotic behavior of the method without numerical round-off errors (without loss of generality, because the problem is linear and the method is stationary).
This problem is challenging for standard geometric multigrid with simple coarsening when $\epsilon$ is small and the coordinates are not aligned with the grid, e.g., for $\phi = \pi/4$, which is our choice in most of the tests (see, e.g., \cite{Yav98}). We use the following nine-point discretization stencil:
\medskip
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
$\frac{1}{2}(1-\epsilon)CS$ & $-(\epsilon C^2+ S^2)$ & $-\frac{1}{2}(1-\epsilon)CS$ \\
\hline
$-(C^2+\epsilon S^2)$ & $2(1+\epsilon)$ & $-(C^2+\epsilon S^2)$ \\
\hline
$-\frac{1}{2}(1-\epsilon)CS$ & $-(\epsilon C^2+ S^2)$ & $\frac{1}{2}(1-\epsilon)CS$ \\
\hline
\end{tabular}
\end{center}
\medskip
\noindent
where $C=\cos(\phi)$, $S=\sin(\phi)$.
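For reference, the stencil can be assembled and checked programmatically. The following Python sketch (our own illustration) builds the table above and verifies two basic properties: the entries sum to zero (constants lie in the null space of the operator), and the stencil is point-symmetric about its center:

```python
from math import cos, sin, pi, isclose

def rotated_anisotropic_stencil(eps, phi):
    """Nine-point stencil from the table above for the rotated anisotropic operator."""
    C, S = cos(phi), sin(phi)
    t = 0.5 * (1.0 - eps) * C * S
    a = -(eps * C * C + S * S)      # top/bottom-row middle entries
    b = -(C * C + eps * S * S)      # middle-row off-center entries
    return [[ t, a, -t],
            [ b, 2.0 * (1.0 + eps), b],
            [-t, a,  t]]

st = rotated_anisotropic_stencil(1e-3, pi / 4)
assert isclose(sum(sum(row) for row in st), 0.0, abs_tol=1e-12)
assert all(isclose(st[i][j], st[2 - i][2 - j], abs_tol=1e-15)
           for i in range(3) for j in range(3))

# For eps = 1 (no anisotropy) the corner terms vanish and the standard
# 5-point Laplacian stencil is recovered.
st0 = rotated_anisotropic_stencil(1.0, 0.0)
assert st0[1][1] == 4.0 and st0[0][1] == -1.0 and st0[1][0] == -1.0
assert st0[0][0] == 0.0
```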
Six types of solvers are tested.
The first uses a stand-alone $\kappa$-cycle (which includes the $V$-cycle, $F$-cycle and $W$-cycle as special cases) with standard coarsening and simple relaxation---Jacobi with a damping factor between 0.8 and 0.87 (following \cite{YO98}). A second solver uses Conjugate Gradients, preconditioned by a single $\kappa$-cycle per iteration, where the $\kappa$-cycle is the same as in the previous solver.
The third solver uses an alternating ``zebra'' relaxation---see details below, and the standard full coarsening, together with Galerkin coarse-grid operators.
The fourth solver uses Conjugate Gradients, preconditioned by a single $\kappa$-cycle per iteration, where the $\kappa$-cycle is the same as in the third solver.
The fifth solver tested uses a line Gauss-Seidel relaxation (in a ``zebra'' order with no relaxation parameter) and semi-coarsening, with Galerkin coarse-grid operators---see details below. The sixth solver uses Conjugate Gradients, preconditioned by a single $\kappa$-cycle per iteration, where the $\kappa$-cycle is the same as in the fifth solver.
We apply two pre-relaxations and two post-relaxations, because this gives better times than other configurations with equal number of pre-relaxations and post-relaxations (for the third and fourth solvers, two relaxations mean one relaxation in each direction). In this section, we use a maximum of 14 levels, which results in $(2^{14}-1)^2 = 16383^2 = 268,402,689$ unknowns on the finest level. The first two computations are performed on a system with an NVIDIA GTX 1060 GPU, having 1280 cores and a peak theoretical memory bandwidth of 192 GB/s. This would allow the GTX 1060 a throughput of 1280 arithmetic instructions per GPU clock cycle, if the instructions were for single precision floats. However, the throughput for double precision\footnote{Double precision is required for obtaining accurate solutions in large problems of this type.} floats is only 40 instructions per clock cycle on the GTX 1060 (see the CUDA site, especially \cite{cuda_c_programming_guide} and \cite{pascal_tuning_guide}). Indeed, the throughput for doubles in most GPUs currently in use is much smaller than their float throughput. However, the impact of this in our implementation seems to be small, because in our tests the memory bandwidth is the bottleneck (so long as the required data size is above the 1.5MB L2 cache of the GPU). Still, double precision floats, as their name implies, require double the memory bandwidth compared to single precision floats, and the time for the arithmetic instructions may still impact the overall time. We remark that the cycle time may be improved by using a hybrid program, as recommended by \cite{HPGMG_GPU}, whereby a few coarse levels of the cycle are run on the CPU, and finer levels are run on the GPU. Here, however, we use only the GPU for the computations, in order to obtain a clean demonstration of our results. This is also justified by the fact that, if current trends continue, the added value of using the CPU is likely to greatly diminish over time.
Indeed, the advantage of a hybrid program already seems to be small in our implementation, probably because the time needed for copying the data to CPU memory is already close to the time needed for running the coarser levels on the GPU.
\cref{fig:solve_times} shows the total run-times for reducing the $L_2$ error norms by a factor of $10^8$ for $\epsilon \in \{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\}$, using 9 levels (``small'' problem with about 260K variables on the finest grid) and 13 levels (``large'' problem with about 67M finest-grid variables). Here, the $\kappa$-cycle is used as a stand-alone solver. For $\epsilon=1$ this is the simple Poisson problem, and the $V$-cycle yields the minimal run-time. As $\epsilon$ is decreased, the problem becomes more challenging, especially the large problem, and the $V$-cycle loses its efficiency because of the fast deterioration in its convergence factor. Still, the $W$-cycle, which has the best convergence factor, is expensive. For the small problem, the $F$-cycle becomes the most efficient for $\epsilon \leq 0.01$. For the large problem, however, larger $\kappa$ values are optimal once $\epsilon$ drops below 0.01, and $\kappa = 4$ becomes the optimal parameter for still smaller $\epsilon$ values.
\begin{figure}[hpbt!]
\centering
\includegraphics[scale=0.8]{section4/levels9.pdf}
\includegraphics[scale=0.8]{section4/levels13.pdf}
\caption{Stand-alone solve run-times as a function of $\epsilon$ for the 9-level (``small'') and 13-level (``large'') problems}
\label{fig:solve_times}
\end{figure}
\begin{table}
\centering
\caption{Stand-alone multigrid with $\kappa(2,2)$, using damped Jacobi relaxation and standard coarsening, 12 levels, $\phi=45\degree$. Relative total time is the total time divided by that of the $W$-cycle}
\label{table:Stand_alone}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$\kappa$ & Total time (ms) & Relative total time & Average time per cycle (ms) & \#cycles \\
\hline
1 ($V$-cycle) & 182,216 & 4.29 & 26.4 & 6909 \\
\hline
2 ($F$-cycle) & 52,080 & 1.23 & 37.1 & 1403 \\
\hline
3 & 28,863 & 0.679 & 44.3 & 651 \\
\hline
4 & \textbf{26,356} & \textbf{0.620} & 53.2 & 495 \\
\hline
$\infty$ ($W$-cycle) & 42,483 & 1.00 & 90.4 & 470 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Conjugate Gradients preconditioned by a $\kappa$-cycle (MGCG), using damped Jacobi relaxation and standard coarsening, 12 levels, $\phi=45\degree$. Relative total time is the total time divided by that of the $F$-cycle}
\label{table:mgcg}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$\kappa$ & Total time (ms) & Relative total time & Average time per cycle (ms) & \#iterations \\
\hline
1 ($V$-cycle) & 9014 & 1.71 & 47.7 & 189 \\
\hline
2 ($F$-cycle) & 5262 & 1.00 & 59.1 & 89 \\
\hline
3 & \textbf{4252} & \textbf{0.808} & 67.5 & 63 \\
\hline
4 & 4294 & 0.816 & 76.7 & 56 \\
\hline
$\infty$ ($W$-cycle) & 6142 & 1.17 & 113.7 & 54 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption[Stand-alone $\kappa$-cycle - zebra relaxations]{Stand-alone $\kappa$-cycle run-times (ms) using alternating zebra relaxation and Galerkin coarse-grid operators, 14 levels; the numbers in parentheses indicate the number of cycles}
\label{table:xyzebra_14_levels_e8}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
$\kappa$ $\setminus$ angle & 10 & 30 & 45 & 60 & 80 \\
\hline
1 ($V$-cycle) & 751452 (2909) & timeout & timeout & timeout & 835140 (3234) \\
\hline
2 ($F$-cycle) & 197204 (589) & 451086 (1347) & 577985 (1725) & 553166 (1651) & 219901 (656) \\
\hline
3 & 103595 (274) & 212531 (562) & 266935 (706) & 253346 (670) & 114530 (303) \\
\hline
4 &\textbf{88675} (206) & \textbf{163207} (379) & \textbf{199501} (463) & \textbf{191667} (445) & \textbf{97792} (227) \\
\hline
$\infty$ ($W$-cycle) & 159587 (193) & 268824 (326) & 316618 (385) & 313178 (381) & 174642 (212) \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption[Conjugate Gradients preconditioned by a $\kappa$-cycle - zebra relaxations]{Conjugate Gradients preconditioned by a $\kappa$-cycle (MGCG) run-times (ms) using alternating zebra relaxation and Galerkin coarse-grid operators, 14 levels; the numbers in parentheses indicate the number of iterations}
\label{table:xyzebra_14_levels_cg_e8}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
$\kappa$ $\setminus$ angle & 10 & 30 & 45 & 60 & 80 \\
\hline
1 ($V$-cycle) & 39348 (124) & 61707 (195) & 72305 (229) & 68893 (218) & 41898 (132) \\
\hline
2 ($F$-cycle) & 23199 (58) & 33852 (85) & 38888 (98) & 37598 (95) & 24343 (61) \\
\hline
3 & 18384 (41) & 25352 (57) & 28401 (64) & 27459 (62) & 19209 (43) \\
\hline
4 & \textbf{18117} (36) & \textbf{23970} (48) & \textbf{26439} (53) & \textbf{25869} (52) & \textbf{19057} (38) \\
\hline
$\infty$ ($W$-cycle) & 31548 (35) & 40367 (45) & 43921 (49) & 43763 (49) & 33166 (37) \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption[Stand-alone multigrid with $\kappa(2,2)$ - zebra relaxation, semi-coarsening]{Stand-alone multigrid with $\kappa(2,2)$ run-times (ms), using zebra relaxation, semi-coarsening and Galerkin coarse-grid operators, 14 levels; the numbers in parentheses indicate the number of cycles}
\label{table:xzebra_14_levels_1e8}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
$\kappa$ $\setminus$ angle & 10 & 30 & 45 & 60 & 80 \\
\hline
1 ($V$-cycle) & 618154 (1705) & timeout & timeout & timeout & 620274 (1711) \\
\hline
2 ($F$-cycle) & 297372 (371) & 775767 (966) & 875116 (1093) & 677832 (846) & 193031 (241) \\
\hline
3 & \textbf{281482} (187) & \textbf{594359} (395) & \textbf{616255} (410) & \textbf{450079} (299) & 114353 (76) \\
\hline
4 & 410784 (148) & 730504 (263) & 704672 (254) & 482585 (174) & \textbf{110864} (40) \\
\hline
$\infty$ ($W$-cycle) & timeout & timeout & timeout & timeout & 363991 (26) \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption[Conjugate Gradients preconditioned by a $\kappa$-cycle (MGCG) - zebra relaxation, semi-coarsening]{Conjugate Gradients preconditioned by a $\kappa$-cycle (MGCG) run-times (ms) with $\kappa(2,2)$, using zebra relaxation, semi-coarsening and Galerkin coarse-grid operators, 14 levels; the numbers in parentheses indicate the number of iterations}
\label{table:xzebra_14_levels_cg_1e8}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
$\kappa$ $\setminus$ angle & 10 & 30 & 45 & 60 & 80 \\
\hline
1 ($V$-cycle) & 39548 (93) & 70761 (164) & 79566 (189) & 69379 (165) & 39811 (94) \\
\hline
2 ($F$-cycle) & \textbf{39377} (45) & \textbf{61557} (71) & \textbf{68511} (79) & \textbf{58007} (67) & \textbf{31492} (36) \\
\hline
3 & 52292 (33) & 77894 (48) & 79088 (50) & 64627 (41) & 31889 (20) \\
\hline
4 & 85674 (30) & 115353 (40) & 114044 (40) & 94055 (33) & 43082 (15) \\
\hline
$\infty$ ($W$-cycle) & 411286 (29) & 538387 (38) & 510449 (36) & 408226 (29) & 170028 (12) \\
\hline
\end{tabular}
\end{table}
\Cref{table:Stand_alone} shows the total run-times for reducing the $L_2$ error norms by a factor of $10^8$ for $\epsilon=0.0001$ using 12 levels (about 17M finest-grid variables), where the $\kappa$-cycle is used as a stand-alone solver. These parameters result in a challenging problem, and we clearly see the tradeoff here, with the required number of cycles decreasing as $\kappa$ increases, but the average time per cycle increasing significantly. In terms of total run-time, the best tradeoff is provided for $\kappa=4$, with $\kappa=3$ close behind.
\Cref{table:mgcg} shows run-times for the case where the $\kappa$-cycle is used as a preconditioner for Conjugate Gradients (called MGCG in \cite{Tatebe93}). As expected, this yields a much more efficient solver than the stand-alone $\kappa$-cycle. We see once again that setting $\kappa$ to 3 or 4 yields superior run-times, this time with $\kappa = 3$ in the lead.
For the angle $\phi=45\degree$, the point relaxation used in the two tests described above is adequate, because the bottleneck is the coarse-grid correction. However, when $\epsilon$ is small and $\phi$ is close to 0 or 90 degrees, and thus the direction of strong coupling is nearly aligned with the grid, point relaxation such as Jacobi is known to be ineffective. In order to obtain a sufficiently good smoothing factor in these cases, there are two common methods: using line-relaxation along the coordinate of the strongly-coupled variables, or coarsening only in the coordinate of the strongly-coupled variables (semi-coarsening). Because in real problems we usually do not know in advance the direction of strong coupling (and it may change over the domain), a technique which takes care of both directions is used for robustness. One common method to obtain such a robust solver is using alternating line-relaxation: first along one coordinate and then along the other coordinate. Another well-known method is using line relaxation along one coordinate and coarsening only along the other coordinate.
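As a concrete illustration of the first remedy, the sketch below (a minimal serial stand-in, not the GPU code timed in this section) performs one zebra half-sweep of $x$-line relaxation for a 5-point discretization of $-\epsilon u_{xx} - u_{yy} = f$ with homogeneous Dirichlet boundaries: every grid line of the given parity is solved exactly by the Thomas algorithm while its $y$-neighbours are held fixed.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main and super-diagonals a, b, c."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def zebra_x_sweep(u, f, eps, h, parity):
    """One half-sweep of x-line zebra relaxation for -eps*u_xx - u_yy = f
    (5-point stencil, zero Dirichlet boundary): all interior rows of the
    given parity are solved exactly, y-neighbours kept fixed."""
    ny, nx = u.shape
    lo = -eps / h**2 * np.ones(nx - 2)            # sub/super-diagonal
    di = (2 * eps + 2) / h**2 * np.ones(nx - 2)   # main diagonal
    for j in range(1 + parity, ny - 1, 2):
        rhs = f[j, 1:-1] + (u[j - 1, 1:-1] + u[j + 1, 1:-1]) / h**2
        u[j, 1:-1] = thomas(lo, di, lo, rhs)
```

Calling the sweep with `parity=0` and then `parity=1` gives one full zebra relaxation; an alternating version would apply the analogous $y$-line sweeps afterwards.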
\Cref{table:xyzebra_14_levels_e8} shows the total run-times for reducing the $L_2$ error norms by a factor of $10^8$ for $\epsilon=0.00001$ using 14 levels (about 268M finest-grid variables), where the $\kappa$-cycle is used as a stand-alone solver. The numbers in parentheses indicate the number of cycles. A comparison of several choices for the angle between the strong-diffusion direction and the $x$-coordinate is presented.
Here, the so-called alternating zebra relaxation (line Gauss-Seidel in red-black ordering) is applied first along the $x$-coordinate and then along the $y$-coordinate (only one relaxation in each direction is employed). Also, Galerkin coarsening is used for constructing the coarse-grid operator. This combination provides very good error smoothing for any angle $\phi$, and the cause of the slow convergence is inadequate coarse-grid correction \cite{Yav98}. For every angle in the test, setting $\kappa$ to 4 yields a superior run-time.
Note that in this and the following tests we used a machine with an NVIDIA GeForce RTX 3090 GPU, which provided the memory required for 14 levels, for which $\kappa > 2$ becomes advantageous.
\Cref{table:xyzebra_14_levels_cg_e8} shows run-times for the case where the $\kappa$-cycle is used as a preconditioner for Conjugate Gradients, using the same $\kappa$-cycle parameters as in \cref{table:xyzebra_14_levels_e8}.
As in the stand-alone version, for every angle in the test, setting $\kappa$ to 4 yields a superior run-time.
\Cref{table:xzebra_14_levels_1e8} shows run-times for cycles similar to those in \cref{table:xyzebra_14_levels_e8}, but with two relaxations along the $x$-coordinate (and no relaxation along the $y$-coordinate), and semi-coarsening: the grid is coarsened only along the $y$-coordinate.
As in the third solver, this combination also provides very good error smoothing for any angle.
For every angle in the test, setting $\kappa$ to 3 or 4 yields superior run-times.
Note that when semi-coarsening in 2D is used together with a $W$-cycle (as in this case), the number of operations required per cycle is no longer linear in $N$, but is $O(N \log N)$. This accounts for the sharp rise in run-times seen in the last row of \cref{table:xzebra_14_levels_1e8}. For other values of $\kappa$, the number of operations is linear in $N$, but the constant factor may be very large for large values of $\kappa$. This gives a relative advantage to small values of $\kappa$.
\Cref{table:xzebra_14_levels_cg_1e8} shows run-times for the case where the $\kappa$-cycle is used as a preconditioner for Conjugate Gradients, using the same $\kappa$-cycle parameters as in \cref{table:xzebra_14_levels_1e8}.
The semi-coarsening and line relaxations are expensive, especially for large values of $\kappa$, so for each of the angles, setting $\kappa$ to 2 (the $F$-cycle) yields the best time in this case.
Note that the damped Jacobi relaxations are symmetric, and the same number of relaxations is performed before and after the recursive calls. Also, the full-weighting restriction operator is the adjoint of the bi-linear prolongation operator. As proved in \cite{Tatebe93}, these conditions are sufficient for the matrices of a $V$-cycle and a $W$-cycle to be symmetric and positive definite when the starting guess is zero, and therefore they are valid preconditioners for Conjugate Gradients. For $1 < \kappa < n$ the cycle is not symmetric (indeed, the $F$-cycle is described as ``not a valid preconditioner'' in \cite{Tatebe96}). Still, the MGCG algorithm converged in all the tests we checked, and, as in the example above, asymmetric $\kappa$-cycles can be better preconditioners than both $V$-cycles and $W$-cycles.
The values of $\alpha$ and $\beta$ are system dependent. For larger ratios $\frac{\alpha}{\beta}$, small values of $\kappa$ become more important, because $N_{ops}$ does not change much between different $\kappa$ values, whereas the change in $N_{gpuCalls}$ may be huge, as can be inferred from \cref{table:notations}.
\section{Conclusions and further research} \label{sec:Conclusions}
In this work, we have presented a new simple fixed recursive structure for multigrid algorithms, yielding a family of multigrid cycles governed by a cycle counter $\kappa$. We have derived theoretical complexity results for this algorithm, developed tools for practical prediction of run-time, and have demonstrated the new structure's utility in numerical tests, showing cases where the $\kappa$-cycle is more efficient than any one of the cycles in common use.
It would be worthwhile to explore $\kappa$-cycle performance in other problems and on other computing platforms, with the aim of discovering regimes where even more significant improvement may be obtained. One such platform may be a distributed system with multiple connected computers. On such a platform, the relative cost of using a large $\kappa$, and a $W$-cycle in particular, would probably be much larger than on the GPU platform. Furthermore, distributed platforms allow the solution of larger problems, and this may benefit intermediate $\kappa$'s. Also, a run-time model very similar to the one we have shown in (\ref{eq:LinearCombination}) for a GPU system may be relevant to distributed systems and other computing platforms. On the other hand, modern GPUs with reduced or partly hidden latency should allow using larger $\kappa$ values to advantage.
In addition, it may be interesting to test the new family of cycles on nonsymmetric problems such as the convection-diffusion equation.
Finally, the $\kappa$-cycle should also be tested for more complex problems that are more practical for real world applications, including AMG for unstructured problems with non-constant coarsening ratio and operator-complexity.
\section{Appendix}
We wish to estimate the number of levels, $n$, where the computation cost matches the call cost, i.e., the turning point. For a given $\kappa$,
$$
\alpha\cdot N_{gpuCalls} = \beta\cdot N_{ops} = \beta\cdot C(\kappa)\cdot(2^n-1)^2 \, .
$$
Hence,
$$
n = \log_2 \left(1+\sqrt{\left(\frac{\alpha}{\beta}\cdot \frac{N_{gpuCalls}}{C(\kappa)} \right)} \right) \, ,
$$
where, for coarsening factor 0.25, $C(\kappa) = 2\left(1-\frac{1}{3^\kappa}\right)$ depends only on $\kappa$, and $N_{gpuCalls}$ depends on $\kappa$ and $n$, but does not depend on $\alpha$ or $\beta$. This last equation can be solved iteratively to find the number of levels (and hence the number of unknowns) at the turning point. We initialized $n$ to 2 and found that 20 iterations suffice for converging to $n_{tp}$.
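The fixed-point iteration described above can be sketched as follows; the `n_gpu_calls(kappa, n)` model is a user-supplied placeholder here (the actual counts are tabulated in the paper and depend on the cycle structure):

```python
import math

def C(kappa):
    # cost constant for coarsening factor 0.25, as in the appendix
    return 2.0 * (1.0 - 3.0 ** (-kappa))

def turning_point(alpha_over_beta, n_gpu_calls, kappa, n0=2.0, iters=20):
    """Iterate n -> log2(1 + sqrt(alpha/beta * N_gpuCalls / C(kappa))),
    where n_gpu_calls(kappa, n) is a (hypothetical) model of the number
    of GPU kernel launches per cycle."""
    n = n0
    for _ in range(iters):
        n = math.log2(1.0 + math.sqrt(
            alpha_over_beta * n_gpu_calls(kappa, n) / C(kappa)))
    return n
```

For an $n$-independent call count the iteration converges in a single step; when $N_{gpuCalls}$ grows with $n$ (as for $\kappa > 1$), a few iterations are needed, consistent with the 20 iterations reported above.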
\bibliographystyle{siam}
Non-adiabatic coupling vectors (NACVs) play an important role in photochemistry. They describe the coupling between Born-Oppenheimer surfaces due to the nuclear kinetic energy and allow transitions between electronic states in the absence of radiation \cite{baer_book}. They are a vital ingredient in
\begin{itemize}
\item non-adiabatic molecular dynamics simulations (``on-the-fly'' surface hopping) \cite{tapavicza2013ab},
\item searching for minimal energy conical intersections \cite{ragazos1992optimization}
\item and predicting non-radiative transition rates and fluorescence quantum yields \cite{valiev2018first}.
\end{itemize}
The brute-force method for computing non-adiabatic coupling vectors is numerical differentiation of the wavefunction with respect to the atomic positions, which requires at least $3 N_{\text{atoms}}$ electronic structure calculations. In the context of TD-DFT, exact coupling vectors can be obtained analytically \cite{send2010first} in an efficient way, but the implementation of the method is complicated. Having a simple and intuitive approximation for NACVs that may be combined with any semiempirical electronic structure method is therefore highly desirable.
In this article we compare two different semiempirical approximations for calculating non-adiabatic coupling vectors between the ground state and an excited state (usually $S_1$), which have been implemented in the framework of tight-binding DFT \cite{humeniuk2017dftbaby}.
The first approximation is based on transition charges: In analogy with the transition dipole moment, the NACV is obtained simply from the transition charges and the molecular geometry.
The second approximation, which was proposed by Abad et al. \cite{nacs_approx_MOs}, is based on molecular orbitals: Non-adiabatic couplings between Kohn-Sham orbitals are constructed from gradients of the overlap and Hamiltonian matrices between localized atomic orbitals, which are readily available in tight-binding DFT, since the same quantities are needed for the evaluation of the energy gradient.
Abad et al. tested their approximation in the vicinity of conical intersections, where the magnitude of the NACVs is largely determined by the small energy gap. At these photochemical funnels the non-adiabatic coupling diverges and the transfer of population between electronic states is usually extremely fast.
It remains to be investigated how well the approximation performs when the $S_0-S_1$ energy gap is large, such as at the $S_1$ minimum or the Franck-Condon point, where the length of the NACVs and their orientation relative to the normal modes determine the non-radiative transition rate.
The article is structured as follows: After a brief description of the approximations (in sections \ref{sec:nacs_charges} and \ref{sec:nacs_mos}), we investigate how the non-adiabatic coupling depends on the delocalization length of an excitation in chromophoric oligomers (in section \ref{sec:quantum_yield}). We then graphically compare the direction and magnitude of the approximate NACVs with their exact counterparts for a range of organic molecules with bright $\pi\pi^*$ excitations (in section \ref{sec:comparison}). Finally we make some qualitative predictions of fluorescence quantum yields in porphyrin tapes (section \ref{sec:tapes}).
\section{Theory}
\subsection{Semiempirical approximations}
The first-order non-adiabatic coupling vector between two electronic Born-Oppenheimer states $m$ and $n$ is
\begin{equation}
\vec{\tau}_{mn} = \bracket{\Psi_m}{\frac{\partial \Psi_n}{\partial \vec{R}}}. \label{eqn:nacv_grad}
\end{equation}
The coupling vector may be expressed as
\begin{equation}
\bracket{\Psi_m}{\frac{\partial \Psi_n}{\partial \vec{R}}} = \frac{\bra{\Psi_m} \frac{\partial \Op{H}}{\partial \vec{R}} \ket{\Psi_n}}{E_n - E_m}
\label{eqn:nacv_ediff}
\end{equation}
by differentiating the electronic Schr\"{o}dinger equation on both sides with respect to the nuclear coordinates $\vec{R}$, multiplying by $\bra{\Psi_m}$ for $m \neq n$, and rearranging.
The derivation of this expression requires that $\Op{H} \ket{\Psi_n} = E_n \ket{\Psi_n}$ is satisfied exactly, which is a much stronger statement than just requiring that Schr\"{o}dinger's equation is satisfied after projecting onto a finite basis set ${ \{ \ket{\Phi_i} \}_{i=1,\ldots,N_{\text{basis}}} }$:
\begin{equation}
\bra{\Phi_i} \Op{H} \ket{\Psi_n} = \tilde{E}_n \bracket{\Phi_i}{\Psi_n} \quad \quad \forall i=1,\ldots,N_{\text{basis}}
\end{equation}
Therefore eqn.~(\ref{eqn:nacv_ediff}) is strictly correct only if a complete basis set is used. In finite basis sets additional Pulay terms \cite{pulay1969ab} have to be considered which arise from the dependence of the basis set on the nuclear coordinates.
Nevertheless it is a good starting point for semiempirical approximations.
\subsubsection{\label{sec:nacs_charges} Approximation based on transition charges}
Since the electronic Hamiltonian depends on the nuclear geometry only through the Coulomb attraction between nuclei and electrons,
\begin{equation}
\Op{V}_{ne} = \sum_A^{\text{atoms}} \sum_i^{\text{electrons}} \frac{-Z_A}{\vert \vec{R}_A - \vec{r}_i \vert},
\end{equation}
the coupling vector ~(\ref{eqn:nacv_ediff}) on atom $A$ simplifies to
\begin{equation}
\vec{\tau}^A_{mn} = \frac{Z_A}{E_n - E_m} \int d\vec{r} \frac{\vec{R}_A - \vec{r}}{\vert \vec{R}_A - \vec{r} \vert^3} \rho_{mn}(\vec{r}), \label{eqn:nacv_ediff_enuc}
\end{equation}
where we have also introduced the transition density matrix
\begin{equation}
\rho_{mn}(\vec{r}) = N \int \ldots \int d\vec{r}_2 \ldots d\vec{r}_N \Psi_m^*(\vec{r},\vec{r}_2,\ldots,\vec{r}_N) \Psi_n(\vec{r},\vec{r}_2,\ldots,\vec{r}_N).
\end{equation}
By partial integration of eqn.~(\ref{eqn:nacv_ediff_enuc}) (see appendix \ref{sec:partial_integration}) the NACV turns into
\begin{equation}
\vec{\tau}^A_{mn} = \frac{-Z_A}{E_n - E_m} \int d\vec{r} \frac{\nabla \rho_{mn}(\vec{r})}{\vert \vec{R}_A - \vec{r} \vert} \label{eqn:nacv_rho_deriv}.
\end{equation}
This expression is very instructive since it shows that the coupling vector density is proportional to the gradient of the transition density. The largest contribution comes from points where $\vec{r} \approx \vec{R}_A$ due to the singularity of the Coulomb potential. Therefore we can say qualitatively that the non-adiabatic coupling vector on atom $A$ is approximately proportional to the gradient of the transition density around that atom.
To derive a semiempirical approximation for $\tau^A_{mn}$ let us return to eqn.~(\ref{eqn:nacv_ediff_enuc}) and assume that the transition density may be approximated by atomic transition charges (monopoles)
\begin{equation}
\rho_{mn}(\vec{r}) \approx \sum_B q_B \delta(\vec{r} - \vec{R}_B). \label{eqn:rho_monopole_approx}
\end{equation}
where $\delta(\cdot)$ is Dirac's $\delta$-function.
This approximation is frequently employed in semiempirical methods such as tight-binding DFT \cite{koskinen2009density}. The transition charges $q_A$ may be fitted to reproduce the electrostatic potential generated by the transition density (using the CHELPG algorithm) \cite{madjet2006intermolecular} or they may be calculated as Mulliken transition charges from the transition density matrix.
Substituting the monopole approximation ~(\ref{eqn:rho_monopole_approx}) into eqn. ~(\ref{eqn:nacv_ediff_enuc}) and using the property of the $\delta$-function, $\int \delta(x-x_0) f(x) dx = f(x_0)$, we get
\begin{equation}
\vec{\tau}^A_{mn} \approx \frac{Z_A}{E_n - E_m} \sum_{B \neq A} q_B \frac{\vec{R}_A - \vec{R}_B}{\vert \vec{R}_A - \vec{R}_B \vert^3}. \label{eqn:nacv_trchg}
\end{equation}
The term where $A = B$ was excluded to avoid dividing by zero. Only valence electrons are usually included in semiempirical calculations. Then the bare nuclear charge $Z_A$ should be replaced by the charge of the atomic core $Z_A^{\text{core}}$ (nucleus and core electrons), for instance in the case of carbon $Z_A^{\text{core}}=4$ instead of $Z_A=6$.
This approximation is completely analogous to how the transition dipole moment is calculated from the transition charges in the frame of TD-DFTB \cite{niehaus2001tight},
\begin{equation}
\vec{\mu}_{mn} \approx \sum_A q_A \vec{R}_A. \label{eqn:dipole_trchg}
\end{equation}
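A minimal NumPy sketch of eqs.~(\ref{eqn:nacv_trchg}) and (\ref{eqn:dipole_trchg}) might look as follows; the geometry, charges, core charges and energy gap used below are made-up illustrative numbers, not results from this work:

```python
import numpy as np

def nacv_from_charges(R, q, Z, dE):
    """Approximate NACV on each atom from transition charges:
    tau_A = Z_A/dE * sum_{B != A} q_B (R_A - R_B)/|R_A - R_B|^3.
    R: (N, 3) positions, q: transition charges, Z: core charges,
    dE: excitation energy E_n - E_m."""
    N = len(q)
    tau = np.zeros((N, 3))
    for A in range(N):
        for B in range(N):
            if A == B:
                continue
            d = R[A] - R[B]
            tau[A] += q[B] * d / np.linalg.norm(d) ** 3
        tau[A] *= Z[A] / dE
    return tau

def dipole_from_charges(R, q):
    """Transition dipole from transition charges: mu = sum_A q_A R_A."""
    return q @ R
```

For a two-atom model with charges $+q$ and $-q$ along the bond (the ethene-like situation discussed below), both coupling vectors point along the bond axis, while the dipole magnitude is $q$ times the bond length.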
The simplicity of the derived approximate expressions enables us to make some general statements about the properties of the NACVs. The direction and length of NACVs can be deduced qualitatively by inspecting the transition density or the distribution of the transition charges:
\begin{itemize}
\item Coupling vectors are non-zero only on atoms which take part in an excitation.
\item The coupling vectors point roughly along the direction where the transition density changes most strongly.
Thus, if there is a node in the transition density between two atoms, the NACV on the atom is perpendicular to the nodal surface.
\end{itemize}
As a simple example consider the $\pi\pi^*$ excitation in ethene (Fig. \ref{fig:ethene_nacs_charges_trdensity}). The transition charge is positive on one carbon,
negative on the other and almost zero on the hydrogen atoms. Therefore the coupling vectors on the hydrogen atoms are zero. The transition charges change strongly
from $+q$ to $-q$ when moving from one carbon to the other along the C$=$C bond. Therefore the NACVs on the carbons point along this bond.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{ethene_nacs_charges_trdensity.png}
\caption{\textbf{Ethene, $\pi\pi^*$ transition}.}
\label{fig:ethene_nacs_charges_trdensity}
\end{figure}
The approximation fails completely when the transition density cannot be adequately described by monopoles.
For instance in water, the HOMO-LUMO transition, $4a_1 \leftarrow 1b_1$, has lobes of opposite sign below and above the molecular plane.
The gradient of the transition density points perpendicularly to the molecular plane and is orthogonal to all vectors $\vec{R}_A - \vec{R}_B$.
This implies that the coupling vector cannot be represented in the basis of bond vectors.
The Mulliken transition charges are all zero, as is the approximate non-adiabatic coupling vector.
In this case the approximation for the electric transition dipole given in Eq. \ref{eqn:dipole_trchg} is also incorrect.
\subsubsection{\label{sec:nacs_mos} Approximation based on molecular orbitals}
Here we briefly recapitulate how NACVs are calculated in the local-orbital scheme proposed in Ref. \cite{nacs_approx_MOs} using the language of tight-binding DFT (DFTB).
In DFTB a minimal basis set of valence atomic orbitals is used. The molecular orbitals (MOs) are linear combinations of these localized basis functions $\ket{\mu}$:
\begin{equation}
\ket{\psi_i} = \sum_{\mu} c_{\mu i} \ket{\mu}
\end{equation}
The coefficients $c_{\mu i}$ for molecular orbital $i$ form the eigenvector of the Kohn-Sham equation belonging to eigenenergy $\epsilon_i$:
\begin{equation}
\sum_{\nu} \left( H^0_{\mu\nu} - \epsilon_i S_{\mu\nu} \right) c_{\nu i} = 0 \label{eqn:kohn_sham_nonscc}
\end{equation}
Matrix elements of the Kohn-Sham Hamiltonian at the reference density,
\begin{equation}
H^0_{\mu\nu} = \bra{\mu} H^{\text{KS}}[\rho_0] \ket{\nu}
\end{equation}
the overlap matrix elements
\begin{equation}
S_{\mu\nu} = \bracket{\mu}{\nu}
\end{equation}
and their gradients are obtained from Slater-Koster rules \cite{slater_koster}.
With the help of eqn. ~(\ref{eqn:kohn_sham_nonscc}) the authors of Ref. \cite{nacs_approx_MOs} derived an approximate expression for
non-adiabatic coupling vectors between molecular orbitals:
\begin{equation}
\begin{split}
\vec{d}_{ij}^A & = \bracket{\psi_i}{\frac{\partial \psi_j}{\partial \vec{R}_A}} \\
& \approx \frac{1}{\epsilon_i - \epsilon_j} \sum_{\mu,\nu} c^*_{\mu i} c_{\nu j} \left[ - \frac{\partial H^0_{\mu\nu}}{\partial \vec{R}_A} + \frac{\epsilon_i + \epsilon_j}{2} \frac{\partial S_{\mu\nu}}{\partial \vec{R}_A} \right] \label{eqn:coupling_mos}
\end{split}
\end{equation}
In time-dependent density functional theory and its tight-binding version, excited states are represented as linear combinations of singly excited Slater determinants:
\begin{equation}
\ket{\Psi_n} = \sum_{o \in \text{occ}} \sum_{v \in \text{virt}} C_{ov}^{(n)} \ket{\Psi_{o}^v}
\end{equation}
Non-adiabatic coupling vectors between the many-electron ground and excited states are obtained by contraction of the single-particle coupling vectors $\vec{d}_{ij}^A$ with the coefficients $C^{(n)}$:
\begin{equation}
\vec{\tau}^A_{0n} = \bracket{\Psi_0}{\frac{\partial \Psi_n}{\partial \vec{R}_A}} = \sum_{o \in \text{occ}} \sum_{v \in \text{virt}} C_{ov}^{(n)} \vec{d}_{ov}^A \label{eqn:nacv_mos}
\end{equation}
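Equations~(\ref{eqn:coupling_mos}) and (\ref{eqn:nacv_mos}) can be evaluated compactly with dense arrays. The sketch below uses random placeholder data for the matrix-element gradients (in an actual DFTB code they come from Slater-Koster tables), so it only illustrates the contractions:

```python
import numpy as np

def single_particle_nacv(c, eps, dH, dS, i, j):
    """d_ij^A per eq. for the single-particle coupling:
    c: (Nao, Nmo) MO coefficients, eps: orbital energies,
    dH, dS: gradients of H0 and S, shape (Natoms, 3, Nao, Nao)."""
    M = -dH + 0.5 * (eps[i] + eps[j]) * dS
    return np.einsum('m,axmn,n->ax', c[:, i], M, c[:, j]) / (eps[i] - eps[j])

def nacv_ground_excited(c, eps, dH, dS, Cex, occ, virt):
    """Contract the single-particle couplings d_ov^A with the TD-DFTB
    excitation coefficients C_ov to obtain tau_0n^A."""
    tau = 0.0
    for oi, o in enumerate(occ):
        for vi, v in enumerate(virt):
            tau = tau + Cex[oi, vi] * single_particle_nacv(c, eps, dH, dS, o, v)
    return tau
```

The result has one 3-vector per atom, and it is linear in the excitation coefficients $C^{(n)}_{ov}$, as eq.~(\ref{eqn:nacv_mos}) requires.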
It is worthwhile to highlight some of the approximations made in the above derivation:
(a) Eqn. ~(\ref{eqn:kohn_sham_nonscc}) neglects the dependence of the Hamiltonian on the density ($H[\rho] \approx H[\rho_0]$).
This made it possible to derive the relatively simple expression ~(\ref{eqn:coupling_mos}) for the coupling vectors. However, in our calculations we use expression
~(\ref{eqn:coupling_mos}) with MO coefficients obtained from solving the Kohn-Sham equations self-consistently.
(b) In principle, the NACV contains a contribution from changes of the coefficients $\frac{\partial C^{(n)}_{ov}}{\partial \vec{R}}$, which is neglected.
(c) The exact NACV diverges when the ground and excited state cross ($E_1 \approx E_0$), whereas for the approximate NACV this happens when the HOMO-LUMO gap closes ($\epsilon_{\text{HOMO}} \approx \epsilon_{\text{LUMO}}$). It is well-known that
HOMO-LUMO gaps are often significantly larger than $S_0-S_1$ excitation energies obtained from TD-DFT due to the mixing of single excitations. This suggests that the approximation will work best when such many-body effects are small so that the $S_0-S_1$ transition predominantly has HOMO $\to$ LUMO character.
Expression ~(\ref{eqn:coupling_mos}) is ideally suited for tight-binding DFT, since the gradients of the matrix elements can be constructed very efficiently at runtime
from precalculated Slater-Koster tables. Since the same quantities are needed for assembling the gradient of the energy,
which is needed in any molecular dynamics (MD) simulation, the computation of the NACVs comes at little additional cost.
This should be contrasted with the computational cost of an alternative method for computing the non-adiabatic couplings in MD simulations:
In the surface hopping method \cite{tully1990molecular} the electronic populations depend only on the scalar product between the NACV and the nuclear velocity vector. This scalar can
be obtained directly from the overlap of the electronic wavefunctions at neighbouring timesteps \cite{mitric2009nonadiabatic} obviating the need for computing the NACVs:
\begin{equation}
\bracket{\Psi_m}{\frac{\partial \Psi_n}{\partial \vec{R}}} \cdot \frac{d\vec{R}}{dt} \approx \frac{1}{\Delta t} \bracket{\Psi_m(t)}{\Psi_n(t+\Delta t)}
\end{equation}
However, since each excited state is a linear combination of Slater determinants, the evaluation of the overlap entails a large number of determinants,
rendering this scheme very expensive for large molecules, unless cutoff thresholds are used for culling determinants which contribute little to the overlap integral.
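The finite-difference relation above is easy to check on a toy two-state model (a hypothetical pair of orthonormal real state vectors rotating with constant angular velocity $w$, for which the exact scalar coupling equals $w$):

```python
import numpy as np

def td_coupling(psi_m_t, psi_n_tdt, dt):
    """Finite-difference scalar coupling <psi_m | d psi_n/dt>, assuming the
    states are expanded in a common orthonormal configuration basis."""
    return np.dot(psi_m_t, psi_n_tdt) / dt

# toy model: two orthonormal states rotating with angular velocity w,
# so that d psi_n/dt = w * psi_m and the exact coupling is w
w, dt, t = 0.3, 1e-4, 0.7
psi_n = lambda s: np.array([np.cos(w * s), np.sin(w * s)])
psi_m = lambda s: np.array([-np.sin(w * s), np.cos(w * s)])
approx = td_coupling(psi_m(t), psi_n(t + dt), dt)
```

The finite-difference error scales as $O(\Delta t^2)$ for this model, which is why moderate MD time steps already give accurate scalar couplings in surface hopping.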
\subsection{\label{sec:quantum_yield} Qualitative fluorescence quantum yield}
Approximations ~(\ref{eqn:nacv_trchg}) and ~(\ref{eqn:dipole_trchg}) provide qualitative guidelines on how to tune the electronic wavefunctions in order to increase fluorescence quantum yields. For now we focus only on electronic effects, although vibrational effects can also be very important, as will become clear later on.
If the vibrational wavefunction is neglected, according to Fermi's Golden rule the rates for radiative (spontaneous emission) and non-radiative (internal conversion) decays are proportional to the lengths squared of the transition dipole and non-adiabatic coupling vectors, respectively, between the ground state $S_0$ and the first excited state $S_1$:
\begin{eqnarray}
k_{\text{rad}} &\propto& \vert \vec{\mu}_{01} \vert^2 \\
k_{\text{IC}} &\propto& \vert \vec{\tau}_{01} \vert^2
\end{eqnarray}
To increase the fluorescence quantum yield
\begin{equation}
\text{QY} = \frac{1}{1 + \frac{k_{\text{IC}}}{k_{\text{rad}}}}
\end{equation}
$k_{\text{rad}}$ needs to be maximized while $k_{\text{IC}}$ needs to be minimized.
This can be achieved by
\begin{itemize}
\item increasing the length of the transition dipole
\end{itemize}
and/or
\begin{itemize}
\item avoiding conical intersections, where $E_1 = E_0$
\item and reducing the gradient of the transition density.
\end{itemize}
To avoid a crossing of the $S_1$ and $S_0$ energy levels, the geometry should be rigid, so that we can assume there is a stable minimum on $S_1$ and the reorganization energy is small. It then remains to maximize the length of the transition dipole moment and to minimize the gradient of the transition density. Since it is easier to analyze one factor at a time, we build a simple model in which the transition dipole is constant and only the length of the NAC vector changes.
\subsubsection{\label{sec:1d_model} 1D model fluorophore}
Consider a linear molecule (e. g. a polyene) with $2 M$ atoms on an equidistant grid with spacing $h$ (see Fig. \ref{fig:1d_model_fluorophore}a). For simplicity, we assume that each atom has a single $p_z$ orbital and contributes one electron. The $S_1$ state is a HOMO-LUMO transition. The $S_0-S_1$ transition density has nodes between all atoms, so that the transition charges alternate between positive and negative values.
The atomic positions and transition charges for atom $i$ are given by
\begin{eqnarray}
x_i &=& i h \\
q_i &=& (-1)^i \frac{q}{M}
\end{eqnarray}
For simplicity we set the nuclear charge to $Z=1$ and the excitation energy to $E_1 - E_0=1$.
The transition dipole moment is independent of the number of atoms (see appendix \ref{sec:dipole_constant}):
\begin{equation}
\vert \vec{\mu} \vert = \sum_{i=0}^{2M-1} q_i x_i = q h
\end{equation}
The non-adiabatic coupling vector on the $i$-th atom is given by
\begin{equation}
\tau^{(i)} = \frac{q}{M h^2} \sum_{j = 0,\, j \neq i}^{2M-1} (-1)^j \frac{i-j}{\vert i-j \vert^3}.
\end{equation}
The length of the total non-adiabatic coupling vector is given by
\begin{equation}
\vert \vec{\tau} \vert = \sqrt{\sum_{i=0}^{2M-1} (\tau^{(i)})^2}
\end{equation}
and needs to be evaluated numerically. If we plot the length of $\vec{\tau}$ against the number of atoms $2 M$ that participate in the excitation (Fig. \ref{fig:1d_model_fluorophore}b), we see that the rate for internal conversion can be minimized by spreading the transition charge over as many atoms as possible while maintaining the same electric transition dipole moment. Since the transition charges alternate in sign from atom to atom, the gradient of the transition density can be reduced only if the charges themselves are small. In order to keep the same transition dipole moment, the number of atoms over which the excitation is delocalized needs to be increased.
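The scaling of $\vert \vec{\tau} \vert$ with $M$ is easy to reproduce numerically. The following sketch (an illustration, not part of the original calculation; units with $q = h = 1$ assumed) evaluates the model formulas above and confirms that the transition dipole stays constant while the NAC length decreases as the charge is spread over more atoms:

```python
import math

def model_nac_and_dipole(M, q=1.0, h=1.0):
    """Transition dipole |mu| and NAC length |tau| for the 1D model
    with 2M atoms, positions x_i = i*h and charges q_i = (-1)^i q/M."""
    n = 2 * M
    charges = [(-1.0) ** i * q / M for i in range(n)]
    # transition dipole: sum_i q_i x_i  (its magnitude is q*h for any M)
    mu = abs(sum(charges[i] * i * h for i in range(n)))
    # NAC component on atom i: (q/(M h^2)) * sum_{j != i} (-1)^j (i-j)/|i-j|^3
    tau = []
    for i in range(n):
        s = sum((-1.0) ** j * (i - j) / abs(i - j) ** 3
                for j in range(n) if j != i)
        tau.append(q / (M * h ** 2) * s)
    return mu, math.sqrt(sum(t * t for t in tau))
```

Evaluating this for increasing $M$ reproduces the trend of Fig. \ref{fig:1d_model_fluorophore}b: $\vert\vec{\mu}\vert$ stays at $qh$ while $\vert\vec{\tau}\vert$ decreases monotonically.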
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{linear_model_fluorophore.png}
\caption{1D model fluorophore. \textbf{a)} Linear molecule with 4 atoms. The transition density has nodes between neighbouring atoms. \textbf{b)} The ratio of the non-adiabatic coupling to the transition dipole moment decreases with the number of atoms $2 M$ taking part in the excitation.}
\label{fig:1d_model_fluorophore}
\end{figure}
\section{Results}
\subsection{\label{sec:comparison} Comparison between approximate and exact NACVs}
The two approximations for NACVs are tested for a series of organic molecules with bright $\pi\pi^*$ transitions. Many of the selected molecules are fluorescent dyes which have a stable lowest excited singlet state (with the exception of the polyenes).
After optimizing the geometries at the AM1 level of theory, the lowest bright excited state was computed with TD-$\omega$B97XD/def2-SVP using Gaussian 16 \cite{g16}. Analytical NACVs were obtained in the frame of TD-DFT \cite{send2010first} via the keyword \textsl{TD=NAC}. These vectors serve as ``exact'' reference values against which the quality of the approximate vectors is measured. Approximate NACVs based on either Mulliken transition charges (according to eqn.~(\ref{eqn:nacv_trchg})) or localized orbitals (according to eqns.~(\ref{eqn:coupling_mos}) and (\ref{eqn:nacv_mos})) were computed in the frame of long-range corrected tight-binding DFT \cite{humeniuk2017dftbaby}.
The comparison between the three types of NACVs is presented in a graphical way in Figs. \ref{fig:polyenes_nacs} to \ref{fig:porphyrin_tapes_nacs} below.
The components of the NAC vectors on each atom are shown as little red arrows. Since eigenvalue solvers produce eigenvectors with arbitrary global signs, only the relative orientation of the vectors to each other is important. A sign change in either the bra or the ket wavefunction is equivalent to flipping all vectors simultaneously.
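When comparing two sets of NACVs numerically, the arbitrary global sign can be fixed by aligning each set with a chosen reference before measuring deviations. A minimal sketch of this trick (the helper name and the flat-list representation of the vector components are our own, hypothetical choices, not part of the original workflow):

```python
def align_sign(ref, vec):
    """Flip the arbitrary global sign of the NACV set `vec`
    (a flat list of Cartesian components over all atoms) so that
    its overlap with the reference set `ref` is non-negative."""
    overlap = sum(r * v for r, v in zip(ref, vec))
    return vec if overlap >= 0 else [-v for v in vec]
```

After this alignment, only genuine differences in orientation and magnitude remain.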
The NACVs were scaled by a factor (which is indicated in the upper left corner) so that the largest vector in each figure has approximately the same length. We proceed by analyzing the quality of the semiempirical approximations as compared to the exact NACVs for each class of molecules:
Trans-polyenes (Fig. \ref{fig:polyenes_nacs}) are the simplest conjugated systems which behave like the linear 1D model discussed above. The lowest bright state ($B_u$) is polarized along the molecular axis.
The transition density has nodes between neighbouring carbon atoms, which coincide with the positions where the tip of one arrow touches the tail of the next. The individual
arrows become shorter as the number of carbon atoms increases from ethene to hexatriene, reflecting the smearing out of the transition charges over a larger area.
The transition charge (TC) approximation overestimates the NACVs by a factor of 5 but gets the orientation of the vectors right. In turn, the localized orbital (LO) approximation
underestimates the NACVs by at least a factor of 10 and fails to predict the orientation.
The cyanines Cy$N$ (Fig. \ref{fig:cyanine_dyes_nacs}) are fluorescent cationic dyes that consist of a polymethine chain connecting two nitrogens which are part of an indole moiety. Cy3, Cy5 and Cy7 differ by the number of carbon atoms in the bridge, in Cy3B \cite{cooper2004cy3b} the polymethine chain is stabilized against deformation by additional aliphatic rings. In all cyanines the lowest bright excitation is localized on the polymethine chain, and consequently the NACVs are also limited to this region of the molecule. In the polymethine chain the orientation of the arrows alternates as is expected based on the location of the nodes in the transition density.
The LO approximation predicts the position and orientation of the NAC vectors correctly but underestimates their magnitude by a factor of 3. The TC approximation yields NACVs that are spread out too much over non-chromophoric parts of the molecule, such as methyl groups in Cy3-Cy7 or the aliphatic rings in Cy3B. In the polymethine chain the NAC vectors all point in the same direction, but the total magnitude of the NACV is approximately correct.
Dicyanovinyl-substituted squaraines (Fig. \ref{fig:squaraine_dyes_nacs}) \cite{mayerhoffer2013synthesis} are another class of fluorescent dyes.
In squaraine-O position 3 of the indole moiety is replaced by oxygen, whereas in squaraine-CMe a methyl group is added. The excitation is localized on the central four-membered ring and the adjacent methine groups. The LO approximation reproduces the orientation of the vectors accurately, except for those on the C$\!=\!$O group, which are far too short. The TC approximation places the largest NAC vectors on two opposite carbon atoms in the four-membered ring, although the coupling vectors at these positions should be zero. As with the cyanines, the TC approximation tends to place large NAC vectors on atoms that are not part of the chromophore.
Finally a selection of polycyclic aromatic hydrocarbons is considered in Figs. \ref{fig:aromatic_hydrocarbons_1_nacs} and \ref{fig:aromatic_hydrocarbons_2_nacs}. The couplings were calculated for the lowest excited state. Since the ordering of states can be method-dependent, the symmetry label is given in brackets.
The ring systems give rise to complex patterns in the distribution of NAC vectors.
In fluorene ($B_2$) and phenanthrene ($B_2$) the arrows are arranged in cycles around the outer six-membered rings. This pattern is reproduced by the LO approximation, whereas the TC pattern has no similarity with the exact results. In pyrene ($B_{1u}$), perylene ($B_{1u}$) and rubrene ($B_1$) the relative orientation of the vectors is reproduced correctly by both the LO and the TC approximations; however, the relative magnitudes of the vectors differ considerably. The total magnitude of the coupling is severely underestimated by the LO approximation (by a factor of 3-10) and overestimated by the TC approximation (by as much as a factor of 10).
In rubrene the excitation is strictly confined to the tetracene core. In spite of this, the semiempirical approximations yield large vectors on the adjacent phenyl groups, which are perpendicular to the central tetracene.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{polyenes_nacs.png}
\caption{\textbf{Polyenes.}
Non-adiabatic coupling vectors computed using Furche's analytic method (left)
and the approximations based on transition charges (middle) or couplings between
Kohn-Sham orbitals (right).
The factor by which the vectors were scaled is shown in the upper left corner.}
\label{fig:polyenes_nacs}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{cyanine_dyes_nacs.png}
\caption{\textbf{Cyanine dyes.}}
\label{fig:cyanine_dyes_nacs}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{squaraine_dyes_nacs_renamed.png}
\caption{\textbf{Squaraine dyes.}}
\label{fig:squaraine_dyes_nacs}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{aromatic_hydrocarbons_1_nacs.png}
\caption{\textbf{Aromatic hydrocarbons 1.}}
\label{fig:aromatic_hydrocarbons_1_nacs}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{aromatic_hydrocarbons_2_nacs.png}
\caption{\textbf{Aromatic hydrocarbons 2.}}
\label{fig:aromatic_hydrocarbons_2_nacs}
\end{figure}
\FloatBarrier
\subsection{\label{sec:tapes} Porphyrin tapes}
We will now test the predictions of the 1D model from section \ref{sec:1d_model} for the porphyrin tapes that were synthesized by the Tsuda group \cite{tsuda2001fully}. These tapes consist of triply-fused zinc-porphyrins (the structure is shown as an inset in Fig. \ref{fig:porphyrin_tapes_nacv_tdip_size_dependence}). The monomer units are linked through conjugation allowing the electrons to delocalize freely over the entire tape like particles in a box. The delocalization is reflected in the lowering of the excitation energy far into the infrared with increasing length. At the same time, delocalization of the transition density should also impact the magnitude of the electronic non-adiabatic coupling.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{porphyrin_tapes_1-3_combined.png}
\caption{\textbf{Triply-fused porphyrin oligomers.} NACVs between the lowest B$_{1u}$ state and the ground state computed exactly (left) and approximately (center and right).}
\label{fig:porphyrin_tapes_nacs}
\end{figure}
The transition dipole moments and NACVs
were computed with long-range corrected TD-DFTB for the lowest $B_{1u}$ state, which is polarized along the long axis of the tape. For the monomer, dimer and trimer the NACVs are depicted in Fig. \ref{fig:porphyrin_tapes_nacs}. As the conjugation extends over all porphyrin units, the transition dipole moment $\vec{\mu}$ grows approximately linearly with the size of the tape. However, since the transition charges are spread out over a larger area, the non-adiabatic coupling $\vec{\tau}$ grows sublinearly and saturates. The ratio between the lengths of the two vectors is shown in Fig. \ref{fig:porphyrin_tapes_nacv_tdip_size_dependence}. Since the tapes are also very rigid, one can expect that ultrafast internal conversion through conical intersections, which usually requires some local deformation of the geometry, is not the dominant non-radiative decay channel.
Based on this analysis one would expect the long tapes to have extremely high fluorescence quantum yields.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{porphyrin_tapes_nacv_tdip_size_dependence.png}
\caption{\textbf{Porphyrin tapes.} The ratio of the non-adiabatic coupling vector to the transition dipole moment (in a.u.) is shown as a function of the length $n$ of the porphyrin tape. Observe the similarity with Fig. \ref{fig:1d_model_fluorophore}b. Since NACVs computed with localized orbitals are systematically too low, the semiempirical curve had to be scaled by a factor of $8$ to agree with the DFT curve.}
\label{fig:porphyrin_tapes_nacv_tdip_size_dependence}
\end{figure}
However, this is not the case. Ref.~\cite{cho2002photophysical} actually shows that the non-radiative rate increases rapidly with the length of the tapes, so that the fluorescence is quenched as compared to the monomer. At first this appears to contradict the fact that the ratio of the electronic non-adiabatic coupling to the transition dipole moment, $\tau / \mu$, decreases. However, the reason for the fluorescence quenching in the oligomers is the lower energy gap\cite{tittelbach1995measurements} and the vastly higher density of states. The reduction in the \textsl{electronic} non-adiabatic coupling per porphyrin unit is more than compensated by the increase of accessible final vibrational states.
To verify this explanation we computed the radiative and non-radiative rates for the smallest porphyrin tapes using Fermi's Golden Rule in the harmonic approximation following the steps of Ref. \cite{valiev2018first}.
\footnote{Calculation of rates is based on the following approximations:
The $S_0$ and $S_1$ potential energy surfaces are harmonic, share the same normal modes and frequencies but have different equilibrium geometries (shifted harmonic oscillators). Frequencies and normal modes are determined from a frequency calculation using $\omega$B97XD/def2-SVP at the $S_0$ minimum. The Huang-Rhys factors are deduced from the gradient on $S_1$ at the Franck-Condon point. Total rates are obtained by summing over all transitions that start in the initial vibrational ground state on $S_1$ and lead to a final vibrational state on $S_0$. Radiative transitions may lead to any vibrational state lower in energy, while in non-radiative transitions the final vibrational states on $S_0$ have to be approximately isoenergetic with the initial vibrational state on $S_1$. Following Ref. \cite{santoro2007effective}, final states are grouped into classes C$_n$ depending on the number $n$ of simultaneously excited modes. The summation is limited to classes $C_0-C_8$, which captures most of the radiative rate and a large fraction of the non-radiative rate. Modes are sorted in decreasing order by Franck-Condon factors, and the number of modes from which excitations are allowed is reduced until there are no more than $10^8$ elements per class left.}
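For a single displaced mode, the Franck-Condon factors entering such sums take the standard Poisson form $\vert\langle 0 \vert n \rangle\vert^2 = e^{-S} S^n / n!$ with Huang-Rhys factor $S$. A minimal one-mode sketch (illustrative only; this is not the multi-mode class-summation code actually used for the rates):

```python
import math

def fc_factor(S, n):
    """Franck-Condon factor |<0|n>|^2 between the vibrational ground
    state on S1 and level n of a displaced harmonic mode on S0,
    for Huang-Rhys factor S (shifted-oscillator / Poisson form)."""
    return math.exp(-S) * S ** n / math.factorial(n)
```

Summing over $n$ recovers unit probability, which is the closure relation one gives up when the sum over final states is truncated class by class.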
For the smallest tapes optimizations and frequency calculations on the ground state and the first excited state with $1B_{1u}$ symmetry at the TD-DFT level of theory are still feasible. The resulting rates and quantum yields are listed in table \ref{tbl:porphyrin_tapes_rates}:
The non-radiative rate jumps by orders of magnitude from the monomer to the dimer and increases further in the trimer. With the non-radiative rate increasing much faster than the radiative rate, the quantum yield drops nearly to zero, as observed in experiment.
Since the sum over final vibrational states necessarily has to be truncated, the reported non-radiative rates are only lower bounds. Even so, it is clear that the non-adiabatic coupling between \textsl{vibrational} states and the sheer density of states are responsible for the fluorescence quenching.
\begin{table}[h!]
\begin{tabular}{ccccc}
\toprule
T$_n$ & $E_{\text{vert}}$ / eV & $k_{\text{rad}}$ (s$^{-1}$) & $k_{\text{nr}}$ (s$^{-1}$) & $QY$ \\
\midrule
1 & 2.35 & 9.6e+05 & 3.0e-07 & 1.0e+00 \\
2 & 1.51 & 1.5e+07 & 8.2e+07 & 1.6e-01 \\
3 & 1.19 & 3.4e+07 & 3.2e+09 & 1.0e-02 \\
\bottomrule
\end{tabular}
\caption{Dependence of the vertical excitation energy $E_{\text{vert}}$, radiative rate $k_{\text{rad}}$, non-radiative rate $k_{\text{nr}}$ and fluorescence quantum yield $QY = \frac{k_{\text{rad}}}{k_{\text{rad}} + k_{\text{nr}}}$ on the number of porphyrin units $n$ in the triply-fused porphyrin tapes T$_n$.}
\label{tbl:porphyrin_tapes_rates}
\end{table}
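As a quick consistency check, the quantum-yield column of the table follows directly from the two rate columns via $QY = k_{\text{rad}}/(k_{\text{rad}} + k_{\text{nr}})$:

```python
# Rates (in 1/s) taken from the table for the tapes T_1 - T_3.
rates = {
    1: (9.6e5, 3.0e-7),
    2: (1.5e7, 8.2e7),
    3: (3.4e7, 3.2e9),
}

def quantum_yield(k_rad, k_nr):
    """Fluorescence quantum yield from radiative and non-radiative rates."""
    return k_rad / (k_rad + k_nr)
```

The computed yields (1.0, 0.15, 0.011) match the tabulated values to the precision given.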
\FloatBarrier
\section{Discussion}
Judging the quality of the NAC vectors by visual inspection can be misleading since it suggests there is more agreement than there actually is.
The symmetry of the NAC vectors is related to the symmetry of the excited state. The relative orientation of the vectors in molecules with high symmetry is therefore largely determined by the irreducible representation.
In trans-butadiene ($C_{2h}$), for instance, only the relative orientation of two out of four vectors is not already fixed by symmetry. According to TD-DFT these two symmetry-unrelated vectors should be parallel, but the localized orbital method yields an antiparallel orientation (see Fig. \ref{fig:polyenes_nacs}). The orientation is thus entirely wrong, and the magnitude is also off by a factor of 10.
The localized orbital method tends to underestimate the magnitude of the vectors: In ethene the vectors are too short by a factor of 60, in the cyanine dyes by a factor of 3 and in the porphyrin tapes by a factor of 8. The large error for a system as simple as ethene is surprising. Eqn.~(\ref{eqn:nacv_rho_deriv}) and Fig. \ref{fig:ethene_nacs_charges_trdensity} showed that the non-adiabatic coupling in the $\pi\pi^*$ state is due to the gradient of the transition density, which points along the C-C bond. The transition charge approximation fares a little better in predicting the magnitude of the coupling, but it fails in predicting the distribution of the vectors: In the cyanines the excitation is strictly localized on the polymethine bridge, but large vectors can be found on two adjacent methyl groups. An extreme example of this are the porphyrin tapes, where the largest vector is placed on the zinc atom, which does not take part in the excitation at all.
Comparison between the two approximations is hindered by the fact that one is derived from eqn.~(\ref{eqn:nacv_rho_deriv}) (gradient of the transition density) and the other from eqn.~(\ref{eqn:nacv_grad}) (gradient of the excited-state wavefunction); the two expressions are only equivalent in the basis-set limit, and the minimal valence basis of DFTB is a long way from a complete basis set.
The LO approximation considers terms that arise because basis functions are attached to the nuclei (Pulay terms) but neglects changes of the excitation coefficients ($\frac{\partial C^{(n)}_{ov}}{\partial \mathbf{R}}$). The TC approximation is independent of the basis set and thus cannot account for Pulay terms.
Some of the errors relative to TD-DFT might also be due to the tight-binding approximations: Semiempirical transition charges, excitation energies and molecular orbitals, which enter the expressions for the TC and LO approximations, differ from their ab initio counterparts.
However, those sources of error are of minor importance.
In fact, if we feed our TC approximation with transition charges that were fitted to reproduce the electrostatic potential of the TD-DFT transition density (using the PSPFFT library\cite{budiardja2011parallel} for solving the Poisson equation and the CHELPG algorithm \cite{breneman1990determining}), the resulting vectors are very similar to the tight-binding results.
The valence basis set employed in DFTB is also not to blame. With a minimal STO-3G basis set the resulting TD-DFT NAC vectors are indistinguishable from the def2-SVP results.
\section{Conclusion}
Two simple semiempirical approximations for non-adiabatic coupling vectors between excited
singlet states and the ground state were implemented in the frame of (LC)-TDDFTB and compared with TD-DFT coupling vectors as benchmarks
for a set of planar chromophores with bright $S_1$ states.
The TC approximation is based on excitation energies, atom-centered transition charges and geometric information.
In the LO approximation the coupling between many-body states is calculated from the coupling vectors between
molecular orbitals.
While easy to implement and highly efficient, neither approximation is accurate enough to predict the absolute magnitude of the non-adiabatic coupling vector. In particular, the LO approximation underestimates couplings by one order of magnitude. Nevertheless, the region in the molecule where the coupling is large can often be identified. For a series of fused porphyrin tapes the reduction in the electronic coupling per porphyrin unit can be explained by the increasing delocalization of the excitation. As a general rule, spreading transition charges over a larger area reduces the electronic non-adiabatic coupling. This, however, does not imply that the fluorescence quantum yield can be increased simply by enlarging the delocalization length, since larger $\pi$-systems also have larger nuclear non-adiabatic couplings due to the increased density of states.
The upshot is that quantitative NAC vectors cannot be obtained with these simple approximations. The implementation of analytical coupling vectors in the spirit of Ref. \cite{send2010first} can in principle be adapted to tight-binding DFT in analogy to the analytic gradients \cite{heringer2007analytical} but will require a major effort. The LO approximation is a first step in that direction. Without going to these lengths, the TC approximation might be improved upon by including higher multipoles to represent the transition density more faithfully away from the molecular plane.
\section*{Acknowledgements}
A.H. and R.M. acknowledge funding by the European Research Council (ERC) Consolidator Grant DYNAMO (Grant No. 646737).
\section{Introduction}
When a small temperature gradient exists between the two boundaries of
a material, heat is expected to be transported through the material,
usually obeying Fourier's law of conduction,
$\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{j}} = - \kappa \nabla T$, a well-established fact in three-dimensional
systems. Here
$\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{j}} $ is the amount of heat flowing through a unit surface per unit
time, $\nabla T$ is the gradient of the temperature field over the
material, and $\kappa $ is defined as the thermal conductivity.
correct in the lower (one or two) dimensional systems, which
stimulated a great interest in the past several years. It has been
shown [1-14] that heat conduction exhibits anomalous behavior in
some one dimensional systems. For example, in the one-dimensional
(1D) integrable systems, such as the harmonic chain and the
monoatomic Toda model, no temperature gradient can be set up. In
some 1D nonintegrable systems, such as the F-K model [2], the
discrete $\phi ^4$ model [4] and the Lorentz model [5], the
temperature gradient is uniform, and the heat conductivity $\kappa
$ is a constant, being independent of system size, which means
these 1D systems still obey Fourier's law. However, in some other
anharmonic 1D systems, like the Fermi-Pasta-Ulam (FPU) $\beta
$-model [6, 2], the diatomic Toda chain [7], or in 1D
hard-particle gases with alternating masses [7, 8, 9], the
temperature gradient can be formed as $\frac{dT}{dx} \sim L^{ -
1}$, and their $\kappa $ scales as $\kappa \sim L^\alpha $, where
$L$ is the system size, and $\alpha > 0$. Recently, Wang
\textit{et al.} [14] studied the anomalous thermal conduction in
1D polymer chains with a model Hamiltonian, and found three types of
divergence exponent $\alpha $, which are caused by different couplings
between longitudinal and transverse motions.
However, although a great deal of theoretical research on the problem
has been carried out in the past several years, there is still
considerable controversy about the divergence behavior of the thermal
conductance in low-dimensional systems, and many important and
fundamental questions in this field remain unsolved.
All the systems discussed above, however, are far from real physical
materials. What happens to the thermal conduction in a real
low-dimensional material? Does it follow Fourier's law or not?
These problems not only stimulate fundamental research interest, but
also bear on the thermal management of nanomaterials. As the
dimensions of electronic devices shrink to the nanoscale owing to the
fast progress of microelectronic technology, the thermal conduction
problem becomes more and more important, because significant energy
must be dissipated in a much smaller compact space. Moreover, the
divergence of the thermal conductance with length in low-dimensional
materials promises the possibility of making outstanding
heat-dissipating nanomaterials, helping to solve the thermal
dissipation problem that comes with the miniaturization of electronic
and optical devices. It is therefore very interesting to study heat
conduction in a real 1D or quasi-1D physical system.
Recently, carbon nanotubes (CNTs) have attracted much attention
due to their remarkable electronic, thermal and mechanical
properties [15]. The diameter of a typical nanotube ranges from
several to several tens of angstroms, and that of the smallest one is
only $3$~\AA\ [16], while their lengths can reach several $\mu $m, or
even several mm, much larger than their diameters. A carbon nanotube
can therefore be regarded as a very good 1D system. In
fact, many experiments and numerical simulations have found that
the thermal conductivity $\kappa $ of SWNTs is extremely high
although there exists a distribution of the obtained $\kappa $
values. For example, the observed room-temperature thermal
conductivity of SWNT rope is about 1750$\sim $5800 W/mK [17], and
for individual multiwalled carbon nanotubes (MWNTs), this value is
larger than 3000 W/mK [18]. Using equilibrium and nonequilibrium
molecular dynamics simulations, recent numerical simulations also
give similar results. Berber \textit{et al.} [19] found that, for
an isolated (10, 10) nanotube at room temperature, $\kappa \approx
6600$ W/mK. S. Maruyama \textit{et al.} [20] claimed that $\kappa
$ of a (5, 5) nanotube diverges as a power law with the
tube length, and obtained a rather smaller $\kappa $ value of about 150
$\sim $ 500 W/mK for the (5, 5) tube. Zhang and Li [21] studied
three armchair SWNTs, i.e., (5, 5), (10, 10) and (15, 15), and
found that their $\kappa$'s diverge as a power law, too, with
their $\kappa $ values of about 700 $\sim $ 2200 W/mK, higher than
that in Ref.[20]. Yao \textit{et al.} [22] also studied the same
three armchair tubes, and found that their $\kappa$'s also diverge
with their lengths, with similarly high $\kappa $ values of
about 400 $\sim $ 2500 W/mK.
At the same time, a new type of carbon structure, carbon nanowires
(CNWs) [23] have been discovered in the cathode deposits prepared
by hydrogen arc discharge evaporation of carbon rods. The CNWs are
made of extraordinarily long 1D linear carbon chains consisting of
more than 100 carbon atoms inserted into the innermost tube (7 \AA
diameter) of MWNTs. The CNW can be considered as another good
example of a real 1D physical system. In this paper, we investigate
in detail the heat conduction in CNWs, paying particular attention to
the divergence behavior of their thermal conductivity.
In what follows, we first introduce the model Hamiltonian and the
calculation method. In Sec. III we present the main numerical results
and discuss the divergence of the thermal conductivity in the CNWs.
The paper ends with some concluding remarks in Sec. IV.
\section{Model}
As is well known, a bare long carbon chain cannot be stable, so we
suppose that a chain of $N$ carbon atoms, each of mass $m_c $, is
inserted into a single-walled carbon nanotube. The interaction
between chain atoms is simulated by the Tersoff-Brenner bond-order
potential [24], and the interaction between carbon chain and
outside nanotube is described by Lennard-Jones(LJ) potential,
\begin{equation}
\label{eq1}
u\left( x \right) = 4\varepsilon \left[ { - \left( {\frac{\sigma }{x}}
\right)^6 + \left( {\frac{\sigma }{x}} \right)^{12}} \right].
\end{equation}
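The pair potential of Eq. (1) is straightforward to evaluate numerically. A minimal sketch (illustrative only; the parameter values $\varepsilon = 2.41$ meV and $\sigma = 3.4$~\AA\ quoted in the text are used, expressed here in eV and \AA):

```python
def lj(x, eps=2.41e-3, sigma=3.4):
    """Lennard-Jones pair potential of Eq. (1).
    x and sigma in Angstrom, eps and the return value in eV."""
    return 4.0 * eps * (-(sigma / x) ** 6 + (sigma / x) ** 12)
```

The potential vanishes at $x = \sigma$, is repulsive at shorter range, and has its minimum of depth $-\varepsilon$ at $x = 2^{1/6}\sigma \approx 3.8$~\AA.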
In our simulation, $\varepsilon $ and $\sigma $ are taken as 2.41
meV and 3.4~\AA\ [25], respectively. Then the Hamiltonian of the
chain system can be written as
\begin{equation}
\label{eq2}
H = \sum\limits_{i = 1}^N
{\frac{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{p}} _i^2 }{2m_i }} + \textstyle{1 \over 2}\sum\limits_{i \ne j}^N {V_{ij} } + \sum\limits_{i = 1}^N {U_i } ,
\end{equation}
\noindent
where
\begin{equation}
\label{eq3}
V_{ij} = f_c \left(
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{ij} } \right)\left[ {a_{ij} f_R \left(
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{ij} } \right) + b_{ij} f_A \left(
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{ij} } \right)} \right].
\end{equation}
Here, $f_c \left(
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{ij} } \right)$ is a cut-off function, $f_R \left(
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{ij} } \right)$ and $f_A \left(
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{ij} } \right)$ are two Morse type functions which represent
the attractive and repulsive effects of the potential,
respectively, and $a_{ij} $ and $b_{ij} $ are two parameters
describing bond order and bond angular effects. Full details of
the model potential are available in the original paper of Tersoff
and Brenner [24]. $U_i $ is external potential exerted by outside
nanotube. $m_i $ is the mass of chain atoms, $i.e.$, the carbon
atom mass. For simplicity, we assume that the carbon atoms on the
outside nanotube are distributed \textit{continuously}, which is well
known as the continuum model and has been used for many systems
[26-37]. For example, based upon the same idea, L.A. Girifalco
\textit{et al.} developed a simple universal graphitic potential in
their paper [37]. Now, following them, we take the external potential as:
\begin{equation}
\label{eq4}
U\left( r \right) = n_\sigma \int {u\left( x \right)} d\Sigma ,
\end{equation}
\noindent
where $r$ and $x$ represent the time-dependent distances of the wire atom to
the tube axis and to the surface element $d\Sigma $, respectively. $n_\sigma $ is the
mean surface density of tube atoms. In the cylindrical coordinates, $U\left(
r \right)$ can be expressed as:
\begin{equation}
\label{eq5}
U\left( r \right) = n_\sigma \int {u\left( x \right)} \rho d\theta dz,
\end{equation}
\noindent
where
\begin{equation}
\label{eq6}
x = \sqrt {\left( {\rho \cos \theta - r} \right)^2 + \rho ^2\sin ^2\theta +
z^2} ,
\end{equation}
\noindent
and $\rho $ is the radius of the outside tube, while $\theta $ and $z$ are the
other two cylindrical coordinates of $d\Sigma $.
Thus, when $r \ne 0$, the surface integral can be simplified to give
\begin{equation}
\label{eq7}
U\left( r \right) = 3\pi \rho n_\sigma \varepsilon \left[ { - \frac{\sigma
^6}{\left( {4\rho r} \right)^{\textstyle{5 \over 2}}}I_5 +
\frac{21}{32}\frac{\sigma ^{12}}{\left( {4\rho r} \right)^{\textstyle{{11}
\over 2}}}I_{11} } \right],
\end{equation}
\noindent
where
\begin{equation}
\label{eq8}
I_n = \int_0^{\textstyle{\pi \over 2}} {\frac{dt}{\left( {a^2 + \sin ^2t}
\right)^{\textstyle{n \over 2}}}} ,
\end{equation}
\[
a^2 = \frac{\left( {\rho - r} \right)^2}{4\rho r}.
\]
Here, $I_n $ is an integral related to the hypergeometric function; it
can be evaluated exactly as an expanded series, and the result is
expressed as
\[
I_n = \textstyle{\pi \over 2}b^{ - n}\left[ {1 + \sum\limits_{m =
1}^\infty {\frac{\left( {2m - 1} \right)!!\left( {2m + n - 2}
\right)!!}{\left( {n - 2} \right)!!\left[ {\left( {2m} \right)!!}
\right]^2b^{2m}}} } \right],
\]
\begin{equation}
\label{eq9}
b^2 = a^2 + 1 = \frac{\left( {\rho + r} \right)^2}{4\rho r}.
\end{equation}
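The series of Eq. (9) can be checked directly against numerical quadrature of the defining integral, Eq. (8). A sketch (illustrative only; successive series terms are generated recursively via the ratio $(2m-1)(2m+n-2)/\left[(2m)^2 b^2\right]$, which follows from the double-factorial coefficients above):

```python
import math

def I_quad(n, a2, steps=20000):
    """I_n of Eq. (8) by composite Simpson quadrature; a2 = a^2."""
    f = lambda t: (a2 + math.sin(t) ** 2) ** (-n / 2)
    h = (math.pi / 2) / steps
    s = f(0.0) + f(math.pi / 2)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0

def I_series(n, a2, terms=60):
    """I_n from the expanded series of Eq. (9), with b^2 = a^2 + 1."""
    b2 = a2 + 1.0
    s = term = 1.0
    for m in range(1, terms + 1):
        term *= (2 * m - 1) * (2 * m + n - 2) / ((2 * m) ** 2 * b2)
        s += term
    return (math.pi / 2) * b2 ** (-n / 2) * s
```

For $n = 2$ the series can be summed in closed form, $I_2 = \pi/(2ab)$, which provides an independent check.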
But when $b \to 1$, i.e., $r \sim \rho $, the summation in $I_n $
converges very slowly. In that case, after some algebraic
manipulation, a more efficient expression can be obtained:
\begin{widetext}
\begin{equation}
\label{eq10}
I_{2k + 1} \approx \left\{ {\frac{1}{\left( {2k - 1} \right)!!}\left(
{\frac{2}{a^2}} \right)^k\sum\limits_{m = 0}^{k - 1} {\frac{\left[ {\left(
{2m} \right)!} \right]^2}{\left[ {m!} \right]^3}\frac{\left( {k - m - 1}
\right)!}{2}\left( {\frac{a}{4}} \right)^{2m}} + \frac{\left( {2k - 1}
\right)!!}{\left( {2k} \right)!!}} \right\}.
\end{equation}
\end{widetext}
Although this expression is approximate, when $a$ is small enough
it gives very accurate results for $I_n $ while requiring only a
few terms in its summation. Thus, combining Eqs. (7)--(9) with
Eq. (10), we obtain an efficient external potential that
represents, with high precision, the interaction between the chain
atoms and the outside tube.
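The series of Eq. (9) is easy to check numerically against direct quadrature of Eq. (8); for $n = 2$ the integral also has the elementary closed form $\pi /(2ab)$. A minimal Python sketch (illustrative only; the value of $a$ is arbitrary):

```python
import math

def I_quad(n, a, steps=4000):
    """Eq. (8) by composite Simpson's rule on [0, pi/2]."""
    h = (math.pi / 2) / steps
    f = lambda t: (a * a + math.sin(t) ** 2) ** (-n / 2)
    s = f(0.0) + f(math.pi / 2)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

def I_series(n, a, terms=200):
    """Partial sum of Eq. (9); successive terms obey the ratio
    t_m / t_{m-1} = (2m-1)(2m+n-2) / (4 m^2 b^2), with b^2 = a^2 + 1,
    which avoids evaluating the huge double factorials directly."""
    b2 = a * a + 1.0
    total = term = 1.0
    for m in range(1, terms + 1):
        term *= (2 * m - 1) * (2 * m + n - 2) / (4 * m * m * b2)
        total += term
    return (math.pi / 2) * b2 ** (-n / 2) * total
```

For moderate $a$ (hence $b^2$ well above 1) the series converges geometrically and a few hundred terms reproduce the quadrature to machine precision.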
In this work, we only consider an armchair SWNT (5, 5) as the outside
tube because its radius is about 3.4 \AA, which is the closest to
that of the innermost tube observed experimentally [23]. The
average equilibrium distance between the chain atoms is set to
$a$ = 1.84 \AA, which means there are four carbon atoms in three
periods of the outside armchair nanotube. We should mention that at
present there are \textbf{no experimental data} on the distance
between carbon atoms in the nanowire; our choice is based on
the \textbf{commensurability} between the periods of the nanowire and
the outside SWNT. In fact, different values of $a$ could be chosen to study
their effect on the thermal conduction of the CNW, but this is beyond
the scope of the present investigation and is left for future
study.
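The "four atoms in three periods" relation can be checked with a one-line computation, assuming the standard armchair-tube translational period $T = \sqrt{3}\, a_{CC}$ with $a_{CC} = 1.42$ \AA\ (a textbook value, not taken from this paper):

```python
# Commensurability behind a = 1.84 A: four chain atoms span three
# translational periods of the armchair tube.  The period value
# T = sqrt(3) * a_CC with a_CC = 1.42 A is an assumed textbook number.
a_cc = 1.42
period = 3 ** 0.5 * a_cc      # ~2.46 A per armchair period
a_chain = 3 * period / 4      # chain spacing implied by 4 atoms / 3 periods
assert abs(a_chain - 1.84) < 0.01
```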
We determine the heat current in a temperature gradient by the
nonequilibrium molecular dynamics method. Two atoms at each end of
the CNW are coupled to heat baths at $T_L $ and $T_R $,
respectively, which are simulated by Nos\'{e}-Hoover
thermostats [38]. The equations of motion for these four atoms are
written as
\[
\ddot
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} }_i = - \zeta _L \dot
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} }_i +
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{f}} _i ,
\]
\begin{equation}
\label{eq11}
\ddot
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} }_j = - \zeta _R \dot
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} }_j +
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{f}} _j ,
\end{equation}
\noindent
where $f_i $ is the force applied on the $i$th carbon atom. The thermal
variables $\zeta _L $ and $\zeta _R $ evolve according to the equations
\begin{equation}
\label{eq12}
\dot {\zeta }_{L,R} = \frac{1}{Q}\left( {\sum\limits_i
{\frac{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{p}} _i^2 }{m_i } - gk_B T} } \right),
\end{equation}
\[
Q = gk_B T\tau ^2.
\]
The number of degrees of freedom for the particles in each thermostat is $g
= 6$, and $\tau $ is the relaxation time of the heat bath. The remaining
atoms follow the equations of motion
\begin{equation}
\label{eq13}
\ddot
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} }_i =
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{f}} _i ,
\quad
i = 3, \cdots ,N - 2,
\end{equation}
\noindent
and fixed boundary conditions are assumed for the zeroth and (N+1)th atoms
($\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _0 \equiv \left( {0,0,0} \right)$,
$\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{N + 1} \equiv \left( {0,0,(N + 1)a} \right))$.
We first use an eighth-order Runge-Kutta algorithm to solve the
ordinary differential equations, which provides more accurate
results than the usual fourth-order Runge-Kutta algorithm. The
time step is chosen between $h = 0.01$ and $0.05$ in units of
0.035267 ps, and typical runs comprise $10^7$ to $10^8$ MD steps.
For comparison, we also use the velocity Verlet algorithm [39] for
the same evolution.
The total heat flux can be expressed as follows:
\begin{equation}
\label{eq14}
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{J}} \left( t \right) = \frac{d}{dt}\sum\limits_i
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _i \left( t \right)\varepsilon _i \left( t \right)} ,
\end{equation}
\noindent
where
$\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _i \left( t \right)$ is the time-dependent coordinate of the wire atom
$i$, and $\varepsilon _i \left( t \right)$ is its total energy, which
contains both the kinetic and the potential energies. After
introducing the force on atom $i$ from atom $j$, i.e.,
$\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{F}} _{ij} = \frac{\partial \varepsilon _i }{\partial
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _j }$, the instantaneous local heat current per particle can be
expressed as:
\begin{equation}
\label{eq15}
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{J}} \left( t \right) = \sum\limits_i {\dot
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} }_i \varepsilon _i + \sum\limits_{i,j,i \ne j}
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{ij} \left(
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{F}} _{ij} \cdot \dot
{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} }_i } \right)} }
\end{equation}
\noindent
where
$\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _{ij} =
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _i -
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{r}} _j $ is the relative distance between atom $i$ and $j$.
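Equation (15) translates directly into code. The sketch below assumes a 1D chain and that a pairwise energy partition defining $F_{ij}$ (force on atom $i$ due to atom $j$) is available; both are assumptions for illustration:

```python
def local_heat_flux(x, v, eps, F):
    """Instantaneous heat current of Eq. (15) for a 1D chain:
    J = sum_i v_i * eps_i + sum_{i != j} (x_i - x_j) * F[i][j] * v_i,
    with x, v, eps the positions, velocities, and per-atom total
    energies, and F[i][j] the force on atom i due to atom j."""
    n = len(x)
    J = sum(v[i] * eps[i] for i in range(n))          # convective term
    J += sum((x[i] - x[j]) * F[i][j] * v[i]           # conduction term
             for i in range(n) for j in range(n) if i != j)
    return J
```

For a two-atom harmonic bond the value can be checked by hand against the definition.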
Because of the linear temperature distribution, the classical definition of
the heat conductivity can be used, leading to:
\begin{equation}
\label{eq16}
\kappa = JN / \left( {T_L - T_R } \right)
\end{equation}
When $N \to \infty $, the above expression corresponds to the
coefficient of heat conductivity of the chain at temperature $T
= \left( {T_L + T_R } \right) / 2$. In our MD runs, $T_L$ and
$T_R$ are kept at 0.03 and 0.025, which correspond to real
temperatures of 348~K and 290~K, respectively.
The alternative way to calculate the heat conductivity of the chain is based
on the Green-Kubo formula [40]
\begin{equation}
\label{eq17}
\kappa _s = \frac{1}{k_B T^2N}\int_0^t {\left\langle {J\left( \tau
\right)J\left( 0 \right)} \right\rangle } d\tau ,
\end{equation}
\noindent
where $J\left( t \right)$ is defined in Eq. 14.
In our calculations, we found that these two definitions always give almost
equal results (the difference between them never exceeds a few percent).
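A minimal sketch of the Green-Kubo estimate of Eq. (17), computing the autocorrelation from a recorded heat-flux time series; the trapezoidal integration and the lag cutoff are implementation choices, not taken from the paper:

```python
def green_kubo_kappa(J, dt, kB, T, N, max_lag=None):
    """Green-Kubo estimate (Eq. 17): integrate the heat-flux
    autocorrelation <J(tau) J(0)> by the trapezoidal rule."""
    if max_lag is None:
        max_lag = len(J) // 4
    acf = []
    for lag in range(max_lag + 1):
        pairs = [J[t] * J[t + lag] for t in range(len(J) - lag)]
        acf.append(sum(pairs) / len(pairs))
    integral = sum((acf[k] + acf[k + 1]) / 2 * dt for k in range(max_lag))
    return integral / (kB * T ** 2 * N)
```

In practice `J` would be the flux series from Eq. (14) sampled every `dt`; the cutoff `max_lag` must lie beyond the slow decay of the autocorrelation function.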
\noindent
\section{numerical result and discussions}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig1a.eps}
\label{fig1a}
\includegraphics[width=\columnwidth]{Fig1b.eps}
\label{fig1b} \caption{The temperature profiles on the chain at
$T_L = 0.03$ and $T_R = 0.025$, with $N$ = 64 (solid line) and 128
(dash-dot lines). The averages are carried over a time interval of
$10^4 \sim 10^5$. The distance between CNW atoms is 1.844~\AA.
a) Eighth-order Runge-Kutta algorithm. b) Velocity Verlet
algorithm.}
\end{figure}
In Fig. 1 we show the temperature distribution on the chain,
calculated by both the eighth-order Runge-Kutta algorithm [Fig. 1(a)]
and the velocity Verlet algorithm [Fig. 1(b)]. It is seen from
Figs. 1(a) and 1(b) that a linear temperature distribution is
obtained with both algorithms, although there are still some small
differences between them. The velocity Verlet algorithm is very
efficient and yields a very smooth temperature distribution;
however, the Nos\'{e}-Hoover boundary condition at the hotter left
end of the chain is not treated well unless the chain is long
enough, which may result from the sensitivity of this algorithm to
the thermostat boundary condition. We therefore mainly use the
Runge-Kutta algorithm in this work, and the velocity Verlet
algorithm is used only as a reference.
\begin{figure}
\includegraphics[width=\columnwidth]{Fig2a.eps}
\label{fig2a}
\includegraphics[width=\columnwidth]{Fig2b.eps}
\label{fig2b} \caption{Thermal conductivity of the CNW as a
function of its length. $a_0$ = 1.844 \AA. a) Linear-log plot. b)
Log-log plot. The solid lines in a) and b) are used for guiding
eyes.}
\end{figure}
The calculated thermal conductivities of CNWs with different
lengths are shown in Fig. 2 on two types of scales, i.e.,
linear-log and log-log. Here we should explain our definition of
the cross section of the CNW. The statistical radial distribution
of the motion of the wire atoms in the direction perpendicular to
the tube axis is calculated; the result is given in Fig. 3 and can
be fitted by $f\left( r \right) = A
\cdot r \cdot \exp \left[ { - \left( {\textstyle{r \over b}}
\right)^2} \right]$, with $r$ the radial distance to the tube
axis. The fitted parameter $b$ is equal to 0.151 \AA,
and so the cross-section area for the heat transport can be
expressed as $4\pi b^2$.
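The fit $f(r) = A\, r\, \exp [-(r/b)^2]$ can be carried out by linearizing $\log (f/r) = \log A - r^2/b^2$ and doing ordinary least squares; the paper does not state its fitting procedure, so this is just one reasonable sketch, checked on synthetic noiseless data generated with the quoted $b = 0.151$ \AA:

```python
import math

def fit_radial_b(r, f):
    """Fit f(r) = A * r * exp(-(r/b)^2) via least squares on the
    linearized form log(f/r) = log A - r^2 / b^2; returns b."""
    X = [ri ** 2 for ri in r]
    Y = [math.log(fi / ri) for ri, fi in zip(r, f)]
    n = len(X)
    mx, my = sum(X) / n, sum(Y) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(X, Y))
             / sum((x - mx) ** 2 for x in X))
    return math.sqrt(-1.0 / slope)

# Synthetic check with the paper's fitted value b = 0.151 A:
b_true, A = 0.151, 3.0
r = [0.01 * i for i in range(1, 40)]
f = [A * ri * math.exp(-(ri / b_true) ** 2) for ri in r]
b_fit = fit_radial_b(r, f)
cross_section = 4 * math.pi * b_fit ** 2    # the paper's 4*pi*b^2
```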
\begin{figure}
\includegraphics[width=\columnwidth]{Fig3.eps}
\caption{\label{fig3}Statistical radial distribution for the
motion of wire atoms perpendicular to the tube axis.}
\end{figure}
Thus, the thermal conductivity $\kappa $ is found to range from
142 W/mK to 1323 W/mK for chain lengths $L$ from 3 nm to 188 nm,
which is very high. For comparison, it is interesting to list the
$\kappa $ values of SWNTs with lengths in the same range: about
10$^{2}$ W/mK [20], or about 10$^{2}\sim $10$^{3}$ W/mK [21, 22].
So the thermal conductivity of the CNW is comparable to, or at
least not much smaller than, that of the SWNTs. For example, our
calculated thermal conductivity of the CNW with 512 atoms is
$1.2\times 10^{3}$ W/mK, while Zhang et al. [21] obtained nearly
$1.6\times 10^{3}$ W/mK for a (5, 5) SWNT having 384 layers (the
same length as the CNW), which is of the same order as our CNW
result.
Now we ask: which type of divergence behavior does our CNW system
follow, a power law or a logarithmic law? From Fig. 2 it is
clearly seen that as the system length is increased, $\kappa $
does NOT show a completely linear behavior on either the
linear-log or the log-log scale, making it difficult to decide
which type of divergence, power or logarithmic, is suitable for
the CNW. However, comparing Fig. 2a with 2b, we conclude that the
$\kappa$ of the CNW favors the logarithmic divergence over the
power law, at least for the middle part of the data from about 32
to 512 atoms. This demonstrates that the divergence behavior of
the CNW differs from that of the SWNT, which follows the power
law. Why logarithmic for the CNW? We believe it is precisely the
transverse motions of the carbon atoms of the CNW that relax its
thermal-conductance divergence and cause it to deviate from the 1D
power-law divergence.
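One way to make the log-versus-power comparison quantitative (not the paper's procedure, just an illustration) is to fit both forms by least squares and compare residuals in the original $\kappa$ units: $\kappa = c \log L$ is linear in $\log L$, while $\kappa = c L^{\beta}$ is linear on a log-log plot.

```python
import math

def linfit(X, Y):
    """Ordinary least squares; returns slope, intercept, residual SS."""
    n = len(X)
    mx, my = sum(X) / n, sum(Y) / n
    sl = (sum((x - mx) * (y - my) for x, y in zip(X, Y))
          / sum((x - mx) ** 2 for x in X))
    ic = my - sl * mx
    rss = sum((y - (sl * x + ic)) ** 2 for x, y in zip(X, Y))
    return sl, ic, rss

def classify_divergence(L, kappa):
    """Fit kappa ~ c*log(L) and kappa ~ c*L^beta, compare residuals
    in kappa units; returns ('log' or 'power', fitted beta)."""
    logL = [math.log(l) for l in L]
    _, _, rss_log = linfit(logL, kappa)
    beta, ic, _ = linfit(logL, [math.log(k) for k in kappa])
    rss_pow = sum((k - math.exp(ic) * l ** beta) ** 2
                  for l, k in zip(L, kappa))
    return ("log" if rss_log <= rss_pow else "power"), beta
```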
\begin{figure}
\includegraphics[width=\columnwidth]{Fig4.eps}
\caption{\label{fig4}The length dependence of thermal conductivity
of perfect 1D carbon chain model.}
\end{figure}
In order to see more clearly the influence of the transverse
motion of the CNW, we have also studied the thermal conductivity
of perfect 1D carbon chains connected by the Tersoff-Brenner
bond-order potential, in which transverse motions are not
permitted. The initial equilibrium distance between neighboring
atoms is set to 1.65 \AA, and the cross sections are determined as
follows. As is well known, the cutoff distance of the
Tersoff-Brenner bond-order potential is about 2.0 \AA, which can
be approximately considered the interaction range between two
nearby carbon chains. So the cross section of a perfect 1D carbon
chain can be roughly estimated as about 3.14 {\AA}$^{2}$. The
length dependence of the thermal conductivity is shown in Fig. 4,
from which it is seen that $\kappa$ diverges with chain length as
$\kappa \propto L^\beta $, with $\beta \approx 0.39\pm 0.02$, and
$\kappa $ ranges from about 212 W/mK to 511 W/mK, comparable with
the CNW result. Comparison between the thermal conductivities of
the 1D carbon chain and the quasi-1D CNW clearly shows that it is
indeed the transverse motions of the carbon atoms of the CNW that
cause its logarithmic thermal-conductance divergence. We should
emphasize that in real systems, the divergence of the thermal
conductivity will not be as simple as that found in theoretical
models such as the FPU model; it probably depends on the aspect
ratio of the system.
\begin{figure}
\includegraphics[width=\columnwidth]{Fig5.eps}
\caption{\label{fig5}Heat flux autocorrelation function of CNW
with 64 particles.}
\end{figure}
The heat-flux autocorrelation function for N=64 is also
calculated and shown in Fig. 5, from which it is seen that after a
very slow decay over about 9000 ps, the heat-flux autocorrelation
function decreases to a small value. A similar phenomenon has
been observed by Yao \textit{et al.} [22], and it can be understood
from the fact that after a long enough evolution, the final state
has no relationship with the initial state.
Finally, we check the validity of the ensemble average in this
low-dimensional system. First, we compare evolutions starting from
different initial conditions; the results are shown in Fig. 6.
Here, the low or high initial temperature means that the initial
temperature of every atom on the chain is set to the lower or
higher boundary temperature, respectively, and the linear initial
temperature means that the initial temperature of every atom
follows a linear temperature distribution between the two
boundaries. All the distributions are calculated after a time
interval of $\approx 2\times 10^5$.
\begin{figure*}
\includegraphics[width=0.9\columnwidth]{Fig6a.eps}
\label{fig6a}
\includegraphics[width=0.9\columnwidth]{Fig6b.eps}
\label{fig6b}
\includegraphics[width=0.9\columnwidth]{Fig6c.eps}
\label{fig6c}
\includegraphics[width=0.9\columnwidth]{Fig6d.eps}
\label{fig6d} \caption{The final temperature distributions of 512
particles evolved from different initial temperature profiles. a)
low initial temperature. b) linear initial temperature. c) high
initial temperature. d) average of a), b), and c).}
\end{figure*}
At first sight, the four panels in Fig. 6 seem to be the same; in
fact there are only small differences between them, which means
our numerical results are reasonable and independent of the
initial conditions. However, Fig. 6(d) is the smoothest, which
means that an average over the other three gives a more reliable
result, just like averaging over a longer time interval. Thus, we
can improve our calculation efficiency by the following method:
several different initial states are chosen, the system is evolved
from each of them, and after a period of evolution an average is
taken over the different evolutions. This method can be considered
a highly efficient parallel algorithm, by which the highest
acceleration coefficient can be gained.
\section{Conclusion}
In this paper, the heat conduction of a finite-length carbon chain
inserted into a (5, 5) SWNT has been studied using the
nonequilibrium molecular dynamics method, in which both
longitudinal and transverse motions of the chain atoms are
permitted. The interaction between the chain atoms and the
nanotube has been simulated by a continuous model for the
nanotube. It is found that the heat conduction of the CNW does not
obey Fourier's law, and its thermal conductivity $\kappa $
diverges logarithmically with the CNW length $L$ as $\kappa \sim
\log \left( L \right)$.
\begin{acknowledgments}
We acknowledge support from the Natural Science Foundation of
China under Grants No. 90103038 and No. A040108.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Petri nets can be used to model systems and processes.
Many properties have been defined for Petri nets
that describe desirable characteristics of the modeled system or process \cite{lopn1,structure-theory-ToPNoC-advanced-course2010,murata}.
Examples include
deadlock-freeness (the system is always able to perform an action),
liveness (actions cannot get disabled permanently),
boundedness (the number of states is finite),
safeness (objects cannot be at the same location at the same time),
soundness (a case can always terminate properly) \cite{soundness-FACS}, etc.
In this paper, we investigate another foundational property: \emph{lucency}.
A system is lucent if it does not have different reachable states that enable the same actions, i.e., the set of enabled actions
uniquely characterizes the state of the system \cite{lucent-PN2018}.
Think of an information system that has a user interface showing what the user can do. In this example, lucency implies that the offered actions fully determine the internal state
and the system will behave consistently from the user's viewpoint.
If the information system were not lucent, the user could encounter situations where the set of offered actions is the same, but the behavior is very different.
Another example is the worklist of a workflow management system that shows the workitems that can or should be executed.
Lucency implies that the state of a case can be derived based on the workitems offered for it.
In a Petri net setting, lucency can be defined as follows.
\emph{A marked Petri net is lucent if there are no two different reachable markings enabling the same set of transitions, i.e.,
markings are fully characterized by the transitions they enable.}
\begin{figure}[thb!]
\centering
\includegraphics[width=9.0cm]{./figures/fig-intro-fc}
\caption{$(N_1,M_1)$ is a free-choice net that is lucent, has a home cluster, but is not perpetual.}
\label{fig-intro-fc}
\end{figure}
Figure~\ref{fig-intro-fc} shows a marked Petri net that is lucent.
Each of the four reachable markings has a different set of enabled transitions.
Figure~\ref{fig-intro-nfc} shows a marked Petri net that is not lucent. Initially, one of the transitions $t1$ or $t2$ can occur, leading to two different states (the markings $[p2,p5]$ and $[p2,p6]$) that cannot be distinguished.
Only transition $t3$ is enabled, but the internal state matters.
$t1$ is always followed by $t4$ and $t2$ is always followed by $t5$.
\begin{figure}[thb!]
\centering
\includegraphics[width=9.0cm]{./figures/fig-intro-nfc}
\caption{$(N_2,M_2)$ is a non-free-choice net that is not lucent because the markings $[p2,p5]$ and $[p2,p6]$ enable the same set of transitions (just $t3$), thereby hiding the internal state.}
\label{fig-intro-nfc}
\end{figure}
Although we focus on Petri nets, lucency is a general notion that is independent of the modeling language used.
Even though lucency is an easy to define and foundational property,
it was not investigated until recently \cite{lucent-PN2018,lucent-translucent-fi-2019}. As described in \cite{lucent-translucent-fi-2019},
lucent process models are easier to discover from event data.
When the underlying process has states that are different,
but that enable the same set of activities,
then it is obviously not easy to learn these ``hidden'' states.
Commercial process mining systems mostly use the so-called Directly-Follows Graph (DFG) as a process model. Here the ``state'' is considered to be the last activity executed.
DFGs have problems dealing with concurrent processes and tend to produce imprecise and ``Spaghetti-like'' models because of that. More advanced process discovery techniques are able to discover concurrent process models \cite{process-mining-book-2016}, but need to ``guess'' the state of the process after each event. When using, for example, region theory, the state is often assumed to be the prefix of activities (or the multiset of activities already executed), leading to overfitting and incompleteness problems (one needs to see all possible prefixes).
For lucent process models, this problem is slightly easier because the state is fully determined by the set of enabled activities. See \cite{lucent-translucent-fi-2019} for more details about the discovery of lucent process models using translucent event logs.
Given the examples in Figures~\ref{fig-intro-fc} and \ref{fig-intro-nfc}, there seems to be a natural connection between the well-known free-choice property \cite{deselesparza} and lucency.
In a free-choice net, choice and synchronization can be separated.
However, as illustrated by Figure~\ref{fig-locally-safe-not-perpetual}, it is not enough to require that the net is free-choice. $(N_3,M_3)$ shown in Figure~\ref{fig-locally-safe-not-perpetual} is free-choice. It is actually a marked graph since there are no choices (i.e., places with multiple output arcs). The model in Figure~\ref{fig-locally-safe-not-perpetual} satisfies most of the (often considered desirable) properties defined for Petri nets.
$(N_3,M_3)$ is deadlock-free, live, bounded, safe, well-formed, free-choice, all markings are home markings, etc.
However, surprisingly $(N_3,M_3)$ is not lucent because the two reachable markings $[p1,p3,p6]$ and $[p1,p4,p6]$ enable the same set of transitions ($t1$ and $t4$).
This example shows that lucency does not coincide with any (or a combination) of the properties normally considered.
\begin{figure}[thb!]
\centering
\includegraphics[width=8.0cm]{./figures/fig-locally-safe-not-perpetual}
\caption{$(N_3,M_3)$ is a marked graph that is not lucent because the markings $[p1,p3,p6]$ and $[p1,p4,p6]$ enable the same set of transitions ($t1$ and $t4$), thereby hiding the internal state.}
\label{fig-locally-safe-not-perpetual}
\end{figure}
The notion of lucency was first introduced in \cite{lucent-PN2018}. The paper uses the example shown in Figure~\ref{fig-locally-safe-not-perpetual}
to demonstrate that even nets that are free-choice, live, and safe may not be lucent. Therefore, an additional requirement was added.
In \cite{lucent-PN2018}, the class of \emph{perpetual} nets is introduced in an attempt to relate well-known Petri net properties to lucency. Perpetual free-choice nets are free-choice Petri nets that are live and bounded and have a home cluster, i.e., there is a cluster such that from any reachable state,
there is a reachable state marking the places of this cluster.
Such a home cluster in a perpetual net
serves as a ``regeneration point'' of the process, e.g., to start a new process instance (case, job, cycle, etc.).
Any perpetual marked free-choice net is lucent.
However, there are many lucent systems that are
not perpetual because they are terminating or have an initialization phase (and are therefore not live).
This paper extends the work presented in \cite{lucent-PN2018}
which focused exclusively on perpetual marked free-choice nets.
For example, $(N_1,M_1)$ in Figure~\ref{fig-intro-fc}
is not perpetual. Actually, most of the work done on free-choice nets is limited to well-formed nets, i.e., nets that have a marking that is live and bounded. This is a structural property allowing for many interesting and advanced forms of analysis and reasoning \cite{bestfcn,structure-theory-ToPNoC-advanced-course2010,deselesparza}.
Such nets are automatically strongly-connected and do not have source and sink places to model
the start and the end of the process.
However, in many applications, such nets are not suitable.
For example, it is impossible to model systems and processes that can terminate.
In some cases, one can apply a trick and ``short-circuit''
the actual net to make it well-formed (see, for example, the analysis of soundness for workflow nets \cite{aaljcsc}).
However, this distracts from the essence of the property being analyzed.
This paper proves this point by showing that liveness is \emph{irrelevant} for ensuring lucency.
For example, the Petri net in Figure~\ref{fig-intro-fc} is lucent, but not well-formed.
In this paper, we show that all \emph{free-choice nets having a home cluster are lucent}. This significantly extends the
class of perpetual marked free-choice nets and also includes
non-well-formed nets such as $(N_1,M_1)$ in Figure~\ref{fig-intro-fc}.
To do this, we provide a direct proof
that is \emph{not} building on
the traditional stack of results for well-formed free-choice nets.
In \cite{lucent-PN2018}, we needed to use the
coverability theorem and the blocking marking theorem.
Moreover, the proof in \cite{lucent-PN2018} turned out to be incomplete, and the repaired proof is even more involved.
\emph{The approach used to prove the correctness of the main result provides a novel perspective
enabling new analysis techniques for free-choice nets that do not need to be well-formed.}
Novel concepts like ``expediting transitions'', ``rooted disentangled paths'', and ``conflict-pairs'' can be used to prove many other properties of free-choice nets having a home cluster.
This paper also relates the novel concepts and techniques presented in this paper to results based on
short-circuiting nets that are non-live and not strongly-connected (Section~\ref{sec:perpmarknets}).
This relation is used to show that we can check whether there is a home cluster in polynomial time for free-choice nets (whether they are live and strongly-connected or not).
The remainder is organized as follows.
Section~\ref{sec:rw} briefly discusses related work.
Section~\ref{sec:prelim} introduces Petri nets and some of the basic notations.
Lucent Petri nets are defined in Section~\ref{sec:lucency}.
In Section~\ref{sec:arelucent} we show that free-choice nets having a home cluster are indeed lucent.
To do this, we introduce new notions such as (rooted)
disentangled paths and conflict-pairs.
Section~\ref{sec:perpmarknets} relates the work to perpetual marked free-choice nets and our earlier paper \cite{lucent-PN2018}.
Section~\ref{sec:concl} concludes the paper.
\section{Related Work}
\label{sec:rw}
This paper extends the work presented in \cite{lucent-PN2018}.
There are no other papers on the analysis of lucency (which is surprising).
Hence, we can only point to more indirectly related work.
For more information about Petri nets, we refer to \cite{mbp-aal-stahl-2011,murata,reisig-book-2013,lopn1,lopn2}.
Within the field of Petri nets ``structure theory'' plays an important role \cite{bestfcn,structure-theory-ToPNoC-advanced-course2010,deselesparza}.
Free-choice nets are well studied \cite{bdefcn,structure-theory-ToPNoC-advanced-course2010,Esparza98TCS,thiavoss}.
The definitive book on the structure theory of free-choice nets is \cite{deselesparza}.
Also, see \cite{structure-theory-ToPNoC-advanced-course2010} for pointers to literature.
Therefore, it is surprising that the question of whether markings are uniquely identified by the set of enabled transitions (i.e., lucency)
has not been explored in the literature.
Lucency is unrelated to the so-called ``frozen tokens'' \cite{frozen-tokens-wehler2010}.
A Petri net has a frozen token if there exists an infinite occurrence sequence never using the token.
It is possible to construct live and bounded free-choice nets that are lucent while having frozen tokens.
Conversely, there are live and bounded free-choice nets that do not have frozen tokens and are not lucent.
The results presented in this paper are also related to the \emph{blocking theorem}
\cite{blocking-theorem-gaujala2003,blocking-theorem-wehler2010}.
Blocking markings are reachable markings that enable transitions from only a single cluster. Removing the cluster yields a dead marking.
The blocking theorem states that in a bounded and live free-choice net each cluster has a unique blocking marking.
Lucency is broader than blocking markings since multiple clusters and concurrent transitions are considered.
Actually, lucency can be seen as a generalization of unique blocking markings.
Moreover, \cite{blocking-theorem-gaujala2003,blocking-theorem-wehler2010} only consider live Petri nets.
In \cite{reduction-wvda-PN2021}, we propose a framework based on sequences of $t$-induced T-nets and
$p$-induced P-nets to convert free-choice nets into T-nets and P-nets while preserving properties such as
well-formedness, liveness, lucency, pc-safety, and perpetuality. The framework allows for systematic
proofs that ``peel off'' non-trivial parts while retaining the essence of the problem
(e.g., lifting properties from T-nets and P-nets to free-choice nets).
A major difference between the work reported in this paper and
the extensive body of knowledge just mentioned is that we do \emph{not} require the Petri net to be well-formed. Liveness assumes that the system is cyclic and actions are always still possible in the future. This does not align well with the standard ``case notion'' used
in Business Process Management (BPM), Workflow Management (WFM), and Process Mining (PM) \cite{aaljcsc,process-mining-book-2016,soundness-FACS}.
Process instances have a clear start and end.
For example, process discovery algorithms from the field of PM all generate process models close to the workflow nets.
The languages used for BPM and WFM, e.g., BPMN and UML Activity Diagrams, are very different from well-formed Petri nets and closer to workflow nets.
The work presented in this paper supports both views.
The process models may be well-formed or not.
Therefore, we significantly generalize over the work presented in \cite{lucent-PN2018} and also present results that could be used for other questions.
\section{Preliminaries}
\label{sec:prelim}
This section introduces concepts related to Petri nets and some basic notations.
\subsection{Multisets, Sequences, and Functions}
\label{sec:basics}
$\bag(A)$ is the set of all \emph{multisets} over some set $A$.
For some multiset $b\in \bag(A)$, $b(a)$ denotes the number of times element $a\in A$ appears in $b$.
Some examples: $b_1 = [~]$, $b_2 = [x,x,y]$, $b_3 = [x,y,z]$, $b_4 = [x,x,y,x,y,z]$, and $b_5 = [x^3,y^2,z]$
are multisets over $A=\{x,y,z\}$.
$b_1$ is the empty multiset, $b_2$ and $b_3$ both consist of three elements, and
$b_4 = b_5$, i.e., the ordering of elements is irrelevant and a more compact notation may be used for repeating elements.
The standard set operators can be extended to multisets, e.g., $x\in b_2$, $b_2 \bplus b_3 = b_4$, $b_5 \setminus b_2 = b_3$, $|b_5|=6$, etc.
$\{a \in b\}$ denotes the set with all elements $a$ for which $b(a) \geq 1$.
$b(X) = \sum_{a \in X}\ b(a)$ is the number of elements in $b$ belonging to set $X$, e.g., $b_5(\{x,y\}) = 3+2=5$.
$b \leq b'$ if $b(a) \leq b'(a)$ for all $a \in A$. Hence, $b_3 \leq b_4$ and $b_2 \not\leq b_3$ (because $b_2$ has two $x$'s).
$b < b'$ if $b \leq b'$ and $b \neq b'$.
Hence, $b_3 < b_4$ and $b_4 \not< b_5$ (because $b_4 = b_5$).
$\sigma = \langle a_1,a_2, \ldots, a_n\rangle \in X^*$ denotes a \emph{sequence} over $X$ of length $\card{\sigma} = n$.
$\sigma_i = a_i$ for $1 \leq i \leq \card{\sigma}$.
$\langle~\rangle$ is the empty sequence.
$\sigma_1 \cdot \sigma_2$ is the concatenation of two sequences, e.g., $\langle x,x,y\rangle \cdot \langle x,z\rangle = \langle x,x,y,x,z\rangle$.
The notation $[a \in \sigma]$ can be used to convert a sequence into a multiset. $[a \in \langle x,x,y,x,z\rangle] = [x^3,y^2,z]$.
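These multiset operations map directly onto Python's collections.Counter, which can serve as a quick mental model (the encoding below is illustrative only):

```python
from collections import Counter

# Multisets over {x, y, z} encoded as Counters.
b2 = Counter({'x': 2, 'y': 1})             # [x, x, y]
b3 = Counter(['x', 'y', 'z'])              # [x, y, z]
b4 = Counter(['x', 'x', 'y', 'x', 'y', 'z'])
b5 = Counter({'x': 3, 'y': 2, 'z': 1})     # [x^3, y^2, z]

assert b4 == b5                            # ordering is irrelevant
assert b2 + b3 == b4                       # multiset sum
assert b5 - b2 == b3                       # multiset difference
assert sum(b5.values()) == 6               # |b5| = 6
assert sum(b5[a] for a in {'x', 'y'}) == 5 # b5({x, y}) = 3 + 2
assert all(b3[a] <= b4[a] for a in b3)     # b3 <= b4
```

Note that Counter subtraction drops non-positive counts, which matches the multiset difference used here.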
\subsection{Petri Nets}
\label{sec:petrinets}
Figures~\ref{fig-intro-fc}, \ref{fig-intro-nfc}, and \ref{fig-locally-safe-not-perpetual} already showed examples of marked Petri nets. To reason about such processes and to formalize lucency, we now provide the basic formalizations \cite{mbp-aal-stahl-2011,murata,reisig-book-2013,lopn1,lopn2}.
\begin{definition}[Petri Net]\label{def:pn}
A \emph{Petri net} is a tuple $N=(P,T,F)$ with $P$ the non-empty set of places, $T$ the non-empty set of transitions such that
$P \cap T = \emptyset$, and $F\subseteq (P \times T) \cup (T \times P)$ the flow relation such that the graph $(P \cup T, F)$ is (weakly) connected.
\end{definition}
Figure~\ref{fig-intro-fc} has
four places ($p1,p2,p3,p4$),
five transitions ($t1,t2,t3,t4,t5$),
and ten arcs.
The initial marking contains just one token located in place $p1$.
\begin{definition}[Marking]\label{def:mrk}
Let $N=(P,T,F)$ be a Petri net.
A \emph{marking} $M$ is a multiset of places, i.e., $M \in \bag(P)$.
$(N,M)$ is a marked net.
\end{definition}
A Petri net $N=(P,T,F)$ defines a directed graph with nodes $P\cup T$ and edges $F$.
For any $x\in P\cup T$, $\pre{x} = \{y\mid (y,x)\in F\}$ denotes the set of input nodes and
$\post{x} = \{y\mid (x,y)\in F\}$ denotes the set of output nodes.
The notation can be generalized to sets: $\pre{X}=\{y\mid \exists_{x\in X} \ (y,x)\in F\}$ and $\post{X} = \{y\mid \exists_{x\in X} \ (x,y)\in F\}$
for any $X \subseteq P\cup T$.
A transition $t \in T$ is \emph{enabled} in marking $M$ of net $N$, denoted as $(N,M)[t\rangle$, if each of its input places ${\pre{t}}$ contains at least one token.
$\mi{en}(N,M) = \{ t \in T \mid (N,M)[t\rangle \}$ is the set of enabled transitions.
An enabled transition $t$ may \emph{fire}, i.e., one token is removed from each of the input places ${\pre{t}}$ and
one token is produced for each of the output places ${\post{t}}$.
Formally: $M' = (M\bmin {\pre{t}})\bplus {\post{t}}$ is the marking resulting from firing enabled transition $t$ in marking $M$ of Petri net $N$.
$(N,M)[t\rangle (N,M')$ denotes that $t$ is enabled in $M$ and firing $t$ results in marking $M'$.
Let $\sigma = \langle t_1,t_2, \ldots, t_n \rangle \in T^*$ be a sequence of transitions.
$(N,M)[\sigma\rangle (N,M')$ denotes that there is a sequence of markings $M_1, M_2, \ldots, M_{n+1}$ ($n \geq 0$)
such that $M_1 = M$, $M_{n+1} = M'$, and $(N,M_i)[t_i\rangle (N,M_{i+1})$ for $1 \leq i \leq n$.
A marking $M'$ is \emph{reachable} from $M$ if there exists a \emph{firing sequence} $\sigma$ such that $(N,M)[\sigma\rangle (N,M')$.
$R(N,M) = \{ M' \in \bag(P) \mid \exists_{\sigma \in T^*} \ (N,M)[\sigma\rangle (N,M') \}$ is the set of all reachable markings.
$(N,M)[\sigma\rangle$ denotes that the sequence $\sigma$ is enabled when starting in marking $M$ (without specifying the resulting marking).
For the marked net in Figure~\ref{fig-intro-nfc}: $R(N_2,M_2)= \{ [p1],[p2,p5],[p2,p6],[p3,p5],[p3,p6],[p4]\}$.
Note that $\mi{en}(N_2,[p2,p5]) = \mi{en}(N_2,[p2,p6]) = \{t3\}$.
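The firing rule and the construction of $R(N,M)$ can be sketched as a small graph exploration. The net below is a hypothetical reconstruction consistent with the description of $(N_2,M_2)$ (the exact arcs of the figure are an assumption): the choice between $t4$ and $t5$ is controlled by $p5$ and $p6$.

```python
from collections import Counter

# Hypothetical reconstruction of (N2, M2); the exact arcs are an assumption
# based on the textual description of the figure.
pre  = {'t1': {'p1'}, 't2': {'p1'}, 't3': {'p2'},
        't4': {'p3', 'p5'}, 't5': {'p3', 'p6'}}
post = {'t1': {'p2', 'p5'}, 't2': {'p2', 'p6'}, 't3': {'p3'},
        't4': {'p4'}, 't5': {'p4'}}

def enabled(M):
    """en(N, M): transitions whose input places all carry a token."""
    return {t for t in pre if all(M[p] >= 1 for p in pre[t])}

def fire(M, t):
    """M' = (M - pre(t)) + post(t) for an enabled transition t."""
    return (M - Counter(pre[t])) + Counter(post[t])

def reachable(M0):
    """R(N, M): exhaustively explore the reachability graph from M0."""
    seen = {frozenset(M0.items()): M0}
    todo = [M0]
    while todo:
        M = todo.pop()
        for t in enabled(M):
            Mp = fire(M, t)
            key = frozenset(Mp.items())
            if key not in seen:
                seen[key] = Mp
                todo.append(Mp)
    return list(seen.values())
```

Exploring from $[p1]$ yields six markings, matching $R(N_2,M_2)$ above, and the markings $[p2,p5]$ and $[p2,p6]$ both enable exactly $t3$.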
\subsection{Liveness, Boundedness, and Home Markings}
\label{sec:livenessetc}
Next, we define some of the standard behavioral properties for Petri nets: liveness, boundedness, and the presence of home markings.
\begin{definition}[Live, Bounded, Safe, Dead, Deadlock-free, Well-Formed]\label{def:lb}
A marked net $(N,M)$ is \emph{live} if for every reachable marking $M' \in R(N,M)$ and for every transition $t\in T$ there exists a marking $M'' \in R(N,M')$ that enables $t$.
A marked net $(N,M)$ is $k$-bounded if for every reachable marking $M' \in R(N,M)$ and every $p \in P$: $M'(p) \leq k$.
A marked net $(N,M)$ is \emph{bounded} if there exists a $k$ such that $(N,M)$ is $k$-bounded.
A 1-bounded marked net is called \emph{safe}.
A place $p\in P$ is \emph{dead} in $(N,M)$ when it can never be marked (no reachable marking marks $p$).
A transition $t\in T$ is \emph{dead} in $(N,M)$ when it can never be enabled (no reachable marking enables $t$).
A marked net $(N,M)$ is \emph{deadlock-free} if each reachable marking enables at least one transition.
A Petri net $N$ is \emph{structurally bounded} if $(N,M)$ is bounded for any marking $M$.
A Petri net $N$ is \emph{structurally live} if there exists a marking $M$ such that $(N,M)$ is live.
A Petri net $N$ is \emph{well-formed} if there exists a marking $M$ such that $(N,M)$ is live and bounded.
\end{definition}
\begin{definition}[Home Marking]\label{def:hm}
Let $(N,M)$ be a marked net.
A marking $M_H$ is a \emph{home marking} if for every reachable marking $M' \in R(N,M)$: $M_H \in R(N,M')$.
\end{definition}
Note that home markings do not imply liveness or boundedness, i.e., a Petri net may be non-well-formed and still have home markings. $(N_1,M_1)$ in Figure~\ref{fig-intro-fc} is not live and has one home marking $[p4]$. $(N_3,M_3)$ in Figure~\ref{fig-locally-safe-not-perpetual} is live and all of its reachable markings are home markings.
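A home marking can be checked naively by testing that it is reachable from every reachable marking. The two-place cycle net below is a hypothetical toy example used only to exercise the check; both of its markings are home markings.

```python
from collections import Counter

# A minimal cyclic net (hypothetical, for illustration only):
# t1 moves the token from p1 to p2, t2 moves it back.
pre  = {'t1': {'p1'}, 't2': {'p2'}}
post = {'t1': {'p2'}, 't2': {'p1'}}

def enabled(M):
    return {t for t in pre if all(M[p] >= 1 for p in pre[t])}

def fire(M, t):
    return (M - Counter(pre[t])) + Counter(post[t])

def reachable(M0):
    seen = {frozenset(M0.items()): M0}
    todo = [M0]
    while todo:
        M = todo.pop()
        for t in enabled(M):
            Mp = fire(M, t)
            k = frozenset(Mp.items())
            if k not in seen:
                seen[k] = Mp
                todo.append(Mp)
    return list(seen.values())

def is_home_marking(M0, MH):
    """MH is a home marking iff it is reachable from every reachable marking."""
    kH = frozenset(MH.items())
    return all(any(frozenset(Mpp.items()) == kH for Mpp in reachable(Mp))
               for Mp in reachable(M0))
```

This brute-force check is exponential in general and only feasible for small bounded nets.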
\subsection{Clusters}
\label{sec:clusters}
Clusters play a major role in this paper.
A cluster is a maximal set of connected nodes, only considering
arcs connecting places to transitions.
\begin{definition}[Cluster]\label{def:clust}
Let $N=(P,T,F)$ be a Petri net and $x \in P \cup T$.
The \emph{cluster} of node $x$, denoted $\cluster{x}$ is the smallest set such that
(1) $x \in \cluster{x}$,
(2) if $p \in \cluster{x} \cap P$, then ${\post{p}} \subseteq \cluster{x}$, and
(3) if $t \in \cluster{x} \cap T$, then ${\pre{t}} \subseteq \cluster{x}$.
$\cluster{N}= \{ \cluster{x} \mid x \in P \cup T\}$ is the set of clusters of $N$.
\end{definition}
Note that $\cluster{N}$ partitions the nodes in $N$.
The Petri net in Figure~\ref{fig-intro-fc} has four clusters:
$C_1 = \{p1,t1,t2\}$,
$C_2 = \{p2,t3\}$,
$C_3 = \{p3,t4,t5\}$, and
$C_4 = \{p4\}$.
The Petri net in Figure~\ref{fig-locally-safe-not-perpetual}
also has four clusters:
$C_1 = \{p1,t1\}$,
$C_2 = \{p2,p3,t2\}$,
$C_3 = \{p4,p5,t3\}$, and
$C_4 = \{p6,t4\}$.
\begin{definition}[Cluster Notations]\label{def:clustnot}
Let $N=(P,T,F)$ be a Petri net and $C \in \cluster{N}$ a cluster.
$\mi{Pl}(C) = P \cap C$ are the places in $C$, $\mi{Tr}(C) = T \cap C$ are the transitions in $C$, and $\mi{Mrk}(C) = [p \in \mi{Pl}(C)]$ is the smallest marking fully enabling the cluster.
\end{definition}
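Since only arcs from places to transitions determine clusters (rules (2) and (3) in the definition use $\post{p}$ for places and $\pre{t}$ for transitions), the partition can be computed from the input sets of the transitions alone. The sketch below uses the input arcs of Figure~\ref{fig-intro-fc} implied by the clusters listed above.

```python
# Input arcs of Figure fig-intro-fc, as implied by the text;
# only place-to-transition arcs matter for clusters.
pre = {'t1': {'p1'}, 't2': {'p1'}, 't3': {'p2'},
       't4': {'p3'}, 't5': {'p3'}}
places = {'p1', 'p2', 'p3', 'p4'}

def cluster_of(x):
    """Smallest set containing x, closed under post(p) for places
    and pre(t) for transitions (Definition of cluster)."""
    C = {x}
    changed = True
    while changed:
        changed = False
        for t, ps in pre.items():
            # if t is in the cluster, or one of its input places is,
            # then t and all its input places belong to the cluster
            if (C & ps or t in C) and not C >= ps | {t}:
                C |= ps | {t}
                changed = True
    return frozenset(C)

clusters = {cluster_of(x) for x in places | set(pre)}
```

The result is the partition $\{C_1, C_2, C_3, C_4\}$ given above.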
\subsection{Structural Properties}
\label{sec:structprop}
As defined before,
we require Petri nets to be \emph{weakly} connected.
$N$ is \emph{strongly} connected if the graph $(P \cup T,F)$ is strongly-connected, i.e., for any two nodes $x$ and $y$ there is a path leading from $x$ to $y$.
Various subclasses of Petri nets have been defined
based on the network structures they allow.
State machines, also called P-nets, do not allow for transitions with multiple input or output places.
Marked graphs, also called T-nets, do not allow for places with multiple input or output transitions.
In this paper, we focus on \emph{free-choice} nets that are \emph{proper}.
\begin{definition}[Free-choice Net]\label{def:fcne}
Let $N=(P,T,F)$ be a Petri net.
$N$ is a \emph{free-choice} net if for any $t_1,t_2 \in T$: ${\pre{t_1}} = {\pre{t_2}}$ or ${\pre{t_1}} \cap {\pre{t_2}} = \emptyset$.
\end{definition}
In free-choice nets, choice and synchronization can be separated.
$(N_2,M_2)$ in Figure~\ref{fig-intro-nfc} is not a free-choice net, because the choice between $t4$ and $t5$ is controlled by the places $p5$ and $p6$.
\begin{definition}[Proper Petri Net]\label{def:proppn}
A Petri net $N=(P,T,F)$ is \emph{proper} if all transitions have input and output places, i.e., for all $t \in T$: $\pre{t} \neq \emptyset$ and $\post{t} \neq \emptyset$.
\end{definition}
Well-formed Petri nets are strongly-connected and therefore also proper. Workflow nets are not strongly-connected, but by definition proper. For the main results in this paper,
we consider proper Petri nets
instead of enforcing stronger structural or behavioral requirements such as strong connectedness, liveness, and boundedness.
\section{Lucent Petri Nets}
\label{sec:lucency}
This paper focuses on \emph{lucent} process models whose states are uniquely identified based on the activities they enable.
Lucency is a generic property that can be formulated in the context of Petri nets.
Given a marked Petri net, we would like to know whether each reachable marking has a unique ``footprint'' in terms of the transitions it enables.
If this is the case, then the Petri net is \emph{lucent}.
\begin{definition}[Lucent Petri nets]\label{def:lucent}
Let $(N,M)$ be a marked Petri net. $(N,M)$ is \emph{lucent} if and only if for any $M_1,M_2 \in R(N,M)$: $\mi{en}(N,M_1)=\mi{en}(N,M_2)$ implies $M_1 = M_2$.
\end{definition}
$(N_1,M_1)$ depicted in Figure~\ref{fig-intro-fc} is lucent.
$(N_2,M_2)$ and $(N_3,M_3)$ in Figures~\ref{fig-intro-nfc} and \ref{fig-locally-safe-not-perpetual} are not lucent.
$(N_4,M_4)$ depicted in Figure~\ref{fig-fc-nonlucid} is also not lucent. Both $[p3,p5,p7]$ and $[p3,p7,p8]$ are reachable from the initial marking and enable the same set of transitions.
\begin{figure}[thb!]
\centering
\includegraphics[width=10.0cm]{./figures/fig-fc-nonlucid}
\caption{$(N_4,M_4)$ is a marked free-choice Petri net that is not lucent: $[p3,p5,p7]$ and $[p3,p7,p8]$ enable $t1$ and $t4$.}
\label{fig-fc-nonlucid}
\end{figure}
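Lucency can be checked directly on the reachability graph by comparing enabling ``footprints''. The sketch below reuses a hypothetical reconstruction of $(N_2,M_2)$ (the exact arcs are an assumption based on the textual description) and confirms that it is not lucent.

```python
from collections import Counter

# Hypothetical reconstruction of (N2, M2); exact arcs are an assumption.
pre  = {'t1': {'p1'}, 't2': {'p1'}, 't3': {'p2'},
        't4': {'p3', 'p5'}, 't5': {'p3', 'p6'}}
post = {'t1': {'p2', 'p5'}, 't2': {'p2', 'p6'}, 't3': {'p3'},
        't4': {'p4'}, 't5': {'p4'}}

def enabled(M):
    return frozenset(t for t in pre if all(M[p] >= 1 for p in pre[t]))

def fire(M, t):
    return (M - Counter(pre[t])) + Counter(post[t])

def reachable(M0):
    seen = {frozenset(M0.items()): M0}
    todo = [M0]
    while todo:
        M = todo.pop()
        for t in enabled(M):
            Mp = fire(M, t)
            k = frozenset(Mp.items())
            if k not in seen:
                seen[k] = Mp
                todo.append(Mp)
    return list(seen.values())

def is_lucent(M0):
    """Lucent iff distinct reachable markings have distinct enabling sets."""
    footprints = {}
    for M in reachable(M0):
        fp = enabled(M)
        if fp in footprints and footprints[fp] != M:
            return False   # two different markings share a footprint
        footprints[fp] = M
    return True
```

Here the markings $[p2,p5]$ and $[p2,p6]$ share the footprint $\{t3\}$, so the check fails.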
Unbounded marked Petri nets are never lucent: infinitely many reachable markings cannot all have distinct sets of enabled transitions.
However, the examples illustrate that the reverse does not hold, i.e., bounded nets may still be non-lucent.
\begin{proposition}[Boundedness of Lucent Petri Nets]
Any lucent marked Petri net is bounded.
\end{proposition}
\begin{proof}
A marked net with $n= \card{T}$ transitions cannot have more than $2^n$ possible sets of enabled transitions.
Lucency implies that each set of enabled transitions corresponds to a unique marking.
Hence, there cannot be more than $2^n$ reachable markings (implying boundedness).
\end{proof}
We would like to find subclasses of nets that are guaranteed to be lucent based on their structure.
At first, one is tempted to think that bounded free-choice nets are lucent.
However, as Figure~\ref{fig-locally-safe-not-perpetual} and Figure~\ref{fig-fc-nonlucid} show, this is not sufficient.
Lucency is related to the notion of transparency, i.e.,
all tokens are in the input places of enabled transitions and therefore not ``hidden''.
\begin{definition}[Transparent Marking]\label{def:trabsp}
Let $(N,M)$ be a marked Petri net. Marking $M$ is a \emph{transparent} marking of $N$ if and only if $M =
[ p \in P \mid \exists_{t \in \mi{en}(N,M)} \ p \in \pre{t}]$.
$(N,M)$ is fully transparent if and only if each reachable marking is transparent.
\end{definition}
Full transparency implies lucency, but the reverse does not hold.
Actually, full transparency does not allow for synchronization and concurrency and is therefore very limiting.
\begin{proposition}
Let $(N,M)$ be a marked Petri net.
If $(N,M)$ is fully transparent, then $(N,M)$ is lucent.
The reverse does not hold.
\end{proposition}
Figure~\ref{fig-home-lucent}
shows a marked free-choice Petri net that is lucent but not fully transparent.
Consider, for example, the reachable marking $[p4,p7]$ enabling $t5$. There is only one reachable marking which enables only $t5$.
However, marking $[p4,p7]$ is not transparent since the token in $p7$ is ``hidden''.
\begin{figure}[thb!]
\centering
\includegraphics[width=9.0cm]{./figures/fig-home-lucent}
\caption{$(N_5,M_5)$ is a marked free-choice Petri net that is lucent but not fully transparent.}
\label{fig-home-lucent}
\end{figure}
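Transparency of a single marking is straightforward to check: collect one token for every input place of an enabled transition and compare the result with the marking itself. The input sets below are a hypothetical reconstruction of $N_2$ (an assumption about the figure), in which the marking $[p2,p5]$ is not transparent because the token in $p5$ is ``hidden''.

```python
from collections import Counter

# Hypothetical input sets of N2 (assumed from the textual description).
pre = {'t1': {'p1'}, 't2': {'p1'}, 't3': {'p2'},
       't4': {'p3', 'p5'}, 't5': {'p3', 'p6'}}

def enabled(M):
    return {t for t in pre if all(M[p] >= 1 for p in pre[t])}

def is_transparent(M):
    """M is transparent iff M equals the marking with exactly one token
    in each input place of an enabled transition (Definition above)."""
    visible = Counter({p for t in enabled(M) for p in pre[t]})
    return M == visible
```

For example, $[p1]$ is transparent (both enabled transitions consume from $p1$), whereas $[p2,p5]$ is not (only $t3$ is enabled and it does not consume from $p5$).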
\section{Free-Choice Nets With Home Clusters Are Lucent}
\label{sec:arelucent}
In \cite{lucent-PN2018}, we defined the class of perpetual nets in an attempt to identify a large class of lucent Petri nets.
Here, we aim to substantially extend the class of Petri nets
that is guaranteed to be lucent.
As in \cite{lucent-PN2018}, we use the notion of \emph{home clusters},
but drop the liveness and boundedness requirements.
Actually, none of the Petri nets shown in this paper is perpetual, including the two lucent nets $(N_1,M_1)$ and $(N_5,M_5)$.
\begin{definition}[Home Clusters]\label{def:homeclust}
Let $(N,M)$ be marked Petri net. $C$ is a \emph{home cluster} of $(N,M)$ if and only if $C \in \cluster{N}$ (i.e., $C$ is a cluster) and
$\mi{Mrk}(C)$ is a home marking of $(N,M)$. If such a $C$ exists, we say that $(N,M)$ has a home cluster.
\end{definition}
Note that a home marking may be dead, but then it should be unique, i.e., a clear \emph{termination point}.
If the initial marking is a home marking, it can be seen as a \emph{regeneration point}.
As mentioned before,
the key results in this paper apply only to proper Petri nets where all
transitions have input and output places.
It is always possible to add a self-loop place to ensure this (without changing the behavior).
Moreover, a Petri net having a transition without any input places
and at least one output place is unbounded for any initial marking
and therefore non-lucent.
Transitions without output places make most sense in unbounded nets (which are non-lucent).
Adding a self-loop place to make the Petri net proper typically results in a model that has no home cluster. However, such models tend to be unbounded and therefore non-lucent anyway.
Note that, in the literature, most authors consider well-formed Petri nets. These are strongly-connected and therefore also proper. Here, we consider a substantially larger class of models.
For example,
the marked nets $(N_1,M_1)$, $(N_2,M_2)$,
$(N_4,M_4)$, and $(N_5,M_5)$ are not well-formed, but proper.
Actually, $(N_3,M_3)$ in Figure~\ref{fig-locally-safe-not-perpetual} is the only well-formed net in this paper (and therefore also proper).
This paper shows that we can drop the well-formedness requirement
and still ensure lucency.
\subsection{Properties of Home Clusters}
\label{sec:prophc}
We first explore some of the essential properties of home clusters
in marked proper free-choice nets.
First, we show that there are two types of home clusters: (1) just an isolated end place or (2) a set of places sharing one or more output transitions.
\begin{proposition}[Two Types Of Clusters]\label{prop:cluschar}
Let $(N,M)$ be a marked proper Petri net having a home cluster $C$.
If there is a reachable marking $M' \in R(N,M)$ that is dead,
then $M' = \mi{Mrk}(C)$,
$\card{\mi{Pl}(C)} = 1$, and $\mi{Tr}(C) = \emptyset$.
If $(N,M)$ is deadlock-free, then $\mi{Tr}(C) \neq \emptyset$.
\end{proposition}
\begin{proof}
From any reachable marking, one can reach $\mi{Mrk}(C)$.
Hence, if there is a dead marking, then $\mi{Mrk}(C)$ can be the only reachable dead marking.
If not, $\mi{Mrk}(C)$ would not be reachable from this alternative dead marking.
If all places in $\mi{Pl}(C)$ are marked, all transitions in $\mi{Tr}(C)$ must be enabled. Hence, $\mi{Tr}(C) = \emptyset$ (otherwise $\mi{Mrk}(C)$ would not be dead).
If $\mi{Tr}(C) = \emptyset$, then the cluster must be a singleton, i.e., $C = \{p_C\}$ (transitions are needed to enlarge the cluster, see Definition~\ref{def:clust}).
If $(N,M)$ is deadlock-free, $\mi{Mrk}(C)$ can be reached and should not be dead. Hence, $\mi{Tr}(C) \neq \emptyset$.
\end{proof}
Most of the classical results, including those for home markings, only apply to \emph{well-formed} free-choice nets,
e.g.,
S-Coverability Theorem,
T-Coverability Theorem,
Rank Theorem,
Duality Theorem,
Completeness of Reduction Rules Theorem,
Existence of Home Markings Theorem,
Blocking Marking Theorem,
and Home Marking Theorem
\cite{bestfcn,structure-theory-ToPNoC-advanced-course2010,deselesparza}.
We focus on proper free-choice nets
and do \emph{not} require liveness to ensure boundedness, as is shown next.
\begin{definition}[Expedite a Transition in a Transition Sequence]\label{def:expedite}
Let $N=(P,T,F)$ be a free-choice net, $M \in \bag(P)$,
$\sigma = \langle t_1,t_2, \ldots ,t_i, \ldots ,t_j,\allowbreak \ldots ,t_n \rangle \in T^*$,
$(N,M)[\sigma\rangle$ (i.e., the sequence $\sigma$ is enabled),
and $1 \leq i < j \leq n$.
$\mi{exp}_{(N,M)}(\sigma,i,j) = \mi{true}$ if and only if
\begin{itemize}[noitemsep,topsep=2pt]
\item $(N,M)\allowbreak[\langle t_1,t_2, \ldots ,t_{i-1},t_j\rangle \rangle$ (i.e., it is possible to execute the prefix involving the first $i-1$ transitions followed by $t_j$), and
\item $\cluster{t_k} \neq \cluster{t_j}$ for all $k \in \{i, \ldots, j-1\}$ (i.e., $t_j$ is the first transition of the respective cluster after $t_{i-1}$).
\end{itemize}
$\mi{exp}_{(N,M)}(\sigma,i,j)$ denotes that the $j$-th transition can be \emph{expedited} by moving $t_j$ to position $i$.
$\sigma_{i \leftarrow j} = \langle t_1,t_2, \ldots ,t_{i-1},t_j,t_i, \ldots ,t_{j-1},t_{j+1} \ldots ,t_n \rangle$ is the corresponding transition sequence where the
$j$-th transition is moved to the $i$-th position.
$\mi{Exp}_{(N,M)}(\sigma) \subseteq T^*$ is the subset of all transition sequences that can be obtained by repeatedly expediting transitions, i.e.,
$\mi{Exp}_{(N,M)}(\sigma)$ is the smallest set such that:
\begin{itemize}[noitemsep,topsep=2pt]
\item $\sigma \in \mi{Exp}_{(N,M)}(\sigma)$ and
\item $\sigma'_{i \leftarrow j} \in \mi{Exp}_{(N,M)}(\sigma)$ if $\sigma' \in \mi{Exp}_{(N,M)}(\sigma)$, $1 \leq i < j \leq \card{\sigma'}$, and $\mi{exp}_{(N,M)}(\sigma',i,j)$.
\end{itemize}
\end{definition}
Any $\sigma' \in \mi{Exp}_{(N,M)}(\sigma)$ is a permutation of $\sigma$ and, as we will show next, is enabled if $\sigma$ is enabled.
$\sigma_{i \leftarrow j}$ moves the $j$-th transition to the $i$-th position, where it is enabled.
Consider $(N_5,M_5)$ in Figure~\ref{fig-home-lucent} and $\sigma =
\langle t_2,t_5,t_6,t_8,t_8 \rangle$. Then $\sigma_{2 \leftarrow 3} = \langle t_2,t_6,t_5,t_8,t_8 \rangle$ and $\sigma_{2 \leftarrow 4} = \sigma_{2 \leftarrow 5} = \langle t_2,t_8,t_5,t_6,t_8 \rangle$.
$\sigma_{2 \leftarrow 3} \in \mi{Exp}_{(N_5,M_5)}(\sigma)$, because $\langle t_2,t_6\rangle$ is possible and $t_6$ is the first transition of the respective cluster.
$\sigma_{2 \leftarrow 4} \not\in \mi{Exp}_{(N_5,M_5)}(\sigma)$, because $\langle t_2,t_8\rangle$ is not possible ($t_8$ is not enabled yet).
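The reordering $\sigma_{i \leftarrow j}$ itself is a simple list operation; note that this helper only performs the move and does not check the enabledness conditions of $\mi{exp}_{(N,M)}$.

```python
def expedite(sigma, i, j):
    """sigma_{i<-j}: move the j-th transition to position i (1-based),
    shifting t_i, ..., t_{j-1} one position to the right.
    The enabledness conditions of exp are NOT checked here."""
    assert 1 <= i < j <= len(sigma)
    s = list(sigma)          # work on a copy
    t = s.pop(j - 1)         # remove the j-th transition
    s.insert(i - 1, t)       # re-insert it at the i-th position
    return s
```

Applied to the example above, \texttt{expedite} reproduces $\sigma_{2 \leftarrow 3}$ and $\sigma_{2 \leftarrow 4}$, and the result is always a permutation of $\sigma$.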
Next, we show that expediting transitions is possible and leads to the same marking.
\begin{lemma}[Expediting Transitions Is Safe]\label{lemma:reord}
Let $N=(P,T,F)$ be a free-choice net,
$M,M' \in \bag(P)$, and
$\sigma \in T^*$ such that
$(N,M)[\sigma \rangle (N,M')$.
For any $\sigma' \in \mi{Exp}_{(N,M)}(\sigma)$:
$(N,M)[\sigma' \rangle (N,M')$.
\end{lemma}
\begin{proof}
Assume $N=(P,T,F)$ is a free-choice net and $M$, $M'$, and $\sigma$ are such that $(N,M)[\sigma \rangle (N,M')$.
$\mi{Exp}_{(N,M)}(\sigma)$ is defined as the smallest set such that (1) $\sigma \in \mi{Exp}_{(N,M)}(\sigma)$ and (2)
$\sigma'_{i \leftarrow j} \in \mi{Exp}_{(N,M)}(\sigma)$ if $\sigma' \in \mi{Exp}_{(N,M)}(\sigma)$, $1 \leq i < j \leq \card{\sigma'}$, and $\mi{exp}_{(N,M)}(\sigma',i,j)$.
We provide a proof using induction based on the iterative construction of $\mi{Exp}_{(N,M)}(\sigma)$.
(1) The base step $\sigma'= \sigma$ obviously holds, because $\sigma \in \mi{Exp}_{(N,M)}(\sigma)$ and $(N,M)[\sigma \rangle (N,M')$.
(2) For the inductive step, it suffices to prove that
$(N,M)[\sigma'_{i \leftarrow j} \rangle (N,M')$ assuming that $\sigma' = \langle t_1, \ldots ,t_n \rangle \in \mi{Exp}_{(N,M)}(\sigma)$, $1 \leq i < j \leq n$, $\mi{exp}_{(N,M)}(\sigma',i,j)$, and $(N,M)[\sigma' \rangle (N,M')$.
We need to prove that $\sigma'_{i \leftarrow j} = \langle t_1,t_2, \ldots ,t_{i-1},\allowbreak t_j,\allowbreak t_i, \ldots ,t_{j-1},t_{j+1} \ldots ,t_n \rangle$ is indeed enabled
and leads to the same final marking, i.e., $(N,M)[\sigma'_{i \leftarrow j} \rangle (N,M')$.
Let $M''$ be the marking after firing the first $i-1$ transitions, i.e., $(N,M)\allowbreak[\langle t_1,t_2, \ldots ,t_{i-1}\rangle \rangle (N,M'')$.
$t_j \in \mi{en}(N,M'')$ because $\mi{exp}_{(N,M)}(\sigma',i,j)$ (see first condition).
The transitions $t_i, \ldots ,t_{j-1}$ do not consume any tokens from $\cluster{t_j}$ (use the second condition in $\mi{exp}_{(N,M)}(\sigma',i,j)$ stating that $\cluster{t_k} \neq \cluster{t_j}$ for all $k \in \{i, \ldots, j-1\}$)
and therefore can still be executed ($t_j$ only consumed tokens from places in $\cluster{t_j}$).
The marking reached after $\langle t_1,t_2, \ldots ,t_{i-1},\allowbreak t_j,\allowbreak t_i, \ldots ,t_{j-1} \rangle$ is the same as
reached after prefix $\langle t_1,t_2, \ldots ,t_{i-1},\allowbreak t_i, \ldots ,t_{j-1},t_j \rangle$.
Moreover, the same subsequence of transitions $\langle t_{j+1} \ldots ,t_n \rangle$ remains.
Hence, $(N,M)[\sigma'_{i \leftarrow j} \rangle (N,M')$ thus completing the proof.
\end{proof}
Note that for any $\sigma' \in \mi{Exp}_{(N,M)}(\sigma)$: $[t \in \sigma'] = [t \in \sigma]$ (i.e., $\sigma'$ and $\sigma$ are permutations of the same multiset) and the order per cluster does not change, i.e., transitions can only ``overtake'' transitions of other clusters.
Lemma~\ref{lemma:reord} shows that expediting transitions does not jeopardize the ability to execute the remainder of an enabled firing sequence.
This can be used to show that it is impossible to have a marking dominating the home marking (i.e., one cannot reach a strictly larger marking).
\begin{theorem}[No Dominating Markings in Free-Choice Nets With a Home Cluster]\label{theo:dommark}
Let $(N,M)$ be a marked proper free-choice net having a home cluster $C$.
For all $M' \in R(N,M)$: if $M' \geq \mi{Mrk}(C)$,
then $M' = \mi{Mrk}(C)$.
\end{theorem}
\begin{proof}
Consider a marked proper free-choice net $(N,M)$ having a home cluster $C$.
Assume there exists a reachable marking $M' \in R(N,M)$ such that $M' > \mi{Mrk}(C)$.
We show that this is \emph{impossible}, thereby proving the theorem.
First, we assume that $(N,M)$ has a deadlock and show that this leads to a contradiction.
Using Proposition~\ref{prop:cluschar},
we know that $\mi{Mrk}(C)$ is the only reachable dead marking
and $\mi{Tr}(C) = \emptyset$.
However, $M' > \mi{Mrk}(C)$ is reachable and the token in $C$ cannot be removed anymore if $\mi{Tr}(C) = \emptyset$.
Since the net is proper, any marking reachable from $M'$ will have at least one extra token next to the token in $\mi{Mrk}(C)$.
Therefore, $\mi{Mrk}(C)$ cannot be reached, contradicting that $C$ is a home cluster. Hence, $(N,M)$ must be deadlock-free.
Since $(N,M)$ is deadlock-free, $\mi{Tr}(C) \neq \emptyset$ (use Proposition~\ref{prop:cluschar}), i.e., the home cluster has at least one transition.
All transitions in $\mi{Tr}(C)$ are live, because we can always reach the home marking $\mi{Mrk}(C)$ again and again.
Without loss of generality, we can assume that $M' \in R(N,M)$ is such that
the distance to the home marking $\mi{Mrk}(C)$ is \emph{minimal}.
Let $\sigma_s$ be a \emph{shortest} path from $M'$ to $\mi{Mrk}(C)$ having length $ \card{\sigma_s}$.
In other words, $(N,M')[\sigma_s\rangle (N,\mi{Mrk}(C))$, and for all $M_{\mi{alt}} \in R(N,M)$ and $\sigma_{\mi{alt}} \in T^*$ such that $M_{\mi{alt}} > \mi{Mrk}(C)$ and
$(N,M_{\mi{alt}})[\sigma_{\mi{alt}}\rangle (N,\allowbreak \mi{Mrk}(C))$: $\card{\sigma_{\mi{alt}}} \geq \card{\sigma_s}$. Obviously, $\card{\sigma_s} \geq 1$ (otherwise $M' = \mi{Mrk}(C)$ contradicting our initial assumption).
$\mi{Exp}_{(N,M')}(\sigma_s)$ contains all permutations
of the shortest sequence $\sigma_s$ that are obtained by expediting transitions.
Let $\sigma_1$ and $\sigma_2$ be such that $\sigma_1 \cdot \sigma_2 \in \mi{Exp}_{(N,M')}(\sigma_s)$, $(N,\mi{Mrk}(C))[\sigma_1\rangle (N,M'')$, and
$\mi{en}(N,M'') \cap \{t \in \sigma_2\} = \emptyset$.
$\sigma_1$ contains the transitions in $\sigma_s$ that can also be executed
starting from the home marking. This leads to marking $M''$. In this marking, none of the remaining transitions in $\sigma_s$ (i.e., the transitions in $\sigma_2$) can be executed.
In other words, starting from $\mi{Mrk}(C)$, we try to execute as much of $\sigma_s$ as possible by expediting transitions (as described in Definition~\ref{def:expedite}). $\sigma_1$ is the part that can be executed
(leading to $M''$) and $\sigma_2$ is the remaining part of $\sigma_s$. $\sigma_2$ only contains transitions that are not enabled in $M''$. Obviously, there always exist
$\sigma_1$ and $\sigma_2$ such that these requirements are met ($\mi{Exp}_{(N,M')}(\sigma_s) \neq \emptyset$ and we can add transitions to $\sigma_1$ until this is no longer possible).
Moreover, $\sigma_1$ can also be executed starting in $M'$ because it is the prefix of an expedited sequence. Let $M'''$ be the corresponding marking, i.e.,
$(N,M')[\sigma_1\rangle (N,M''')$. From this marking, we can reach the home marking
by executing $\sigma_2$ (because $\sigma_1 \cdot \sigma_2 \in \mi{Exp}_{(N,M')}(\sigma_s)$ and Lemma~\ref{lemma:reord}).
\begin{figure}[thb!]
\centering
\includegraphics[width=5.5cm]{./figures/fig-proof-dom}
\caption{Sketch of the construction used in Theorem~\ref{theo:dommark}.
The solid arrows denote firing sequences and the dashed lines indicate multiset domination.
$\sigma_s$ is a firing sequence of minimal length leading from a marking $M'$ (which is larger than $\mi{Mrk}(C)$) to $\mi{Mrk}(C)$. Firing sequence $\sigma_1 \cdot \sigma_2$ is a permutation of $\sigma_s$ such that the transitions also enabled when starting from $\mi{Mrk}(C)$ are expedited leading to firing sequence $\sigma_1$. The remaining transitions in $\sigma_2$ are not enabled when starting from $\mi{Mrk}(C)$.}
\label{fig-proof-dom}
\end{figure}
To summarize,
$\sigma_1 \cdot \sigma_2 \in \mi{Exp}_{(N,M')}(\sigma_s)$,
$(N,\mi{Mrk}(C))[\sigma_1\rangle (N,M'')$,
$(N,M')[\sigma_1\rangle (N,M''')$,
$(N,M''')[\sigma_2\rangle (N,\mi{Mrk}(C))$, and
$\mi{en}(N,M'') \cap \{t \in \sigma_2\} = \emptyset$.
Moreover, because $M' > \mi{Mrk}(C)$, we also have $M''' > M''$ and $\card{\sigma_s} \geq 1$.
Figure~\ref{fig-proof-dom} shows the relations between the different markings.
To complete the proof we consider two cases ($\sigma_1 = \langle ~ \rangle$ and $\sigma_1 \neq \langle ~ \rangle$):
\begin{itemize}[noitemsep,topsep=2pt]
\item Assume $\sigma_1 = \langle ~ \rangle$.
This implies that $M''' = M'$, $M'' = \mi{Mrk}(C)$, $\sigma_2 = \sigma_s$, $\mi{en}(N,M'')= \mi{Tr}(C)$,
and $\mi{Tr}(C) \cap \{t \in \sigma_s\} =\emptyset$.
Hence, when executing $\sigma_s$ starting from $M'$ the home cluster remains fully marked.
However, there is at least one additional token in $M'$ that cannot ``disappear''
when executing $\sigma_s$ (the net is proper) leading to a contradiction.
\item Assume $\sigma_1 \neq \langle ~ \rangle$.
This implies that $M''' \not> \mi{Mrk}(C)$, otherwise there would be a shorter
sequence than $\sigma_s$, namely $\sigma_2$. (Recall that we selected $M'$ and $\sigma_s$ such that there is no shorter sequence leading to the home marking.)
There must exist an enabled cluster $C_e$ in $M''$ (the net cannot be dead), i.e.,
$\mi{Tr}(C_e) \subseteq \mi{en}(N,M'')$.
Hence, $\mi{Tr}(C_e) \cap \{t \in \sigma_2\} = \emptyset$, because $\mi{en}(N,M'') \cap \{t \in \sigma_2\} = \emptyset$.
If $C_e = C$, then we find a contradiction
because this implies $M''' > \mi{Mrk}(C)$.
If $C_e \neq C$, then there is a place
$p_e \in \mi{Pl}(C_e)$ outside of the home cluster $C$
that is marked in $M''$ and also $M'''$,
but $p_e \not\in \mi{Mrk}(C)$.
The token in $p_e$ is never removed by the transitions in $\sigma_2$.
However, after executing $\sigma_2$, place $p_e$ should be empty because only places in $C$ are marked,
thus leading to a contradiction.
\end{itemize}
Hence, in all cases we find a contradiction, proving that $M' \leq \mi{Mrk}(C)$.
\end{proof}
Theorem~\ref{theo:dommark} implies boundedness.
Later, we show that marked proper free-choice nets having a home cluster are also safe.
\begin{corollary}[Boundedness]\label{corr:dommark}
Let $(N,M)$ be a marked proper free-choice net having a home cluster $C$.
For all $M_1,M_2 \in R(N,M)$: $M_1 \not> M_2$.
Hence, $(N,M)$ is also bounded.
\end{corollary}
\begin{proof}
Assume $M_1,M_2 \in R(N,M)$ such that $M_1 > M_2$ (first marking is strictly larger).
There exists a $\sigma$ such that
$(N,M_2)[\sigma\rangle (N,\mi{Mrk}(C))$.
Since $M_1 > M_2$ there must be another reachable marking $M_3$ such that
$(N,M_1)[\sigma\rangle (N,M_3)$ and $M_3 > \mi{Mrk}(C)$.
However, Theorem~\ref{theo:dommark} says this is impossible, leading to a contradiction.
\end{proof}
\subsection{Rooted Disentangled Paths}
\label{sec:rdespath}
We will now reason about the numbers of tokens on specific paths in the Petri net. Therefore, we first provide some standard definitions and then introduce the new notion of \emph{rooted disentangled paths}.
\begin{definition}[Elementary Paths and Circuits]\label{def:elempath}
A \emph{path} in a Petri net $N=(P,T,F)$ is a non-empty sequence of nodes $\rho = \langle x_1,x_2, \ldots ,x_n \rangle$ such that $(x_i,x_{i+1}) \in F$ for $1 \leq i < n$.
Hence, $x_{i-1} \in {\pre{x_i}}$ for $1< i \leq n$ and $x_{i+1} \in {\post{x_i}}$ for $1 \leq i < n$.
$\mi{paths}(N)$ is the set of all paths in $N$.
$\rho$ is an \emph{elementary path} if $x_i \neq x_j$ for $1 \leq i < j \leq n$ (i.e., no element occurs more than once). An elementary path is called a \emph{circuit} if $x_1 \in {\post{x_n}}$.
\end{definition}
Next, we focus on paths that start and end with a place and that visit a cluster at most once.
Consider $(N_5,M_5)$ in Figure~\ref{fig-home-lucent}.
$\langle t8,p7,t8,p8\rangle$ is a path that is not elementary, which implies that some cluster is visited multiple times.
$\langle p1,t1,p3,t4,p7 \rangle$ is a so-called disentangled path since each place on the path belongs to a different cluster.
\begin{definition}[(Rooted) Disentangled Paths]\label{def:rdespath}
Let $N=(P,T,F)$ be a Petri net.
$\rho = \langle p_1,t_1,p_2, \ldots , t_{n-1},p_n \rangle$ is a \emph{disentangled path} of $N$ if and only if
$\rho$ is a path of $N$ ($\rho \in \mi{paths}(N)$),
$p_1 \in P$,
$p_n \in P$, and
for all $1 \leq i < j \leq n$: $\cluster{p_i} \neq \cluster{p_j}$ (i.e., $\rho$ starts and ends with a place and does not contain elements that belong to the same cluster).
A disentangled path is $Q$-\emph{rooted} if $p_n \in Q$.
\end{definition}
Disentangled paths are elementary, but not all elementary paths are disentangled.
Consider $N_3$ in Figure~\ref{fig-locally-safe-not-perpetual}.
$\rho_1 = \langle p5,t3,p3,t2,p4 \rangle$ is elementary, but not disentangled because $\cluster{p5} = \cluster{p4}$.
$\rho_2 = \langle p5,t3,p3,t2,p1 \rangle$ is elementary and disentangled.
$\rho_2$ is $Q$-rooted where $Q$ can be any subset of places that includes $p1$.
In the remainder of this subsection, we reason about the existence of disentangled paths and the number of tokens on them.
\begin{lemma}[Existence of Rooted Disentangled Paths]\label{lem:exstrdespath}
Let $N=(P,T,F)$ be a free-choice net, $C$ a cluster of $N$, $p \in P$, and $q \in C \cap P$. If $N$ has a path $\rho \in \mi{paths}(N)$ starting in $p$ and ending in $q$, then there also exists a $C$-rooted disentangled path starting in $p$.
\end{lemma}
\begin{proof}
Let $\rho = \langle p_1,t_1,p_2, \ldots , t_{n-1},p_n \rangle \in \mi{paths}(N)$ be the path connecting $p = p_1$ and $q = p_n$.
$\rho$ can be converted into a $C$-rooted disentangled path starting in $p$.
This is done by removing parts of the path through shortcuts that can be taken when the same cluster is visited multiple times.
Let $i \in \{1, \ldots, n\}$ be a pointer pointing to place $p_i$ in $\rho$.
We start with $i = 1$ (i.e., $i$ points to the first place $p_1$) and move towards the end of the path $i=n$.
\begin{itemize}[noitemsep,topsep=2pt]
\item If pointer $i$ points to place $p_i$ and $p_i \in C$, we can ignore the rest of the sequence because we already reached $C$ via a unique sequence of clusters.
(Note that if $p=p_1$ is already in $C$, we have a sequence of length 1.)
\item If $p_i \not\in C$, then $i < n$ because $q = p_n \in C$.
Hence, there still exists an output transition $t_i$ with output place $p_{i+1}$ in $\rho$.
\begin{itemize}[noitemsep,topsep=2pt]
\item If none of the $p_j$ with $j>i$ is in the same cluster as $p_i$, then we keep $p_i$ and $t_i$, and continue with $p_{i+1}$ (i.e., increment $i$).
\item If there is a $p_j$ with $j>i$ that is in the same cluster as $p_i$, then we take the largest such $j$. Also $j < n$, because $p_j \not\in C$ (recall that $p_i \not\in C$ and $p_i$ and $p_j$ are in the same cluster). Hence, $t_j$ and $p_{j+1}$ exist.
$p_i$ and $p_j$ may refer to the same place or not.
However, $\{p_i,p_j\} \subseteq \pre{t_j}$ (both are in the same cluster and all transitions in the cluster consume from all places in the cluster).
Since $p_i \in \pre{t_j}$, we can remove the subsequence $\langle t_i,p_{i+1}, \ldots , t_{j-1},p_j \rangle$ and directly connect $p_i$ to $t_j$.
Sequence
$\langle \ldots , p_i, t_j, p_{j+1}, \ldots , t_{n-1},\allowbreak p_n \rangle$
constitutes a path in the Petri net and we continue with $p_{j+1}$ (i.e., set $i = j+1$).
In summary, $\langle p_1, \ldots , p_i, t_i, p_{i+1}, \ldots, p_j, t_j, p_{j+1}, \ldots , t_{n-1},p_n \rangle$ is transformed into
$\langle p_1, \ldots , p_i, \allowbreak \stkout{t_i, p_{i+1}, \ldots, p_j}, t_j, p_{j+1}, \ldots , t_{n-1},p_n \rangle$ and we continue with $i=j+1$.
\end{itemize}
\end{itemize}
We repeat this process until we reach $C$.
Each of the elements in the resulting path is connected to the previous one and we never visit the same cluster twice. We also keep the initial place $p$. Therefore, the resulting path is a $C$-rooted disentangled path starting in $p$.
\end{proof}
Consider the path $\rho = \langle p6,t4,p5,t3,p3,t2,p4,t3,p3,t2,p1 \rangle$ in Figure~\ref{fig-locally-safe-not-perpetual}.
The path ends in the cluster $C=\{p1,t1\}$. Using the construction from the proof of Lemma~\ref{lem:exstrdespath},
this path is converted into the $C$-rooted disentangled path $\langle p6,t4,p5,\stkout{t3,p3,t2,p4},t3,p3,t2,p1 \rangle = \langle p6,t4,p5,t3,p3,t2,p1 \rangle$.
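The shortcut construction in the proof of Lemma~\ref{lem:exstrdespath} is algorithmic and can be sketched in a few lines. The sketch below is illustrative only and not part of the formal development: it assumes places and transitions are encoded as strings, a path as an alternating list $\langle p_1,t_1,\ldots,p_n \rangle$, and that the caller supplies a function mapping each node to a cluster identifier (all names are hypothetical).

```python
def disentangle(path, cluster, C):
    """Shortcut an alternating place/transition path so that no cluster
    is visited twice (construction of Lemma lem:exstrdespath).
    path    -- list [p1, t1, p2, ..., pn] ending in the target cluster C
    cluster -- maps a place/transition to its cluster identifier
    C       -- identifier of the target cluster"""
    out = []
    i = 0  # even positions of `path` hold places
    while True:
        p = path[i]
        out.append(p)
        if cluster(p) == C:
            return out  # reached C: ignore the rest of the path
        # largest j >= i such that path[j] is a place in p's cluster
        j = i
        for k in range(i + 2, len(path), 2):
            if cluster(path[k]) == cluster(p):
                j = k
        # t_j consumes from every place of its cluster, so p connects
        # directly to t_j and the subsequence in between is skipped
        out.append(path[j + 1])
        i = j + 2
```

Applied to the example path above (with clusters $\{p6,t4\}$, $\{p4,p5,t3\}$, $\{p3,t2\}$, and $C=\{p1,t1\}$, as read from the figure), the function skips the subsequence $t3,p3,t2,p4$ and returns $\langle p6,t4,p5,t3,p3,t2,p1 \rangle$.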
We can construct a $C$-rooted disentangled path starting in any place $p$ that is not dead,
i.e., a place marked in at least one reachable marking.
\begin{corollary}[Existence of Rooted Disentangled Paths from Marked Places]\label{corr:pathexists}
Let $(N,M)$ be a marked proper free-choice net having a home cluster $C$.
For any non-dead place $p \in P$, there exists a $C$-rooted disentangled path starting in $p$.
\end{corollary}
\begin{proof}
Take an arbitrary place $p$ that can be marked in some reachable marking $M'$. If $p \in C$, then $\rho = \langle p \rangle$ is a
$C$-rooted disentangled path.
If $p \not\in C$, then there must be a path from $p$ to one of the places in $C$ (say $q$).
This follows from the fact that the net is proper, i.e., for all $t \in T$: $\pre{t} \neq \emptyset$ and $\post{t} \neq \emptyset$.
Therefore, a token cannot simply disappear and must end up in $C$.
To see this, color the token in $p$ red and then execute a firing sequence ending in $\mi{Mrk}(C)$.
When executing a transition with at least one red token, make all produced tokens also red.
Because we cannot consume a red token without producing at least one new red token,
we know that at least one red token will end up in $\mi{Mrk}(C)$.
This proves that there is a path $\rho$ starting in $p$ and ending in some $q \in C \cap P$ (follow back one red token in $\mi{Mrk}(C)$).
Since there is such a path $\rho$, there also exists a $C$-rooted disentangled path starting in $p$
(apply Lemma~\ref{lem:exstrdespath}).
\end{proof}
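The token-colouring argument above is easy to mechanise. The following sketch is illustrative only (names hypothetical): it forward-propagates the red colouring along a given firing sequence of a safe net, with markings encoded as sets of marked places and the pre- and post-set functions supplied by the caller; it assumes the sequence is actually fireable and relies on the proof for why a red token must reach $C$.

```python
def red_reachable(p, sigma, preset, postset):
    """Propagate the `red' colouring of the token in place p along the
    firing sequence sigma (safe net, markings as sets of places).
    Returns the set of places holding a red token afterwards."""
    red = {p}
    for t in sigma:
        if preset(t) & red:                       # t consumes a red token,
            red = (red - preset(t)) | postset(t)  # so all outputs become red
    return red
```

For the net of Figure~\ref{fig-locally-safe-not-perpetual} (pre- and post-sets as read from the figure), colouring the token in $p3$ red and firing $t1$ followed by $t2$ leaves red tokens in $p1$ and $p4$, so a red token indeed reaches the cluster $\{p1,t1\}$.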
Dead places do not change the behavior and can be removed together with the output transitions if desired
(but do not have to be removed, since they remain empty anyway).
The next lemma plays a key role in our analysis of nets having a home cluster $C$:
$C$-rooted disentangled paths are \emph{safe}, i.e.,
at any time \emph{all} places on a $C$-rooted disentangled path \emph{together} contain at most one token.
\begin{lemma}[Rooted Disentangled Paths Are Safe]\label{lem:pathsafety}
Let $(N,M)$ be a marked proper free-choice net having a home cluster $C$.
For any reachable marking, $M' \in R(N,M)$ and $C$-rooted disentangled path $\rho = \langle p_1,t_1,p_2, \ldots , t_{n-1},\allowbreak p_n \rangle$: $M'(\{p_1,p_2, \ldots ,p_n\}) \allowbreak \leq 1$.
\end{lemma}
\begin{proof}
Assume $(N,M)$ is a marked proper free-choice net, $C$ is a home cluster, and $\rho = \langle p_1,t_1,p_2, \ldots , t_{n-1},\allowbreak p_n \rangle$ is a $C$-rooted disentangled path.
Let $P_{\rho} = \{p_1,p_2, \ldots ,p_n\}$ and $T_{\rho} = \{t_1,t_2, \ldots ,t_{n-1}\}$.
Assume that the lemma does not hold, i.e., $\rho$ is not safe and
$M'(P_{\rho}) > 1$ for some $M' \in R(N,M)$.
We show that this leads to a contradiction.
Consider the tokens (at least two) in the places $P_{\rho}$. We try to move these tokens towards $p_n \in C$.
Each place $p_i \in P_{\rho}$ corresponds to a unique cluster $C_i$ because the path is disentangled. This, combined with the free-choice property, allows us to fully control the trajectories of the tokens in $P_{\rho}$.
First, we look at the case where $C$ has a transition, say $t_C$ (i.e., there are no dead markings, see Proposition~\ref{prop:cluschar}).
We start in marking $M_c = M'$. If one of the transitions in $T_{\rho}$ is enabled, then we fire this transition (in any order and perhaps also multiple times) and update the current marking $M_c$. This cannot decrease the number of tokens, i.e., we still have $M_c(P_{\rho}) > 1$.
Note that a transition in $T_{\rho}$ consumes precisely one token
``from the path'' and produces at least one token ``on the path'' (disentangled paths are elementary).
If $t_C$ is enabled, then $M_c \geq \mi{Mrk}(C)$.
However, given the second token in $P_{\rho}$ this implies $M_c > \mi{Mrk}(C)$.
This leads to a contradiction using Theorem~\ref{theo:dommark}.
If none of the transitions in $T_{\rho} \cup \{t_C\}$ is enabled in $M_c$, then
we must be able to fire a sequence of other transitions enabling a transition in $T_{\rho} \cup \{t_C\}$.
$C$ is a home cluster and there are no deadlocks, so we can always walk towards a marking enabling one of the transitions in
$T_{\rho} \cup \{t_C\}$. The moment one of the transitions
in $T_{\rho}$ is enabled, we can again control the choices involved.
Hence, we can continue to move tokens along the path until we find a contradiction.
Next, we look at the case where $C$ does not have a transition (i.e., the home marking is a deadlock, see Proposition~\ref{prop:cluschar}).
We can use exactly the same strategy to move the tokens towards $p_n$ (there is one case fewer to consider).
The moment a token reaches $p_n$, there is at least one additional token in $P_{\rho}$, and this one can also be moved to $p_n$,
leading to a contradiction (apply Theorem~\ref{theo:dommark} to show that there cannot be two tokens in $p_n$).
\end{proof}
The previous results can be combined to show
that the class of marked Petri nets considered is safe.
\begin{corollary}[Marked Proper Free-Choice Nets Having a Home Cluster Are Safe]\label{corr:safe}
Let $(N,M)$ be a marked proper free-choice net having a home cluster $C$. $(N,M)$ is safe.
\end{corollary}
\begin{proof}
Follows directly from Lemma~\ref{lem:pathsafety} and Corollary~\ref{corr:pathexists}.
If a place $p$ is dead, then it will never have a token and is thus safe.
If a place $p$ is not dead, then there exists a $C$-rooted disentangled path (Corollary~\ref{corr:pathexists}) starting in $p$, and this path must be safe due to Lemma~\ref{lem:pathsafety}. Hence, all places are safe.
\end{proof}
\subsection{Conflict-Pairs}
\label{sec:cfpair}
If a marked Petri net is not lucent, then there must be two different markings enabling the same set of transitions.
We will convert such a pair of markings
into a \emph{conflict-pair}. By showing that these do not exist, we can prove lucency.
\begin{definition}[Conflict-Pair]\label{def:cfpair}
Let $(N,M)$ be a marked Petri net.
$(M_1,M_2)$ is called a \emph{conflict-pair} for $(N,M)$
if and only if
\begin{itemize}[noitemsep,topsep=2pt]
\item $M_1$ and $M_2$ are reachable markings of $(N,M)$ (i.e., $M_1,M_2 \in R(N,M)$),
\item $M_1$ and $M_2$ are not dead (i.e., $\mi{en}(N,M_1) \neq \emptyset$ and $\mi{en}(N,M_2) \neq \emptyset$),
\item $\mi{en}(N,M_1) \cap \mi{en}(N,M_2) = \emptyset$ (no transition is enabled in both markings),
\item for all $t \in \mi{en}(N,M_1)$: $M_2(\pre t)\geq 1$, and
\item for all $t \in \mi{en}(N,M_2)$: $M_1(\pre t)\geq 1$.
\end{itemize}
\end{definition}
Consider $(N_3,M_3)$ in Figure~\ref{fig-locally-safe-not-perpetual} and markings $M_1 = [p2, p3, p5]$ and $M_2 = [p2, p4, p5]$.
$M_1$ can be reached by firing $t1$ and $t4$.
$M_2$ can be reached by firing $t1$, $t2$, $t1$, and $t4$.
$\mi{en}(N,M_1) = \{t2\}$, $\mi{en}(N,M_2) = \{t3\}$,
$\mi{en}(N,M_1) \cap \mi{en}(N,M_2) = \emptyset$,
$M_2(\pre t2) = 1 \geq 1$, and
$M_1(\pre t3) = 1 \geq 1$.
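For safe nets, where markings can be encoded as sets of marked places, the enabling-related conditions of Definition~\ref{def:cfpair} can be checked directly. The sketch below is illustrative only (function names are hypothetical); reachability of $M_1$ and $M_2$ is assumed to be established separately.

```python
def is_conflict_pair(M1, M2, enabled, preset):
    """Check the enabling conditions of Definition def:cfpair for two
    reachable markings of a safe net, given as sets of marked places.
    enabled(M) -- set of transitions enabled in marking M
    preset(t)  -- set of input places of transition t"""
    en1, en2 = enabled(M1), enabled(M2)
    return (bool(en1) and bool(en2)               # neither marking is dead
            and not (en1 & en2)                   # no transition enabled in both
            and all(preset(t) & M2 for t in en1)  # M2(pre t) >= 1 for t in en(N,M1)
            and all(preset(t) & M1 for t in en2)) # M1(pre t) >= 1 for t in en(N,M2)
```

With the pre-sets of $N_3$ as read from the figure, the check succeeds for $M_1 = [p2, p3, p5]$ and $M_2 = [p2, p4, p5]$, and fails for $[p1,p3,p6]$ and $[p1,p4,p6]$, whose enabled sets coincide.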
To better understand the above definition, let us split
each of the two markings in the conflict-pair $(M_1,M_2)$ into an ``agreement'' and a ``disagreement'' part.
$M^{\mi{agree}}$ is the maximal marking such that
$M^{\mi{agree}} \leq M_1$ and $M^{\mi{agree}} \leq M_2$.
Now we can write
$M_1 = M^{\mi{agree}} \bplus M^{\mi{disagree}}_1$ and
$M_2 = M^{\mi{agree}} \bplus M^{\mi{disagree}}_2$.
Obviously, all three submarkings $M^{\mi{agree}}$, $M^{\mi{disagree}}_1$, and $M^{\mi{disagree}}_2$ are non-empty.
This allows us to speak about ``agreement tokens'' (tokens in $M^{\mi{agree}}$) and ``disagreement tokens'' (tokens in $M^{\mi{disagree}}_1$ or $M^{\mi{disagree}}_2$).
For $M_1 = [p2, p3, p5]$ and $M_2 = [p2, p4, p5]$, we have $M^{\mi{agree}} = [p2,p5]$, $M^{\mi{disagree}}_1 = [p3]$,
and $M^{\mi{disagree}}_2 = [p4]$.
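Since the nets considered here are safe, markings can be encoded as sets of marked places and the split is plain set arithmetic; the following sketch (illustrative, name hypothetical) computes the three submarkings.

```python
def split_markings(M1, M2):
    """Split two safe markings (sets of marked places) into the shared
    `agreement' submarking and the two `disagreement' submarkings."""
    agree = M1 & M2          # maximal submarking contained in both
    return agree, M1 - agree, M2 - agree
```

For the example above, it yields $[p2,p5]$, $[p3]$, and $[p4]$.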
Both $M_1$ and $M_2$ should enable at least one transition, but there cannot be a transition enabled by both.
This means that $\mi{en}(N,M^{\mi{agree}}) = \emptyset$.
The last two requirements in Definition~\ref{def:cfpair}
state that transitions enabled in $M_1$ and $M_2$ should also
consume at least one agreement token.
Hence, for any $t_1 \in \mi{en}(N,M_1)$:
$t_1 \not\in \mi{en}(N,M_2)$,
$t_1 \not\in \mi{en}(N,M^{\mi{agree}})$,
$M^{\mi{agree}}(\pre{t_1}) \geq 1$, and
$M^{\mi{disagree}}_1(\pre{t_1}) \geq 1$.
Similarly, for any $t_2 \in \mi{en}(N,M_2)$:
$t_2 \not\in \mi{en}(N,M_1)$,
$t_2 \not\in \mi{en}(N,M^{\mi{agree}})$,
$M^{\mi{agree}}(\pre{t_2}) \geq 1$, and
$M^{\mi{disagree}}_2(\pre{t_2}) \geq 1$.
Next, we show that the absence of conflict-pairs implies lucency.
Later, we show that a marked proper free-choice net with a home cluster cannot have a conflict-pair.
Hence, such nets are guaranteed to be lucent.
\begin{figure}[thb!]
\centering
\includegraphics[width=15.0cm]{./figures/fig-locally-safe-conflict-pair}
\caption{Example showing how two markings that have the same ``footprint'' (left) in terms of enabling can be converted into a conflict-pair (right).
The left-hand side shows markings $M_1 = [p1,p3,p6]$ and $M_2 = [p1,p4,p6]$.
The right-hand side shows markings $M_1' = [p2,p3,p5]$ and $M_2' = [p2,p4,p5]$.
The ``agreement tokens'' are depicted as black dots (denoted by $\bullet$), the ``disagreement tokens'' are shown as
$\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ (only in $M_1$ and $M_1'$)
or $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ (only in $M_2$ and $M_2'$).
}
\label{fig-locally-safe-conflict-pair}
\end{figure}
To show that the absence of conflict-pairs implies lucency for free-choice nets having a home cluster,
Lemma~\ref{lem:cflucent} shows that it is possible to convert two markings $M_1$ and $M_2$ that have the same ``footprint'' in terms of enabling (i.e., $\mi{en}(N,M_1)=\mi{en}(N,M_2)$) into a conflict-pair $(M_1',M_2')$.
To illustrate the construction, we consider the free-choice net $N_3$ in Figure~\ref{fig-locally-safe-not-perpetual}
which does \emph{not} have a home cluster
(we can find two markings having the same ``footprint'' because of this).
The left-hand side of Figure~\ref{fig-locally-safe-conflict-pair} shows
the markings $M_1 = [p1,p3,p6]$ and $M_2 = [p1,p4,p6]$.
$M_1$ is the initial marking and $M_2$ can be reached by firing $t1$ and $t2$.
Tokens in $M_1$ but not in $M_2$ are represented by $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$
and tokens in $M_2$ but not in $M_1$ are represented by $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$.
Tokens in both markings are denoted by $\bullet$.
$M_1$ and $M_2$ demonstrate that the net is \emph{not} lucent because $\mi{en}(N_3,M_1)=\mi{en}(N_3,M_2)=\{t1,t4\}$.
To move to the conflict-pair $(M_1',M_2')$ with $M_1' = [p2,p3,p5]$ and $M_2' = [p2,p4,p5]$ on the right-hand side of Figure~\ref{fig-locally-safe-conflict-pair}, we do not ``touch'' the disagreement tokens denoted by
$\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ and $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$.
This implies that no transitions in the corresponding clusters can fire and that these disagreement tokens do not move.
Hence, we can only fire transitions that only consume agreement tokens. These are depicted as normal black dots $\bullet$ in Figure~\ref{fig-locally-safe-conflict-pair}. In the example, we can fire $t1$ and $t4$ involving only agreement tokens. Such transitions consume and produce agreement tokens.
Since the net is guaranteed to be safe, no agreement tokens can be produced for places that have disagreement tokens
(i.e., $p3$ and $p4$ in Figure~\ref{fig-locally-safe-conflict-pair}).
Hence, the $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ and $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ tokens cannot disappear in the process.
The main idea of Lemma~\ref{lem:cflucent} is to fire transitions that are enabled by
agreement tokens until this is no longer possible. This leads to markings like $M_1' = [p2,p3,p5]$ and $M_2' = [p2,p4,p5]$ in Figure~\ref{fig-locally-safe-conflict-pair}. In such markings, all enabled transitions have a mix of agreement and disagreement tokens in their input places. For example, $t2$ is enabled in $M_1'$ by a $\bullet$ token in $p2$ and a $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ token in $p3$, and
$t3$ is enabled in $M_2'$ by a $\bullet$ token in $p5$ and a $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ token in $p4$.
Lemma~\ref{lem:cflucent} shows that it is always possible to reach such markings using the fact that the home marking $\mi{Mrk}(C)$ is always reachable.
Later, we will show that free-choice nets having a home cluster cannot have conflict-pairs.
Therefore, Figure~\ref{fig-locally-safe-conflict-pair} needs to use an example that
does not have a home cluster.
The proof of Lemma~\ref{lem:cflucent} can be summarized as follows.
Start from two different markings $M_1$ and $M_2$ that enable the same set of transitions.
The tokens of both markings are split into ``agreement tokens'' denoted by $\bullet$ and ``disagreement tokens'' marked by $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ or $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ (as shown in Figure~\ref{fig-locally-safe-conflict-pair}).
Next, we fire transitions that consume only $\bullet$ tokens.
It is possible to do this in such a way that the process stops
and there are no such transitions enabled anymore (just try to move tokens closer to the home marking, this must stop at some stage because the disagreement tokens are needed).
The $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ and $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ tokens do not move
and all enabled transitions require at least one ``disagreement token'' ($\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ or $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$).
This way we can construct a conflict-pair $(M_1',M_2')$.
Hence, if there are no conflict-pairs, there cannot be two markings $M_1$ and $M_2$ that enable the same set of transitions, thus proving lucency.
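The ``fire only $\bullet$-consuming transitions until stuck'' step used in this construction can be sketched operationally. The code below is illustrative only: it performs just the bookkeeping of greedily expediting transitions of $T_{\mi{rest}}$ (transitions whose pre-sets avoid the disagreement places) to the front of a firing sequence, and relies on the reordering result used in the proofs for the remainder of the sequence to stay fireable. Markings are sets of marked places of a safe net; all names are hypothetical.

```python
def fire(M, t, preset, postset):
    """Fire enabled transition t in marking M (safe net, set encoding)."""
    assert preset(t) <= M, "transition not enabled"
    return (M - preset(t)) | postset(t)

def expedite_rest(M, sigma, T_rest, preset, postset):
    """Split sigma into sigma1 (expedited T_rest transitions, fired from M)
    and sigma2 (the remainder), stopping when no T_rest transition that
    still occurs in sigma2 is enabled.  Returns (sigma1, sigma2, M')."""
    sigma1, sigma2 = [], list(sigma)
    while True:
        for k, t in enumerate(sigma2):
            if t in T_rest and preset(t) <= M:
                M = fire(M, t, preset, postset)
                sigma1.append(sigma2.pop(k))   # move t to the front part
                break
        else:
            return sigma1, sigma2, M           # no expeditable transition left
```

Starting from $M_1 = [p1,p3,p6]$ of Figure~\ref{fig-locally-safe-conflict-pair} with $\sigma = \langle t1,t2,t1,t4 \rangle$ and $T_{\mi{rest}} = \{t1,t4\}$ (pre- and post-sets as read from the figure), this yields $\sigma_1 = \langle t1,t4 \rangle$ and the marking $[p2,p3,p5]$, matching $M_1'$ on the right-hand side of the figure.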
\begin{lemma}[Nets Without Conflict-Pairs Are Lucent]\label{lem:cflucent}
Let $(N,M)$ be a marked proper free-choice net having a home cluster.
If $(N,M)$ has no conflict-pairs, then $(N,M)$ is lucent.
\end{lemma}
\begin{proof}
Let $(N,M)$ be a marked proper free-choice net having a home cluster $C$. We need to prove that if $N=(P,T,F)$ has no conflict-pairs, then $N$ is lucent. This can be rewritten to the logically equivalent contrapositive ``if $N$ is not lucent, then $N$ has a conflict-pair''. We will construct such a conflict-pair.
Assume $N$ is \emph{not} lucent. There must be two markings $M_1,M_2 \in R(N,M)$ such that $\mi{en}(N,M_1)=\mi{en}(N,M_2)$ and $M_1 \neq M_2$.
We will show that, based on these markings, we can construct a conflict-pair $(M_1',M_2')$.
The only dead reachable marking is $\mi{Mrk}(C)$.
Since $\mi{en}(N,M_1)=\mi{en}(N,M_2)$ and $M_1 \neq M_2$, we conclude
that $\mi{en}(N,M_1)=\mi{en}(N,M_2) \neq \emptyset$, $M_1 \neq \mi{Mrk}(C)$, and $M_2 \neq \mi{Mrk}(C)$ (see Proposition~\ref{prop:cluschar}).
Since $(N,M)$ is safe (see Corollary~\ref{corr:safe}), we can
partition the tokens into three groups based on the places where they reside:
$P_{\bullet} = \{p \in P \mid p \in M_1 \ \wedge \ p \in M_2 \}$,
$P_{1} = \{p \in P \mid p \in M_1 \ \wedge \ p \not\in M_2 \}$, and
$P_{2} = \{p \in P \mid p \not\in M_1 \ \wedge \ p \in M_2 \}$.
Tokens in $P_{\bullet}$ are shared by both markings
(i.e., the ``agreement tokens'' mentioned before).
Tokens in $P_{1}$ and $P_{2}$ exist in only one of the two markings (i.e., the ``disagreement tokens'' mentioned before).
None of these three sets can be empty.
Because $\mi{en}(N,M_1)=\mi{en}(N,M_2) \neq \emptyset$, transitions enabled in both markings must agree on the marked input places. Hence, $P_{\bullet} \neq \emptyset$.
Because $M_1 \neq M_2$ and one cannot be strictly larger than the other one (Corollary~\ref{corr:dommark}), $P_{1} \neq \emptyset$ and $P_{2} \neq \emptyset$.
We also create three groups of transitions:
$T_1 = \{t \in T \mid \pre{t} \cap P_{1} \neq \emptyset \}$,
$T_2 = \{t \in T \mid \pre{t} \cap P_{2} \neq \emptyset \}$, and
$T_{\mi{rest}} = T \setminus (T_1 \cup T_2) = \{t \in T \mid \pre{t} \cap (P_{1} \cup P_{2}) = \emptyset \}$.
Note that $T_1$ and $T_2$ may overlap in principle, but do not overlap with $T_{\mi{rest}}$, i.e.,
$(T_1 \cup T_2)$ and $T_{\mi{rest}}$ partition $T$.
Each of the sets $T_1$, $T_2$, and $T_{\mi{rest}}$ includes all transitions of a cluster or none (i.e., clusters agree on membership).
After introducing these notations, we pick a firing sequence
$\sigma$ starting in $M_1$ and ending in the home marking, i.e.,
$(N,M_1)\allowbreak[\sigma\rangle\allowbreak (N,\mi{Mrk}(C))$.
Such a $\sigma$ exists, because $C$ is a home cluster.
Like in the proof of Theorem~\ref{theo:dommark} we split $\sigma$ into $\sigma_1$ and $\sigma_2$.
$\mi{Exp}_{(N,M_1)}(\sigma)$ contains all permutations of firing sequence $\sigma$ that are obtained by expediting transitions.
Let $\sigma_1$ and $\sigma_2$ be such that $\sigma_1 \cdot \sigma_2 \in \mi{Exp}_{(N,M_1)}(\sigma)$,
$(N,M_1)[\sigma_1\rangle (N,M_1')$,
$\sigma_1 \in {T_{\mi{rest}}}^*$,
and $\mi{en}(N,M_1') \cap T_{\mi{rest}} \cap \{t \in \sigma_2\} = \emptyset$.
In other words, we expedite transitions from $T_{\mi{rest}}$, until this is no longer possible.
Given $\mi{Exp}_{(N,M_1)}(\sigma)$, it is always possible to find such $\sigma_1$, $\sigma_2$, and $M_1'$.
Suppose that $\mi{en}(N,M_1') \cap T_{\mi{rest}} \cap \{t \in \sigma_2\} \neq \emptyset$,
then we take the first transition in $\sigma_2$ that is in this set and move it to $\sigma_1$
(see construction in Definition~\ref{def:expedite}).
Since $\sigma_1$ does not fire transitions possibly consuming disagreement tokens
(recall that $T_{\mi{rest}} = \{t \in T \mid \pre{t} \cap (P_{1} \cup P_{2}) = \emptyset \}$), $\sigma_1$ is also enabled in $M_2$.
Let $M_2'$ be the marking reached after firing $\sigma_1$ in $M_2$, i.e., $(N,M_2)[\sigma_1\rangle(N,M_2')$.
Also, $(N,M_1')[\sigma_2\rangle\allowbreak (N,\mi{Mrk}(C))$ (because $\sigma_1 \cdot \sigma_2 \in \mi{Exp}_{(N,M_1)}(\sigma)$ and Lemma~\ref{lemma:reord}).
Figure~\ref{fig-proof-cfp} summarizes the different entities involved and their relationships.
\begin{figure}[thb!]
\centering
\includegraphics[width=6.0cm]{./figures/fig-proof-cfp}
\caption{Sketch of the construction used in Lemma~\ref{lem:cflucent}.
The elements satisfy the following relations:
$(N,M_1)\allowbreak[\sigma\rangle\allowbreak (N,\mi{Mrk}(C))$,
$\sigma_1 \cdot \sigma_2 \in \mi{Exp}_{(N,M_1)}(\sigma)$,
$(N,M_1)[\sigma_1\rangle (N,M_1')$,
$(N,M_2)[\sigma_1\rangle(N,M_2')$,
$(N,M_1')[\sigma_2\rangle (N,\mi{Mrk}(C))$,
$\sigma_1 \in {T_{\mi{rest}}}^*$, and
$\mi{en}(N,M_1') \cap T_{\mi{rest}} \cap \{t \in \sigma_2\} = \emptyset$.}
\label{fig-proof-cfp}
\end{figure}
$M_1(p) = M_1'(p)$ and $M_2(p) = M_2'(p)$ for any $p \in P_1 \cup P_2$, i.e.,
the disagreement places are unaffected by $\sigma_1$ because the $T_1$ and $T_2$ transitions did not fire
and $\sigma_1$ cannot add tokens to $P_{1}$ or $P_{2}$, because the net is safe
(see Corollary~\ref{corr:safe}). $\sigma_1$ only produces ``agreement tokens'' and
putting such a token in a disagreement place violates the safety property in the sequence starting in $M_1$ or $M_2$.
Also $M_1(p) = M_2(p)$ and $M_1'(p) = M_2'(p)$ for any $p \in P \setminus (P_1 \cup P_2)$.
This also holds for intermediate markings when firing the transitions in $\sigma_1$. Hence, the collection of $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ and $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ tokens does not change (no disagreement tokens are removed and no new disagreement tokens are created). Moreover, there is a non-empty set of agreement tokens (denoted by $\bullet$) because the net is proper ($M_1'$ and $M_2'$ agree on these and each transition in $\sigma_1$ produces at least one such token).
Next, we prove that $(M_1',M_2')$ is indeed a conflict-pair for $N$.
We check the required properties listed in Definition~\ref{def:cfpair}:
\begin{itemize}[noitemsep,topsep=2pt]
\item $M_1'$ and $M_2'$ are indeed reachable markings of $(N,M)$ because $M_1,M_2 \in R(N,M)$,
$(N,M_1)\allowbreak [\sigma_1\rangle(N,M_1')$ and $(N,M_2)[\sigma_1\rangle(N,M_2')$,
\item $M_1'$ contains at least one ``disagreement token'' $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ and one ``agreement token'' $\bullet$ (see above).
$M_1'$ cannot be dead, because the only reachable marking that may be dead is $\mi{Mrk}(C)$ having a single token (apply again Proposition~\ref{prop:cluschar}).
$M_2'$ also contains at least one ``disagreement token'' $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ and one ``agreement token'' $\bullet$.
Hence, neither $M_1'$ nor $M_2'$ can be dead.
\item Next, we show that $T' = \mi{en}(N,M_1') \cap \mi{en}(N,M_2') = \emptyset$ using the following observations:
\begin{itemize}[noitemsep,topsep=2pt]
\item $T' \subseteq T_{\mi{rest}}$, because the transitions in $T_1$ and $T_2$ cannot be enabled in both $M_1'$ and $M_2'$
(no tokens were added to a place in $P_1 \cup P_2$ by $\sigma_1$).
\item $\mi{en}(N,M_1') \cap T_{\mi{rest}} \cap \{t \in \sigma_2\} = \emptyset$ was used as a criterion when splitting $\sigma$ into $\sigma_1$ and $\sigma_2$.
\item Combining the above
implies $T' \cap \{t \in \sigma_2\} = \emptyset$. Hence,
the input places of the transitions in $T'$ are still marked after executing $\sigma_2$ in $M_1'$.
\item Since $(N,M_1')[\sigma_2\rangle (N,\mi{Mrk}(C))$, the input places of $T'$ must be marked in $\mi{Mrk}(C)$. Hence, $T' \subseteq C$.
\item This implies that the transitions in the home cluster are enabled in both $M_1'$ and $M_2'$.
This is only possible if $M_1'=M_2'$, leading to a contradiction, i.e., $\mi{en}(N,M_1') \cap \mi{en}(N,M_2') = \emptyset$.
\end{itemize}
Note that in $M_1'$ and $M_2'$ all enabled transitions need to consume at least one ``disagreement token''. Hence, no transition can be enabled in both $M_1'$ and $M_2'$.
If a transition were enabled in both, then $\sigma_1$ could have been extended.
\item For all $t \in \mi{en}(N,M_1')$: $M_2'(\pre t)\geq 1$, because each transition enabled in $M_1'$ must
have an ``agreement token'' produced by $\sigma_1$ and a ``disagreement token'' in $P_1$.
If a transition $t$ would be enabled based on ``disagreement tokens'' only, these would have been there in $M_1$ already
(recall that $M_1(p) = M_1'(p)$ for any $p \in P_1$) leading to a contradiction because $\mi{en}(N,M_1)=\mi{en}(N,M_2)$.
Hence, any transition $t$ enabled in $M_1'$ must have an ``agreement token'' produced by $\sigma_1$ on one of its input places.
This token is also there in $M_2'$. Hence, $M_2'(\pre t)\geq 1$.
\item For all $t \in \mi{en}(N,M_2')$: $M_1'(\pre t)\geq 1$. Here the same arguments apply.
A transition cannot be enabled based on ``disagreement tokens'' only, since these would have been there in $M_2$ already
($M_2(p) = M_2'(p)$ for any $p \in P_2$).
\end{itemize}
Hence, $(M_1',M_2')$ is indeed a conflict-pair, which completes the contrapositive proof.
\end{proof}
\subsection{Home Clusters Ensure Lucency in Free-Choice Nets}
\label{sec:mainresult}
Now we can prove the main result of this paper: Marked proper free-choice nets having a home cluster are lucent.
We use the notions of rooted disentangled paths and
conflict-pairs. The basic idea is to show that
a conflict-pair implies that there is an unsafe rooted disentangled path, which is impossible.
The absence of conflict-pairs implies lucency.
\begin{figure}[thb!]
\centering
\includegraphics[width=16.0cm]{./figures/fig-theo-lucent}
\caption{Visualization of the three clusters considered in the proof of Theorem~\ref{theo:home-cp}. $C$ is the home cluster. $C_1$ is a cluster enabled in $M_1$ but not in $M_2$.
$C_2$ is a cluster enabled in $M_2$ but not in $M_1$.
The places labeled $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ or $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ contain a token in the respective marking (just in $M_1$ or just in $M_2$). The paths connecting $C_2$ to $C_1$ and $C_1$ to $C$ are converted into rooted disentangled paths.
These two rooted disentangled paths can be concatenated to create a $C$-rooted disentangled path starting in $p^{mrk}$. The proof shows that at least one of these rooted disentangled paths is not safe, proving that the net has no conflict-pairs and thus must be lucent.}
\label{fig-theo-lucent}
\end{figure}
Theorem~\ref{theo:home-cp} shows that there cannot be a conflict-pair $(M_1,M_2)$ in
a marked proper free-choice net having a home cluster.
Figure~\ref{fig-theo-lucent} sketches the main idea of the proof.
First, we assume that there exists a conflict-pair $(M_1,M_2)$.
We identify, next to the home cluster $C$, two additional clusters $C_1$ and $C_2$ based on
the conflict-pair $(M_1,M_2)$.
$C_1$ is enabled in marking $M_1$ and $C_2$ is enabled in marking $M_2$.
$C_1$ can be any cluster enabled in marking $M_1$.
$C_2$ is a cluster enabled in marking $M_2$ that contributes to the enabling of
cluster $C_1$ which is disabled in marking $M_2$.
As Figure~\ref{fig-theo-lucent} shows, $C_1$ has a $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}$ input token and $C_2$ has a $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}}$ input token.
Based on the selected $C_1$ and $C_2$ clusters, we create two rooted disentangled paths:
$\rho'$ is a $C_1$-rooted disentangled path connecting $C_2$ to $C_1$ and
$\rho''$ is a $C$-rooted disentangled path connecting $C_1$ to $C$.
These two rooted disentangled paths are combined into a path $\rho'''$ running from $C_2$ to $C$ via $C_1$.
If $\rho'''$ is not a $C$-rooted disentangled path (i.e., the same cluster is visited multiple times along the path), then it is possible to reach a marking starting from $M_2$ which puts a token on $\rho''$
(the path connecting $C_1$ to $C$) while having an agreement token in $C_1$.
Hence, there is a $C$-rooted disentangled path connecting $C_1$ to $C$ having at least two tokens (see proof for details). Using Lemma~\ref{lem:pathsafety} this leads to a contradiction. Hence, $\rho'''$ must be a $C$-rooted disentangled path. However, considering $M_1$ (rather than a marking reached from $M_2$) path $\rho'''$ has at least two tokens.
This also leads to a contradiction using Lemma~\ref{lem:pathsafety}.
Therefore, there cannot be a conflict-pair $(M_1,M_2)$.
The approach presented using Figure~\ref{fig-theo-lucent} is detailed in the proof below.
\begin{theorem}[Home Clusters Ensure Absence of Conflict-Pairs]\label{theo:home-cp}
Let $(N,M)$ be a marked proper free-choice net having a home cluster. $(N,M)$ has no conflict-pairs.
\end{theorem}
\begin{proof}
Let $(N,M)$ be a marked proper free-choice net having a home cluster $C$.
We assume that $(N,M)$ has a conflict-pair $(M_1,M_2)$
and show that this leads to a contradiction.
{\bf Useful notations: $P_{\bullet}$, $P_{\emptyset}$, $P_{1}$, and $P_{2}$.}
Based on the conflict-pair $(M_1,M_2)$, we partition the set of places $P$ into four sets
$P_{\bullet} = \{p \in P \mid p \in M_1 \ \wedge \ p \in M_2 \}$,
$P_{\emptyset} = \{p \in P \mid p \not\in M_1 \ \wedge \ p \not\in M_2 \}$,
$P_{1} = \{p \in P \mid p \in M_1 \ \wedge \ p \not\in M_2 \}$, and
$P_{2} = \{p \in P \mid p \not\in M_1 \ \wedge \ p \in M_2 \}$.
Transitions enabled in $M_1$ have
input places from $P_{\bullet}$ and $P_{1}$.
$\mi{en}(N,M_1) = \{ t \in T \mid
\pre{t} \cap P_{\bullet} \neq \emptyset \ \wedge \
\pre{t} \cap P_{\emptyset} = \emptyset \ \wedge \
\pre{t} \cap P_{1} \neq \emptyset \ \wedge \
\pre{t} \cap P_{2} = \emptyset
\}$.
Transitions enabled in $M_2$ have
input places from $P_{\bullet}$ and $P_{2}$.
$\mi{en}(N,M_2) = \{ t \in T \mid
\pre{t} \cap P_{\bullet} \neq \emptyset \ \wedge \
\pre{t} \cap P_{\emptyset} = \emptyset \ \wedge \
\pre{t} \cap P_{1} = \emptyset \ \wedge \
\pre{t} \cap P_{2} \neq \emptyset
\}$.
This follows directly from Definition~\ref{def:cfpair}.
{\bf Selecting clusters $C_1$ and $C_2$.}
Pick an arbitrary transition enabled in $M_1$:
$t^{pick}_1 \in \mi{en}(N,M_1)$.
Call the corresponding cluster $C_1$ (i.e., $t^{pick}_1 \in C_1$).
Cluster $C_1$ is fully marked in $M_1$,
but has at least one unmarked place in $M_2$.
$P^{unmrk} = C_1 \cap P_{1}$ is the non-empty set of such places.
To reach the home marking from $M_2$, we need to execute a transition in cluster $C_1$, because $C_1$ is partially marked in $M_2$ and its tokens can only be removed by firing a transition of $C_1$.
Hence, there needs to be a firing sequence that marks the places in $P^{unmrk}$. Let $\sigma_{en}$ be a shortest firing sequence
starting in $M_2$ and marking a place in $P^{unmrk}$.
$\sigma_{en}$ starts with a transition enabled in $M_2$ and ends with a transition putting the first token in $P^{unmrk}$ (the transition may also mark other places in $P^{unmrk}$).
Let $t^{pick}_2 \in \mi{en}(N,M_2)$ be the first transition in this shortest sequence $\sigma_{en}$.
Given this firing sequence we can ``follow a token''
from $t^{pick}_2$ to a place in $P^{unmrk}$.
This provides a path starting in $t^{pick}_2$ and ending in the first place marked in $P^{unmrk}$.
This path contains a subset of transitions in $\sigma_{en}$.
Obviously, such a path must exist, but there may be many candidates.
The cluster where this path starts is called $C_2$ (i.e., $t^{pick}_2 \in C_2$).
There exists a place $p^{mrk} \in C_2 \cap P_{\bullet}$ in this cluster that is marked in both $M_1$ and $M_2$
($t^{pick}_2$ is enabled in $M_2$ and at least one of its input places must also have a token in $M_1$, since $(M_1,M_2)$ is a conflict-pair).
{\bf Selecting two rooted disentangled paths $\rho'$ and $\rho''$.}
We use the three clusters $C$, $C_1$, and $C_2$ to prove the contradiction. There is a path from $C_2$ to $C_1$ and a path from $C_1$ to $C$. Note that $C_1$ and $C_2$ need to be different due to the disagreement tokens.
Also $C$ is different from both $C_1$ and $C_2$, since it is not possible to mark the home cluster completely and still have tokens in other places (use Corollary~\ref{corr:dommark}).
Due to Lemma~\ref{lem:exstrdespath}
there must be a $C_1$-rooted disentangled path starting in $p^{mrk}$. Let us call this path
$\rho' = \langle p_1',t_1',p_2', \ldots , t_{n-1}',p_n' \rangle$.
$p_1' = p^{mrk}$ and $p_n'$ is a place in cluster $C_1$.
Assume that the construction described in Lemma~\ref{lem:exstrdespath} is used, i.e.,
all transitions in $\rho'$ also appear in $\sigma_{en}$ (but the reverse does not need to hold since we follow a token and take shortcuts to ensure that each cluster appears only once).
For clarity, we refer to the end place of $\rho'$ as $p^{conn}$, i.e.,
$p^{conn} = p_n'$.
Due to Corollary~\ref{corr:pathexists}
there must also be a $C$-rooted disentangled path starting in $p^{conn}$ ($p^{conn}$ is non-dead in $(N,M)$).
Let us call this path
$\rho'' = \langle p_1'',t_1'',p_2'', \ldots , t_{m-1}'',p_m'' \rangle$.
$p_1'' = p^{conn}$ and $p_m''$ is a place in cluster $C$.
For clarity, we refer to this place as $p^{end}$, i.e.,
$p^{end} = p_m''$.
Hence, we have
a $C_1$-rooted disentangled path $\rho'$ starting in $p^{mrk}$ and ending in $p^{conn}$ and
a $C$-rooted disentangled path $\rho''$ starting in $p^{conn}$ and ending in $p^{end}$.
{\bf Creating another rooted disentangled path $\rho'''$ by combining $\rho'$ and $\rho''$.}
Consider now the
path $\rho''' =
\langle p^{mrk},t_1',p_2', \ldots , t_{n-1}',p^{conn},
t_1'',p_2'', \ldots , t_{m-1}'',p^{end}
\rangle$, i.e., the concatenation of the paths $\rho'$ and $\rho''$.
We will show that $\rho'''$ is a $C$-rooted disentangled path starting in $p^{mrk}$ and ending in $p^{end}$.
Obviously, $\rho'''$ is also a path of $N$.
However, we also need to show that $\rho'''$
does not contain elements that belong to the same cluster.
If this is not the case there must be a place $p_i'$ in $\rho'$ with $1 \leq i < n$ and a place $p_j''$ in $\rho''$ with
$1 \leq j \leq m$ that belong to the same cluster. (Note that $\rho'$ and $\rho''$ do not visit the same cluster twice when considered separately, and $p_n'= p^{conn} = p_1''$ is in both so should not be compared with itself.)
However, this is impossible.
Assume there would be a cluster $C'$
with $p_i' \in C'$ and $p_j'' \in C'$.
Then a transition of this cluster should appear in $\sigma_{en}$.
Recall that we assume that the construction described in Lemma~\ref{lem:exstrdespath} is used to create $\rho'$, i.e., all transitions in $\rho'$ also appear in $\sigma_{en}$.
Let $t' \in C'$ be such a transition appearing in both $\sigma_{en}$
and $\rho'$, consuming tokens from both $p_i'$ and $p_j''$.
When starting in marking $M_2$ and executing $\sigma_{en}$, transition $t'$ occurs before any transition in $C_1$.
Consider the marking $M'$ just before $t'$ occurs, i.e., starting in $M_2$ a prefix of $\sigma_{en}$ is executed enabling $t'$ without executing any transition in $C_1$.
There exists a place $p^{alt} \in C_1 \cap P_{\bullet}$, because $C_1$ is fully marked in $M_1$ and partially marked in $M_2$.
In marking $M'$, both $p_j''$ and $p^{alt}$ are marked.
$p_j''$ is marked because $t'$ is enabled.
$p^{alt}$ is marked because no transition in $C_1$ fired yet.
However, there is also a $C$-rooted disentangled path
starting in $p^{alt}$, namely
$\rho^{alt} = \langle p^{alt},t_1'',p_2'', \ldots ,p_j'', \ldots t_{m-1}'',p^{end} \rangle$
(we can start in an arbitrary place in $C_1$ and still meet all requirements, note that compared to $\rho''$, $p^{conn}$ is replaced by $p^{alt}$).
Lemma~\ref{lem:pathsafety} shows that it is impossible
to have two marked places $p_j''$ and $p^{alt}$ in the
$C$-rooted disentangled path $\rho^{alt}$, leading to a contradiction.
Therefore, $\rho'''$ does not visit the same cluster multiple times (if so, $\rho''$ would not be a $C$-rooted disentangled path).
Hence, $\rho'''$ is a $C$-rooted disentangled path
starting in $p^{mrk}$ and ending in $p^{end}$.
{\bf The combined rooted disentangled path $\rho'''$ is not safe leading to a contradiction.}
Now consider the just constructed $C$-rooted disentangled path $\rho'''$ and marking $M_1$.
The places $p^{mrk}$ and $p^{conn}$ are both marked in $M_1$ and must be different.
Recall that $p^{mrk} \in C_2 \cap P_{\bullet}$ (i.e., also marked in $M_1$)
and $p^{conn} \in C_1$ (all places in $C_1$ are marked in $M_1$).
Again we apply Lemma~\ref{lem:pathsafety}, which shows that it is impossible to have two marked places
in the $C$-rooted disentangled path $\rho'''$.
Therefore, we find another contradiction, showing that
the conflict-pair $(M_1,M_2)$ cannot exist.
\end{proof}
Our goal was to show that marked proper free-choice nets having a home cluster
are lucent, and this follows directly from the previous results.
\begin{corollary}[Home Clusters Ensure Lucency]\label{corr:home-lu}
Let $(N,M)$ be a marked proper free-choice net having a home cluster. $(N,M)$ is lucent.
\end{corollary}
\begin{proof}
This follows directly from Lemma~\ref{lem:cflucent} and Theorem~\ref{theo:home-cp}.
A marked proper free-choice net having a home cluster
has no conflict-pairs (Theorem~\ref{theo:home-cp})
and, therefore, must be lucent (Lemma~\ref{lem:cflucent}).
\end{proof}
$(N_1,M_1)$ in Figure~\ref{fig-intro-fc} and
$(N_5,M_5)$ in Figure~\ref{fig-home-lucent} are examples of
free-choice nets having a home cluster and these are indeed lucent.
$(N_4,M_4)$ depicted in Figure~\ref{fig-fc-nonlucid} is
not lucent and indeed has no home cluster.
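To make the notion concrete: for a small safe net, lucency can be checked by brute force over the reachability graph, looking for a conflict-pair, i.e., two distinct reachable markings that enable the same set of transitions. The following Python sketch is illustrative only (all names are our own; it assumes a safe net given by the pre- and post-sets of its transitions) and is feasible only for nets with a small state space:

```python
from collections import deque

def enabled(pre, marking):
    """Transitions whose input places are all marked."""
    return frozenset(t for t, inputs in pre.items() if inputs <= marking)

def fire(pre, post, marking, t):
    """Fire t in a safe net: consume input tokens, produce output tokens."""
    return frozenset((marking - pre[t]) | post[t])

def is_lucent(pre, post, m0):
    """Explore all reachable markings; the net is lucent iff no two of
    them enable exactly the same set of transitions (no conflict-pair)."""
    seen, queue, by_enabled = {m0}, deque([m0]), {}
    while queue:
        m = queue.popleft()
        en = enabled(pre, m)
        if by_enabled.setdefault(en, m) != m:
            return False  # conflict-pair: two markings share one enabled set
        for t in en:
            m2 = fire(pre, post, m, t)
            if m2 not in seen:
                seen.add(m2)
                queue.append(m2)
    return True
```

For instance, a net in which two transitions move a token from $p_1$ to either $p_2$ or $p_3$, while a second token rests on an unrelated place $x$, is not lucent: the distinct markings $\{p_2,x\}$ and $\{p_3,x\}$ enable the same set of transitions.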
\section{Relation To Perpetual Nets}
\label{sec:perpmarknets}
This paper significantly extends the results for \emph{perpetual} marked
free-choice nets presented in \cite{lucent-PN2018}.
These nets need to be live, bounded, and have a home cluster,
whereas in this paper, we only require the latter (but boundedness is implied).
Moreover, unlike \cite{lucent-PN2018} the setting is not limited to strongly-connected nets,
e.g., we allow for workflow nets and other types of Petri nets typically used in
process mining, workflow management, and business process management.
\begin{definition}[Perpetual Marked Nets \cite{lucent-PN2018}]\label{def:perpmn}
A marked Petri net $(N,M)$ is a perpetual net if and only if
it is live, bounded, and has a home cluster.
\end{definition}
In this paper, we focus on marked proper free-choice nets having a home cluster.
Since boundedness is implied, the essential difference is the liveness requirement that we dropped.
None of the lucent Petri nets shown in this paper is live,
showing that this is a substantial generalization.
For example, $(N_1,M_1)$ in Figure~\ref{fig-intro-fc} and
$(N_5,M_5)$ in Figure~\ref{fig-home-lucent}
are lucent but not perpetual.
Lemma~\ref{lem:cflucent} and Theorem~\ref{theo:home-cp} (combined in Corollary~\ref{corr:home-lu}) can be used to show that
$(N_1,M_1)$ and $(N_5,M_5)$ are lucent.
\begin{table}[htb!]
\centering
\begin{tabular}{|p{1.5cm}|p{3.0cm}||c|c|}
\hline
\multicolumn{2}{|p{4.7cm}||}{class of nets for which lucency is proven to hold} & \parbox[t]{4cm}{marked proper free-choice nets having a home cluster (this paper)} &
\parbox[t]{4cm}{perpetual nets (free-choice, live, bounded, and having home cluster) \cite{lucent-PN2018}} \\
\hline \hline
structural & proper & \checkmark & \checkmark ~ (implied)\\ \cline{2-4}
properties & strongly-connected & - & \checkmark ~ (implied)\\
\hline \hline
dynamic & bounded & \checkmark ~ (implied) & \checkmark\\ \cline{2-4}
properties & live & - & \checkmark\\ \hline
\end{tabular}
\caption{Corollary~\ref{corr:home-lu} extends the results in \cite{lucent-PN2018} to nets that may be non-live and not strongly-connected (requirements are denoted by $\checkmark$).}\label{tab:prop}
\end{table}
Theorem~3 in \cite{lucent-PN2018} states that any perpetual marked free-choice net is lucent.
Corollary~\ref{corr:home-lu} generalizes this statement, as shown in Table~\ref{tab:prop}.
In the remainder of this section, we relate both settings.
\begin{proposition}[Perpetual Nets Are a Subclass of Free-Choice Nets Having a Home Cluster]\label{lemma:implication}
Let $(N,M)$ be a marked free-choice net.
If $(N,M)$ is perpetual, then $(N,M)$ is proper and has a home cluster.
\end{proposition}
\begin{proof}
A marked free-choice net $(N,M)$ is a perpetual net if and only if
it is live, bounded, and has a home cluster.
Hence, we only need to show that $(N,M)$ is proper.
This follows directly from the fact that well-formed nets are strongly-connected (Theorem 2.25 in \cite{deselesparza}).
\end{proof}
The reverse does not need to hold, as is demonstrated by Figures~\ref{fig-intro-fc} and
\ref{fig-home-lucent}.
The proof of Theorem~3 in \cite{lucent-PN2018} is also incomplete.
The proof in \cite{lucent-PN2018} can be repaired, but this requires reasoning over a stacked array of P-components, making things overly complicated.
It is also possible to use a different approach using a so-called T-reduction showing the absence of conflict pairs,
see Theorem~6 in \cite{reduction-wvda-PN2021}.
In a T-reduction proper $t$-induced T-nets are ``peeled off'' until a T-net (i.e., marked graph)
remains (this is related to the notion of CP-nets used in \cite{deselesparza}).
The reduction preserves liveness, boundedness, perpetuality, pc-safeness, and other properties.
Starting from a perpetual well-formed free-choice net and a T-reduction, it can be shown that
lucency is preserved in the ``upstream'' direction.
Since for marked graphs it is easy to show lucency,
this implies that any perpetual marked free-choice net is lucent.
Selected results from Section~\ref{sec:arelucent} can also be used to repair the proof in \cite{lucent-PN2018} in a more direct manner
without using existing results for well-formed free-choice nets.
In this more limited setting, our approach can be further simplified by exploiting safeness and liveness.
For strongly-connected marked free-choice nets, having a home cluster implies perpetuality (i.e., liveness and boundedness are implied).
Moreover, such nets are also safe.
\begin{proposition}[Properties of Strongly-Connected Free-Choice Nets Having a Home Cluster]\label{prop:live-safe}
A strongly-connected marked free-choice net $(N,M)$
having a home cluster $C$ is live, safe, and lucent.
\end{proposition}
\begin{proof}
Let $(N,M)$ be a strongly-connected marked free-choice net having a home cluster $C$.
$N$ is proper because $N$ is strongly-connected.
Hence, we can apply Corollary~\ref{corr:safe} to show that $(N,M)$ is safe.
Corollary~\ref{corr:home-lu} can be used to show that $(N,M)$ is lucent.
Any transition $t$ is on a path starting in $C$.
It is possible to create a firing sequence starting in $\mi{Mrk}(C)$ enabling $t$ by following this path.
This is due to the free-choice property and the fact that we cannot ``get stuck on the way'' (it is always possible to return to $\mi{Mrk}(C)$).
See the proof of Lemma~\ref{lem:pathsafety} for a similar reasoning.
Hence, $(N,M)$ is live.
\end{proof}
To explore the relationship between both settings in more detail,
we take a proper Petri net with a safe initial marking $M$ and a selected cluster $C$.
We add a transition $t_C$ that extends the cluster and that marks all places in $M$,
i.e., $\pre{t_C} = C \cap P$ and $\post{t_C} = \{p \in M\}$.
$t_C$ short-circuits the original net in an attempt to make it strongly-connected.
To achieve this, we also need to remove the nodes for which there is no path from the initially marked places.
\begin{definition}[Short-Circuited Cleaned Nets]\label{def:short-circ}
Let $N= (P,T,F)$ be proper Petri net having a cluster $C$ and an initial marking $M$ that is safe.
\begin{itemize}[noitemsep,topsep=2pt]
\item $\mi{conn}(N,M) = \{ x_n \mid \langle x_1,x_2, \ldots ,x_n \rangle \in \mi{paths}(N) \ \wedge \ x_1 \in M \}$ are all nodes that are on a path starting in an initially marked place.
\item $\mi{clean}(N,M) = (P',T',F'\cap ((P' \times T') \cup (T' \times P')))$ with
$P' = P \cap \mi{conn}(N,M)$, and $T' = T \cap \mi{conn}(N,M)$ is the net containing all places and transitions on paths starting in an initially marked place.
\item $\mi{short\_circ}(N,C,M) = (P,T\cup \{t_C\},F \cup (\mi{Pl}(C) \times \{t_C\})
\cup (\{t_C\}\times \{p \in M\}) )$ is the short-circuited net (adding a ``fresh'' transition $t_C \not\in T$
with $\pre{t_C} = C \cap P$ and $\post{t_C} = \{p \in M\}$).
\item $N_{C,M} = \mi{short\_circ}(\mi{clean}(N,M),\allowbreak C,M)$ applies the two operations in sequence.
\item $\hat{C} = C \cup \{t_C\}$ is used to denote the extended cluster (note that this is only a cluster of $N_{C,M}$ if $C \subseteq \mi{conn}(N,M)$).
\end{itemize}
\end{definition}
In an attempt to create a strongly-connected net, we first remove all ``dead nodes'' and then short-circuit the net by connecting a selected cluster to the initially marked places.
If all nodes of $C$ are on a path starting in an initially marked place, then
$\hat{C} = C \cup \{t_C\}$ is indeed a cluster of $N_{C,M}$ (otherwise not).
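Definition~\ref{def:short-circ} is constructive, and the two operations can be sketched directly. In the following Python fragment (names and data representation are our own, not part of the formal development) a net is a triple of places, transitions, and arcs; $\mi{conn}(N,M)$ is computed by a forward graph search from the initially marked places:

```python
def clean(places, transitions, arcs, marking):
    """conn(N, M): keep only nodes on a directed path that starts
    in an initially marked place."""
    succ = {}
    for x, y in arcs:
        succ.setdefault(x, []).append(y)
    reached, stack = set(marking), list(marking)
    while stack:
        for y in succ.get(stack.pop(), []):
            if y not in reached:
                reached.add(y)
                stack.append(y)
    P, T = places & reached, transitions & reached
    F = {(x, y) for (x, y) in arcs if x in P | T and y in P | T}
    return P, T, F

def short_circ(places, transitions, arcs, cluster_places, marking, t_C="t_C"):
    """Add a fresh transition t_C that consumes all places of the chosen
    cluster and reproduces the initial marking."""
    assert t_C not in transitions            # t_C must be fresh
    F = set(arcs)
    F |= {(p, t_C) for p in cluster_places}  # pre-set: Pl(C)
    F |= {(t_C, p) for p in marking}         # post-set: initially marked places
    return places, transitions | {t_C}, F
```

Applied to a simple workflow net $i \to a \to p \to b \to o$ with marking $\{i\}$ and cluster $\{o\}$, this adds the arcs $(o,t_C)$ and $(t_C,i)$, short-circuiting the net.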
\begin{proposition}[Short-Circuited Cleaned Nets Are Strongly-Connected]\label{prop:scc-nets-are-cc}
Let $(N,M)$ be a safely marked proper free-choice net having a cluster $C$ such that $C \subseteq \mi{conn}(N,M)$.
The short-circuited cleaned net $N_{C,M} = \mi{short\_circ}(\mi{clean}(N,M),\allowbreak C,M)$ is
strongly-connected and free-choice, and
$\hat{C} = C \cup \{t_C\} \in \cluster{N_{C,M}}$
(i.e., $\hat{C}$ is indeed a cluster of $N_{C,M}$).
\end{proposition}
\begin{proof}
All nodes in $\mi{clean}(N,M)$ are reachable from an initially marked place
(including the nodes in $C$ because $C \subseteq \mi{conn}(N,M)$).
Hence, $t_C$ is also reachable from an initially marked place,
and $t_C$ is connected back to this place (since $\post{t_C} = \{p \in M\}$). Therefore, the net is strongly-connected.
Adding $t_C$ cannot destroy the free-choice property.
If there is a transition $t \in C$, then $\pre{t}=\pre{t_C}$.
If not, then $C$ has just one place.
Therefore, $N_{C,M}$ is free-choice and has a new cluster $\hat{C} = C \cup \{t_C\}$.
\end{proof}
Under the assumption that cluster $C$ is preserved when short-circuiting the net, $C$ is a home cluster of $(N,M)$ if and only if $\hat{C}$ is a home cluster of $(N_{C,M},M)$.
Moreover, this is equivalent to $(N_{C,M},M)$ being live and bounded,
and can be used to decide whether a free-choice net has a home cluster in polynomial time.
\begin{theorem}[Relating Both Settings]\label{theo:relation}
Let $(N,M)$ be a safely marked proper free-choice net having a cluster $C$ such that $C \subseteq \mi{conn}(N,M)$. The following three statements are equivalent:
\begin{itemize}[noitemsep,topsep=2pt]
\item[(1)] $C$ is a home cluster of $(N,M)$,
\item[(2)] $\hat{C}$ is a home cluster of $(N_{C,M},M)$, and
\item[(3)] $(N_{C,M},M)$ is live and bounded.
\end{itemize}
\end{theorem}
\begin{proof}
Let $N= (P,T,F)$ be a proper free-choice net having a cluster $C$
and an initial marking $M$ that is safe.
$N_{C,M} = \mi{short\_circ}(\mi{clean}(N,M),\allowbreak C,M)$ and $t_C$ is the short-circuiting transition.
$C \subseteq \mi{conn}(N,M)$, i.e., all cluster nodes are reachable from an initially marked place.
First, we show that (1) $\Rightarrow$ (2).
Assume that $C$ is a home cluster of $(N,M)$.
Under this assumption, we consider the reachable markings of $(N_{C,M},M)$.
These include the markings of $(N,M)$, but nothing more.
The moment all places in $C$ are marked, the other places are empty.
In $(N_{C,M},M)$ there is an additional transition $t_C$
that is enabled if all places in $\hat{C}$ are marked.
If $t_C$ fires in $\mi{Mrk}(C) = \mi{Mrk}(\hat{C})$,
then we reach the initial state $M$ again.
Hence, the set of reachable markings is the same and
$\hat{C}$ is a home cluster of $(N_{C,M},M)$.
Second, we show that (2) $\Rightarrow$ (3).
Let $\hat{C}$ be a home cluster of $(N_{C,M},M)$.
Proposition~\ref{prop:scc-nets-are-cc} shows that
$N_{C,M}$ is strongly-connected and free-choice.
Using Proposition~\ref{prop:live-safe} this implies that
$(N_{C,M},M)$ is live and safe (i.e., also bounded).
Finally, we show that (3) $\Rightarrow$ (1).
Let $(N_{C,M},M)$ be live and bounded.
This implies that $t_C$ is also live and can repeatedly be enabled.
When $t_C$ is enabled, the places in $\hat{C}$ are marked, i.e.,
$t_C$ can only be enabled in a marking $M'$ such that $M' \geq \mi{Mrk}(C)$.
It is impossible that $M' > \mi{Mrk}(C)$: if so, firing $t_C$ would yield a marking strictly larger than the initial marking, and repeating this would make the net unbounded.
Hence, $M' = \mi{Mrk}(C)$ is the only reachable marking enabling $t_C$.
Therefore, the set of reachable markings of $(N_{C,M},M)$ and $(N,M)$ are the same. As a result, $\mi{Mrk}(C)$ can be reached from any reachable marking starting in $(N,M)$. This implies that $C$ is a home cluster of $(N,M)$.
Combining (1) $\Rightarrow$ (2), (2) $\Rightarrow$ (3), and (3) $\Rightarrow$ (1)
shows that the three statements are equivalent.
\end{proof}
We can apply Theorem~\ref{theo:relation} to all clusters of the net.
Therefore, the problem of deciding whether a marked proper free-choice net has a home cluster
can be converted into a liveness and boundedness question, allowing us to solve the problem in polynomial time.
\begin{corollary}[Complexity of Home Cluster Detection]\label{corr:complex}
The following problem is solvable in polynomial time:
Given a marked proper free-choice net, to decide whether there is a home cluster.
\end{corollary}
\begin{proof}
Let $(N,M)$ be a marked proper free-choice net with $N=(P,T,F)$.
There are at most $\card{P}$ clusters. For each cluster $C$, we check whether
$C$ is a home cluster of $(N,M)$. This is the same as checking whether $C \subseteq \mi{conn}(N,M)$ and
$(N_{C,M},M)$ is live and bounded.
The former requirement is merely a syntactical check to ensure that
cluster $C$ is preserved when short-circuiting the net.
The latter requirement is known to be solvable in polynomial time
(see, for example, Corollary~6.18 in \cite{deselesparza}).
Hence, deciding whether there is a home cluster can also be solved in polynomial time.
\end{proof}
The above result is remarkable because it also applies to non-well-formed nets.
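The enumeration behind Corollary~\ref{corr:complex} first needs the clusters of the net. A cluster is the smallest set of nodes closed under taking the output transitions of its places and the input places of its transitions; over the place-to-transition arcs this is an ordinary connected-component computation. A sketch in Python using union-find (names are our own; the polynomial liveness/boundedness check itself is not shown):

```python
def clusters(places, transitions, arcs):
    """Compute the clusters of a net: the equivalence classes generated
    by relating each place to its output transitions and, transitively,
    each such transition to its input places."""
    parent = {x: x for x in places | transitions}

    def find(x):
        # path-halving union-find lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for x, y in arcs:
        if x in places:  # an arc p -> t puts p and t in the same cluster
            parent[find(x)] = find(y)
    groups = {}
    for x in places | transitions:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())
```

Deciding whether a home cluster exists then amounts to looping over at most $\card{P}$ such clusters and, for each, checking $C \subseteq \mi{conn}(N,M)$ and liveness/boundedness of the short-circuited net.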
\section{Conclusion}
\label{sec:concl}
This paper shows that marked proper free-choice nets
having a home cluster are lucent.
A system is \emph{lucent} if the set of enabled actions
uniquely characterizes the state of the system.
The user interface of an information system or
the worklist provided by a workflow management system offers
possible actions to its users.
If the system is not lucent, the system may behave differently in seemingly identical situations.
The notion of lucency was introduced in \cite{lucent-PN2018}
and, given its foundational nature, it is surprising
that this was not investigated before.
The paper focuses on \emph{marked proper free-choice nets
having a home cluster} and uses novel concepts such as \emph{rooted disentangled paths} and \emph{conflict-pairs} to reason about
the behavior of such models. Most of the work on free-choice nets
is restricted to well-formed nets. However, the liveness requirement is unsuitable for many application domains.
Many systems and processes are \emph{terminating} and/or have an \emph{initialization} phase. These are excluded by most of the existing work.
As shown in this paper, we can often short-circuit the net and apply existing techniques.
However, the approach used in this paper is direct without using any results for well-formed free-choice nets.
Future work aims to extend the class of systems for which lucency can be proven. However, this will not be easy since unbounded nets or nets with long-term dependencies are inherently non-lucent.
More promising is the further investigation of Petri nets with home clusters. Ideas such as rooted disentangled paths and conflict-pairs have a broader applicability and may be used to generalize some of the results known for well-formed (free-choice) Petri nets.
For example, is it possible to create reduction and synthesis rules?
The idea to look into lucency originated from challenges in the field of \emph{process mining} (where observed behavior without state information is converted into process models that have states).
What if event logs would not only show the actions executed, but also what was possible, but did not happen?
In \cite{lucent-translucent-fi-2019} the notion of translucent event logs is introduced, and a baseline discovery algorithm is given. Given such information, it is much easier to discover process models. Another direction for future research is to create process mining techniques tailored towards discovering a marked proper free-choice net having a home cluster from a standard event log. Current approaches often aim to discover workflow nets that are (relaxed) sound.
Heuristic approaches do not ensure soundness.
Region-based techniques tend to create unreadable models.
Inductive mining techniques tend to produce underfitting models.
Therefore, there is room for exploring alternative representational biases in process mining.
~\\
{\bf Acknowledgements}: The author thanks the Alexander von Humboldt (AvH) Stiftung for supporting our research.
Special thanks go to the persistent anonymous reviewer for providing detailed comments that helped to improve the readability of the proofs.
\bibliographystyle{fundam}
\defcitealias{Schlafly2014}{Schlafly et al.}
The archetypal giant molecular cloud (GMC) Orion\,A is the most active star-forming region in the local neighborhood, having spawned $\sim$3,000 young stellar objects (YSOs) in the last few million years \citep[e.g.,][]{Megeath2012, Furlan2016, Grossschedl2018}. Some of the most basic observables of the star-formation process, including star-formation rates and history, age spreads, multiplicity, the initial mass function, and protoplanetary disk populations, have been derived for this benchmark region \citep[e.g.,][]{Bally2008,Muench2008}. Previous distance estimates to the Orion Nebula Cluster (ONC), the richest cluster toward the northern end of the cloud, put this object at about \SI{400}{pc} from Earth \citep[e.g.,][]{Sandstrom2007, Menten2007, Hirota2007, Kim2008, Kuhn2018}. Moreover, there has been some evidence that the northern part of the cloud, including the ONC (or ``Head''), is closer than the southern part (or ``Tail'')\footnote{For simplicity we classify the Orion\,A cloud roughly into Head and Tail; the Tail represents the less star-forming part.}, containing L1641 and L1647 \citep{Schlafly2014, Kounkel2017, Kounkel2018}.
\begin{table*}[!ht]
\begin{center}
\small
\caption{Distances to sub-regions in Orion\,A from the Literature.}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{llll}
\hline \hline
\multicolumn{1}{c}{Reference} & \multicolumn{1}{c}{Method} & \multicolumn{1}{c}{Region} & \multicolumn{1}{c}{Distance} \\
 & & & \multicolumn{1}{c}{(pc)} \\
\hline
\citet{Genzel1981} & proper motion and radial velocity & Orion KL & $480\pm80$ \\
& of H$_2$O masers & & \\
\citet{Hirota2007} & VERA/VLBI & Orion KL & $437\pm19$ \\
\citet{Menten2007} & VLBI & ONC & $414\pm7$ \\
\citet{Sandstrom2007} & VLBI & ONC & $389^{+24}_{-21}$ \\
\citet{Kim2008} & VERA/VLBI & Orion KL & $418\pm6$ \\
\citet{Lombardi2011} & density of foreground stars & Orion\,A & $371\pm10$ \\
\hline
\citet{Schlafly2014}\tablefootmark{a} & PanSTARRS optical reddening &
$(l/b)$ at $(\SI{208.4}{\degree},\SI{-19.6}{\degree})$ north of the ONC & $418^{+43}_{-34}$ \\
& \citep{Green2014} & $(l/b)$ at $(\SI{209.1}{\degree},\SI{-19.9}{\degree})$ west of the ONC & $478^{+84}_{-59}$ \\
& & $(l/b)$ at $(\SI{209.0}{\degree},\SI{-20.1}{\degree})$ west of the ONC & $416^{+42}_{-36}$ \\
& & $(l/b)$ at $(\SI{209.8}{\degree},\SI{-19.5}{\degree})$ north to L1641-North & $580^{+161}_{-107}$ \\
& & $(l/b)$ at $(\SI{212.2}{\degree},\SI{-18.6}{\degree})$ east to L1641-South & $490^{+27}_{-27}$ \\
& & $(l/b)$ at $(\SI{212.4}{\degree},\SI{-19.9}{\degree})$ west to L1641-South & $517^{+44}_{-38}$ \\
& & $(l/b)$ at $(\SI{214.7}{\degree},\SI{-19.0}{\degree})$ south-east of L1647-South & $497^{+42}_{-36}$ \\
\hline
\citet{Kounkel2017}\tablefootmark{a} & VLBI & 15 YSOs near the ONC & $388\pm5$ \\
& & 2 YSOs near L1641-South & $428\pm10$ \\
\hline
\citet{Kounkel2018} & {\it Gaia} DR2 of APOGEE-2 sources & ONC & $386\pm3$ \\
& + HR-diagram selection & L1641-South & $417\pm4$\ \\
& & L1647 & $443\pm5$ \\
\hline
\citet{Kuhn2018} & {\it Gaia} DR2 of {\it Chandra} X-ray sources & ONC & $403^{+7}_{-6}$ \\
& & north and south to ONC & $\sim395$ \\
\hline
\end{tabular}
\renewcommand{\arraystretch}{1}
\label{tab:literature}
\tablefoot{
\tablefoottext{a}{See also Fig.~\ref{fig:scatter_literature}.}
}
\end{center}
\end{table*}
To know the true 3D spatial shape and orientation of this giant filamentary structure would allow one not only to determine accurate cloud and YSO masses, luminosities, and separations for this benchmark region, but it would also bring important hints on the formation of GMCs in the disk of the Milky Way. \citet{Schlafly2014} first found an indication of a distance gradient across Orion\,A (see Table~\ref{tab:literature}), using a method based on optical reddening of stars \citep{Green2014} which is not sensitive to regions of high column-density. \citetalias{Schlafly2014} found that the Tail of the cloud is about 70\,pc more distant than the ONC region. \citet{Kounkel2017} conducted 15 VLBI observations toward young stars near the ONC, and two observations toward L1641-South. These observations again suggest an inclination of the cloud away from the plane of the sky, with a difference in distance of about 40\,pc from Head to Tail (until L1641-South). The distances reported in \citet{Schlafly2014} and \citet{Kounkel2017} are presented in Fig.~\ref{fig:scatter_literature} and in Table~\ref{tab:literature}. \citet{Kounkel2018} continued the analysis of this region by using new APOGEE-2 data combined with the newly released {\it Gaia} DR2 catalog \citep{Brown2018}. In this recent paper, they focus on stellar populations and the star-formation history across the whole Orion complex in a high dimensional space using a clustering algorithm. They report a more distant Tail compared to the Head (about 55\,pc distance difference).
In this paper we have used the newly released {\it Gaia} DR2 to infer the 3D shape and orientation of Orion\,A. As a proxy for the cloud distance we will use the latest catalog of mid-infrared selected YSOs in this cloud, with ages $\lesssim 3$ Myr, for which a {\it Gaia} DR2 parallax measurement exists. These very young stars lie relatively close to, or are still embedded in, the Orion\,A GMC, sharing the same velocity as the cloud \citep{Tobin2009, Hacar2016}, and are thus the best tracers of the cloud's shape.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{Figure1_scatter.pdf}
\caption{
{\it Gaia} DR2 $\varpi$ of YSOs with IR-excess in Orion\,A versus $l$ (top, $\sigma_\varpi$ as error-bars), and projected YSO distribution displayed on the {\it Herschel} map (bottom). Red are YSOs that pass the applied selection criteria as discussed in the first two steps in Sect.~\ref{Data}. The blue sources represent the sources lost when the flux-excess-cut is applied. This highlights that nebulae (near the ONC, see map) cause additional $\varpi$ uncertainties, not reflected in $\sigma_\varpi$.
The 1D distribution of $\varpi$ for both samples is shown in the histogram on the right. The red and blue middle lines show the median $\varpi$ of the samples.
The lower and upper borders (black dashed lines) indicate the applied distance cuts to avoid possible foreground or background contamination when deducing the average distances.
The middle gray line shows the distance to the ONC of 414\,pc \citep{Menten2007}, while the gray shaded band represents the 2D projected size of the cloud of about 40\,pc at 414\,pc.
}
\label{fig:scatter}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.95\linewidth]{Figure2_hist_Gmag.pdf}
\caption{Histogram of {\it Gaia} DR2 $G$-band magnitudes. The gray distribution shows all YSOs toward Orion\,A with measured {\it Gaia} DR2 parallaxes. The red and blue distributions show the YSO samples that pass our required selection criteria, while we distinguish sources with (red) and without (blue) flux-excess-cut (see also Fig.~\ref{fig:scatter}).}
\label{fig:mag_hist}
\end{figure}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.95\linewidth]{Figure3_mean.pdf}
\caption{
Top: Distance estimates ($1/\varpi$) for YSOs in Orion\,A versus $l$ and their average distances per $\Delta l$. YSOs are shown as red dots with error-bars ($\sigma_\varpi/\varpi^2$). Over-plotted are the median (blue diamonds) and mean (orange circles) distances per $\Delta l = \SI{1}{\degree}$ (blue/green vertical lines on the bottom map correspond to the bin boundaries, factor two over-sampled). The 1$\sigma$ and 2$\sigma$ lower and upper percentiles are shown as blue shaded areas. The horizontal gray line represents the \citet{Menten2007} distance to the ONC at $\SI{414}{pc}$ with a range of $\SI{\pm 20}{pc}$ (gray shaded area) corresponding to the projected extent in $l$ of the cloud ($\SI{\sim40}{pc}$ at $\SI{414}{pc}$).
Bottom: Distribution of the YSOs in Galactic coordinates projected on the {\it Herschel map}. The displayed area corresponds to the VISTA observed region \citep{Meingast2016}. The dark shades of the gray scale indicate regions of high dust column-density (or high extinction). The distribution of YSOs follows the high density regions of the cloud, shown by their mean($b$) positions per $\Delta l$ (orange squares).
}
\label{fig:mean}
\end{figure*}
\section{Observations and data selection} \label{Data}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{Figure4_XY_stretch.pdf}
\caption{
YSO distribution and the YSOs' mean positions projected onto a Cartesian plane. In this coordinate frame the Sun (at $X,Y,Z=0,0,0$) is connected to the location of Orion\,A, with $X_{Orion}$ pointing toward $(l/b) = (\SI{211}{\degree}/\SI{-19.5}{\degree})$. Consequently, $X_{Orion}$ is similar to the distance from the Sun, while the $Y_{Orion}$ and $Z_{Orion}$ components coincide roughly with the $l$ and $b$ distribution, respectively.
The YSOs are shown in gray scale colored by $\sigma_\varpi$.
The mean positions per $\Delta l$ are shown as orange filled circles (as in Fig.~\ref{fig:mean}), while the two rightmost points from Fig.~\ref{fig:mean} are excluded. The gray dashed lines are lines of constant $l$ as viewed from the Sun ($\SI{2}{\degree}$ steps from $l = \SI{206}{\degree}$ to $\SI{216}{\degree}$). For orientation, the numbered boxes show the mean positions of YSOs projected near eight known clusters, as listed in the bottom left legend. In brackets we give the estimated distances (derived from {\it Gaia} DR2 parallaxes), which are used to plot the boxes.
}
\label{fig:xy_stretch}
\end{figure}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.95\linewidth]{Figure5_xyz_cloud.pdf}
\caption{
3D orientation of the Orion\,A GMC in Galactic cartesian coordinates ($X$ positive toward the Galactic center, $Y$ positive toward the Galactic east, $Z$ positive toward the Galactic north). The orange dots represent the mean positions of YSOs per $\Delta l$ (see Fig.~\ref{fig:mean}), while only using those on top of high column-density. The gray shaded area shows an idealized 3D cloud shape in each projection at $A_\mathrm{K,Herschel}\gtrsim\SI{0.57}{mag}$ ($A_V\gtrsim\SI{5}{mag}$), assuming a symmetric cylindrical shape, meaning that the filament is as deep as it is wide in the sky. For orientation, the black arrows indicate the line-of-sight from the Sun. Each arrow points toward $(l,b) = (211.0, -19.5)$ plotted from $d=$ 380\,pc to 390\,pc.
}
\label{fig:xyz}
\end{figure*}
We have used the Orion\,A YSO catalog of \citet{Grossschedl2018} which revisited the catalog of \citet{Megeath2012} \citep[including updates from][]{Megeath2016, Furlan2016, Lewis2016}, and added about 200 new YSO candidates from a dedicated ESO-VISTA near-infrared survey covering the whole Orion\,A region \citep[$\SI{\sim18}{deg\square}$,][]{Meingast2016}, making it the most complete (2D) distribution of YSOs toward this cloud. The catalog contains 2,978 YSO candidates with IR-excess, classified into 2,607 pre-main-sequence stars with disks (Class\,II), 183 flat-spectrum sources, and 188 protostars (Class\,0/I). The on-sky distribution of these sources generally follows the high density regions of the cloud (see Fig.~\ref{fig:mean}, bottom). Combined with their youth, this makes them a good proxy for cloud distances.
To infer the distance along the Orion\,A GMC we averaged the YSOs' parallaxes ($\varpi$) in equally sized bins of Galactic longitude ($\Delta l$). To derive distances from parallaxes we have investigated both the inverse of $\varpi$ and the Bayesian distance estimates from \citet{BailerJones2018}, which account for the non-linearity of the transformation from parallax to distance. At a distance of 400\,pc, the mean difference between the two methods is about 1\% for DR2, meaning that the final result in this paper will be virtually independent of the method used to infer distances.
Moreover, we do not include a global parallax offset of 0.029\,mas or 0.08\,mas, as discussed in \citet{Lindegren2018} and \citet{Stassun2018}, since it is very uncertain how or if this effect influences the parallaxes of our sample toward Orion\,A. Besides, the presence of a small offset does not affect the result in this paper. To summarize, we work with parallaxes whenever possible, and only afterwards derive the mean or median distance as the inverse of the $\mathrm{mean}(\varpi)$ or $\mathrm{median}(\varpi)$.
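The binning described above reduces to averaging the parallaxes per longitude bin and inverting the averages. A minimal numpy sketch (the function name and the default bin range are our own, chosen to roughly cover Orion\,A):

```python
import numpy as np

def binned_distances(l_deg, parallax_mas, l_min=206.0, l_max=217.0, width=1.0):
    """Average the parallaxes per Galactic-longitude bin and convert the
    bin averages to distances via d [pc] = 1000 / varpi [mas]."""
    edges = np.arange(l_min, l_max + width, width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean_d, median_d = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (l_deg >= lo) & (l_deg < hi)
        if sel.any():
            mean_d.append(1000.0 / np.mean(parallax_mas[sel]))
            median_d.append(1000.0 / np.median(parallax_mas[sel]))
        else:  # empty bin
            mean_d.append(np.nan)
            median_d.append(np.nan)
    return centers, np.array(mean_d), np.array(median_d)
```

Averaging $\varpi$ first and inverting afterwards (rather than averaging individual $1/\varpi$ values) matches the procedure stated above.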
Before cross-matching the YSO sample with \textit{Gaia} DR2 data we checked the effect of proper motions on the cross-match. To this end, we transformed \textit{Gaia} J2015.5 coordinates to J2000. The effect toward Orion\,A is marginal, with a mean separation between J2015.5 and J2000 of $\SI{0.09}{\arcsec}$, smaller than the astrometric accuracy of the VISTA survey (observed in 2013). We then used a $\SI{1}{\arcsec}$ cross-match radius between the original {\it Gaia} J2015.5 and VISTA coordinates. This results in 1,986 cross-matches of DR2 parallaxes with IR-excess YSOs (67\% of the original YSO catalog).
Since we are interested in reliable anchor points along the cloud, and given the relatively good statistics, we chose conservative selection criteria for the final sample \citep[informed by][]{Babusiaux2018, Lindegren2018, Arenou2018, Evans2018}, which are described in the following three steps. In a first step we applied several cuts to obtain reliable parallax measurements:\footnote{Shortcuts for {\it Gaia} parameters: \\
$\varpi$: {\tt parallax} [mas] \\
$\sigma_\varpi$: {\tt parallax\_error} [mas] \\
$G$: {\tt phot\_g\_mean\_mag} [mag]\\
$\sigma_G$: 1.0857 / {\tt phot\_g\_mean\_flux\_over\_error} [mag]\\
$\epsilon_i$: {\tt astrometric\_excess\_noise} [mas]\\
$\mathrm{sig}\_\epsilon_i$: {\tt astrometric\_excess\_noise\_sig} (significance)
}
\begin{equation}
\begin{aligned}
& \sigma_\varpi/\varpi < 0.1 \\
& \sigma_G < \SI{0.03}{mag} \\
& \mathtt{astrometric\_sigma5d\_max} < \SI{0.3}{mas} \\
& \mathtt{visibility\_periods\_used} > 8 \\
& (\epsilon_i \leq \SI{1}{mas}) \,\,\,\mathrm{OR}\,\,\, (\epsilon_i > \SI{1}{mas} \,\,\,\mathrm{AND}\,\,\, \mathrm{sig}\_\epsilon_i \leq 2) \\
\end{aligned}
\end{equation}
Bright nebulosities and crowded regions can cause further uncertainties, which especially affect the ONC region. Hence, in a second step, we excluded sources showing a flux excess\footnote{using the ratio of the fluxes $(I_\mathrm{BP}+I_\mathrm{RP})/I_\mathrm{G}$:\\
$\mathtt{phot\_bp\_rp\_excess\_factor}$}, by applying the following flux-excess cut, similar to \citet{Evans2018}:
\begin{equation}
(I_\mathrm{BP}+I_\mathrm{RP})/I_\mathrm{G} > 1.35 + 0.06 (G_\mathrm{BP}-G_\mathrm{RP})^2
\end{equation}
This condition significantly reduces the distance scatter near the ONC (see Fig.~\ref{fig:scatter}),
but it does not significantly affect the averaged parallaxes along the cloud, since the scatter is more or less symmetric.
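The first two filtering steps can be sketched as follows, assuming each source is a dict of \textit{Gaia} archive columns (an illustrative reimplementation with the thresholds quoted above, not the authors' code):

```python
def passes_astrometric_cuts(s):
    """Step one: parallax-quality cuts on Gaia DR2 archive columns."""
    return (
        s["parallax_error"] / s["parallax"] < 0.1
        and 1.0857 / s["phot_g_mean_flux_over_error"] < 0.03   # sigma_G < 0.03 mag
        and s["astrometric_sigma5d_max"] < 0.3                 # mas
        and s["visibility_periods_used"] > 8
        and (s["astrometric_excess_noise"] <= 1.0
             or (s["astrometric_excess_noise"] > 1.0
                 and s["astrometric_excess_noise_sig"] <= 2))
    )

def passes_flux_excess_cut(s):
    """Step two: keep sources below the colour-dependent excess threshold
    (sources above it are excluded, similar to Evans et al. 2018)."""
    bp_rp = s["bp_rp"]  # G_BP - G_RP colour
    return s["phot_bp_rp_excess_factor"] <= 1.35 + 0.06 * bp_rp ** 2
```

A source must pass both filters (plus the distance-interval cut of step three below) to enter the final sample.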
Finally, in a third step, we used only sources in a distance interval of $\SI{300}{pc} < d < \SI{600}{pc}$ (that is, $\SI{3.333}{mas} \gtrsim \varpi \gtrsim \SI{1.667}{mas}$), since an examination of the parallax distribution (Fig.~\ref{fig:scatter}) shows a clear drop in the density of sources beyond these boundaries. This prevents contamination by outliers when averaging the parallaxes, as some sources appear as close as 100\,pc or as far as 1000\,pc. YSOs with distances deviating so strongly from the expected value near 400\,pc need further investigation, since the deviations can be caused either by uncertainties that are not reflected in $\sigma_\varpi$, or because these young stars are not associated with Orion\,A. The combined selection criteria leave us with a final tally of 682 YSOs with IR-excess (23\% of the original YSO catalog), consisting mainly of Class\,II sources (666 Class\,IIs, 16 flat-spectrum) (see Table~\ref{tab:catalog}).
The selected sources have observed $G$ band magnitudes within $\SI{6.3}{mag}<G<\SI{18.2}{mag}$ (see Fig.~\ref{fig:mag_hist}), which is in the range of the suggested magnitude limits \citep{Lindegren2018}\footnote{Bright sources with $G<\SI{6}{mag}$ have generally inferior astrometric quality. Faint sources with $G>\SI{18}{mag}$ are problematic in dense regions.}. As argued above, these sources are the youngest optically visible sources in Orion\,A and hence close to the cloud and a good proxy to the cloud distance.
\section{Results}
In Figure~\ref{fig:mean} we show the average distances, derived from averaged parallaxes of the YSOs in one-degree-wide bins of Galactic longitude ($\Delta l = \SI{1}{\degree}$, $\SI{\sim7}{pc}$ at $\SI{414}{pc}$, over-sampled by a factor of two). We do not weight the average by the parallax errors, given that we have already applied conservative error cuts.
The map (Figs.~\ref{fig:scatter} and \ref{fig:mean}, bottom) shows the YSO distribution projected in Galactic coordinates on a Planck-Herschel-Extinction dust column-density map \citep[][hereafter, {\it Herschel map}]{Lombardi2014}\footnote{We use a factor 3,050 to linearly convert $Herschel$ optical depth to extinction, as derived by \citet{Meingast2018}.}.
The distance variations in Fig.~\ref{fig:mean} (top) indicate that the Head of the cloud appears to be roughly on the plane of the sky at about $\SI{400}{pc}$ (for an extent of about $\SI{15}{pc}$ to $\SI{20}{pc}$), while the Tail, starting between $l \approx \SI{210}{\degree}$ and $\SI{211}{\degree}$ and reaching to $l\approx\SI{214.5}{\degree}$, extends from about $\SI{400}{pc}$ to about $\SI{470}{pc}$ along the line-of-sight. Thus, the Tail is inclined $\SI{\sim70}{\degree}$ away from the plane of the sky. Consequently, the Tail is about four times as long ($\sim$75\,pc) as the Head, leading to a total length of the Orion\,A filament of about 90\,pc.
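The quoted inclination follows from simple trigonometry: the Tail spans roughly $\SI{4}{\degree}$ on the sky ($\sim$30\,pc at $\sim$430\,pc) while extending $\sim$70\,pc along the line-of-sight. A back-of-the-envelope check (the rounded input values are ours, taken from the numbers above):

```python
import math

d_mean = 430.0                    # rough mean Tail distance (pc)
delta_l = 4.0                     # projected longitude span of the Tail (deg)
plane_of_sky = math.radians(delta_l) * d_mean   # ~30 pc on the sky
line_of_sight = 70.0              # distance spread, ~400 pc to ~470 pc

# angle away from the plane of the sky
inclination = math.degrees(math.atan2(line_of_sight, plane_of_sky))  # ~67 deg
# 3D length of the Tail
length_3d = math.hypot(plane_of_sky, line_of_sight)                  # ~76 pc
```

This reproduces the $\SI{\sim70}{\degree}$ inclination and the $\sim$75\,pc Tail length; adding the $\sim$15--20\,pc Head gives the total $\sim$90\,pc.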
The surprising extent of the cloud along the line-of-sight is visualized in Fig.~\ref{fig:xy_stretch}, where we project the YSO positions ($\sigma_\varpi$ as gray scale) in a cartesian plane as seen from the Sun, with $X_{Orion}$ pointing toward Orion\,A. Over-plotted we show the mean positions per bin (orange dots, as in Fig.~\ref{fig:mean}). The displayed mean positions were transformed into the cartesian coordinate system using the following positions: the mean YSO distances ($\bar{d}_\mathrm{YSOs}$), the mean Galactic latitude positions of the YSOs ($\bar{b}_\mathrm{YSOs}$), and the $\Delta l$ bin centers (see also Table~\ref{tab:mean}).
Sources with $\sigma_\varpi \gtrsim \SI{0.085}{mas}$ disappear in this visualization, while the scatter of YSO distances is still largest near the ONC. However, since the scatter follows largely the line-of-sight, it is still likely that it reflects parallax measurement uncertainties. This should be kept under review in future {\it Gaia} data releases.
In Figure~\ref{fig:xyz} we show the orientation of the Orion\,A cloud projected in Galactic cartesian coordinates, using the mentioned mean YSO positions (Table~\ref{tab:mean}).
We exclude the three rightmost positions in Fig.~\ref{fig:mean} ($l\leq\SI{208}{\degree}$), since they are not projected on top of high dust column-density. Figure~\ref{fig:xyz} highlights the extent of the cloud in Galactic 3D space, also showing an idealized representation of the 3D shape of the cloud in gray scale. The shape is deduced by using extinction contours at $A_\mathrm{K,Herschel} = \SI{0.57}{mag}$ (using extinctions from the {\it Herschel map}). For the far end of the Tail ($l\geq\SI{213.5}{\degree}$, last three points), we extrapolate the cloud shape manually, since the extinction drops on the upper side of the Tail. We then use the middle $b$ position between the upper and lower edge of the Tail, instead of $\bar{b}_\mathrm{YSOs}$. This approach visualizes the opening of the Tail. The sharp turn from Head to Tail is clearly visible in the XY and XZ projections. The striking bend of the Head, which consists basically of the Integral Shaped Filament (ISF), calls for a revision of the star-formation history in Orion\,A.
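The transformation behind Table~\ref{tab:mean} can be reproduced from $(l, b, d)$ alone. The sketch below (our reimplementation, not the original code) converts to Galactic cartesian $XYZ$ and then rotates the frame so that $X$ points toward $(l, b) = (\SI{211.0}{\degree}, \SI{-19.5}{\degree})$, the pivot direction given in the caption of Fig.~\ref{fig:xyz}; it reproduces the tabulated row at $l = \SI{211.0}{\degree}$:

```python
import math

def galactic_xyz(l_deg, b_deg, d_pc):
    """(l, b, distance) -> Galactic cartesian XYZ in pc."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    return (d_pc * math.cos(b) * math.cos(l),
            d_pc * math.cos(b) * math.sin(l),
            d_pc * math.sin(b))

def orion_xyz(l_deg, b_deg, d_pc, l0=211.0, b0=-19.5):
    """Rotate the Galactic frame so that X points toward (l0, b0)."""
    # rotation about Z by l0 shifts all longitudes by -l0
    x, y, z = galactic_xyz(l_deg - l0, b_deg, d_pc)
    b0r = math.radians(b0)
    # rotation about the new Y axis maps the direction (l0, b0) onto +X
    return (x * math.cos(b0r) + z * math.sin(b0r),
            y,
            -x * math.sin(b0r) + z * math.cos(b0r))

# Row l = 211.0 of Table 1: b = -19.46612 deg, d = 401.18 pc
x, y, z = galactic_xyz(211.0, -19.46612, 401.18)     # ~(-324.2, -194.8, -133.7)
xo, yo, zo = orion_xyz(211.0, -19.46612, 401.18)     # ~(401.2, 0.0, 0.2)
```

The agreement with the tabulated values confirms the adopted conventions ($X$ toward the Galactic center, $Z$ toward the Galactic north).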
A potential caveat to using the distances of YSOs as proxies to the cloud distance is that {\it Gaia}, being an optical telescope, is not sensitive to highly extincted sources. As a consequence, it will miss embedded YSOs and non-embedded YSOs that may be hidden by the cloud.
This implies that the derived distances might suffer from a bias toward closer distances (corresponding to the mean separation between YSOs and the cloud), more pronounced at the denser parts.
In a first step we tested the average distances by using only sources projected on top of high extinction, gradually increasing the extinction threshold (from $A_{\mathrm{K,Herschel}} = \SI{0.1}{mag}$ to $\SI{1.0}{mag}$ in \SI{0.1}{mag} steps). In a second step, we used only sources projected on low extinction, again using the mentioned extinction thresholds. We find no significant difference in the mean distance distribution, and the mean distances do not shift systematically to closer values at high extinctions. With this we estimate, given the error in the DR2 parallaxes and the source distance distribution, that the averaged distances per bin approximately reflect the cloud shape, especially in regions of low extinction. For regions of higher extinction, like the ISF, the distance might be biased toward closer values, aggravated by the existence of foreground populations of young stars \citep[e.g.,][]{Alves2012,Bouy2014}.
We would like to point out that in Fig.~\ref{fig:xy_stretch} the ONC, a particularly embedded cluster, appears at about 400 to 410\,pc \citep[close to 414\,pc,][]{Menten2007}, while the adjacent regions (including foreground clusters) appear at a distance of about 390\,pc. The roughly 10\,pc difference compared to the literature value can be seen as an estimate of the remaining systematic uncertainties of our approach. A global systematic parallax offset of 0.08\,mas \citep{Stassun2018} would produce a shift of about 12\,pc toward closer distances at the Head, and of about 16\,pc at the Tail. As mentioned in Sect.~\ref{Data}, we do not include a systematic offset in the reported distances, since it is very unclear how it affects sources across Orion\,A. More importantly for this paper, relative distances are sufficient, and the 3D shape of Orion\,A is largely independent of such an offset.
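The size of these offset-induced shifts follows directly from the inverse relation between parallax and distance (our arithmetic, with rounded representative distances):

```python
def shift_pc(d_pc, offset_mas=0.08):
    """Change in inferred distance when a global parallax offset is added."""
    varpi = 1000.0 / d_pc                  # true parallax in mas
    return d_pc - 1000.0 / (varpi + offset_mas)

head_shift = shift_pc(400.0)   # ~12 pc at the Head
tail_shift = shift_pc(460.0)   # ~16 pc toward the Tail
```

Because the same parallax offset translates into a larger distance shift at larger distances, the offset would compress the Tail slightly more than the Head, but it does not change the derived 3D shape qualitatively.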
We further test the result by a) changing the bin size $\Delta l$ along the cloud from $\SI{0.1}{\degree}$ to $\SI{1.0}{\degree}$, b) varying the different error cuts, and c) excluding sources that are not projected near regions of high dust column-density. The overall result stays the same in all cases, with the Tail starting to incline between $l\approx\SI{210}{\degree}$ and $\SI{211}{\degree}$. Regarding a), using smaller bins naturally increases the noise or reflects the existence of cloud sub-structure, while larger bins have a smoothing effect. It is clear from Fig.~\ref{fig:mean} that, for example, the region near L1641-South shows significant distance variations, which hint at a more complex structure than presented here.
In this paper we will not go into detail about specific sub-structures or sub-clusters in Orion\,A, since we are only interested in the overall shape and line-of-sight extent. A more detailed analysis of this important cloud is called for, using future {\it Gaia} data releases, which will provide improved accuracy.
In Table~\ref{tab:regions} we provide average distances of large-scale sub-regions in Orion\,A. We find that YSOs at the Head of the cloud, including the ISF region, the ONC, NGC1977, NGC1981, NGC1980, and L1641-North, lie on average at about 395\,pc. YSOs at the Tail are on average at about 430\,pc, including L1641-Center and South, and L1647-North and South. Separating the very southern part (L1647-South), we get a maximum distance to the end of the Tail of about 470\,pc, while the most distant stars have distances of about 550\,pc.
We find that the two clusters L1647-North and South are more distant (420\,pc to 470\,pc) than estimated with X-ray luminosities \citep[250\,pc to 280\,pc,][]{Pillitteri2016}. To make this a fair comparison, we investigated the DR2 parallaxes of XMM-Newton X-ray sources\footnote{From \url{https://nxsa.esac.esa.int}} in this region, which show a similar average distance as the IR-excess YSOs, supporting the larger distance estimate for these clusters. The resulting tension between the X-ray luminosities and the {\it Gaia} results needs further investigation.
The main result in this paper confirms previous work that pointed out a distance gradient in Orion\,A, as already discussed in Sect.~\ref{Intro}. The $\sim$70\,pc distance difference from Head to Tail is in agreement with \citet{Schlafly2014}, while the individual values along the cloud show variations between the samples (see Tables~\ref{tab:literature}, \ref{tab:mean}, and \ref{tab:regions}). \citet{Kounkel2017} discuss the 3D orientation of Orion\,A using VLBI measurements in the ONC and L1641-South. The $\sim$40\,pc distance difference between these regions is again in agreement with our findings.
\citet{Kounkel2018}, who also use {\it Gaia} DR2 parallaxes of young stars, find a smaller distance difference from Head to Tail as compared to our result (about 55\,pc from ONC to L1647). The discrepancy is due to the different samples used.
In this paper we use only the highest-quality data for the youngest YSOs (ages $\lesssim\SI{3}{Myr}$), as these are likely to be the closest sources to the cloud, while \citet{Kounkel2018} aim at maximizing the identification of young stars in the entire Orion star-forming region and include sources as old as 12\,Myr.
For completeness, we compared our sample to the \citet{Kounkel2018} sample (K18) and find that only about 20\% of their sources are in common with our sample (or about 68\% of our sample is in common with K18). The remaining K18 sources (80\%) are likely older and less connected to the Orion\,A cloud, and are hence not good tracers of the cloud's shape. The sources that appear only in our sample (about one third of it) are also responsible for the different results; we find that some of these sources are more distant, especially near the Tail.
While these three papers \citep{Schlafly2014,Kounkel2017,Kounkel2018} point to a gradient in the distance from the Head to the Tail of the cloud, our paper not only confirms this gradient, but also 1) establishes that the Head of the cloud is bent with respect to the Tail, 2) shows that the Head lies essentially in the plane of the sky while the Tail appears to be highly inclined, not far from the line-of-sight, and 3) shows that the cloud has overall a cometary-like shape oriented toward the Galactic plane, although containing sub-structure not resolved in our reconstruction.
Furthermore, our results are in agreement with \citet{Kuhn2018}, who investigate the kinematics of the ONC using {\it Chandra} observed cluster members in combination with {\it Gaia} DR2. They report a distance of about 403\,pc to the ONC (Table~\ref{tab:literature}), similar to the estimated 407\,pc that we find, when looking solely at YSOs near the ONC (Fig.~\ref{fig:xy_stretch}). They point out that the ONC seems to be recessed relative to the immediate surroundings (at $\sim$395\,pc), which we also observe by using IR-excess YSOs \citep[see Figs.~\ref{fig:scatter} or \ref{fig:mean} and figure~21 in][]{Kuhn2018}.
\section{Discussion}
\begin{table*}[!ht]
\begin{center}
\small
\caption{Mean positions per Galactic longitude bin ($\Delta l$).}
\begin{tabular}{ccccccccc}
\hline
\hline
\multicolumn{1}{c}{$\Delta l$ bin center} &
\multicolumn{1}{c}{$\bar{b}_\mathrm{YSOs}$} &
\multicolumn{1}{c}{$\bar{d}_\mathrm{YSOs}$} &
\multicolumn{1}{c}{$X$} &
\multicolumn{1}{c}{$Y$} &
\multicolumn{1}{c}{$Z$} &
\multicolumn{1}{c}{$X_\mathit{Orion}$} &
\multicolumn{1}{c}{$Y_\mathit{Orion}$} &
\multicolumn{1}{c}{$Z_\mathit{Orion}$} \\
\multicolumn{1}{c}{(deg)} &
\multicolumn{1}{c}{(deg)} &
\multicolumn{1}{c}{(pc)} &
\multicolumn{1}{c}{(pc)} &
\multicolumn{1}{c}{(pc)} &
\multicolumn{1}{c}{(pc)} &
\multicolumn{1}{c}{(pc)} &
\multicolumn{1}{c}{(pc)} &
\multicolumn{1}{c}{(pc)} \\
\hline
207.0 & -19.20956 & 371.03 $\pm$ 21.83 & -312.18 & -159.06 & -122.08 & 370.22 & -24.44 & 1.60 \\
207.5 & -19.15597 & 394.01 $\pm$ 30.83 & -330.14 & -171.86 & -129.29 & 393.35 & -22.72 & 2.13 \\
208.0 & -19.14714 & 396.69 $\pm$ 20.38 & -330.88 & -175.93 & -130.11 & 396.20 & -19.61 & 2.27 \\
208.5 & -19.24683 & 391.01 $\pm$ 24.00 & -324.42 & -176.15 & -128.89 & 390.68 & -16.10 & 1.61 \\
209.0 & -19.41924 & 392.69 $\pm$ 25.02 & -323.92 & -179.55 & -130.56 & 392.48 & -12.93 & 0.48 \\
209.5 & -19.59683 & 393.22 $\pm$ 21.78 & -322.42 & -182.42 & -131.89 & 393.10 & -9.70 & -0.71 \\
210.0 & -19.67799 & 390.21 $\pm$ 26.41 & -318.20 & -183.71 & -131.40 & 390.16 & -6.41 & -1.23 \\
210.5 & -19.59386 & 395.07 $\pm$ 30.41 & -320.69 & -188.90 & -132.49 & 395.06 & -3.25 & -0.65 \\
211.0 & -19.46612 & 401.18 $\pm$ 30.38 & -324.22 & -194.81 & -133.69 & 401.18 & 0.0 & 0.24 \\
211.5 & -19.36718 & 409.36 $\pm$ 31.97 & -329.28 & -201.79 & -135.75 & 409.34 & 3.37 & 0.94 \\
212.0 & -19.15993 & 417.43 $\pm$ 43.81 & -334.39 & -208.95 & -137.00 & 417.37 & 6.88 & 2.46 \\
212.5 & -19.09259 & 423.31 $\pm$ 45.68 & -337.38 & -214.93 & -138.46 & 423.17 & 10.47 & 2.96 \\
213.0 & -19.22319 & 435.13 $\pm$ 35.87 & -344.59 & -223.78 & -143.27 & 434.89 & 14.34 & 2.02 \\
213.5 & -19.53701 & 448.16 $\pm$ 32.40 & -352.20 & -233.12 & -149.87 & 447.79 & 18.42 & -0.42 \\
214.0 & -19.73136 & 461.17 $\pm$ 39.97 & -359.88 & -242.74 & -155.69 & 460.60 & 22.72 & -2.06 \\
214.5 & -19.74885 & 467.31 $\pm$ 38.40 & -362.47 & -249.12 & -157.90 & 466.54 & 26.85 & -2.30 \\
\hline
\end{tabular}
\tablefoot{The mean positions per bin ($\Delta l = \SI{1}{\degree}$, within a Galactic latitude range $\SI{-20.5}{\degree}<b<\SI{-18.1}{\degree}$) correspond to the orange dots in Figs.~\ref{fig:mean}, \ref{fig:xy_stretch}, and \ref{fig:xyz}.
The reported mean distances ($\bar{d}_\mathrm{YSOs}$) do not include a systematic global parallax offset. The distance error is the standard deviation of the mean. $XYZ$ are Galactic cartesian coordinates (see also Fig.~\ref{fig:xyz}). $XYZ_\mathrm{Orion}$ are transformed Galactic cartesian coordinates with $X$ pointing toward Orion\,A (see also Fig.~\ref{fig:xy_stretch}).}
\label{tab:mean}
\end{center}
\end{table*}
\begin{table*}[!ht]
\begin{center}
\small
\caption{Averaged parallaxes and derived distances to different large-scale sub-regions in Orion\,A.}
\begin{tabular}{lcccccc}
\hline \hline
Region & $l$-Range & Sample & Mean($\varpi$) & Mean($d$) & Median($\varpi$) & Median($d$) \\
& (\si{\degree}) & size & (mas) & (pc) & (mas) & (pc) \\
\hline
Orion\,A (all) & 208 -- 215 & 650 & 2.50$\pm$0.20 & 400$\pm$32 & 2.52$\pm$0.10 & 397$\pm$16 \\
Head (ISF) & 208 -- 211 & 483 & 2.55$\pm$0.16 & 393$\pm$25 & 2.54$\pm$0.08 & 393$\pm$13 \\
Tail & 211 -- 215 & 145 & 2.33$\pm$0.24 & 428$\pm$42 & 2.33$\pm$0.17 & 430$\pm$31 \\
Tail-L1641 & 211 -- 214 & 130 & 2.36$\pm$0.23 & 424$\pm$42 & 2.35$\pm$0.17 & 426$\pm$31 \\
Tail-L1647-South & 214 -- 215 & 15 & 2.14$\pm$0.18 & 467$\pm$32 & 2.17$\pm$0.07 & 461$\pm$15 \\
\hline
\end{tabular}
\tablefoot{The averages per $l$-range are calculated within $\SI{-20.5}{\degree}<b<\SI{-18.1}{\degree}$. The reported parallaxes and distances do not include a systematic global offset. Shown as uncertainties are the standard deviation from the mean and the median absolute deviation. On top of this we expect a systematic error of about 10\,pc.}
\label{tab:regions}
\end{center}
\end{table*}
The 3D shape of Orion\,A, now accessible via the {\it Gaia} measurements, informs and enlightens our knowledge of this fundamental star-formation benchmark. The main result from this work is that Orion\,A is longer than previously assumed and has a cometary shape pointing toward the Galactic plane. Also of note, the Head of the cloud appears to be bent in comparison with the Tail (Fig.~\ref{fig:xyz}). Why would this be the case? One important hint is that the star-formation rate in the Head of the cloud is about an order of magnitude higher than in the Tail (Gro\ss schedl et al., in prep.). Taking this into consideration, one can think of at least two scenarios to explain the enhanced star-formation rate and the shape of the Head: 1) cloud-cloud collision and 2) feedback from young stars and supernovae. Recently, \cite{Fukui2018} interpreted the gas velocities in this region as evidence that two clouds collided about 0.1\,Myr ago and that this collision is likely responsible for the formation of the massive stars. While we cannot rule out this scenario with the data presented here, we point out that there is evidence for a young population of foreground massive stars \citep[e.g.,~in NGC\,1980, NGC\,1981,][]{Bally2008,Alves2012,Bouy2014} \citep[cf.][]{Fang2017} that could provide the feedback necessary to bend the Head of the cloud. An investigation of the second scenario is needed and beyond the scope of this work, but it seems plausible that an event external to the Orion\,A cloud could have taken place in the last million years.
The 3D shape of the cloud clarifies some previous results. For example, \citet{Meingast2018} found evidence for different dust properties in Orion\,A when comparing the regions in the Head and the Tail of the cloud. They argued, correctly, that the dust in L1641 might not ``see'' the radiation from the massive stars toward the Head of the cloud, so that its properties are not affected by it. Our result validates this view: the dust grains in L1641 lie largely behind the ONC, which contains the most massive stars in the region, and are hence shielded from, or too far from, the sources of UV radiation.
The deduced length of the Orion\,A filament of 90\,pc makes it similar to the Nessie Classic filament \citep[$\SI{\sim80}{pc}$,][]{Jackson2010}, which is often regarded as a prototypical large-scale filament, or a ``bone" of the Milky Way \citep{Goodman2014}.
\citet{Zucker2017} undertook an analysis of the physical properties and kinematics of a sample of 45 large-scale filaments in the literature. They found that these filaments can be divided into three broad categories, depending on their aspect ratio and high column-density fraction. Orion\,A has an average aspect ratio of about 30:1 when taking the length of 90\,pc and its average width (FWHM $\sim$3\,pc), and a high-column-density fraction of about 45\%. For the latter we use an $A_K$ threshold of 0.5\,mag, comparable to $1\times10^{22}\,\mathrm{cm}^{-2}$ in \citet{Zucker2017}. This puts Orion\,A squarely into their category c), which describes highly elongated, high-column-density filaments, or so-called ``bones'' of the Milky Way. The position-angle between Orion\,A and the plane is in agreement with the average position-angles of the bones in their sample, but Orion\,A differs significantly from the known bones regarding its distance from the mid-plane of the Milky Way ($\sim$145\,pc), which is an order of magnitude larger than the median distance between bones and the Galactic plane. This discrepancy calls for a large-scale process to have pushed the cloud this far from the plane. \cite{Franco1986} proposed a scenario for the origin of the Orion complex as the consequence of an impact of a high-velocity cloud with the plane of the Galaxy (from above) that could account for the abnormal distance of Orion below the plane. Nevertheless, the cloud's cometary shape with a star-bursting Head closer to the plane, as shown in this work, seems to be at odds with this scenario.
Finally, we point out that the unexpected length of Orion\,A along the line-of-sight affects the observables toward the cloud (masses, luminosities, binary separations) that will need revision. For example, the current cloud and YSO masses toward the Tail can be underestimated by about 30\% to 40\% under the common assumption of a single constant distance to Orion A, while binary separations can be underestimated by about 10\% to 20\%.
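These percentages follow from the distance scaling of the observables: flux-based masses and luminosities scale as $d^2$, and projected linear separations as $d$. A quick check, taking 400\,pc as the commonly assumed single distance and 470\,pc for the far Tail (our illustrative numbers):

```python
d_assumed = 400.0   # commonly adopted single distance to Orion A (pc)
d_tail = 470.0      # distance to the far end of the Tail (pc)

# masses/luminosities scale as d^2 -> underestimated by up to ~40%
mass_bias = (d_tail / d_assumed) ** 2 - 1.0   # ~0.38
# binary separations scale linearly with d -> underestimated by up to ~18%
sep_bias = d_tail / d_assumed - 1.0           # ~0.18
```

Intermediate Tail positions ($\sim$430\,pc) give correspondingly smaller biases, hence the quoted 30\% to 40\% and 10\% to 20\% ranges.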
\section{Summary}
We have used the recent {\it Gaia} DR2 to investigate the 3D shape of the Orion\,A GMC.
Orion\,A is not the straight filamentary cloud that we see in (2D) projection, but instead a cometary-like cloud oriented toward the Galactic plane, with two distinct components: a denser, star-formation-enhanced (bent) Head, and a lower-density, more quiescent, $\sim$75\,pc long Tail. The two components appear to overlap between $l\approx\SI{210}{\degree}$ and $\SI{211}{\degree}$.
We find that the Head of the Orion\,A cloud appears to be roughly on the plane of the sky (at $\sim$400\,pc), while the Tail, surprisingly, appears to be highly inclined, not far from the line-of-sight ($\SI{\sim 70}{\degree}$), reaching at least $\sim$470\,pc.
The true extent of Orion\,A is then not the projected $\sim$40\,pc but $\sim$90\,pc, making it by far the largest molecular cloud in the local neighborhood. Its aspect ratio ($\sim$30:1) and high-column-density fraction ($\sim$45\%) make it similar to large-scale Milky Way filaments (bones), despite its distance to the Galactic mid-plane being an order of magnitude larger than typically found for these structures.
\textit{Gaia} is opening an important new window in the study of the ISM, in particular the star-forming ISM, bringing the critical third spatial dimension that will allow not only cloud structure studies similar to the ones presented here, but also a unique view on the dynamics between dense gas and YSOs.
\begin{acknowledgements}
We thank the anonymous referee whose comments helped to improve the manuscript.
J.\,Gro\ss schedl acknowledges funding by the Austrian Science Fund (FWF) under project number P 26718-N27.
This work is based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program ID 090.C-0797(A).
This work is part of the research program VENI with project number 639.041.644, which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO). A.\,Hacar thanks the Spanish MINECO for support under grant AYA2016-79006-P.
J.\,Alves is part of the Research Platform Data Science @ Uni Vienna (\url{https://datascience.univie.ac.at}).
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has made use of the VizieR catalog access tool, CDS, Strasbourg, France. This research has made use of Python, \url{https://www.python.org}, of Astropy, a community-developed core Python package for Astronomy \citep{Astropy2013}, NumPy \citep{Walt2011}, and Matplotlib \citep{Hunter2007}. This research made use of TOPCAT, an interactive graphical viewer and editor for tabular data \citep{Taylor2005}.
This work has made use of ``Aladin sky atlas'' developed at CDS, Strasbourg Observatory, France \citep{Bonnarel2000, Boch2014}.
\end{acknowledgements}
\begin{flushleft}
\bibliographystyle{aa}
\section{Introduction and Summary}
{\bf A) Generating functions of power sums and powers. Generalized Stirling2 and Eulerian numbers}.
\par\smallskip
Finite sums of non-negative powers of positive integers have been studied by many authors. See {\sl Edwards}, \cite {Edwards1}, \cite {Edwards2} and {\sl Knuth} \cite{Knuth} for some history, and the books on {\sl Johannes Faulhaber} by {\sl Hawlitschek} \cite{Hawlitschek} and {\sl Schneider} \cite{Schneider}. \par\smallskip\noindent
We are interested in finite sums of power of arithmetic progressions ($PS$ for power sums)
\begin{equation}
{\fbox{\color{blue} $PS(d,a;n,m)$}}\, :=\, \sum_{j=0}^m\, (a\, +\ d\,j)^n\,, \ \ \text{with}\ \ n\, \in \, \mathbb N_0,\ m\, \in \, \mathbb N_0,\ d \, \in \, \mathbb N,\ a\, \in \, \mathbb N_0\, .
\end{equation}
Note that the lower summation index for $j$ is $0$. We put $0^0\, :=\, 1$ if $a\, =\, 0$ and $n\, =\, 0$. (In Maple 13 \cite{Maple} $0^0$ is put to $0$).
It is sufficient to consider $a\, =\, 0$ if $d\, =\, 1$, and $a\, \in \, RRS(d)$ for $d\, \geq\, 2$, where $RRS(d)$ denotes the smallest positive restricted residue system modulo $d$, {\it i.e.},\, $RRS(d)\, :=\, \{k\, \in \, RS(d)\, |\, \gcd(k,d) \, =\, 1\}$ with $RS(d)\, :=\,\{0,\,1,\,...,\,d-1\}$, the smallest non-negative residue system modulo $d$. \par\smallskip\noindent
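For concreteness, the defining sum can be evaluated directly (a straightforward sketch; Python's `**` already follows the convention $0^0 = 1$ adopted above):

```python
def PS(d, a, n, m):
    """Power sum of the arithmetic progression a, a+d, ..., a+d*m."""
    return sum((a + d * j) ** n for j in range(m + 1))

# PS(1,0;n,m) are the ordinary power sums, e.g.
# PS(1,0;2,3) = 0^2 + 1^2 + 2^2 + 3^2 = 14,
# and PS(2,1;2,2) = 1^2 + 3^2 + 5^2 = 35.
```

This brute-force evaluation serves as the reference against which the generating-function identities below can be checked.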
The aim of the first part of this paper is to compute the ordinary ({\it o.g.f.\ }, symbolized by $G$) and exponential generating functions ({\it e.g.f.\ }, symbolized by $E$) for given powers $n$. Such functions are considered in the framework of formal power series, without considering questions of convergence. Proofs will be given in section 2.
\par\smallskip\noindent
The {\it o.g.f.\ } (indeterminate $x$) is
\begin{equation}
{\fbox{{\color{blue}$GPS(d,a;n,x)$}}}\, :=\, \sum_{m=0}^{\infty}\,PS(d,a;n,m)\, x^m,\ \ n\, \in \, \mathbb N_0\ .
\end{equation}
The {\it e.g.f.\ } (indeterminate $t$) is
\begin{equation}
{\fbox{{\color{magenta}$EPS(d,a;n,t)$}}}\, :=\, \sum_{m=0}^{\infty}\,PS(d,a;n,m)\, \frac{t^m}{m!},\ \ n\, \in \, \mathbb N_0\ .
\end{equation}
As is known, the {\it e.g.f.\ } is obtained from the {\it o.g.f.\ } {\it via}\ inverse {\sl Laplace} transform as
\begin{equation}
E(t)\, =\, {\cal L}^{-1}\left[ \frac{1}{p}\,G\left(\frac{1}{p}\right)\right]\ ,
\end{equation}
and {\it vice versa} by a direct {\sl Laplace} transform to get $G$ from $E$.\par\smallskip\noindent
Of course, application of the binomial theorem immediately leads, after an exchange of the two finite sums, to a formula for $PS(d,a;n,m)$ in terms of the ordinary power sums $PS(n,m) \, =\, PS(1,0;n,m)$, {\it viz}\,
\begin{equation}
PS(d,a;n,m) \, =\, \sum_{k=0}^n\, {\binomial{n}{k}}\,a^{n-k}\,d^k\,PS(k,m)\,,
\end{equation}
and therefore, if we interchange an infinite sum with a finite sum,
\begin{equation}
GPS(d,a;n,x)\, :=\, \sum_{k=0}^n\, {\binomial{n}{k}}\,a^{n-k}\,d^k\,GPS(k,x)\,,
\end{equation}
with $GPS(k,x) = GPS(1,0;k,x)$.
Similarly,
\begin{equation}
EPS(d,a;n,t)\, :=\, \sum_{k=0}^n\, {\binomial{n}{k}}\,a^{n-k}\,d^k\,EPS(k,t)\,,
\end{equation}
with $EPS(k,t) = EPS(1,0;k,t)$.
Therefore it is in principle sufficient to compute $GPS(k,x)$ and use an inverse {\sl Laplace} transform to find $EPS(k,t)$. It may however be difficult (or impossible) to give its explicit form.\par\smallskip\noindent
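The binomial reduction of $PS(d,a;n,m)$ to ordinary power sums is easy to confirm numerically (an illustrative check, reusing a direct power-sum evaluation):

```python
from math import comb

def PS(d, a, n, m):
    """Direct evaluation of the power sum."""
    return sum((a + d * j) ** n for j in range(m + 1))

def PS_via_binomial(d, a, n, m):
    """PS(d,a;n,m) = sum_k C(n,k) a^(n-k) d^k PS(1,0;k,m)."""
    return sum(comb(n, k) * a ** (n - k) * d ** k * PS(1, 0, k, m)
               for k in range(n + 1))
```

For example, $PS(2,1;2,2) = 1^2 + 3^2 + 5^2 = 35$ is recovered as $3 + 12 + 20$ from the $k = 0, 1, 2$ terms.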
Instead of $GPS(n,x)$ we prefer to compute the general $GPS(d,a;n,x)$ directly. In this way we find
\begin{equation}
{\fbox{\color{blue}$GPS(d,a;n,x)$}}\, =\, \sum_{k=0}^n\, S2(d,a;n,k)\, k!\,\frac{x^k}{(1-x)^{k+2}}\ ,
\end{equation}
where the generalized {\sl Stirling} numbers of the second kind (generalized subset numbers)
enter {\it via}\ the reordering process
\begin{equation}
(a\,{\bf 1} \, +\ d\, {\bf E}_x)^n \, =:\, \sum_{m=0}^n\,S2(d,a;n,m)\,x^m\, {\bf d}_x^{\ m}\,,
\end{equation}
with the {\sl Euler} operator ${\bf E}_x\, :=\, x\,{\bf d}_x$ where ${\bf d}_x$ is the differentiation operator, and $\bf 1$ is the identity operator.\par\smallskip\noindent
This definition leads to the three term recurrence relation
\begin{equation}
S2(d,a;n,m)\, =\, d\, S2(d,a;n-1,m-1) \,\, +\ \, (a\, +\ d\,m)\, S2(d,a;n-1,m), \ \ {\rm for}\ n\, \geq\, 1\,,\ m\, =\, 0,\,1,\,...,\,n\,,
\end{equation}
with $S2(d,a;n,-1)\, =\, 0$, $ S2(d,a;n,m)\, =\, 0$ for $n\, < \, m$ and $S2(d,a;0,0)\, =\, 1$.\par\smallskip\noindent
This recurrence is obeyed by
\begin{equation}
S2(d,a;n,m)\, =\, {\frac{1}{m!}}\, \sum_{k=0}^m\,(-1)^{m-k}\, {\binomial{m}{k}}\,(a\, +\ d\,k)^n\,.
\end{equation}
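Both the recurrence and the explicit alternating-sum formula are easy to cross-check. The sketch below also verifies the o.g.f. via the equivalent identity $PS(d,a;n,m) = \sum_{k} S2(d,a;n,k)\,k!\,\binom{m+1}{k+1}$, which we obtain by extracting the coefficient of $x^m$ from $x^k/(1-x)^{k+2}$ (this coefficient-extraction step is our own):

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def S2(d, a, n, m):
    """Generalized Stirling2 numbers via the three-term recurrence."""
    if n == 0 and m == 0:
        return 1
    if m < 0 or m > n:
        return 0
    return d * S2(d, a, n - 1, m - 1) + (a + d * m) * S2(d, a, n - 1, m)

def S2_explicit(d, a, n, m):
    """Explicit alternating-sum formula; the division by m! is exact."""
    s = sum((-1) ** (m - k) * comb(m, k) * (a + d * k) ** n
            for k in range(m + 1))
    return s // factorial(m)

def PS(d, a, n, m):
    return sum((a + d * j) ** n for j in range(m + 1))

def PS_via_S2(d, a, n, m):
    """PS from the o.g.f.: [x^m] x^k/(1-x)^(k+2) = C(m+1, k+1)."""
    return sum(S2(d, a, n, k) * factorial(k) * comb(m + 1, k + 1)
               for k in range(n + 1))
```

For $[d,a] = [1,0]$ the familiar values are recovered, e.g. $S2(1,0;4,2) = 7$.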
These generalized {\sl Stirling} numbers build a lower triangular infinite dimensional matrix, named ${\bf S2}[d,a]$ which turns out to be an exponential convolution array like the ordinary {\sl Stirling} ${\bf S2}$ matrix, {\it i.e.},\, a {\sl Sheffer} matrix, denoted by
\begin{equation}
{\bf S2}[d,a] \, =\, (e^{a\,x},\, e^{d\,x}\, -\ 1)\,.
\end{equation}
For {\sl Sheffer} matrices see the W. Lang link in $OEIS$ \cite{OEIS}, \seqnum{A006232} called ``Sheffer $a$- and $z$-sequences'', the second part, where also references are given. (Henceforth $A$-numbers will be given without quoting {\it OEIS} each time.)\par\smallskip\noindent
A three parameter generalization of Stirling numbers of the second kind has been proposed in {\sl Bala} \cite{Bala} as $S_{(a,b,c)}$. The present generalization is $S2(d,a;n,m) \, =\, d^m\,S_{(d,0,a)}$. There {\sl Sheffer} arrays are called exponential {\sl Riordan} arrays.\par\smallskip\noindent
A one parameter generalization is given in {\sl Luschny} \cite{Luschny1}, called {\sl Stirling}-{\sl Frobenius} subset numbers, with the scaled version called there [SF-SS] with parameter $m$ corresponding to ${\bf S2}[m,m-1]$. The [SF-S] triangle family coincides with {\sl Bala}'s $S_{(m,0,m-1)}$.
\par\smallskip\noindent
The {\sl Sheffer} structure (exponential convolution polynomials) means that the {\it e.g.f.\ } of the sequence of column $m$ is
\begin{equation}
ES2Col(d,a;t,m)\, =\, e^{a\,t}\,\frac{(e^{d\,t}\sspm1)^m}{m!}\,, \ \ m\, \in \, {\mathbb N}_0\, .
\end{equation}
This corresponds to the {\it o.g.f.\ }
\begin{equation}
GS2Col(d,a;x,m)\, =\, \frac{(d\,x)^m}{\prod_{j=0}^m\, (1\, -\ (a+d\,j)\,x)}\,, \ \ m\, \in \, {\mathbb N}_0\ .
\end{equation}
This means that the elements $\widehat{S2}(d,a;n,m)\, =\, S2(d,a;n,m)\, \frac{1}{d^m}$ of the column scaled {\sl Sheffer} triangle $\widehat{S2}[d,a]\, =\, \left(e^{a\,x}, \, \frac{1}{d}\,(e^{d\,x} \, -\ 1)\right)$ are
\begin{equation}
\widehat{S2}(d,a;n,m)\, =\, h^{(m+1)}_{n-m}[d,a],
\end{equation}
where $h^{(m+1)}_k[d,a]$ are the complete homogeneous symmetric functions of degree $k$ of the $m+1$ symbols $a_j\, =\, a\, +\ d\,j$, $j\, =\, 0,\,1,\, ...,\,m$, and $h^{(m+1)}_0\, =\, 1$. If $[d,a]\, =\, [1,0]$ the symbol $a_0 \, =\, 0$ can be omitted, and only the $m$ symbols $a_j\, =\, a\, +\ d\,j$ for $j\, =\, 1,\,2\, ...,\, m$ are active. For symmetric functions see {\it e.g.},\, \cite {Krishnamurthy}, p. 53, and p. 54, eq $(46)$.
\par\smallskip\noindent
The transition matrix property of ${\bf S2}[1,0] \, =\, {\bf S2}$ (see \cite{GKP} p. 262, eq.\,$(6.10)$) generalizes to
\begin{equation}
x^n\, =\, \sum_{m=0}^n\, \widehat{S2}(d,a;n,m)\,fallfac(d,a;x,m)\, ,\ \ n\, \in \, {\mathbb N}_0\, ,
\end{equation}
where the generalized falling factorial is (see also {\sl Bala} \cite{Bala}, where this falling factorial appears in \Eq{15} as the special case $[t;d,0,c]_n$ in the signed $Stirling1$ context; see the present part C for the unsigned case)
\begin{equation}
fallfac(d,a;x,m)\, :=\, \prod_{j=0}^{m-1} (x\, -\ (a+j\,d))\,, \ \ {\rm with}\ \ fallfac(d,a;x,0)\, :=\, 1\, .
\end{equation}
This can also be written in terms of the usual falling factorials $x^{\underline{n}}\, :=\, \prod_{j=0}^{n-1}\,(x-j)$, for $n\, \in \, \mathbb N$ and $x^{\underline{0}}\, :=\, 1$ as
\dstyle{fallfac(d,a;x,m)\, =\, d^m\,\left(\frac{x-a}{d}\right)^{\underline{m}}}.\par\smallskip\noindent
Using the binomial theorem in eq.\,$(11)$ and interchanging the sums shows that ${\bf S2}[d,a]$ (when the matrix elements are not specified we use this notation) can be written in terms of the usual $Stirling2$ numbers ${\bf S2}\, =\, {\bf S2}[1,0]$ as,
\begin{equation}
S2(d,a;n,m)\, =\, \sum_{k=0}^n\,{\binomial{n}{k}}\,a^{n-k}\,d^k\,S2(k, m)\ .
\end{equation}
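This binomial relation between ${\bf S2}[d,a]$ and the ordinary ${\bf S2}$ can be verified numerically (an illustrative Python sketch; the function names are ad hoc):

```python
from math import comb, factorial

def stirling2(n, m):
    # Ordinary Stirling subset numbers S2(n,m); the sum is exactly divisible by m!.
    return sum((-1)**(m - k) * comb(m, k) * k**n for k in range(m + 1)) // factorial(m)

def S2gen(d, a, n, m):
    # Generalized S2(d,a;n,m) via the explicit sum.
    return sum((-1)**(m - k) * comb(m, k) * (a + d*k)**n
               for k in range(m + 1)) // factorial(m)

# Check S2(d,a;n,m) = sum_k C(n,k) a^(n-k) d^k S2(k,m).
for d, a in [(2, 1), (3, 2)]:
    for n in range(8):
        for m in range(n + 1):
            rhs = sum(comb(n, k) * a**(n - k) * d**k * stirling2(k, m)
                      for k in range(n + 1))
            assert S2gen(d, a, n, m) == rhs
```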
For the inverse of this relation see {\sl Lemma 10}, \Eq{160}, in the proof section, part $C$.
\par\smallskip\noindent
A standard recurrence for {\sl Sheffer} row polynomials (\cite{Roman}, p. 50, Corollary 3.7.2) leads, with $PS2(d,a;n,x)\, :=\, \sum_{m=0}^n\, S2(d,a;n,m)\,x^m$, to
\begin{equation}
PS2(d,a;n,x)\, =\, [a\, +\ d\,x\, +\ d\,{\bf E}_x]\,PS2(d,a;n-1,x)\,, \ \ {\rm for}\ n \, \in \, \mathbb N\,,
\end{equation}
with input $PS2(d,a;0,x)\, =\, 1$.\par\smallskip\noindent
The eq. $(8)$ version of the {\it o.g.f.\ } is not convenient to find $EPS(d,a;n,t)$ by inverse {\sl Laplace} transform because of the power $k\, +\ 2$ instead of $k\, +\ 1$. The solution is to consider first the {\it o.g.f.\ } of the powers (instead of the one of the sums of powers) which can be found analogously to the $GPS$ case. Of course, if \Eq{8} has been proved the {\it o.g.f.\ } for the first difference sequence follows immediately. This will later lead to another form of $GPS$ which is amenable to find $EPS$.
\begin{eqnarray}
GP(d,a;n,x)&\, :=\,& \sum_{m=0}^{\infty}\, (a\, +\ d\, m)^n\, x^m\,,\\
&\, =\, & \sum_{k=0}^n\, S2(d,a;n,k)\, k!\,\frac{x^k}{(1-x)^{k+1}}\ .
\end{eqnarray}
From this the {\it e.g.f.\ } can be computed directly based on \dstyle{{\cal L}^{-1}\left[ \frac{1}{(p-1)^{k+1}} \right]\, =\, \frac{t^k}{k!}\, e^t}, using the linearity of the inverse {\sl Laplace} transform.
\begin{equation}
EP(d,a;n,t)\, =\, e^t\,\sum_{k=0}^n\,S2(d,a;n,k)\,t^k\,.
\end{equation}
Continuing with the search for a more tractable form of $GPS(d,a;n,x)$ we apply another reordering identity on $GP(d,a;n,x)$, {\it viz}\,
\begin{equation}
\sum_{j=0}^n b^{(n)}_j\,\frac{x^j}{(1-x)^{j+1}}\, =\, \frac{1}{(1\, -\ x)^{n+1}}\,\sum_{i=0}^n a^{(n)}_i\,x^i \,,\ \ n\, \in \, \mathbb N_0\ ,
\end{equation}
with
\begin{eqnarray}
a^{(n)}_i &\, =\,& \sum_{j=0}^i \, (-1)^{i-j}\, {\binomial{n-j}{i-j}}\, b^{(n)}_j,\\
b^{(n)}_j &\, =\,& \sum_{i=0}^{j} \, {\binomial{n-i}{j-i}}\, a^{(n)}_i\, .
\end{eqnarray}
Note that this reordering identity can not be applied to $GPS(d,a;n,x)$ because of the wrong power in the denominator. But here it produces
\begin{eqnarray}
GP(d,a;n,x) &\, =\,& \frac{1}{(1\, -\ x)^{n+1}}\, PrEu(d,a;n,x)\,, \ {\rm with\ the\ polynomials} \\
PrEu(d,a;n,x) &\, =\,& \sum_{k=0}^n rEu(d,a;n,k)\,x^k\,, \ {\rm where} \\
rEu(d,a;n,k)&\, =\,& \sum_{j=0}^k (-1)^{k-j}\,{\binomial{n-j}{k-j}}\, S2(d,a;n,j)\,j!\, .
\end{eqnarray}
Here $rEu(d,a;n,k)$ are generalized {\sl Euler}ian numbers, which constitute a number triangle (sometimes called {\sl Euler} triangle) but compared with the usual {\sl Euler}ian triangle for $[d,a]\, =\, [1,0]$, given in {\sl Graham et al.} \cite{GKP}, Table 268, p. 268, or \seqnum{A173018}, the rows are reversed. The row reversed number triangle is shown in \seqnum{A123125}. This explains the $r$ in front of $Eu$ for {\sl Euler}ian. \par\noindent
The inverse of the relation between ${\bf rEu}[d,a]$ and ${\bf S2}[d,a]$ is
\begin{equation}
S2(d,a;n,m)\,m! \, =\, \sum_{k=0}^m\,{\binomial{n-k}{m-k}}\, rEu(d,a;n,k)\, .
\end{equation}
From the explicit form of ${\bf S2}[d,a]$ in eq.$(11)$ the one for ${\bf rEu}[d,a]$ follows by eq. $(23)$.
\begin{equation}
rEu(d,a;n,k)\, =\, \sum_{j=0}^k\, (-1)^{k-j}\, {\binomial{n+1}{k-j}}\, (a+d\,j)^n\,.
\end{equation}
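The two expressions for the generalized Eulerian numbers can be compared numerically (an illustrative Python sketch, not part of the paper; here `S2fac` denotes $S2(d,a;n,j)\,j!$):

```python
from math import comb

def S2fac(d, a, n, j):
    # S2(d,a;n,j)*j! as the alternating binomial sum.
    return sum((-1)**(j - k) * comb(j, k) * (a + d*k)**n for k in range(j + 1))

def rEu_def(d, a, n, k):
    # rEu via the definition in terms of S2(d,a;n,j)*j!.
    return sum((-1)**(k - j) * comb(n - j, k - j) * S2fac(d, a, n, j)
               for j in range(k + 1))

def rEu_explicit(d, a, n, k):
    # rEu via the explicit formula with binomial C(n+1, k-j).
    return sum((-1)**(k - j) * comb(n + 1, k - j) * (a + d*j)**n
               for j in range(k + 1))

for d, a in [(1, 0), (2, 1), (3, 1)]:
    for n in range(7):
        for k in range(n + 1):
            assert rEu_def(d, a, n, k) == rEu_explicit(d, a, n, k)

# [d,a] = [1,0] reproduces the row reversed Eulerian triangle A123125, row 4:
assert [rEu_explicit(1, 0, 4, k) for k in range(5)] == [0, 1, 11, 11, 1]
```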
In terms of the usual {\sl Eulerian} numbers one finds from eqs. $(28)$ with $(18)$ and eq. $(29)$ for $[d,a]\, =\, [1,0]$
\begin{equation}
rEu(d,a;n,k) \, =\, \sum_{m=0}^n\,{\binomial{n}{m}}\,a^{n-m}\,d^m\,\sum_{p=0}^k\,(-1)^{k-p}\,{\binomial{n-m}{k-p}}\,rEu(m,p)\, .
\end{equation}
Note that no formula analogous to \Eq{18} holds due to the different binomial structure in $rEu[d,a]$ of \Eq{28}.\par\smallskip\noindent
The three term recurrence for ${\bf rEu}[d,a]$ is
\begin{equation}
rEu(d,a;n,m)\, =\, (d\,(n-m)\, +\ (d-a))\, rEu(d,a;n-1,m-1) \,\, +\ \, (a\, +\ d\,m)\, rEu(d,a;n-1,m),
\end{equation}
for $n\sspgeq 1\,,\ m\, =\, 0,\,1,\,...,\,n$, with $rEu(d,a;n,-1)\, =\, 0$, $rEu(d,a;n,m)\, =\, 0$ for $n\, < \, m$ and $rEu(d,a;0,0)\, =\, 1$.\par\smallskip\noindent
The corresponding (ordinary, not exponential) row polynomials are
\begin{equation}
PrEu(d,a;n,x)\, :=\, \sum_{m=0}^n\,rEu(d,a;n,m)\,x^m\, ,\ \ n\, \in \, {\mathbb N}_0\,.
\end{equation}
From eq.\,$(21)$ and eqs.\,$(26)$ with $(27)$ follows a relation between these ${\bf rEu}[d,a]$ row polynomials and those of the number triangle with entries $S2fac(d,a;n,m)\, :=\, S2(d,a;n,m)\,m!$, named $PS2fac(d,a;n,x)$, {\it viz}\,
\begin{equation}
PrEu(d,a;n,x)\, =\, (1\, -\ x)^n\,PS2fac\left(d,a;n,\frac{x}{1-x}\right)\, .
\end{equation}
It may be noted, in passing, that the transformation \dstyle{y\, =\, \frac{x}{1\, +\ x}}, or \dstyle{x\, =\, \frac{y}{1\, -\ y}}, is called {\sl Euler}'s transformation (see, {\it e.g.},\, \cite{Hardy}, p. 191, last row).
\par\smallskip\noindent
From this preceding relation the {\it e.g.f.\ } ({\it i.e.},\, the e.g.f. for the row reversed {\sl Euler}ian triangle) follows:
\begin{equation}
{\fbox{\color{cyan}$EPrEu(d,a;t,x)$}}\, =\, \frac{(1\, -\ x) \,e^{a\,(1-x)\,t}}{1\, -\ x\,e^{d\,(1-x)\,t}}\, .
\end{equation}
This is not a {\sl Sheffer} structure, not even one of the more general {\sl Brenke} type $g(z)\, B(x\,z)$, \cite{Brenke}, \cite{Chihara}, p. 167.\par\smallskip\noindent
The {\it e.g.f.\ } of the row sums ($x\,\to\, 1$) of ${\bf rEu}[d,a]$ is obtained {\it via}\ {\sl l'H\^opital}'s rule as \dstyle{\frac{1}{1\, -\ d\,t}}, independently of $a$.\par\smallskip\noindent
A one parameter $k$-family of generalized Eulerian polynomials $A_{n,k}(x)$ with coefficient triangles has been considered by {\sl Luschny} \cite{Luschny2}. The coefficients of $A_{n,k}(x)$ build ${\bf Eu}[k,1] \, =\, {\bf rEu}[k,k-1]$.\par\smallskip\noindent Now the new form of $GPS(d,a;n,x)$ is simply obtained by multiplying $GP(d,a;n,x)$ with \dstyle{\frac{1}{1\, -\ x}} because this is the rule to obtain the {\it o.g.f.\ } of the partial sums of a sequence from the {\it o.g.f.\ } of the sequence.
\begin{equation}
GPS(d,a;n,x) \, =\, \frac{1}{(1\, -\ x)^{n+2}}\, PrEu(d,a;n,x)\, .
\end{equation}
There is still this power $n+2$ but now we can use the reordering identity, eq.\,$(23)$ with $n$ replaced by $n+1$:
\begin{equation}
\sum_{j=0}^{n+1}\, b^{(n+1)}_j\, {\frac{x^j}{(1\, -\ x)^{j+1}}}\, =\, \frac{1}{(1\, -\ x)^{(n+1)+1}} \sum_{i=0}^{n+1}\,a_i^{(n+1)}\,x^i\, ,
\end{equation}
with (see eq.\,$(30)$)
\begin{equation}
a^{(n+1)}_i\, =\, rEu(d,a;n,i)\, =\, \sum_{p=0}^i\,(-1)^{i-p}\,{\binomial{n+1}{i-p}}\, (a\, +\ d\,p)^n\, .
\end{equation}
Note that $a^{(n+1)}_{n+1}\, =\, 0$ because $PrEu(d,a;n,x)$ has degree $n$. \par\smallskip\noindent
Note that now the $x$ dependence is amenable for a later inverse {\sl Laplace} transform. The calculation of $b^{(n+1)}_j$ is a bit lengthy but it turns out to have a nice form (we add the $[d,a]$ parameters).
\begin{equation}
b_j^{(n+1)}(d,a)\, =\, S2(d,a;n,j)\,j! \, +\ S2(d,a;n,j-1)\,(j-1)! \, =:\, \Sigma S2(d,a;n,j)\, ,
\end{equation}
leading finally to the result for the {\it e.g.f.\ }
\begin{equation}
{\fbox{\color{magenta}$EPS(d,a;n,t)$}} \, =\, e^t\, \sum_{j=0}^{n+1}\,\Sigma S2(d,a;n,j)\, \frac{t^j}{j!}\ ,\ \ n\, \in \, {\mathbb N}_0\, .
\end{equation}
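Reading off the coefficient of $t^m/m!$ in this e.g.f. gives $PS(d,a;n,m)\, =\, \sum_j \Sigma S2(d,a;n,j)\,\binom{m}{j}$, which can be checked against the power sums directly (an illustrative Python sketch, not part of the paper):

```python
from math import comb

def S2fac(d, a, n, j):
    # S2(d,a;n,j)*j! as the alternating binomial sum.
    return sum((-1)**(j - k) * comb(j, k) * (a + d*k)**n for k in range(j + 1))

def SigmaS2(d, a, n, j):
    # Sigma S2(d,a;n,j) = S2(d,a;n,j)*j! + S2(d,a;n,j-1)*(j-1)!.
    return S2fac(d, a, n, j) + (S2fac(d, a, n, j - 1) if j >= 1 else 0)

# e^t * sum_j SigmaS2 t^j/j! has t^m/m! coefficient sum_j SigmaS2 * C(m,j),
# which must reproduce PS(d,a;n,m) = sum_{j=0}^m (a + d*j)^n.
for d, a in [(2, 1), (3, 2)]:
    for n in range(6):
        for m in range(9):
            direct = sum((a + d*j)**n for j in range(m + 1))
            via_egf = sum(SigmaS2(d, a, n, j) * comb(m, j) for j in range(n + 2))
            assert direct == via_egf
```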
Let us recapitulate the detour we made in a diagram referring to eq.\,$(23)$ for obtaining two versions of $GP$ or $GPS$:
\begin{eqnarray}
{\bf GPSv1}\ &{\buildrel {(23)} \over \nrightarrow}&\ {\bf GPSv2}\,\ \ {\buildrel {\cal L}^{-1} \over \rightarrow}\ \ {\bf EPS}\, \nonumber \\
\downarrow \phantom{xx} &&\hskip .5cm \uparrow \nonumber\\
{\bf GPv1}\ &{\buildrel {(23)} \over \rightarrow}&\ \, {\bf GPv2}\, .
\end{eqnarray}
\par\bigskip\noindent
{\bf B) Generalized Faulhaber formula and Bernoulli polynomials}\par\noindent
The next topic is to find for the power sum $PS(d,a;n,m)$ a formula in terms of {\sl Bernoulli} polynomials evaluated appropriately. This formula has been named the {\sl Faulhaber} formula for the ordinary $[d,a]\, =\, [1,0]$ case by {\sl Conway} and {\sl Guy} \cite{CoGuy}, p. 106. {\sl Faulhaber} used these numbers, later called {\sl Bernoulli} numbers by {\sl de Moivre} and {\sl Euler} (see \cite{Edwards1}, \cite{Edwards2}), already by 1631, before {\sl Jakob I Bernoulli}. For this formula see \cite{GKP}, p. 367, eq. (7.79), and \cite{Koecher}, p. 167, eq. (1). Here it is, with our definition of $PS(n,m) = PS(1,0;n,m)$,
\begin{equation}
PS(n,m) \, =\, \delta_{n,0} \, +\ \frac{1}{n+1}\, \left( B(n+1,\, x=m+1)\, -\ B(n+1,\,x=1)\right)\, , \ \ n\, \in \, \mathbb N_0\, ,
\end{equation}
where $\delta_{n,0}\, =\, [n\, =\, 0]$ is the {\sl Kronecker} symbol: $1$ if $n\, =\, 0$ and 0 otherwise. The {\sl Bernoulli} numbers are defined recursively by (see \cite{GKP}, p. 284, eq. $(6.79)$)
\begin{equation}
B(n)\, :=\, \frac{1}{n\, +\ 1}\,\left(\delta_{n,0}\, -\ \sum_{k=0}^{n-1}\, {\binomial{n+1}{k}} B(k) \right)\, \ {\rm for}\ \ n\, \in \, \mathbb N, \ {\rm with}\ B(0)\, =\, 1\, .
\end{equation}
They have \dstyle{B(1)\, =\, -\frac{1}{2}} and are found in OEIS \cite{OEIS} under \seqnum{A027641}\, /\,\seqnum{A027642}.
The corresponding {\sl Bernoulli} polynomials are
\begin{equation}
B(n,\,x)\, :=\, \sum_{m=0}^n {\binomial{n}{m}} B(n-m)\,x^m\,.
\end{equation}
Their coefficient tables are given in \seqnum{A196838}\,/\,\seqnum{A196839} or \seqnum{A053382}\, /\, \seqnum{A053383} for rising or falling powers of $x$, respectively.\par\smallskip\noindent
For the generalized case one finds for the power sums $PS(d,a;n,m)$ from eqs. $(5)$ and $(42)$ the following {\sl Faulhaber} formula in terms of ordinary {\sl Bernoulli} polynomials
\begin{equation}
{\fbox{\color{blue}$PS(d,a;n,m)$}}\, =\, \sum_{k=0}^n\,{\binomial{n}{k}}\, a^{n-k}\, d^k\, \left [ \delta_{k,0} \, +\ \frac{1}{k+1}\,\left(B(k+1,\,x=m+1)\, -\ B(k+1,\,x=1) \right) \right]\,.
\end{equation}
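This mixed formula can be verified with exact rational arithmetic (an illustrative Python sketch, not part of the paper; the Bernoulli recurrence quoted above is used):

```python
from fractions import Fraction
from math import comb

def bernoulli_list(N):
    # Bernoulli numbers B(0..N) with B(1) = -1/2, via the standard recurrence.
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1))
    return B

def bern_poly(B, n, x):
    # Bernoulli polynomial B(n,x) = sum_m C(n,m) B(n-m) x^m.
    return sum(comb(n, m) * B[n - m] * Fraction(x)**m for m in range(n + 1))

d, a = 3, 2
B = bernoulli_list(7)
for n in range(6):
    for m in range(7):
        direct = sum((a + d*j)**n for j in range(m + 1))
        faul = sum(comb(n, k) * a**(n - k) * d**k
                   * (int(k == 0)
                      + (bern_poly(B, k + 1, m + 1) - bern_poly(B, k + 1, 1)) / (k + 1))
                   for k in range(n + 1))
        assert faul == direct
```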
\par\bigskip\noindent
But the idea is to find the analogon of formula $(42)$ with generalized {\sl Bernoulli} polynomials.\par\smallskip\noindent
An obvious generalization of the {\sl Bernoulli} numbers is
\begin{equation}
B(d,a;n) \, :=\, \sum_{m=0}^n\,(-1)^m\,\frac{1}{m+1}\,S2(d,a;n,m)\,m!\,,\ n\, \in \, {\mathbb N}_0\,.
\end{equation}
For $[d,a]\, =\, [1,0]$ see, {\it e.g.},\, {\sl Charalambides} \cite{Charalambides}, or the formula and Maple section of \seqnum{A027641}.
\par\smallskip\noindent
From eq.$(18)$ one finds $B[d,a]$ in terms of $B$.
\begin{equation}
B(d,a;n) \, =\, \sum_{m=0}^n {\binomial{n}{m}}\,a^{n-m}\,d^m\,B(m)\, .
\end{equation}
The {\it e.g.f.\ } of $\{B(d,a;n)\}_{n=0}^{\infty}$ is
\begin{equation}
EB(d,a;t)\, =\, \frac{d\,t\,e^{a\,t}}{e^{d\,t}\, -\ 1}\, .
\end{equation}
The corresponding generalized {\sl Bernoulli} polynomials are (compare with eq.\,$(44)$)
\begin{equation}
B(d,a;n,x) \, :=\, \sum_{m=0}^n {\binomial{n}{m}}\,B(d,a;n-m)\,x^m \,,
\end{equation}
and from eq.\,$(47)$ they can also be written in terms of $\{B(m)\}_{m=0}^n$ as
\begin{equation}
B(d,a;n,x) \, =\, \sum_{m=0}^n {\binomial{n}{m}}\,d^m\,B(m)\,(a\, +\ x)^{n-m} \,.
\end{equation}
Their {\it e.g.f.\ } is, either from \Eq{49} or $(50)$,
\begin{equation}
EB(d,a;t,x)\, =\, \frac{d\,t\,e^{a\,t}}{e^{d\,t}\, -\ 1}\,e^{x\,t}\, ,
\end{equation}
identifying their coefficients as {\sl Sheffer} arrays \dstyle{\left(\frac{d\,z\,e^{a\,z}} {e^{d\,z}\, -\ 1},\,z \right)}. Such arrays are of the so called {\sl Appell} type (compare {\sl Roman} \cite{Roman}, pp. 26 - 28, with a different notation).
It turns out that in order to obtain a generalized {\sl Faulhaber} formula in terms of {\sl Bernoulli} polynomials the $B[d,a]$ just introduced are not quite the ones needed. In fact, they are too general. One has to work with the polynomials depending only on $d$, {\it viz}\,
\begin{equation}
{\fbox{\color{bittersweet}$B(d;n,x)$}}\, =\, \sum_{m=0}^n {\binomial{n}{m}}\, B(d;n-m)\, x^m\, ,
\end{equation}
with the generalized {\sl Bernoulli} numbers
\begin{equation}
B(d;n)\, :=\, B(d,a=0;n) = d^n\,B(n)\,, \ n\, \in \, {\mathbb N}_0\, .
\end{equation}
They can also be obtained by exponential convolution of the more general ones with the sequence $\{(-a)^n\}_{n=0}^{\infty}$.
\begin{equation}
B(d;n)\, =\, \sum_{m=0}^n {\binomial{n}{m}}\, B(d,a;n-m)\, (-a)^m\,.
\end{equation}
For $a\, =\, 0$ only $m\, =\, 0$ survives and $B(d,0;n)$ results. But also for non-vanishing $a$ the $a$ dependence drops out, as can be seen from the {\it e.g.f.\ } of the sequence on the {\it r.h.s.}, using eq. $(48)$.
\begin{equation}
\frac{d\,t\,e^{a\,t}}{e^{d\,t}\, -\ 1}\, e^{-a\,t} \, =\, \frac{d\,t}{e^{d\,t}\, -\ 1}\, =\, EB(d,a=0;t)\, =:\, EB(d;t)\,.
\end{equation}
For $B(d;n)$ with $d\, =\, 2,\,3$ and $4$ see $(-1)^n$\seqnum{A239275}$(n)$/\seqnum{A141459}$(n)$,\ \seqnum{A285863}$(n)$/\seqnum{A285068}$(n)$ and\par\noindent
\seqnum{A288873}$(n)$/\seqnum{A141459}$(n)$.\par\smallskip\noindent
The {\it e.g.f.\ } of the polynomial system $\{B(d;n,x)\}_{n=0}^{\infty}$ of eq.\,$(52)$ is
\begin{equation}
EB(d;t,x)\, =\, \frac{d\,t}{e^{d\,t}\, -\ 1}\, e^{x\,t}\, =\, EB(d,a=0;t,x)\,.
\end{equation}
The {\sl Appell} type {\sl Sheffer} structure is obvious.\par\smallskip\noindent
Now the stage is set for giving the result for the generalized {\sl Faulhaber} formula in terms of the polynomials $B(d;n,x)$.
\begin{eqnarray}
{\fbox{{\color{blue}$PS(d,a;n,m)$}}}&\, =\,& \frac{1}{d\,(n+1)}\,\left[B(d;n+1,x \, =\, a\, +\ d\,(m+1))\, -\ B(d;n+1,x \, =\, d) \right. \nonumber \\
&&\hskip 1.5cm \, -\ \left. B(d;n+1,x \, =\, a)\, +\ B(d;n+1,x=0) \, +\ d\,\delta_{n,0}\right]\,.
\end{eqnarray}
Here $B(d;n+1,x=0) = B(d;n+1)\, =\, d^{n+1}\,B(n+1)$, and the {\sl Kronecker} symbol enters because of our definition of $PS(d,a;n,m)$ where the sum starts with $j=0$, not with $1$.
\par\smallskip\noindent
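A numerical check of this generalized Faulhaber formula, again with exact rationals (an illustrative Python sketch, not part of the paper):

```python
from fractions import Fraction
from math import comb

def bernoulli_list(N):
    # Bernoulli numbers B(0..N), B(1) = -1/2.
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1))
    return B

def Bd_poly(B, d, n, x):
    # B(d;n,x) = sum_m C(n,m) d^(n-m) B(n-m) x^m, using B(d;n) = d^n B(n).
    return sum(comb(n, m) * Fraction(d)**(n - m) * B[n - m] * Fraction(x)**m
               for m in range(n + 1))

for d, a in [(2, 1), (3, 2)]:
    B = bernoulli_list(7)
    for n in range(6):
        for m in range(7):
            direct = sum((a + d*j)**n for j in range(m + 1))
            faul = (Bd_poly(B, d, n + 1, a + d*(m + 1)) - Bd_poly(B, d, n + 1, d)
                    - Bd_poly(B, d, n + 1, a) + Bd_poly(B, d, n + 1, 0)
                    + d * int(n == 0)) / Fraction(d * (n + 1))
            assert faul == direct
```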
The generalized {\sl Lah} numbers ${\bf L}[d,a]$ are discussed in the proof section 2, C) 4.\par\bigskip\noindent\par\smallskip\noindent
{\bf C) Generalized Stirling1 numbers}\par\smallskip\noindent
As elements of the {\sl Sheffer} group the inverse of the (infinite, lower triangular) matrix $\bf S2[d,a]$ exists and is called $\bf S1[d,a]$. This is therefore a generalized {\sl Stirling} number triangle of the first kind.
\begin{equation}
{\bf S2}[d,a]\cdot {\bf S1}[d,a]\, =\, {\bf 1} \, =\, {\bf S1}[d,a]\cdot {\bf S2}[d,a]\,,
\end{equation}
with the (infinite dimensional) identity matrix {\bf 1}. For practical purposes it is sufficient to consider the finite dimensional case of $N\,\times\, N$ matrices. ${\bf S1}[d,a]$ is a signed matrix with fractional entries for $d\, \neq \, 1$. Therefore, in order to have non-negative entries one considers ${\bf S1p}[d,a]$ with entries $S1p(d,a;n,m) \, :=\, (-1)^{n-m}\,S1(d,a;n,m)$. But in the combinatorial context also a scaling is needed to obtain a non-negative integer matrix ${\bf \widehat{S1p}}[d,a]$ with diagonal entries $1$ ({\it i.e.},\, monic row polynomials). This is done by scaling the ${\bf S1p}[d,a]$ rows $n$ with $d^n$. \par\smallskip\noindent
We then have the {\sl Sheffer} structures
\begin{equation}
{\bf S1}[d,a]\, =\, \left(\frac{1}{(1\, +\ x)^{\frac{a}{d}}},\, \frac{1}{d}\,\log(1\, +\ x)\right)\,,\hskip 1cm \rm {and} \hskip 1cm
{\fbox{\color{bleudefrance}${\bf\widehat{S1p}}[d,a]$}}\, =\, \left(\frac{1}{(1\, -\ d\,x)^{\frac{a}{d}}},\, -\frac{1}{d}\,\log(1\, -\ d\,x)\right)\,.
\end{equation}
The ${\bf \widehat {S2}}[d,a]$ matrices (see \Eq{15}) which have scaled matrix elements $\widehat{S2}(d,a;n,m)\, =\, S2(d,a;n,m)/d^m$ have the signed inverse matrices \dstyle{{\bf \widehat{S1}}[d,a]\, =\, \left((1\, +\ d\,x)^{-\frac{a}{d}},\,\frac{1}{d}\,\log(1\, +\ d\,x)\right)}.\par\smallskip\noindent
The signed ${\bf \widehat {S1}}[d,a]$ matrices have been considered by {\sl Bala} \cite{Bala} as $s_{(d,0,a)}$. In {\sl Luschny} \cite{Luschny1} the $SF-C$ matrices are our ${\bf \widehat{S1p}}[d,d-1]$, and the $SF-CS$ matrices are the unsigned inverse matrices of ${\bf \widehat {S2}}[d,d-1]$.\par\smallskip\noindent
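The inverse pair can be checked numerically by building a finite section of ${\bf S2}[d,a]$ and inverting it (an illustrative Python sketch, not part of the paper; exact rationals, hypothetical helper names):

```python
from fractions import Fraction
from math import comb, factorial

def S2(d, a, n, m):
    # Explicit sum for S2(d,a;n,m).
    s = sum((-1)**(m - k) * comb(m, k) * (a + d*k)**n for k in range(m + 1))
    return Fraction(s, factorial(m))

def tri_inverse(M):
    # Invert a lower triangular matrix by forward substitution.
    n = len(M)
    inv = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        inv[i][i] = 1 / M[i][i]
        for j in range(i - 1, -1, -1):
            inv[i][j] = -sum(M[i][k] * inv[k][j] for k in range(j, i)) / M[i][i]
    return inv

N, d, a = 8, 3, 1
M = [[S2(d, a, n, m) if m <= n else Fraction(0) for m in range(N)] for n in range(N)]
S1 = tri_inverse(M)

# S2[d,a] . S1[d,a] = identity (finite N x N section).
for i in range(N):
    for j in range(N):
        assert sum(M[i][k] * S1[k][j] for k in range(N)) == int(i == j)

# Scaling row n of the unsigned S1p[d,a] by d^n gives nonnegative integers.
for n in range(N):
    for m in range(n + 1):
        v = Fraction(d)**n * (-1)**(n - m) * S1[n][m]
        assert v.denominator == 1 and v >= 0
```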
The {\sl Sheffer} structure of ${\bf \widehat{S1p}}[d,a]$ means that the {\it e.g.f.\ } of column $m$ is
\begin{equation}
{\fbox{\color{bleudefrance}$E\widehat{S1p}Col(d,a;t,m)$}}\, =\, \frac{1}{(1\, -\ d\,t)^{\frac{a}{d}}}\, \frac{1}{m!}\,\left(-\frac{1}{d}\,\log(1\, -\ d\,t)\right)^m \,, \ \ m\, \in \, {\mathbb N}_0\ .
\end{equation}
There does not seem to exist a simple closed form for the corresponding {\it o.g.f.\ }.\par\smallskip\noindent
The three term recurrence for the ${\bf \widehat{S1p}}[d,a]$ matrix entries is
\begin{equation}
\widehat{S1p}(d,a;n,m) \, =\, \widehat{S1p}(d,a;n-1,m-1) \, +\ (d\,n\, -\ (d-a))\, \widehat{S1p}(d,a;n-1,m), \ \ {\rm for}\ n\sspgeq 1\,,\ m\, =\, 0,\,1,\,...,\,n\,,
\end{equation}
with $\widehat {S1p}(d,a;n,-1)\, =\, 0$, $\widehat {S1p}(d,a;n,m)\, =\, 0$ for $n\, < \, m$ and $\widehat {S1p}(d,a;0,0)\, =\, 1$.\par\smallskip\noindent
The usual transition from the monomial basis $\{x^n\}_{n=0}^{\infty}$ to the rising factorials (see \cite{GKP}, p. 263, eq.\,$(6.11)$) generalizes to the following identification of the row polynomials of ${\bf \widehat{S1p}}[d,a]$
\begin{equation}
{\fbox{\color{bleudefrance}$P\widehat{S1p}(d,a;n,x)$}}\, :=\, \sum_{m=0}^n\,\widehat{S1p}(d,a;n,m)\,x^m\, =\, risefac(d,a;x,n)\, ,
\end{equation}
with the generalized rising factorials (compare this with the generalized falling factorials eq.\,$(17) $)
\begin{equation}
risefac(d,a;x,n)\, :=\, \prod_{j=0}^{n-1} (x\, +\ (a+j\,d))\, \ \ {\rm with}\ \ risefac(d,a;x,0)\, :=\, 1 .
\end{equation}
This can be rewritten also in terms of the usual rising factorial $x^{\overline{n}}\, :=\, \prod_{j=0}^{n-1}\,(x+j)$ for $n\, \in \, \mathbb N$ and $x^{\overline{0}}\, :=\, 1$ as \dstyle{risefac(d,a;x,n)\, =\, d^n\,\left(\frac{x+a}{d} \right)^{\overline{n}}}. In terms of falling factorials this is \dstyle{risefac(d,a;x,n)\, =\, (-d)^n\,\left(\frac{-(x+a)}{d} \right)^{\underline{n}}}.\par\smallskip\noindent
This identification implies {\it via}\ {\sl Vieta}'s theorem that the coefficients of the monic polynomial $P\widehat{S1p}(d,a;n,x)$ are the elementary symmetric functions $\sigma^{(n)}_{n-m}(a_0,a_1,...,a_{n-1})$ in the indeterminates $\{a_j\}_{j=0}^{n-1}$ given by $a_j\, :=\, a\, +\ j\,d$, with $\sigma^{(n)}_{0}\, :=\, 1$. Sometimes \dstyle{\sigma^{(n)}_{n-m}[d,a]} is used for these symmetric functions. Thus
\begin{equation}
\widehat{S1p}(d,a;n,m)\, =\, \sigma^{(n)}_{n-m}(a_0,a_1,...,a_{n-1}),\ {\rm with}\ a_j\, =\, a\, +\ j\,d\,.
\end{equation}
If $d\, =\, 1$ (and $a\, =\, 0$) $a_0\, =\, 0$ does not contribute and one can write $\widehat{S1p}(1,0;n,m)\, =\, S1p(n,\, m)\, =\, \sigma^{(n-1)}_{n-m}(1,\,2,\,...,\, n-1)$.\par\smallskip\noindent
Sorting in falling powers of $a$ one obtains the formula for $\widehat{S1p}(d,a;n,m)$ in terms of the usual unsigned {\sl Stirling1} numbers $S1p(n,\,m) =$\seqnum{A132393}$(n,\,m)$.
\begin{equation}
\widehat{S1p}(d,a;n,m)\, =\, \sum_{j=0}^{n-m}\,{\binomial{n-j}{m}}\,S1p(n,\,n-j)\, a^{n-m-j}\, d^j\ \, =\, \sum_{j=m}^{n}\,{\binomial{j}{m}}\,S1p(n,\,j)\, a^{j-m}\, d^{n-j} \, .
\end{equation}
This satisfies the recurrence relation \Eq{61}.\par\smallskip\noindent
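The explicit formula can be checked against the elementary symmetric function identification above (an illustrative Python sketch, not part of the paper; `esym` computes elementary symmetric functions):

```python
from math import comb, prod
from itertools import combinations

def S1p(n, m):
    # Unsigned Stirling1 via S1p(n,m) = S1p(n-1,m-1) + (n-1)*S1p(n-1,m).
    row = [1]
    for k in range(1, n + 1):
        row = [(row[m_ - 1] if m_ >= 1 else 0)
               + (k - 1) * (row[m_] if m_ < k else 0) for m_ in range(k + 1)]
    return row[m]

def esym(vals, k):
    # Elementary symmetric function e_k of the given values.
    return sum(prod(c) for c in combinations(vals, k))

# widehat{S1p}(d,a;n,m) = e_{n-m}(a, a+d, ..., a+(n-1)d)
#                       = sum_{j=m}^n C(j,m) S1p(n,j) a^(j-m) d^(n-j).
for d, a in [(2, 1), (3, 1)]:
    for n in range(7):
        for m in range(n + 1):
            lhs = esym([a + d*j for j in range(n)], n - m)
            rhs = sum(comb(j, m) * S1p(n, j) * a**(j - m) * d**(n - j)
                      for j in range(m, n + 1))
            assert lhs == rhs

assert esym([1, 3, 5, 7], 3) == 176   # the widehat{S1p}(2,1;4,1) value
```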
The standard {\sl Sheffer} recurrence (\cite{Roman}, p. 50, Corollary 3.7.2) for these row polynomials boils down to \begin{equation}
P\widehat{S1p}(d,a;n,x)\, =\, (x+a)\,P\widehat{S1p}(d,a;n\, -\ 1,x\, +\ d)\,, \ \ n \, \in \, \mathbb N\,,
\end{equation}
with input $P\widehat{S1p}(d,a;0,x)\, =\, 1 $.\par\smallskip\noindent
From the {\sl Sheffer} property the {\it e.g.f.\ } of these row polynomials, {\it i.e.},\, the {\it e.g.f.\ } of the number triangle ${\bf \widehat{S1p}}[d,a]$, is
\begin{equation}
{\fbox{\color{bleudefrance}$EP\widehat{S1p}(d,a;t,x)$}}\, =\, \frac{1}{(1\, -\ d\,t)^{\frac{a}{d}}}\,\exp\left(-x\,\frac{1}{d}\,\log(1\, -\ d\,t)\right)\, =\, \frac{1}{(1\, -\ d\,t)^{\frac{a+x}{d}}}\ .
\end{equation}
For the {\sl Meixner} type recurrence see the proof section 2, C), 7.\par\smallskip\noindent
A more involved problem is to find the generalization of the formula giving $\widehat{S1p}(d,a;n,m)$ in terms of the column scaled $\widehat{S2}(d,a;n,m)$ elements. The standard {\sl Schl\"omilch} formula is (see, e.g., \cite{Charalambides}, p. 290, eq. $(8.20)$ for the signed $S1$ entries)
\begin{equation}
S1p(n,\,m)\, =\, (-1)^{n-m}\,\sum_{k=0}^{n-m}\, (-1)^k\,{\binomial{n+k-1}{m-1}}\, {\binomial{2\,n - m}{n-m-k}}\, S2(n-m+k,\,k)\ .
\end{equation}
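Schl\"omilch's formula in its unsigned form can be spot-checked numerically (an illustrative Python sketch, not part of the paper; note the restriction $m\sspgeq 1$ so that the binomial with lower entry $m-1$ is defined):

```python
from math import comb, factorial

def stirling2(n, m):
    # Ordinary Stirling subset numbers.
    return sum((-1)**(m - k) * comb(m, k) * k**n for k in range(m + 1)) // factorial(m)

def S1p_schlomilch(n, m):
    # Unsigned Stirling1 via Schloemilch's formula (for m >= 1).
    return (-1)**(n - m) * sum(
        (-1)**k * comb(n + k - 1, m - 1) * comb(2*n - m, n - m - k)
        * stirling2(n - m + k, k)
        for k in range(n - m + 1))

# Rows of the unsigned Stirling1 triangle A132393:
assert [S1p_schlomilch(4, m) for m in range(1, 5)] == [6, 11, 6, 1]
assert [S1p_schlomilch(5, m) for m in range(1, 6)] == [24, 50, 35, 10, 1]
```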
The direct proof starts with \Eq{65}. Inserting the {\sl Schl\"omilch} formula just given, then using the inverse of \Eq{18} leads to
\begin{eqnarray}
{\fbox{\color{bleudefrance}$\widehat{S1p}(d,a;n,m)$}}&\, =\,& a^{n-m}\,\sum_{j=m}^n\,{\binomial{j}{m}}\,\sum_{k=0}^{n-j}\,{\binomial{n-k-1}{j-1}}\,{\binomial{2\,n-j}{n-j-k}}\,a^k\, * \nonumber \\
&& *\sum_{l=0}^{n-j+k}\,(-1)^l\,{\binomial{n-j+k}{l}}\,a^{-l}\,\widehat{S2}(d,a;l,k)\,,\ {\rm for}\ n\sspgeq m\sspgeq 0\,.
\end{eqnarray}
Note that this result also holds for $a\, =\, 0$ because then $a^{n-m+k-l}$ becomes $\delta_{0,n-m+k-l}$ (from $0^0\, =\, 1$) leading to a collapse of the $l$-sum, and the remaining two sums produce $\widehat{S1p}(d,0;n,m)\, =\, d^{n-m}\,S1p(n,\,m)$. \par\smallskip\noindent
Also the known result for $m\, =\, 0$ from \Eq{62} is recovered, {\it viz}\, \ $\widehat{S1p}(d,a;n,0)\, =\, risefac(d,a;0,n)\, =\, d^n\,\left(\frac{a}{d}\right)^{\overline{n}}$.\par\smallskip\noindent
For another formula, following from a proof along the lines of the ordinary formula in \cite{Charalambides}, p. 290, see the proof section {\it C}, \Eq{175}.
\par\smallskip\noindent
The inverse of the generalized {\sl Lah} matrix ${\bf L}^{-1}[d,a]$ is discussed in the proof section 2, C) 4.\par\bigskip\noindent\par\smallskip\noindent
{\bf D) Combinatorial Interpretation }\par\smallskip\noindent
{\bf I)} ${\bf \widehat{S2}}[d,a]$\par\smallskip\noindent
The {\it o.g.f.\ }, eq. $(14)$, divided by $d^m$, which generates the complete homogeneous symmetric functions $h^{(m+1)}_{n-m}[d,a]$ of degree $n-m$ of the $m+1$ symbols $a_j\, =\, a\, +\ d\,j$, $j\, =\, 0,\,1,\,...,\, m$, leads immediately to the following combinatorial interpretation of \dstyle{\widehat{S2}(d,a;n,m)\, :=\, \frac{1}{d^m}\, S2(d,a;n,m)} (see \Eq{15}).\par\smallskip\noindent
$\widehat{S2}(d,a;n,m)$ is for $d\sspgeq 2$ the (dimensionless) total volume of the $multichoose(m+1,\, n-m)\, =\, {\binomial{n}{m}}$ hyper-cubes and hyper-cuboids (polytopes) of dimension $n-m$ which are built from the $n-m$ orthogonal $\mathbb Z^{n-m}$ vectors of lengths taken from the repertoire $a_j\, =\, a\, +\ d\,j$, $j\, =\, 0,\,1,\,...,\, m$. \par\smallskip\noindent
For $d\, =\, 1$ (and $a\, =\, 0$), the standard {\sl Stirling2} case ${\bf S2}\, =\, {\bf S2}[1,0]\, =\, {\widehat {\bf S2}}[1,0]$, $a_0\, =\, 0$ does not contribute and the $n-m$ vectors are from the set $\{1,\,2,\, ...,\, m\}$ for the $multichoose(m,\, n-m)\, =\, {\binomial{n-1}{m-1}}$ polytopes.\par\smallskip\noindent
Some examples:\par\smallskip\noindent
a) ${\widehat S2}(1,0;3,2)\, =\, S2(3,2)\, =\, 3$ from the \dstyle{{\binomial{2}{1}}\, =\, 2} polytopes of dimension $1$ with basis lengths $1,\,2$, {\it i.e.},\, two lines of length $1$ and $2$ with total length $3$.\par\smallskip\noindent
b) ${\widehat S2}(2,1;3,2)\, =\, 9$ (see \seqnum{A039755}) from the \dstyle{{\binomial{3}{2}}\, =\, 3} polytopes of dimension $1$ with basis lengths $1,\,3,\, 5$, {\it i.e.},\, three lines of total length $9$.\par\smallskip\noindent
c) ${\widehat S2}(3,2;3,1)\, =\, 39$ (see \seqnum{A225468}) from the \dstyle{{\binomial{3}{1}}\, =\, 3} polytopes of dimension $2$ with basis lengths from the set $\{2,\,5\}$, {\it i.e.},\, two squares of area $2^2$ and $5^2$ and a rectangle of area $2^1\,5^1$, giving total area $4\, +\ 25\, +\ 10\, =\, 39$. \par\bigskip\noindent
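These examples, and the interpretation itself, can be confirmed with a small script (an illustrative Python sketch, not part of the paper; `hfunc` computes complete homogeneous symmetric functions):

```python
from math import comb, factorial, prod
from itertools import combinations_with_replacement

def hfunc(vals, k):
    # Complete homogeneous symmetric function h_k: sum over size-k multisets.
    return sum(prod(c) for c in combinations_with_replacement(vals, k))

def S2hat(d, a, n, m):
    # widehat{S2}(d,a;n,m) = S2(d,a;n,m)/d^m via the explicit sum.
    s = sum((-1)**(m - k) * comb(m, k) * (a + d*k)**n for k in range(m + 1))
    return s // (factorial(m) * d**m)

# widehat{S2}(d,a;n,m) = h_{n-m}(a, a+d, ..., a+m*d).
for d, a in [(2, 1), (3, 2)]:
    for n in range(7):
        for m in range(n + 1):
            assert S2hat(d, a, n, m) == hfunc([a + d*j for j in range(m + 1)], n - m)

assert S2hat(1, 0, 3, 2) == 3    # example a)
assert S2hat(2, 1, 3, 2) == 9    # example b): 1 + 3 + 5
assert S2hat(3, 2, 3, 1) == 39   # example c): 4 + 10 + 25
```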
{\bf II)} ${\bf \widehat{S1p}}[d,a]$\par\smallskip\noindent
From eq. ($62$) for the row polynomials and the implied elementary symmetric function formula for $\widehat{S1p}(d,a;n,m)$ of eq.\,$(64)$ one has the combinatorial interpretation.\par\smallskip\noindent
$\widehat{S1p}(d,a;n,m)$ is for $d\sspgeq 2$ the total volume of the ${\binomial{n}{n-m}}$ hyper-cuboids of dimension $n-m$ formed from the $n-m$ orthogonal $\mathbb Z^{n-m}$ vectors with distinct (dimensionless) lengths from the $n$-set $\{a\, +\ d\,j\,|\, j=0,\,1,\,...,\, n-1\}$. For $[d,a]\, =\, [1,0]$ the ordinary unsigned {\sl Stirling1} number $S1p(n,\, m)$ (see \seqnum{A132393}) gives the total volume of the ${\binomial{n-1}{n-m}}$ hyper-cuboids of dimension $n-m$ formed from the $n-m$ orthogonal $\mathbb Z^{n-m}$ vectors with distinct (dimensionless) lengths from the $(n-1)$-set $\{1,\,2,\,...\, n-1\}$.\par\smallskip\noindent
Some examples:\par\smallskip\noindent
a) $\widehat{S1p}(1,0;4,2)\, =\, S1p(4,\, 2)\, =\, 11$ (see \seqnum{A132393}) from the \dstyle{{\binomial{3}{2}}\, =\, 3} hyper-cuboids of dimension $2$ with distinct basis vector lengths from the set $\{1,\,2,\, 3\}$, {\it i.e.},\, three rectangles of area $1\,\cdot\, 2$, $1\,\cdot\, 3$ and $2 \,\cdot\, 3$, with total area $2\, +\ 3\, +\ 6\, =\, 11$.\par\smallskip\noindent
b) $\widehat{S1p}(2,1;4,1)\, =\, 176$ (see \seqnum{A028338}) from the \dstyle{{\binomial{4}{3}}\, =\, 4} hyper-cuboids of dimension $3$ with distinct basis vector lengths from the set $\{1,\,3,\,5,\,7\}$, {\it i.e.},\, four cuboids with volumes $1\,\cdot\, 3 \,\cdot\, 5$, $1\,\cdot\, 3\,\cdot\, 7$, $1 \,\cdot\, 5\,\cdot\, 7$ and $3\,\cdot\, 5 \,\cdot\, 7$, adding to $176$.\par\smallskip\noindent
c) $\widehat{S1p}(3,1;4,0)\, =\, 280$ (see \seqnum{A286718}) from the \dstyle{{\binomial{4}{4}}\, =\, 1} hyper-cuboid of dimension $4$
with distinct basis vector lengths from the set $\{1,\,4,\,7,\,10\}$, {\it i.e.},\, the $4D$ hyper-cuboid with volume $1\,\cdot\, 4 \,\cdot\, 7 \,\cdot\, 10\, =\, 280$.
\par\bigskip\noindent
Two remarks: The first column sequences $\{\widehat{S1p}(d,1;n,0)\}_{n=1}^{\infty}$ have also an interpretation as numbers of $(d+1)$-ary rooted increasing trees with $n$ vertices, including the root vertex. This is the sequence $\{S(k=d+1;n,1)\}$ of generalized {\sl Stirling2} numbers with parameter $k$ in the notation of \cite{WLang}, eq. ($5$). The reason is the {\it e.g.f.\ } called there $g2(k=d+1;x)\, =\, -1 \, +\ (1 \, +\ (1-(d+1))\,x)^{\frac{1}{1-(d+1)}} \, =\, -1 \, +\ (1\, -\ d\,x)^{-\frac{1}{d}}$ which is the {\it e.g.f.\ } of the $m=0$ column of ${\bf \widehat{S1p}}[d,1]$, {\it viz}\, $E\widehat{S1p}Col(d,1;x,0)\, =\, (1\, -\ d\,x)^{-\frac{1}{d}}$ (see \Eq{60}), but with the $n\, =\, 0$ entry removed. See the instances \seqnum{A001147}, \seqnum{A007559}, \seqnum{A007696}, \seqnum{A008548}, ... for $d\, =\, 2,\, 3,\, 4,\, 5,\, ...$, respectively. They are, for $n\sspgeq 1$, the numbers of $3,\,4,\,5,\,6,\, ...$-ary increasing rooted trees. \par\smallskip\noindent
Similarly, the first column sequences $\{\widehat{S1p}(d,d-1;n,0)\}_{n=1}^{\infty}$ are related to another variety of increasing trees given by the {\it e.g.f.\ } $g2p(k=d-1;x)\, =\, 1\, -\ (1\, -\ d\,x)^{\frac{1}{d}}$ for the sequence $\{|S(-k=1-d;n,1)|\}_{n=0}^{\infty}$ of \cite{WLang}, eq.\,($6$). This is related to the {\it e.g.f.\ } of the sequence $\{\widehat{S1p}(d,d-1;n,0)\}_{n=0}^{\infty}$, {\it i.e.},\, $E\widehat{S1p}Col(d,d-1;x,0)\, =\, (1\, -\ d\,x)^{-\frac{d-1}{d}}$, by integrating and adding $1$: $\int dx\,E\widehat{S1p}Col(d,d-1;x,0) \, +\ 1\, =\, g2p(d-1;x)$. \par\noindent
Some instances are: \seqnum{A001147}, \seqnum{A008544}, \seqnum{A008545}, \seqnum{A008546}, ... for $d\, =\, 2,\, 3,\, 4,\, 5\, ...$, respectively.
\par\bigskip\noindent
The combinatorial interpretation of ${\bf rEu}[d,a]$ should also be considered.
\par\bigskip\noindent
\vskip 1cm
\section{Proofs}
\hskip 1cm In the following proofs the setting of formal power series is used. No convergence issues are considered. Infinite sums are interchanged (one could use alternatively a large cutoff). Differentiation as well as integration will also be interchanged with infinite sums. Only statements which are not already obvious from the main text are proved here. Note that for binomials the definition of \cite{GKP}, p. 154, eq.\,$(5.1)$ is taken. This is not the definition used by {\sl Maple13} \cite{Maple}. Also $0^0\, :=\, 1$. The symmetry of binomial coefficients is used {\sl ad libitum} (but the upper number in the binomial has to be a non-negative integer).\par\smallskip\noindent
{\bf A) Proofs of section 1\,A}\par\smallskip\noindent
{\bf 1. Proof of eqs. $\bf (8)$ to $\bf (11)$}\par\smallskip\noindent
{\bf Lemma 1:}
With the notation of eq. ($9$):
\begin{equation}
(a\,{\bf 1} \, +\ d\,{\bf E}_x)^n\,x^j\, =\, (a\, +\ d\,j)^n\, x^j\ .
\end{equation}
{\bf Proof:} Trivial, by induction over $n\, \in \, \mathbb N_0$ with $j\, \in \, \mathbb N_0$.\par\smallskip\noindent
For the {\it o.g.f.\ } $GPS(d,a;n,x)$ from eq.\,($2$) with eq.\,($1$) one has, after an exchange of the two sums, inserting $x^j\,x^{-j}$ and application of {\sl Lemma 1}:
\begin{eqnarray}
GPS(d,a;n,x) &\, =\,& \sum_{m=0}^{\infty}\,x^m\, \sum_{j=0}^m\, (a\, +\ d\,j)^n \, =\, \sum_{j=0}^{\infty}\, (a\, +\ d\,j)^n\,\sum_{m=j}^{\infty}\,x^m \nonumber \\
&\, =\,& \sum_{j=0}^{\infty}\, \left( (a\,{\bf 1} \, +\ d\,{\bf E}_x)^n\,x^j\right)\, \sum_{m=j}^{\infty}\, x^{m-j}\, =\, \sum_{j=0}^{\infty}\, \left( (a\,{\bf 1} \, +\ d\,{\bf E}_x)^n\,x^j\right)\,\frac{1}{1\, -\ x}\, .
\end{eqnarray}
After summing over $j$, the reordering of differentials from eq.\,($9$), {\it i.e.},\, the definition of $S2(d,a;n,m)$, is used:
\begin{eqnarray}
\hskip 1.3cm &\, =\,& (a\,{\bf 1} \, +\ d\, {\bf E}_x)^n\,\frac{1}{(1\, -\ x)^2}\, =\, \sum_{k=0}^n\,S2(d,a;n,k)\,x^k\,{\bf d}_x^k\,\frac{1}{(1\, -\ x)^2} \nonumber \\
&\, =\,& \sum_{k=0}^n\,S2(d,a;n,k)\,k!\,\frac{x^k}{(1\, -\ x)^{2+k}}\, .
\end{eqnarray}
The three term recurrence eq.\,($10$) of the number triangle $\{S2(d,a;n,k)\}$ follows from the definition eq. ($9$):
\begin{eqnarray}
\sum_{m=0}^{n+1}\, S2(d,a;n+1,m)\,x^m\,{\bf d}_x^m &\, =\,& (a\,{\bf 1} \, +\ d\, {\bf E}_x)\,(a\,{\bf 1} \, +\ d\, {\bf E}_x)^n \, =\, (a\,{\bf 1} \, +\ d\,{\bf E}_x)\,\sum_{m=0}^{n}\, S2(d,a;n,m)x^m\,{\bf d}_x^m \nonumber \\
&\, =\, & \sum_{m=0}^{n}\, S2(d,a;n,m)\,a\, x^m\,{\bf d}_x^m \, +\ \sum_{m=0}^{n}\, S2(d,a;n,m)\,d\,\left(m\, x^m\,{\bf d}_x^m \, +\ x^{m+1}\,{\bf d}_x ^{m+1}\right) \nonumber \\
&\, =\,& \sum_{m=0}^{n+1}\, S2(d,a;n,m)\,(a\, \, +\ d\,m)\,x^m\,{\bf d}_x^m \, +\ \sum_{m=1}^{n+1}\, S2(d,a;n,m-1)\,d\,x^m\,{\bf d}_x^m\ .
\end{eqnarray}
In the second to last sum the $m\, =\, n+1$ term has been added due to the triangle condition $S2(d,a;n,m)\, =\, 0$ if $n\, < \, m$. In the last sum the lower index $m\, =\, 0$ can be added because of the input condition $S2(d,a;n,-1)\, =\, 0$. Comparing powers of $x^m\,{\bf d}_x^m$ then leads to the recurrence eq.\,($10$) after the change $n\,\to\, n-1$.\par\bigskip\noindent
The explicit form of $S2(d,a;n,m)$ from \Eq{11} satisfies the recurrence \Eq{10} together with the inputs because
\begin{eqnarray}
&& d\, S2(d,a;n-1,m-1) \, +\ (a\, +\ d\,m)\,S2(d,a;n-1,m) \nonumber \\
&&\, =\, \sum_{k=0}^{m-1}\frac{d}{(m-1)!}\,(-1)^{m-k}\,(-1)\,{\binomial{m-1}{k}}\,(a\, +\ d\,k)^{n-1} \, +\ \sum_{k=0}^{m}\frac{1}{m!}\,(-1)^{m-k}\,{\binomial{m}{k}}\,(a\, +\ d\,m)\,(a\, +\ d\,k)^{n-1} \nonumber \\
&\, =\,& \frac{1}{m!}\,\sum_{k=0}^{m}\,(-1)^{m-k}\,{\binomial{m}{k}}\,\left[-d\,m \frac{{\binomial{m-1}{k}}}{{\binomial{m}{k}}} \, +\ (a\, +\ d\,m)\right]\,(a\, +\ d\,k)^{n-1} \, .
\end{eqnarray}
In the first term of the last sum the term $k\, =\, m$ does not contribute because of the binomial. Now the term within the bracket becomes
\begin{equation}
\left[ ...\right]\, =\, -d\,\frac{m}{m}\,(m\, -\ k) \, +\ (a\, +\ d\,m)\, =\, a\, +\ d\,k\,,
\end{equation}
leading to the {\it l.h.s.\, } of the recurrence, {\it viz.}\ $S2(d,a;n,m)$ of eq.\,($11$).
\par\smallskip\noindent
For the instances of $S2[d,a]$ for $[d,a]\ =\ [1,0],\,[2,1],\,[3,1],\, [3,2],\,[4,1],\, [4,3]$ see \seqnum{A048993},\ \seqnum{A154537},\, \seqnum{A282629},\, \seqnum{A225466},\, \seqnum{A285061},\, \seqnum{A225467}, respectively. \par\bigskip\noindent
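As an independent numerical check (no part of the proof), the explicit sum of eq.\,$(11)$ can be compared with the three-term recurrence eq.\,$(10)$ in exact rational arithmetic. The following Python sketch does this; the helper names are ad hoc.

```python
from fractions import Fraction
from math import comb, factorial

def S2(d, a, n, m):
    """Explicit formula, eq. (11)."""
    if m < 0 or m > n:
        return Fraction(0)
    s = sum((-1)**(m - k) * comb(m, k) * (a + d*k)**n for k in range(m + 1))
    return Fraction(s, factorial(m))

def S2_rec(d, a, n, m):
    """Three-term recurrence, eq. (10), with the stated inputs."""
    if m < 0 or m > n:
        return Fraction(0)
    if n == 0:
        return Fraction(1)                       # S2(d,a;0,0) = 1
    return d * S2_rec(d, a, n - 1, m - 1) + (a + d*m) * S2_rec(d, a, n - 1, m)

# [d,a] = [1,0] is the classical Stirling2 triangle A048993
assert S2(1, 0, 4, 2) == 7
# explicit sum and recurrence agree, e.g. for [d,a] = [2,1] (A154537)
assert all(S2(2, 1, n, m) == S2_rec(2, 1, n, m)
           for n in range(7) for m in range(n + 1))
```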
\par\smallskip\noindent
We give also the recurrence for $S2fac(d,a;n,m)\ :=\ S2(d,a;n,m)\,m!$:
\begin{equation}
S2fac(d,a;n,m)\, =\, m\,d\, S2fac(d,a;n-1,m-1) \,\, +\ \, (a\, +\ d\,m)\, S2fac(d,a;n-1,m), \ \ {\rm for}\ n\sspgeq 1\,,\ m\, =\, 0,\,1,\,...,\,n\,,
\end{equation}
with $S2fac(d,a;n,-1)\, =\, 0$, $ S2fac(d,a;n,m)\, =\, 0$ for $n\, < \, m$ and $S2fac(d,a;0,0)\, =\, 1$.\par\smallskip\noindent
The {\it e.g.f.\ } of the row polynomials of ${\bf S2fac}[d,a]$ is \dstyle{\frac{e^{a\,t}}{1\, -\ x\,(e^{d\,t}\, -\ 1)}}. This is seen after interchanging the two sums and using the {\it e.g.f.\ } of the columns of ${\bf S2fac}[d,a]$.\par\smallskip\noindent
For the instances $[d,a]\ =\ [1,0],\,[2,1],\,[3,1],\, [3,2],\,[4,1],\, [4,3]$ see \seqnum{A131689}, \seqnum{A145901},\, \seqnum{A284861},\, \seqnum{A225472},\, \seqnum{A285066},\, \seqnum{A225473}, respectively. \par\bigskip\noindent
{\bf 2. Proof of eq.\,${\bf (13)}$, i.e., eq.\,${\bf (12)}$}\par\smallskip\noindent
The {\sl Sheffer} structure eq.\,$(12)$ means that the column {\it e.g.f.\ } of the ${\bf S2}[d,a]$ number triangle satisfies eq.\,$(13)$. The column {\it e.g.f.\ } $ES2Col(d,a;t,m)$ is here named $E(t,m)$ for simplicity.
The lower summation index in brackets can be used instead of the given one because of the triangle structure of ${\bf S2}[d,a]$. The recurrence is used in the first step.
\begin{eqnarray}
E(t,m) &\, :=\,& \sum_{n=m(0)}^{\infty}\,S2(d,a;n,m)\frac{t^n}{n!} \nonumber\\
&\, =\,& d\,\sum_{n=0(1)}^{\infty}\, S2(d,a;n-1,m-1)\frac{t^n}{n!} \, +\ (a\, +\ d\,m)\,\sum_{n=0(1)}^{\infty}\,S2(d,a;n-1,m)\,\frac{t^n}{n!} \nonumber\\
&\, =\,& \int\,dt\,\left(d\,\sum_{n=1}^{\infty}\, S2(d,a;n-1,m-1)\frac{t^{n-1}}{(n-1)!} \, +\ (a\, +\ d\,m)\,\sum_{n=1}^{\infty}\,S2(d,a;n-1,m)\,\frac{t^{n-1}}{(n-1)!} \right)\ \nonumber \\
&\, =\,& \int\,dt\,\left(d\,E(t,m-1) \, +\ (a\, +\ d\,m)\,E(t,m)\right)\ .
\end{eqnarray}
Differentiating both sides of the final equation produces a recurrence for $E(t,m)$:
\begin{equation}
\left(\frac{d\ }{dt} \, -\ (a\, +\ d\,m)\right)\,E(t,m)\, =\, d\,E(t,m-1)\,,
\end{equation}
with the input $E(t,0)\, =\, e^{a\,t}$ because $S2(d,a;n,0)\, =\, a^n$ from the recurrence.\par\smallskip\noindent
The solution of this differential difference equation, satisfying the input, is
\begin{equation}\
E(t,m) \,\equiv\, ES2Col(d,a;t,m)\, =\, e^{a\,t}\,\frac{\left(e^{d\,t}\, -\ 1\right)^m}{m!}\,,
\end{equation}
which is eq.\,($13$).\par\bigskip\noindent
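The column {\it e.g.f.\ } eq.\,$(13)$ can also be verified by machine: expand $e^{a\,t}\,(e^{d\,t}-1)^m/m!$ as a truncated formal power series with rational coefficients and compare with eq.\,$(11)$. A sketch (truncation order $N$ is arbitrary):

```python
from fractions import Fraction
from math import comb, factorial

N = 8                                    # truncation order in t

def mul(f, g):
    """Truncated Cauchy product of two coefficient lists."""
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(N + 1)]

def column_egf(d, a, m):
    """Taylor coefficients of e^{a t} (e^{d t} - 1)^m / m!  up to t^N."""
    ea  = [Fraction(a**k, factorial(k)) for k in range(N + 1)]
    em1 = [Fraction(d**k, factorial(k)) if k else Fraction(0) for k in range(N + 1)]
    p = [Fraction(1)] + [Fraction(0)] * N
    for _ in range(m):
        p = mul(p, em1)
    return [c / factorial(m) for c in mul(ea, p)]

def S2(d, a, n, m):
    """Explicit formula, eq. (11)."""
    if m < 0 or m > n:
        return Fraction(0)
    return Fraction(sum((-1)**(m - k) * comb(m, k) * (a + d*k)**n for k in range(m + 1)),
                    factorial(m))

# n! [t^n] e^{a t}(e^{d t}-1)^m / m!  =  S2(d,a;n,m)
for d, a in [(1, 0), (2, 1), (3, 2)]:
    for m in range(4):
        egf = column_egf(d, a, m)
        assert all(egf[n] * factorial(n) == S2(d, a, n, m) for n in range(N + 1))
```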
{\bf 3. Proof of eq.\,$\bf (14)$}\par\smallskip\noindent
The {\it o.g.f.\ } $GS2Col(d,a;x,m)$ of eq.\,$(14)$, abbreviated $G(x,m)$ in this proof, is shown to lead to the {\it e.g.f.\ } $ES2Col(d,a;t,m)\,\equiv\, E(t,m)$ of eq.\,$(13)$ {\it via}\ the inverse {\sl Laplace} transform, given in eq.\,$(4)$. \par\smallskip\noindent
{\bf Lemma 2:}
\begin{equation}
\prod_{j=0}^m\,\frac{1}{x\, -\ (a\, +\ d\,j)}\, =\, \left(\frac{-1}{d}\right)^m\,\frac{1}{m!}\,\sum_{j=0}^m\,(-1)^j\,{\binomial{m}{j}}\,\frac{1}{x\, -\ (a\, +\ d\,j)}\, ,\ \ {\rm for}\ \ m\, \in \, \mathbb N_0\, .
\end{equation}
{\bf Proof}: This is a standard partial fraction decomposition for the rational function \dstyle{\frac{1}{P(x)}} with $P(x)$ a polynomial of degree $m+1$ with the simple roots $\alpha_j\, =\, a\, +\ d\,j$, $j\, =\, 0,\,1,\,...,\,m $. \dstyle{\frac{1}{P(x)} \, =\, \sum_{j=0}^m\,\frac{a_j}{x \, -\ (a\, +\ d\,j)}}. Here \dstyle{a_j\, =\, \frac{1}{P^{\prime}(\alpha_j)}\, =\, \prod_{k=0,\neq j}^m\, \frac{1}{\alpha_j\, -\ \alpha_k}\, =\,\frac{1}{d^m}\, \prod_{k=0,\neq j}^m\,\frac{1}{j-k}\, =\, \frac{1}{d^m}\,\frac{1}{j!\,(-1)^{m-j}\,(m-j)!}\, =\, \frac{1}{d^m}\,\frac{1}{m!}\,(-1)^{m-j}\,{\binomial{m}{j}}}.\hskip .7cm $\square$\par\smallskip\noindent
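Since the partial fraction decomposition of {\sl Lemma 2} carries the whole proof, a quick numerical check at a rational point away from the poles may be reassuring (a sketch, not part of the proof):

```python
from fractions import Fraction
from math import comb, factorial

def lhs(x, d, a, m):
    """Product side of Lemma 2."""
    prod = Fraction(1)
    for j in range(m + 1):
        prod /= (x - (a + d*j))
    return prod

def rhs(x, d, a, m):
    """Partial fraction side of Lemma 2."""
    s = sum((-1)**j * comb(m, j) / (x - (a + d*j)) for j in range(m + 1))
    return Fraction(-1, d)**m * s / factorial(m)

x = Fraction(7, 3)          # any rational point avoiding the poles x = a + d j
assert all(lhs(x, d, a, m) == rhs(x, d, a, m)
           for d, a in [(1, 0), (2, 1), (3, 2)] for m in range(6))
```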
Now, due to the linearity of ${\cal L}^{-1}$, and the transform \dstyle{ {\cal L}^{-1}\left[\frac{1}{p\, -\ \alpha} \right]\, =\, e^{\alpha\,t}}, one finds after application of {\sl Lemma 2}, and with the help of the binomial theorem:
\begin{eqnarray}
E(t,m)&\, =\,& {\cal L}^{-1}\left[\frac{1}{p}\,G\left(\frac{1}{p}\,,m\right)\right]\, =\, {\cal L}^{-1}\left[\frac{d^m}{\prod_{j=0}^m\,(p\, -\ (a\, +\ d\, j))} \right] \, =\, \frac{(-1)^m}{m!}\,\sum_{j=0}^m\,(-1)^j\,{\binomial{m}{j}}\,{\cal L}^{-1}\left[ \frac{1}{p\, -\ (a\, +\ d\,j)}\right] \nonumber \\
&\, =\,& \frac{(-1)^m}{m!}\,e^{a\,t}\,\sum_{j=0}^m\,{\binomial{m}{j}}\, (-e^{d\,t})^j\, =\, \frac{(-1)^m}{m!}\,e^{a\,t}\,(1\, -\ e^{d\,t})^m\, =\, \frac{e^{a\,t}}{m!}\,(e^{d\,t}\, -\ 1)^m\, ,
\end{eqnarray}
which is indeed the {\it e.g.f.\ } given in eq.\,$(13)$.\par\bigskip\noindent
{\bf 4. Proof of eq.\,$\bf (16)$ with eq.\,$\bf (17)$}\par\smallskip\noindent
{\bf Lemma 3: Sheffer transform of a sequence}\par\smallskip\noindent
If the {\it e.g.f.\ } of the sequence $\{b_n\}_{n=0}^{\infty}$ is ${\cal B }(t)$, the {\it e.g.f.\ } of the sequence $\{a_n\}_{n=0}^{\infty}$ is ${\cal {A}}(t)$, and $\{b_n\}$ is the {\sl Sheffer} transform of $\{a_n\}$, {\it i.e.},\, $b_n \, =\, \sum_{m=0}^n\,S(n,\, m)\,a_m$, with $S$ {\sl Sheffer} of type $S\, =\, (g(t),\ f(t))$, then
\begin{equation}
{\cal B}(t)\, =\, g(t)\,{\cal A}(f(t))\ .
\end{equation}
The {\bf proof} uses an exchange of the two summations and the {\it e.g.f.\ } of column $m$ of $S$, {\it i.e.},\, \dstyle{g(t)\,\frac{f(t)^m}{m!}}.\par\smallskip\noindent
{\bf Corollary 1}: \par\smallskip\noindent
The row polynomials $PS(n,\,x)$ of a {\sl Sheffer} matrix ${\bf S}\, =\, (g(t),\,f(t))$ have {\it e.g.f.\ } \dstyle{EPS(t,\,x)\, =\, g(t)\,e^{x\,f(t)}}. \par\noindent
This is also called the {\it e.g.f.\ } of the triangle $\bf S$.
\par\bigskip\noindent
That the $fallfac(d,a;x,m)$ definition in \Eq{17} can be rewritten in terms of the usual falling factorial $x^{\underline{n}}$ is trivial.\par\smallskip\noindent
\vfill
\eject
\noindent
{\bf Lemma 4: E.g.f. of fallfac[d,a]}\par\smallskip\noindent
The {\it e.g.f.\ } of $fallfac(d,a;x,m)$ (see eq. $(17)$) is
\begin{equation}
F(d,a;x,t)\, :=\, 1\, +\ \sum_{m=1}^{\infty}\, fallfac(d,a;x,m)\,\frac{t^m}{m!}\, =\, (1\, +\ d\,t)^{\frac{x-a}{d}}\, .
\end{equation}
{\bf Proof}: With the binomial theorem and the rewritten form of $fallfac[d,a]$ in terms of the ordinary falling factorial this is trivial.\par\smallskip\noindent
Now eq.\,$(16)$ is a {\sl Sheffer} transform of the sequence $\{fallfac(d,a;x,m)\}_{m=0}^{\infty}$, therefore, with {\sl Lemmata} $3$ and $4$
the {\it e.g.f.\ } of the {\it r.h.s.\, } of eq.\,$(16)$ is, with the {\it e.g.f.\ } of ${\bf \widehat{S2}}$ given in connection with eq.\,$(15)$,
\begin{equation}
e^{a\,t}\,F\left(d,a;x,\frac{1}{d}\,(e^{d\,t}\, -\ 1)\right)\, =\, e^{a\,t}\, \left(1\, +\ (e^{d\,t}\, -\ 1) \right)^{\frac{x-a}{d}}\, =\, e^{a\,t}\,e^{d\,t\,\frac{x-a}{d}}\, =\, e^{t\,x},
\end{equation}
which is the {\it e.g.f.\ } of the sequence $\{x^n\}_{n=0}^{\infty}$ of the {\it l.h.s.\, }.\par\bigskip\noindent
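Eq.\,$(16)$ can also be checked directly for small $n$. The sketch below assumes, as read off the context of eqs.\,$(15)$ to $(17)$, that $\widehat{S2}(d,a;n,m)\, =\, S2(d,a;n,m)/d^m$ and $fallfac(d,a;x,m)\, =\, (x-a)(x-(a+d))\cdots(x-(a+d\,(m-1)))$; both are assumptions of this check, not statements proved here.

```python
from fractions import Fraction
from math import comb, factorial

def S2(d, a, n, m):
    """Explicit formula, eq. (11)."""
    if m < 0 or m > n:
        return Fraction(0)
    return Fraction(sum((-1)**(m - k) * comb(m, k) * (a + d*k)**n for k in range(m + 1)),
                    factorial(m))

def fallfac(d, a, x, m):
    """Assumed reading of eq. (17): (x-a)(x-(a+d))...(x-(a+d(m-1)))."""
    prod = Fraction(1)
    for j in range(m):
        prod *= (x - (a + d*j))
    return prod

x = Fraction(5, 2)
for d, a in [(2, 1), (3, 2), (4, 3)]:
    for n in range(6):
        # eq. (16): x^n = sum_m (S2(d,a;n,m)/d^m) fallfac(d,a;x,m)
        total = sum(S2(d, a, n, m) / d**m * fallfac(d, a, x, m) for m in range(n + 1))
        assert total == x**n
```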
{\bf 5. Meixner recurrence and recurrence for Sheffer row polynomials eq.\,$\bf (19)$}\par\smallskip\noindent
{\bf a)} Monic row polynomials $s(n,\,x)$ of the {\sl Sheffer} type $(g(x),\, f(x))$ satisfy the {\sl Meixner} \cite{Meixner}, p. 9, eqs.\,$(4.1)$ and $(4.2)$, recurrence ($f^{[-1]}$ denotes the compositional inverse of $f$)
\begin{equation}
f^{[-1]}({\bf d}_x)\, s(n,\,x) \, =\, n\, s(n-1,\,x),\ \ {\rm with\ input}\ \ s(0,\,x)\, =\, 1.
\end{equation}
For the proof see the original reference.\par\smallskip\noindent
For $P\widehat{S2}(d,a;n,x)\, =\, \sum_{m=0}^n\,\widehat{S2}(d,a;n,m)\,x^m$ with \dstyle{f^{[-1]}(y)\, =\, \frac{1}{d}\,log(1\, +\ d\,y)}
one has
\begin{equation}
\frac{1}{d}\,\sum_{k=1}^n\, \frac{(-1)^{k+1}}{k}\,d^k\,{\bf d}_x^k\,P\widehat{S2}(d,a;n,x)\, =\, n\,P\widehat{S2}(d,a;n-1,x),\ \ {\rm with\ input}\ \ P\widehat{S2}(d,a;0,x)\, =\, 1\ .
\end{equation}
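This {\sl Meixner} recurrence lends itself to a machine check. The sketch assumes the monic triangle $\widehat{S2}(d,a;n,m)\, =\, S2(d,a;n,m)/d^m$ and the compositional inverse $f^{[-1]}(y)\, =\, \frac{1}{d}\,\log(1+d\,y)$ of $f(t)\, =\, \frac{1}{d}(e^{d\,t}-1)$, so the lowering operator is $\frac{1}{d}\sum_{k\geq 1}\frac{(-1)^{k+1}}{k}\,d^k\,{\bf d}_x^k$ (truncated at $k\, =\, n$ on a degree-$n$ polynomial):

```python
from fractions import Fraction
from math import comb, factorial

def S2hat(d, a, n, m):
    """Assumed monic triangle: S2hat(d,a;n,m) = S2(d,a;n,m)/d^m, via eq. (11)."""
    if m < 0 or m > n:
        return Fraction(0)
    s = sum((-1)**(m - k) * comb(m, k) * (a + d*k)**n for k in range(m + 1))
    return Fraction(s, factorial(m) * d**m)

def poly(d, a, n):
    """Coefficient list of the row polynomial PS2hat(d,a;n,x)."""
    return [S2hat(d, a, n, m) for m in range(n + 1)]

def deriv(p):
    """Derivative of a coefficient list."""
    return [i * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def meixner_lhs(d, a, n):
    """Apply (1/d) sum_{k=1}^{n} ((-1)^(k+1)/k) d^k D_x^k to PS2hat(d,a;n,x)."""
    out = [Fraction(0)] * n
    dk = poly(d, a, n)
    for k in range(1, n + 1):
        dk = deriv(dk)                           # k-th derivative of the row polynomial
        coeff = Fraction((-1)**(k + 1) * d**k, k * d)
        out = [c + coeff * (dk[i] if i < len(dk) else 0) for i, c in enumerate(out)]
    return out

# Meixner: f^{[-1]}(D_x) PS2hat(n,x) = n PS2hat(n-1,x)
for d, a in [(2, 1), (3, 2)]:
    for n in range(1, 6):
        assert meixner_lhs(d, a, n) == [n * c for c in poly(d, a, n - 1)]
```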
{\bf b)} The standard recurrence for {\sl Sheffer} row polynomials $S(n,\,x)$ (not necessarily monic ones) is given in {\sl Roman} \cite{Roman}, p. 50, Corollary 3.7.2, which in our notation is\par\smallskip\noindent
{\bf Lemma 5:} (the differentiation is with respect to $t$)
\begin{equation}
S(n,\, x)\, =\, \left.\left[x\, +\ \left(log(g(f^{[-1]}(t)))\right)^{\prime}\right]\,\frac{1}{f^{[-1]}(t)^{\prime}}\,\right |_{t\, =\, {\bf d}_x}\, S(n-1,\,x), \ \ {\rm for}\ n\, \in \, \mathbb N\,,
\end{equation}
and input $S(0,\,x)\, =\, 1 $.\par\smallskip\noindent
For $PS2(d,a;n,x)$ of eq.\,$(19)$ $f^{[-1]}(t)\, =\, \frac{1}{d}\,log(1\, +\ t )$, $ f^{[-1]}(t)^{\prime}\, =\, \frac{1}{d}\,\frac{1}{1\, +\ t}$, $g(t)\, =\, e^{a\,t}$ and
$\left(log(g(f^{[-1]}(t)))\right)^{\prime}\, =\, \frac{a}{d}\,\frac{1}{1\, +\ t}$, leading to \dstyle{PS2(d,a;n,x)\, =\, \left. [x\,d\,(1\, +\ t)\, +\ a]\right |_{t\, =\, {\bf d}_x}\,PS2(d,a;n-1,x)}, which is eq.\,$(19)$.\par\bigskip\noindent
{\bf 6. Proof of eqs.\,$\bf (23)$ to $\bf (25)$}\par\smallskip\noindent
Multiplication of eq.\,$(23)$ with $(1\, -\ x)^{n+1}$ and the binomial formula gives
\begin{eqnarray}
\sum_{i=0}^n\,a_i^{(n)}\, x^i&\, =\,& \sum_{j=0}^n\, b_j^{(n)}\,x^j\,(1\, -\ x)^{n-j}\, =\, \sum_{j=0}^n\,b_j^{(n)}\,\sum_{k=0}^{n-j}\,(-1)^k\,{\binomial{n-j}{k}}\, x^{k+j}\nonumber \\
&\, =\,& \sum_{i=0}^n\,x^i\left(\sum_{j=0}^{i}\,b_j^{(n)}\,(-1)^{i-j}\,{\binomial{n-j}{i-j}} \right)\, ,
\end{eqnarray}
where in the last step a new summation index $i\, =\, k+j$ has been used instead of $k$, and the upper summation index $j$ is determined by the binomial as $\min(n,\,i)\, =\, i$, because $0\, \leq \, i\, \leq \, n$. Comparing the coefficients of the powers $x^i$, for $i\, =\, 0,\,1,\, ...,\,n$, leads then to eq.\,$(24)$ for $a_i^{(n)}$.\par\smallskip\noindent
The inverse relation, eq.\,$(25)$, uses the following binomial identity (see \cite{GKP}, p. 169, eq.\,$(5.24)$ with $k\,\to\, p$, $m\, =\, s\, =\, 0$, $l\,\to\, n-k$, $n\,\to\, n \, -\ j$)
\begin{equation}
\sum_{p\sspgeq 0}\,(-1)^p\,{\binomial{n-k}{p}}\,{\binomial{p}{n-j}}\, =\, (-1)^{n-k}\,{\binomial{0}{k-j}}\, =\, (-1)^{n-k}\,\delta_{k,j}\,,
\end{equation}
with the {\sl Kronecker} symbol $\delta_{j,k}\, =\, 1$ if $j=k$, and $0$ otherwise.\par\smallskip\noindent
In the {\it r.h.s.\, } of eq.\,$(25)$, with $a_i^{(n)}$ from eq.\,$(24)$ inserted, the two finite sums are interchanged, a new summation index $p\, =\, n-i$ is used instead of $i$, and finally the preceding binomial identity is employed.
\begin{eqnarray}
\sum_{i=0}^j\,{\binomial{n-i}{j-i}}\,a_i^{(n)}&\, =\,& \sum_{i=0}^j\,{\binomial{n-i}{j-i}}\,\sum_{k=0}^i\,(-1)^{i-k}\,b_k^{(n)}\,{\binomial{n-k}{i-k}} \, =\, \sum_{k=0}^j\,b_k^{(n)}\,\left(\sum_{i=k}^j\,(-1)^{i-k}\,{\binomial{n-i}{n-j}}\, {\binomial{n-k}{n-i}}\right)\nonumber \\
&\, =\,& \sum_{k=0}^j\,(-1)^{n-k}\,b_k^{(n)}\,\left(\sum_{p=n-j}^{n-k}\,(-1)^p\,{\binomial{n-k}{p}}\,{\binomial{p}{n-j}}\right)\, =\, b_j^{(n)}\, .
\end{eqnarray}
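That eqs.\,$(24)$ and $(25)$ are inverse to each other can also be seen by a round-trip test on arbitrary coefficient vectors (a sketch, not part of the proof):

```python
from math import comb
from random import randint, seed

def a_from_b(b):
    """Eq. (24): a_i^(n) = sum_j (-1)^(i-j) b_j^(n) C(n-j, i-j)."""
    n = len(b) - 1
    return [sum((-1)**(i - j) * b[j] * comb(n - j, i - j) for j in range(i + 1))
            for i in range(n + 1)]

def b_from_a(a):
    """Eq. (25): b_j^(n) = sum_i C(n-i, j-i) a_i^(n)."""
    n = len(a) - 1
    return [sum(comb(n - i, j - i) * a[i] for i in range(j + 1)) for j in range(n + 1)]

seed(1)
for n in range(7):
    b = [randint(-9, 9) for _ in range(n + 1)]
    assert b_from_a(a_from_b(b)) == b            # the two maps are inverse
```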
{\bf 7. Proof of eqs.\,$\bf (26)$ to $\bf (28)$}\par\smallskip\noindent
This follows by using eq.\,$(23)$ with $b_j^{(n)}\, =\, S2(d,a;n,j)\,j!$ in eq.\,$(21)$, together with eq.\,$(24)$. Then $a_k^{(n)}\, =\, rEu(d,a;n,k)$ is obtained as given in eq.\,$(28)$.\par\bigskip\noindent
{\bf 8. Proof of eq.\,$\bf (29)$}\par\smallskip\noindent
This is eq.\,$(25)$ (replacing $i\,\to\, k$) with $b_j^{(n)}$ and $a_k^{(n)}$ as given in the previous proof.\par\bigskip\noindent
{\bf 9. Proof of eq.\,$\bf (30)$}\par\smallskip\noindent
This uses the binomial identity (see \cite{GKP}, p. 169, eq. $(5.26)$ with $k\,\to\, j$, $q\,\to\, 0$, $m\,\to\, n-k$, $l\,\to\, n$ and $n\,\to\, l$)\par\smallskip\noindent
\begin{equation}
\sum_{j=l}^k\,{\binomial{n-j}{n-k}}\, {\binomial{j}{l}}\, =\, {\binomial{n+1}{n-k+l+1}}\, =\, {\binomial{n+1}{k-l}}\, .
\end{equation}
Insertion of eq.\,$(11)$ into eq.\,$(28)$ followed by an interchange of the two sums and application of the binomial identity leads to
\begin{eqnarray}
rEu(d,a;n,k)&\, =\,& \sum_{j=0}^k\,(-1)^{k-j}\,{\binomial{n-j}{k-j}}\, \sum_{l=0}^j\,(-1)^{j-l}\,{\binomial{j}{l}}\,(a\, +\ d\,l)^n\, =\, \sum_{l=0}^k\,(-1)^{k-l}\, (a\, +\ d\,l)^n\,\sum_{j=l}^k\,{\binomial{n-j}{n-k}}\, {\binomial{j}{l}} \nonumber \\
&\, =\,& \sum_{l=0}^k\,(-1)^{k-l}\,(a\, +\ d\,l)^n\,{\binomial{n+1}{k-l}}\, =\, \sum_{j=0}^k\, (-1)^{k-j}\,{\binomial{n+1}{k-j}}\,(a\, +\ d\,j)^n\, .
\end{eqnarray}
For the ${\bf rEu}[d,a]$ triangles with $[d,a]\, =\, [1,0],\, [2,1],\, [3,2],\,[4,3]$ see \seqnum{A123125}, \seqnum{A060187}, \seqnum{A225117}, \seqnum{A225118}, respectively. The case ${\bf rEu}[3,1]$ is the row reversed version of ${\bf rEu}[3,2]$, and ${\bf rEu}[4,1]$ is the row reversed version of ${\bf rEu}[4,3]$. In general this row reversion relation holds between ${\bf rEu}[d,d-a]$ and ${\bf rEu}[d,a]$, for \dstyle{a\, =\, 1,\,...,\,\floor{\frac{d}{2}}} with $\gcd(d-a,a)\, =\, 1$.\par\bigskip\noindent
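The explicit form eq.\,$(30)$ is easy to test against the classical Eulerian triangle and against the definition {\it via}\ ${\bf S2fac}[d,a]$, eq.\,$(28)$; a numerical sketch:

```python
from math import comb

def rEu(d, a, n, k):
    """Explicit form, eq. (30)."""
    return sum((-1)**(k - j) * comb(n + 1, k - j) * (a + d*j)**n for j in range(k + 1))

def rEu_via_S2(d, a, n, k):
    """Eq. (28), with S2(d,a;n,j) j! from the explicit sum of eq. (11)."""
    def S2fac(n, j):
        return sum((-1)**(j - i) * comb(j, i) * (a + d*i)**n for i in range(j + 1))
    return sum((-1)**(k - j) * comb(n - j, k - j) * S2fac(n, j) for j in range(k + 1))

# [d,a] = [1,0]: classical Eulerian triangle A123125, rows n = 3 and n = 4
assert [rEu(1, 0, 3, k) for k in range(4)] == [0, 1, 4, 1]
assert [rEu(1, 0, 4, k) for k in range(5)] == [0, 1, 11, 11, 1]
# eqs. (28) and (30) agree
assert all(rEu(d, a, n, k) == rEu_via_S2(d, a, n, k)
           for d, a in [(1, 0), (2, 1), (3, 2)] for n in range(6) for k in range(n + 1))
```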
{\bf 10. Proof of eq.\,$\bf (31)$}\par\smallskip\noindent
Like eq.\,$(6)$ one has for $GP(d,a;n,x)$ of eq.\,$(20)$
\begin{equation}
GP(d,a;n,x)\ =\ \sum_{k=0}^n\,{\binomial{n}{k}}\,a^{n-k}\,d^k\,GP(k,x)\,,
\end{equation}
with $GP(k,x)\, =\, GP(1,0;k,x)$. However, this does not lead immediately to the desired formula for $rEu(d,a;n,k)$ in terms of the usual {\sl Euler}ian numbers $rEu(n,\,k)$ claimed in eq.\,$(31)$. The proof is done by inserting ${\bf S2}[d,a]$ from eq.\,$(18)$ into eq.\,$(28)$, then replacing the usual ${\bf S2}$ by ${\bf rEu}$ {\it via}\ eq.\,$(29)$ for $[d,a]\, =\, [1,0]$. Here two binomial identities are needed. The first is given in \cite{GKP}, p. 169, eq.\,$(5.25)$ (with $k\,\to\, j$, $s\,\to\, m-i$, $n\,\to\, i$, $l\,\to\, n$, and $m\,\to\, n-k$). Here one needs $n-k\sspgeq 0$, and the upper summation index, which would be $n$, can be replaced by $k$, because for $j\, =\, k+1,\,...,\,n$ the first binomial vanishes, its upper non-negative number being smaller than the lower one.
\begin{equation}
\sum_{j=i}^k\,(-1)^j\, {\binomial{n-j}{n-k}}\,{\binomial{m-i}{j-i}}\, =\, (-1)^k\, {\binomial{m-n+k-i-1}{k-i}}\, =\, (-1)^k\, {\binomial{-(n-m-(k-i)+1)}{k-i}}\, =\, (-1)^{i}\,{\binomial{n-m}{k-i}}\ .
\end{equation}
In the last step another identity, given in \cite{GKP}, p. 164, \Eq{5.14}, has been used.\par\smallskip\noindent
\begin{equation}
{\binomial{-r}{p}} \, =\, (-1)^p\,{\binomial{p-1+r}{p}}\ .
\end{equation}
Now
\begin{eqnarray}
rEu(d,a;n,k)&\, =\,& \sum_{j=0}^k\,(-1)^{k-j}\,{\binomial{n-j}{k-j}}\,S2(d,a;n,j)\,j!\, =\, \sum_{j=0}^k\,(-1)^{k-j}\,{\binomial{n-j}{k-j}}\,\sum_{m=0}^n\,{\binomial{n}{m}}\,a^{n-m}\,d^m\,S2(m,\,j)\,j!\nonumber \\
&\, =\,& \sum_{j=0}^k\,(-1)^{k-j}\,{\binomial{n-j}{k-j}}\,\sum_{m=0}^n\,{\binomial{n}{m}}\,a^{n-m}\,d^m\,\sum_{i=0}^j\,{\binomial{m-i}{j-i}}\,rEu(m,\,i)\nonumber \\
&\, =\,& \sum_{m=0}^n\,{\binomial{n}{m}}\,a^{n-m}\,d^m\,\sum_{i=0}^k\,(-1)^k\,rEu(m,\,i)\,\sum_{j=0}^k\,(-1)^j\, {\binomial{n-j}{n-k}}\,{\binomial{m-i}{j-i}}\nonumber\\
&\, =\,& \sum_{m=0}^n\,{\binomial{n}{m}}\,a^{n-m}\,d^m\,\sum_{i=0}^k\,(-1)^k\,rEu(m,\,i)\,(-1)^i\, {\binomial{n-m}{k-i}}\, ,
\end{eqnarray}
which is eq.\,$(31)$ with summation index $p\,\to\, i$.\par\bigskip\noindent
{\bf 11. Proof of eq.\,$\bf (32)$}\par\smallskip\noindent
We show that the explicit form eq.\,$(30)$ satisfies the three term recurrence eq.\,$(32)$. The {\it l.h.s.\, } of the recurrence is
\begin{equation}
rEu(d,a;n,m)\, =\, \sum_{j=0}^m\,(-1)^{m-j}\, {\binomial{n+1}{m-j}}\,(a\, +\ d\,j)^n\, .
\end{equation}
The {\it r.h.s.\, } of the recurrence is
\begin{eqnarray}
&&(d\,(n-m)\, +\ (d\, -\ a))\,\sum_{j=0}^{m-1}\,(-1)^{m-j-1}\,{\binomial{n}{m-1-j}}\,(a\, +\ d\,j)^{n-1}\nonumber\\
&&\, +\ (a\, +\ d\,m)\,\sum_{j=0}^m\,(-1)^{m-j}\,{\binomial{n}{m-j}}\,(a\, +\ d\,j)^{n-1}\nonumber\\
&\, =\,& \sum_{j=0}^m\,(-1)^{m-j}\,{\binomial{n+1}{m-j}}\,(a\, +\ d\,j)^{n-1} \left[-(d\,(n-m)\, +\ (d\, -\ a))\frac{{\binomial{n}{m-1-j}}}{{\binomial{n+1}{m-j}}}\, +\ (a\, +\ d\,m)\,\frac{{\binomial{n}{m-j}}}{{\binomial{n+1}{m-j}}}\right]\, .
\end{eqnarray}
In the first sum the upper index $m-1$ has been extended to $m$ because the extra term vanishes due to the binomial. The terms in the bracket are shown to become $a\, +\ d\,j$ as follows.\par\smallskip\noindent
\begin{eqnarray}
[...]&\, =\,& -(d\,(n-m)\, +\ (d\, -\ a))\,\frac{n!\,(m-j)!}{(m-1-j)!\,(n+1)!}\, +\ (a\, +\ d\,m)\, \frac{n!(n+1-m+j)!}{(n-m+j)!\,(n+1)!} \nonumber \\
&\, =\,& \frac{1}{n+1}\,\left(-(d\,(n-m)\, +\ (d\, -\ a))\,(m-j)\, +\ (a\, +\ d\,m)\,(n-m+j+1)\right)\nonumber \\
&\, =\,& \frac{1}{n+1}\, \left( d\,j\,(n+1)\, +\ a\,(n+1)\right)\, =\, a+d\, j\,.
\end{eqnarray}
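The three term recurrence just verified algebraically can also be confirmed numerically from the explicit form eq.\,$(30)$ (a sketch; the explicit alternating sum also vanishes outside the triangle, which the check relies on at the boundaries):

```python
from math import comb

def rEu(d, a, n, k):
    """Explicit form, eq. (30); the sum vanishes for k > n as well."""
    if k < 0:
        return 0
    return sum((-1)**(k - j) * comb(n + 1, k - j) * (a + d*j)**n for j in range(k + 1))

# rEu(d,a;n,m) = (d(n-m) + d - a) rEu(d,a;n-1,m-1) + (a + d m) rEu(d,a;n-1,m)
for d, a in [(1, 0), (2, 1), (4, 3)]:
    for n in range(1, 7):
        for m in range(n + 1):
            assert rEu(d, a, n, m) == ((d*(n - m) + d - a) * rEu(d, a, n - 1, m - 1)
                                       + (a + d*m) * rEu(d, a, n - 1, m))
```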
{\bf 12. Proof of eq.\,$\bf (34)$}\par\smallskip\noindent
From eq.\,(21) and the row polynomial definition of $PS2fac$ we have
\begin{equation}
GP(d,a;n,x)\, =\, \frac{1}{1-x}\,\sum_{k=0}^n\,S2(d,a;n,k)\,k!\,\left(\frac{x}{1-x}\right)^k\, =\, \frac{1}{1-x}\,PS2fac\left(d,a;n,\frac{x}{1-x}\right)\ .
\end{equation}
Therefore, from eq.\,$(26)$,
\begin{equation}
\frac{1}{1-x}\,PS2fac\left(d,a;n,\frac{x}{1-x}\right)\, =\, \frac{1}{(1-x)^{n+1}}\,PrEu(d,a;n,x)\, ,
\end{equation}
which is \Eq{34} for $x\, \neq \, 1$. But \Eq{34} holds also for $x\, =\, 1$, with the row sums $PrEu(d,a;n,1)\, =\, S2fac(d,a;n,n)$. From \Eq{14} one has $S2fac(d,a;n,n) \, =\, [x^n]\left( n!\,GS2Col(d,a;x,n) \right)\, =\, d^n\,n!$ with {\it e.g.f.\ } \dstyle{\frac{1}{1\, -\ d\,t}}, independently of $a$. These sequences are, for $d=1,\,2,\,...,\,5$, \seqnum{A000142}, \seqnum{A000165}, \seqnum{A032031}, \seqnum{A047053}, \seqnum{A052562}\,.
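The row sum statement $PrEu(d,a;n,1)\, =\, d^n\,n!$, independent of $a$, is simple to confirm from the explicit form eq.\,$(30)$; a numerical sketch:

```python
from math import comb, factorial

def rEu(d, a, n, k):
    """Explicit form, eq. (30)."""
    return sum((-1)**(k - j) * comb(n + 1, k - j) * (a + d*j)**n for j in range(k + 1))

# row sums PrEu(d,a;n,1) = d^n n!, independently of a
for d, a in [(1, 0), (2, 1), (3, 1), (3, 2), (5, 2)]:
    for n in range(7):
        assert sum(rEu(d, a, n, k) for k in range(n + 1)) == d**n * factorial(n)
```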
\par\bigskip\noindent
{\bf 13. Proof of eq.\,$\bf (35)$}\par\smallskip\noindent
With eq.\,$(34)$ and the {\sl Sheffer} structure of ${\bf S2}[d,a]$ (eq.\, $(13)$ for the column {\it e.g.f.\ }, here in the variable $(1-x)\,t$)
\begin{eqnarray}
EPrEu(d,a;t,x)&\, :=\,& \sum_{n=0}^{\infty}\,\frac{t^n}{n!}\,PrEu(d,a;n,x)\, =\, \sum_{n=0}^{\infty}\,\frac{(t\,(1-x))^n}{n!}\,\sum_{m=0}^n\,S2(d,a;n,m)\,m!\left(\frac{x}{1-x}\right)^m \nonumber \\
&\, =\,& \sum_{m=0}^{\infty}\,\left(\frac{x}{1-x}\right)^m\, \sum_{n=m(0)}^{\infty}\, \frac{(t\,(1-x))^n}{n!}\,S2(d,a;n,m)\,m!\nonumber \\
&\, =\,& e^{a\,(1-x)\,t}\,\sum_{m=0}^{\infty}\,\left(\frac{x}{1-x}\right)^m \frac{(e^{d\,(1-x)\,t}\, -\ 1)^m}{m!}\, m!\nonumber \\
&\, =\,& e^{a\,(1-x)\,t}\,\frac{1}{1\, -\ \frac{x}{1-x}\,(e^{d\,(1-x)\,t}\, -\ 1)}\, =\, \frac{(1-x)\,e^{a\,(1-x)\,t}}{1\, -\ x\,e^{d\,(1-x)\,t}}\, .
\end{eqnarray}
The limit $x\,\to\, 1$ {\it via}\ {\sl l'H\^opital}'s rule leads to the {\it e.g.f.\ } \dstyle{\frac{1}{1\, -\ d\,t}} for the row sums of ${\bf rEu}[d,a]$, as found also in the preceding proof.
\par\bigskip\noindent
{\bf 14. Proof of eq.\,$\bf (39)$}\par\smallskip\noindent
One obtains $b_j^{(n+1)}$ of eq.\,$(37)$ from eq.\,$(25)$ (with $n\,\to\, n+1$) with $a_i^{(n+1)}$ given in eq.\,$(38)$. We omit the $(d,a)$ labels here. Remember that $a_{n+1}^{(n+1)}\, =\, 0$ (but $b_{n+1}^{(n+1)}$ does not vanish).
\begin{equation}
b_j^{(n+1)}\, =\, \sum_{i=0}^j\,{\binomial{n+1-i}{j-i}}\,a_i^{(n+1)}\, =\, \sum_{i=0}^j\,{\binomial{n+1-i}{j-i}}\,\sum_{p=0}^i\,(-1)^{i-p}\,{\binomial{n+1}{i-p}}\,(a\, +\ d\,p)^n\, .
\end{equation}
It is convenient to consider the case $j=0$ separately.
\begin{equation}
b_0^{(n+1)}\, =\, a_0^{(n+1)}\, =\, a^n,\ \ n\, \in \, \mathbb N_0\, .
\end{equation}
For $j\, =\, 1,\,2,\,...,\,n+1$ we have after exchange of the sums
\begin{equation}
b_j^{(n+1)}\, =\, \sum_{p=0}^j\,(-1)^p\, (a\, +\ d\,p)^n\,\sum_{i=p\,(0)}^{j}\,(-1)^i\,{\binomial{n+1-i}{n+1-j}}\,{\binomial{n+1}{i-p}}\,.
\end{equation}
Because of the second binomial one could start the sum with $i=0$. Now the binomial identity used already above in the first step of eq.\,$(95)$, is employed with $j\,\to\, i,\, i\,\to\, p,\,n\,\to\, n+1,\, k\,\to\, j,\, m\,\to\, p+n+1$. In the first binomial the lower number is non-negative and the original upper sum index $n+1$ can be replaced by $j$ because for $i\, =\, j+1,\,...,\,n+1$ the upper non-negative number in the first binomial becomes smaller than the lower one.
\begin{equation}
b_j^{(n+1)}\, =\, \sum_{p=0\,(1)}^j\,(-1)^p\, (a\, +\ d\,p)^n\,(-1)^j\,{\binomial{j-1}{j-p}}\, .
\end{equation}
Because $j\sspgeq 1$ one can use the symmetry of the binomials (this is why we have separated the $j=0$ case).
\begin{equation}
b_j^{(n+1)}\, =\, \sum_{p=0}^j\,(-1)^{j-p}\,{\binomial{j-1}{p-1}}\,(a\, +\ d\,p)^n\, .
\end{equation}
The identification with the $\Sigma S2[d,a]$ as claimed in eq.\,$(39)$ is achieved by using the {\sl Pascal} recurrence (see \seqnum{A007318}).
\begin{eqnarray}
&&\sum_{p=0}^j\,(-1)^{j-p}\,{\binomial{j-1}{p-1}}\,(a\, +\ d\,p)^n\, =\, - \sum_{p=0}^j\,(-1)^{j-p}\,{\binomial{j-1}{p}}\,(a\, +\ d\,p)^n \, +\ \sum_{p=0}^j\,(-1)^{j-p}\,{\binomial{j}{p}}\,(a\, +\ d\,p)^n \nonumber\\
&&\, =\, +\, S2(d,a;n,j-1)\,(j-1)!\, +\ S2(d,a;n,j)\,j!\, ,
\end{eqnarray}
where eq.\,$(11)$ has been used. Now the $j=0$ result $a^n$ is also covered because $S2(d,a;n,0)\, =\, a^n$ and $S2(d,a;n,-1)\,(-1)!$ is taken as vanishing ($S2(d,a;n,-1)\, =\, 0$ from the recurrence eq.\,$(10)$).
\vfill
\eject
\noindent
{\bf 15. Proof of eq.\,$\bf (40)$}\par\smallskip\noindent
Putting things together, the {\it e.g.f.\ } of $\{PS(d,a;n,m)\}_{m=0}^{\infty}$ (see \Eq{1}) becomes {\it via}\ inverse {\sl Laplace} transform of \Eq{2}
\begin{equation}
EPS(d,a;n,t)\, :=\, \sum_{m=0}^{\infty}\,PS(d,a;n,m)\,\frac{t^m}{m!} \, =\, {\cal L}^{-1}\left [\frac{1}{p}\,GPS\left(d,a;n,\frac{1}{p}\right) \right ]\,,
\end{equation}
and from eq.\,$(36)$ with eqs.\,$(27)$,\,$(38)$ and $(37)$
\begin{equation}
\frac{1}{p}\,GPS\left(d,a;n,\frac{1}{p}\right)\, =\, \sum_{j=0}^{n+1}\,b^{(n+1)}_j(d,a)\, \frac{1}{p}\,\frac{1}{p^j}\,\frac{1}{\left(1\, -\ \frac{1}{p}\right)^{j+1}}\, =\, \sum_{j=0}^{n+1}\,b^{(n+1)}_j(d,a)\,\frac{1}{(p\, -\ 1)^{j+1}}\, .
\end{equation}
Thus, by linearity of ${\cal L}^{-1}$, and the formula before eq.\,$(22)$, one obtains
\begin{equation}
EPS(d,a;n,t)\, =\, e^t\,\sum_{j=0}^{n+1}\,b_j^{(n+1)}(d,a)\,\frac{t^j}{j!}\,,
\end{equation}
which becomes finally eq.\,$(40)$ after insertion of $b_j^{(n+1)}(d,a)$ from eq.\,$(39)$.
\par\bigskip\noindent
{\bf B) Proofs of section 1\,B}\par\smallskip\noindent
{\bf 1. Proof of eq.\,$\bf (47)$}\par\smallskip\noindent
Insertion of eq.\,$(18)$ into eq.\,$(46)$ leads after interchange of the two finite sums to
\begin{equation}
B(d,a;n)\, =\, \sum_{k=0}^n\,{\binomial{n}{k}}\,a^{n-k}\,d^k\,\sum_{m=0}^{n\ (k)}\,(-1)^m\,\frac{1}{m+1}\,S2(k,\,m)\,m!\,,
\end{equation}
where in the second sum the upper index can be taken as $k$ instead of $n$ because $S2(k,\,m)\, =\, 0$ for $m\, > \, k$, and $k\, \leq \, n$ from the first sum. Then the second sum is equal to $B(k)$ by eq.\,$(46)$ for $[d,a]\, =\, [1,0]$. This is then eq.\,$(47)$.\par\smallskip\noindent
For the $B[d,a]$ numbers for $[d,a] = [1,0],\, [2,1],\,[3,1],\,[4,1],\,[5,1],\,[5,2]$
see \seqnum{A027641}/\seqnum{A027642}, \seqnum{A157779}/\seqnum{A141459}, \seqnum{A157799}/\seqnum{A285068},
\seqnum{A157817}/\seqnum{A141459}, \seqnum{A157866}/\seqnum{A288872}, \seqnum{A157833}/\seqnum{A288872}, respectively.
\par\noindent
$B[3,2](n)\, =\, (-1)^n\,B[3,1](n)$, $B[4,3](n)\, =\, (-1)^n\,B[4,1](n)$, $B[5,3](n)\, =\, (-1)^n\,B[5,2](n)$,
and $B[5,4](n)\, =\, (-1)^n\,B[5,1](n)$. \par\bigskip\noindent
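Definition eq.\,$(46)$ and the binomial transform eq.\,$(47)$ are easy to compare numerically; the sketch below generates the classical {\sl Bernoulli} numbers (convention $B(1)\, =\, -1/2$) from $\sum_{k=0}^n\,{\binomial{n+1}{k}}\,B(k)\, =\, \delta_{n,0}$, the rewritten eq.\,$(43)$:

```python
from fractions import Fraction
from math import comb

def bernoulli(n, _cache={0: Fraction(1)}):
    """Classical Bernoulli numbers (B(1) = -1/2), from the rewritten eq. (43)."""
    if n not in _cache:
        _cache[n] = -sum(comb(n + 1, k) * bernoulli(k) for k in range(n)) / Fraction(n + 1)
    return _cache[n]

def S2fac(d, a, n, m):
    """S2(d,a;n,m) m!, via the explicit sum of eq. (11)."""
    return sum((-1)**(m - k) * comb(m, k) * (a + d*k)**n for k in range(m + 1))

def B_def(d, a, n):
    """Defining sum, eq. (46)."""
    return sum(Fraction((-1)**m, m + 1) * S2fac(d, a, n, m) for m in range(n + 1))

def B_binom(d, a, n):
    """Binomial transform, eq. (47)."""
    return sum(comb(n, k) * a**(n - k) * d**k * bernoulli(k) for k in range(n + 1))

assert all(B_def(d, a, n) == B_binom(d, a, n)
           for d, a in [(1, 0), (2, 1), (3, 1), (5, 2)] for n in range(8))
# e.g. B[2,1] starts 1, 0, -1/3, 0, ...
assert [B_def(2, 1, n) for n in range(4)] == [1, 0, Fraction(-1, 3), 0]
```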
{\bf 2. Proof of eq.\,$\bf (48)$}\par\smallskip\noindent
The {\it e.g.f.\ } of $\{B(d,a;n)\}_{n=0}^\infty$ is obtained from the defining eq.\,$(46)$ recognizing, after exchange of the sums, the {\it e.g.f.\ } $ES2Col(d,a;t,m)$ of eq.\,$(13)$:\par\smallskip\noindent
\begin{eqnarray}
EB(d,a;t) &\, :=\,& \sum_{n=0}^{\infty}\, B(d,a;n)\,\frac{t^n}{n!}\, =\, \sum_{n=0}^{\infty}\, \frac{t^n}{n!}\, \sum_{m=0}^n\, (-1)^m\, \frac{m!}{m+1}\, S2(d,a;n,m)\nonumber\\
&\, =\,& \sum_{m=0}^{\infty}\, (-1)^m\, \frac{m!}{m+1}\, \sum_{n=m}^{\infty}\, \frac{t^n}{n!}\, S2(d,a;n,m)\, =\, \sum_{m=0}^{\infty}\, (-1)^m\, \frac{1}{m+1}\, e^{a\,t}\,(e^{d\,t}\, -\ 1)^m \nonumber \\
&\, =\,& e^{a\,t}\, \frac{1}{y}\,\sum_{m=0}^{\infty} \frac{y^{m+1}}{m+1}\ \ ({\rm with} \ y\, :=\, 1\, -\ e^{d\,t})
\, =\, e^{a\,t}\, \frac{1}{y}\,\int\,dy \sum_{m=0}^{\infty}\, y^m \, =\, e^{a\,t}\, \frac{1}{y}\,\int\,\frac{dy}{1-y}\nonumber \\
&\, =\,& e^{a\,t}\, \frac{1}{-y}\,log(1-y)\, =\, \frac{d\,t\,e^{a\,t}}{e^{d\,t}\, -\ 1}\,.
\end{eqnarray}
{\bf 3. Proof of eq.\,$\bf (50)$}\par\smallskip\noindent
This follows from inserting eq.\,$(47)$ into \Eq{49} (with $n-m\, =\, p$), using the binomial identity (see \cite{GKP}, p. 174, Table 174, the trinomial revision formula) \dstyle{{\binomial{n}{p}}\,{\binomial{p}{m}}\, =\, {\binomial{n}{m}}\, {\binomial{n-m}{p-m}}} and an interchange of the sums. Then the binomial formula is used.\par\smallskip\noindent
\begin{eqnarray}
B(d,a;n,x) &\, =\, & \sum_{p=0}^n\,{\binomial{n}{p}}\,B(d,a;p)\,x^{n-p}\, =\, \sum_{p=0}^n\,{\binomial{n}{p}}\,x^{n-p}\, \sum_{m=0}^p\,{\binomial{p}{m}}\,a^{p-m}\,d^m\,B(m)\nonumber \\
&\, =\, & \sum_{p=0}^n\,\sum_{m=0}^p\, {\binomial{n}{m}}\, {\binomial{n-m}{p-m}}\, x^{n-p}\,a^{p-m}\,d^m\,B(m)\, =\, \sum_{m=0}^n\,{\binomial{n}{m}}\,a^{-m}\,d^m\,B(m)\,\sum_{p=m}^n\,{\binomial{n-m}{p-m}}\, x^{n-p}\,a^p \nonumber \\
&\, =\, & \sum_{m=0}^n\,{\binomial{n}{m}}\,a^{-m}\,d^m\,B(m)\,\sum_{p=0}^{n-m}\,{\binomial{n-m}{p}}\, x^{n-(p+m)}\,a^{p+m}\nonumber \\
&\, =\, & \sum_{m=0}^n\,{\binomial{n}{m}}\,d^m\,B(m)\,\sum_{p=0}^{n-m}\,{\binomial{n-m}{p}}\, x^{(n-m)-p}\,a^p\, =\, \sum_{m=0}^n\,{\binomial{n}{m}}\,d^m\,B(m)\,(a\, +\ x)^{n-m}\, .
\end{eqnarray}
{\bf 4. Proof of eq.\,$\bf (51)$}\par\smallskip\noindent
Eq.\,$(49)$ is an exponential (also called binomial) convolution of the sequences $\{B(d,a;n)\}_{n=0}^{\infty}$ and $\{x^n\}_{n=0}^{\infty}$, hence the product of their {\sl e.g.f.}s is with \Eq{48} \dstyle{EB(d,a;t)\,e^{x\,t}\, =\, \frac{d\,t\,e^{(a+x)\,t}}{e^{d\,t}\, -\ 1}}.\par\noindent
Alternatively one can take the exponential convolution of eq.\,$(50)$ of $\{d^m\,B(m)\}_{m=0}^{\infty}$ with {\it e.g.f.\ } \dstyle{\frac{d\,t}{e^{d\,t}\, -\ 1}} and $\{(a\, +\ x)^n\}_{n=0}^{\infty}$ with {\it e.g.f.\ } $e^{(a+x)\,t}$, and their product is eq.\,$(51)$.\par\bigskip\noindent
{\bf 5. Proof of eq.\,$\bf (57)$}\par\smallskip\noindent
This proof will be rather lengthy. It will need the following simple {\it Lemma}.\par\smallskip\noindent
{\bf Lemma 6:} If the {\it e.g.f.\ } of the sequence $\{C_n\}_{n=0}^{\infty}$ is ${\cal C}(t)$ then the {\it e.g.f.\ } of the sequence \dstyle{ \left\{\frac{C_{n+1}}{n+1}\right\}_{n=0}^{\infty}} is \dstyle{\frac{1}{t}\,( {\cal C}(t)\, -\ C_0)}\ .
\par\smallskip\noindent
{\bf Proof:} \par\noindent
\begin{equation}
\sum_{n=0}^{\infty}\, \frac{C_{n+1}}{n+1}\frac{t^n}{n!}\, =\, \frac{1}{t}\, \sum_{n=0}^{\infty}\,C_{n+1}\,\frac{t^{n+1}}{(n+1)!}\, =\, \frac{1}{t}\,({\cal C}(t)\, -\ C_0)\, .
\end{equation}
We compute the {\it o.g.f.\ } of the first two terms of the claimed {\sl Faulhaber} formula \Eq{57} multiplied by $d\, (n+1)$
\begin{equation}
d\,(n+1)\,G(d,a;n,x)\, :=\, \sum_{m=0}^{\infty}\, x^m\,\left\{ B(d;n+1,x=a+d\,(m+1))\, -\ B(d;n+1,x=d)\right\}\,,
\end{equation}
with the polynomials $B(d;n,x)$ from \Eq{52} which are inserted with summation index $k\, =\, n-m$ instead of $m$. The two terms with $k\, =\, n+1$ will be separated and they cancel. Then the sums will be interchanged.
\begin{eqnarray}
d\,(n+1)\,G(d,a;n,x)&\, =\,& \sum_{m=0}^{\infty}\, x^m\,\left\{\sum_{k=0}^n\,{\binomial{n+1}{k}}\, B(d;k)\, (a\, +\ d\,(m+1))^{n+1-k}\, -\ \sum_{k=0}^n\, {\binomial{n+1}{k}}\,B(d;k)\,d^{n+1-k}\right\}\nonumber \\
&\, =\,& \sum_{k=0}^n\,{\binomial{n+1}{k}}\, B(d;k)\,\left\{\left(\sum_{m=0}^{\infty}\,(a\, +\ d\,(m+1))^{n+1-k}\,x^m\right) \, -\ \frac{1}{1-x}\,d^{n+1-k}\right\}\, .
\end{eqnarray}
The last term in the curly bracket simplifies with $B(d;k)\, =\, d^k\, B(k)$ from \Eq{53}, and with \Eq{43} rewritten as
\begin{equation}
\sum_{k=0}^n\,{\binomial{n+1}{k}}\,B(k)\, =\, \delta_{n,0}\,,
\end{equation}
to
\dstyle{-\frac{d}{1\, -\ x}\,\delta_{n,0}}.\par\noindent
In the remaining double sum one uses the {\it o.g.f.\ } of sums of powers (see eqs. $(20)$ and $(21)$) after an index shift $m\,\to\, m-1$, then one adds and subtracts the new $m=0$ term. Thus,
\begin{eqnarray}
d\,(n+1)\,G(d,a;n,x)&\, =\,& \sum_{k=0}^n\,{\binomial{n+1}{k}}\, B(d;k)\,\frac{1}{x} \left( GP(d,a;n+1-k,x)\, -\ a^{n+1-k}\right) \, -\ \frac{d}{1\, -\ x}\,\delta_{n,0}\nonumber \\
&\, =\,& \sum_{k=0}^n\,{\binomial{n+1}{k}}\, B(d;k)\,\left[-\frac{a^{n+1-k}}{x} \, +\ \sum_{m=0}^{n+1-k}\, S2(d,a;n+1-k,m)\,m!\,\frac{x^{m-1}}{(1\, -\ x)^{m+1}}\right]\,\nonumber\\
&&\, -\ \frac{d}{1\, -\ x}\,\delta_{n,0}\, .
\end{eqnarray}
The term with ${\bf S2}[d,a]$ will now be treated separately as $d\,(n+1)\,G1(d,a;n,x)$ and the remainder\par\noindent
$d\,(n+1)\,G2(d,a;n,x)$ will be added later. In $d\,(n+1)\,G1(d,a;n,x)$ a new summation index $k' \, =\, n+1-k$ is used (called then again $k$), and the $m=0$ sum term will be separated in order to have in both sums the same offset $1$.\par\smallskip\noindent
\begin{eqnarray}
d\,(n+1)\,G1(d,a;n,x) &\, =\,& \sum_{k=1}^{n+1}\,{\binomial{n+1}{k}}\, B(d;n+1-k)\,\left(\sum_{m=1}^k\, S2(d;a;k,m)\,m!\,\frac{x^{m-1}}{(1\, -\ x)^{m+1}}\, +\ \frac{a^k}{x\,(1\, -\ x)}\right)\, \nonumber \\
&\, =:\,& d\,(n+1)\,G11(d,a;n,x) \, +\ d\,(n+1)\,G12(d,a;n,x)\, .
\end{eqnarray}
The $m=0$ term $ d\,(n+1)\,G12(d,a;n,x)$ will be added later, and for the first term we have the following {\it Lemma}.\par\smallskip\noindent
{\bf Lemma 7:} \par\smallskip\noindent
\begin{equation}
G11(d,a;n,x)\, =\, GPS(d,a;n,x)\,,
\end{equation}
which is the {\it o.g.f.\ } given in \Eq{2} of the object of desire $PS(d,a;n,m)$, {\it i.e.},\, the {\it l.h.s.\, } of the {\sl Faulhaber} formula \Eq{57}.\par\smallskip\noindent
{\bf Proof:} The two sums are exchanged, and in the $m$-sum a shift $m\,\to\, m+1$ will be applied.
\begin{eqnarray}
G11(d,a;n,x)&\, =\,& \frac{1}{d\,(n+1)}\,\sum_{m=1}^{n+1}\frac{x^{m-1}}{(1-x)^{m+1}}\,\sum_{k=m}^{n+1}\, {\binomial{n+1}{k}}\,B(d;n+1-k)\,S2(d,a;k,m)\,m!\nonumber\\
&\, =\,&\sum_{m=0}^{n}\frac{x^m}{(1-x)^{m+2}}{\frac{1}{d\,(n+1)}}\sum_{k=m+1}^{n+1} {\binomial{n+1}{k}}B(d;n+1-k)S2(d,a;k,m+1)(m+1)!\,.
\end{eqnarray}
The $k$-sum will be called $C_{n+1}\,\equiv\, C(d,a;n+1,m+1)$. Now {\it Lemma 6} is used to compute the {\it e.g.f.\ } of \dstyle{\left\{\frac{C_{n+1}}{d\,(n+1)}\right\}_{n=0}^{\infty}}. Because \dstyle{C_n\, =\, \sum_{k=0}^{n} {\binomial{n}{k}}B(d;n-k)S2(d,a;k,m+1)(m+1)!} (the sum can start with $k=0$ because $S2(d,a;k,m+1)$ vanishes for $k\, < \, m+1$) is an exponential convolution, the {\it e.g.f.\ } of $\{C_n\}_{n=0}^{\infty}$ is the product of $EB(d;t)$ from \Eq{55} and the {\it e.g.f.\ } $ES2Col(d,a;t,m+1)$ multiplied by $(m+1)!$, hence
\begin{equation}
\frac{d\,t}{e^{d\,t}\, -\ 1}\cdot e^{a\,t}\,(e^{d\,t}\, -\ 1)^{m+1} \, =\, d\,t\,e^{a\,t}\,(e^{d\,t}\, -\ 1)^m\, .
\end{equation}
Thus, the {\it e.g.f.\ } of \dstyle{\left\{\frac{C_{n+1}}{d\,(n+1)}\right\}_{n=0}^{\infty}} is, by {\it Lemma 6}, $e^{a\,t}\,(e^{d\,t}\, -\ 1)^m$ because $C_0 \, =\, C(d,a;0,m+1)\, =\, B(d;0)\,S2(d,a;0,m+1)(m+1)!\, =\, 0$, since $S2(d,a;0,m+1)\, =\, 0$ for $m\sspgeq 0$. But this is the {\it e.g.f.\ } of $\{S2(d,a;n,m)\,m!\}_{n=0}^{\infty}$, and therefore
\begin{equation}
G11(d,a;n,x) \, =\, \sum_{m=0}^{n}\frac{x^m}{(1-x)^{m+2}}\,S2(d,a;n,m)\,m! \, =\, GPS(d,a;n,x)\, .
\end{equation}
In the last step \Eq{8} was used.\hskip 12cm $\square$\par\smallskip\noindent
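This {\it Lemma} can also be checked numerically. The following sketch is not part of the paper; it assumes only the standard expansions $B(d;n)\, =\, d^n\,B_n$ (with the convention $B_1\, =\, -1/2$ from the {\it e.g.f.\ } $t/(e^t-1)$) and $S2(d,a;n,m)\, =\, \frac{1}{m!}\,\sum_{j=0}^m\,(-1)^{m-j}\,{\binomial{m}{j}}\,(a\, +\ d\,j)^n$, and verifies $C(d,a;n+1,m+1)\, =\, d\,(n+1)\,S2(d,a;n,m)\,m!$ in exact rational arithmetic:

```python
from fractions import Fraction as F
from math import comb, factorial

def bernoulli(n, _cache={0: F(1)}):
    # Bernoulli numbers with e.g.f. t/(exp(t) - 1), i.e. B_1 = -1/2
    if n not in _cache:
        _cache[n] = -sum(comb(n + 1, k) * bernoulli(k) for k in range(n)) / (n + 1)
    return _cache[n]

def genB(d, n):
    # generalized Bernoulli numbers B(d;n) = d^n B_n, e.g.f. d*t/(exp(d*t) - 1)
    return bernoulli(n) * d**n

def genS2(d, a, n, m):
    # S2(d,a;n,m) from the column e.g.f. exp(a*t)*(exp(d*t) - 1)^m / m!
    return F(sum((-1)**(m - j) * comb(m, j) * (a + d*j)**n for j in range(m + 1)),
             factorial(m))

d, a = 3, 2
for n in range(6):
    for m in range(n + 1):
        C = sum(comb(n + 1, k) * genB(d, n + 1 - k)
                * genS2(d, a, k, m + 1) * factorial(m + 1) for k in range(n + 2))
        assert C == d * (n + 1) * genS2(d, a, n, m) * factorial(m)
print("C(d,a;n+1,m+1) = d*(n+1)*S2(d,a;n,m)*m! checked for n < 6")
```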
If all terms of $G1$ and $G2$ are added we have \par\smallskip\noindent
\begin{eqnarray}
G(d,a;n,x)\, =\, GPS(d,a;n,x) &\, +\ & \frac{1}{d\,(n+1)}\,\left [\frac{1}{x\,(1-x)}\,\sum_{k=1}^{n+1}\,{\binomial{n+1}{k}}\, B(d;n+1-k)\,a^k\right.\nonumber \\
&&\hskip 2cm \left.\, -\ \sum_{k=0}^n\,{\binomial{n+1}{k}}\, B(d;k)\,\frac{a^{n+1-k}}{x} \, -\ \frac{d}{1-x}\,\delta_{n,0}\right]\ .
\end{eqnarray}
In the second sum an index change $k'\, =\, n+1-k$ leads to
\begin{eqnarray}
G(d,a;n,x) - GPS(d,a;n,x)&=&\frac{1}{d\,(n+1)}\left[\left(\frac{1}{x\,(1-x)}\, -\ \frac{1}{x}\right)\sum_{k=1}^{n+1}\,{\binomial{n+1}{k}}B(d;n+1-k)\,a^k\, -\ \frac{d}{1-x}\delta_{n,0}\right]\nonumber \\
&\, =\,& \frac{1}{d\,(n+1)}\,\frac{1}{1-x}\,\left[\sum_{k=1}^{n+1}\,{\binomial{n+1}{k}}\, B(d;n+1-k)\,a^k\, -\ d\,\delta_{n,0} \right]\nonumber\\
&\, =\,& \frac{1}{d\,(n+1)}\,\frac{1}{1-x}\,\llap{\phantom{xxxxxxx}}\left[\sum_{k=0}^{n+1}\,{\binomial{n+1}{k}}\, B(d;n+1-k)\,a^k\, -\ B(d;n+1)\, -\ d\,\delta_{n,0} \right]\nonumber \\
&\, =\,& \frac{1}{d\,(n+1)}\,\frac{1}{1-x}\, \left[B(d;n+1,x=a) \, -\ B(d;n+1,x=0) \, -\ d\,\delta_{n,0}\right].
\end{eqnarray}
In the last step the polynomials of \Eq{52} have been identified. If now the definition of $G(d,a;n,x)$ in \Eq{116} is recalled, and the coefficient $[x^m]$ of this {\it o.g.f.\ } is picked, one finds after this {\it tour de force} the {\sl Faulhaber} formula \Eq{57}.\par\bigskip\noindent
\par\smallskip\noindent
{\bf C) Proofs of section 1\,C}\par\smallskip\noindent
{\bf 1. Proof of eq.\,$\bf (59)$}\par\smallskip\noindent
In the {\sl Sheffer} group of (infinite) lower triangular matrices the inverse element of ${\bf S}\, =\, (g(x),\,f(x))\,\equiv\, (g,\,f)$ is
\begin{equation}
{\bf S}^{-1}\, =\, \left(\frac{1}{g(f^{[-1]}(y))},\,f^{[-1]}(y)\right) \,\equiv\, \left(\frac{1}{g\circ f^{[-1]}}, \,f^{[-1]}\right)
\end{equation}
with the compositional inverse $f^{[-1]}$ of $f$, {\it i.e.},\, $f(f^{[-1]}(y))\, =\, y$, or $f^{[-1]}(f(x))\, =\, x$, identically. Here $f(x)\, =\, x\,{\hat f}(x)$ with ${\hat f}(0)\, \neq \, 0$. \par\smallskip\noindent
For the {\sl Sheffer} matrix ${\bf S2}[d,a]$ one has $g(x)\, =\, e^{a\,x}$ and $f(x)\, =\, e^{d\,x}\, -\ 1$, hence \dstyle{f^{[-1]}(y)\, =\, \frac{1}{d}\,\log(1\, +\ y)}, and
\begin{equation}
{\bf S1}[d,a]\, :=\, ({\bf S2}[d,a])^{-1} \, =\, \left((1\, +\ y)^{-\frac{a}{d}} ,\,\frac{1}{d}\,\log(1\, +\ y)\right)\,.
\end{equation}
This matrix has in general non-integer rational entries. The unsigned matrix ${\bf S1p}[d,a]\,\equiv\, |{\bf S1}[d,a]|$ ($ p$ for non-negative) has elements $S1p(d,a;n,m)\, =\, (-1)^{n-m}\, S1(d,a;n,m)$ because then the {\it e.g.f.\ } for column $m$ becomes
\begin{equation}
ES1p(d,a;t,m)\, =\, (1-t)^{-\frac{a}{d}}\,\frac{\left(-\frac{1}{d}\,\log(1\, -\ t)\right)^m}{m!}\,,
\end{equation}
and both (formal) power series have non-negative elements which are in general fractional numbers.\par\smallskip\noindent
For combinatorial considerations one is interested in non-negative integer matrices. Therefore, a scaling of the rows is performed: $\widehat{S1p}(d,a;n,m)\, :=\, d^n\,S1p(d,a;n,m)$ which leads to diagonal elements $1$, and the {\sl Sheffer} matrix is
\begin{equation}
{\bf \widehat{S1p}}[d,a]\, =\, \left((1\, -\ d\,y)^{-\frac{a}{d}},\,-\frac{1}{d}\,\log(1\, -\ d\,y)\right)\,,
\end{equation}
because the scaling leads to $t\,\to\, d\,t$ in $ES1p(d,a;t,m)$. The new power series have non-negative integer coefficients in their exponential expansions, because \dstyle{\left[\frac{y^n}{n!}\right]\,(1\, -\ d\,y)^{-\frac{a}{d}}\, =\, \left(\frac{a}{d}\right)^{\overline{n}}\,d^n \, =\, \prod_{j=0}^{n-1}\,(a\, +\ d\,j)\, =\, risefac(d,a;0,n)} (see \Eq{63} for the $risefac$ definition), and \dstyle{\left[\frac{y^n}{n!}\right]\,\left( -\frac{1}{d}\,\log(1\, -\ d\,y)\right)\, =\, (n-1)!\,d^{n-1}} for $n\sspgeq 1$ (and $0$ for $n=0$).\par\smallskip\noindent
{\bf 2. Proof of eq.\,$\bf (61)$}\par\smallskip\noindent
The three term recurrence of the ${\bf \widehat{S1p}}[d,a]$ can be obtained from the {\it e.g.f.\ } of their column sequences $E\widehat{S1p}Col(d,a;t,m)$ given in \Eq{60} which we abbreviate for this proof as $Ep(t,m)$.\par\smallskip\noindent
{\bf Lemma 8:}\par\smallskip\noindent
\begin{equation}
(1-d\,t)\,\frac{d\,}{dt}\,Ep(t,m)\, =\, a\,Ep(t,m) \, +\ Ep(t,m-1)\, \ \ {\rm for}\ m\, \in \, \mathbb N\,,
\end{equation}
and the input is \dstyle{Ep(t,0)\, =\, (1\, -\ d\,t)^{-\frac{a}{d}}}.\par\smallskip\noindent
{\bf Proof:} This is elementary with \Eq{60}.\par\smallskip\noindent
Now the recurrence \Eq{61} is seen to satisfy this {\it Lemma}.
\begin{eqnarray}
Ep(t,m) &\, =\,& \sum_{n=0}^{\infty}\,\frac{t^n}{n!}\,\widehat{S1p}(d,a;n,m)\nonumber\\
&\, =\,& \sum_{n=0\ (1)}^{\infty}\,\frac{t^n}{n!}\, \widehat{S1p}(d,a;n-1,m-1)\, +\ \sum_{n=0\ (1)}^{\infty}\,\frac{t^n}{n!}\,(d\,n\, -\ (d-a))\,\widehat{S1p}(d,a;n-1,m)\nonumber\\
&\, =\,& \sum_{n=0}^{\infty}\,\frac{t^{n+1}}{(n+1)!}\,\widehat{S1p}(d,a;n,m-1) \, +\ \sum_{n=0}^{\infty}\,\frac{t^{n+1}}{(n+1)!}\,(d\,(n+1)\, -\ (d-a))\,\widehat{S1p}(d,a;n,m)\nonumber \\
&\, =\,& \int dt\, Ep(t,m-1)\, +\ d\,t\,Ep(t,m)\, -\ (d-a)\, \int dt\,Ep(t,m)\ .
\end{eqnarray}
In the second line the two sums actually start with $n=1$ because ${\bf \widehat{S1p}}[d,a]$ vanishes for negative row indices. This is an integral-difference equation with input $Ep(t,0)$ as given in the {\it Lemma}.
\begin{equation}
(1\, -\ d\,t)\,Ep(t,m)\, =\, \int dt\,\left(Ep(t,m-1)\, -\ (d\, -\ a)\,Ep(t,m)\right)\ .
\end{equation}
Differentiation produces precisely the equation of the {\it Lemma}.\par\smallskip\noindent
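As an independent check of the recurrence, one can generate the rows of ${\bf \widehat{S1p}}[d,a]$ from it and compare them with the coefficients of $risefac(d,a;x,n)\, =\, \prod_{j=0}^{n-1}\,(x\, +\ a\, +\ d\,j)$ (see \Eq{62} and \Eq{63}). The sketch below is illustrative only, using the recurrence exactly as read off from the proof above:

```python
def s1p_hat_rows(d, a, nmax):
    # rows n = 0..nmax of S1phat[d,a] from the three-term recurrence (61):
    # T(n,m) = T(n-1,m-1) + (d*n - (d - a)) * T(n-1,m)
    rows = [[1]]
    for n in range(1, nmax + 1):
        prev = rows[-1]
        rows.append([(prev[m - 1] if m >= 1 else 0)
                     + ((d*n - (d - a)) * prev[m] if m < n else 0)
                     for m in range(n + 1)])
    return rows

def risefac_coeffs(d, a, n):
    # coefficients, lowest degree first, of prod_{j=0}^{n-1} (x + a + d*j)
    poly = [1]
    for j in range(n):
        c = a + d * j
        new = [0] * (len(poly) + 1)
        for m, p in enumerate(poly):
            new[m] += c * p      # contribution of c * x^m
            new[m + 1] += p      # contribution of x^(m+1)
        poly = new
    return poly

for d, a in [(1, 0), (2, 1), (3, 2)]:
    rows = s1p_hat_rows(d, a, 6)
    for n in range(7):
        assert rows[n] == risefac_coeffs(d, a, n)
print("rows of S1phat[d,a] match the risefac(d,a;x,n) coefficients")
```

For $[d,a]\, =\, [1,0]$ this reproduces the unsigned Stirling numbers of the first kind.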
{\bf 3. Proof of eq.\,$\bf (62)$}\par\smallskip\noindent
The row polynomials of {\sl Sheffer} triangles are a {\sl Sheffer} transform of the monomials $\{x^n\}_{n=0}^{\infty}$. Therefore, with {\it Lemma 3}, \Eq{82}, the {\it e.g.f.\ } of the (ordinary) row polynomials of ${\bf \widehat{S1p}}[d,a]$ is obtained from the {\it e.g.f.\ } of ${\bf \widehat{S1p}}[d,a]$ given by \Eq{59} and $e^{x\,t}$, {\it i.e.},\,
\begin{equation}
EP\widehat{S1p}(d,a;t,x)\, :=\, \sum_{n=0}^{\infty}\, P\widehat{S1p}(d,a;n,x)\,\frac{t^n}{n!}\, =\, (1\, -\ d\,t)^{-\frac{a}{d}}\,exp(x\,(-\frac{1}{d}\,\log(1-d\,t)))\, =\, (1\, -\ d\,t)^{-\frac{x+a}{d}}\, .
\end{equation}
Then the binomial theorem leads to
\begin{equation}
EP\widehat{S1p}(d,a;t,x) \, =\, \sum_{n=0}^{\infty}\,\frac{t^n}{n!}\, (-d)^n\, \left(-\frac{x+a}{d}\right)^{\underline{n}} \, =\, \sum_{n=0}^{\infty}\,\frac{t^n}{n!}\,risefac(d,a;x,n)\, ,
\end{equation}
where in the first equation the usual falling factorial $x^{\underline{n}}\, :=\, \prod_{j=0}^{n-1}\, (x-j)$ appeared, and in the second equation the definition \Eq{63} for $risefac(d,a;x,n)$ has been used in the rewritten form using ordinary falling factorials.\par\smallskip\noindent
In this way we have found, as a corollary, the {\it e.g.f.\ } of $\{risefac(d,a;x,n)\}_{n=0}^{\infty}$ to be \dstyle{(1\, -\ d\,t)^{-\frac{x+a}{d}}}.\par\smallskip\noindent
The $fallfac[d,a]$ analogue is obtained from inverting \Eq{16} using the inverse of the scaled ${\bf \widehat{S2}}[d,a]$ {\sl Sheffer} matrix, {\it i.e.},\, the signed ${\bf \widehat{S1}}[d,a]$ matrix.
\begin{equation}
fallfac(d,a;x,n)\, =\, \sum_{m=0}^n\,\widehat{S1}(d,a;n,m)\,x^m\, .
\end{equation}
These are the row polynomials of ${\bf \widehat{S1}}[d,a]$.\par\smallskip\noindent
{\bf 4. Lah[d,a]}\par\smallskip\noindent
It is tempting to give here the generalized unsigned {\sl Lah} matrix ${\bf L}[d,a]$ as transition matrix between $risefac[d,a]$ and $fallfac[d,a]$.\par\smallskip\noindent
For the ordinary $[d,a]\, =\, [1,0]$ {\sl Lah} triangle see \seqnum{A271703} (or \seqnum{A008297} with $n\sspgeq m\sspgeq 1$) and \cite{GKP}, exercise 31, p. 312, solution p. 552.\par\smallskip\noindent
The generalization is
\begin{equation}
risefac(d,a;x,n)\, =\, \sum_{m=0}^n\,L(d,a;n,m)\,fallfac(d,a;x,m)\ .
\end{equation}
From \Eq{62} and \Eq{16} one has, in matrix notation
\begin{equation}
{\bf L}[d,a]\, =\, {\bf\widehat{S1p}}[d,a]\,\cdot\, {\bf\widehat{S2}}[d,a]\,.
\end{equation}
We quote a {\it Lemma} on the multiplication law of the {\sl Sheffer} group.\par\smallskip\noindent
{\bf Lemma 9:} \par\smallskip\noindent
If the product of two {\sl Sheffer} matrices with ${\bf S1}\, =\, (g1,\, f1)$ and ${\bf S2}\, =\, (g2,\, f2)$ is ${\bf S3}\, =\, {\bf S1}\,\cdot\, {\bf S2}$ with ${\bf S3}\, =\, (g3,\, f3)$ then
\begin{equation}
g3\, =\, g1\, (g2\,\circ\, f1)\ \ , \ \ f3\, =\, (f2\,\circ\, f1)\ \ ,\ \ {\rm i.e.}, \ \
g3(t)\, =\, g1(t)\,g2(f1(t))\ \ , \ \ f3(t)\, =\, f2(f1(t))\, .
\end{equation}
This is standard {\sl Sheffer} lore.\par\smallskip\noindent
With \Eq{59} and the statement just before \Eq{15} this implies the {\sl Sheffer} structure\par\smallskip\noindent
\begin{equation}
{\bf L}[d,a]\, =\, \left((1\, -\ \,d\,t)^{-\frac{2\,a}{d}},\, \frac{t}{1\, -\ d\,t}\right)\, .
\end{equation}
A derivation of an explicit form along the lines of the mentioned exercise in \cite{GKP} does not immediately lead to an explicit formula for $L(d,a;n,m)$ if $a\, \neq \, 0$. See also the complicated form of \Eq{69} for ${\bf \widehat{S1p}}[d,a]$. Of course the matrix product can be written with the help of \Eq{64} or \Eq{65} and \Eq{11}.\par\smallskip\noindent
The {\it e.g.f.\ } of the column sequences is
\begin{equation}
ELCol(d,a;t,m)\, =\, (1\, -\ \,d\,t)^{-\frac{2\,a}{d}}\, \frac{1}{m!}\,\left( \frac{t}{1\, -\ d\,t} \right)^m\,,\ \ m\, \in \, \mathbb N_0\, .
\end{equation}
From the so-called $a-$ and $z-$sequences for {\sl Sheffer} matrices (see the link \cite{WLang2}, where further references are given; this link is also found in \seqnum{A006232}) one finds recurrence relations. The {\it e.g.f.}s of these sequences are ($g$ and $f$ are those of \Eq{140})\par\smallskip\noindent
\begin{eqnarray}
a(y)&\, =\,& \frac{y}{f^{[-1]}(y)} \, =\, 1\, +\ d\, y\, =\, a(d;y)\,,\nonumber \\
z(y)&\, =\,& \frac{1}{f^{[-1]}(y)}\,\left(1\, -\ \frac{1}{g(f^{[-1]}(y))}\right)\, =\, \frac{1\, +\ d\,y}{y}\,\left(1\, -\ (1\, +\ d\,y)^{-\frac{2\,a}{d}}\right)\, =\, z(d,a;y)\,.
\end{eqnarray}
This means that there is always a three term recurrence for the matrix entries $L(d,a;n,m)$ for $n \sspgeq m \sspgeq 1$ because the $a-$sequence is $\{1,\,d,\,{\rm repeat}(0)\}$ {\it i.e.},\, \par\smallskip\noindent
\begin{equation}
L(d,a;n,m)\, =\, \frac{n}{m}\,L(d,a;n-1,m-1)\, +\ d\,n\, L(d,a;n-1,m)\,,\ \ n\, \in \, \mathbb{N},\, m\, =\, 1,\,2, ...,\, n\, ,
\end{equation}
where the input from column $m\, =\, 0$, besides $L(d,a;0,0)\, =\, 1$, can be taken from the {\it e.g.f.\ } \Eq{141}. \par\smallskip\noindent
In general one can use the $z-$sequence for column $m=0$ in combination with the given recurrence \Eq{143}. For $[d,a]\, =\, [1,0]$ where the $z-$sequence vanishes, the $m \, =\, 0$ column becomes directly $\{1,\,{\rm repeat}(0)\}$. For $[d,a]\, =\, [2,1]$ the $z-$sequence becomes $\{2,\,{\rm repeat}(0)\}$, and the column $m=0$ is also given directly as \seqnum{A000165}. In all other cases also entries with $m\sspgeq 1$ from lower rows are needed.\par\smallskip\noindent
In this special ${\bf L}[d,a]$ case one can, however, derive from the column {\it e.g.f.\ } \Eq{141} a four term recurrence, {\it i.e.},\,
\begin{eqnarray}
L(d,a;n,m)\, =\, L(d,a;n-1,m-1) &\, +\ & 2\,(a\, +\ d\,(n-1))\, L(d,a;n-1,m)\nonumber \\
&\, -\ & d\,(n-1)\,(2\,a\, +\ d\,(n-2))\, L(d,a;n-2,m)\,,
\end{eqnarray}
with inputs $L(d,a;0,0)\, =\, 1$, $L(d,a;n,-1)\, =\, 0$, $L(d,a;-1,m)\, =\, 0$, and $L(d,a;n,m)\, =\, 0$ if $n\, < \, m$.\par\smallskip\noindent
{\bf Proof:}\,
This uses the definition of \dstyle{ELCol(d,a;t,m)\, :=\, \sum_{n=m\, (0)}^{\infty}\, L(d,a;n,m)\,\frac{t^n}{n!}}, and the trivial result
\begin{equation}
(1-d\,t)^2\,\frac{d\ }{dt}\,ELCol(d,a;t,m)\, =\, 2\,a\,(1\, -\ d\, t)\,ELCol(d,a;t,m)\, +\ ELCol(d,a;t,m-1)\, .
\end{equation}
The recurrence follows then by comparing powers of \dstyle{\frac{t^n}{n!}}\,, sending $n\,\to\, n-1$.
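The four-term recurrence, together with the transition property defining ${\bf L}[d,a]$, can be spot-checked numerically. The sketch below is illustrative and not part of the paper; it assumes $fallfac(d,a;x,m)\, =\, \prod_{j=0}^{m-1}\,(x\, -\ a\, -\ d\,j)$, {\it i.e.},\, the $d\,\to\, -d$, $a\,\to\, -a$ companion of the $risefac$ definition \Eq{63}:

```python
def risefac(d, a, x, n):
    # prod_{j=0}^{n-1} (x + a + d*j), eq. (63)
    r = 1
    for j in range(n):
        r *= x + a + d * j
    return r

def fallfac(d, a, x, n):
    # assumed companion of risefac with d -> -d, a -> -a
    r = 1
    for j in range(n):
        r *= x - a - d * j
    return r

def lah_rows(d, a, nmax):
    # generalized Lah triangle L[d,a] built from the four-term recurrence above
    rows = [[1]]
    def get(n, m):
        return rows[n][m] if n >= 0 and 0 <= m <= n else 0
    for n in range(1, nmax + 1):
        rows.append([get(n - 1, m - 1)
                     + 2 * (a + d * (n - 1)) * get(n - 1, m)
                     - d * (n - 1) * (2 * a + d * (n - 2)) * get(n - 2, m)
                     for m in range(n + 1)])
    return rows

for d, a in [(1, 0), (2, 1), (3, 2)]:
    rows = lah_rows(d, a, 6)
    for n in range(7):
        for x in range(-3, 4):   # 7 points determine a polynomial of degree <= 6
            assert risefac(d, a, x, n) == sum(
                rows[n][m] * fallfac(d, a, x, m) for m in range(n + 1))
print("risefac(d,a;x,n) = sum_m L(d,a;n,m) fallfac(d,a;x,m) checked")
```

For $[d,a]\, =\, [1,0]$ this reproduces the classical unsigned {\sl Lah} triangle.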
\par\bigskip\noindent
The {\sl Meixner} type recurrence for the row polynomials (see \Eq{85}) is
\begin{equation}
\frac{{\bf d}_x}{1\, +\ d\,{\bf d}_x}\, PL(d,a;n,x)\, =\, n\, PL(d,a;n-1,x)\,,\ \ n\, \in \, \mathbb N\,,
\end{equation}
and input $PL(d,a;0,x)\, =\, 1$. The series terminates and this becomes
\begin{equation}
\sum_{k=0}^{n-1}\,(-1)^k\, d^k\,{\bf d}_x^{k+1}\, PL(d,a;n,x)\, =\, n\, PL(d,a;n-1,x)\, .
\end{equation}
The general {\sl Sheffer} polynomial recurrence (see \Eq{87} for the rewritten {\sl Roman} corollary) is
\begin{equation}
PL(d,a;n,x)\, =\, \left((2\,a \, +\ x)\,{\bf 1} \, +\ 2\,d\,(a\, +\ x)\, {\bf d}_x \, +\ d^2\,x\,{\bf d}_x^2\right)\, PL(d,a;n-1,x)\,, \ \ n\, \in \, \mathbb N,\,
\end{equation}
and input $PL(d,a;0,x)\, =\, 1$.\par\bigskip\noindent
The inverse matrix of ${\bf L}[d,a]$ is also of interest:\par\smallskip\noindent
\begin{equation}
fallfac(d,a;x,n)\, =\, \sum_{m=0}^n\,L^{-1}(d,a;n,m)\,risefac(d,a;x,m)\ .
\end{equation}
From \Eq{127} one finds the {\sl Sheffer} structure
\begin{equation}
{\bf L}^{-1}[d,a]\, =\, \left(\frac{1}{(1 + d\,t)^{\frac{2\,a}{d}}} ,\, \frac{t}{1\, +\ d\,t}\right)\, =\, (gL(-t),\, -fL(-t))\, ,
\end{equation}
where $gL$ and $fL$ are taken from \Eq{140}.\par\smallskip\noindent
This means, by looking at the column {\it e.g.f.}s of {\sl Sheffer} matrices, that the inverse matrix is just obtained by properly signing the ${\bf L}[d,a]$ matrix entries.
\begin{equation}
L^{-1}(d,a;n,\, m)\, =\, (-1)^{n-m}\,L(d,a;n,\,m)\,, \ \ n\sspgeq m\sspgeq 0\,.
\end{equation}
The explicit form of $gL^{-1}[d,a]$ and $fL^{-1}[d,a]$ shows that one has to replace in the ${\bf L}[d,a]$ recurrence formulae $a\,\to\, -a$ and $d\,\to\, -d$.\par\smallskip\noindent
The $a-$ and $z-$sequences are then $aL^{-1}(d)\, =\, \{1,\,-d,\,{\rm repeat}(0)\}$ and the {\it e.g.f.\ } for $zL^{-1}$ is \dstyle{zL^{-1}(d,a;y) \, =\, \frac{1\, -\ d\,y}{y}\,\left(1\, -\ (1\, -\ d\,y)^{-\frac{2\,a}{d}}\right)\, =\, -z(d,a;-y)} (with $z(d,a;y)$ from \Eq{142}). This gives a three term recurrence for $L^{-1}(d,a;n,m)$ for $n\, > \, m\, > \, 1$ with the column $m=0$ as input.\par\smallskip\noindent
The recurrence derived like above from the column {\it e.g.f.\ } is just \Eq{144} with replacements $a\,\to\, -a$ and $d\,\to\, -d$
\begin{eqnarray}
L^{-1}(d,a;n,m)\, =\, L^{-1}(d,a;n-1,m-1) &\, -\ & 2\,(a\, +\ d\,(n-1))\, L^{-1}(d,a;n-1,m)\nonumber \\
&\, -\ & d\,(n-1)\,(2\,a\, +\ d\,(n-2))\, L^{-1}(d,a;n-2,m)\,,
\end{eqnarray}
with inputs $L^{-1}(d,a;0,0)\, =\, 1$, $L^{-1}(d,a;n,-1)\, =\, 0$, $L^{-1}(d,a;-1,m)\, =\, 0$, and $L^{-1}(d,a;n,m)\, =\, 0$ if $n\, < \, m$.\par\smallskip\noindent
The {\sl Meixner} type recurrence for the row polynomials of ${\bf L}^{-1}$ is like the one in \Eq{147} with the replacement $d\,\to\, -d$
\begin{equation}
\sum_{k=0}^{n-1}\, d^k\,{\bf d}_x^{k+1}\, PL^{-1}(d,a;n,x)\, =\, n\, PL^{-1}(d,a;n-1,x)\, ,
\end{equation}
with input $PL^{-1}(d,a;0,x)\, =\, 1$.\par\smallskip\noindent
The general {\sl Sheffer} recurrence is like \Eq{148} with the replacements $a\,\to\, -a$ and $d \,\to\, -d$.
\begin{equation}
PL^{-1}(d,a;n,x)\, =\, \left((x\, -\ 2\,a)\,{\bf 1} \, -\ 2\,d\,(x\, -\ a)\, {\bf d}_x \, +\ d^2\,x\,{\bf d}_x^2\right)\, PL^{-1}(d,a;n-1,x)\,, \ \ n\, \in \, \mathbb N,\,
\end{equation}
and input $PL^{-1}(d,a;0,x)\, =\, 1$.\par\bigskip\noindent
{\bf 5. Proof of eq.$\bf(64)$}\par\smallskip\noindent
It is well known ({\sl Vieta}'s theorem) that the coefficients of a monic polynomial $P(n,\, x)\, =\, \sum_{m=0}^n\, p_m\, x^m$ of degree $n$ are given in terms of the $n$ zeros $x_j,\, j\, =\, 1,\,...,\,n$, of $P$ by $p_m\, =\, (-1)^{n-m}\,\sigma^{(n)}_{n-m}(x_1,x_2,...,x_n)\, =\, \sigma^{(n)}_{n-m}(-x_1,\,-x_2,\,...,\,-x_n)$ with the elementary symmetric functions $\sigma^{(n)}_{n-m}$ of degree $n-m$, and $\sigma^{(n)}_0\, =\, 1$.
For the $risefac(d,a;x,n)$ polynomials \Eq{63} the zeros are $x_j\, =\, -(a\, +\ (j-1)\,d) \, =\, -a_{j-1},\, \, j\, =\, 1,\,...,\,n$ proving \Eq{64} for the coefficients $\widehat{S1p}(d,a;n,m)$.\par\bigskip\noindent
{\bf 6. Proof of eq. ${\bf (65)}$}\par\smallskip\noindent
The second version is proved by using \dstyle{ risefac(d,a;x,n)\, =\, d^n\,\left(\frac{x\, +\ a}{d}\right)^{\overline{n}}} (see \Eq{63}) and the known result ({\it e.g.},\, \cite{GKP}, p. 263, eq. (6.11)) for ${\bf S1p}$ as transition matrix
\begin{equation}
x^{\overline{n}}\, =\, \sum_{k=0}^n\,S1p(n,\,k)\,x^k,\ \ n\, \in \, \mathbb N_0\, .
\end{equation}
Now, with the binomial formula,
\begin{eqnarray}
risefac(d,a;x,n)&\, =\,& d^n\,\sum_{k=0}^n\,S1p(n,\,k) \left(\frac{x\, +\ a}{d}\right)^k\, =\, \sum_{k=0}^n\,d^{n-k}\,S1p(n,\,k)\, \sum_{m=0}^k\,{\binomial{k}{m}}\,a^{k-m}\, x^m \nonumber \\
&\, =\,& \sum_{m=0}^n\,x^m\,\sum_{k=m}^n\,{\binomial{k}{m}}\,S1p(n,\,k)\,a^{k-m}\,d^{n-k}\, ,
\end{eqnarray}
and the coefficient of $x^m$ is $\widehat{S1p}(d,a;n,m)$ given in the second version of \Eq{65} (with summation index $k\,\to\, j$). The first version of \Eq{65} is obtained by changing $j\,\to\, n\, -\ j^{\prime}$, using then again $j$ as summation index.\par\smallskip\noindent
An alternative proof can be given using the recurrence $risefac(d,a;x,n)\, =\, (x\, +\ (n-1)\,d\, +\ a)\,risefac(d,a;x,n-1)$ with input $risefac(d,a;x,0)\, =\, 1$. Then the {\sl Pascal} recurrence (see \seqnum{A007318}) and the known ${\bf S1p}$ recurrence (given by \Eq{61} for $[d,a]\, =\, [1,0]$) are used.\par\bigskip\noindent
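The second version of \Eq{65} also lends itself to a direct numerical check, with $S1p(n,\,k)$ computed from the classical recurrence (the $[d,a]\, =\, [1,0]$ case of \Eq{61}) and the target coefficients read off from the product form of $risefac$. This sketch is illustrative only:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def s1p(n, k):
    # unsigned Stirling numbers of the first kind, [d,a] = [1,0] case of eq. (61)
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return s1p(n - 1, k - 1) + (n - 1) * s1p(n - 1, k)

def risefac_coeffs(d, a, n):
    # coefficients, lowest degree first, of prod_{j=0}^{n-1} (x + a + d*j)
    poly = [1]
    for j in range(n):
        c = a + d * j
        new = [0] * (len(poly) + 1)
        for m, p in enumerate(poly):
            new[m] += c * p
            new[m + 1] += p
        poly = new
    return poly

for d, a in [(2, 1), (3, 2)]:
    for n in range(7):
        coeffs = risefac_coeffs(d, a, n)
        for m in range(n + 1):
            val = sum(comb(k, m) * a**(k - m) * d**(n - k) * s1p(n, k)
                      for k in range(m, n + 1))
            assert val == coeffs[m]
print("second version of eq. (65) matches the risefac coefficients")
```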
{\bf 7. Meixner type recurrence, and proof of eq. $\bf (66)$}\par\smallskip\noindent
The {\sl Meixner} type recurrence for the monic row polynomials $P\widehat{S1p}(d,a;n,x)$ uses the compositional inverse of the {\sl Sheffer} $f$ function which is \dstyle{\frac{1}{d}\,(1\, -\ e^{-d\,y})} (see \Eq{85}). Therefore, $f^{[-1]}({\bf d}_x)\, P\widehat{S1p}(d,a;n,x)\, =\, n\, P\widehat{S1p}(d,a;n-1,x)$ becomes
\begin{equation}
\sum_{k=1}^n\, (-1)^{k-1}\,\frac{d^{k-1}}{k!}\,({\bf d}_x)^k\, P\widehat{S1p}(d,a;n,x)\, =\, n\, P\widehat{S1p}(d,a;n-1,x)\,,
\end{equation}
with input $P\widehat{S1p}(d,a;0,x)\, =\, 1$.\par\smallskip\noindent
The general {\sl Sheffer} recurrence (see {\sl Lemma} $5$, \Eq{87}, also for the {\sl Roman} reference) uses, with the {\sl Sheffer} $g$ and the given $f^{[-1]}$ function,
\begin{equation}
g(f^{[-1]}(t))\, =\, e^{a\, t},\ \ \frac{d}{dt}\,g(f^{[-1]}(t))\, =\, a\, e^{a\,t},\ \ \frac{d}{dt}\,f^{[-1]}(t)\, =\, e^{-d\,t}\, ,
\end{equation}
leading to
\begin{equation}
P\widehat{S1p}(d,a;n,x)\, =\, (x\, +\ a)\,e^{d\,{\bf d_x}}\,P\widehat{S1p}(d,a;n-1,x)\, =\, (x\, +\ a)\, P\widehat{S1p}(d,a;n-1,x+d),\,
\end{equation}
by {\sl Taylor}'s theorem, and this proves \Eq{66}.\par\bigskip\noindent
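Iterating \Eq{66} and comparing with the product definition \Eq{63} gives a quick consistency check (an illustrative sketch, not part of the paper):

```python
def P(d, a, n, x):
    # eq. (66): P(n,x) = (x + a) * P(n-1, x + d), with input P(0,x) = 1
    return 1 if n == 0 else (x + a) * P(d, a, n - 1, x + d)

def risefac(d, a, x, n):
    # prod_{j=0}^{n-1} (x + a + d*j), eq. (63)
    r = 1
    for j in range(n):
        r *= x + a + d * j
    return r

for d, a in [(1, 0), (2, 1), (3, 2)]:
    for n in range(7):
        for x in range(-4, 5):   # 9 sample points, enough for degree <= 6
            assert P(d, a, n, x) == risefac(d, a, x, n)
print("iterating eq. (66) reproduces risefac(d,a;x,n)")
```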
{\bf Proof of eq.\,$\bf (67)$}\par\smallskip\noindent
This is covered by {\it Corollary 1} (after \Eq{82}).\par\bigskip\noindent
{\bf Proof of eq. ${\bf(69)}$}\par\smallskip\noindent
The direct way uses the inversion of \Eq{18}.\par\smallskip\noindent
{\bf Lemma 10:}\par\smallskip\noindent
\begin{equation}
S2(n,\,m) \, =\, \left(\frac{-a}{d}\right)^n\,\sum_{k=0}^n\,(-1)^k\,{\binomial{n}{k}}\,a^{-k}\,S2(d,a;k,m)\,, \ \ {\rm for}\ \ n\sspgeq m\sspgeq 0\ .
\end{equation}
{\bf Proof}: From the exponential convolution \Eq{18} one has for the column {\it e.g.f.\ } \dstyle{ES2Col(d,a;t,m)\, =\, e^{a\,t}\, ES2Col(d\,t,m)} (see \Eq{13}). This means (for $d\, \in \, \mathbb N $) that
\begin{equation}
ES2Col(t,m)\, =\, e^{-\frac{a}{d}\,t}\, ES2Col\left(d,a;\frac{t}{d},m\right)\, ,
\end{equation}
which gives for $S2(n,\,m)$ the exponential convolution leading to the assertion.\hskip 5.5cm $\square$
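{\it Lemma 10} can be verified in exact rational arithmetic, assuming only the explicit alternating sums for $S2(n,\,m)$ and $S2(d,a;n,m)$ that follow from their {\it e.g.f.}s (note that $a\, \neq \, 0$ is needed because of the $a^{-k}$ factor). The sketch below is illustrative only:

```python
from fractions import Fraction as F
from math import comb, factorial

def stirling2(n, m):
    # classical Stirling numbers of the second kind, explicit alternating sum
    return sum((-1)**(m - j) * comb(m, j) * j**n for j in range(m + 1)) // factorial(m)

def genS2(d, a, n, m):
    # S2(d,a;n,m) from the column e.g.f. exp(a*t)*(exp(d*t) - 1)^m / m!
    return F(sum((-1)**(m - j) * comb(m, j) * (a + d*j)**n for j in range(m + 1)),
             factorial(m))

d, a = 3, 2                      # the Lemma needs a != 0 because of a^(-k)
for n in range(6):
    for m in range(n + 1):
        rhs = F(-a, d)**n * sum((-1)**k * comb(n, k) * genS2(d, a, k, m) * F(1, a**k)
                                for k in range(n + 1))
        assert rhs == stirling2(n, m)
print("Lemma 10 checked for [d,a] = [3,2], n < 6")
```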
\par\smallskip\noindent
Then the proof of \Eq{69} starts by replacing $S1p(n,\,j)$, in the second version of \Eq{65}, by {\sl Schl\"omilch}'s formula, \Eq{68}, with $S2(n-j+k,\,k)$ from the {\it Lemma}, \Eq{160}:
\begin{eqnarray}
\widehat{S1p}(d,a;n,m)&\, =\,& \sum_{j=m}^n\,{\binomial{j}{m}}\,a^{j-m}\,d^{n-j}\,(-1)^{n-j}\,\sum_{k=0}^{n-j}\,(-1)^k\,{\binomial{n+k-1}{j-1}}\,{\binomial{2\,n-j}{n-j-k}}\,* \nonumber \\
&& *\, \left(\frac{-a}{d}\right)^{n-j+k}\ \sum_{l=0}^{n-j+k}\,(-1)^l\,{\binomial{n+k-j}{l}}\,a^{-l}\,d^k\,\widehat{S2}(d,a;l,k)\, ,
\end{eqnarray}
where ${\bf S2}[d,a]$ has been replaced by ${\bf \widehat{S2}}[d,a]$ (see the line before \Eq{15}). Collecting $a$ and $d$ powers and the signs leads to \Eq{69}.\par\smallskip\noindent
Another, more complicated, proof follows the one of the usual {\sl Schl\"omilch} formula given in \cite{Charalambides}, p. 290. This uses the {\sl Lagrange} inversion theorem for powers of a (formal) power series.\par\smallskip\noindent
{\bf Lemma 11}: {\bf Lagrange theorem and inversion} \cite{Fichtenholz}, p. 523, eq.\,(29), \cite{WhittakerWatson}, p. 133.\par\smallskip\noindent
{\bf a)} With ${\tilde f}(x) = f(y(x))$, $y(x)\, =\, a\, +\ x\,\varphi(y)$ (here as formal power series)
\begin{equation}
{\tilde f}(x)\, =\, f(a)\, +\ \sum_{n=1}^{\infty}\,\frac{x^n}{n!} \frac{d^{n-1}\ }{da^{n-1}}\left[\varphi^n(a)\,f^{\prime}(a)\right]\, .
\end{equation}
{\bf b)} With $a\, =\, 0$, $y\, =\, x\,\psi(x)$, and $f(y)\, =\, y^k$, $k\, \in \, \mathbb N_0$ and $x(y)\, =\, y^{[-1]}(y)$ (compositional inverse)\par\smallskip\noindent
\begin{eqnarray}
x^k(y)&\, =\,& \delta_{k,0}\, +\ k\,\sum_{n=1}^{\infty}\, \frac{y^n}{n!}\,\left.\frac{d^{n-1}\ }{dt^{n-1}}\left[\left(\frac{1}{\psi(t)}\right)^n\,t^{k-1}\right]\right|_{t=0}\nonumber\\
&\, =\,& \delta_{k,0} \, +\ \sum_{n=1}^{\infty}\,\frac{y^n}{n!}\,\sum_{j=0}^{n-1}\, {\binomial{n-1}{j}}\,k^{\underline{n-j}}\,\left.\left[\frac{d^j\ }{dt^j}\left(\frac{1}{\psi(t)}\right)^n\right]\,t^{k-n+j}\right|_{t=0}\nonumber \\
&\, =\,& \delta_{k,0} \, +\ k!\,\sum_{m=k}^{\infty}\,\frac{y^m}{m!}\,{\binomial{m-1}{m-k}}\,\left.\left[\frac{d^{m-k}\ }{dt^{m-k}}\left(\frac{1}{\psi(t)}\right)^m\right]\right|_{t=0}\, .
\end{eqnarray}
{\bf Proof}: Part {\bf a)} is the standard {\sl Lagrange} theorem.\par\smallskip\noindent
The first equation of part {\bf b)} follows by exchanging the r\^ole of $y$ and $x$, using \dstyle{\varphi(x)\, =\, \frac{1}{\psi(x)}} and $0^k\, =\, \delta_{k,0}$. (See \cite{Fichtenholz}, pp. 524-525 for the case $k=1$). The second equation uses the {\sl Leibniz} rule. Then only $j\, =\, n-k\sspgeq 0$ survives after evaluation at $t=0$, and in the last formula the summation index has been changed for later purposes from $n$ to $m$.\par\bigskip\noindent
{\bf Corollary 2}:\par\smallskip\noindent
\begin{equation}
\left.\frac{d^n\ }{dy^n}\left[\frac{1}{k!}\,(y^{[-1]}(y))^k\right]\right|_{y=0}\, =\, {\binomial{n-1}{n-k}}\, \left.\frac{d^{n-k}\ }{dt^{n-k}}\left[\frac{1}{\psi^n(t)}\right]\right|_{t=0}\,, \ \ {\rm for}\ \ n\sspgeq k\, \in \, \mathbb N .
\end{equation}
The $\delta$ term now disappeared for $k\sspgeq 1$.\par\noindent
This coincides with the inversion formula of {\sl Lagrange} given in \cite{Charalambides}, Theorem 11.11, p. 435, used in the proof on p. 290. \par\bigskip\noindent
Another formula is needed to convert later the negative powers of $\psi$ in the {\sl Corollary} into positive ones. This is given in \cite{Charalambides}, as {\sl Remark 11.5}, p. 432, also used in the proof on p. 290.
\begin{equation}
\left.\frac{d^m\ }{dt^m}\left[(h(t))^s\right]\right|_{t=0}\, =\, \sum_{r=0}^m\,{\binomial{s}{r}}\,{\binomial{m-s}{m-r}}\,\left.\frac{d^m\ }{dt^m}\left[(h(t))^r\right]\right|_{t=0}\,,\ \ {\rm for}\ h(0)\, =\, 1\,, \ m \, \in \, \mathbb N_0,\ s\, \in \, \mathbb R\,.
\end{equation}
Now we start with the derivation of a generalized {\sl Schl\"omilch} formula (we use here the column index $k$).
\begin{equation}
\widehat{S1p}(d,a;n,k)\, =\, \left.\frac{d^n\ }{dy^n}\left[ (1\, -\ d\, y)^{-\frac{a}{d}}\,\frac{1}{k!}\,\left(-\frac{1}{d}\,\log(1\, -\ d\,y)\right)^k \right]\right|_{y=0}\,\ {\rm for}\ \ 0\, \leq \, k\, \leq \, n\, .
\end{equation}
The result for $k\, =\, 0$ is known from \Eq{62} to be $risefac(d,a;x=0,n)$. The {\sl Leibniz} rule is applied (because there is no closed formula for the $n$-th derivative of the product in the square brackets, with the second factor written as $k-$th power). The derivatives of the first factor are known (see the $k\, =\, 0$ result) as
\begin{equation}
\left.\frac{d^{n-m}\ }{dy^{n-m}}\left[ (1\, -\ d\, y)^{-\frac{a}{d}}\right]\right|_{y=0}\, =\, risefac(d,a;x=0,n-m)\, .
\end{equation}
For the second factor the {\sl Corollary} is applied with $n\,\to\, m$, and the known compositional inverse of \dstyle{y^{[-1]}(d;y)\, =\, -\frac{1}{d}\,\log(1\, -\ d\,y)} {\it viz}\, \dstyle{y(d;t)\, =\, t\,\psi(d;t)\, =\, \frac{1}{d}\,(1\, -\ e^{-d\,t})}, is used.
\begin{equation}
\left. \frac{d^{m}\ }{dy^{m}}\left[\frac{1}{k!}\,\left(-\frac{1}{d}\,\log(1\, -\ d\,y)\right)^k\right]\right|_{y=0} \, =\, {\binomial{m-1}{m-k}}\,\left. \frac{d^{m-k}\ }{dt^{m-k}}\left[\left(\frac{1\, -\ e^{-d\,t}}{d\,t}\right)^{-m}\right]\right|_{t=0}\, .
\end{equation}
The negative power on the {\it r.h.s.\, } is now converted with the help of \Eq{166} with $s\,\to\, -m$ and $m\,\to\, m-k$. Then the binomial with negative upper entry is rewritten as \dstyle{{\binomial{-m}{r}}\, =\, (-1)^r\,{\binomial{r+m-1}{r}}} (see \Eq{95}), and we use the abbreviation \dstyle{\psi(d;t)\, =\, \frac{1\, -\ e^{-d\,t}}{d\,t}} from above.
\begin{equation}
\left. \frac{d^{m}\ }{dy^{m}}\left[\frac{1}{k!}\,\left(-\frac{1}{d}\,\log(1\, -\ d\,y)\right)^k\right]\right|_{y=0} \, =\, {\binomial{m-1}{m-k}}\,\sum_{r=0}^{m-k}\,(-1)^r\,{\binomial{r+m-1}{r}}\,{\binomial{2\,m-k}{m-k-r}}\, \left. \frac{d^{m-k}\ }{dt^{m-k}}\left(\psi(d;t)^r\right)\right|_{t=0}\, .
\end{equation}
With the {\it e.g.f.\ } \dstyle{E{\widehat{S2}}Col(d,a;x,r)\, =\, e^{a\,x}\,\frac{\left( \frac{1}{d}\,(e^{d\,x}\, -\ 1)\right)^r}{r!}} for column $r$ of $\bf{\widehat{S2}}[d,a]$ from its {\sl Sheffer} structure one obtains after multiplication with $x^{-r}$ and a sign flip $x\, =\, -t$
\begin{equation}
\psi(d;t)^r\, =\, \left(\frac{1\, -\ e^{-d\,t}}{d\,t}\right)^r\, =\, e^{a\,t}\,\sum_{n=r}^{\infty}\,(-1)^{n-r}\,\widehat{S2}(d,a;n,r)\,r!\frac{t^{n-r}}{n!}\, =:\, e^{a\,t}\,A(d,a;t,r).
\end{equation}
For the $(m-k)$-th derivative evaluated at $t\, =\, 0$ the {\sl Leibniz} rule is again applied with a summation index $p$. The exponential factor leads to $a^{m-k-p}$. The $p$-th derivative w.r.t. $t$ of $A(d,a;t,r)$ is evaluated at $t=0$ after the index shift $n-r\, =\, s$ in $A$. This leads to the collapse of the $s-$sum due to the $t\, =\, 0$ evaluation, whence $s\, =\, p$, and $p^{\underline p}\, =\, p!$. The result is
\begin{equation}
\left.\frac{d^p\ }{dt^p}\, A(d,a;t,r) \right|_{t=0}\ =\ \frac{r!\,p!}{(p+r)!}\, (-1)^p\,\widehat{S2}(d,a;p+r,r)\, .
\end{equation}
Thus
\begin{equation}
\left. \frac{d^{m-k}\ }{dt^{m-k}}\left(\psi(d;t)^r\right)\right|_{t=0} \, =\, \sum_{p=0}^{m-k}\,{\binomial{m-k}{p}}\,a^{m-k-p}\,(-1)^p\,\frac{1}{{\binomial{p+r}{p}}}\,\widehat{S2}(d,a;p+r,r) \,,\ \ {\rm for}\ \ m\sspgeq k\,.
\end{equation}
This leads, with the abbreviation $risefac(d,a;x=0,n-m) = (d,a)^{\overline{n-m}}$, to
\begin{eqnarray}
\widehat{S1p}(d,a;n,k)&\, =\,& \sum_{m=k}^n\,{\binomial{n}{m}}\,(d,a)^{\overline{n-m}}\,{\binomial{m-1}{m-k}}\sum_{r=0}^{m-k}\, (-1)^r\, {\binomial{r+m-1}{r}}\,{\binomial{2\,m-k}{m+r}}\, * \nonumber \\
&& * \sum_{p=0}^{m-k}\,(-1)^p\,\frac{{\binomial{m-k}{p}}}{{\binomial{p+r}{r}}}\, a^{m-k-p}\, \widehat{S2}(d,a;p+r,r)\,,\ \ {\rm for}\ n\sspgeq k\sspgeq 1\,.
\end{eqnarray}
A new summation index $m-k\, =\, m'$ is used, and the sum over the triangular array, with rows indexed by $m$ and columns by $r$, is reordered by summing first the columns $r\, =\, 0,\,...,\,n-k$ and then the rows $m\, =\, r,\,...,\,n-k$. The binomials in these two sums are rewritten and the final result is (using column index $m$ instead of $k$)
\begin{eqnarray}
{\fbox{\color{bleudefrance}$\widehat{S1p}(d,a;n,m)$}}&\, =\,& \frac{n!}{(m-1)!} \sum_{r=0}^{n-m}\,\frac{(-1)^r}{r!}\, \sum_{k=r}^{n-m}\, (d,a)^{\overline{n-k-m}}\, {\binomial{2\,k+m}{k+m}}\, \frac{1}{k+m+r}\,\frac{1}{(n-m-k)!}\, \frac{1}{(k-r)!}\, * \nonumber \\
&& * \sum_{p=0}^k\,(-1)^p\,\frac{{\binomial{k}{p}}}{{\binomial{p+r}{r}}}\, a^{k-p}\, \widehat{S2}(d,a;p+r,r)\,\ {\rm for}\ n\sspgeq m\sspgeq 1,\,
\end{eqnarray}
and $\widehat{S1p}(d,a;n,0)\, =\, risefac(d,a;x=0,n)$.
\par\bigskip\noindent
Some of the factors could be combined into further binomials, but no essential simplification seems to be possible.\par\smallskip\noindent
Both generalizations of the {\sl Schl\"omilch} formula involve three sums, but the direct version given in \Eq{69} looks simpler than the second version \Eq{175}.
\par\bigskip\noindent
The statements in {\it section D} are obvious.
\par\bigskip\noindent
\section{Belle II Analysis Software Framework}
\label{sec:basf2}
\subsection{Code Structure}
\input{code}
\subsection{Basf2 Development Infrastructure and Procedures}
\input{development}
\subsection{Modules, Parameters, and Paths}
\input{modules}
\subsection{Data Store and I/O}
\input{datastore}
\subsection{Event Data Model}
\input{event_data_model}
\section{Central Services}
\subsection{Python Interface and Jupyter Notebooks}
\input{python_interface}
\subsection{Parallel Processing}
\input{parallel_processing}
\subsection{Random Numbers}
\input{random}
\subsection{Conditions Data}
\input{conditions}
\subsection{Geometry and Magnetic Field}
\input{geometry}
\section{Conclusions}
Ten years of development work with emphasis on software quality have culminated in a reliable software framework for the Belle II collaboration that is easy to use and extend with new or improved algorithms.
It fulfills the requirements for data taking, simulation, reconstruction, and analysis.
The success is illustrated by the fact that first physics results were presented to the public two weeks after collision data taking had started in Spring 2018.
While the core Belle II software is mature and robust, it must continue to accommodate the evolution of technology and requirements.
Therefore, it is crucial that expertise is preserved and carried forward to new developers, as for all other components of Belle II.
\section*{Acknowledgements}
We thank Leo Piilonen, Jan Strube, Hadrien Grasland, and Stefano Spataro for helpful comments and discussion.
We thank the KEK and DESY computing groups for valuable support.
We rely on many open-source software packages and thank the communities providing them.
We acknowledge support from BMBF and EXC153 (Germany) and MEXT (Japan).
\bibliographystyle{spphys}
\subsubsection{Basf2}
\label{sec:basf2code}
The Belle II-specific code is partitioned into about 40 packages, such as the base-level framework, one package for each detector component, the track reconstruction code, and the post-reconstruction analysis tools.
Each package is managed by one or two librarians.
The code is written in C++, with the header and source files residing in \texttt{include} and \texttt{src} subdirectories, respectively.
By default, one shared library is created per package and is installed in a top-level \texttt{lib} directory that is included in the user's library path.
The build system treats the package's contents in pre-defined subdirectories as follows:
\begin{itemize}
\item \texttt{modules}: The code is compiled into a shared library and installed in a top-level \texttt{module} directory so that it can be dynamically loaded by basf2.
\item \texttt{tools}: C++ code is compiled into an executable and installed in a top-level \texttt{bin} directory that is included in the user's path. Executable scripts, usually written in Python, are symlinked to this directory.
\item \texttt{dataobjects}: These classes define the organization of the data that can be stored in output files. The code is linked in a shared library with the \texttt{\_data\-objects} suffix.
\item \texttt{scripts}: Python scripts are installed in a directory that is included in the Python path.
\item \texttt{data}: All files are symlinked to a top-level \texttt{data} folder.
\item \texttt{tests}: Unit and script tests (see Section~\ref{sec:development}).
\item \texttt{validation}: Scripts and reference histograms for validation plots (see Section~\ref{sec:development}).
\item \texttt{examples}: Example scripts that illustrate features of the package.
\end{itemize}
Users of basf2 usually work with centrally installed versions of basf2.
At many sites, they are provided on CVMFS~\cite{cvmfs}.
Users may also install pre-compiled binaries at a central location on their local systems with the \texttt{b2install-release} tool.
If no pre-compiled version is available for their operating system, the tool compiles the requested version from source.
\subsubsection{Externals}
\label{sec:externals}
We require a few basic packages to be installed on a system, like a compiler, make, wget, tar, and git.
The tool \texttt{b2install-prepare} checks whether these prerequisites are fulfilled and installs, if desired, the missing packages.
All other third-party code on which we rely is bundled in the externals installation.
It includes basic tools like GCC, Python 3, and bzip2 to avoid requiring a system-wide installation of specific versions at all sites, as well as HEP specific software like ROOT~\cite{root}, Geant4~\cite{geant4}, and EvtGen~\cite{evtgen}.
Some packages, like LLVM or Valgrind, are optional and not included in the compilation of the externals by default.
The number of external products has grown over time to about 60, supplemented with 90 Python packages.
The instructions and scripts to build the externals are stored in a git repository.
We use a makefile with specific commands for the download, compilation, and installation of each of the external packages.
Copies of the upstream source packages are kept on a Belle II web server, providing redundancy if the original source is temporarily unavailable or disappears entirely.
The integrity of the downloaded files is checked using their SHA-256 digests.
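The digest check itself is standard and can be sketched with Python's \texttt{hashlib}; the function names and the example payload here are illustrative, not the actual externals build scripts:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a downloaded archive's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, expected_digest: str) -> bool:
    """Compare the computed digest with the one pinned in the build recipe."""
    return sha256_digest(data) == expected_digest

# Hypothetical example: in practice the expected digest is a pinned value.
payload = b"example tarball contents"
expected = sha256_digest(payload)
assert verify_download(payload, expected)
assert not verify_download(b"tampered contents", expected)
```

A mismatch would abort the build before compilation starts, so a corrupted or tampered download is caught early.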
The libraries, executables, and include files of all external packages are collected in the common directories \texttt{lib}, \texttt{bin}, and \texttt{include}, respectively, so that each of them can be referenced with a single path.
For the external software that we might want to include in debugging efforts, such as ROOT or Geant4, we build a version with debug information to supplement the optimized version.
The compilation of the externals takes multiple hours and is not very convenient for users.
Moreover, some users experience problems because of specific configurations of their systems.
These problems and the related support effort are avoided by providing pre-compiled binary versions.
We use Docker to compile the externals on several supported systems: Scientific Linux 6, Red Hat Enterprise Linux 7, Ubuntu 14.04, and the Ubuntu versions from 16.04 to 18.04.
The \texttt{b2install-externals} tool conveniently downloads and unpacks the selected version of the pre-built externals.
Because the absolute path of an externals installation is arbitrary, we have invested significant effort to make the externals location-independent.
First studies to move from the custom makefile to Spack~\cite{spack} have been performed, with the aim of profiting from community solutions for the installation of typical HEP software stacks, but relocatability of the build products remains an issue.
\subsubsection{Tools}
\label{sec:tools}
The tools are a collection of shell and Python scripts for the installation and setup of the externals and basf2.
The tools themselves are set up by sourcing the script \texttt{b2setup}.
This script identifies the type of shell and then sources the corresponding sh- or csh-type setup shell script.
This script, in turn, adds the tools directory to the \texttt{PATH} and \texttt{PYTHONPATH} environment variables, sets Belle II specific environment variables, defines functions for the setup or configuration of further software components, and checks whether a newer version of the tools is available.
A pre-defined set of directories is searched for files containing site-specific configurations.
The Belle II-specific environment variables have the prefix \texttt{BELLE2} and contain information like repository locations and access methods, software installation paths, and software configuration options.
Installation of externals and basf2 releases is handled by the shell scripts \texttt{b2install-externals} and \texttt{b2install-release}, respectively.
Usually, they download and unpack the version-specific tarball of pre-compiled binaries for the given operating system.
If no binary is available, the source code is checked out and compiled.
Each version of the externals and basf2 releases is installed in a separate directory named after the version.
For the compilation of the externals, we rely on the presence of a few basic tools, like \texttt{make} or \texttt{tar}, and development libraries with header files.
Our tools contain a script that checks that these dependencies are fulfilled and, if necessary, installs the missing ones.
The command \texttt{b2setup} sets up the environment for a version-specified basf2 release.
It automatically sets up the externals version that is tied to this release, identified by the content of the \texttt{.externals} file in the release directory.
An externals version can be set up independently of a basf2 release with the \texttt{b2setup-externals} command.
The version-dependent setup of the externals is managed by the script \texttt{externals.py} in the externals directory.
Externals and basf2 releases can be compiled in optimized or debug mode using GCC.
In addition, basf2 supports the compilation with the Clang or Intel compilers.
These options can be selected with the \texttt{b2code-option} and \texttt{b2code-option-externals} commands.
A distinct subdirectory is used for the option's libraries and executables.
The commands that change the environment of the current shell are implemented as functions for sh-type shells and as aliases for csh-type shells.
The tools also support the setup of an environment for the development of basf2 code.
The \texttt{b2code-create} command clones the basf2 git repository and checks out the master branch.
The environment is set up by executing the \texttt{b2setup} command without arguments in the working directory.
If a developer wants to modify one package and take the rest from a centrally installed release, the \texttt{b2code-create} command can be used with the version of the selected release as an additional argument that is stored in the file \texttt{.release}.
The sparse checkout feature of git is used to get a working directory without checked-out code.
Packages can then be checked out individually with the \texttt{b2code-package-add} command.
The \texttt{b2setup} command sets up the environment for the local working directory and the centrally installed release.
Further tools for the support of the development work are described in Section~\ref{sec:development}.
To make it easier for users to set up an environment for the development of post-reconstruction analysis code and to encourage them to store it in a git repository, the tools provide the \texttt{b2analysis-create} command.
This requires a basf2 release version as one of the arguments and creates a working directory attached to a git repository on a central Belle II server.
The basf2 release version is stored in a \texttt{.analysis} file and used by the \texttt{b2setup} command for the setup of the environment.
The \texttt{b2analysis-get} command provides a convenient way to get a clone of an existing analysis repository and set up the build system.
The tools are designed to be able to set up different versions of basf2 and externals and thus must be independent of them.
For this reason, all binary code is placed in the externals.
Originally, GCC and Python were embedded in the tools themselves to avoid duplication across multiple externals versions, but this proved difficult to manage during updates.
One of the prime challenges that we overcame in the development of the tools was to cope with the different shell types and various user environment settings.
\subsubsection{Access of Conditions Objects}
By default, the framework assumes that payload contents are serialized
ROOT objects and manages the access to them, but direct access to payload files of any type is possible, too. User
access to conditions objects is provided by two interface classes, one for
single objects called \texttt{DBObjPtr} and one for arrays of objects
called \texttt{DBArray}. These classes reference \texttt{DBEntry} payload objects
in the \texttt{DBStore} global store. Multiple
instances of the interface class point to the same object. It is identified
by a name that is, by default, given by the class name. Access to the
conditions objects is available in C++ and in Python. The class interfaces
are designed to be as close as possible to the interface
for event-level data (see Section~\ref{sec:datastore}),
so that users can use the same concepts for both.
The interface classes always point to the correct payload objects for the current
run; updates are transparent to the user. If the user needs to be aware when the
object changes, they can either manually check for changes, or register a callback
function for notification. Figure~\ref{fig:database-relations}
visualizes the relations among the entities.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{figures/DatabaseRelations.pdf}
\caption{Relations between all entities for the Conditions Database Client.
The user usually only interacts with the \texttt{DBObjPtr} and
\texttt{DBArray} objects and maybe configures the database sources (shown in
blue). Everything else is handled transparently, including the communication
with the CDB (shown in green).}\label{fig:database-relations}
\end{figure*}
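The handle semantics described above -- several interface objects sharing one entry that the framework updates transparently -- can be illustrated with a self-contained Python model. The class names mirror the text, but this is a conceptual sketch, not the basf2 C++ implementation:

```python
class DBStore:
    """Global store mapping payload names to mutable entries."""
    def __init__(self):
        self._entries = {}

    def entry(self, name):
        # All interface objects asking for the same name share one entry.
        return self._entries.setdefault(name, {"payload": None})

    def update(self, name, payload):
        # Called by the framework on a run boundary; users never call this.
        self.entry(name)["payload"] = payload


class DBObjPtr:
    """User-facing handle; dereferences the shared entry on each access."""
    def __init__(self, store, name):
        self._entry = store.entry(name)  # resolved once, as in the C++ class

    @property
    def obj(self):
        return self._entry["payload"]


store = DBStore()
a = DBObjPtr(store, "BeamParameters")
b = DBObjPtr(store, "BeamParameters")  # second handle, same underlying entry
store.update("BeamParameters", {"energy": 10.58})
# Both handles transparently see the run-dependent update.
assert a.obj == b.obj == {"energy": 10.58}
```

The double dereference (handle to entry, entry to current payload) is what makes run-boundary updates invisible to user code.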
The CDB handles payloads at run granularity, but the framework can
transparently handle conditions that change within a run: if the payload is a
ROOT object inheriting from the base class \texttt{IntraRunDependency}, the
framework transparently checks for each event whether an update of the conditions data is required.
Different specializations of \texttt{IntraRunDependency} can be implemented: for
example, changing the conditions depending on event number or time stamp.
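An event-number specialization of this per-event check can be modeled as follows; the class name and interval layout are illustrative assumptions, not the actual \texttt{IntraRunDependency} interface:

```python
class EventNumberDependency:
    """Toy intra-run dependency keyed on event number: each interval
    [first_event, next_first_event) maps to one payload object."""
    def __init__(self, boundaries):
        # boundaries: list of (first_event_number, payload) pairs
        self._boundaries = sorted(boundaries)

    def payload_for(self, event_number):
        # Walk the sorted boundaries and keep the last one that started
        # at or before the requested event.
        current = None
        for first, payload in self._boundaries:
            if event_number >= first:
                current = payload
            else:
                break
        return current

dep = EventNumberDependency([(0, "calib-A"), (5000, "calib-B")])
assert dep.payload_for(10) == "calib-A"
assert dep.payload_for(5000) == "calib-B"
```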
\subsubsection{Creation of Conditions Data}
To facilitate easy creation of new conditions data -- for example, during
calibration -- we provide two payload creation classes, \texttt{DBImportObj} and
\texttt{DBImportArray}. They have an interface very similar to \texttt{DBObjPtr}
and \texttt{DBArray}. Users
instantiate one of the creation classes, add objects to them and commit them to
the configured database with a user-supplied IoV. This includes support for
intra-run dependency. The capability to use a local file-based database allows for
easy preparation and validation of new payloads before they are uploaded to the CDB.
\subsubsection{Management of CDB Content}
To simplify the inspection and management of the CDB contents, we provide the
\texttt{b2conditionsdb} tool that uses the
requests package~\cite{python-requests} for communication with the CDB API. It allows users to list, create and modify
global tags, as well as to inspect their contents. It can be used to download a
global tag for use with the local database backend and to upload a
previously prepared and tested local database configuration to a global tag.
\subsubsection{Data Store}
\label{sec:datastore}
Modules exchange data via the \textit{Data Store} that provides a globally accessible interface to mutable objects or arrays of objects.
Objects (or arrays of objects) are identified by a name that, by default, corresponds to the class name.
By convention, arrays are named by appending an ``s'' to the class name.
Users may choose a different name to allow several distinct objects of the same type to coexist.
Objects in the Data Store can have either permanent or event-level durability.
In the latter case, the framework clears them before the next data event is processed.
Client code can add objects to the Data Store, but not remove them.
Within one event, two distinct arrays of objects in the Data Store can have weighted many-to-many relations between their elements.
For example, a higher-level object might have relations to all lower-level objects that were used to create it.
Each relation carries a real-valued weight that can be used to attach quantitative information such as the fraction a lower-level object contributed to the higher-level one.
The relationship information is stored in a separate object; no direct pointers appear in the related objects.
This allows us to strip parts of the event data, without affecting data integrity: if one side of a relationship is removed, the whole relation is dropped.
The relations are implemented by placing a \texttt{RelationArray} in the Data Store that records the names of the arrays it relates, as well as the indices and weights of the related entries.
As the Data Store permits only appending entries to an array, the indices are preserved.
The name of the relations object is formed by placing ``To'' between the names of the related arrays.
The interface to objects in the Data Store is implemented in the templated classes \texttt{StoreObjPtr} for single objects and \texttt{StoreArray} for arrays of objects, both derived from the common \texttt{StoreAccessorBase} class.
They are constructed with the name identifying the objects; without any argument, the default name is used.
Access to the objects is type-safe and transparent to the event-by-event changes of the Data Store content.
To make the access efficient, the \texttt{StoreAccessorBase} translates the name on first access to a pointer to a \texttt{DataStoreEntry} object in the Data Store.
The \texttt{DataStoreEntry} object is valid for the lifetime of the job and contains a pointer to the currently valid object, which is automatically updated by the Data Store.
Access to an object in the Data Store thus requires an expensive string search only on the first access, and then a quick double dereferencing of a pointer on subsequent accesses.
The usage of relations is simplified by deriving the objects in a Data Store array from \texttt{RelationsObject}.
It provides methods to directly ask an object for its relations to, from, or with (ignoring the direction) other objects.
Non-persistent data members of \texttt{RelationsObject} and helper classes are used to make the relations lookup fast by avoiding regeneration of information that was obtained earlier.
We provide an interface to filter, update or rebuild relations when some elements are removed from the Data Store.
It is possible to copy whole or partial arrays in the Data Store, where new relations between the original and copied arrays are created, and, optionally, the existing relations of the original array are copied.
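The index-based, weighted relations and their naming convention can be sketched in a few lines of Python; this is a conceptual model of the behavior described above, not the basf2 \texttt{RelationArray} class itself:

```python
class RelationArray:
    """Toy model of weighted index-based relations between two arrays."""
    def __init__(self, from_name, to_name):
        self.name = from_name + "To" + to_name   # naming convention
        self._elements = []                      # (from_index, to_index, weight)

    def add(self, from_index, to_index, weight=1.0):
        self._elements.append((from_index, to_index, weight))

    def relations_from(self, from_index):
        return [(t, w) for f, t, w in self._elements if f == from_index]

    def strip_to_side(self, kept_to_indices):
        # If one side of a relation is removed, the whole relation is dropped.
        self._elements = [e for e in self._elements if e[1] in kept_to_indices]

tracks_to_hits = RelationArray("Tracks", "CDCHits")
tracks_to_hits.add(0, 3, weight=0.7)
tracks_to_hits.add(0, 4, weight=0.3)
assert tracks_to_hits.name == "TracksToCDCHits"
tracks_to_hits.strip_to_side({3})          # hit 4 was stripped from the event
assert tracks_to_hits.relations_from(0) == [(3, 0.7)]
```

Because arrays are append-only, the stored indices stay valid for the whole event, which is what makes this index-based scheme safe.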
\subsubsection{I/O}
We use ROOT for persistency.
This implies that all objects in the Data Store must have a valid ROOT dictionary.
The \texttt{RootOutputModule} writes the content of the Data Store with permanent and event durability to a file with two separate \texttt{TTree}s, with a branch for each Data Store entry.
The selection of branches, the file name, and some tree configurations can be specified using module parameters.
The corresponding module for reading ROOT files is the \texttt{RootInputModule}.
The \texttt{RootOutputModule} writes an additional object named \texttt{FileMetaData} to the permanent-durability tree of each output file.
It contains a logical file name, the number of events, information about the covered experiment/run/event range, the steering file content, and information about the file creation.
The file metadata also contains a list of the logical file names of the input files, called parents, if any.
This information is used for the index file feature.
A \texttt{RootInputModule} can be asked to load, in addition to the input file, its ancestors up to a generational level given as a parameter.
A file catalog in XML format, created by the \texttt{RootOutputModule}, is consulted to translate logical to physical file names for the ancestor files.
The unique event identifier is then used to locate and load the desired event.
With the index file feature, one can produce a file containing only \texttt{EventMetaData} objects (see next section) of selected events, and then use this as the input file in a subsequent job to access the selected events in its parents.
File-reading performance is not optimal, however, since the usual structure of \texttt{TTree}s in ROOT files is not designed for sparse event reading.
The index file feature can be used also to add objects to an existing file without copying its full content or to access lower level information of individual events for display or debug purposes.
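The two-step lookup behind the index-file feature -- resolve the parent's logical name through the catalog, then locate the event by its unique identifier -- can be sketched as follows; all file names and the dictionary-based stand-ins for the XML catalog and ROOT files are hypothetical:

```python
# Toy model: an index file entry holds only the event identifier plus the
# logical name of its parent; the file catalog (modeled as a dict) maps
# logical to physical file names.
catalog = {"mdst-0001": "/data/mdst-0001.root"}
parent_files = {"/data/mdst-0001.root": {(7, 1, 42): "event payload"}}

def load_event(index_entry, catalog, parent_files):
    logical_parent, event_id = index_entry
    physical = catalog[logical_parent]       # logical -> physical name
    return parent_files[physical][event_id]  # unique id locates the event

entry = ("mdst-0001", (7, 1, 42))            # (experiment, run, event)
assert load_event(entry, catalog, parent_files) == "event payload"
```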
The Belle II data-acquisition system uses a custom output format with a sequence of serialized ROOT objects to limit the loss of events in case of malfunctions.
The files in this format are transient; they are converted to standard ROOT files for permanent storage.
\subsubsection{Testing the Geometry Description}
Developing a functional material and geometry description is quite cumbersome, because, usually, complex construction drawings need to be converted from CAD or paper into code that places the separate volumes with their correct transformation.
To assist the sub-detector developers with this task, we developed a set of tools to supplement the visualization tools provided by Geant4.
First, we run an automated overlap check that uses methods provided by Geant4 to check, for each volume, if it has intersections with any of its siblings or its parent.
This is done by randomly creating points on the surface of the inspected volume and checking whether each point is either outside the parent or inside any of the siblings.
This check is performed on a nightly basis and repeated with more sample points prior to major releases, or if large changes to the geometry have been made.
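The sampling idea can be sketched with simple two-dimensional axis-aligned boxes standing in for Geant4 solids; this is a minimal illustration of the method, not the Geant4 overlap checker itself:

```python
import random

def box_contains(box, p):
    (xlo, xhi), (ylo, yhi) = box
    return xlo <= p[0] <= xhi and ylo <= p[1] <= yhi

def surface_point(box, rng):
    # Pick a random point on the perimeter of a 2D axis-aligned box.
    (xlo, xhi), (ylo, yhi) = box
    side = rng.randrange(4)
    if side == 0: return (xlo, rng.uniform(ylo, yhi))
    if side == 1: return (xhi, rng.uniform(ylo, yhi))
    if side == 2: return (rng.uniform(xlo, xhi), ylo)
    return (rng.uniform(xlo, xhi), yhi)

def overlaps(volume, parent, siblings, samples=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(samples):
        p = surface_point(volume, rng)
        if not box_contains(parent, p):                 # sticks out of parent
            return True
        if any(box_contains(s, p) for s in siblings):   # intrudes a sibling
            return True
    return False

parent = ((0, 10), (0, 10))
ok_volume = ((1, 4), (1, 4))
bad_volume = ((3, 7), (1, 4))   # intersects ok_volume
assert not overlaps(ok_volume, parent, [])
assert overlaps(bad_volume, parent, [ok_volume])
```

As in the real check, a larger number of samples increases the chance of catching small overlaps, which is why the pre-release runs use more points.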
Second, we provide a module to scan the material budget encountered when passing through the detector.
This module tracks non-interacting, neutral particles through the detector, and records the amount of material encountered along the way.
It can be configured to scan the material in spherical coordinates, in a two-dimensional grid, or as a function of the depth along rays in a certain direction.
The output is a ROOT file containing histograms of the traversed material.
These histograms can be created for each material or each detector component.
In particular, the material distribution by component is a very useful tool to track changes to the material description, allowing us to visualize the differences after each update to the volume-definition code or material-description parameters.
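The per-component accumulation along one ray reduces to a grouped sum; a minimal sketch, with hypothetical component names and thicknesses in units of radiation length:

```python
from collections import defaultdict

def material_budget(segments):
    """Sum the traversed thickness per detector component.

    segments: list of (component, thickness_in_X0) pairs encountered
    along one ray through the detector, in order."""
    per_component = defaultdict(float)
    for component, thickness in segments:
        per_component[component] += thickness
    return dict(per_component)

# Hypothetical ray through the beam pipe, two PXD layers, and the SVD.
ray = [("BeamPipe", 0.004), ("PXD", 0.0021), ("PXD", 0.0021), ("SVD", 0.007)]
budget = material_budget(ray)
assert abs(budget["PXD"] - 0.0042) < 1e-12
assert set(budget) == {"BeamPipe", "PXD", "SVD"}
```

Repeating this over a grid of rays and filling histograms gives the per-component material maps used to track changes between geometry versions.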
\subsubsection{Magnetic Field Description}
The magnetic field description for Belle~II is loaded from the conditions database.
The payload is created from an XML file using the same procedure as for the geometry description introduced above.
Because the magnetic field does not create any Geant4 volumes, analysis jobs can obtain the field values without the need to instantiate a Geant4 geometry.
The magnetic field creator can handle a list of field definitions for different regions of the detector.
If more than one definition is valid for a given region, either the sum of all field values is taken, or only one definition's value is returned, if it is declared as exclusive.
We have implementations for a constant magnetic field, a 2D radially symmetric field map, and full 3D field maps, as well as some special implementations to recreate the accelerator-magnet conditions close to the beams.
For normal simulation and analysis jobs, we have a segmented 3D field map with a fine grid in the inner-detector region and a total of three coarse outer grids for the two endcaps and the outer-barrel region.
\subsubsection{Python Interface}
\label{sec:pythoninterface}
To apply the functionality described in \autoref{sec:basf2} to a data processing task -- at the most basic level, arranging appropriate modules into a path and starting the event processing --
basf2 provides a Python interface.
Typically, users perform tasks using Python scripts (called ``steering files'' in this context), but interactive use is also supported.
Figure~\ref{fig:steering} shows a minimal example for the former, while \autoref{sec:jupyternotebooks} discusses applications for the latter.
\begin{figure}[htbp]
\begin{lstlisting}
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Generate 100 events with event numbers 0 to 99 that contain only the event meta data.
import basf2
main = basf2.create_path()
main.add_module('EventInfoSetter', evtNumList=[100])
basf2.process(main)
\end{lstlisting}
\caption{Example of a basf2 steering file.}
\label{fig:steering}
\end{figure}
Python is a very popular language and provides an easy-to-understand syntax, so new users can quickly learn to use the framework efficiently.
It allows us to harness the power of a modern scripting language for which copious (third-party) packages are available.
We exploit this, for example, to build a higher-level framework for performing typical analysis tasks in a user-friendly way.
The docstring feature of Python is used to generate documentation web pages with Sphinx.
We use Boost.Python~\cite{boost.python} to expose the basf2 framework features in Python.
While steering files can be executed by passing them directly to the Python interpreter, we also provide the \texttt{basf2} executable as an alternative to add framework-specific command line arguments.
Among these are options to print versioning information, list available modules and their description, and specify input or output file names.
Besides the implementation of modules in C++, the framework allows the user to execute modules written in Python.
This makes it even easier for users to write their own module code because it can be embedded in the steering file.
It can also facilitate rapid prototyping.
Even so, the modules provided by the framework are written in C++ (with a few exceptions for tasks that are not performance critical) to profit from the advantages of compiled code.
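How a user module slots into the path-based event loop can be modeled with a few self-contained classes; this is a conceptual stand-in for \texttt{basf2.create\_path}/\texttt{process}, not the framework itself:

```python
class Module:
    """Base class: the framework calls event() once per processed event."""
    def event(self):
        pass

class Path:
    def __init__(self):
        self.modules = []
    def add_module(self, module):
        self.modules.append(module)

def process(path, n_events):
    # Simplified event loop: call every module once per event, in order.
    for _ in range(n_events):
        for module in path.modules:
            module.event()

class CountingModule(Module):
    """A user module defined directly in the steering script."""
    def __init__(self):
        self.calls = 0
    def event(self):
        self.calls += 1

path = Path()
counter = CountingModule()
path.add_module(counter)
process(path, 100)
assert counter.calls == 100
```

A real Python module in basf2 follows the same pattern: subclass the framework's module base class, override the event hook, and append an instance to the path in the steering file.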
Using PyROOT~\cite{pyroot}, Python access to the Data Store is provided by classes resembling the \texttt{StoreObjPtr} and \texttt{StoreArray} interfaces.
In an equivalent way, interface classes provide access to conditions data, such as calibration constants (see Section~\ref{sec:conditions}).
A feature that facilitates development and debugging is the possibility to interrupt the event processing and present an interactive Python prompt.
In the interactive session based on IPython~\cite{ipython}, the user can inspect or even modify the processed data.
\subsubsection{Jupyter Notebooks}
\label{sec:jupyternotebooks}
Typical HEP user-level analyses for processing large data samples are mostly based on the execution of small scripts written in Python or ROOT macros that call complex compiled algorithms in the background.
Jupyter notebooks~\cite{jupyter} allow a user to develop Python-based applications that bundle code, documentation and results (such as plots).
They provide an enriched browser-based working environment that is a front-end to an interactive Python session that might be hosted centrally on a remote high-performance computing cluster.
Jupyter notebooks include convenient features like syntax highlighting and tab-completion as well as integration with data-analysis tools like ROOT, matplotlib~\cite{matplotlib} or pandas~\cite{pandas}.
The integration of Jupyter into basf2 simplifies the process of creating and processing module paths within Jupyter notebooks and represents a natural next step beyond the integration of Python into basf2.
The package for the interplay between Jupyter and basf2 is encapsulated into a basf2-agnostic hep-ipython-tools project~\cite{hep-ipython-tools} that can be used with the framework code of other experiments.
The processing of one or more paths is decoupled into an abstract \textit{calculation} object, which plays well with the interactivity of the notebooks, because multiple instances of this calculation can be started and monitored, while continuing the work in the notebook.
Abstracting the basf2 calculation together with additional interactive widgets and convenience functions for an easier interplay between Jupyter and basf2 not only improves the user experience, but also accentuates the narrative and interactive character of the notebooks.
The decoupling of the calculations is achieved using the multiprocessing library and depends heavily on the ability to steer basf2 completely from the Python process.
Queues and pipes are used from within the basf2 modules to pass process- and runtime-dependent information back to the notebook kernel.
The interactive widgets are created using HTML and JavaScript and display information on the modules in a path, the content of the data store or the process status and statistics.
\section{Introduction}\label{sec:introduction}
Terahertz (THz) wireless is an emerging technology that provides new spectrum resources for future communication systems. The availability of contiguous high-bandwidth channels in the THz spectrum makes it a promising candidate for wireless backhaul/fronthaul technology \cite{Koenig_2013_nature,Wang2014, Elayan_2019}. The THz spectrum is mostly unlicensed and can support secure terabits-per-second (Tbps) data transmissions with low latency for various high-end applications. Line-of-sight (LOS) THz technology requires highly directional, high-gain antennas to compensate for the severe path loss caused by the molecular absorption of transmitted signals. Nevertheless, the THz link is susceptible to random pointing errors caused by the misalignment between transmitter and receiver antenna beams and may incur transceiver distortion at higher frequencies, in addition to stochastic multi-path fading \cite{Kokkoniemi_2018,KOKKONIEMI2020, Boulogeorgos_Analytical, Shanyun2021}. Alleviating the adverse effects of signal attenuation and fading is desirable for high-speed THz links.
Recently, dual-hop and multi-hop relaying at THz frequencies have been investigated \cite{ Xia_2017,Giorgos2020, Boulogeorgos_2020_THz_THz,huang2021, Boulogeorgos_Error,Pranay_2021_TVT, Rong_2017, Abbasi_2017,Mir2020}. More specifically, the authors in \cite{Xia_2017} formulated an optimal relaying distance for THz-band communication to maximize the network throughput. In \cite{Giorgos2020}, a relay selection approach was suggested to mitigate the impact of antenna misalignment and shadowing due to the human blockage in a multi-relay setup. A reconfigurable intelligent surface (RIS) assisted multi-hop THz system over Rician fading was considered in \cite{huang2021} to mitigate the signal attenuation using deep reinforcement learning (DRL) based beam-forming technique. Considering the generalized $\alpha$-$\mu$ fading combined with stochastic pointing errors, the decode-and-forward (DF) protocol was employed to link THz and radio frequency (RF) technologies \cite{Boulogeorgos_Error,Pranay_2021_TVT}. Using multi-antenna transceivers, the DF relaying was studied for a dual-hop THz-THz link \cite{Boulogeorgos_2020_THz_THz}.
There has been increased research interest in modeling the short-term fading for THz communications \cite{Riza2017, Kursat2021, Papasotiriou2021,RIS_THz_HW_Impaiment}. A Gamma mixture channel model for THz transmissions over a short ($<1$~\mbox{m}) link is proposed in \cite{Kursat2021}. The authors in \cite{Papasotiriou2021} find the $\alpha$-$\mu$ fading model suitable for THz transmission using measurements at $152$~\mbox{GHz} for link lengths up to $50$~\mbox{m}. Using comprehensive THz measurement data at $300$~\mbox{GHz} for train-to-infrastructure and inside-station scenarios \cite{Ke2019}, the authors in \cite{RIS_THz_HW_Impaiment} demonstrate that the fluctuating two-ray (FTR) model is a better fit for THz multi-path fading than the conventional Rician and Nakagami-m distributions. Considering the combined effect of short-term FTR fading, antenna misalignment, and hardware impairments, \cite{RIS_THz_HW_Impaiment} derived the outage probability and ergodic capacity for RIS-aided THz systems.
The recently proposed FTR model has been extensively studied for mmWave wireless transmissions \cite{Juan2017, Jiayi2018, Hui2019, Wen2018,Zhang_2020_mmWave_FSO,Osmah2019}.
In \cite{Juan2017,Jiayi2018, Hui2019}, analytical performance was studied for a single-link FTR fading channel. The physical-layer secrecy performance over the FTR fading channel was analyzed in \cite{Wen2018}. The authors in \cite{Zhang_2020_mmWave_FSO} analyzed the performance of a mixed free-space-optics (FSO)-mmWave system by modeling the mmWave and FSO channels as FTR and Gamma-Gamma distributed, respectively. In contrast to a single-antenna system, multiple antennas at the receiver can harness spatial diversity over independent fading for improved performance \cite{Hahemi2020, Hussein2021, Jiakang2019,Maryam1019, Hadi2020}. In \cite{Hahemi2020}, the authors analyzed the performance of an equal-gain combining (EGC) receiver by deriving the outage probability and average BER using single-variate mathematical functions over FTR fading channels. In \cite{Hussein2021}, a low-complexity selection combining receiver was investigated with a performance analysis of the outage probability, average BER, and ergodic capacity in terms of the multi-variate Fox's H function. In \cite{Jiakang2019}, the authors analyzed the optimal maximal ratio combining (MRC) receiver by deriving the PDF and CDF of the sum of arbitrarily distributed FTR variates. In \cite{Maryam1019}, the outage probability and an upper bound on the average BER were derived for the MRC receiver. The authors in \cite{Hadi2020} provided asymptotic and non-asymptotic expressions of the outage probability and average BER for the MRC receiver over non-identically distributed FTR fading channels. The moment-matching method was used to approximate the statistics of the sum of FTR fading channels when analyzing relay-assisted radio frequency (RF)-mmWave wireless communications for high-speed trains \cite{Jiayi2020}.
In light of the above research, and to the best of the authors' knowledge, a performance analysis of the MRC receiver with THz wireless transmissions over FTR fading channels jointly with stochastic pointing errors is not available in the open literature. The main contributions of this paper are as follows:
\begin{itemize}[leftmargin=*]
\item By deriving the PDF and CDF of a single THz link using standard mathematical functions, we provide an exact statistical characterization of the SNR for the MRC receiver under the joint effect of FTR short-term fading and zero-boresight pointing errors, considering independent and non-identically distributed (i.ni.d.) channel conditions, in terms of the multi-variate Fox's H function.
\item We develop exact analytical expressions of ergodic capacity, outage probability, and average BER for both single-antenna reception and MRC receiver and present asymptotic expressions for outage probability and average BER at high SNR. We derive the diversity order of the considered system to show the advantage of multi-antenna reception and the impact of pointing errors.
\item We evaluate multi-variate Fox's H function using Python code \cite{Alhennawi2016} and validate our derived analytical expressions with Monte-Carlo simulations. We also demonstrate the effect of various system and channel parameters on the performance of THz wireless communications.
\end{itemize}
\section{System Model}
A single-antenna source communicates with an $L$-antenna destination over the THz spectrum. The THz link is affected by path loss, short-term fading, pointing errors, and transceiver distortions. Assuming negligible hardware impairments \cite{Boulogeorgos_Error, Pranay_2021_VTC}, the received signal $y_i$ at the $i$-th antenna is given by:
\begin{equation} y_{i} = h_l h_i s + w_i,
\label{eq:rx_one}
\end{equation}
where $h_{l}$ is the path gain, $s$ is the transmitted signal with power $P$, $h_i$ denotes the fading channel coefficient, and $w_i$ is the additive white Gaussian noise with a variance $\sigma_w^2$.
The deterministic path gain $h_{l}$ depends on the antenna gains, carrier frequency, link distance, and molecular absorption coefficient \cite{Boulogeorgos_Analytical}:
\begin{equation}
h_l = \frac{c\sqrt{G_{t}G_{r}}}{4\pi f d} \exp\left(-\frac{1}{2}k(f)d\right),
\end{equation}
where $c$, $f$, and $d$ denote the speed of light, carrier frequency, and link distance, respectively, whereas $G_{t}$ and $G_{r}$ denote the gains of the transmitting and receiving antennas, respectively. The term $k(f,T,\psi,p)$ is the molecular absorption coefficient, which depends on the carrier frequency $f$, temperature $T$, relative humidity $\psi$, and atmospheric pressure $p$ \cite{Boulogeorgos_performance_2018}.
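For concreteness, the path gain above can be evaluated numerically. The Python sketch below uses the link parameters adopted in the numerical section ($f=275$ GHz, $d=50$ m, $50$ dBi antennas); the absorption coefficient value is an assumed placeholder, since evaluating $k(f,T,\psi,p)$ requires the molecular absorption database.

```python
import math

def thz_path_gain(f, d, Gt_dBi, Gr_dBi, k_abs):
    """Deterministic THz path gain h_l = c*sqrt(Gt*Gr)/(4*pi*f*d) * exp(-k(f)*d/2)."""
    c = 3e8                              # speed of light (m/s)
    Gt = 10 ** (Gt_dBi / 10)             # transmit antenna gain (linear)
    Gr = 10 ** (Gr_dBi / 10)             # receive antenna gain (linear)
    spreading = c * math.sqrt(Gt * Gr) / (4 * math.pi * f * d)
    absorption = math.exp(-0.5 * k_abs * d)
    return spreading * absorption

# 275 GHz, 50 m link, 50 dBi antennas; k_abs is an assumed placeholder value
h_l = thz_path_gain(275e9, 50.0, 50.0, 50.0, k_abs=0.0033)
path_loss_dB = -20 * math.log10(h_l)     # effective path loss in dB
```

Even with $50$ dBi antennas at both ends, the spreading and absorption terms leave a path gain well below unity, which is why the noise power and antenna gains dominate the THz link budget.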
The compound channel coefficient is $h_i = h_{pi} h_{fi}$, where $h_{pi}$ and $h_{fi}$ model the pointing error and short-term fading, respectively. We use the zero-boresight model for pointing errors $h_{pi}$ \cite{Farid2007}:
\begin{equation}
\begin{aligned}
f_{h_{pi}}(h_p) &= \frac{\phi^2}{S_{0}^{\phi^2}}h_{p}^{\phi^{2}-1}, \quad 0 \leq h_p \leq S_0,
\end{aligned}
\label{eq:pdf_hp}
\end{equation}
where $S_0=\mbox{erf}(\upsilon)^2$ with $\upsilon=\sqrt{\pi/2}\,(r_1/\omega_z)$, $r_1$ is the receiver aperture radius, and $\omega_z$ is the beamwidth; $\phi = {\omega_{z_{\rm eq}}}/({2 \sigma_{s}})$ with $\omega_{z_{\rm eq}}$ the equivalent beamwidth at the receiver, given by $\omega_{z_{\rm eq}}^2 = {\omega^2_z} \sqrt{\pi} \mbox{erf}(\upsilon)/(2\upsilon\exp(-\upsilon^2))$; and $\sigma^2_{s}$ is the variance of the pointing error displacement characterized by the horizontal sway and elevation \cite{Farid2007}.
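As a quick numerical illustration of these definitions, the Python sketch below computes $S_0$ and $\phi$ from the aperture radius, beamwidth, and jitter standard deviation; the $10$ cm aperture radius matches the numerical section, while the beamwidth and jitter values are assumed for illustration only.

```python
import math

def pointing_error_params(r1, w_z, sigma_s):
    """S_0 and phi of the zero-boresight pointing-error model (Farid-Hranilovic)."""
    v = math.sqrt(math.pi / 2) * r1 / w_z                 # upsilon
    S0 = math.erf(v) ** 2                                 # fraction of collected power
    # equivalent beamwidth squared at the receiver
    w_zeq_sq = w_z**2 * math.sqrt(math.pi) * math.erf(v) / (2 * v * math.exp(-v**2))
    phi = math.sqrt(w_zeq_sq) / (2 * sigma_s)             # beamwidth-to-jitter ratio
    return S0, phi

# 10 cm aperture radius; beamwidth and jitter values are assumed for illustration
S0, phi = pointing_error_params(r1=0.10, w_z=0.35, sigma_s=0.10)
```

A larger jitter $\sigma_s$ shrinks $\phi$ (stronger pointing errors), while $S_0$ depends only on the aperture-to-beamwidth ratio.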
To model $|h_{fi}|^2$, we use the FTR fading channel with PDF given as \cite{Jiayi2018}:
\begin{eqnarray} \label{eq:ftr1}
&{f_{|h_{fi}|^{2} }}(x) = \frac{{{m^m}}}{{\Gamma (m)}}\sum \limits _{j = 0}^\infty {\frac{{K^jd_jx^j}}{{(\Gamma (j + 1))^2(2\sigma ^2)^{j+1}}}} \exp (- \frac{x}{2\sigma ^2})
\end{eqnarray}
where $K$ is the ratio of the average power of the dominant components to that of the multipath components, $m$ is the fading severity index, and $\Delta$ denotes the similarity of the two dominant waves. The term $\sigma^2$ represents the variance of the diffuse components such that $\sigma^2=\frac{1}{2(1+K)}$ for a normalized average SNR. The factor $d_j$ is defined in \cite{Jiayi2018} and was recently updated with an additional factor in \cite{Miguel2021}.
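For Monte-Carlo validation, $|h_{fi}|^2$ can be simulated directly from the physical definition of the FTR model: two specular components modulated by a common unit-mean Gamma($m$) fluctuation plus a diffuse Gaussian component. The Python sketch below follows this construction (parameter values are illustrative); with the normalization $\sigma^2=1/(2(1+K))$, the sample mean power should be close to one.

```python
import numpy as np

rng = np.random.default_rng(0)

def ftr_samples(K, m, Delta, n, rng):
    """Draw |h|^2 samples from the physical FTR model with unit mean power.

    Specular amplitudes V1, V2 satisfy K = (V1^2+V2^2)/(2*sigma2) and
    Delta = 2*V1*V2/(V1^2+V2^2); a common unit-mean Gamma(m) factor zeta
    fluctuates both specular components.
    """
    sigma2 = 1.0 / (2.0 * (1.0 + K))            # diffuse power per real dimension
    P_spec = 2.0 * sigma2 * K                   # total specular power V1^2 + V2^2
    V1 = np.sqrt(P_spec * (1 + np.sqrt(1 - Delta**2)) / 2)
    V2 = np.sqrt(P_spec * (1 - np.sqrt(1 - Delta**2)) / 2)
    zeta = rng.gamma(shape=m, scale=1.0 / m, size=n)      # unit-mean fluctuation
    ph1 = rng.uniform(0, 2 * np.pi, n)
    ph2 = rng.uniform(0, 2 * np.pi, n)
    diffuse = rng.normal(0, np.sqrt(sigma2), n) + 1j * rng.normal(0, np.sqrt(sigma2), n)
    h = np.sqrt(zeta) * (V1 * np.exp(1j * ph1) + V2 * np.exp(1j * ph2)) + diffuse
    return np.abs(h) ** 2

g = ftr_samples(K=10.0, m=2.0, Delta=0.5, n=200_000, rng=rng)
mean_power = g.mean()    # should be close to 1 under the adopted normalization
```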
We denote the SNR of the $i$-th antenna as $\gamma_i=\gamma_0|h_i|^2$, where $\gamma_0= \frac{P |h_{l}|^2}{\sigma_{w}^2}$.
Assuming perfect channel state information (CSI), the SNR with optimal combining for the MRC receiver is $\gamma_{}= \sum_{i=1}^L \gamma_i$. Thus, the PDF and CDF of the sum of products of pointing-error and FTR random variables are required for the statistical performance analysis of the MRC receiver.
\section{Statistical Derivations}
In this section, we derive closed-form expressions of the PDF and CDF of a single THz link and then provide statistical results for the sum of $L$ arbitrarily distributed FTR fading links combined with stochastic pointing errors.
\begin{my_proposition}
If $|h_{i}|=|h_{fi}||h_{pi}|$ is the combined effect of FTR fading and pointing errors, then the PDF and CDF of the single-link SNR $\gamma_i=\gamma_0|h_{i}|^{2}$ are given by
\begin{eqnarray} \label{eq:pdf_single}
& f_{\gamma_i}(x) = \frac{{{\phi^{2}}{m^m}}}{{2S_0^{\phi^{2}} \gamma_0^{\frac{\phi^2}{2}} {{\left({2{\sigma ^2}} \right)}^{{\frac{\phi^{2}}{2}}}}} {\Gamma (m)}}\sum \limits _{j = 0}^\infty {\frac{{K^jd_j}}{{[\Gamma \left({j + 1} \right)]^2}}} \nonumber \\& {{x^{({\frac{\phi^2}{2}}-1)}}}\Gamma \left({ -\frac{\phi^{2}}{2}+j+1,\frac{{x}}{{2 \gamma_0 \sigma ^2 S_0^{2}}}} \right)
\end{eqnarray}
\begin{eqnarray}\label{eq:cdf_single}
&F_{\gamma_i}(x)= \frac{{{m^m}}}{{2S_0^{\phi^{2}}} \gamma_0^{\frac{\phi^2+1}{2}}{{\left({2{\sigma ^2}} \right)}^{{\frac{\phi^{2}}{2}}}}{\Gamma (m)}}\sum \limits _{j = 0}^\infty {\frac{{{K^j}{d_j}}}{{ [\Gamma ({j + 1}})]^2}} \nonumber \\& 2 x^{\frac{\phi^2}{2}} \Gamma({ -\frac{\phi^{2}}{2}+j+1,\frac{{x}}{{2\gamma_0 \sigma^2 S_0^{2}}}})\nonumber\\& -2^{{\phi^{2}}{}+1} {({}{{\gamma_0\sigma ^2 S_0^{2}}})}^{\frac{\phi^2}{2}} x^{-\frac{\phi^2}{2}} \Gamma (j+1,\frac{{x}}{{2\gamma_0\sigma ^2 S_0^{2}}})
\end{eqnarray}
\end{my_proposition}
\begin{IEEEproof}
Transforming random variable in \eqref{eq:pdf_hp}, we get
\begin{eqnarray}\label{eq:pe1}
f_{{h_{pi}^{2}}}(x)=\frac{1}{2} {\phi ^{2}S^{-\phi ^{2}}_{0}}{}x^{{\frac{\phi ^{2}}{2}-1}}, \quad 0 \leq x \leq {S_{0}^{2}},
\end{eqnarray}
Using the limits of the PDFs in \eqref{eq:ftr1} and \eqref{eq:pe1}, the PDF of $|h_{i}|^{2}= |h_{fi}|^{2}h_{pi}^{2}$ can be expressed as \cite{papoulis_2002}
\begin{eqnarray}\label{eq:pdf_der1}
f_{h_{i}^{2}}(x) = \int _{0}^{S_{0}^2} \frac {1}{y} f_{|h_{fi}|^{2}}\left ({\frac {x}{y}}\right) f_{h_{pi}^2}(y) \mathrm {d}y.
\end{eqnarray}
Substituting \eqref{eq:ftr1} and \eqref{eq:pe1} in \eqref{eq:pdf_der1}, we have
\begin{eqnarray}\label{eq:pdf_der2}
& f_{h_{i}^{2}}(x)= \sum \limits _{j = 0}^\infty {{\frac{{{K^j}{d_j}}}{{[\Gamma(j+1)]^2}{(2{\sigma^{2}})^{j+1}}}}{x^{j}}}\nonumber \\& \int_{0}^{S_{0}^2} {e^{-\frac{x}{2y{\sigma ^2}}}} {y}^{\frac{\phi^{2}}{2}-2-j}dy
\end{eqnarray}
Expressing the integral in \eqref{eq:pdf_der2} in terms of the incomplete Gamma function and applying the transformation of random variables $f_{\gamma_i}(\gamma)=\frac{1}{\gamma_0}f_{|h_{i}|^{2}}(\gamma/\gamma_0)$, we get \eqref{eq:pdf_single}. To derive the CDF $F_{\gamma_i}(x)=\int_0^{x} f_{\gamma_i}(z)dz$, we use the identity $\int x^{b-1} \Gamma(s, x) \mathrm{d} x= \frac{1}{b}\big(x^b\Gamma(s,x)-\Gamma(s+b,x)\big)$ to get \eqref{eq:cdf_single}.
\end{IEEEproof}
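The incomplete-Gamma antiderivative used in the CDF step, $\int x^{b-1}\Gamma(s,x)\,\mathrm{d}x = \frac{1}{b}\big(x^{b}\Gamma(s,x)-\Gamma(s+b,x)\big)$, can be verified numerically, e.g., with the Python sketch below (the test values are arbitrary):

```python
import numpy as np
from scipy.special import gamma, gammaincc

def upper_gamma(s, x):
    """Upper incomplete Gamma function Gamma(s, x)."""
    return gammaincc(s, x) * gamma(s)

def antiderivative(s, b, x):
    """Candidate antiderivative of x^(b-1) * Gamma(s, x)."""
    return (x**b * upper_gamma(s, x) - upper_gamma(s + b, x)) / b

# trapezoidal quadrature of the integrand on [x1, x2] (arbitrary test values)
s, b, x1, x2 = 1.7, 0.8, 0.3, 2.5
x = np.linspace(x1, x2, 200_001)
f = x ** (b - 1) * upper_gamma(s, x)
numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
closed = antiderivative(s, b, x2) - antiderivative(s, b, x1)
```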
In the following theorem, we capitalize on the results of Proposition 1 to derive the PDF and CDF of the SNR for the MRC receiver:
\begin{my_theorem}
If the SNR of a single THz link is distributed as \eqref{eq:pdf_single}, then the PDF $f_{\gamma_{}}(\gamma)$ and CDF $F_{\gamma_{}}(\gamma)$ of the SNR $\gamma_{}= \sum_{i=1}^L \gamma_i$ for an $L$-antenna MRC receiver are given by
\begin{flalign}\label{eq:pdf_mrc}
f_{\gamma_{}}(\gamma) =& \frac{1}{\gamma}\sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \prod_{l=1}^{L} \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}\gamma^{\phi^2/2}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}}\gamma_0)^{\phi^2/2}}}}} \nonumber \\ &
~~~~~~H^{0,0:2,1;2,1;\dots;2,1}_{0,1:2,2;2,2;\dots;2,2} \bigg[ ~\begin{matrix} V(\gamma) \end{matrix}~ \bigg| \begin{matrix} ~~V_1~~ \\ ~~V_2~~ \end{matrix} \bigg]
\end{flalign}
where $V(\gamma) = \big\{\frac{\gamma}{{2\sigma_i^2 S_0^{2} \gamma_0}}\big\}_{i=1}^L$, $V_1 = -: \big\{ \big(1-\frac{\phi^2}{2},1 \big), \big(1,1\big) \big\};\cdots;\big\{ \big(1-\frac{\phi^2}{2},1 \big), \big(1,1\big) \big\} $ and $V_2 = \big\{ \big(1-\frac{L\phi^2}{2};1,\cdots,1 \big) \big\}; \big\{ \big(-\frac{\phi^2}{2}+j_i+1,1 \big), \big(0,1\big) \big\}_{i=1}^L$
\begin{eqnarray}\label{eq:cdf_mrc}
& F_{\gamma_{}}(\gamma) = \sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \prod_{l=1}^{L} \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}\gamma^{\phi^2/2}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}\gamma_0})^{\phi^2/2}}}}} \nonumber \\ & ~ H^{0,0:2,1;2,1;\dots;2,1}_{0,1:2,2;2,2;\dots;2,2}\bigg[~ \begin{matrix} U(\gamma) \end{matrix}~ \bigg| \begin{matrix} ~~U_1~~ \\ ~~U_2~~ \end{matrix} \bigg]
\end{eqnarray}
where $U(\gamma) = \big\{\frac{\gamma}{{2\sigma_i^2 S_0^{2} \gamma_0}}\big\}_{i=1}^L$, $U_1 = -: \big\{ \big(1-\frac{\phi^2}{2},1 \big), \big(1,1\big) \big\}; \cdots ;\big\{ \big(1-\frac{\phi^2}{2},1 \big), \big(1,1\big) \big\} $ and $U_2 = \big\{ \big(-\frac{L\phi^2}{2};1,\cdots,1 \big) \big\}; \big\{ \big(-\frac{\phi^2}{2}+j_i+1,1 \big), \big(0,1\big) \big\}_{i=1}^L$
\begin{figure*}
\begin{eqnarray}\label{eq:asymptotic_out}
&P_{\rm out_{}}^{{\rm MRC}, \infty} = \hspace{0 mm}\sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \hspace{0mm} \prod_{l=1}^{L} \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}\gamma^{\phi^2/2}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}\gamma_0})^{\phi^2/2}}}}} \frac{1}{ \beta \Gamma\big(\hspace{0 mm}\frac{L\phi^2}{2} +\sum_{i=1}^{L} g_i\hspace{0mm}\big)} \nonumber \\&\prod_{i=1}^{L} \frac{\prod_{j=1,j\neq c_i}^{2} \Gamma(b_{i,j}+B_{i,j}-B_{i,j}g_i) \Gamma\big(\hspace{0mm}\frac{\phi^2}{2} - 1+ g_i\hspace{0mm}\big) }{\Gamma\big(\frac{\phi^2}{2}-j_i\big) } \left(\hspace{0mm}\frac{\gamma}{{2\sigma_i^2 S_0^{2}\gamma_0}}\hspace{0mm}\right)^{g_i}
\end{eqnarray}
\begin{eqnarray}\label{eq:asymptotic_ber}
&\bar{P}_{e}^{{\rm MRC}, \infty} = \frac{1}{2\Gamma(p)q^{\frac{L\phi^2}{2}}}\sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \prod_{l=1}^{L} \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}}\gamma_0)^{\phi^2/2}}}}} \frac{\Gamma\big(p+\frac{L\phi^2}{2} +\sum_{i=1}^{L} g_i\big) }{ \beta \Gamma\big(\frac{L\phi^2}{2} + \sum_{i=1}^{L} g_i\big)} \nonumber \\ & \prod_{i=1}^{L} \frac{\prod_{j=1,j\neq c_i}^{2} \Gamma(b_{i,j}+B_{i,j}-B_{i,j}g_i) \Gamma\big(\frac{\phi^2}{2} - 1+ g_i\big) }{\Gamma\big(\frac{\phi^2}{2}-j_i\big) } \left(\frac{1}{{2\sigma_i^2 S_0^{2}\gamma_0}q} \right)^{g_i-1}
\end{eqnarray}
where $b_{i,1} = -\frac{\phi^2}{2}+j_i+1$, $b_{i,2} = 0$, $B_{i,1} = 1$, $B_{i,2} = 1$, $g_i = \min\{-\frac{\phi^2}{2}+j_i+1, 0 \} $, $\beta = \prod_{i=1}^{L} B_{i,c_i}$, and $c_i = { \argmin}_{j=1:m_i} \left\{\frac{b_{i,j}}{B_{i,j}}\right\}$.
\hrule
\end{figure*}
\end{my_theorem}
\begin{IEEEproof}
See Appendix A.
\end{IEEEproof}
\section{Performance Analysis}
In this section, we analyze the performance of the single-antenna THz link in terms of standard mathematical functions and use multi-variate Fox's H and Gamma functions to provide exact and asymptotic analyses of multi-antenna reception.
\subsection{Single Antenna Reception (SAR)}
\subsubsection{Outage Probability}
The outage probability is defined as the probability that the instantaneous SNR falls below a predetermined threshold $\gamma_{\rm th}$, i.e., $ P_{\rm out}^{} = P(\gamma <\gamma_{\rm th})$. Thus, an exact expression of the outage probability is $P_{\rm out}^{\rm SAR}= F_{\gamma_i}(\gamma_{\rm th})$, where $F_{\gamma_i}(x)$ is given in \eqref{eq:cdf_single}.
\subsubsection{Average BER}
The average BER is another important performance metric for communication systems, defined as \cite{Ansari2011}
\begin{equation} \label{eq:ber_eqn}
\bar{P}_e = \frac{q^p}{2\Gamma(p)}\int_{0}^{\infty} \exp({-qx})x^{p-1}\Psi_{X}(x)dx
\end{equation}
where $p$ and $q$ are constants that specify different modulation techniques and $\Psi_{X}$ is the CDF. We denote $\xi = 2\gamma_0\sigma^2S_0^2$. Using \eqref{eq:cdf_single} in \eqref{eq:ber_eqn}, applying the identity [\cite{Gradshteyn}, eq. 6.455.1], and expressing the hypergeometric function in terms of Gamma functions, we get
\begin{eqnarray}\label{ber_single}
&\bar{P}_{e}^{\rm SAR} = \frac{m^m q^p}{{4S_0^{\phi^{2}}} \gamma_0^{\frac{\phi^2+1}{2}}{{\left({2{\sigma ^2}} \right)}^{{\frac{\phi^{2}}{2}}}}\Gamma(m)\Gamma(p)}\sum \limits _{j = 0}^\infty \frac{{K^jd_j}}{[\Gamma(j+1)]^2} \nonumber \\ & \times \frac{2 \xi^{-(\frac{\phi^2}{2}+p)} \big[-2^{\frac{\phi^2}{2}} (\frac{\phi^2}{2}+p) \Gamma(j+1+p) + p \Gamma(2j+2+p) \big] }{p(\frac{\phi^2}{2}+p)}
\end{eqnarray}
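The CDF-based averaging in \eqref{eq:ber_eqn} can be sanity-checked against a case with a known closed form: for coherent BPSK ($p=1/2$, $q=1$) over Rayleigh fading with average SNR $\bar{\gamma}$, it must reproduce $\frac{1}{2}\big(1-\sqrt{\bar{\gamma}/(1+\bar{\gamma})}\big)$. The Python sketch below performs this check; it validates the averaging formula itself, not the FTR-specific result.

```python
import math

def avg_ber_from_cdf(cdf, p, q, upper=2000.0, steps=400_000):
    """Average BER via P_e = q^p/(2*Gamma(p)) * int_0^inf e^{-qx} x^{p-1} F(x) dx."""
    total = 0.0
    dx = upper / steps
    for i in range(steps):
        x = (i + 0.5) * dx            # midpoint rule avoids evaluating at x = 0
        total += math.exp(-q * x) * x ** (p - 1) * cdf(x) * dx
    return q ** p / (2 * math.gamma(p)) * total

snr_bar = 10.0                                        # average SNR (linear)
rayleigh_cdf = lambda x: 1.0 - math.exp(-x / snr_bar)  # SNR CDF for Rayleigh fading
ber = avg_ber_from_cdf(rayleigh_cdf, p=0.5, q=1.0)     # coherent BPSK
ber_closed = 0.5 * (1.0 - math.sqrt(snr_bar / (1.0 + snr_bar)))
```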
\subsubsection{Ergodic Capacity}
The ergodic capacity is defined as
\begin{eqnarray}\label{eq:rate_eqn}
\bar{\eta} = \int_{0}^{\infty} {\log_2}(1+x) \psi_X(x) dx
\end{eqnarray}
where $\psi_X$ denotes the PDF. Using \eqref{eq:pdf_single} in \eqref{eq:rate_eqn} and applying the identity for the definite integral of a product of two Meijer's G-functions [\cite{Meijers}, 07.34.21.0011.01], we get an exact expression for the ergodic capacity of the single THz link
\begin{eqnarray}\label{eq:rate_single_exact}
& \bar{\eta}^{\rm SAR} = \frac{{{m^m}}}{{\log (2)S_0^{\phi^{2}} \gamma_0^{\frac{\phi^2}{2}} {{\left({2{\sigma ^2}} \right)}^{{\frac{\phi^{2}}{2}}}}} {\Gamma (m)}}\sum \limits _{j = 0}^\infty \frac{{{K^j}{d_j}}}{[ \Gamma(j+1)]^2} \xi^{-\frac{\phi^2}{2}} \nonumber \\& G_{3,4}^{4,1} \Bigg( \begin{matrix} -\frac{\phi^2}{2}, 1-\frac{\phi^2}{2}, 1 \\ -\frac{\phi^2}{2}+j+1, 0, -\frac{\phi^2}{2}, -\frac{\phi^2}{2} \end{matrix} \Bigg| \xi \Bigg)
\end{eqnarray}
Further, we can use $\log(1+\gamma)>\log(\gamma)$ in \eqref{eq:rate_eqn} with \eqref{eq:pdf_single}, denoting $\psi^{(0)}$ as the digamma function, to get a simpler bound on the ergodic capacity
\begin{eqnarray}\label{eq:rate_single_bound}
\bar{\eta}^{\rm SAR} \geq \frac{2{m^m}}{{{ \log}(2)}\phi^2S_0^{\phi^{2}} \gamma_0^{\frac{\phi^2}{2}} {{\left({2{\sigma ^2}} \right)}^{{\frac{\phi^{2}}{2}}}}} {\Gamma (m)} \sum \limits _{j = 0}^\infty \frac{{{K^j}{d_j}}}{[ \Gamma(j+1)]^2} \nonumber \\ \xi^{\frac{\phi^2}{2}}\Gamma(j+1)\big(-1-\frac{\phi^2}{2} {\log}(\xi)+ \frac{\phi^2}{2} \psi^{(0)} (j+1) \big)
\end{eqnarray}
by applying integration-by-parts with the identity [\cite{Gradshteyn}, eq.4.352.1].
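The tightness of the $\log(1+\gamma)\geq\log(\gamma)$ bound can be gauged in the Rayleigh special case (no specular components), where $E[\ln\gamma]=\ln\bar{\gamma}-\gamma_E$ with $\gamma_E$ the Euler--Mascheroni constant. The Python sketch below compares the exact ergodic capacity (by Monte Carlo) with this bound at an assumed average SNR of $20$ dB.

```python
import numpy as np

rng = np.random.default_rng(1)
snr_bar = 100.0                                   # 20 dB average SNR (assumed)
gamma_smp = rng.exponential(scale=snr_bar, size=1_000_000)  # Rayleigh SNR samples

capacity = np.mean(np.log2(1.0 + gamma_smp))      # exact ergodic capacity (MC)
euler_gamma = 0.5772156649015329
bound = np.log2(snr_bar) - euler_gamma / np.log(2)  # E[log2(gamma)] in closed form
```

At high SNR the bound is tight, since $\log(1+\gamma)\approx\log(\gamma)$ for large $\gamma$.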
\subsection{Multi-antenna Reception with MRC}
\subsubsection{Outage Probability}
An exact expression of the outage probability for the MRC receiver is $P_{\rm out}^{\rm MRC}= F_{\gamma}(\gamma_{\rm th})$, where $F_{\gamma}(\gamma_{})$ is given in \eqref{eq:cdf_mrc}. We use \cite{AboRahama_2018} to present the asymptotic outage probability in \eqref{eq:asymptotic_out}. Considering the dominant terms at high SNR, we get the diversity order as $\sum_{l=1}^L \min\{j_l+1, \phi^2/2\}$. It is interesting to note that the diversity order is independent of the FTR fading parameters $K$, $m$, and $\Delta$, as also observed in earlier literature \cite{Jiakang2019}, and it is extensively verified for THz transmissions in the numerical section. The diversity order shows the advantage of multi-antenna reception and indicates that the impact of pointing errors can be minimized by using a sufficiently wide beamwidth.
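The slope interpretation of the diversity order can be illustrated in a simplified special case, MRC over i.i.d. Rayleigh branches without pointing errors, where the outage probability equals the regularized lower incomplete Gamma function $P(L,\gamma_{\rm th}/\gamma_0)$. The Python sketch below estimates the log-log slope of the outage curve between two high-SNR points and recovers the diversity order $L$; this illustrates the slope definition only, not the FTR-specific expression.

```python
from math import log10
from scipy.special import gammainc   # regularized lower incomplete Gamma P(a, x)

def outage_mrc_rayleigh(L, snr0, snr_th=1.0):
    """P(sum of L i.i.d. Exp(snr0) branch SNRs < snr_th) = P(L, snr_th/snr0)."""
    return gammainc(L, snr_th / snr0)

def diversity_slope(L, snr_lo_dB=30.0, snr_hi_dB=40.0):
    """Estimate -d log10(P_out) / d log10(SNR) from two high-SNR points."""
    p1 = outage_mrc_rayleigh(L, 10 ** (snr_lo_dB / 10))
    p2 = outage_mrc_rayleigh(L, 10 ** (snr_hi_dB / 10))
    return (log10(p1) - log10(p2)) / ((snr_hi_dB - snr_lo_dB) / 10)

slopes = {L: diversity_slope(L) for L in (1, 2, 4)}  # slope ~ L at high SNR
```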
\subsubsection{Average BER}
Substituting $F_{\gamma_{}}(\gamma)$ from \eqref{eq:cdf_mrc} in \eqref{eq:ber_eqn} and evaluating the following inner integral $I_1$
\begin{eqnarray}
\label{eq:ib}
I_1 = \int_{0}^{\infty} e^{-qz}z^{p-1}{z}^{\sum_{l=1}^L \left(\frac{\phi^2}{2} + \zeta_l\right)}dz=\frac{\Gamma\left(p+\frac{L\phi^2}{2}+\sum_{l=1}^L\zeta_l\right)}{q^{p+\frac{L\phi^2}{2}+\sum_{l=1}^L\zeta_l}}
\end{eqnarray}
and applying the definition of multivariate Fox’s H-function on the resultant expression \cite{Kilbas_2004}, we get
\begin{eqnarray}\label{eq:ber_mrc}
\bar{P}_e^{\rm MRC} =& \hspace{-1mm}\frac{1}{2\Gamma(p)q^{\frac{L\phi^2}{2}}}\sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \hspace{-2mm}\prod_{l=1}^{L} \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}}\gamma_0)^{\phi^2/2}}}}} \nonumber\\ &
\hspace{-8mm}H^{0,1:2,1;2,1;\dots;2,1}_{2,1:2,2;2,2;\dots;2,2} \bigg[ ~\begin{matrix} F(\gamma_0) \end{matrix}~ \bigg| \begin{matrix} ~~F_1~~ \\ ~~F_2~~ \end{matrix}\bigg]
\end{eqnarray}
where $F(\gamma_0) = \big\{\frac{1}{{2\sigma_i^2 S_0^{2} \gamma_0 q}}\big\}_{i=1}^L$, $F_1 = \big\{ \big(1-p-\frac{L\phi^2}{2};1,\cdots,1 \big) \big\}: \big\{ \big(1-\frac{\phi^2}{2},1 \big), \big(1,1\big) \big\};\cdots;\big\{ \big(1-\frac{\phi^2}{2},1 \big), \big(1,1\big) \big\}$ and $F_2 = \big\{ \big(-\frac{L\phi^2}{2};1,\cdots,1 \big) \big\}; \big\{ \big(-\frac{\phi^2}{2}+j_i+1,1 \big), \big(0,1\big) \big\}_{i=1}^L$. Similar to the outage probability, we get the asymptotic expression in \eqref{eq:asymptotic_ber} and the diversity order as $\sum_{l=1}^L \min\{j_l+1, \phi^2/2\}$.
\begin{figure*}[t]
\subfigure[Outage probability.]{\includegraphics[scale=0.28]{Outage_probability_ver10.pdf}}
\subfigure[Average BER.]{\includegraphics[scale=0.28]{Bit_Error_Rate_ver7.pdf}}
\subfigure[Ergodic capacity.]{\includegraphics[scale=0.28]{Capacity_receiver_antenna_ver5.pdf}}
\caption{Performance of THz wireless transmissions over FTR fading with pointing errors.}
\label{fig:outage}
\label{fig:ber}
\label{fig:capacity_antenna}
\end{figure*}
\subsubsection{Ergodic Capacity}
Substituting $f_{\gamma}(\gamma)$ in \eqref{eq:rate_eqn} and applying the definition of multivariate Fox’s H-function \cite{Kilbas_2004}:
\begin{eqnarray}
\label{eq:cap_proof}
&\bar{\eta}^{\rm MRC} = \hspace{-3mm}\sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \hspace{-2mm}\prod_{l=1}^{L}\big( \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}}\gamma_0)^{\phi^2/2}}}}}\big)
(\frac{1}{2\pi i})^L\nonumber \\ &\int_{\cal{L}}\frac{1}{\Gamma \big(\frac{L\phi^2}{2} + \sum_{l=1}^L\zeta_l\big)}
\big\{\prod_{l=1}^{L}\frac{\Gamma\big(-\frac{\phi^{2}}{2}+j_l+1 - \zeta_l\big)\Gamma(-\zeta_l)\Gamma\big(\frac{\phi^{2}}{2}+\zeta_l\big)}{\Gamma\big(1-\zeta_l\big)}
\nonumber \\& \big(\frac{1}{{2\sigma_l^2 S_0^{2}}\gamma_0}\big)^{\zeta_l}d\zeta \big\} \frac{1}{\ln(2)}\int_{0}^{\infty} \ln(1+z)
{z}^{-1+\sum_{l=1}^L \left(\frac{\phi^2}{2} + \zeta_l\right)} dz
\end{eqnarray}
We use the Mellin--Barnes representation $ \ln(1+z)=\frac{1}{{2\pi i}}\int_{\mathcal{L}_{L+1}}
\frac{\Gamma\left(1+\zeta_{L+1}\right)\Gamma\left(-\zeta_{L+1}\right)\Gamma\left(-\zeta_{L+1}\right)}{\Gamma\left(1-\zeta_{L+1}\right)}{z}^{-\zeta_{L+1}}d\zeta_{L+1}$ to represent the inner integral in \eqref{eq:cap_proof} as
\begin{eqnarray}
\label{eq:cap_ia_2}
& I_2 = \frac{1}{\ln(2){2\pi i}}\int_{\mathcal{L}_{L+1}}
\frac{\Gamma\left(1+\zeta_{L+1}\right)\Gamma\left(-\zeta_{L+1}\right)\Gamma\left(-\zeta_{L+1}\right)}{\Gamma\left(1-\zeta_{L+1}\right)} d\zeta_{L+1} \nonumber \\& \times \int_{0}^{\infty} {z}^{-1+\sum_{l=1}^L \left(\frac{\phi^2}{2} + \zeta_l\right)} {z}^{-\zeta_{L+1}}dz
\end{eqnarray}
Since the inner integral in \eqref{eq:cap_ia_2} is not convergent, we invoke the final value theorem, $\lim_{t\rightarrow \infty} \int_{0}^{t}f(z) dz = \lim_{s\rightarrow 0} F(s)$, where $F(s)$ is the Laplace transform of the integrand, and approximate the limit by $F(\epsilon)$ with $\epsilon\to 0$ to get
\begin{eqnarray}
\label{eq:flt1}
& I_{21} = \int_{0}^{\infty} {z}^{-1+\sum_{l=1}^L \left(\frac{\phi^2}{2} + \zeta_l\right)} {z}^{-\zeta_{L+1}}dz\nonumber \\&
=\Gamma \left(\frac{L\phi^2}{2} +\sum_{l=1}^{L}\zeta_l-\zeta_{L+1}\right) \left(\frac{1}{\epsilon}\right)^{\frac{L\phi^2}{2} + \sum_{l=1}^{L}\zeta_l-\zeta_{L+1}}
\end{eqnarray}
Using \eqref{eq:cap_ia_2} and \eqref{eq:flt1} in \eqref{eq:cap_proof}, and applying the definition of multivariate Fox’s H-function, we get
\begin{flalign} \label{cap_lemma}
&\bar{\eta}^{\rm MRC}= 1.4427\sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \prod_{l=1}^{L} \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}}\nonumber \\ & {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}}}{{[\Gamma(j_l+1)]^2}{(2{\sigma_l^{2}}\gamma_0\epsilon)^{\phi^2/2}}}}}
H^{0,1:2,1;2,1;\dots;2,1;2,1}_{1,1:2,2;2,2;\dots;2,2;2,2}
\bigg[~ \begin{matrix} G(\gamma_0) \\ \epsilon \end{matrix} ~\bigg| \begin{matrix} ~~G_1~~ \\ ~~G_2~~ \end{matrix} \bigg]
\end{flalign}
where $\epsilon\to 0$, $G(\gamma_0) = \big\{\frac{1}{{2\sigma_i^2 S_0^{2} \gamma_0 \epsilon}}\big\}_{i=1}^L$, $G_1 = \big\{ \big(1-\frac{L\phi^2}{2};1,\cdots,1,-1 \big) \big\}; \big\{ \big(1-\frac{\phi^2}{2},1 \big), \big(1,1\big) \big\};\cdots;\big\{ \big(1-\frac{\phi^2}{2},1 \big), \big(1,1\big) \big\}; \big\{(0,1),(1,1)\big\} $, and $G_2 = \big\{ \big(1-\frac{L\phi^2}{2};1,\cdots,1,0 \big) \big\}; \big\{ \big(-\frac{\phi^2}{2}+j_i+1,1 \big), \big(0,1\big) \big\}_{i=1}^L; \big\{(0,1),(1,1)\big\}$.
\section{Simulation and Numerical Results}
In this section, we demonstrate the performance of the considered single-antenna and MRC receivers for THz transmissions and validate the derived analytical expressions with Monte Carlo simulations. We consider a THz channel with a distance of $50$~\mbox{m}, a carrier frequency of $275$~\mbox{GHz}, and antenna gains of $50$~\mbox{dBi}. To compute the path loss of the THz link, we set the relative humidity, atmospheric pressure, and temperature to $50\%$, $101325$~\mbox{Pa}, and $296$~\mbox{K}, respectively. We use \cite{Farid2007} to compute the pointing error parameters $\phi$ and $S_0$ by varying the beamwidth and the jitter variance, with a $10$~\mbox{cm} antenna aperture radius. The AWGN power is taken as $-94.2$~\mbox{dBm}.
Fig. 1(a) demonstrates the impact of the number of receiver antennas $L$, the pointing error parameter $\phi$, and the FTR fading parameter $m$ on the outage performance of the considered THz system with $S_0=0.054$, $\gamma_{\rm th}=4$~\mbox{dB}, $K=10$, and $\Delta=0.5$. It can be seen that the improvement in performance with spatial diversity is significant when the number of receiver antennas is increased from $L=1$ to $L=4$. The figure also confirms that the outage probability improves when the fading severity parameter $m$ increases, as expected. The slopes of the plots clearly demonstrate the dependence of the diversity order on the system parameters. It can be seen that there is a distinguishable difference between the slopes for $\phi = 1$ and $\phi = 2.5$, but there is a minimal change in the slope between $\phi = 2.5$ and $\phi = 6$. As such, the diversity order increases linearly with $L$, is independent of the parameter $m$, and depends on $\phi$ when $\phi^2/2< \min_l\{j_l+1\}$ but becomes independent of it otherwise. Thus, the impact of pointing errors can be mitigated using a sufficiently wide beamwidth for THz transmissions.
In Fig. 1(b), we demonstrate the average BER performance by varying the parameters $\Delta$ and $\phi$ with $K=2$ and $m=2$. The figure shows that highly dissimilar specular components of FTR fading, captured through $\Delta$, provide an improved average BER performance. Similar to the outage probability, we can observe that the average BER improves with an increase in the number of receiver antennas $L$, and the diversity order is independent of $\Delta$ and becomes independent of pointing errors for a sufficiently high value of $\phi$.
Finally, Fig. 1(c) illustrates the relationship between the ergodic capacity and the number of receiver antennas, together with the inter-dependence of the various channel and system parameters. The figure shows the logarithmic scaling of the ergodic capacity with the number of receiver antennas $L$. It can be observed that the ergodic capacity is nearly independent of the parameters $K$ and $\Delta$ but increases with an increase in the parameter $m$. The figure also shows that pointing errors significantly degrade the THz performance. However, an increase in the number of MRC antennas reduces the performance gap by harnessing the spatial diversity, as compared to the single-antenna system.
\section{Conclusions}
In this paper, we analyzed the performance of THz wireless transmissions under the combined effects of path loss, generalized FTR fading, and Rayleigh-distributed pointing errors. We provided statistical results for the single-antenna and multi-antenna receivers by deriving the PDF and CDF of the resultant SNR. We analyzed the performance of the considered system by deriving closed-form expressions for the outage probability, average BER, and ergodic capacity. Using an asymptotic analysis of the outage probability and average BER, we derived the diversity order of the system, which provides a design criterion of using a sufficiently wide beamwidth to mitigate the impact of pointing errors. We validated our derived analytical expressions with Monte-Carlo simulations to show that the impact of the number of receiver antennas $L$, the pointing error parameter $\phi$, and the fading severity parameter $m$ on the THz wireless system is higher than that of the other parameters. Incorporating hardware impairments in the performance analysis may be a possible extension of the proposed work.
\section*{Appendix A}
Using the definition of the MGF, we apply the inverse Laplace transform to find the PDF of $\gamma_{}= \sum_{i=1}^{L} \gamma_i$ as $f_{\gamma_{}}(\gamma)= \mathcal{L}^{-1}\{ M_{\gamma_{}}(s)\}$, where $M_{\gamma_{}}(s)=\prod_{i=1}^{L} M_{\gamma_{i}}(s)$ and $ M_{\gamma_{i}}(s)$ is the MGF of the $i$-th random variable $\gamma_i$.
Converting the incomplete Gamma function in \eqref{eq:pdf_single} to a Meijer's G-function and applying the definite-integration identity for Meijer's G-functions, we get
\begin{eqnarray}
&M_{\gamma_{}}(s) = \prod_{l=1}^{L} \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} \sum \limits _{{[j_i=0]}_{i=1}^L}^\infty {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}{s^{-\frac{\phi^2}{2}}}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}{\gamma_0}})^{\phi^2/2}}}}}\nonumber \\&
G_{2,2}^{2,1}\left(\frac{1}{{2\sigma_l^2 S_0^{2} \gamma_0 s}}\left\vert \begin{matrix} {-\frac{\phi^2}{2} + 1,1} \\ {-\frac{\phi^{2}}{2}+j_l+1 , 0} \end{matrix} \right)\right.
\end{eqnarray}
Applying the definition of Meijer's G function \cite{Meijers} and interchanging the sum and product, we get
\small
\begin{eqnarray}
\small
&M_{\gamma_{}}(s)= \sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \prod_{l=1}^{L}\big( \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}{s^{-\frac{\phi^2}{2}}}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}}{\gamma_0})^{\phi^2/2}}}}}\big)\nonumber \\&
\frac{1}{2\pi i}\int_{L_l}\frac{\Gamma\big(-\frac{\phi^{2}}{2}+j_l+1 - \zeta_l\big)\Gamma\left(-\zeta_l\right)\Gamma\big(\frac{\phi^{2}}{2}+\zeta_l\big)}{\Gamma\big(1-\zeta_l\big)}
\big(\frac{1}{{2\sigma_l^2 S_0^{2} \gamma_0 s}}\big)^{\zeta_l} d\zeta_l
\end{eqnarray}
\normalsize
Thus, using $f_{\gamma_{}}(z) = \frac{1}{2\pi i}\int_{L} e^{sz} M_{\gamma_{}}(s) ds$, interchanging the order of integration, and rearranging the terms, we get
\begin{eqnarray}
\label{eq:pdf}
&f_{\gamma_{\rm }}(z) = \sum \limits _{{[j_i=0]}_{i=1}^L}^\infty \prod_{l=1}^{L}\big( \frac{{{\phi^{2}}{{m_l}^{m_l}}}}{{2S_0^{\phi^{2}}} {\Gamma (m_l)}} {{\frac{{{{K_l}^{j_l}}{{d_l}_{j_l}}}}{{[\Gamma^{}(j_l+1)]^2}{(2{\sigma_l^{2}\gamma_0})^{\phi^2/2}}}}}\big)\nonumber \\&
\prod_{l=1}^{L}
\frac{1}{2\pi i}\int_{L_l}\frac{\Gamma\big(-\frac{\phi^{2}}{2}+j_l+1 - \zeta_l\big)\Gamma(-\zeta_l)\Gamma\big(\frac{\phi^{2}}{2}+\zeta_l\big)}{\Gamma(1-\zeta_l)}
\nonumber \\&\big(\frac{1}{{2\sigma_l^2 S_0^{2} \gamma_0}}\big)^{\zeta_l} \frac{1}{2\pi i}\int_{L} e^{sz} s^{-\sum_{l=1}^L \big(\frac{\phi^2}{2} + \zeta_l\big)}ds d\zeta_l
\end{eqnarray}
We substitute $sz = -b $, and apply the identity \cite{Gradshteyn} (eq. 8.315.1) to solve the inner integral in \eqref{eq:pdf}:
\begin{equation}
\label{eq:I}
I = \frac{1}{2\pi i}\int_{L} e^{sz} s^{-\sum_{l=1}^L \left(\frac{\phi^2}{2} + \zeta_l\right)}ds=\frac{{z}^{-1+\sum_{l=1}^L \left(\frac{\phi^2}{2} + \zeta_l\right)}}{\Gamma \left(\sum_{l=1}^L\left(\frac{\phi^2}{2} + \zeta_l\right)\right)}
\end{equation}
We substitute \eqref{eq:I} in \eqref{eq:pdf} and apply the definition of the multivariate Fox’s H function \cite{Kilbas_2004} to get \eqref{eq:pdf_mrc}. Finally, we use $F_{\gamma_{}}(z) = \mathcal{L}^{-1} \big\{\frac{1}{s}\prod_{i=1}^{L} M_{\gamma_i}(s)\big\}$ and apply steps similar to those used in the derivation of the PDF to get the CDF in \eqref{eq:cdf_mrc}, which concludes the proof of Theorem 1.
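The identity [\cite{Gradshteyn}, eq. 8.315.1] used in \eqref{eq:I} is equivalent to the statement that $z^{a-1}/\Gamma(a)$ has Laplace transform $s^{-a}$, which is easy to check numerically (the Python sketch below uses arbitrary test values):

```python
import math

def laplace_of_power(a, s, upper=200.0, steps=400_000):
    """Numerically evaluate int_0^inf e^{-s z} z^{a-1} / Gamma(a) dz (midpoint rule)."""
    ga = math.gamma(a)
    dz = upper / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * dz
        total += math.exp(-s * z) * z ** (a - 1) * dz
    return total / ga

a, s = 2.6, 1.3                     # arbitrary test values
numeric = laplace_of_power(a, s)
closed = s ** (-a)                  # so that L^{-1}{s^{-a}} = z^{a-1}/Gamma(a)
```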
\bibliographystyle{ieeetran}
Driven by many use cases, such as the Internet of Things, massive machine-type communication (mMTC) has been regarded as a necessary service in future wireless networks \cite{wuyp}.
It aims to achieve communication between a massive number of user equipments (UEs) and the base station (BS), where only a fraction of the UEs are active in any given time interval and transmit data payloads of small size.
In order to reduce latency, the grant-free random access scheme is usually adopted, where the UE activity is unknown to the receiver in advance.
Several aspects of this problem have been studied in the past.
When the number of UEs $K$ is finite and blocklength $n$ is infinite, the fundamental limits can be obtained based on classical multiuser information theory \cite{elements_IT}.
Motivated by emerging systems with massive UEs, a new paradigm was proposed in \cite{GuoDN}, where $K$ was allowed to grow unboundedly with $n$ and the payload was infinite.
The linear scaling was adopted in \cite{GuoDN, improved_bound, finite_payloads_fading, A_perspective_on, RAC_fading}.
Since the data payloads of UEs are usually of small size, \cite{improved_bound} and \cite{finite_payloads_fading} derived bounds on the minimum energy-per-bit for reliable transmission with finite payload and infinite blocklength in AWGN and fading channels, respectively.
\cite{A_perspective_on} and \cite{RAC_fading} studied random access limits in AWGN and fading channels, respectively, where a common codebook was adopted with finite payload, blocklength, and number of active UEs. When a common codebook is utilized, there is no obvious difference in the theoretical derivation between random access and massive access with knowledge of the UE activity.
In this work, we adopt an individual codebook\footnote{It should be noted that the individual codebook and common codebook assumptions correspond to different massive access models in practice, which jointly constitute a complete picture of mMTC research [1].} for each UE. The number of UEs $K$ grows linearly and unboundedly with the blocklength. Any $K_a$ of them have a fixed number of data bits to send over quasi-static fading channels. We assume synchronous transmission.
This system model has not been studied before but is significant in mMTC.
The infinite payload assumption in classical multiuser information theory results in an infinite energy-per-bit when $K$ and $n$ go to infinity with a fixed rate, which is not suitable for practical mMTC systems \cite{wuyp}. Thus, we consider finite payloads and finite-energy codewords in this paper for energy efficiency \cite{wuyp}.
We obtain the bounds on the minimum energy-per-bit for reliable random access with no channel state information (CSI) and CSI at the receiver (CSIR).
These bounds provide energy-efficiency guidance for new massive random access coding and communication schemes.
Since we do not know UE activity in advance, the derivation is more difficult compared with \cite{finite_payloads_fading}.
\emph{Notation:}
We adopt uppercase and lowercase boldface letters to denote matrices and column vectors, respectively.
$\mathbf{I}_{n}$ denotes the $n\times n$ identity matrix.
Let $\left(\cdot \right)^{H}\!$, $\oplus$, $\left\|\cdot \right\|_{p}$, $\cdot\backslash\cdot$, $\left| \mathcal{A} \right|$, and $\operatorname{span}(\cdot)$ denote the conjugate transpose, direct sum, ${\ell}_p$-norm, set subtraction, cardinality of the set $\mathcal{A}$, and the span of a set of vectors, respectively.
Let $\mathcal{CN}(\cdot,\cdot)$, $\beta(\cdot,\cdot)$, $\chi_2(\cdot)$, and $\chi'_2(\cdot,\cdot)$ denote circularly symmetric complex Gaussian distribution, beta distribution, chi-squared distribution, and non-central chi-squared distribution, respectively.
For $0\!\leq\! p\!\leq\!1$, let $h(p) \!=\! -p\ln(p)-(1\!-\!p)\ln(1\!-\!p)$ and $h_2(p) \!=\! h(p)/\ln 2$ with $0\ln 0$ defined to be $0$.
$\mathbb{N}_{+}$ denotes the set of positive integers.
For $n\!\in\! \mathbb{N}_{+}$, let $[n] \!=\! \left\{1,2,\cdots\!,n\right\}$.
Denote the projection matrix onto the subspace spanned by $S$ and the projection onto its orthogonal complement as $\mathcal{P}_{S}$ and $\mathcal{P}_{S}^{\bot}$, respectively.
Let $f(x) \!=\! \mathcal{O}\!\left( g(x)\right)$, $x \!\to\! \infty$ mean $\limsup_{x\to\infty}\!\left|f(x)/g(x) \right|\!<\!\infty$ and $f(x) \!=\! o\!\left( g(x)\right)$, $x \!\to\! \infty$ mean $\lim_{x\to\infty}\!\left|f(x)/g(x) \right|\!=\!0$.
\section{System Model} \label{section2}
We assume the number of UEs $K$ grows linearly and unboundedly with the blocklength $n$, i.e., $K\!=\!\mu n$, $\mu \!<\! 1$, and $n\!\to\! \infty$ \cite{improved_bound}.
We denote the number of active UEs as $K_a \!=\! p_a K $, where $p_a$ is the active probability of each UE. The UE set and active UE set are denoted as $\mathcal{K}$ and $\mathcal{K}_a$, respectively.
We assume each UE has an individual codebook with $M=2^k$ vectors.
The codebook of the $j$-th UE is denoted as $\mathcal{C}_{j}=\left\{\mathbf{c}_{1}^{j}, \mathbf{c}_{2}^{j},\ldots, \mathbf{c}_{M}^{j}\right\}$, where $\mathbf{c}_{m}^{j}\in \mathbb{C}^{n}$ for $m\in[M]$.
We consider quasi-static fading channels. The receiver observes $\mathbf{y}$ given by
\begin{equation}\label{receive_y1}
\mathbf{y} = \sum_{j\in{\mathcal{K}_a}}{h}_j\mathbf{c}^{j}+\mathbf{z} =\mathbf{AH}\boldsymbol{\beta}+\mathbf{z}\in \mathbb{C}^{n},
\end{equation}
where ${h}_j \!\stackrel{i.i.d.}{\sim} \!\mathcal{CN}(0,1)$ denotes the fading coefficient between the BS and UE $j$,
$\mathbf{H}$ is a $KM\!\times\! KM$ block diagonal matrix whose $i$-th block is an $M\times M$ diagonal matrix with diagonal entries equal to ${h}_i$, and $\mathbf{z}$ has i.i.d. $\mathcal{CN}(0,1)$ entries.
We require the power constraint $\left\|\mathbf{c}_m^{j}\right\|_{2}^{2} \!\leq \!n P$ for $m\in [M]$.
We denote the signal of UE $j$ as $\mathbf{c}^{j} \!=\! \mathbf{c}_{W_j}^{j}$, where $W_{j} \!\in\! [M]$ is chosen uniformly at random.
The $((j\!-\!1)M\!+\!1)$-th to the $(jM)$-th columns of $\mathbf{A}$ are the codewords of UE $j$.
The block-sparse vector $\boldsymbol{\beta}$ satisfies $\boldsymbol{\beta} \!\in\! \left\{\!\boldsymbol{\beta} \!\in\! \{0,1\}^{\!KM}\!\!: \!\left\| \boldsymbol{\beta}\right\|_{0} \!=\! K_a, \sum_{i=(j-1)M+1}^{jM} \!\boldsymbol{\beta}_{i} \!\in\! \{0,1\}, \!\forall j\!\in\! \mathcal{K}\!\right\}$.
The decoder aims to output the estimate $\hat W_j$ of each transmitted message.
We use the per-user probability of error (PUPE) $\epsilon$ as the performance metric \cite{A_perspective_on}
\begin{equation}\label{PUPE}
P_{e}\!=\!\mathbb{E}\!\left[\! \frac{1}{K_a} \!\sum_{j\in {\mathcal{K}_a}} \!1 \!\left[W_{j} \!\neq \!\hat{W}_{j}\right] \!\right]
\leq \epsilon.
\end{equation}
The system achieves the spectral efficiency $S\!=\!p_a\mu k$ and energy-per-bit $\varepsilon \!= \!\frac{nP}{k} \!=\! \frac{P_{tot,a}}{S}$, where ${P_{tot,a}} \!=\! K_aP$ denotes the total power of active UEs.
For finite $\varepsilon$, we consider finite $P_{tot,a}$, i.e., $P$ decaying as $\mathcal{O}(1/n)$.
In this case, we have:
\begin{defi}\label{defi1}
An $(n,M,\epsilon,\varepsilon,K,K_a)$ random access code for the channel $ P_{\mathbf{y} | \mathbf{c}^{1}, \ldots, \mathbf{c}^{K_a}}\!:\! \prod_{j\in\mathcal{K}_a} \!\mathcal{C}_j \!\to\! \mathcal{Y}$ is a pair of (possibly randomized) maps including encoders $\left\{ f_j\!: \!\left[M \right] \!\to\!\mathcal{C}_j \right\}_{j=1}^{K}$ and the decoder $g\!: \! \mathcal{Y} \!\to\!\binom {[M]}{K_{a}}$ such that the power constraint is satisfied and $P_e\leq\epsilon$. Then, we have the following fundamental limit
\begin{equation}\label{energy_per_bit_limit}
\varepsilon^{*}\!(M, \mu, p_a, \epsilon)\!=\!\!\lim_{n \rightarrow \infty} \!\inf \{\varepsilon\!:\! \exists(n, M, \epsilon, \varepsilon, K, K_a)\!-\!\text{ \!\!code }\!\!\},
\end{equation}
where the infimum is taken over all possible encoders and decoders, and the limit is understood as $\liminf$ or $\limsup$ depending on whether an upper or a lower bound is given.
\end{defi}
\section{ Achievability Bound } \label{section3}
\subsection{CSIR} \label{section3_sub1}
Assuming the decoder knows the realization of the channel fading coefficients, we can decode with the Euclidean metric and obtain
\begin{prop}\label{prop_achi_CSIR}
Fix spectral efficiency $S$ and target PUPE $\epsilon$.
Given $\nu\!\in\!(1\;\!\!-\;\!\!\epsilon,\!1]$ and $\epsilon' \!\!=\! \epsilon - 1+\nu$ with CSIR,
if $\varepsilon \!>\! \varepsilon_{CSIR}^{*} \!=\! \sup_{\theta\in(\epsilon'\!,\nu]} \! \sup_{\psi\in[0,\nu-\theta]} \! \frac{P_{tot,a}^{'}\!(\theta,\psi)}{S}$, there exists a sequence of $(n, M\!, \epsilon_n, \varepsilon, K\!, K_a)$ codes such that
$\limsup_{n\to\infty}\epsilon_n\!\leq\!\epsilon$, where
\begin{equation}\label{P_tot_achi_CSIR}
P_{tot,a}^{'}\!\!\left(\theta,\!\psi \right) \!=\!\! \frac{4\left({ \exp\!\left\{\!\gamma_{\theta}\!\right\} \!-\! 1} \right)}
{\xi\!\left( \psi , \!\psi\!+\!\theta \right) \!-\! 4\!\left({ \exp\!\left\{\!\gamma_{\theta}\!\right\} \!-\! 1} \right) \!\xi\!\left( \psi\!+\!\theta , \!\psi\!+\!\theta\!+\!\!1\!\!-\!\nu \right) }\!,
\end{equation}
\begin{equation}\label{xi_achi_CSIR}
{\xi\!\left( \varsigma,\zeta \right)} = \varsigma \ln(\varsigma) - \zeta \ln(\zeta) + \zeta - \varsigma,
\end{equation}
\begin{equation}
\gamma_{\theta} \;\!\!\!=\;\!\!\!p_a \;\!\!\mu h\;\!\!(\;\!\!1\!-\nu+\theta) \!+\;\!\!\mu\!\left(\;\!\!1\!\!-\!\!\nu p_a\!\!+\;\!\!\!\theta p_a \;\!\!\right) \!h\!\;\!\!\left(\;\!\!\! \frac{\theta{p_a}}{{1}\!\!+\!\!\theta{p_a}\!\!\;\!\!-\!\!\nu{p_a}}\;\!\!\!\right)\!+\theta p_a\;\!\!\mu\!\ln\! M \!.
\end{equation}
Hence $\varepsilon^{*} (M, \mu, p_a, \epsilon) \leq \varepsilon_{CSIR}^{*}$.
\end{prop}
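As a numerical illustration (not part of the proof), the bound in Proposition \ref{prop_achi_CSIR} can be evaluated by a direct grid search over $(\theta,\psi)$. The following Python sketch does so under assumed toy parameters ($k=\log_2 M=100$ bits, $\mu=0.01$, $p_a=0.1$, $\epsilon=0.1$, $\nu=1$); a non-positive denominator in \eqref{P_tot_achi_CSIR} is treated as an unsatisfiable (infinite) power requirement.

```python
import math

def h(p):
    """Binary entropy in nats, with 0 ln 0 := 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def xi(a, b):
    """xi(a, b) = a ln a - b ln b + b - a, with 0 ln 0 := 0."""
    f = lambda x: x * math.log(x) if x > 0 else 0.0
    return f(a) - f(b) + b - a

def eps_csir_star(k, mu, p_a, eps, nu, grid=200):
    """Grid-search the suprema over (theta, psi) in Proposition 1."""
    lnM = k * math.log(2)
    S = p_a * mu * k                     # spectral efficiency (bits per channel use)
    eps1 = eps - 1 + nu                  # epsilon'
    best = 0.0
    for i in range(1, grid + 1):
        theta = eps1 + (nu - eps1) * i / grid          # theta in (eps', nu]
        gamma = (p_a * mu * h(1 - nu + theta)
                 + mu * (1 - nu * p_a + theta * p_a)
                   * h(theta * p_a / (1 + theta * p_a - nu * p_a))
                 + theta * p_a * mu * lnM)
        g = 4.0 * (math.exp(gamma) - 1.0)
        for j in range(grid + 1):
            psi = (nu - theta) * j / grid              # psi in [0, nu - theta]
            den = xi(psi, psi + theta) - g * xi(psi + theta, psi + theta + 1 - nu)
            p_tot = g / den if den > 0 else float("inf")   # infeasible if den <= 0
            best = max(best, p_tot / S)
    return best

print(eps_csir_star(k=100, mu=0.01, p_a=0.1, eps=0.1, nu=1.0))
```

With $\nu=1$ the second $\xi$-term in the denominator vanishes, so the inner sweep over $\psi$ becomes degenerate; a finer grid tightens the approximation of the suprema.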
\begin{proof}\label{proof_achi_CSIR}
We use a random coding scheme by generating a Gaussian codebook for each UE with $\mathbf{c}_{m}^{j} \!\stackrel{\mathrm{i.i.d.}}{\sim} \! \mathcal{CN}\!\left(0,\!P'\mathbf{I}_{n}\right)$ and $P' \!<\! P$.
The $j$-th UE sends $\mathbf{c}^{j}\!=\!\mathbf{c}_{W_{j}}^{j} \!1\!\left\{\!\left\|\mathbf{c}_{W_{j}}^{j}\right\|_{2}^{2} \!\leq\! n P\!\right\}$ if it is active.
The decoder estimates the messages of $K_{a_1} \!= \!\nu K_a$ active UEs. Let $?$ denote an error symbol.
The decoder outputs
\begin{equation}
\left[ \hat{\mathcal{K}}_{a_1}, \!\left( \hat{\mathbf{c}}^j \right)_{\!j\in \hat{\mathcal{K}}_{a_1} }\!\right] \!=\! \arg\!\!\!\!\!\!\min_{\stackrel{ \hat{\mathcal{K}}_{a_1} \subset \mathcal{K}} {\!\left| \hat{\mathcal{K}}_{a_1}\!\right| = K_{a_1}} }
\!\!\!\!\min_{\left( \hat{\mathbf{c}}^j \in \mathcal{C}_j \right)_{j\in\hat{\mathcal{K}}_{a_1}}}
\!\!\left\|\mathbf{y}- \!\!\!\!\sum_{j\in \hat{\mathcal{K}}_{a_1}}\!\!\!\!h_j \hat{\mathbf{c}}^{j} \right\|_{2}^{2}\!\!,\!
\end{equation}
\begin{equation}
\hat{W}_j = \left\{
\begin{array}{ll}
f_{j}^{-1}\left(\hat{\mathbf{c}}^{j}\right) & j \in \hat{\mathcal{K}}_{a_1} \\
? & j \notin \hat{\mathcal{K}}_{a_1}
\end{array}\right. .
\end{equation}
We change the measure over which $\mathbb{E}$ in \eqref{PUPE} is taken to the one with $\mathbf{c}^{j}\!=\!\mathbf{c}_{W_{j}}^{j}$ at the cost of adding $ p_0 \!=\! K_a \mathbb{P}\!\left[\!\frac{\chi_{2}(2n)}{2 n}\!>\!\frac{P}{{P}^{\prime}}\!\right]$ \cite{A_perspective_on}.
We have $ p_0 \!\to\! 0 \text { as } n \!\to\! \infty$.
The averaged PUPE becomes
\begin{equation}\label{PUPE_p0}
P_{e} \leq p_0 + \mathbb{E}\!\left[\!\frac{1}{K_a} \!\sum_{j\in {\mathcal{K}_a}} \!1\! \left[ W_{j} \!\neq\! \hat{W}_{j} \right] \right]_{\text{\!new measure}} = p_0 + p_1.
\end{equation}
Next, we adopt the new measure and omit the subscript.
Let $F_t\! =\!\! \left\{ \sum_{j\in {\mathcal{K}_a}} \!1\! \left[ W_{j}\! \neq \!\hat{W}_{j} \right] \!=\!K_{a,t} \!\right\}$, $K_{a,t} \!=\! {K_a \!-\!K_{a_1}\!+\!t}$,
and $\mathcal{T} \!=\! (\epsilon' K_a, \nu K_a]\cap \mathbb{N}_{+}$.
We can bound $p_1$ as
\begin{equation}\label{PUPE_p1}
p_{1} \leq \epsilon + \mathbb{P} \!\left[ \bigcup_{t \in \mathcal{T}} \!F_t \right]
\leq \epsilon + \min \!\left\{\!1, \sum_{t\in \mathcal{T}} \mathbb{P} \!\left[ F_t\right]\!\right\}
= \epsilon + p_2.
\end{equation}
For simplicity, we abbreviate ``$\bigcup_{{S_{1} \subset \mathcal{K}_a,} {\left| S_{1} \right| = K_{a,t}}} \!$'' as ``$\bigcup_{S_{1}}$'' and ``$\bigcup_{{S_{2} \subset \mathcal{K} \backslash \mathcal{K}_a\cup S_{1},} {\left| S_{2} \right| = t}}$'' as ``$\bigcup_{S_{2}}$''; similarly for $\sum$.
We have
\begin{align}\label{PUPE_p1_p2_ft}
\mathbb{P} \!\left[ F_t | \mathbf{H}, \!\mathbf{c}_{[\mathcal{K}_a]}, \mathbf{z} \right]
\! & \leq\! \mathbb{P}\! \!\left[
\!\bigcup_{S_{1}} \bigcup_{S_{2}}
\!\!\!\!\!\bigcup_{\stackrel{ \mathbf{c}^{i'} \!\in \mathcal{C}_i: }{ i \in S_{2}, \mathbf{c}^{i'} \!\!\neq \mathbf{c}^i }}
\!\!\!\!\!\left\{ \!\left\| \mathbf{z} \!+ \!\! \sum_{i\in S_{1}}\!\!h_i \mathbf{c}^i \!\!- \!\!\sum_{i\in S_{2}}\!\!h_i \mathbf{c}^{i'} \!\right\|_2^2 \right.\right.
\notag \\
&\left.\left.\left. \;\;\;\;\;\;\;\leq\!
\min_{\stackrel{S_{3} \subset S_{1}} {\left| S_{3} \right| = t}} \left\| \mathbf{z} \!+\!\! \!\! \sum_{i\in S_{1} \!\backslash S_{3} }\!\!\! h_i \mathbf{c}^i \right\|_2^2 \!\right\} \right| \!\mathbf{H},\! \mathbf{c}_{[\mathcal{K}_a]}\! , \mathbf{z} \right]\notag \\
& \leq \!\sum_{S_{1}} \sum_{S_{2}}
\!M^t \mathbb{P} \!\left[ F\!\left( S_{1}, S_{2}, S_{3}^{*}\right) \!| \mathbf{H}, \!\mathbf{c}_{[\mathcal{K}_a]}, \mathbf{z} \right]\!,
\end{align}
where $F\!\left( \;\!\!S_{1}\;\!\!, \;\!\!\;\!\!S_{2}\;\!\!, \;\!\!\;\!\!S_{3}^{*}\right) \!\!=\!\! \left\{\!\left\| \mathbf{z}_1 \!\!+\!\! \sum_{i\in \;\!\!S_{3}^{*}}\!\!h_i \mathbf{c}^i \;\!\!\!\;\!\!-\!\!\;\!\! \sum_{i\in \;\!\!S_{2}}\!\!h_i \mathbf{c}^{i'} \!\right\|_2^2 \!\!\;\!\!\leq\;\!\!\!\;\!\!
\left\| \mathbf{z}_1 \!\right\|_2^2 \!\right\}$ with
$\mathbf{z}_1 \!=\! \mathbf{z} \!+\! \sum_{i\in S_{1} \! \backslash S_{3}^{*} } \!h_i \mathbf{c}^i$,
$\mathbf{c}_{[\mathcal{K}_a]} \!= \!\left\{\mathbf{c}^{i}\!\!: i \!\in\! \mathcal{K}_a \right\}$,
and $S_{3}^{*}\! \subset \!S_{1}$ is a possibly random subset of size $t$.
To further bound \eqref{PUPE_p1_p2_ft}, for $\mathbf{a}\sim\mathcal{CN}(0,\mathbf{I}_{n})$, $b\in\mathbb{C}$, $\mathbf{u}\in\mathbb{C}^{n}$, $\gamma>-\frac{1}{\left|b\right|^{2}}$, and $\phi={1+\gamma\left|b\right|^{2}}$, we utilize the identity \cite{improved_bound}
\begin{equation}\label{identity_exp}
\mathbb{E}\left[\exp\left\{-\gamma\|b\mathbf{a}+\mathbf{u}\|_{2}^{2}\right\}\right]
={\phi^{-n}}{\exp\left\{-\frac{\gamma} {\phi} {\left\|\mathbf{u}\right\|_{2}^{2}}\right\}}.
\end{equation}
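The identity \eqref{identity_exp} admits a quick Monte Carlo sanity check; the dimension $n$, the scalar $\gamma$, $b$, and the vector $\mathbf{u}$ below are arbitrary assumed values.

```python
import math, random

random.seed(0)

def c_normal():
    """One CN(0,1) sample: real and imaginary parts are N(0, 1/2)."""
    return complex(random.gauss(0, math.sqrt(0.5)), random.gauss(0, math.sqrt(0.5)))

n, gamma, b = 2, 0.3, 0.8 + 0.2j        # assumed toy values
u = [0.5 + 0.0j, -0.3j]
phi = 1 + gamma * abs(b) ** 2

# Closed form: phi^{-n} exp{-(gamma/phi) ||u||^2}
closed_form = phi ** (-n) * math.exp(-(gamma / phi) * sum(abs(x) ** 2 for x in u))

trials = 200_000
acc = 0.0
for _ in range(trials):
    a = [c_normal() for _ in range(n)]
    acc += math.exp(-gamma * sum(abs(b * ai + ui) ** 2 for ai, ui in zip(a, u)))
monte_carlo = acc / trials

print(closed_form, monte_carlo)         # the two agree up to Monte Carlo error
```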
The Chernoff bound is also utilized for any random variable $U$, i.e., $\mathbb{P}\left( U\geq d\right) \!\leq\! \min_{\lambda\geq 0} \exp\left\{-\lambda d \right\} \mathbb{E}\left[ \exp\left\{ \lambda U\right\} \right]$ \cite{elements_IT}.
Hence, let $\lambda_2 = 1+\lambda_1 P' \sum_{i\in S_{2}} \left|h_i \right|^{2}$, and we have
\begin{align}\label{PUPE_p1_p2_ft_A}
&\;\;\;\;\mathbb{P} \!\left[ F\!\left( S_{1}, S_{2}, S_{3}^{*}\right) \!| \mathbf{H}, \mathbf{c}_{[\mathcal{K}_a]}, \mathbf{z} \right] \notag\\
&\leq\! \min_{\lambda_1\geq0}
\left(\lambda_2\right)^{\!-n}
\!\exp\!\left\{\!\lambda_1 \!\left\|\mathbf{z}_1 \!\right\|_{2}^{2}
\!-\!\!\frac{\lambda_1}{\lambda_2} \!\left\| \mathbf{z}_1 \!+\! \sum\nolimits_{i\in S_{3}^{*}}\!\!h_i \mathbf{c}^i \right\|_{2}^{2} \!\right\}\!.\!
\end{align}
Taking expectations over $\mathbf{c}_{\left[S_{3}^{*}\right]}$ and $\mathbf{z}_1$ in turn, we have
\begin{equation}
\mathbb{P} \!\left[ F\!\left( S_{1},S_{2}, S_{3}^{*}\right) \!| \mathbf{H} \right]
\!= \!\!\left( \!1\!+\! \frac{P'\!\left( \sum_{i\in S_{2}}\!\! \left|h_i \right|^{2} \!\!+\!\! \sum_{i\in S_{3}^{*}} \!\! \left|h_i \right|^{2}\!\right)}{4 \!\left( 1\!+\!P' \! \sum_{i\in S_{1}\backslash S_{3}^{*}}\! \left|h_i \right|^{2} \right) }\! \right)^{\!\!\!\!-n}\!.
\end{equation}
We sort $\left\{h_i\!:\!i\!\in\!\mathcal{K}_a\right\}$ in decreasing order of fading power as $\left|{h}_{1}^{\downarrow}\right|\!\! \geq\!\! \left|{h}_{2}^{\downarrow}\right|\!\! \geq \!\!\ldots \!\!\geq \!\! \left|{h}_{K_a}^{\downarrow}\!\right|$.
Let $ \Psi_{n} \!\!=\! \left[0, \nu\!-\!\theta\right]\!\cap\! \left\{\! \frac{i}{K_a}\!\!: \!i\!\in\! [{K}_a] \!\right\}$.
Choosing $S_{3}^{*} \!\subset\! S_{1}$ to contain the indices with the $t$ largest fading powers, we obtain
\begin{align}\label{PUPE_p1_p2_ft2}
&\mathbb{P} \!\left[ F_t | \mathbf{H}\right] \!\leq \!{\binom {K_a} {K_{a,t}}} \!{\binom {K \!-\!K_{a_1}\!+\!t} {t}}
M^t \notag \\
& \;\;\;\;\;\;\;\;\;\;\cdot\! \!
\left( \!\! \min_{\psi \in \Psi_n}
\!\!\left\{\!\!1\!+\! \frac{ P' \! \sum_{i=\psi K_a+1}^{\psi K_a+t} \! \left|h_i^{\downarrow} \right|^{2}}{\!4 \!\left(\!\! 1\!+\!\!P' \! \sum_{i=\psi K_a\!+t+1}^{\psi K_a\!+t+\!K_a\!-\!K_{a_1}}\!\! \left|h_i^{\downarrow} \!\right|^{2} \right) }\!\! \right\}\!\right)^{\!\!\!\!-n} \!\!.\!
\end{align}
Let $\Theta_n \!\!=\!\! (\epsilon'\!, \nu]\!\cap\! \left\{\! \frac{i}{K_a}\!\!: \!i\!\in\! [{K}_a] \!\right\}$ and $t \!=\! \theta K_a$.
When $\theta \!=\! \nu$, ${\binom {K_a} {K_{a\!,t}}} \!=\! 1$.
For $\theta \!\in\! \Theta_n\!\backslash \{\nu\}$, we have \cite{Gallager}
\begin{equation}\label{C1}
{\binom {K_a} {K_{a,t}}} \!\!\leq\!\! \sqrt{\!\frac{1}{\!2\pi \!K_a(1\!-\!\nu\!+\!\theta) (\nu\!-\!\theta)}} \!\exp\!\left\{\!K_a h(1\!-\!\nu\!+\!\theta)\!\right\}\!.
\end{equation}
Let $K_t = K -K_{a_1}+t$. Similarly, for $\theta \in \Theta_n $, we have
\begin{equation}\label{C2}
{\binom {K_t} {t}}
\!\!\leq\! \sqrt{\!\frac{{1} +\theta{p_a} - \nu{p_a}}{2\pi\theta K_{a}\!\left( {1}\!-\! \nu{p_a} \right)}}
\!\exp\!\left\{\!K_t h\!\left(\!\frac{\theta{p_a}}{{1}\! +\!\theta{p_a}\!\!-\! \nu{p_a}}\!\right)\!\right\}\!.
\end{equation}
For $\tau\!>\!0$ and $0\!<\!\varsigma\!<\!\zeta \!<\!1$, with probability $1\!-\exp\!\left\{\!-\mathcal{O}(n^{\tau}) \right\}$, we have
$\frac{1}{K_a}\sum_{j=\lceil \varsigma K_a\rceil}^{\lceil \zeta K_a\rceil}\! \left|{{h}^{\downarrow}_{j}} \right|^{2}
= \xi \!\left( \varsigma, \zeta \right) + o(1)$ \cite{finite_payloads_fading}.
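Heuristically, the $\left|{h}_{j}^{\downarrow}\right|^{2}$ are order statistics of i.i.d. Exp(1) variables whose upper $q$-quantile is $-\ln q$, so the normalized sum is a Riemann sum of $\int_{\varsigma}^{\zeta}(-\ln q)\,\mathrm{d}q=\xi(\varsigma,\zeta)$. A Monte Carlo sanity check (with assumed $K_a$, $\varsigma$, $\zeta$):

```python
import math, random

random.seed(1)

def xi(a, b):
    """xi(a, b) = a ln a - b ln b + b - a, with 0 ln 0 := 0."""
    f = lambda x: x * math.log(x) if x > 0 else 0.0
    return f(a) - f(b) + b - a

K_a = 200_000                      # assumed number of active UEs for the experiment
# |h_j|^2 of CN(0,1) fading coefficients are i.i.d. Exp(1); sort in decreasing order.
power = sorted((random.expovariate(1.0) for _ in range(K_a)), reverse=True)

vs, zt = 0.2, 0.5                  # varsigma, zeta (assumed)
lo, hi = math.ceil(vs * K_a), math.ceil(zt * K_a)
empirical = sum(power[lo - 1:hi]) / K_a   # (1/K_a) sum of the j-th largest powers

print(empirical, xi(vs, zt))       # close for large K_a
```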
We define the event $L_{n}$ with $\mathbb{P}\left[ L_{n}^{c}\right]$ exponentially small in $n$
\begin{align}\label{event_Ln}
L_{n}\!\! =\! &\!\bigcap_{\psi \in \Psi_{\!n}}\!\! \left\{\! \!\left\{ \!\!\frac{1}{K_a}\!\!\! \sum_{j= (\psi+\theta) K_a+1}^{ (\!\psi+\theta+\!1\!-\!\nu)\! K_a}\!\! \!\!\!\left|{{h}^{\downarrow}_{j}} \!\right|^{2}
\!\!\!\!= \! \xi \!\left( \psi\!+\!\theta , \psi\!+\!\theta\!+\!\!1\!\!-\!\nu \right) \!+\! o(1)\! \!\right\}\!\right. \notag\\
& \;\;\;\;\;\;\;\;\;\;\bigcap \!\left.\left\{ \!\! \frac{1}{K_a}\!\! \sum_{j= \psi K_a\!+1}^{ (\psi+\theta) K_a}\!\! \left|{{h}^{\downarrow}_{j}} \right|^{2}
\!\!=\! \xi \!\left( \psi , \psi\!+\!\theta \right) \!+\! o(1)\!\!\right\}\! \!\right\}\!.\!\!
\end{align}
Let $\kappa = 1-\nu p_a+\theta p_a$. We can bound $p_2$ as
\begin{align}\label{PUPE_p1_p2lim}
p_2\!&\leq \mathbb{E}\!\left[\min \!\left\{1,\sum_{t\in\mathcal{T}}\mathbb{P} \!\left[ F_t | \mathbf{H}\right] \right\} 1\left[ L_n\right] \right] + \mathbb{P}\left[ L_n^{c}\right]\notag \\
& \leq \min \!\left\{\;\!\! 1, \!\sum_{\theta \;\!\!\in\;\!\! \Theta_{\;\!\!n}\!}
\!\exp\!\left\{\!o(n) \!-\!n \!
\left(\!-\mu\kappa h\!\left(\!\frac{\theta{p_a}}{\kappa}\!\right)
\!-\! \theta p_a \mu \ln \!M
\right. \right. \right.\notag \\
& \;\;\;\;\;\;\;\;\;\;\;\;\;+ \!\min_{\psi \in\;\!\! \Psi_{\;\!\!n}}\!\ln \!\left(
\!1\!+\! \frac{ P'\!K_a \xi \!\left( \psi , \psi\!+\!\theta \right) }{4 \left( 1\!+\!P'\!K_a \xi \left( \psi\!+\!\theta , \;\!\!\psi\!+\!1\!-\!\nu\!+\!\theta \right) \right) }\right) \notag \\
& \;\;\;\;\;\;\;\;\;\;\;\;\left.\left.\left. - p_a \mu h\!\left( 1-\nu+\theta \right) \right)\right\} \right\} + o(1).
\end{align}
Define $\Theta = (\epsilon', \nu]$ and $\Psi = [0, \nu-\theta]$. Choosing $K_aP' > \sup_{\theta\in \Theta} \! \sup_{\psi\in \Psi} \! P_{tot,a}^{'}\!\left(\theta,\!\psi\right)$ will ensure $\limsup_{n\to\infty}\! p_2 \!=\! 0$.
\end{proof}
\subsection{No-CSI} \label{section3_sub2}
In this section, we assume neither the transmitters nor the decoder knows the realization of fading coefficients, but they both know the fading distribution. In this case, we can obtain:
\begin{prop}\label{prop_achi_noCSI}
Fix spectral efficiency $S$ and target PUPE $\epsilon$.
With no-CSI, if $\varepsilon > \varepsilon_{\text{no-CSI}}^{*} = \sup_{\theta\in(\epsilon,1]} \frac{P_{tot,a}^{'}(\theta)}{S}$,
there exists a sequence of $(n,\! M\!, \epsilon_n, \varepsilon, K, \!K_a)$ codes such that $\limsup_{n\to\infty}\!\epsilon_n\!\leq\!\epsilon$, where
\begin{equation}\label{P_tot_achi_noCSI}
P_{tot,a}^{'}\left(\theta \right) = \frac{W_{\theta}}{\left(1-\delta_{3}^{*}\right)\xi\left( 1-\theta,1\right)},
\end{equation}
\begin{equation}\label{W_achi_noCSI}
W_{\theta} = \frac{1\!-\!V_{\theta}}{V_{\theta}}(1+\delta_{2,\theta}^{*}) ,
\end{equation}
\begin{align}\label{V_achi_noCSI}
{V}_{\theta} \!&=\! \exp \!\left\{ \!-\! \left( \!\delta_{1,\theta}^*
\!+\! \frac{1-p_a \mu +\theta p_a \mu}{1- p_a \mu}h\left( \frac{\theta p_a \mu}{1-p_a \mu +\theta p_a \mu}\right) \right.\right. \notag\\
& \;\;\left.\left. \!+ \frac{ \theta p_a \mu \ln M}{1-p_a\mu}
\!+\! \frac{\mu(1\!-\!p_a \!+\!\theta p_a )}{1- p_a \mu} h\!\left( \frac{\theta p_a}{\!1 \!-\! p_a \!+\!\theta p_a\!} \right) \!\right)\!\right\}\! ,
\end{align}
\begin{equation}\label{delta_achi_noCSI}
\delta_{1,\theta}^* = \frac{p_a \mu}{1-p_a \mu} h(\theta),
\end{equation}
\begin{equation}\label{c_achi_noCSI}
c_\theta = \frac{2 V_{\theta}}{1-V_{\theta}} ,
\end{equation}
\begin{equation}\label{q_achi_noCSI}
q_\theta = \frac{ p_a \mu }{1-p_a \mu+ \theta p_a \mu} h(\theta),
\end{equation}
\begin{equation}\label{delta1_achi_noCSI}
\delta_{2,\theta}^* = q_\theta\left( 1+c_\theta\right) + \sqrt{q^2_\theta c_\theta \left( 2+c_\theta\right) + 2q_\theta\left( 1+c_\theta\right)},
\end{equation}
\begin{equation}\label{delta2_achi_noCSI}
\delta_{3}^* \!=\! \inf \!\left\{ x\!: 0<x<1, -\ln(1-x)-x > 0\right\},
\end{equation}
where $\xi(\cdot,\cdot)$ is given in \eqref{xi_achi_CSIR}.
Hence, $\varepsilon^{*} (M, \mu, p_a, \epsilon) \!\leq\! \varepsilon_{\text{no-CSI}}^{*}$.
\end{prop}
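As with the CSIR case, the no-CSI bound can be evaluated numerically. The sketch below takes $\delta_{2,\theta}$ at the threshold $\delta_{2,\theta}^{*}$ of \eqref{delta1_achi_noCSI} and uses $\delta_{3}^{*}=0$ (the condition $-\ln(1-x)-x>0$ in \eqref{delta2_achi_noCSI} holds for every $x\in(0,1)$); all parameter values are illustrative assumptions.

```python
import math

def h(p):
    """Binary entropy in nats, with 0 ln 0 := 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def xi(a, b):
    """xi(a, b) = a ln a - b ln b + b - a, with 0 ln 0 := 0."""
    f = lambda x: x * math.log(x) if x > 0 else 0.0
    return f(a) - f(b) + b - a

def eps_nocsi_star(k, mu, p_a, eps, grid=200):
    """Evaluate the no-CSI energy-per-bit bound on a theta grid (delta_3^* = 0)."""
    lnM = k * math.log(2)
    S = p_a * mu * k                 # spectral efficiency (bits per channel use)
    pm = p_a * mu
    best = 0.0
    for i in range(1, grid + 1):
        theta = eps + (1 - eps) * i / grid          # theta in (eps, 1]
        d1 = pm / (1 - pm) * h(theta)               # delta_{1,theta}^*
        V = math.exp(-(d1
                       + (1 - pm + theta * pm) / (1 - pm)
                         * h(theta * pm / (1 - pm + theta * pm))
                       + theta * pm * lnM / (1 - pm)
                       + mu * (1 - p_a + theta * p_a) / (1 - pm)
                         * h(theta * p_a / (1 - p_a + theta * p_a))))
        c = 2 * V / (1 - V)                         # c_theta
        q = pm / (1 - pm + theta * pm) * h(theta)   # q_theta
        d2 = q * (1 + c) + math.sqrt(q * q * c * (2 + c) + 2 * q * (1 + c))
        W = (1 - V) / V * (1 + d2)                  # W_theta
        best = max(best, W / xi(1 - theta, 1) / S)
    return best

print(eps_nocsi_star(k=100, mu=0.01, p_a=0.1, eps=0.1))
```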
\begin{proof}\label{proof_achi_noCSI}
We assume each UE has a Gaussian codebook with power $P' \!<\! P$. The $j$-th UE transmits $\mathbf{c}^{j}$ if it is active.
We utilize the projection decoder based on the fact that $\mathbf{y}$ in \eqref{receive_y1} belongs to the subspace spanned by the transmitted signals if the additive noise is neglected \cite{Beta_dis}.
The output is given by
\begin{equation}\label{decoder_achi_noCSI}
\left[ \hat{\mathcal{K}}_a, \!\left( \hat{\mathbf{c}}^j \!\;\!\right)_{\!j\!\;\!\in\!\;\! \hat{\mathcal{K}}_a }\!\right] \!\!=\! \arg\!\!\!\!\!\!\!\!\max_{{ \hat{\mathcal{K}}_a \subset \mathcal{K}}, {\left| \hat{\mathcal{K}}_a\!\right| = K_a} } \max_{\left( \hat{\mathbf{c}}^j \!\;\!\in \!\;\!\mathcal{C}_j \!\right)_{\!j\!\;\!\in\!\;\!\hat{\mathcal{K}}_{\!\;\!a}}} \!\!\!\left\| \mathcal{P}_{\!\left\{\!\hat{\mathbf{c}}^j\!: j\in \hat{\mathcal{K}}_a \!\right\}} \!\mathbf{y}\right\|_2^2\!\!,\!\!
\end{equation}
\begin{equation}\label{decoder_W_achi_noCSI}
\hat{W}_j = \left\{
\begin{array}{ll}
f_{j}^{-1}\left(\hat{\mathbf{c}}^{j}\right) & j \in \hat{\mathcal{K}}_a \\
? & j \notin \hat{\mathcal{K}}_a
\end{array}\right. .
\end{equation}
As in Section \ref{section3_sub1}, we change the measure and obtain \eqref{PUPE_p0}. Next, we bound $p_1$ as in \eqref{PUPE_p1} with $p_2 \!=\!\min\left\{1, \!\sum_{t\in \mathcal{T}} \!\mathbb{P} \left[ F_t\right]\right\}$,
$F_t\! =\! \left\{\! \sum_{j\in {\mathcal{K}_a}} \!\!1\! \left[ W_{j}\! \neq \!\hat{W}_{j} \right] \!=\! t \!\right\}$, and $\mathcal{T} \!\!=\!\! (\epsilon K_a, K_a]\cap \mathbb{N}_{+}$.
Let $F\!\left( S_{1}, S_{2}\right) \!=\! \left\{ \left\| \mathcal{P}_{\!\mathbf{c}_{\left[S_{2}\right]}^{'}, \mathbf{c}_{ \left[\mathcal{K}_a \!\backslash S_{1}\right] } } \mathbf{y} \right\|_2^2 \!\geq\! \left\| \mathcal{P}_{\!\mathbf{c}_{ \left[\mathcal{K}_a \right] } } \mathbf{y} \right\|_2^2 \right\}$.
We abbreviate ``$\bigcup_{{S_{1} \subset \mathcal{K}_a},{\left| S_{1} \right| = t}}$'' as ``$\bigcup_{S_{1}}$'' and ``$\bigcup_{{S_{2} \subset \mathcal{K} \backslash \mathcal{K}_a\cup S_{1}}, {\left| S_{2} \right| = t}}$'' as ``$\bigcup_{S_{2}}$''; similarly for $\sum$.
We have
\begin{equation}\label{PUPE_p1_p2_ft_noCSI}
\mathbb{P} \!\left[ F_t | \mathbf{H}, \!\mathbf{c}_{[\mathcal{K}_a]}, \mathbf{z} \right]
\! \!=\! \mathbb{P}\! \!\left[\! \left.
\bigcup_{S_{1}}
\bigcup_{S_{2}}
\!\!\!\!\!\!\bigcup_{\stackrel{ \mathbf{c}^{i'} \!\in \mathcal{C}_i: }{ i \in S_{2}, \mathbf{c}^{i'} \!\neq \mathbf{c}^i }}
\!\!\!\!\!\!\!\!\!\! F\!\left( S_{1}, S_{2}\right) \right| \!\mathbf{H}, \mathbf{c}_{[\mathcal{K}_a]}, \mathbf{z} \right]\!\!.\!
\end{equation}
Let $A_1 \!=\! \mathbf{c}_{ \left[\mathcal{K}_a \!\backslash S_{1}\right] }$, $B_1 \!=\! \mathbf{c}_{\left[S_{2}\right]}^{'}$, and $V \!=\! \operatorname{span}\{A_1,B_1\} \!=\! A \oplus B$, where $A\!=\! \operatorname{span}(A_1)$, $B$ is the orthogonal complement of $A$ in $V$, and the two subspaces have dimension $K_a\!- t$ and $t$, respectively.
Hence, $\left\|\mathcal{P}_{V} \mathbf{y}\right\|_2^2 \!=\! \left\|\mathcal{P}_{\!A_1} \mathbf{y}\right\|_2^2 + \left\|\mathcal{P}_{\!B}\mathcal{P}_{\!A_1}^{\bot} \mathbf{y}\right\|_2^2 $.
Denote $n_t \!= n\!-\!K_a\!+t$.
Conditioned on $\mathbf{H}$, $\mathbf{c}_{[\mathcal{K}_a]}$, and $\mathbf{z}$, the law of
$\left\|\mathcal{P}_{\!B} \mathcal{P}_{\!A_1}^{\bot} \mathbf{y}\right\|_2^2$ is the law of the squared length of the orthogonal projection of a fixed vector in $\mathbb{C}^{n_t}$ with squared length $\left\|\mathcal{P}_{\!A_1}^{\bot} \mathbf{y}\right\|_2^2$ onto a uniformly random $t$-dimensional subspace.
This is the same as the law of the squared length of the orthogonal projection of a random vector with squared length $\left\|\mathcal{P}_{\!A_1}^{\bot} \mathbf{y}\right\|_2^2$ in $\mathbb{C}^{n_t}$ onto a fixed $t$-dimensional subspace, i.e., $\left\|\mathcal{P}_{\!A_1}^{\bot} \mathbf{y}\right\|_2^2\, \beta(t,n\!-\!K_a)$ \cite{Beta_dis}.
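By this rotational invariance, the fraction of the squared norm of a $\mathcal{CN}(0,\mathbf{I}_m)$ vector that falls in a fixed $t$-dimensional coordinate subspace is $\beta(t,m-t)$-distributed with mean $t/m$, which is easy to verify by simulation (toy dimensions assumed below):

```python
import random

random.seed(2)

t, m = 3, 12                       # subspace dimension t inside C^m (toy values)
trials = 100_000
acc = 0.0
for _ in range(trials):
    # |g_i|^2 of a CN(0, I_m) vector: i.i.d. exponential(1) components
    g2 = [random.expovariate(1.0) for _ in range(m)]
    acc += sum(g2[:t]) / sum(g2)   # squared-projection fraction on a fixed t-dim subspace
mean_ratio = acc / trials

print(mean_ratio, t / m)           # Beta(t, m - t) has mean t/m
```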
We have
\begin{equation}\label{PUPE_p1_p2_fst_noCSI}
\mathbb{P} \left[ \left.
F\left( S_{1}, S_{2}\right) \right| \mathbf{H}, \mathbf{c}_{[\mathcal{K}_a]}, \mathbf{z} \right]
=F_{\beta}(G_{S_{1}}; n-K_a, t),
\end{equation}
where $ G_{S_{1}} \!\!=\! { \left\|\mathcal{P}_{\mathbf{c}_{[\mathcal{K}_a]}}^{\bot}\mathbf{z} \right\|_2^2 }\Big/
{\left\|\mathcal{P}_{\mathbf{c}_{[\mathcal{K}_a \!\backslash S_{1}]}}^{\bot}\mathbf{y} \right\|_2^2 }$.
$F_{\beta}(G_{S_{1}}; n-K_a, t)$ denotes the CDF of the beta distribution with parameters $n-K_a$ and $t$ evaluated at $G_{S_{1}}$, which satisfies $F_{\beta}(G_{S_{1}}; n-K_a, t)\!\leq\! {\binom {n_t-1} {t-1}} G_{S_{1}}^{n-\!K_a}$ for $t \!\geq \!1$.
Denote $K_t = K-K_a+t$. Then, \eqref{PUPE_p1_p2_ft_noCSI} can be bounded as
\begin{equation}\label{PUPE_p1_p2_ftb_noCSI}
\mathbb{P} \!\left[ F_t | \mathbf{H}, \mathbf{c}_{[\mathcal{K}_{a} ]}, \mathbf{z} \right]
\! \leq \!\min \!\left\{\!1, \sum_{S_{1}} {\!\binom {K_t} {t}} M^t \!{\binom {n_t - 1} {t -1}} G_{S_{1}}^{n-K_a}\!\right\} \!.
\end{equation}
Let $t \!=\! \theta K_a$ and $ \Theta_n \!=\! (\epsilon, 1]\!\cap \!\left\{ \!\frac{i}{K_a}\!: i\!\in\! [{K}_a] \right\}$.
We can bound ${\binom {K_a} {t}} $ and ${\binom {K_t} {t}}$ based on \cite{Gallager}. Meanwhile, we have
\begin{equation}
{\binom {n_t-1} {t-1}}\!\leq\!
\sqrt{\frac{n_t-1}{2\pi(t-1) \left( n-\!K_a \right)}} \exp \!\left\{n_t \;\!h\!\left({\frac{t}{n_t}}\right)\right\}.
\end{equation}
We denote $r_{\theta} \! =\! \frac{ \theta p_a \mu \ln M}{1- p_a\mu}
\!+\! \frac{1-p_a \mu +\theta p_a \mu}{1- p_a \mu}h\left( \frac{\theta p_a \mu}{1-p_a \mu +\theta p_a \mu}\right)
\!+ \! \frac{\mu-p_a \mu +\theta p_a \mu}{1- p_a \mu} h\!\left( \frac{\theta p_a}{1 - p_a +\theta p_a} \right)
\!+\! \frac{\ln \left({\frac{1 +\theta p_a - p_a}{2\pi\theta K_a\!\left( 1 - p_a \right)} } \right)}{2(n-K_a)}
\!+\! \frac{\ln \left( \frac{n_t-1}{2\pi(t-1)\!\left( n-K_a \right)} \right)}{2(n-K_a)} $,
$\tilde{V}_{n,{\theta}} \!=\! r_{\theta} + \delta_{1,\theta}$ with $\delta_{1,\theta}\!>\!0$, and ${V}_{n,{\theta}} \!=\! \exp\!\left\{ \!-\tilde{V}_{n,{\theta}}\!\right\}$.
We have $\lim_{n\to\infty} \!{V}_{n,\theta} \!=\! {V}_{\theta} $ as in \eqref{V_achi_noCSI}.
Define the event
$ L_{1} \!=\! \bigcap_{t \in \mathcal{T}} \bigcap_{S_{1}} \left\{G_{S_{1}}\!\leq\! V_{n, \theta}\right\}$.
Then, $p_{2}$ can be bounded as
\begin{align}\label{PUPE_p1_p2b_noCSI}
p_{2} \!&\leq\! \mathbb{E} \! \left[ \!\min\!\left\{\!1, \!\sum_{t \in \mathcal{T}} \sum_{S_{1}}
\!\exp\!\left\{ (n\!-\!\!K_a) r_{\theta} \right\} \!G_{S_{1}}^{n\!-\!K_a} \! \!\right\}\! 1\!\left[ L_{1} \right] \right]\!\!+\! \mathbb{P}\!\left[ L_{1}^c \right] \notag \\
& \leq \!\!\sum_{\theta \in \Theta_n \backslash \{1\}} \!\!\!\!\!\! \exp\!\left\{\!K_a h(\theta) \!-\! (n\!-\!\!K_a)\delta_{1,\theta} \!-\! \frac{1}{2}\!\ln\! \left(2\pi\theta K_a(1\!-\!\theta) \right)\!\right\}\notag \\
&\;\;\;\;+ \exp\!\left\{- n(1\!-\!p_a \mu)\delta_{1,\theta=1}\right\}
+ \mathbb{P}\left[ L_{1}^c \right].
\end{align}
Let $p_{3} = \mathbb{P}\left[ L_{1}^c \right]$, which can be bounded as
\begin{align}\label{PUPE_p1_p2_p3a_noCSI}
p_{3}\!&\leq \!\mathbb{P} \!\left[\! \bigcup_{t\in \mathcal{T}} \bigcup_{S_{1}}
{\left\|\mathcal{P}^{\bot}_{\!\mathbf{c}_{[\mathcal{K}_a \!\backslash\! S_{1}]}}\!\mathbf{z} \right\|_2^2 }
\!\!>\! V_{n, t} {\left\| \mathcal{P}^{\bot}_{\!\mathbf{c}_{[\mathcal{K}_a \!\backslash \!S_{1}]}} \!\!\left( \sum_{i\in S_{{1}}}\!\!{h_i \mathbf{c}_i } \!+\! \mathbf{z} \! \right)\!\right\|_2^2 } \right] \notag \\
& =\! \mathbb{P} \!\!\left[\!\bigcup_{t \in \mathcal{T}} \! \bigcup_{S_{1}}
\!\left\|Q_{S_{1}}\!\right\|_2^2
\!>\!\! \frac{V_{n, t}}{\left( 1\!\!-\!\!V_{n,t} \right)^2} {\left\| \mathcal{P}^{\bot}_{\!\mathbf{c}_{[\mathcal{K}_a \! \backslash \! S_{1}]}} \!\! \sum_{i\in S_{1}} \!\!{h_i \mathbf{c}_i } \right\|_2^2 } \right]\!,\!
\end{align}
where $Q_{S_{1}}\!\!\!= \!\! \mathcal{P}^{\bot}_{\mathbf{c}_{[\mathcal{K}_a \! \backslash \! S_{1}]}} \!\!\left( \!\mathbf{z} \!-\! \frac{V_{n, t}}{1-V_{n,t}} \! \sum_{i\in S_{1}} \!\!{h_i \mathbf{c}_i } \right)$.
Conditioned on $\mathbf{H}$ and $\mathbf{c}_{\left[\mathcal{K}_a \right]}$, we have ${\left\|Q_{S_{1}}\!\right\|_2^2 } \!\sim\! \frac{1}{2} \chi'_2 \!\left( 2\lambda, 2n_t\right)$ with conditional mean $\lambda\!+\!n_t$ and
$\lambda \!=\! {\left\| \! \frac{V_{n, t}}{1-V_{n,t}} \!\mathcal{P}^{\bot}_{\mathbf{c}_{[\mathcal{K}_a \! \backslash \! S_{1}]}} \!\sum_{i\in S_{1}} \!\!{h_i \mathbf{c}_i } \right\|_2^2 }$.
Denote $U \!=\! \frac{V_{n, t}}{1-V_{n,t}} \!\left\| \mathcal{P}^{\bot}_{\mathbf{c}_{[\mathcal{K}_a \! \backslash \! S_{1}]}} \sum_{i\in S_{1}} \!\!{h_i \mathbf{c}_i } \right\|_2^2 \!-\! n_t\!=\!n_t U^1$ and $T \!= \!\frac{1}{2} \chi'_2 \!\left( 2\lambda, 2n_t\right) - \left(\lambda + n_t\right)$.
Define the event $L_{2} \!=\! \bigcap_{t \in \mathcal{T}} \bigcap_{S_{1}} \!\left\{ U^1 \!\geq\! \delta_{2,\theta}\right\}$ with $\delta_{2,\theta}\!>\!0$.
Then, we can obtain
\begin{equation}\label{PUPE_p1_p2_p3b_noCSI}
p_{3} \!\leq \!
\sum_{t \in \mathcal{T}} \sum_{S_{1}} \mathbb{E} \!\left[ \mathbb{P} \!\left[\left. T\! >\! U \right| \mathbf{H},\! \mathbf{c}_{\left[\mathcal{K}_a \right]} \right] \!1\!\left[ U^1 \!\!\geq\! \delta_{2,\theta} \right] \right]
\!+ \mathbb{P} \!\left[ L_{2}^c\right]\!.\!
\end{equation}
To further bound $p_3$, we use the following concentration results \cite{concentration_ineq1, concentration_ineq2}.
Let $\chi \sim \chi_2(d)$. Then $\forall x >1 $, we have
\begin{equation}
\mathbb{P}\left[\chi \leq \frac{d}{x}\right] \leq \exp\left\{-\frac{d}{2}\left(\ln x+\frac{1}{x}-1\right)\right\}.
\end{equation}
Let $\chi \sim \chi'_2(a, d)$. Then $\forall x >0 $, we have
\begin{equation}
\mathbb{P}[\chi\!\geq\! x\!+\!a\!+\!d\;\!]
\!\leq \! \exp\!\left\{\!\!-\frac{1}{2}\!\!\left(\!x\!+\!d\!+\!2 a\!-\!\!\sqrt{\!d\!+\!2 a\!} \sqrt{2 x\!+\!d\!+\!2 a}\right)\!\!\right\}\!.
\end{equation}
Hence, we have $\mathbb{P}\!\left[\left. T \!>\! U \right| \mathbf{H}, \!\mathbf{c}_{\left[\mathcal{K}_a \right]} \right] \!\!\leq \!\! \exp\!\left\{\!-n_t f_{n,\theta}(U^1)\!\right\}$.
Let $V'_{n,\theta} \!=\! \frac{2 V_{n, \theta}}{1\!-\!V_{n,\theta}}$.
For $0\!<\!V_{n, \theta}\!<\!1$ and $x\!>\!0$, $f_{n,\theta}(x)$ is given by
\begin{align}
f_{n,\theta}(x) \!=& (1+V'_{n,\theta})(1+x) \notag\\
& \!-\!\! \sqrt{1\!+\!V'_{n,\theta}(1\!+\!x)} \sqrt{1\!+\!2x\!+\!V'_{n,\theta}(1\!+\!x)}>0.
\end{align}
It is a monotonically increasing function of $x$.
Then, we have
\begin{align}\label{PUPE_p1_p2_p3c_noCSI}
p_{3}\!&\leq \sum_{\theta \in \Theta_n\!\backslash\!\{1\}} \!\!\!\!\!\! \exp\!\left\{\!K_a h(\theta) \!- \! n_tf_{n,\theta}(\delta_{2,\theta}) \!-\! \frac{1}{2}\!\ln \!\left(2\pi\theta K_a(1\!-\!\theta) \right)\!\right\}\notag \\
& \;\;\;\; + \exp\left\{- n_tf_{n,\theta=1}(\delta_{2,\theta=1}) \right\} \!+\! \mathbb{P}\left[ L_{2}^c \right].
\end{align}
We have $\left\| \mathcal{P}^{\bot}_{\mathbf{c}_{[\mathcal{K}_a \! \backslash \! S_{1}]}} \!\sum_{i\in S_{1}} \!\!{h_i \mathbf{c}_i } \right\|_2^2 \!\!\sim\! \frac{P'}{2} \!\sum_{i\in S_{1}} \!\!{\left|h_i \right|^2}\!\;\! \chi_2\!\left(2n_t\right)$ conditioned on $\mathbf{H}$.
Let $L_3 \!=\! \!\left\{ \!\frac{\chi_2\left(2n_t\right)}{2n_t} \!\geq\! 1\!-\!\delta_{3}\!\right\}$ with $0\!<\!\delta_{3}\!<\!1$ and we have $p_4 = \mathbb{P}\left[ L_3^c \right] \leq \exp\left\{-n_t \left(- \ln \left( {1-\delta_{3}} \right) - \delta_{3}\right)\right\} $.
Given $W_{\theta} $ in \eqref{W_achi_noCSI}, we can bound $p_5 = \mathbb{P} \left[ L_{2}^c\right]$ as
\begin{align}\label{PUPE_p1_p2_p3_p4_noCSI}
p_{5}\!&\leq \mathbb{P}\!\left[\bigcup_{t \in \mathcal{T}} \bigcup_{S_{1}}
\left\{ \! P'\!\sum_{i\in S_{1}} \!{\left|h_i \right|^2} \frac{\chi_2\!\left(2n_t\right)}{2n_t}
\!<\! W_{\theta} \! \right\}\!\cap\! \left\{ L_3 \right\} \!\right]
\!+\! \mathbb{P} \left[ L_{3}^c\right] \notag \\
& \leq \sum_{t \in \mathcal{T}} \mathbb{P}\!\left[
P'\!\!\sum_{i=K_a\!-t+1}^{K_a} \!{\left|h_i^{\downarrow} \right|^2}
\!<\!\frac{W_{\theta}}{1-\delta_{3}} \!\right]
+ p_4\notag\\
&= p_6 + p_4.
\end{align}
Define the event $L_{4}\!\! =\!\!\left\{ \!\!\frac{1}{K_a}\! \!\sum_{j= K_a\!-t+1}^{ K_a}\!\left|{{h}^{\downarrow}_{j}} \!\right|^{2}
\!\!\!\!= \! \xi \!\left( 1\!-\!\theta, \!1 \right) \!+\! o(1)\! \right\}$ with $\mathbb{P}\left[ L_{4}^{c}\right]$ exponentially small in $n$. Then, $p_6$ can be bounded as
\begin{align}\label{PUPE_p1_p2_p3_p6_noCSI}
p_{6} &\leq\! \sum_{t \in \mathcal{T}} \mathbb{P}\!\left[\!
\left\{\!P'\! \!\!\!\sum_{i=K_a\!-t+1}^{K_a} \!\!{\left|h_i^{\downarrow} \right|_2^2}
\!<\!\frac{W_{\theta}}{1-\delta_{3}} \!\right\}\cap \left\{ L_4 \right\}\!\right]
\!+ \!\sum_{t \in \mathcal{T}} \mathbb{P}\!\left[ L_4^c \right]\notag \\
& \leq\! \sum_{t \in \mathcal{T}} 1\!\left[
{P' \!K_a \!\left(\xi \left( {1-\theta},1\right)\!+\!o(1)\right) }
\!<\!\frac{W_{\theta}}{1\!-\!\delta_{3}}\right]
\!+\! o(1).
\end{align}
Hence, $p_{2}$ can be bounded as
\begin{align}\label{PUPE_p_suma_noCSI}
p_{2} \leq & \sum_{\theta \in \Theta_n } \left\{ \exp\left\{ o(n) - n \left((1- p_a\mu)\delta_{1,\theta} - p_a\mu h(\theta) \right)\right\} \right.\notag \\
& + \exp\left\{ o(n) \!-\! n \!\left((1\!-\!p_a\mu\!+\!\theta p_a \mu)f_{n,\theta}(\delta_{2,\theta}) \!-\! p_a\mu h(\theta) \right)\right\} \notag \\
& + \left. \!1\!\left[
{P' \!K_a \!\left(\xi \! \left( 1\!-\!\theta ,1\right)\!+\!o(1)\right)}\!<\! \!\frac{W_{\theta}}{1\!-\!\delta_{3}} \right] \!\right\} \notag \\
& + \exp\left\{-n_t \left(- \!\ln\! \left( {1\!-\!\delta_{3}} \right) \!-\! \delta_{3} \right)\right\} + o(1).
\end{align}
For every $\theta \!\in\! \Theta \!=\! (\epsilon,1]$, choosing $\delta_{1,\theta} \!>\! \delta_{1,\theta}^{*}$, $\delta_{2,\theta} \!>\! \delta_{2,\theta}^{*}$, $\delta_{3} \!>\! \delta_{3}^{*}$, and $K_aP' \!>\! P_{tot,a}^{'}\left(\theta\right)$ ensures $\limsup_{n\to\infty} p_2 \!=\! 0$.
\end{proof}
\section{ Converse Bound } \label{section4}
\subsection{CSIR} \label{section4_sub1}
\begin{prop}\label{prop_converse_CSIR}
We assume the spectral efficiency $S$ and the target PUPE $\epsilon$ are fixed.
With CSIR, we have $\varepsilon^{*} (M, \mu, p_a, \epsilon) \geq \inf \!\frac{P_{tot,a}}{S}$, where the infimum is taken over all ${P_{tot,a}}\!>\!0$ satisfying
\begin{equation}\label{P_tot_conv_CSIR}
P_{tot,a} \!\geq
\frac{2^{ p_a \mu \theta k - p_a \mu \epsilon \! \log_2(M\!-\!1) - p_a \mu h_2(\epsilon) }\!-\!1 }
{\xi\!\left(1-\theta,1 \right) }, \forall \theta\!\in\!(0,1],
\end{equation}
\begin{equation}\label{P_tot_conv_CSIR_singleUE}
\epsilon \geq 1-\mathbb{E}\left[Q\left(Q^{-1}\left(\frac{1}{M}\right)-\sqrt{\frac{2 P_{tot,a}}{p_a\mu}|h|^{2}}\right)\right].
\end{equation}
\end{prop}
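The first condition of the proposition is easy to evaluate numerically. The sketch below is a minimal illustration, assuming Rayleigh fading so that $\xi(a,b)$, the large-$K_a$ limit of the normalized sorted sum of channel gains, takes the closed form $\int_a^b(-\ln u)\,du = b(1-\ln b)-a(1-\ln a)$; the parameters $k$, $p_a\mu$, and $\epsilon$ are illustrative choices, not values from the text.

```python
import math

def h2(x):
    # binary entropy (bits)
    return 0.0 if x in (0.0, 1.0) else -x*math.log2(x) - (1 - x)*math.log2(1 - x)

def xi(a, b):
    # assumed Rayleigh fading: xi(a,b) = int_a^b (-ln u) du
    fa = 0.0 if a == 0.0 else a*(1 - math.log(a))
    return b*(1 - math.log(b)) - fa

def converse_power(k, pa_mu, eps, grid=1000):
    # maximize the right-hand side of the first converse condition over theta
    M = 2.0**k
    best = 0.0
    for i in range(1, grid + 1):
        theta = i/grid
        expo = pa_mu*theta*k - pa_mu*eps*math.log2(M - 1) - pa_mu*h2(eps)
        best = max(best, (2.0**expo - 1.0)/xi(1.0 - theta, 1.0))
    return best

# the required total power grows with the payload k (illustrative parameters)
P50 = converse_power(k=50, pa_mu=0.01, eps=1e-3)
P100 = converse_power(k=100, pa_mu=0.01, eps=1e-3)
```

Any $P_{tot,a}$ below the returned maximum violates the condition for some $\theta$, so the infimum in the proposition is bounded from below by this value.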
\begin{proof}\label{proof_converse_CSIR}
Let $\mathcal{W}_{\mathcal{K}_a} \!\!=\!\! \left\{W_i\!: i\!\in\! {\mathcal{K}_a} \!\right\}$ be the sent messages of active UEs,
$\mathcal{X}_{\mathcal{K}_a} \!\!=\!\! \left\{\mathbf{c}^i\!: i\!\in\! {\mathcal{K}_a} \!\right\}$ be corresponding codewords,
$\mathbf{y}_{\mathcal{K}_a}$ be the received vector, and $\hat{\mathcal{W}}_{\mathcal{K}_a} \!\!=\!\! \left\{\hat{W}_i\!: i\!\in\! {\mathcal{K}_a} \!\right\}$ be the decoded messages.
We have the Markov chain: $\mathcal{W}_{\mathcal{K}_a} \!\!\!\to\!\! \mathcal{X}_{\mathcal{K}_a} \!\!\!\to\!\! \mathbf{y}_{\mathcal{K}_a} \!\!\!\to\!\! \hat{\mathcal{W}}_{\mathcal{K}_a}$.
We assume a genie reveals to the decoder the set $\mathcal{K}_a$ of active UEs, together with the messages $\mathcal{W}_{S_1} \!=\! \left\{W_i\!: i\!\in\! {S_1} \right\}$ and the fading coefficients ${h}_{S_1} \!=\! \left\{h_i\!: i\!\in\! {S_1} \right\}$ for a subset $S_1 \!\subset\! \mathcal{K}_a$. Let $S_2 \!=\! \mathcal{K}_{a}\!\backslash S_1$ with $\left|S_2\right| \!=\! \theta K_a$ and $\theta \!\in \!\Theta_n \!=\! (0, 1]\cap \!\left\{\! \frac{i}{K_a}\!\!: \!i\!\in\! [{K}_a] \right\} $. The equivalent received signal is given by
\begin{equation}\label{receive_y_G}
\mathbf{y}^G = \sum_{i\in{S_2}}{h}_i\mathbf{c}^{i}+\mathbf{z} \in \mathbb{C}^{n}.
\end{equation}
Denote by $\hat{W}_{i}^{G}$ the decoded message for the $i$-th UE under genie-aided decoding.
Let $L_i \!=\! 1\!\left[ {W}_{i}\!\neq\! \hat{W}_{i}^{G}\right]$ and $P_{e,i}^G \!=\! \mathbb{E}\left[ L_i \right]$. We have $P_{e,i}^G \!=\! 0$ for $i\!\in\! S_1$.
The averaged PUPE is $P_{e}^G\!=\!\frac{1}{K_a} \!\sum_{i\in {S_2}} \!P_{e,i}^G \leq \epsilon$.
For $i \in S_2$, based on Fano's inequality, we have
\begin{equation}\label{fano}
\log_2M \!-\! P_{e,i}^G \log_2(M-1) \!-\! h_2\!\left( P_{e,i}^G \right) \!\leq\! I\left(W_i;\hat{W}_i^{G} \right).
\end{equation}
Considering $\sum_{i\in S_2} \!I\!\left(\!W_i; \;\!\!\hat{W}_i^{G} \!\right) \!\leq\! n \mathbb{E}\!\left[\log_2\!\;\!\!\left( \!1\!+\!\;\!\! P \!\sum_{i\in S_2} \!\!\left| h_i\right|^2\right) \!\right]$
and the concavity of $h_2$, we can obtain
\begin{equation}
{\theta}k \!-\! P_{e}^G \!\log_2(M\!-\!1) \!-\! h_2\!\left( P_{e}^G \right)
\!\leq \!\! \frac{n}{K_a} \mathbb{E}\!\left[\log_2\!\!\left( \!\!1\!+\! P \!\sum_{i\in S_2} \!\left| h_i\right|^2\!\!\right) \!\right] \!\!.
\end{equation}
Since $P_{e}^G \leq \epsilon \leq 1-\frac{1}{M}$, we have $P_{e}^G \log_2(M-1) + h_2\left( P_{e}^G \right) \leq \epsilon \log_2(M-1) + h_2\left( \epsilon \right) $.
We can obtain
\begin{equation}
{\theta k}\;\!\! -\;\!\! \epsilon \log_2(M\!\!\;\!-\;\!\!1) \;\!\!-\!\;\! h_2\!\left( \epsilon \right) \!\!\leq\! \! \frac{n}{K_a} \! \mathbb{E}\!\!\left[\!\log_2\!\!\left( \!\!1\!+\! P \!\!\!\!\!\!\!\sum_{i=(1\!-\!\theta) \!K_a}^{K_a} \!\!\!\!\left| h_i^{\downarrow}\!\right|^{2}\!\right)\! \!\right]\!\!.\!
\end{equation}
For $a,b\!\in\! (0,1]$, let $S_{K_a}\!(a,b) \!=\! \frac{1}{K_a}\!\!\sum_{i=a{K_a}}^{b{K_a}} \!\!\left| h_i^{\downarrow}\right|^2 $ satisfying $S_{K_a}\!(a,b) {\to} \xi(a,b)$ as ${K_a}\!\!\!\to \!\!\!\infty$ and $\mathbb{E}\!\left[S_{K_a}\!(a,b)\right]\!\!\! \leq\!\!\! 1$.
The family of random
variables $\left\{ S_{K_a}\!(a,b)\! :\! {K_a}\!\in \! \mathbb{N}_{+}\!\right\}$ is uniformly integrable
based on the dominated convergence theorem \cite{uniformly_integrable}.
Since $0\!\!<\!\!\log_2\left( 1\!+\!P_{tot,a}S_{K_a}\!(a,b)\right) \!\!<\!\! P_{tot,a}S_{K_a}\!(a,b)$, the family $\left\{ \log_2\!\left( 1\!+\!P_{tot,a}S_{K_a}(a,b)\right) \!: \!{K_a}\!\in \! \mathbb{N}_{+}\right\}$ is also uniformly integrable.
As ${K_a}\!\!\to \!\!\infty$, since $\log_2\!\left( 1\!+\!P_{tot,a}S_{K_a}\!(a,b)\right) \!\!\to\!\! \log_2\!\left( 1\!+\!P_{tot,a} \xi(a,b)\right)$,
we have $\mathbb{E}\!\left[ \log_2\!\left( 1\!+\!P_{tot,a}S_{K_a}\!(a,b)\right) \right] \!\!\to\!\! \log_2\!\left( 1\!+\!P_{tot,a} \xi(a,b)\right)$ \cite{uniformly_integrable}.
As $n\to \infty$, we can obtain \eqref{P_tot_conv_CSIR}.
In addition, \eqref{P_tot_conv_CSIR_singleUE} is derived for a single UE sending $k$ bits with PUPE $\epsilon$ in quasi-static fading channels \cite{singleUE}.
\end{proof}
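The convergence $S_{K_a}(a,b)\to\xi(a,b)$ invoked in the proof is easy to check by sampling. A minimal sketch, under the assumption of Rayleigh fading (i.i.d. $\mathrm{Exp}(1)$ channel gains), for which $\xi(a,b) = b(1-\ln b)-a(1-\ln a)$:

```python
import math
import random

random.seed(7)

def xi(a, b):
    # assumed closed form for i.i.d. Exp(1) gains: xi(a,b) = int_a^b (-ln u) du
    fa = 0.0 if a == 0.0 else a*(1 - math.log(a))
    return b*(1 - math.log(b)) - fa

def S_Ka(Ka, a, b):
    # empirical (1/Ka) * sum of |h_i^down|^2 over the ranks a*Ka .. b*Ka
    gains = sorted((random.expovariate(1.0) for _ in range(Ka)), reverse=True)
    return sum(gains[int(a*Ka):int(b*Ka)])/Ka

Ka = 200000
half = S_Ka(Ka, 0.5, 1.0)    # contribution of the weaker half of the UEs
full = S_Ka(Ka, 0.0, 1.0)    # full normalized sum; its mean is at most 1
```

For this sample, `half` is close to $\xi(0.5,1)\approx 0.1534$ and `full` is close to $\xi(0,1)=1$, consistent with $\mathbb{E}\!\left[S_{K_a}(a,b)\right]\le 1$.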
\subsection{no-CSI} \label{section4_sub2}
\begin{prop}\label{prop_converse_noCSI}
Codebooks are generated independently for the UEs, with each entry i.i.d. from $\mathcal{CN}(0,P)$. Given the spectral efficiency $S$ and the target PUPE $\epsilon$,
we have $\varepsilon^{*} (M, \mu, p_a, \epsilon) \geq \inf \!\frac{P_{tot,a}}{S}$, where the infimum is taken over all ${P_{tot,a}}\!>\!0$ satisfying
\begin{equation}\label{P_tot_conv_noCSI}
\ln \!M\;\!\!-\epsilon \;\!\!\ln (\;\!\!M\;\!\!-\;\!\!1)-h(\epsilon) \;\!\!\!\leq\;\!\!\!
{M} \!\mathcal{V}\;\!\!\!\left(\!\frac{1}{{p_a}\;\!\!\mu M}\;\!\!, \!P_{tot,a}\!\!\right)\;\!\!-\;\!\!\;\!\!\mathcal{V}\;\!\!\!\left(\!\frac{1}{p_a\;\!\!\mu}, \!P_ {tot,a}\!\!\right) \!\;\!\!,
\end{equation}
\begin{equation}\label{P_tot_conv_noCSI_v}
\mathcal{V}(r, \!\gamma)\!=\!r \!\ln (1\;\!\!+\;\!\!\gamma\;\!\!-\;\!\!\!\mathcal{F}(r, \!\gamma)\;\!\!)
\;\!\!+\;\!\!\ln (1\;\!\!+\;\!\!r \gamma\;\!\!-\;\!\!\mathcal{F}(r,\! \gamma))\;\!\!-\;\!\!\frac{\mathcal{F}(r, \!\gamma)}{\gamma} \!,\!\!
\end{equation}
\begin{equation}\label{P_tot_conv_noCSI_f}
\mathcal{F}(r, \gamma)\!=\!\frac{1}{4}\!\left(\!\sqrt{\!\gamma\!\left(\!\sqrt{r}+1\right)^{2}\!+\!1}
\!-\!\sqrt{\!\gamma\left(\sqrt{r}-1\right)^{2}\!+\!1}\right)^{\!2}\!.
\end{equation}
\end{prop}
\begin{proof}\label{proof_converse_noCSI}
We assume a genie reveals the set of active UEs. Based on the analysis in Section \ref{section4_sub1}, we have
\begin{equation}
{K_a}\!\ln\! M \!-\! {K_a} \epsilon \ln(M\!-\!1) \!-\! {K_a} h\!\left(\epsilon\right)
\!\leq \! I\!\left(\!\mathcal{X}_{\mathcal{K}_a}\!;\!\hat{\mathcal{X}}_{\mathcal{K}_a} \!\right)
\!\leq\! I\!\left(\left.\bar{\boldsymbol{\beta}};\mathbf{y} \right|\!\bar{\mathbf{A}}\right),
\end{equation}
where $\bar{\boldsymbol{\beta}} \!\in \!\mathbb{C}^{K_a M}$ indicates which codewords are sent by the active UEs and $\bar{\mathbf{A}}$ is the $n\times K_a M$ submatrix of ${\mathbf{A}}$ containing the codewords of the active UEs.
Let $\bar{\mathbf{H}}\!\in\!\mathbb{C}^{K_a M\times K_a M}$ be the submatrix of ${\mathbf{H}}$ including fading coefficients of active UEs.
Based on the chain rule of mutual information, we have
\begin{align}
I\left(\!\left.\bar{\boldsymbol{\beta}}, \bar{\mathbf{H}} \bar{\boldsymbol{\beta}}; \mathbf{y} \right|\!\bar{\mathbf{A}}\right)
&\!=\! I\left(\left.\bar{\boldsymbol{\beta}};\mathbf{y} \right|\bar{\mathbf{A}}\right) \!+ \! I\left(\left.\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\mathbf{y} \right|\bar{\boldsymbol{\beta}}, \bar{\mathbf{A}}\right) \notag \\
&\!=\! I\left(\left. \bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\mathbf{y} \right|\bar{\mathbf{A}}\right) \!+\! I\left(\left.\bar{\boldsymbol{\beta}};\mathbf{y} \right|\bar{\mathbf{H}}\bar{\boldsymbol{\beta}}, \bar{\mathbf{A}}\right).
\end{align}
Since $\bar{\boldsymbol{\beta}}\!\to\! \bar{\mathbf{H}}\bar{\boldsymbol{\beta}}\!\to\! (\mathbf{y},\bar{\mathbf{A}})$ forms a Markov chain, the mutual information $I\!\left(\left.\bar{\boldsymbol{\beta}};\mathbf{y} \right|\!\bar{\mathbf{H}}\bar{\boldsymbol{\beta}}, \!\bar{\mathbf{A}}\right) \!=\! 0$. Hence, we have
$ I\!\left(\left.\!\bar{\boldsymbol{\beta}};\mathbf{y} \right|\!\bar{\mathbf{A}}\right)
\!=\! I\!\left(\left. \!\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\mathbf{y} \right|\!\bar{\mathbf{A}}\right) \!-\! I\!\left(\!\left.\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\mathbf{y} \right|\!\bar{\boldsymbol{\beta}}, \bar{\mathbf{A}}\right)$.
We can obtain
\begin{align}
I\left(\left. \bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\mathbf{y} \right|\bar{\mathbf{A}} = \bar{\mathbf{A}}_1\right)
&= I\left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}}; \bar{\mathbf{A}}_1\bar{\mathbf{H}}\bar{\boldsymbol{\beta}}+\mathbf{z} \right)\notag \\
& \leq \sup_{\mathbf{u}} I\left(\mathbf{u}; \bar{\mathbf{A}}_1\mathbf{u}+\mathbf{z} \right) \notag \\
& = \ln \det\left( \mathbf{I}_{n} + \frac{1}{M}\bar{\mathbf{A}}_1\bar{\mathbf{A}}_1^{H} \right),
\end{align}
where $\bar{\mathbf{A}}_1$ is a realization of $\bar{\mathbf{A}}$ and the supremum is over random vector
$\mathbf{u}$ with $\mathbb{E}\!\left[\mathbf{u} \right] \!\!=\! \mathbf{0}$ and $\mathbb{E}\!\left[\mathbf{u} \mathbf{u}^{H} \right] \!\!=\! \mathbb{E}\!\left[\!\left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}} \right)\!\! \left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}} \right)^{\!\!H} \right] \!=\!\! \frac{1}{M}\mathbf{I}_{K_aM}$.
The supremum is achieved if $\mathbf{u}\!\sim\! \mathcal{CN}\!\left( \mathbf{0}, \!\frac{1}{M}\mathbf{I}_{K_aM} \right)$ \cite{sparsity_pattern}. Based on random-matrix theory, we have \cite{random_matrix}
\begin{equation}
I\!\!\left(\!\left. \bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\!\mathbf{y} \right|\!\bar{\mathbf{A}}\!\right)
\!\!\leq\! \mathbb{E}\! \!\left[ \!\ln \!\det\!\left(\!\! \mathbf{I}_{n} \!\!+\!\!\frac{1}{M}\!\bar{\mathbf{A}}\bar{\mathbf{A}}^{\!\!H} \!\!\right) \!\right]
\!\!\!=\!\!{K_a\!M} \mathcal{V}\!\left(\!\frac{1}{p_a\mu M}, \!P_{tot,a}\!\!\right)\!\!.
\end{equation}
For any realization $\bar{\mathbf{A}}_1$ of $\bar{\mathbf{A}}$ and $\bar{\boldsymbol{\beta}}_1$ of $\bar{\boldsymbol{\beta}}$, we have
\begin{align}
I\left(\left.\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\mathbf{y} \right|\bar{\boldsymbol{\beta}} \!=\! \bar{\boldsymbol{\beta}}_1, \bar{\mathbf{A}} \!=\! \bar{\mathbf{A}}_1 \right)
&= I\left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}}_1; \bar{\mathbf{A}}_1\bar{\mathbf{H}}\bar{\boldsymbol{\beta}}_1+\mathbf{z} \right) \notag \\
&= I\left(\tilde{\mathbf{h}}; \tilde{\mathbf{A}}_1\tilde{\mathbf{h}} +\mathbf{z} \right) \notag \\
&= \ln \det\!\left( \mathbf{I}_{n} \!+\! \tilde{\mathbf{A}}_1 \tilde{\mathbf{A}}_1^{H} \right),
\end{align}
where $\tilde{\mathbf{h}} \in \mathbb{C}^{K_a}$ includes fading coefficients of active UEs and $\tilde{\mathbf{A}}_1$ is the $n\times K_a$ submatrix of $\bar{\mathbf{A}}_1$ formed by columns corresponding to the support of $\bar{\boldsymbol{\beta}}_1$. Hence, we have \cite{random_matrix}
\begin{equation}
I\!\left(\!\left.\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\mathbf{y} \right|\!\bar{\boldsymbol{\beta}}, \!\bar{\mathbf{A}}\right)
\!=\! \mathbb{E}\!\left[ \ln \!\det\!\left( \mathbf{I}_{n} \!+\! \tilde{\mathbf{A}} \tilde{\mathbf{A}}^{\!\!H} \right) \!\right]
\!\!=\! K_a \mathcal{V}\!\left(\frac{1}{p_a\mu}, \!P_ {tot,a}\!\right)\!.
\end{equation}
\end{proof}
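The random-matrix limit used in the last two displays can be checked by direct Monte Carlo. The sketch below compares an empirical average of $\ln\det\!\left(\mathbf{I}_n+\frac{1}{M}\bar{\mathbf{A}}\bar{\mathbf{A}}^{H}\right)$ against $K_aM\,\mathcal{V}\!\left(1/(p_a\mu M),P_{tot,a}\right)$. It assumes the normalization $P_{tot,a}=K_aP$, so that each codebook entry has variance $P=P_{tot,a}/K_a$, and uses small illustrative dimensions, so only approximate agreement (up to finite-size corrections) is expected.

```python
import math
import random

random.seed(1)

def F(r, g):
    # the function F(r, gamma) from the proposition
    return 0.25*(math.sqrt(g*(math.sqrt(r) + 1)**2 + 1)
                 - math.sqrt(g*(math.sqrt(r) - 1)**2 + 1))**2

def V(r, g):
    # the function V(r, gamma) from the proposition
    f = F(r, g)
    return r*math.log(1 + g - f) + math.log(1 + r*g - f) - f/g

def logdet_sample(n, Ka, M, Ptot):
    # one draw of ln det(I_n + (1/M) A A^H), A: n x (Ka*M), entries CN(0, Ptot/Ka)
    cols, s = Ka*M, math.sqrt(Ptot/Ka/2.0)
    A = [[complex(random.gauss(0, s), random.gauss(0, s)) for _ in range(cols)]
         for _ in range(n)]
    B = [[0j]*n for _ in range(n)]          # lower triangle of I + (1/M) A A^H
    for i in range(n):
        for j in range(i + 1):
            B[i][j] = sum(A[i][k]*A[j][k].conjugate() for k in range(cols))/M
        B[i][i] += 1.0
    L = [[0j]*n for _ in range(n)]          # Cholesky: ln det = 2 sum ln L_ii
    ld = 0.0
    for i in range(n):
        for j in range(i + 1):
            s2 = B[i][j] - sum(L[i][k]*L[j][k].conjugate() for k in range(j))
            if i == j:
                L[i][i] = complex(math.sqrt(s2.real), 0.0)
                ld += 2.0*math.log(L[i][i].real)
            else:
                L[i][j] = s2/L[j][j]
    return ld

n, Ka, M, Ptot = 64, 16, 8, 4.0
mc = sum(logdet_sample(n, Ka, M, Ptot) for _ in range(4))/4
rmt = Ka*M*V(n/(Ka*M), Ptot)    # p_a*mu = Ka/n, hence r = 1/(p_a*mu*M) = n/(Ka*M)
```

With these dimensions the two estimates typically agree to within a few percent; the agreement tightens as $n$, $K_a$, and the number of trials grow.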
\section{Results and Discussion} \label{section5}
In this section, we evaluate the bounds derived in this work.
Given the payload $k \!=\! 100$ bits, the activity probability $p_a \!=\! 0.6$, and the target PUPE $\epsilon \!=\! 0.001$, we show the trade-off between the UE density $\mu$ and the minimum energy-per-bit $\varepsilon^{*}$ in Fig.~\ref{fig:1}.
For TDMA, we split the blocklength $n$ equally among the $K$ UEs.
To achieve $S \!=\! p_a \mu k$, we obtain, based on the bound in \cite{TDMA_yangwei}, the smallest $P^{*}$ (and thus $\varepsilon^{*} \!=\! P^{*}\!/(\mu k)$) that guarantees the access of an active UE with rate $\mu k$, blocklength $1/\mu$, and PUPE $\epsilon$.
From Fig.~\ref{fig:1}, we observe a perfect multi-user interference (MUI) cancellation effect in quasi-static fading random access channels:
for small values of $\mu$, the optimal coding scheme performs as if each active UE operated in isolation, free of interference.
The orthogonalization scheme TDMA does not exhibit this behavior.
Although TDMA performs better as $\mu\!\to\! 0$, it becomes
increasingly energy-inefficient at higher UE density.
In Fig.~\ref{fig:2}, given $k \!=\! 100$ and $\epsilon \!=\! 0.001$, we show the trade-off between the active UE density $p_a\mu$ and $\varepsilon^{*}$ in the no-CSI setting.
The converse bound and the achievability bound with knowledge of the UE activity do not depend on $p_a$ when $p_a\mu$ is fixed.
Given $p_a\mu$, as $p_a$ increases, the uncertainty about the set of active UEs decreases, and the achievability bound for random access decreases accordingly.
Moreover, this bound is close to the achievability bound with knowledge of the active UE set,
i.e., only slightly more energy is required for random access compared with the case where the UE activity is known.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{EbN0_mu_achi_CSIR_TDMA_06_0001_2tdma.eps}\\
\caption{$\mu$ versus $\varepsilon^{*}$ with $k = 100$, $p_a = 0.6$, and $\epsilon = 0.001$.}
\label{fig:1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{EbN0_mu_achi_CSIR_TDMA_pamu4.eps}\\
\caption{$p_a\mu$ versus $\varepsilon^{*}$ with $k = 100$ and $\epsilon = 0.001$.}
\label{fig:2}
\end{figure}
\section{Conclusion} \label{section6}
In this work, we assume that the number of UEs grows linearly and unboundedly with the blocklength and that each active UE has a finite number of data bits to send over quasi-static fading channels. Achievability and converse bounds on the minimum energy-per-bit for reliable massive random access are derived for both the CSIR and no-CSI settings.
Simulation results show perfect MUI cancellation for small values of the UE density $\mu$ and the energy inefficiency of TDMA as $\mu$ increases.
Moreover, with no CSI, the energy-per-bit for random access is only slightly higher than that with known UE activity.
The bounds derived in this work provide energy-efficiency targets for future massive random access coding and communication schemes.
\section{Introduction}
The Feynman diagrammatic technique is a powerful tool of statistical mechanics. Among the hallmarks of the method are the ability to deal---both analytically and numerically---with the thermodynamic limit rather than a finite-size cluster, the possibility of partial summations up to infinite order, and
the fully self-consistent formulation in terms of renormalized (dressed) quantities.
The latter properties allow one to go beyond the Taylor expansion in terms of the coupling constant
or any other parameter.
Advantages of the diagrammatic technique come at a price. The most serious issue is the
divergence of expansions in powers of the coupling constant for systems prone to
Dyson's collapse\cite{dyson1952divergence} ({\it i.e.}, pathological system behavior when the
coupling constant is rotated in the complex plane). For partial summation techniques to work,
the non-perturbed part of the theory has to be Gaussian
(in terms of either real, or complex, or Grassmann variables) to ensure the validity of Wick's theorem.
These issues are often related: for example, Ising and XY models formulated in terms of original
spin variables do not suffer from Dyson's collapse but lack the Gaussian (non-interacting) limit,
while their classical (lattice) field counterparts with the well-defined Gaussian limit are subject
to Dyson's collapse. It would be a mistake, however, to think that meaningful
diagrammatic series are only possible for a very limited class of Hamiltonians, namely,
when the original system is that of interacting lattice fermions.
As already clearly explained by Samuel in a series of papers,\cite{Samuel_anticommuting1,Samuel_anticommuting2,Samuel_anticommuting3}
a broad class of classical spin and dimer models can be reformulated in terms of
familiar interacting fermions and studied with field-theoretical techniques.
Similarly, rather arbitrary quantum spin/boson lattice models can be rigorously
mapped onto fermionic field theories.\cite{PopFed,misha,PopovFedotovG}
As expected, grassmannian formulations of spin/link/boson models with local constraints
are generically strongly-coupled theories at low temperature, and even the most advanced
self-consistent treatments based on the lowest-order graphs are not supposed to provide
quantitatively (and often qualitatively) accurate answers.
Moreover, these theories may contain arbitrary multi-particle interaction vertexes, which
further complicate the structure of the diagrammatic expansion.
One of the promising numerical techniques currently under development for strongly
correlated systems is diagrammatic Monte Carlo (DiagMC).
It is based on the stochastic evaluation of irreducible Feynman graphs up to some high order
and can be implemented in a number of ways, from perturbative expansions in powers
of the coupling constant to various self-consistent skeleton schemes based on fully renormalized
one- or two-body propagators. In such contexts as resonant fermions,\cite{VanHoucke2012}
frustrated magnetism,\cite{kulagin2013bdm, kulagin2013bdm2} and out-of-equilibrium impurity-like
models\cite{Profumo2015, Cohen_out_of_eq_2015} the method was recently shown to be able to go significantly beyond the state of the art. Also, significant progress has been
made in understanding superfluid properties of the Hubbard-type models.\cite{gukelberger2014pss,Deng2015,Gukelberger2015}
Notably, the infamous sign-problem
preventing conventional Monte Carlo methods from simulating fermionic system with sizes
large enough for reliable extrapolation to the thermodynamic limit, is absent as such in
DiagMC. Instead, the computational complexity is now linked to the number of diagrams
growing factorially with their order. Nevertheless, millions of diagrams can be accounted
for and the approach is flexible enough to deal with an arbitrary interaction Hamiltonian/action.
The current paradigm for generic lattice gauge models, as they occur in lattice-QCD as well
as in solid state and ultra-cold atomic physics, is to work with finite-size systems and
to treat link variables separately from the fermionic sector. More precisely, link
variables are simulated using classical Monte Carlo techniques (with local updates),
and fermions (quarks) are described by determinants. This approach suffers from a severe
sign-problem for finite density of fermions (non-zero chemical potential).\cite{QCDsignproblem,QCDsignproblem2}
If link variables are straightforwardly represented by bosonic fields, then the thermodynamic
limit can be addressed within the diagrammatic approach that treats bosonic and fermionic degrees
of freedom on equal footing. However, in this formulation the bosonic fields pose a
fundamental problem, which manifests itself in a zero convergence radius. It is thus desirable
to have a generic scheme for replacing link variables with Grassmann fields to ensure that the
diagrammatic expansion has proper analytic properties around the Gaussian point.
In this paper, we introduce a general procedure of {\it grassmannization} for
classical lattice models. It is by no means a unique one, and in certain specific cases
more compact/simpler representations can be found. There is a strong connection to the anti-commuting variables approach introduced
by S. Samuel,\cite{Samuel_anticommuting1, Samuel_anticommuting2, Samuel_anticommuting3} which can solve
the 2D Ising model exactly (free fermion operators to solve the Ising model exactly were first found by Kaufman\cite{Kaufman1949} and
refined by Schultz, Mattis and Lieb\cite{SchultzMattisLieb1964}) and provides a good starting point for field-theoretic studies of the 3D Ising model.
For the latter system our approach amounts to an alternative but equally complicated field theory.
Our prime goal is to build on these ideas and develop a scheme that is flexible enough to apply to a broader class of link models with arbitrary multi-bond
interactions and local constraints.
The idea of grassmannization is to represent the partition function of the model as a Grassmann
integral from the exponential of a Grassmann functional. The Feynman rules then emerge by
Taylor-expanding the non-Gaussian part of the exponential and applying Wick's theorem to the Gaussian averages. Paradigmatic lattice systems are link and plaquette
models featuring discrete degrees of freedom---integer numbers---residing on links (plaquettes) of square lattices and subject to certain
local constraints in terms of the allowed values of the sum of all link (plaquette) variables
adjacent to a given site (edge). It turns out that it is these constraints that require
special tricks involving multiple Grassmann variables for each value of each discrete variable.
Link models often emerge as high-temperature expansions of
lattice systems\cite{Oitmaa} in Ising, XY, O(3), etc. universality classes no matter whether the original
degrees of freedom are discrete or continuous (e.g., classical vector-field variables).
Link models may also emerge as dual (low-temperature) expansions, and specific examples
are provided by the 2D Ising model\cite{mccoy1973} and the 3D $|\psi|^4$ model (the latter case leads to the
so-called J-current model with long-range interactions). Similarly, plaquette models emerge as a
high-temperature expansion of lattice gauge theories, but sometimes they represent the dual
(low-temperature) expansion, as in the case of the 3D Ising model. Finally, it is worth mentioning
how the models with the same general structure are generated by strong-coupling expansions
in lattice-QCD.\cite{Wilsonloop}
The paper is structured as follows. In Sec.~\ref{sec:II} we explain how a partition function
of a discrete link model can be written as a Grassmann integral. The equivalence between the two
formulations is readily proved through term-by-term comparison. Standard properties of Grassmann
variables then immediately allow one to express the Grassmann weight in the exponential form in order to
define the field-theory. In Sec.~\ref{sec:III} we discuss generalizations of the proposed
grassmannization scheme. We start by describing the procedure for a broad class of plaquette models.
Next we show a simple way to introduce Grassmann variables for non-local link models with pairwise
interactions between the link variables. The construction is further simplified when constraints
are replaced with statistical penalties for certain configurations of link (plaquette) variables.
We conclude this section with defining the meaning of the term ``order of expansion" for the
resulting field theory. In Sec.~\ref{sec:IV} we deliberately choose the most general
grassmannization scheme for the 2D Ising model to illustrate and test how our construction
works in practice. We stress that our goal is not to solve the 2D Ising model
exactly\cite{Onsager44, mccoy1973} or determine a series expansion
for it\cite{Oitmaa} but to develop a general framework---including a numerical component---for applying Grassmann variables
to link and plaquette models and show that its evaluation can be done realistically. After determining all field-theoretic parameters, characterizing various
interaction terms and source operators for calculating correlation functions (with and without magnetic
field), and explaining Feynman rules for constructing the perturbative expansion, we proceed with
the description of algorithms to compute them (Monte Carlo and deterministic)
in Sec.~\ref{sec:V}. Results are presented and discussed in Sec.~\ref{sec:VI}.
By comparing with the exact solution we show that the critical exponent
$\gamma$ for magnetic susceptibility could be determined with an accuracy of about $5$\%, while
the critical point could be located with sub-percent accuracy. In Sec.~\ref{sec:VII} we discuss the
implementation of the self-consistent skeleton technique within the so-called $G^2W$-expansion,\cite{Heidin}
which computes irreducible (skeleton) diagrams for the self-energy and ``polarization" function
and uses them in the Dyson equations in order to find the renormalized propagators and screened interactions.
We also present results that emerge when this technique is based on several low-order diagrams.
We briefly comment in Sec.~\ref{sec:VIII} that both the bare-series and the $G^2W$-expansion methods readily
solve the 1D Ising model exactly. We conclude with prospects for future work in Sec.~\ref{sec:IX}.
\section{Grassmannization of local link models}
\label{sec:II}
\subsection{Local link models}
For the purposes of this article, we mean by a link model a classical statistical model
with states labeled by a set of {\it discrete} variables $\{ \alpha_b \}$ residing on
links (bonds) of a certain lattice. In addition, we require that the ground state
is unique. Without loss of generality, it can be chosen to be the state with $\alpha_b=0$
on each link $b$.
We further narrow the class of link models---to which we will refer as {\it local} link models---by the requirement that the statistical weight of a state factors into a product of link and site weights
(to be referred to as link and site factors, respectively). A link factor, $f_b$, is a function
of the corresponding link variable, $f_b\equiv f(\alpha_b)$. The site factor, $g_j$, is a function that
depends on all variables residing on links attached to the site $j$, denoted as $\{ \alpha_b \}_j$. Then, $g_j \equiv g(\{ \alpha_b \}_j)$.
Solely to avoid heavy notation, we assume translational invariance: $f(\alpha_b) \equiv f_b$ is the same function on all links, and $g_j$ is site independent, $g_j \equiv g$.
Given that only the relative weights of the states matter,
we set $f(0)=1$ and $g(0_j)=1$, where $0_j$ stands for the $\{ \alpha_b=0 \}_j$ set.
The site factors play the key role in link models. They describe interactions between
(otherwise independent) link degrees of freedom. In particular, this interaction can take the extreme form
of a {\it constraint} on the allowed physical configurations of $\{ \alpha_b \}_j$ ({\it e.g.,} the zero-divergency constraint in J-current models,\cite{Villain}
or the even-number constraint in the high-temperature expansion
of $Z_2$ models), in which case $g_j(\{ \alpha_b \}_j)$ is identically {\it zero} for each non-physical state
of $\{ \alpha_b \}_j$.
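As a concrete toy example of a local link model with a hard constraint (our own illustration, with assumed weights), take $Z_2$ link variables $\alpha_b\in\{0,1\}$ on a $2\times 2$ torus with $f(1)=t$, $f(0)=1$, and a site factor enforcing an even number of occupied incident links. Brute-force enumeration confirms that this link model is exactly the high-temperature expansion of the Ising model on the same graph, $Z_{\rm Ising}(\beta) = 2^{V}\cosh^{E}\!\beta\;Z_{\rm link}(\tanh\beta)$:

```python
import math
from itertools import product

def torus_links():
    # links of the 2x2 torus; sites 0..3 labeled by s = x + 2*y
    links = []
    for x in range(2):
        for y in range(2):
            s = x + 2*y
            links.append((s, ((x + 1) % 2) + 2*y))  # horizontal link
            links.append((s, x + 2*((y + 1) % 2)))  # vertical link
    return links

LINKS = torus_links()   # 8 links (site pairs repeat on a 2-torus; that is fine)

def Z_link(t):
    # sum over alpha in {0,1}^8 of prod_b f(alpha_b) * prod_j g({alpha}_j),
    # with f(1) = t and g enforcing an even number of occupied links per site
    total = 0.0
    for alphas in product((0, 1), repeat=len(LINKS)):
        deg = [0]*4
        for a, (u, v) in zip(alphas, LINKS):
            deg[u] += a
            deg[v] += a
        if all(d % 2 == 0 for d in deg):
            total += t**sum(alphas)
    return total

def Z_ising(beta):
    total = 0.0
    for spins in product((-1, 1), repeat=4):
        e = sum(spins[u]*spins[v] for (u, v) in LINKS)
        total += math.exp(beta*e)
    return total

beta = 0.3
lhs = Z_ising(beta)
rhs = (2**4) * math.cosh(beta)**8 * Z_link(math.tanh(beta))
```

With $t=1$ the link partition function simply counts the even subgraphs, $2^{E-V+1}=32$ on this torus.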
\subsection{Grassmannization}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.18\columnwidth, angle=-90]{fig_fields.pdf}
\caption{Assignment of Grassmann fields for link (left) and site (right) factors.
Upon integration, the labels of the Grassmann variables must be equal in order to
connect variables from all factors (see text).
}
\label{fig:fields}
\end{figure}
For each label $\alpha \neq 0$ of the link $b$, introduce four Grassmann variables: $\xi_{\alpha,b}$, $\xi_{\alpha,b}'$, $\bar{\xi}_{\alpha,b}$, and $\bar{\xi}_{\alpha,b}'$. For a textbook
introduction to Grassmann variables, we refer to Ref.~\onlinecite{negele1988quantum}.
For $\alpha = 0$ we assume that
$\xi_{0,b}=\xi_{0,b}'=\bar{\xi}_{0,b}=\bar{\xi}_{0,b}'=1$.
In terms of these variables, define the Grassmann weight---a product of link, $A_b$, and site, $B_j$, factors such that tracing over all degrees of freedom yields the partition function $Z = {\rm Tr} \prod A_b \prod B_j$---by the following rules,
\begin{eqnarray}
A_b & = & \exp \left\{ \sum_{\alpha \neq 0} \left[
\frac{\bar{\xi}_{\alpha, b}' \xi_{\alpha, b}'}{\sqrt{f(\alpha)}} +
\frac{\bar{\xi}_{\alpha, b} \xi_{\alpha, b}}{\sqrt{ f(\alpha)}} \, \right]\right\} \nonumber \\
& = & \prod_{\alpha \neq 0}
\, \exp \left\{ \frac{\bar{\xi}_{\alpha, b}' \xi_{\alpha, b}'}{\sqrt{f(\alpha)}} +
\frac{\bar{\xi}_{\alpha, b} \xi_{\alpha, b}}{\sqrt{ f(\alpha)}} \right\} \, ,
\label{link}
\end{eqnarray}
\begin{eqnarray}
B_j\, =\, \sum_{ \{ \alpha_b \}_j} g(\{ \alpha_b \}_j) \prod_{b \in \{ b \}_j} \breve{\xi}_{\alpha_{b}, b}\, \breve{\xi}_{\alpha_{b}, b}^*\nonumber \\
\, =\, 1 \, + \sum_{ \{ \alpha_b \}_j \neq 0_j} g(\{ \alpha_b \}_j) \prod_{b \in \{ b \}_j} \breve{\xi}_{\alpha_{b}, b}\, \breve{\xi}_{\alpha_{b}, b}^* \, .
\label{site}
\end{eqnarray}
Here $\{ b \}_j$ stands for the set of all links incident to the site $j$, and variables
$\breve{\xi}_{\alpha_{b}, b}$ and $\breve{\xi}_{\alpha_{b}, b}^*$ are defined differently
for different links. We first introduce the notion of direction (on each link)
so that one of the two link ends becomes ``incoming" and its counterpart ``outgoing"
(with respect to the site adjacent to the end). Next, we assign (see Fig.~\ref{fig:fields} for an illustration)
\begin{equation}
\begin{array}{*{5}{l}}
\breve{\xi}_{\alpha_b, b} =\xi_{\alpha_b, b}\, , \;\; \breve{\xi}_{\alpha_b, b}^*= \xi_{\alpha_b, b}' \; \; \mbox{(for incoming end)} ,\\
\breve{\xi}_{\alpha_b, b} =\bar{\xi}_{\alpha_b, b}'\, , \;\; \breve{\xi}_{\alpha_b, b}^*= \bar{\xi}_{\alpha_b, b} \; \; \mbox{(for outgoing end) .}
\end{array}
\label{convention}
\end{equation}
The claim is that the Grassmann integral of the weight over all variables reproduces the partition function of the original link model. For a link $b$ to yield a non-zero contribution to the integral, the link labels in (\ref{site}) at the sites of the incoming ($j=1$) and outgoing ($j=2$) ends of the link must match:
$\alpha_1=\alpha_2$. Indeed, at $\alpha_1\neq \alpha_2$, it is not possible to find
an appropriate term in the expansion of the link exponential (\ref{link})
such that---upon multiplying by the site factors $\breve{\xi}_{\alpha_1, b} \, \breve{\xi}_{\alpha_1, b}^* $ and $\breve{\xi}_{\alpha_2, b} \, \breve{\xi}_{\alpha_2, b}^*$---all powers of the Grassmann variables $\xi_{\alpha_1,b}$, $\xi_{\alpha_1, b}'$, $\bar{\xi}_{\alpha_1, b}$, $\bar{\xi}_{\alpha_1, b}'$, $\xi_{\alpha_2, b}$, $\xi_{\alpha_2, b}'$, $\bar{\xi}_{\alpha_2, b}$, $\bar{\xi}_{\alpha_2, b}'$ are exactly equal to 1 to ensure that the Grassmann integral is non-zero.
For $\alpha_1=\alpha_2\equiv \alpha$,
we need to consider two cases: $\alpha =0$ and $\alpha \neq 0$.
In the first case, the non-zero contribution to the integral comes from the product of second terms in the expansion of the link exponentials (\ref{link}):
\begin{eqnarray}
&\!& \!\!\!\!\!\!\!\!
\prod_{\gamma \neq 0} \int \mathcal{D} [ \bar{\xi}' \xi' \bar{\xi} \xi ]_{\gamma }
\, \exp \left\{ \frac{\bar{\xi}_{\gamma}' \xi_{\gamma}'}{\sqrt{f(\gamma)}} +
\frac{\bar{\xi}_{\gamma} \xi_{\gamma}}{\sqrt{ f(\gamma)}} \right\}
\nonumber \\
&=&\prod_{\gamma \neq 0} \int \mathcal{D} [ \bar{\xi}' \xi' \bar{\xi} \xi ]_{\gamma }
\left[ 1 + \frac{\bar{\xi}_{\gamma}' \xi_{\gamma}'}{\sqrt{f(\gamma)}} \right] \,
\left[ 1 + \frac{\bar{\xi}_{\gamma} \xi_{\gamma}}{\sqrt{ f(\gamma)}} \right] \nonumber \\
&=& \prod_{\gamma \neq 0} \frac{1}{f(\gamma)} \equiv \frac{1}{f_*} \;,
\label{f_star}
\end{eqnarray}
where we defined $f_*$ in the last step.
In the second case, the two end sites contribute the factor $\xi_{\alpha, b}\, \xi_{\alpha, b}'\, \bar{\xi}_{\alpha, b}' \, \bar{\xi}_{\alpha, b}=\bar{\xi}_{\alpha, b}' \, \xi_{\alpha, b}' \, \bar{\xi}_{\alpha, b} \, \xi_{\alpha, b}$. Now we have to consider the first term in the expansion
of the link exponential for state $\alpha$, while for other variables the calculation is repeated
as in (\ref{f_star})
\begin{eqnarray}
&\!& \!\!\!\!\!\!\!
\prod_{\gamma \neq 0} \int \mathcal{D} [ \bar{\xi}' \xi' \bar{\xi} \xi ]_{\gamma }
\bar{\xi}'_{\alpha} \bar{\xi}_{\alpha}
\left[ 1 + \frac{\bar{\xi}_{\gamma}' \xi_{\gamma}'}{\sqrt{f(\gamma)}} \right] \,
\left[ 1 + \frac{\bar{\xi}_{\gamma} \xi_{\gamma}}{\sqrt{ f(\gamma)}} \right]
\xi_{\alpha} \xi'_{\alpha} \nonumber \\
& = & \prod_{\gamma \neq 0, \alpha} \frac{1}{f(\gamma)} = \frac{f(\alpha)}{f_*}.
\label{non_groundstate}
\end{eqnarray}
We see that, apart from the irrelevant global factor $\prod_b 1/f_*$,
we reproduce the configuration space and weight factors of the original link model.
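The two one-link integrals above can be verified mechanically with a tiny Grassmann-algebra sketch (our own implementation; Berezin sign conventions vary, and the code fixes $\int d\bar{\xi}\,d\xi\,\bar{\xi}\xi = 1$, with the rightmost differential acting first):

```python
import math

# Grassmann elements: dict {ascending tuple of generator ids: coefficient}
def gmul(a, b):
    out = {}
    for ma, ca in a.items():
        for mb, cb in b.items():
            m = list(ma + mb)
            if len(set(m)) != len(m):
                continue                      # any repeated generator vanishes
            sign = 1
            for i in range(len(m)):           # bubble sort, tracking parity
                for j in range(len(m) - 1 - i):
                    if m[j] > m[j + 1]:
                        m[j], m[j + 1] = m[j + 1], m[j]
                        sign = -sign
            key = tuple(m)
            out[key] = out.get(key, 0.0) + sign*ca*cb
    return out

def berezin(a, g):
    # integrate over generator g: anticommute it to the rightmost slot, drop it
    out = {}
    for m, c in a.items():
        if g in m:
            i = m.index(g)
            key = m[:i] + m[i + 1:]
            out[key] = out.get(key, 0.0) + (-1)**(len(m) - 1 - i)*c
    return out

def integrate_all(expr):
    # measure D[xibar' xi' xibar xi]; the rightmost differential acts first
    for g in (3, 2, 1, 0):
        expr = berezin(expr, g)
    return expr.get((), 0.0)

# generators for a single link state alpha: 0 = xibar', 1 = xi', 2 = xibar, 3 = xi
f_alpha = 2.5
c = 1.0/math.sqrt(f_alpha)
link = gmul({(): 1.0, (0, 1): c},    # 1 + xibar' xi' / sqrt(f)
            {(): 1.0, (2, 3): c})    # 1 + xibar  xi  / sqrt(f)

empty = integrate_all(link)          # empty link: expect 1/f(alpha)

left = {(0, 2): 1.0}                 # xibar' xibar (already ascending)
right = {(1, 3): -1.0}               # xi xi' = -(xi' xi), stored ascending
occupied = integrate_all(gmul(gmul(left, link), right))  # expect 1
```

Here `empty` evaluates to $1/f(\alpha)$ and `occupied` to $1 = f(\alpha)\cdot 1/f(\alpha)$, the per-mode factors behind the two integrals above.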
\subsection{Field-theoretical formulation}
To generate the Feynman diagrammatic expansion, we need to represent the Grassmann weight factor in the exponential form. The link factors (\ref{link}) have the form of Gaussian exponentials
already. Hence, it is only the site factors that need to be rewritten identically as
\begin{equation}
B_j\, =\, \exp \left[ \sum_{ \{ \alpha_b \}_j } \lambda(\{ \alpha_b \}_j) \prod_{b \in \{ b \}_j} \breve{\xi}_{\alpha_b, b} \, \breve{\xi}_{\alpha_b, b}^*
\right] \, .
\label{site2}
\end{equation}
The constants $ \lambda(\{ \alpha_b \}_j)$ are readily related to the site
factors $g(\{ \alpha_b \}_j)$ by simple algebraic equations obtained by expanding
the exponential and equating sums of similar terms to their counterparts in the r.h.s. of Eq.~(\ref{site}).
By expanding the non-Gaussian part of the exponential (\ref{site2}) and applying Wick's theorem,
we arrive at Feynman rules for the diagrammatic series.
The reader should not be misled into thinking that an expansion of the exponential (\ref{site2})
simply takes us back to Eq.~(\ref{site}). Recall that connected Feynman diagrams are formulated
for the free energy density, not the partition function, and summation over
all lattice sites is done for a given set of interaction vertexes in the graph,
as opposed to the summation over all vertex types for a given set of lattice points.
Therefore, the ``coupling constants" in Feynman diagrams are $\lambda$'s, not $g$'s.
\subsection{Absorbing link factors into site factors}
The separation of the weight factors into link and site ones is
merely a convention. Indeed, each link factor can be ascribed
to one of the two site factors at its ends. This leads to a
slightly different Grassmannization protocol.
This trick may prove convenient for generalization to non-local models
considered below.
\section{Generalizations}
\label{sec:III}
\subsection{Plaquette models}
A plaquette model can be viewed as a certain generalization of the local link model.
States (configurations) of a plaquette model are indexed by a set of discrete labels
residing on (oriented) plaquettes of a hyper-cubic lattice.
The plaquette label $\alpha$ takes on either a finite or countably infinite number of values.
The statistical weight of each state factors into a product of plaquette and edge weights (to be referred to as plaquette and edge factors, respectively). A plaquette factor, $f$, is a function of the corresponding plaquette variable, $f\equiv f(\alpha)$. An edge factor, $g$, is a function which depends on the labels of all plaquettes sharing this edge (this set of labels will be denoted as
$\{ \alpha_p \}_j$ for the edge $j$); it encodes, if necessary, constraints on the allowed
sets of $\{ \alpha_p \}_j$.
Without loss of generality (up to a global normalization factor), we identify the
``ground state" as $\alpha_p = 0$ for all plaquettes, and set $f(0)=1$.
The orientation of the plaquette (for some models it is merely a matter of convenience)
is enforced by an ordered enumeration of sites at its boundary. For a plaquette $p$, the vertex label $\nu \equiv \nu_p = 0,\, 1,\, 2,\, 3$ enumerates four vertices in such a way that $\nu \pm 1$ modulo 4 stands for the next/previous vertex with respect to the vertex $\nu$ in the clockwise direction.
For each state $\alpha \neq 0$ of the plaquette $p$, we introduce eight Grassmann variables: $\xi_{\alpha,p, \nu_p}$, $\bar{\xi}_{\alpha,p, \nu_p}, \nu_p = 0,1,2,3$.
As before, for $\alpha=0$ the variables $\xi$ and $\bar{\xi}$ are not Grassmannian, $\xi_{0, p, \nu } =0$, $\bar{\xi}_{0,p,\nu }=1$.
The corresponding plaquette weight in the Grassmann partition function reads
\begin{equation}
A_p = \exp \left\{ \sum_{\alpha \neq 0}\, [-f(\alpha)]^{-1/4} \sum_{\nu_p=0}^{3}\, \bar{\xi}_{\alpha,p, \nu_p}\, \xi_{\alpha,p, \nu_p} \right\}\, .
\label{plaquette}
\end{equation}
Note a close analogy with Eq.~(\ref{link}).
Site weights Eq.~(\ref{site}) are now replaced with edge weights $B_j$.
Using the notation $\{ p \}_j$ for the set of all plaquettes sharing the
edge $j$, and $0_j$ for the state when all plaquettes in this set have
$\alpha_p=0$, we write
\begin{equation}
B_j\, =\, 1 + \sum_{ \{ \alpha_p \}_j \neq 0_j} g(\{ \alpha_p \}_j) \prod_{p \in \{ p \}_j} \xi_{\alpha_p, p, (\nu_p^{(j)}+1)} \, \bar{\xi}_{\alpha_p, p, \nu_p^{(j)}}\, ,
\label{edge}
\end{equation}
where $\nu_p^{(j)}$ is the site enumeration index within the plaquette $p$, with respect to which the edge $j$ is outgoing. [Accordingly, the edge $j$ is incoming with respect to site $(\nu_p^{(j)}+1)$.] In what follows, we will associate $\nu_p^{(j)}$ not only with the site, but also with the corresponding edge.
The proof that the classical and Grassmannian partition functions are identical (up to a global factor) is similar to the one for the link model after we notice that a non-zero contribution from
plaquette $p$ is possible only if the same plaquette label $\alpha_p$ is used in all edge weights.
The $\alpha =0$ contribution comes from the term
\begin{equation}
\prod_{\gamma \neq 0} \left[ -{1\over f(\gamma)}\, \prod_{\nu_p=0}^{3}\, \bar{\xi}_{\gamma, p, \nu_p} \, \xi_{\gamma, p, \nu_p} \right] \qquad (\mbox{at}~~\alpha =0)
\label{groundstate_p}
\end{equation}
in the expansion of the exponential (\ref{plaquette}). It contributes a factor $1/q_*$, where
\begin{equation}
q_* = \prod_{\gamma \neq 0} (-1) f(\gamma) \, .
\label{f_star_p}
\end{equation}
The $\alpha \neq 0$ contribution comes from the plaquette term
\begin{equation}
\prod_{\gamma \neq 0, \alpha} \left[ -{1\over f(\gamma)}\, \prod_{\nu_p=0}^{3}\, \bar{\xi}_{\gamma, p, \nu_p} \, \xi_{\gamma, p, \nu_p} \right] \qquad (\mbox{at}~~\alpha \neq 0)
\label{non_groundstate_p}
\end{equation}
multiplied by the product $\prod_{\nu_p=0}^{3}\, \bar{\xi}_{\alpha, p, \nu_p} \, \xi_{\alpha, p, \nu_p}$ originating from the boundary edge terms $ \xi_{\alpha,p, (\nu_p^{(j)}+1)} \, \bar{\xi}_{\alpha,p, \nu_p^{(j)}}$. Because of the Grassmann anticommutation rules, this four-edge factor yields an additional minus sign, which explains the negative sign
in front of $f(\alpha)$ in Eq.~(\ref{plaquette}).
Upon Grassmann integration, the contribution to the partition function of the resulting term equals to $f(\alpha)/q_*$.
Feynman diagrammatics for the plaquette model is obtained by following the same basic steps as for
the link models. The Gaussian part is given by Eq.~(\ref{plaquette}) with four pairs of Grassmann
fields for every non-zero plaquette state.
The interaction part of the Grassmann action is contained in edge weights (\ref{edge})
after they are written in an exponential form
\begin{equation}
B_j\, =\, \exp \left[ \sum_{ \{ \alpha_p \}_j} \lambda(\{ \alpha_p \}_j) \prod_{p \in \{ p \}_j} \xi_{\alpha_p, p, (\nu_p^{(j)}+1)} \, \bar{\xi}_{\alpha_p, p, \nu_p^{(j)}} \right] \, ,
\label{edge2}
\end{equation}
with the constants $ \lambda(\{ \alpha_p \}_j)$ unambiguously related to the edge factors $g(\{ \alpha_p \}_j)$.
\subsection{Unconstrained discrete models with pair-wise interaction}
The hallmark of the considered link (plaquette) models is the non-trivial interaction introduced via site (edge) factors. It is due to this type of interaction---and, in particular, its extreme form of a constraint on allowed combinations of discrete variables---that we had to introduce multiple
Grassmann variables for each state of the link (plaquette). The situation simplifies dramatically
if we are dealing with unconstrained discrete degrees of freedom with pair interactions between them.
Consider a link model defined by the statistical weight
\begin{equation}
W(\{ \alpha_b \}) = \prod_{b_1 , b_2}
F(\, \alpha_{b_1}, b_1; \, \alpha_{b_2}, b_2 ) \;,
\label{Wlink22}
\end{equation}
based on products of two-link factors. Without loss of generality, these factors can be
cast into the exponential form
\begin{equation}
W(\{ \alpha_b \}) = \prod_{b_1 , b_2}\,
e^{-(1/2) \eta_{\, \alpha_{b_1}, b_1;\, \alpha_{b_2}, b_2}} \;.
\label{Wlink2}
\end{equation}
We assume that all factors in the product are bounded and that the $\eta$-matrix is well-conditioned.
Grassmannization of this model can be done by taking advantage of properties of Gaussian integrals
that allow one to express (\ref{Wlink2}) identically (up to normalization) as
\begin{equation}
W(\{ \alpha_b \}) = \int {\cal D} X \prod_{b} \, e^{iX_{\,\alpha_b, b}}
\, W_G( \{ X_{\, \alpha_b, b} \} ) \;.
\label{Wlink3}
\end{equation}
Here $\{ X_{\, \alpha_b, b} \}$ is a collection of auxiliary real continuous variables.
For briefness, we do not show explicitly the Gaussian weight $W_G$ that is
uniquely defined by the values of all pairwise averages performed with this weight
\begin{equation}
\eta_{\, \alpha_{b_1}, b_1; \,\alpha_{b_2}, b_2} \, =\, \langle X_{ \alpha_{b_1}, b_1}
X_{\alpha_{b_2}, b_2 } \rangle \, .
\label{expon1}
\end{equation}
What we achieve for a fixed set of $X$ variables is a link model that contains only single-link factors
\begin{equation}
\forall b: \qquad f_b(\alpha_b) = \, e^{iX_{\alpha_b,b }}.
\label{expon}
\end{equation}
For models with site constraints, link factors can be attributed to site factors
at the incoming (or outgoing) ends with subsequent Grassmannization of the latter
as discussed above. For unconstrained models, Grassmannization is accomplished by replacing sums over link variables with
\begin{equation}
\sum_{\alpha_b} f_b(\alpha_b) \, \to \,
{\cal W}^{(G)}_b \, =\, \exp \left[ \bar{\xi}_b \xi_b \left( \sum_{\alpha_b} \, e^{iX_{\, \alpha_b, b}} \right) \right] \;.
\label{weight_GL}
\end{equation}
Note that here Grassmann variables have nothing to do with the discrete index $\alpha_b$,
in contrast with previous considerations. The resulting formulation contains both
Grassmann and real-number integrations.
Clearly, all considerations can be repeated identically (up to a trivial change in notations)
for a model based on discrete variables $\alpha_s$ residing on lattice sites when
the configuration weight is given by
\begin{equation}
W(\{ \alpha_s \}) = \prod_{s_1 , s_2}\,
e^{-(1/2) \eta_{\, \alpha_{s_1}, s_1;\, \alpha_{s_2}, s_2}} \; .
\label{Q_1_Q_2}
\end{equation}
\subsection{Order of expansion}
\label{subsec:D}
The notion of the order of expansion is absolutely central for practical applications
when diagrammatic series are truncated. Normally, it is defined as an integer non-negative power of a certain dimensionless parameter $\zeta$ playing the role of a generalized coupling constant,
such that the diagrammatic expansion corresponds to a Taylor expansion in $\zeta$ about the point $\zeta=0$. Without loss of generality,
we can always select $\zeta$ (by an appropriate rescaling) in such a way that the physical value of $\zeta$ is 1. This is especially convenient in cases
when there is more than one interaction vertex, and ascribing different powers of
$\zeta$ to them results in (re-)grouping of different terms in the series.
A reasonable guiding principle behind such a (re-)grouping is the requirement to end up with
Taylor series having finite convergence radius around $\zeta=0$.
The latter is guaranteed if the theory is analytic in $\zeta$ at the origin; the necessary condition for this to be true is the absence of Dyson's collapse when changing
the sign (more generally, the phase) of $\zeta$.
As an illustration, consider the theory (\ref{Wlink22})-(\ref{Wlink2}) and its
Grassmann counterpart (\ref{weight_GL}). Introduce the $\zeta$-dependence by the replacement
\begin{equation}
e^{iX_{\, \alpha_b, b } } \, \to \, e^{i\zeta X_{\, \alpha_b, b}} .
\label{zeta}
\end{equation}
In terms of the original theory, the replacement (\ref{zeta})
means $\eta \to \zeta^2 \eta$, for all $\eta$'s in Eq.~(\ref{Wlink2}).
If amplitudes of all $\eta$ values in (\ref{Wlink2}) are bounded,
we expect that such a dependence on $\zeta$ is analytic not only for a finite system,
but also in the thermodynamic limit at finite temperature.
In the Grassmann action (\ref{weight_GL}), the expansion of the exponential $e^{i\zeta X_{\, \alpha_b, b}}$ in powers of $\zeta$ generates an infinite series of interaction vertexes (the zeroth-order term defines the harmonic action):
\begin{equation}
\bar{\xi}_b \xi_b \sum_{\alpha_b } \left( i \zeta X_{\, \alpha_b, b} - {1\over 2} \zeta^2
X_{\, \alpha_b, b}^2 - {i\over 3!} \zeta^3 X_{\, \alpha_b, b}^3 + \ldots \right).
\label{coupling}
\end{equation}
Higher-order vertexes in $X$ come with a higher power of $\zeta$ and this sets unambiguously the rules for defining the diagram order.
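As a quick numerical consistency check of the expansion (\ref{coupling}), one can compare the truncated series for $e^{i\zeta X}$ against the exponential itself (a trivial sketch; the chosen values of $\zeta$ and $X$ are arbitrary):

```python
import cmath

# third-order truncation of exp(i*zeta*X); the neglected term is O(zeta^4)
zeta, X = 1e-3, 1.7
series = 1 + 1j * zeta * X - zeta**2 * X**2 / 2 - 1j * zeta**3 * X**3 / 6
assert abs(cmath.exp(1j * zeta * X) - series) < 1e-11
```

The alternating pattern of real and imaginary coefficients is exactly the one displayed in (\ref{coupling}).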
\section{Illustration for the 2D Ising model}
\label{sec:IV}
\begin{figure}[tbp]
\centering
\includegraphics[width=1.0\columnwidth]{prediagrams.pdf}
\caption{(Color online) Four classes of generic pre-diagrams for link models on a square lattice. The elements in the first and the third row can only occur at the end points of the spin correlator (indicated by the open circle), the elements in the second and fourth row are the generic basic vertexes of the theory ascribed to the sites of the underlying lattice. There are hence 4 $V_1$ vertexes with 1 leg (first row, $U, R, D,$ and $L$), 6 $V_2$ vertexes with 2 legs (second row, $RU, RD, LD, LU, UD$ and $LR$), 4 $V_3$ vertexes with 3 legs (third row, $LUR, URD, LDR,$ and $DLU$), and 1 $V_4$ vertex with 4 legs (fourth row, $RULD$). Connected to the legs of these vertexes are pairs of bi-Grassmann fields (thick dashed lines (blue and red)) that reside on the links of the underlying 2D lattice. Thin dashed lines (showing lattice links adjacent to the site of the vertex) are to guide the eye and have no other meaning than showing the underlying 2D lattice. The generalization to other dimensions is straightforward.} \label{fig:prediagrams}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[trim = 0 0mm 0mm 0mm, clip, width=1.0\columnwidth]{rho_generic_a.pdf}
\caption{(Color online) The first- and third-order diagrams for $\rho_{(1,0)}$ (at $h=0$) based on expanding (\ref{sigma_sigma_correlator_1}). The contribution of these diagrams is $\zeta+ 2\zeta^3$.}
\label{fig:rho_generic_a}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[trim = 0 0mm 0mm 0, clip, width=1.0\columnwidth]{rho_generic_b1.pdf}
\caption{(Color online) Fifth-order diagrams for $\rho_{(1,0)}$ (at $h=0$) based on expanding (\ref{sigma_sigma_correlator_1}): These four diagrams involve a three-leg end vertex. Each diagram contributes $(-2)\zeta^5$.}
\label{fig:rho_generic_b1}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[trim = 0 0mm 0mm 0, clip, width=1.0\columnwidth]{rho_generic_b2.pdf}
\caption{(Color online) Fifth-order diagrams for $\rho_{(1,0)}$ (at $h=0$) based on expanding (\ref{sigma_sigma_correlator_1}): These four counterparts of the diagrams shown in
Fig.~\ref{fig:rho_generic_b1} are obtained by replacing a three-leg end vertex with a one-leg end vertex.
Each diagram contributes $\zeta^5$.}
\label{fig:rho_generic_b2}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[trim = 0 0mm 0mm 0, clip, width=1.0\columnwidth]{rho_generic_b.pdf}
\caption{(Color online) The four remaining counterparts (cf. Fig.~\ref{fig:rho_generic_b2}) to Fig.~\ref{fig:rho_generic_b1}. }
\label{fig:rho_generic_b}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[trim = 0 0mm 0mm 0mm, clip, width=1.0\columnwidth]{rho_generic_c.pdf}
\caption{(Color online) Additional fifth-order diagrams for $\rho_{(1,0)}$ (at $h=0$) involving two one-leg end vertexes. Each diagram contributes $\zeta^5$.}
\label{fig:rho_generic_c}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[trim = 0 0mm 0mm 0mm, clip, width=0.6\columnwidth]{rho_generic_d.pdf}
\caption{(Color online) Fifth-order diagrams for $\rho_{(1,0)}$ (at $h=0$) containing a link with multiple Grassmann pairs. The net sum of the shown diagrams is $- \zeta^5$, because there are three ways of associating the primed and non-primed propagators along the bottom link,
two of them contribute with the negative sign (upper right and lower left panel) and the third one is contributing with the positive sign (lower right panel). The remaining possibility
(shown in the upper left panel) is not allowed since it produces a disconnected diagram.}
\label{fig:rho_generic_d}
\end{figure}
\subsection{Model and observables}
Consider the 2D Ising model on the square lattice with the Hamiltonian
\begin{equation}
-H/T = \beta \sum_{\langle i,j \rangle} \sigma_i \sigma_j + \sum_i h_i \sigma_i.
\end{equation}
The Ising variables $\sigma = \pm 1$ live on the sites of the 2D square lattice and interact ferromagnetically with their nearest neighbors, as represented by the first term in the Hamiltonian. The coupling, measured in units of the temperature $T$, enters as the dimensionless constant $\beta$.
Additionally, every spin feels a dimensionless magnetic field $h_i=h$, which can be taken
$h \ge 0$ without loss of generality.
The partition function of the Ising model reads
\begin{equation}
Z = \sum_{ \{ \sigma_i \} }
\prod_{\langle i,j \rangle} e^{\beta \sigma_i \sigma_j} \; \prod_{i} e^{h_i \sigma_i}.
\end{equation}
The most typical observable of the Ising model is the spin-spin correlation function $\rho_{ij}$,
\begin{equation}
\rho_{ij} = \langle \sigma_i \sigma_j \rangle = \frac{1}{Z} \left.
\frac{ \partial^2 Z}{\partial h_i \partial h_j} \right|_{h_i = h_j= h} \,.
\label{sigma_sigma_correlator_1}
\end{equation}
\subsection{Grassmannization of the high-temperature expansion}
Using the well-known identities
\begin{eqnarray}
e^{\beta \sigma_i \sigma_j} & = & \cosh \beta \left( 1 + \sigma_i \sigma_j \tanh \beta \right) \nonumber \\
e^{h \sigma_i} & = & \cosh h \left( 1 + \sigma_i \tanh h \right),
\end{eqnarray}
the partition function can be written as $Z = Z_0 Z'$ with
$Z_0 = ( \cosh \beta)^{2N} ( \cosh h)^N$ for a lattice of $N$ sites and $2N$ links. With the notation
$\zeta = \tanh \beta$ and $\eta = \tanh h$ the remaining factor is given by
\begin{equation}
Z' = \sum_{ \{ \sigma_i \} } \, \prod_{\langle i,j \rangle} ( 1 + \sigma_i \sigma_j \zeta )
\, \prod_{i} ( 1 + \sigma_i \eta).
\end{equation}
Upon summation over spin variables
we are left with a link model, where link variables take only two values, 0 or 1, to
specify whether we are dealing with the first or the second term in the sum
$(1 + \sigma_i \sigma_j \zeta)$. In the partition function, terms with an odd power of $\sigma_i$ on any of the sites yield zero upon spin summation. The remaining terms depend on link
variables in a unique way.
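The reduction of the spin sum to a link model is easy to verify on a small cluster. The sketch below (our illustration; the open $2\times 2$ plaquette is an arbitrary choice) compares the direct spin sum with the sum over link configurations, where only bond subsets with even degree at every site survive the spin summation, each contributing $2^N \zeta^{k}$ for $k$ occupied links:

```python
from itertools import product, combinations

# open 2x2 plaquette: 4 sites, 4 links around the square
N = 4
bonds = [(0, 1), (1, 3), (3, 2), (2, 0)]

def z_spin(zeta):
    """High-temperature form: sum over spins of prod_links (1 + zeta s_i s_j)."""
    total = 0.0
    for s in product((1, -1), repeat=N):
        w = 1.0
        for a, b in bonds:
            w *= 1 + zeta * s[a] * s[b]
        total += w
    return total

def z_link(zeta):
    """Link model: alpha_b in {0, 1}; only bond subsets with even degree at
    every site survive, each contributing 2^N * zeta^(number of links)."""
    total = 0.0
    for k in range(len(bonds) + 1):
        for subset in combinations(bonds, k):
            deg = [0] * N
            for a, b in subset:
                deg[a] += 1
                deg[b] += 1
            if all(d % 2 == 0 for d in deg):
                total += zeta ** k
    return 2 ** N * total

for zeta in (0.1, 0.37, 0.9):
    assert abs(z_spin(zeta) - z_link(zeta)) < 1e-9
# on this cluster the only even subgraphs are the empty set and the full
# square, so z_link(zeta) = 16 * (1 + zeta**4)
assert abs(z_link(0.5) - 16 * (1 + 0.5 ** 4)) < 1e-12
```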
The formalism of the previous section can be straightforwardly applied, and we obtain
\begin{eqnarray}
& f(0) = 1\,, \;\; f(1) = f_* = \zeta \,, \\
& g(0) = g(2) = g(4)=1 \;, \;\; g(1) = g(3)=\eta \,. \label{eq:factors_f_g}
\end{eqnarray}
Here we label site factors using the total sum of incident link variables,
$\sum_{b \in \{ b \}_j } \alpha_b$, to avoid unnecessary rank-4 tensor notations.
If we further redefine $Z_0 \to Z_0 2^N f_*^{2N}$, then the Grassmann representation
of the partition function $Z'$ is given by
\begin{eqnarray}
Z' &= & \int \mathcal{D}[ \bar{\xi}' \xi' \bar{\xi} \xi ] \, \prod_{b} \exp \left( \frac{1}{\sqrt{\zeta}} \bar{\xi}'_{b} \xi'_{b} + \frac{1}{\sqrt{\zeta}} \bar{\xi}_{b} \xi_{b} \right) \nonumber \\
{} & {} & \times \prod_j \exp \left( \sum_{ \{ \alpha_b \}_j } \lambda(\{ \alpha_b \}_j) \prod_{b \in \{ b \}_j} \breve{\xi}_{\alpha_b, b} \breve{\xi}^*_{\alpha_b, b} \right) \,.
\end{eqnarray}
\subsection{Vertex coefficients}
We now compute the factors $\lambda$. To this end, we first introduce notations
(for a fixed site $j$ and suppressing the site index for clarity)
\begin{eqnarray}
V_1\!\! &=&\!\! \breve{\xi}_R \breve{\xi}_R^* + \breve{\xi}_U \breve{\xi}_U^* + \breve{\xi}_L \breve{\xi}_L^* + \breve{\xi}_D \breve{\xi}_D^* = n_R + n_U + n_L + n_D, \nonumber \\
V_2\!\! &=&\!\! n_R n_U + n_R n_L + n_R n_D + n_U n_L + n_U n_D + n_L n_D, \nonumber \\
V_3\!\! &=&\!\! n_R n_U n_L + n_R n_U n_D + n_R n_L n_D + n_U n_L n_D, \nonumber \\
V_4\! \!&=&\! \!n_R n_U n_L n_D ,
\end{eqnarray}
and then Taylor expand
\begin{equation}
\exp \left[ \lambda_1 V_1 + \lambda_2 V_2 + \lambda_3 V_3 + \lambda_4 V_4 \right].
\end{equation}
The only non-zero terms generated by this expansion are $V_1^2 = 2V_2, V_1^3 = 6V_3, V_1^4 = 24 V_4, V_1 V_2 = 3V_3, V_1 V_3 = 4 V_4$ and $V_2^2 = 6 V_4$. All other powers and multiplications
of operators yield zero. Operators from different sites commute, so cross-site products factorize and need not be considered here. The final result is
\begin{eqnarray}
\lefteqn{\exp \left[ \lambda_1 V_1 + \lambda_2 V_2 + \lambda_3 V_3 + \lambda_4 V_4 \right] = } \nonumber \\
& & 1 + \lambda_1 V_1 + \lambda_2 V_2 + \lambda_3 V_3 + \lambda_4 V_4 \nonumber \\
& & + \frac{1}{2} \left( \lambda_1^2 2V_2 + \lambda_2^2 6V_4 + 2 \lambda_1 \lambda_2 3 V_3 + 2 \lambda_1 \lambda_3 4V_4 \right) \nonumber \\
& & + \frac{1}{6} \left( \lambda_1^3 6V_3 + 3 \lambda_1^2 \lambda_2 12 V_4 \right) + \frac{1}{24} \lambda_1^4 24 V_4 .
\end{eqnarray}
Term-by-term matching with Eq.~(\ref{eq:factors_f_g}) then leads to
\begin{eqnarray}
g_1 = \eta & = & \lambda_1\, , \\
g_2 = 1 & = & \lambda_2 + \lambda_1^2\, ,\\
g_3 = \eta & = & \lambda_3 + 3 \lambda_1 \lambda_2 + \lambda_1^3\, ,\\
g_4 = 1 & = & \lambda_4 + 3 \lambda_2^2 + 4 \lambda_1 \lambda_3 + 6 \lambda_1^2 \lambda_2 + \lambda_1^4 \, .
\end{eqnarray}
The solution is immediate:
\begin{eqnarray}
\lambda_1 & = & \eta \, , \\
\lambda_2 & = & 1 - \eta^2\, , \\
\lambda_3 & = & -2 \eta + 2 \eta^3\, , \\
\lambda_4 & = & -2 + 8 \eta^2 - 6 \eta^4 \, .
\end{eqnarray}
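Since the matching equations form a triangular system, the quoted solution is easily verified. A minimal check with exact rational arithmetic (our illustration; the test value of $\eta$ is arbitrary):

```python
from fractions import Fraction

eta = Fraction(3, 10)              # arbitrary rational test value of tanh(h)

lam1 = eta
lam2 = 1 - eta**2
lam3 = -2*eta + 2*eta**3
lam4 = -2 + 8*eta**2 - 6*eta**4

# matching conditions read off from the Taylor expansion of exp(sum_k lam_k V_k)
assert lam1 == eta                                                      # g_1
assert lam2 + lam1**2 == 1                                              # g_2
assert lam3 + 3*lam1*lam2 + lam1**3 == eta                              # g_3
assert lam4 + 3*lam2**2 + 4*lam1*lam3 + 6*lam1**2*lam2 + lam1**4 == 1   # g_4
```

The $g_4$ condition here uses the coefficient $6\lambda_1^2\lambda_2$ obtained from the expansion above.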
In what follows we will discuss the $\eta=0$ case (zero external field)
when the only vertexes with non-zero coupling in the partition function
are $V_2$ and $V_4$,
\begin{equation}
\prod_j \exp \left( \lambda_2 V_2^{(j)} + \lambda_4 V_4^{(j)} \right) = \exp \left\{ \sum_j (V_2^{(j)} - 2 V_4^{(j)}) \right\}.
\end{equation}
The expansion of $Z'$ in powers of $\zeta$ then goes as
\begin{eqnarray}
Z' & = & \prod_b \int \mathcal{D} [...]_b \exp \left( \frac{1}{\sqrt{\zeta}} \bar{\xi}'_b \xi'_b + \frac{1}{\sqrt{\zeta}} \bar{\xi}_b \xi_b \right) \nonumber \\
{} & {} & \times \exp \left\{ \sum_j (V_2^{(j)} - 2 V_4^{(j)}) \right\}
\nonumber \\
& = & \left[ 1 + 4 \zeta^4 + 12 \zeta^6 + \ldots \right]^N,
\label{zetprim}
\end{eqnarray}
and the spin-spin correlation function is given by
\begin{eqnarray}
\rho_{ij} & = & \frac{1}{Z'}\prod_b \int \mathcal{D} [...]_b \exp \left( \frac{1}{\sqrt{\zeta}} \bar{\xi}'_b \xi'_b + \frac{1}{\sqrt{\zeta}} \bar{\xi}_b \xi_b \right) \nonumber \\
{} & {} & \times \exp \left\{ \sum_l (V_2^{(l)} - 2 V_4^{(l)}) \right\} (V_1^{(i)} - 2 V_3^{(i)})(V_1^{(j)} - 2 V_3^{(j)}) \, .
\label{rhoprim}
\end{eqnarray}
\subsection{Feynman rules}
In order to arrive at the Feynman perturbative expansion we need to write the partition function in the form
\begin{equation}
Z' = Z_{\cal l} \left( \sum_{n=0}^{\infty} \sum_{x_1, \ldots, x_n} \frac{(+1)^n}{n!} \langle V(x_1) \ldots V(x_n) \rangle_0 \right) ,
\end{equation}
where $Z_{\ell}$ is the partition function of the Gaussian part
(it is the product of local link contributions),
$Z_{\ell} = \prod_b \int \mathcal{D} [...] \exp( \bar{\xi}'_b \xi'_b + \frac{1}{\zeta} \bar{\xi}_b \xi_b ) = (1 + \zeta)^{2N}$.
Feynman rules for the correlation function of the 2D Ising model now follow from the textbook
considerations:
\begin{enumerate}
\item The bare propagators $G^{(0)}=\sqrt{\zeta}$ for primed and non-primed variables are local and reside on the links of the original lattice.
In the correlation function they always occur in pairs of conjugate Grassmann variables
and each pair contributes a factor $\zeta$.
The propagation lines do not have arrows. The bare interaction vertexes (or pre-diagrams, see Fig.~\ref{fig:prediagrams}) are also local and live on the sites of the lattice. There are different types belonging to the $V_2$ and $V_4$ classes with weight $1$ and $-2$, respectively [see Eq.~(\ref{zetprim})]. On the first (and last) site of the correlator we have a vertex belonging to the class $V_1$ or $V_3$ (see Figs.~\ref{fig:rho_generic_a}-\ref{fig:rho_generic_c}) with weight $1$ and $-2$, respectively [see (\ref{rhoprim})].
\item In order $n$, draw all topologically distinct connected diagrams with $n$ pairs of bi-Grassmann variables living on the links of the lattice. The number of interaction vertexes, excluding the end points, is at most $n-1$.
\item For links with multiple occupancy, a minus sign occurs when swapping 2 Grassmann variables. The minus sign can also be found by counting all closed fermionic loops.
\item The total weight of a diagram of order $n$ is hence $(-1)^P (-2)^q \zeta^n$, with $P$ the parity of the exchange permutation and $q$ the total number of $V_3$ and $V_4$ vertexes.
\end{enumerate}
Disconnected diagrams are defined with respect to both the primed and non-primed Grassmann variables simultaneously. Thus, a link can lead to a disconnected diagram only if the primed and non-primed variables simultaneously lead to disconnected pieces (such as the upper left panel in Fig.~\ref{fig:rho_generic_d}). We check the connectivity of a diagram with a breadth-first search.
\subsection{Example: the first element of the spin correlation function}
Let us focus on the first element of the correlation function connecting the sites $(0,0)$ and $(1,0)$ (using translational invariance, any 2 neighboring sites $\left< \bm{r}_1, \bm{r}_2 \right>$ can be taken).
To first order, we put a $V_1$ vertex on the origin and target site. There is one way to combine them, thus the total contribution is $\zeta$.
By the symmetry of the lattice, even expansion orders do not contribute. In third order, we can construct a diagram by putting a $V_2$ ($RD$) vertex on the site $(0,1)$ and a $V_2$ ($LD$) vertex on the site $(1,1)$. The mirror image of this diagram about the x-axis is also a valid diagram. Hence, the contribution is $2 \zeta^3$. The diagrams contributing in first and third order are shown in Fig.~\ref{fig:rho_generic_a}.
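These low-order coefficients can be cross-checked without any diagrammatics by enumerating spin configurations on a small open window around the bond and dividing the polynomial for $\langle \sigma_{(0,0)} \sigma_{(1,0)} \rangle \cdot Z$ by that for $Z$ as a power series in $\zeta$. A brute-force sketch (our illustration; the window size is chosen so that open boundaries cannot affect orders $\leq 3$):

```python
from itertools import product

# window around the bond (0,0)-(1,0); connected graphs with at most
# three bonds anchored at the endpoints fit entirely inside it
sites = [(x, y) for x in range(-1, 3) for y in range(-1, 2)]
idx = {s: i for i, s in enumerate(sites)}
bonds = [(idx[s], idx[n]) for s in sites
         for n in ((s[0] + 1, s[1]), (s[0], s[1] + 1)) if n in idx]

ORDER = 3
Z = [0] * (ORDER + 1)      # polynomial coefficients of Z in zeta
ZR = [0] * (ORDER + 1)     # same, with sigma_(0,0) sigma_(1,0) inserted
i0, i1 = idx[(0, 0)], idx[(1, 0)]
for conf in product((1, -1), repeat=len(sites)):
    p = [1] + [0] * ORDER
    for a, b in bonds:
        sab = conf[a] * conf[b]
        # multiply the polynomial by (1 + zeta * s_a * s_b), truncated
        p = [p[i] + (sab * p[i - 1] if i else 0) for i in range(ORDER + 1)]
    for i in range(ORDER + 1):
        Z[i] += p[i]
        ZR[i] += conf[i0] * conf[i1] * p[i]

# series division rho = ZR / Z through ORDER (all coefficients are integers)
rho = []
for n in range(ORDER + 1):
    rho.append((ZR[n] - sum(rho[k] * Z[n - k] for k in range(n))) // Z[0])
assert rho == [0, 1, 0, 2]     # rho_(1,0) = zeta + 2*zeta^3 + O(zeta^5)
```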
In fifth order, there are 4 diagrams with a $V_3$ vertex on one of the endpoints, yielding a contribution $-8\zeta^5$. There are 14 diagrams consisting of only $V_1$ and $V_2$ vertexes and single pair-lines, yielding a contribution $14 \zeta^5$. The contributions to fifth order are shown in
Figs.~\ref{fig:rho_generic_b1},~\ref{fig:rho_generic_b2},~\ref{fig:rho_generic_c}, and~\ref{fig:rho_generic_d}.
There are, however, additional diagrams with 2 pairs of Grassmann variables living on the same link, as shown in Fig.~\ref{fig:rho_generic_d} (equivalent diagrams obtained by mirror symmetry about the x-axis are not shown).
They all have on the origin a $V_1$ and a $V_2$ ($RU$) vertex, and on the target site $(1,0)$ a $V_1$ and a $V_2$ ($UL$) vertex. On the site $(1,1)$ there is a $V_2$ ($LD$) and on site $(0,1)$ a $V_2$ ($RD$) vertex. Let us look more carefully at the link between the origin and target site:
\begin{equation}
\frac{1}{\zeta^2} \int \mathcal{D} \left[ \bar{\xi}' \bar{\xi} \xi \xi' \right] \bar{\xi}' \bar{\xi} \xi \xi' \bar{\xi}' \bar{\xi} \xi \xi' .
\label{example_term}
\end{equation}
The origin is associated with $\bar{\xi}' \bar{\xi}$ and the target with $ \xi \xi'$ by our convention.
Applying Wick's theorem, there are 4 possible ways to pair the Grassmann variables:
\begin{enumerate}
\item The pairing combination
$
\overbracket{\bar{\xi}'\underbracket{\bar{\xi} \xi} \xi'} \overbracket{\bar{\xi}'
\underbracket{\bar{\xi} \xi} \xi'}
$
comes with the sign $+1$ and leads to a connected diagram (this is the lower right panel in Fig.~\ref{fig:rho_generic_d}).
\item The pairing combination
$
\mathrlap{\overbracket{\phantom{\bar{\xi}' \bar{\xi} \xi \xi'}}} \bar{\xi}'\underbracket{\bar{\xi} \underbracket{\xi \xi' \mathrlap{\overbracket{\phantom{ \bar{\xi}' \bar{\xi} \xi \xi'}}}
\bar{\xi}' \bar{\xi}} \xi} \xi'
$
comes with the sign $-1$ and leads to a connected diagram (this is the upper right panel in Fig.~\ref{fig:rho_generic_d}).
\item The pairing combination
$
\overbracket{\bar{\xi}'\underbracket{\bar{\xi} \xi} \underbracket {\xi'\bar{\xi}'}
\underbracket{\bar{\xi} \xi} \xi'}
$
comes with the sign $-1$ and leads to a connected diagram (this is the lower left panel in Fig.~\ref{fig:rho_generic_d}).
\item The pairing combination
$
\overbracket{\bar{\xi}'\overbracket{\bar{\xi}\underbracket{\xi \underbracket{\xi'\bar{\xi}'}
\bar{\xi}} \xi} \xi'}
$
leads to a disconnected diagram and does not contribute to the correlation function (this is the upper left panel in Fig.~\ref{fig:rho_generic_d}).
\end{enumerate}
The net contribution of these 4 distinct diagrams is hence $-1$ (the diagrams obtained by mirror symmetry about the x-axis likewise yield $-1$), so the total contribution in fifth order is $ (-8 + 14 - 2) \zeta^5 = 4 \zeta^5$.
It is instructive to notice that the sum of all diagrams in which multiple Grassmann pairs live on the same link always produces zero when all diagrams are connected, in line with the nilpotency of Grassmann variables. Wick's theorem, however, splits these contributions into connected and disconnected diagrams, and the disconnected diagrams cancel against the denominator of the Feynman expansion.
It is this non-trivial regrouping imposed by Wick's theorem that can yield non-zero
contributions from terms like (\ref{example_term}); and, in particular, from arbitrarily high powers of one and the same interaction vertex.
\section{Implementation}
\label{sec:V}
We explored two ways of evaluating the (bare) series for the spin-correlator: a stochastic Monte Carlo approach and a deterministic full evaluation of all diagrams.
\subsection{Monte Carlo sampling }
In order to perform a Monte Carlo sampling over all Feynman diagrams, we introduce a head and a tail that represent the endpoints of the correlation function. By moving them around the lattice and changing the diagrammatic elements in between the head and tail, we are able to reach an ergodic sampling. The algorithm can be formulated as follows:
The tail remains stationary at the origin whereas the head can move around the lattice. When the head and tail are on the same site and the expansion order is 0, the value of the correlation function is 1 which can be used for normalization of the Monte Carlo process. A Monte Carlo measurement contributes $+1$ or $-1$ depending on the sign of the diagram weight. The simplest Monte Carlo procedure samples according to the absolute weights of the diagrams and consists of the following pairs of reciprocal updates:
\begin{enumerate}
\item MOVE--RETRACT. We choose one of the 4 directions randomly, and attempt to place the head on the site adjacent to the current head site according to this direction. In case this direction does not correspond to backtracking, the current $V_1$ type of the head turns into a $V_2$; otherwise the head goes back and changes the previous $V_2$ into a $V_1$ type (unless the diagram order is 0 or 1, when only $V_1$ types are possible).
When moving forward, the way of pairing primed and non-primed variables is always unique, which in turn implies that we can only retract when the head is connected via a ``straight pair connection'' to the previous vertex (both primed and non-primed Grassmann variables of the head are connected to the same vertex on the previous site). We only allow the MOVE--RETRACT updates if the end vertex types are $V_1$.
\item SWAP VERTEX. Swaps between the vertexes $V_1 + V_2 \leftrightarrow V_3$ (for head and/or tail) and $V_2 + V_2 \leftrightarrow V_4$ (anywhere in the diagram). This update is its own reciprocal.
\item RELINK. On a given link, relink primed and non-primed Grassmann variables. This can change the sign of the weight only. This update is its own reciprocal.
\end{enumerate}
The second and third type of updates may lead to disconnected diagrams. In such cases, the configuration is unphysical. We opt to allow such configurations, but a Monte Carlo measurement is forbidden and type-1 updates remain impossible until the diagram is connected again.
For small values of $\zeta$ the sign problem is nearly absent, but only low expansion orders can be reached. For higher values of $\zeta$ (close to and above the critical one) an increasing number of orders contributes significantly, consequently more time is spent in higher orders and the sign problem significantly worsens.
\subsection{Deterministic full evaluation}
For the case of the 2D Ising model, a Monte Carlo approach offers no advantages over a full series expansion approach. By this we mean the explicit listing and evaluation of all possible diagrams, as opposed to stochastic sampling over all topologies. This is because all diagrams in a given expansion order contribute a number of order unity (times the same power of $\zeta$), often with alternating sign, leading to huge cancellations. Only the exact cancellation carries physical information, and a stochastic method must sample every diagram many times before the correct convergence can be seen. A Monte Carlo approach makes much more sense if the dominant contributions to the total weight come from a narrow parameter region, which is usually the case if there are additional integrals over internal momenta.
We therefore wrote a code that evaluates all diagrams for the correlation function up to a maximum order. The construction is based on the fact that there is an easy way to construct all the ``easy" diagrams (the ones that formally look like originating from a high-temperature series expansion). These can serve as parent diagrams, from which further offspring diagrams can be constructed which have one or multiple $V_3$ and $V_4$ vertexes as well as possible fermionic exchanges. All diagrams in order $n$ can be found as follows:
\begin{enumerate}
\item Write down all possible words of the form $X_1 X_2 \ldots X_n$ with the alphabet $X_j \in \{0,1,2,3 \}$ corresponding to the 4 directions on the square lattice. Make sure that subsequent directions are not backtracking. For example, if $X_4$ is in the positive $+\hat{x}$ direction, then $X_5$ cannot be in the negative $-\hat{x}$ direction. From this word we also know all sites and links that are visited, as well as all type-1 and type-2 vertexes that are used to make this diagram.
\item Such a parent diagram is added to a list of different topologies only if it has a unique topology. To store the topological information of a bare vertex, we need to store a pair consisting of a site index and a vertex type. The diagram is then stored as an ordered map where the ``key" values are given first by the lattice site index and second by the vertex type (in binary format). The ordered map may have multiple entries with the same key if multiple vertexes reside on the same site and if they are of the same type ({\it e.g.}, two $RL$ vertexes on the same site).
\item We iterate over this configuration list and check if the tail and head sites can be merged into a type-3 vertex by combining them with type-2 vertexes that reside on the same lattice site. If so, and if the resulting topology is unique, the diagram is added to the list. This step is performed in three parts: first for the head and tail together (in order to find all diagrams with 2 $V_3$ ends), then for the head alone, and finally for the tail alone.
\item We iterate again over the full configuration list and check if 2 type-2 vertexes that live on the same site can be merged into a type-4 vertex. This last step has to be repeated until no further merges are possible (since it may happen that a diagram has multiple type-4 vertexes or even multiple type-4 vertexes on the same site). Diagrams thus created are also added to the configuration list if their topology is unique. After completion of this step, all possible topologies have been generated.
\item We compute the product of all the vertex weights, according to the Feynman rules.
\item From this list of parent diagrams we need to generate all offspring diagrams which feature all possible fermionic permutations for multiply occupied links. This first requires that we know how the vertexes are connected in the parent diagram, which is stored in the configuration list. The parent diagram always has permutation sign +1 (because the connections of the primed and non-primed Grassmann variables are always the same). Next we generate all possible permutations by relinking the primed and/or the non-primed Grassmann variables using Heap's algorithm. If a link has occupation number $m$, then there are $(m!)^2$ combinations to be generated (and there may be more than one multiply occupied link). The permutation signature is also stored.
\item We check the connectivity of the diagram using the breadth-first algorithm. Disconnected diagrams contribute 0.
\item Finally, we compute the isomorphism factor: if $m$ identical vertexes on the same site are found, a factor $1/m!$ must be taken into account. This is a consequence of how we construct the diagrams: topology checks were only performed on the parent diagrams (and based on vertexes only), not on offsprings obtained by fermionic exchange. (It would be prohibitively expensive to add the offspring diagrams to the list of all possible diagrams.) Hence, just as we generate illegal disconnected diagrams, we also have a double counting problem when identical vertexes occur in the list.
\end{enumerate}
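The connectivity check used above is a standard breadth-first search. A minimal sketch follows; the adjacency map is a hypothetical stand-in for the stored vertex connections of a diagram.

```python
from collections import deque

def is_connected(adjacency):
    """Breadth-first search: True if every vertex is reachable from the first.
    `adjacency` maps a vertex id to the list of vertexes it shares a link with
    (an illustrative stand-in for the stored diagram connectivity)."""
    vertices = list(adjacency)
    if not vertices:
        return True
    seen = {vertices[0]}
    queue = deque([vertices[0]])
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(vertices)

# A connected 4-cycle versus the same idea with an isolated vertex:
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
broken = {0: [1], 1: [0], 2: []}
print(is_connected(cycle), is_connected(broken))   # True False
```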
In order 14, there were about 140,000 parent diagrams contributing to the first entry on the diagonal of the correlator. The largest number of permutations was $(4!)^4(3!)^4 \approx 10^8$. Since the sum of these permutations has a net contribution of order 1, Monte Carlo has roughly a sign problem of the order of $10^{-8}$ for these diagrams. The first time a nontrivial isomorphism factor is seen is in order 6 for the first element on the diagonal of the spin correlator: there are diagrams in which two links are doubly occupied, and those links are connected by an identical $V_2$ vertex, hence the isomorphism factor $1/2$. More efficient ways of evaluating and storing the diagrams can probably be devised and implemented, but the above scheme is sufficient to check the validity of the technique and study the transition.
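The first step of the construction (generating all non-backtracking direction words) can be sketched in a few lines; the helper names below are ours. The number of such walks is $4 \cdot 3^{n-1}$, and the number of shortest walks reaching a site $(x,y)$ reproduces the leading coefficient of the correlator, e.g., $2$ for $(1,1)$ and $6$ for $(2,2)$.

```python
from itertools import product

STEPS = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}   # R, U, L, D

def parent_words(n):
    """All direction words X_1...X_n with no immediate backtracking:
    step j+1 may not be the reverse (d -> (d+2) mod 4) of step j."""
    return [w for w in product(range(4), repeat=n)
            if all((w[j + 1] - w[j]) % 4 != 2 for j in range(n - 1))]

def endpoint(word):
    x = y = 0
    for d in word:
        dx, dy = STEPS[d]
        x, y = x + dx, y + dy
    return x, y

for n in range(1, 7):
    # a non-backtracking walk has 4 choices for the first step, then 3 each
    assert len(parent_words(n)) == 4 * 3 ** (n - 1)

# shortest-walk counts reaching (1,1) at length 2 and (2,2) at length 4:
print(sum(endpoint(w) == (1, 1) for w in parent_words(2)),
      sum(endpoint(w) == (2, 2) for w in parent_words(4)))   # 2 6
```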
\section{Results}
\label{sec:VI}
\subsection{Spin-spin correlation function}
Our results for the spin-spin correlation function are shown in Table~\ref{table:rho}. The correlation function is known recursively from Refs.~\onlinecite{Perk1980, Guttmann_rho1, Guttmann_rho2}. It is also known to satisfy a Painlev{\'e}-VI nonlinear differential equation,\cite{McCoy76} but this form is not well suited for obtaining the series coefficients. Along the principal axes and the diagonal it can also be expressed as a Toeplitz determinant.
The first elements along the axis and the diagonal can be recast in terms of complete elliptic integrals (see pp. 200-201 in Ref.~\onlinecite{mccoy1973}), which are convenient for series expansions,
\begin{eqnarray}
\rho_{(1,0)} & = & \coth(2 \beta) \left[ \frac{1}{2} + \frac{\cosh^2 2\beta }{\pi} (2 \tanh^2 2\beta - 1) K(k_>) \right] \nonumber \\
{} & \to & \zeta + 2 \zeta^3 + 4 \zeta^5 + 12 \zeta^7 + 42 \zeta^9 + \ldots \\
\rho_{(1,1)} & = & \frac{2}{\pi k_>} \left[ E(k_>) + (k_>^2 - 1) K(k_>) \right] \\
{} & \to & 2\zeta^2 + 4\zeta^4 + 10 \zeta^6 + 32 \zeta^8 + 118 \zeta^{10} + \ldots \nonumber
\end{eqnarray}
with $k_> = \sinh^2(2\beta)$, $K(.)$ and $E(.)$ the complete elliptic K and E functions, respectively.
The above-cited recursion relations can be initialized with these expansions and shown to yield the same results as the top two rows in Table~\ref{table:rho}.
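As an independent numerical sanity check (our own, not part of the original derivation), the closed forms above can be evaluated with an arithmetic-geometric-mean routine for $K$ and $E$ (modulus convention) and compared against the quoted series coefficients:

```python
from math import tanh, sinh, cosh, pi, sqrt

def agm_KE(k):
    """Complete elliptic integrals K(k), E(k) in the modulus convention,
    computed via the arithmetic-geometric mean (cf. Abramowitz & Stegun 17.6)."""
    a, b, c = 1.0, sqrt(1.0 - k * k), k
    s, n = 0.5 * c * c, 0
    while abs(c) > 1e-16:
        a, b, c = 0.5 * (a + b), sqrt(a * b), 0.5 * (a - b)
        n += 1
        s += 2 ** (n - 1) * c * c
    K = pi / (2.0 * a)
    return K, K * (1.0 - s)

beta = 0.05
zeta = tanh(beta)
k = sinh(2 * beta) ** 2                      # k_> = sinh^2(2 beta)
K, E = agm_KE(k)

rho10 = cosh(2 * beta) / sinh(2 * beta) * (
    0.5 + cosh(2 * beta) ** 2 / pi * (2 * tanh(2 * beta) ** 2 - 1) * K)
rho11 = 2 / (pi * k) * (E + (k * k - 1) * K)

series10 = zeta + 2 * zeta**3 + 4 * zeta**5 + 12 * zeta**7 + 42 * zeta**9
series11 = 2 * zeta**2 + 4 * zeta**4 + 10 * zeta**6 + 32 * zeta**8 + 118 * zeta**10
print(abs(rho10 - series10), abs(rho11 - series11))   # both well below 1e-10
```

At $\beta = 0.05$ the truncation error of the series is of order $\zeta^{11} \sim 10^{-15}$, so agreement at the $10^{-12}$ level confirms the closed forms and the tabulated coefficients.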
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
site/order & $\zeta$ & $\zeta^2$ & $\zeta^3$ & $\zeta^4$ & $\zeta^5$ & $\zeta^6$ & $\zeta^7$ & $\zeta^8$ & $\zeta^9$ & $\zeta^{10}$ & $\zeta^{11}$ \\ \hline
(1,0) & 1 & 0 & 2 & 0 & 4 & 0 & 12 & 0 & 42 & 0 & 164 \\ \hline
(1,1) & 0 & 2 & 0 & 4 & 0 & 10 & 0 & 32 & 0 & 118 & 0 \\ \hline
(2,0) & 0 & 1 & 0 & 6 & 0 & 16 & 0 & 46 & 0 & 158 & 0 \\ \hline
(2,1) & 0 & 0 & 3 & 0 & 11& 0 & 31 & 0 & 97 & 0 & 351 \\ \hline
(2,2) & 0 & 0 & 0 & 6 & 0 & 24 & 0 & 76 & 0 & 248 & 0 \\ \hline
(3,0) & 0 & 0 & 1 & 0 & 12 & 0 & 48 & 0 &152 & 0 &506\\ \hline
(3,1) & 0 & 0 & 0 & 4 & 0 & 26 & 0 & 92 & 0 &298 & 0\\ \hline
(3,2) & 0 & 0 & 0 & 0 & 10 & 0 & 55 & 0 & 201 & 0 & 684\\ \hline
(3,3) & 0 & 0 & 0 & 0 & 0 & 20 & 0 & 120 & 0 & 480 & 0 \\ \hline
(4,0) & 0 & 0 & 0 & 1 & 0 & 20 & 0 & 118 & 0 & 452 & 0\\ \hline
(4,1) & 0 & 0 & 0 & 0 & 5 & 0 & 52 & 0 & 244 & 0 & 885\\ \hline
(4,2) & 0 & 0 & 0 & 0 & 0 & 15 & 0 & 118 &0 & 521 & 0 \\ \hline
(4,3) & 0 & 0 & 0 & 0 & 0 & 0 & 25 & 0 &259 & 0 & 1176 \\ \hline
(4,4) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 70 & 0 & 560 & 0 \\ \hline
(5,0) & 0 & 0 & 0 & 0 & 1 & 0 & 30 & 0 & 250 & 0 & 1200 \\ \hline
(5,1) & 0 & 0 & 0 & 0 & 0 & 6 & 0 & 92 & 0 & 574 & 0 \\ \hline
(5,2) & 0 & 0 & 0 & 0 & 0 & 0 & 21 & 0 & 231 & 0 & 1266 \\ \hline
(5,3) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 56 & 0 & 532 & 0 \\ \hline
(5,4) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 126 & 0 & 1176 \\ \hline
(5,5) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 252 & 0 \\ \hline
(6,0) & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 42 & 0 & 474 & 0 \\ \hline
(6,1) & 0 & 0 & 0 & 0 & 0 & 0 & 7 & 0 & 149 & 0 & 1215\\ \hline
(6,2) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 28 & 0 & 416 & 0 \\ \hline
(6,3) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 84 & 0 & 1026 \\ \hline
(6,4) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 210 & 0 \\ \hline
(6,5) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 462 \\ \hline
(7,0) & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 56 & 0 & 826\\ \hline
(7,1) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 226 & 0 \\ \hline
(7,2) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 36 & 0 & 699 \\ \hline
(7,3) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 120 & 0 \\ \hline
(7,4) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 330 \\ \hline
(8,0) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 72 & 0 \\ \hline
(8,1) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 9 & 0 & 326 \\ \hline
(8,2) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 45 & 0 \\ \hline
(8,3) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 165 \\ \hline
(9,0) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 90 \\ \hline
(9,1) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 10 & 0 \\ \hline
(9,2) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 55 \\ \hline
(10,0)& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \hline
(10,1)& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 11 \\ \hline
(11,0)& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \hline
\hline
\end{tabular}
\caption{Expansion coefficients for the correlation function up to order 11.}
\label{table:rho}
\end{table}
\subsection{Magnetic susceptibility}
\begin{figure}[tbp]
\centering
\includegraphics[width=1.0\columnwidth]{chi.pdf}
\caption{(Color online) The magnetic susceptibility versus $\zeta$ for different expansion orders from 12 to 1 (top to bottom), compared
to the order 100 result---the converged answer over this plotting range---obtained from Ref.~\onlinecite{Guttmann},
which shows a divergence in good agreement with the critical exponent $\gamma = 7/4$ for $\beta \ge 0.38$.}
\label{fig:chi}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=1.0\columnwidth]{ratio_chi.pdf}
\caption{(Color online) Ratio of consecutive coefficients $\chi[n-1]$ and $\chi[n]$ in the expansion of the susceptibility as a function of the inverse expansion order $1/n$. Linear regression according to Eq.~(\ref{eq:lin_reg}) allows one to determine the critical temperature with an accuracy of $0.5\%$ and the critical exponent $\gamma$ with an accuracy of $5\%$. The fitting range included orders 9 through 14.
}
\label{fig:chi_ratio}
\end{figure}
The spin susceptibility is related to the zero-momentum value of the correlation function by $\beta^{-1} \chi = 1 +\rho (\bm{p}=0)$.
We can hence sum over the entire lattice to obtain
\begin{eqnarray}
\beta^{-1} \chi & = & 1 + 4 \zeta + 12 \zeta^2 + 36\zeta^3 + 100 \zeta^4 + 276\zeta^5 \nonumber \\
{} & {} & +740 \zeta^6 +1972 \zeta^7 + 5172 \zeta^8 + 13492 \zeta^9 \nonumber \\
{} & {} & + 34876 \zeta^{10} + 89764 \zeta^{11} + 229628 \zeta^{12} \nonumber \\
{} & {} & + 585508 \zeta^{13} + 1486308 \zeta^{14} +\ldots
\end{eqnarray}
To this order the series expansion agrees with the ones from Ref.~\onlinecite{Sykes1972} and Ref.~\onlinecite{Guttmann}. For a library of high-temperature series expansions,
see Ref.~\onlinecite{Butera2002}. Currently, the series is known (at least) up to order 2000 and is still a topic of active research.\cite{Guttmann, Guttmann_rho2}
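A simple consistency check between Table~\ref{table:rho} and this series is to sum the low-order correlator coefficients over the lattice with the appropriate symmetry multiplicities: a site $(x,y)$ on an axis ($y=0$) or on the diagonal ($y=x$) has multiplicity 4, all other sites in the fundamental wedge have multiplicity 8. The sketch below transcribes the coefficients up to order $\zeta^5$:

```python
# Cross-check: summing correlator coefficients over the lattice reproduces the
# susceptibility series beta^{-1} chi - 1 = sum_r rho_r. Rows hold the
# coefficients of zeta^1..zeta^5 for each representative site (x, y).
rows = {
    (1, 0): [1, 0, 2, 0, 4],
    (1, 1): [0, 2, 0, 4, 0],
    (2, 0): [0, 1, 0, 6, 0],
    (2, 1): [0, 0, 3, 0, 11],
    (2, 2): [0, 0, 0, 6, 0],
    (3, 0): [0, 0, 1, 0, 12],
    (3, 1): [0, 0, 0, 4, 0],
    (3, 2): [0, 0, 0, 0, 10],
    (4, 0): [0, 0, 0, 1, 0],
    (4, 1): [0, 0, 0, 0, 5],
    (5, 0): [0, 0, 0, 0, 1],
}
chi = [0] * 5
for (x, y), coeffs in rows.items():
    mult = 4 if (y == 0 or y == x) else 8    # square-lattice symmetry factor
    for n, c in enumerate(coeffs):
        chi[n] += mult * c
print(chi)   # [4, 12, 36, 100, 276]
```

The output matches the susceptibility coefficients quoted above through order $\zeta^5$.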
Any truncation of the series at finite order remains finite; in the thermodynamic limit the infinite series first diverges at the phase transition point.
It is hence possible to study the critical behavior of the susceptibility, which is governed by the critical exponent $\gamma = 7/4$.
We plot in Fig.~\ref{fig:chi} the susceptibility versus $\beta$ for different expansion orders, and also plot the asymptotic behavior for comparison.
The critical temperature and the exponent $\gamma$ can be found from a study of the convergence radius of the series. Since
\begin{eqnarray}
\beta^{-1} \chi & = & \sum_n \chi_n \zeta^n \propto (1 - \zeta/ \zeta_c)^{-\gamma} \\
{} & = & 1 + \sum_{n=1}^{\infty} \frac{ \gamma (\gamma + 1) \cdots (\gamma + n - 1)}{ n!} \left( \frac{\zeta}{\zeta_c} \right)^n \nonumber
\end{eqnarray}
the ratio of coefficients asymptotically behaves as
\begin{equation}
\frac{\chi_n}{\chi_{n-1}} = \frac{1}{\zeta_c} + \frac{\gamma - 1}{\zeta_c}\frac{1}{n}.
\label{eq:lin_reg}
\end{equation}
In Fig.~\ref{fig:chi_ratio} we extract the critical point $\zeta_c$ from the intercept and the critical exponent $\gamma$ from a linear fit through the ratio of the coefficients. The critical point could be determined with an accuracy of $0.5\%$, whereas the error on $\gamma$ is of the order of $5\%$. However, according to more advanced extrapolation techniques discussed in
Ref.~\onlinecite{ZinnJustin1979}, $\gamma$ can be determined independently from $\zeta_c$ as $\gamma \approx 1.751949$ on the square lattice when the series is known up to 14th order, {\it i.e.}, an accuracy of $0.5\%$.
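The ratio analysis can be reproduced in a few lines with ordinary least squares over orders 9 through 14, using the coefficient list quoted above; the exact values for comparison are $\zeta_c = \sqrt{2} - 1 \approx 0.4142$ and $\gamma = 7/4$.

```python
# Ratio analysis of the susceptibility series: fit chi_n / chi_{n-1} = a + b/n
# by least squares over orders 9..14, then read off zeta_c = 1/a and
# gamma = 1 + b/a (since b = (gamma - 1)/zeta_c = (gamma - 1) * a).
chi = [1, 4, 12, 36, 100, 276, 740, 1972, 5172, 13492,
       34876, 89764, 229628, 585508, 1486308]          # chi_0 .. chi_14
orders = range(9, 15)
xs = [1.0 / n for n in orders]
ys = [chi[n] / chi[n - 1] for n in orders]

m = len(xs)
xbar, ybar = sum(xs) / m, sum(ys) / m
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

zeta_c = 1.0 / a
gamma = 1.0 + b / a
print(f"zeta_c ~ {zeta_c:.4f} (exact 0.4142), gamma ~ {gamma:.3f} (exact 1.75)")
```

With these six orders the estimate of $\zeta_c$ lands well within the $0.5\%$ accuracy quoted in the text, and $\gamma$ within $5\%$.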
\section{The $G^2W$ skeleton scheme}
\label{sec:VII}
The expansion of susceptibility in terms of $\zeta$ is, of course, identical to the one found by the high-temperature series expansion method. To make the distinction between the high-temperature series formalism and Grassmannization approach clear, we discuss the skeleton formulation of the interacting fermionic field-theory based on dressed (or ``bold") one-body propagators ($G$) and bold interaction lines ($W$). This leads to the so-called $G^2W$ skeleton scheme (see for instance Refs.~\onlinecite{Heidin,Molinari2006} for the terminology): all lines in all diagrams
are assumed to be fully renormalized propagators and effective potentials, but vertex functions
remain bare. In Sec. \ref{sec:VIII} we show that the $G^2W$-expansion scheme offers a very simple
way to solve the 1D Ising model exactly.
\subsection{Objects and notation}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.3\columnwidth, angle=-90]{fig_Psi}
\caption{Two low-order contributions to the generalized Luttinger-Ward functional $\Psi$.
Dashed lines denote bold Green functions for primed and non-primed Grassmann variables, and
wavy solid lines are effective potential lines.
}
\label{fig:psi_functional}
\end{figure}
The key objects in the standard skeleton scheme are the selfenergy ($\Sigma$) and the polarization function ($\Pi$). They are related to the Green function ($G$) and the
effective potential ($W$) by their respective Dyson equations.
The diagrams for $\Pi$ and $\Sigma$ are obtained by removing one $W$- or $G$-line, respectively,
from connected graphs for the generalized Luttinger-Ward functional $\Psi$, shown to second order in Fig.~\ref{fig:psi_functional}. In this setup, the expansion order is defined by
the number of $W$-lines (obviously, the discussion of Sec. \ref{subsec:D} does not apply
to the self-consistent skeleton sequence). All objects of interest are tensors; they have a coordinate (or momentum) dependence, as well as the legs orientation dependence for the incoming
and outgoing parts. This conventional scheme has to be supplemented with $\Psi$-graphs
involving $V_4$ vertexes to account for all contributions. We start with neglecting $V_4$ vertexes,
and discuss their role later.
In more detail, the formalism of the $G^2W$ expansion in the absence of $V_4$ vertexes
is as follows:
\begin{enumerate}
\item There are six bare two-body interaction vertexes $V_2$ $( RU, RL, RD, LU, UD, LD)$, see the second line in Fig.~\ref{fig:prediagrams}. They reside on the sites of the original square lattice and all have weight 1. Symbolically, we encode the tensor structure
of $V_2$ using a convenient short hand notation
$V_2=\sum_{\alpha , \gamma =1}^{4} V(\alpha, \gamma )n_{\alpha} n_{\gamma}$, where
\begin{equation}
V(\alpha, \gamma) = \begin{bmatrix} 0 & 1 &1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{bmatrix} .
\end{equation}
The row index represents the first leg enumerated according to the convention
$(R,U,L,D) \to (0,1,2,3)$, and the column index represents the second leg.
By doing so, we artificially double the number of vertexes from 6 to 12.
For example, the element $(0,2)$ corresponds to $n_Rn_L$ whereas $(2,0)$
corresponds to $n_Ln_R$, which is exactly the same term.
\item The selfenergies $\Sigma$ for the primed and non-primed Grassmann variables take the same value. Thus, we have to compute only one of them and we can suppress the index that distinguishes between the two Grassmann fields. The selfenergy defines the Green function
through the Dyson equation
\begin{equation}
G(\alpha, \gamma ) = G^{(0)} (\alpha, \gamma ) + \sum_{\mu, \nu}
G^{(0)}(\alpha, \mu ) \Sigma(\mu, \nu ) G(\nu, \gamma ) \,. \label{eq:Dyson}
\end{equation}
For a link going from site $i$ to site $j$, the first index $\alpha$ refers to site $i$ (in the above-defined sense), and the second index $\gamma$ refers to site $j$.
Note the absence of the momentum dependence in Eq.~(\ref{eq:Dyson}): The bold Green function remains local on the links in any order of renormalization. It means, in particular, that the only non-zero element for a link between sites $(0,0)$ and $(1,0)$ is $G_{02}$; it can be alternatively denoted as $G_{x}$ and, by the $90^{\circ}$ rotation symmetry of the square lattice, is the same
for all links.
\item The matrix structure of polarization $\Pi$ is similar to that of $V$.
The 0th order expression based on bare Green functions is given by
\begin{equation}
\Pi^{(0)}_{\rm (x,y)}(\alpha, \gamma) = \zeta \begin{bmatrix}
0 & 0 & \delta_{x,1}\delta_{y,0} & 0 \\
0 & 0 & 0 & \delta_{x,0}\delta_{y,1} \\
\delta_{x,-1}\delta_{y,0} & 0 & 0 & 0 \\
0 & \delta_{x,0}\delta_{y,-1} & 0 & 0 \end{bmatrix} .
\end{equation}
\item The effective potential $W$ is defined through the Dyson equation
in momentum representation
\begin{equation}
W_{\bm q}(\alpha, \gamma ) = V(\alpha, \gamma ) + \sum_{\mu, \nu} V(\alpha, \mu)
\Pi_{\bm q}(\mu, \nu) W_{\bm q}(\nu, \gamma ) \,. \label{eq:W}
\end{equation}
We expect to see signatures of the ferromagnetic transition in matrix elements of $W_{{\bm q}=0}$
because they directly relate to the divergent uniform susceptibility $\chi$.
\end{enumerate}
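Since all objects here are $4\times 4$ matrices, the Dyson equation (\ref{eq:Dyson}) can be solved by simple fixed-point iteration. A minimal sketch follows; the $G^{(0)}$ and $\Sigma$ used below are illustrative placeholders, not values taken from the model.

```python
# Minimal sketch: solve the 4x4 Dyson equation G = G0 + G0.Sigma.G by
# fixed-point iteration and verify the residual at the fixed point.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

# Illustrative placeholders (small enough that the iteration contracts):
G0 = [[0.1 if i == j else 0.0 for j in range(4)] for i in range(4)]
Sigma = [[0.05 if (i + j) % 2 else 0.0 for j in range(4)] for i in range(4)]

G = [row[:] for row in G0]
for _ in range(200):                       # iterate G <- G0 + G0.Sigma.G
    G = madd(G0, matmul(matmul(G0, Sigma), G))

# residual of the Dyson equation should vanish at the fixed point
rhs = madd(G0, matmul(matmul(G0, Sigma), G))
residual = max(abs(G[i][j] - rhs[i][j]) for i in range(4) for j in range(4))
print(residual < 1e-12)   # True
```

The iteration converges whenever the spectral radius of $G^{(0)}\Sigma$ is below one; for stronger coupling one would instead solve the linear system $(\mathbb{1} - G^{(0)}\Sigma)G = G^{(0)}$ directly.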
\subsection{Zeroth order result}\label{sec:zero}
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{W_order0}
\caption{Divergence of the 0th order result for $W_{{\bm q}=0}$ at $\zeta_c = 1/3$ is compared
with the Frobenius norm and a reference line with power $-1$.
} \label{fig:order0}
\end{figure}
To obtain the 0th order result, we replace $\Pi$ with $\Pi^{(0)}$ in Eq.~(\ref{eq:W}).
For any $\zeta$ we compute $W_{\bm q=0}$ from Eq.~(\ref{eq:W}) by matrix inversion.
We find a divergence at $\zeta_c = 1/3$ (shown in Fig.~\ref{fig:order0}) that can be also
established analytically. We see that $W$ diverges as $(\zeta_c - \zeta)^{-1}$.
We get the same power law behavior for the $(0,1)$ matrix element as well as for the Frobenius norm---they just differ by a constant factor.
It is not surprising that our $\zeta_c$ is below the exact value for this model;
the skeleton approach at 0th order is based exclusively on simple ``bubble''
diagrams in terms of bare Green functions that are all positive, leading to an overestimate of the critical temperature. Fermionic exchange cycles
and vertexes with negative weights do not contribute at this level of approximation.
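The location of this zeroth-order divergence can be verified directly (our own check, using the matrices defined above): at ${\bm q}=0$ the bare polarization reduces to $\zeta$ times the permutation matrix $P$ exchanging $R \leftrightarrow L$ and $U \leftrightarrow D$, and $W = (\mathbb{1} - V\Pi)^{-1}V$ diverges where $\det(\mathbb{1} - V\Pi)$ vanishes. With $V = J - \mathbb{1}$ ($J$ the all-ones matrix) one finds $\det = (1-3\zeta)(1-\zeta)^2(1+\zeta)$, a simple zero at $\zeta_c = 1/3$, consistent with the observed power $-1$.

```python
from itertools import permutations

def det4(M):
    # determinant via permutation expansion (fine for a 4x4 matrix)
    def sign(p):
        s, p = 1, list(p)
        for i in range(4):
            while p[i] != i:
                j = p[i]
                p[i], p[j] = p[j], p[i]
                s = -s
        return s
    return sum(sign(p) * M[0][p[0]] * M[1][p[1]] * M[2][p[2]] * M[3][p[3]]
               for p in permutations(range(4)))

V = [[0 if i == j else 1 for j in range(4)] for i in range(4)]   # J - I
P = [[1 if j == (i + 2) % 4 else 0 for j in range(4)] for i in range(4)]

def det_inverse_W(zeta):
    # Pi^(0)_{q=0} = zeta * P, so W diverges where det(I - V.Pi) vanishes
    VP = [[sum(V[i][k] * zeta * P[k][j] for k in range(4)) for j in range(4)]
          for i in range(4)]
    M = [[(1 if i == j else 0) - VP[i][j] for j in range(4)] for i in range(4)]
    return det4(M)

# analytically det = (1 - 3 zeta)(1 - zeta)^2 (1 + zeta): simple zero at 1/3
print(abs(det_inverse_W(1.0 / 3.0)) < 1e-12,
      round(det_inverse_W(0.0), 12))   # True 1.0
```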
\subsection{First order result}\label{sec:one}
We now include the diagrams with one $W$ line for the selfenergy and the polarization.
In real space we find
\begin{eqnarray}
\Sigma^{(1)}_x & = & \Sigma^{(1)} = - G_x W_{(1,0)}(2,0)= - G W_{(1,0)}(2,0) \nonumber \\
\Pi^{(1)} & = & G^4 W_{(0,0)}(0,0) + {\rm cycl.}
\label{eq:2dorder}
\end{eqnarray}
The matrix structure of $\Pi^{(1)}$ is identical to that of $\Pi^{(0)}$ and is not shown here
explicitly. Coupled self-consistent Eqs.~(\ref{eq:2dorder}), (\ref{eq:Dyson}), and (\ref{eq:W}) are
solved by iterations.
\subsection{Second order result} \label{sec:two}
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{gw_two_L64}
\caption{(Color online) Shown is the Frobenius norm of $W_{{\bm q}=0}$ (to second order)
on a lattice of size $64 \times 64$.
For comparison, the 0th order result is also shown.
The critical point is found to be at $\zeta_c \approx 0.35$ and the exponent is close to 1.1. } \label{fig:order2}
\end{figure}
As mentioned previously, to account for second-order terms in $\Sigma$,
one goes to the second order graphs for $\Psi$ and removes a $G$ line,
whereas the second-order terms for $\Pi$ are obtained by removing one $W$-line from the
third-order graphs for $\Psi$. The corresponding expressions in real space are
\begin{eqnarray}
\Sigma^{(2)} & = & - W_{(0,0)}(0,0) W_{(0,0)}(2,2) G^3 \nonumber \\
\Pi_{ (0,0)}^{(2)}(0,0) & = & G^6 W_{(0,0)}(2,2) W_{(1,0)}(0,2) \nonumber \\
\Pi_{ (1,0)}^{(2)}(0,2) & = & G^6 W^2_{(1,0)}(0,2) + \nonumber \\
{} & {} & G^6 W_{(0,0)}(0, 0) W_{(0,0)}(2,2).
\end{eqnarray}
The remaining non-zero contributions are obtained by invoking discrete lattice symmetries.
Note that to this order the polarization function is extremely local and contains only
same site and n.n. terms. Again, coupled self-consistent GW-equations are solved by
fixed-point iterations. The resulting behavior for $W$ is analyzed in Fig.~\ref{fig:order2}.
The transition point has slightly shifted to larger values of $\zeta$ compared to the zeroth-order result, and the exponent has also slightly increased.
\subsection{Relating ${\Pi}$ to the spin correlation function}
The $G^2W$-expansion scheme treats different bare vertexes (see Fig.~\ref{fig:prediagrams})
on unequal footing: the $V_2$ vertexes are fully dressed, but the $V_4$ vertexes are included perturbatively (we neglected them so far). These higher-rank vertexes have a weight of comparable magnitude to the $V_2$ vertexes (-2 for $V_4$ vs +1 for $V_2$). In addition, the difference in sign between the weights is expected to result in important cancellations between the diagrams and better
convergent series for the spin correlation function (this is how $\zeta_c$ increases towards
its exact value).
Formally, there is no valid reason for neglecting the $V_4$ vertexes altogether. Let us
show how they can be taken care of in the spirit of the shifted action approach.\cite{Rossi2015}
This discussion also gives us the opportunity to explain how the spin correlator is related
to the $G^2W$ skeleton expansion, which is most easily understood in the limit $\zeta \ll 1$.
By assuming that the skeleton sequence (without $V_4$) is solved, we introduce the full
polarization function $\bar{\Pi} (\alpha, \gamma )$ through the Dyson equation
\begin{equation}
\bar{\Pi}_{\bm q}(\alpha, \gamma ) = \Pi_{\bm q}(\alpha, \gamma ) +
\sum_{\mu, \nu} \Pi_{\bm q} (\alpha, \mu) V(\mu, \nu) \bar{\Pi}_{\bm q}(\nu, \gamma ) \,. \label{eq:Pi_full}
\end{equation}
To be specific, we focus on the n.n. element $\rho_{(1,0)}$; similar manipulations hold
for any other distance. Now consider all diagrams for this correlator without the $V_4$ vertexes
within the $G^2W$ formulation (see Ref.~\onlinecite{Rossi2015}):
\begin{itemize}
\item Put one $V_1$ vertex on the origin site $(0,0)$ and the other $V_1$ vertex on the target site ${\mathbf r}=(1,0)$, see Eq.~(\ref{rhoprim}). There are $4 \times 4 = 16$ different ways of doing that depending on the directions of legs. Connect the legs with $\bar{\Pi}_{\bm r}(\alpha, \gamma)$. For example, in the limit of $\zeta \ll 1$, choosing the ($\alpha\! =\! 0$)-leg on site $(0,0)$ and the $\gamma=2$-leg on site $(1,0)$ results in the contributions $\zeta - 4 \zeta^5 + \ldots $. Similarly, the choice of $\alpha=1$ and $\gamma=1$ leads to the
contribution $\zeta^3$.
\item Put $V_3$ on $(0,0)$ and $V_1$ on $(1,0)$, and connect all legs with $\bar{\Pi}$ lines.
There are four ways to orient the $V_3$ vertex and for each one there are two choices for connecting legs with $\bar{\Pi}$ propagators. The leading contribution to $\rho_{(1,0)}$ goes hence as $-8 \zeta^5$.
\item Putting $V_1$ on $(0,0)$ and $V_3$ on $(1,0)$ gives the same contribution by symmetry.
\item Put one $V_3$ vertex on $(0,0)$ and the other $V_3$ vertex on $(1,0)$.
Now there are 16 ways of orienting both $V_3$ vertexes, and for each orientation there are 15 choices for connecting the legs. These contributions start at order $\propto \zeta^9$.
\end{itemize}
Next, we repeat the above procedure of connecting legs by adding one $V_4$ vertex,
which can be put on any site, after that we can add two $V_4$ vertexes etc. to generate a
perturbative expansion in the number of $V_4$ terms.
Compared to the original bare series in powers of $\zeta$, we have reordered the series:
the effective potential is summing up all $V_2$ vertexes, whereas we expand (and sample in a Monte Carlo framework) in powers of $\lambda_4$.
To illustrate this framework, let us take $\zeta = 0.01$ and recall that in the bare series $\rho_{(1,0)} = \zeta + 2 \zeta^3 + 4 \zeta^5 + 12 \zeta^7 + \ldots$.
The first 3 terms can be reproduced without $V_4$ vertexes and with only 1 $V_3$ on either the origin or the target site, see Figs.~\ref{fig:rho_generic_a}--\ref{fig:rho_generic_d}.
The fifth order coefficient originates from 16 ``simple" diagrams containing just $V_1$ and $V_2$ vertexes without any exchange. The diagrams containing a $V_3$ vertex yield a coefficient $-8$, and the exchange diagrams yield a coefficient $-4$.
On a $16 \times 16$ lattice, the propagators obtained in Sec.~\ref{sec:zero} ({\it i.e.}, to zeroth order) are
\begin{eqnarray}
\bar{\Pi}^{(0)}_{ (x,y)=(1,0) } (0,2) & = & 1.00000002 \times 10 ^{-02} , \\
\bar{\Pi}^{(0)}_{ (x,y)=(1,0) } (1,1) & = & 1.00010011 \times 10 ^{-06} ,\\
\bar{\Pi}^{(0)}_{ (x,y)=(1,0) } (1,2) & = & 1.00080057 \times 10 ^{-10} ,\\
\bar{\Pi}^{(0)}_{ (x,y)=(0,0) } (1,2) & = & 1.00020021 \times 10 ^{-08} .
\end{eqnarray}
We do not mention explicitly other symmetry-related elements.
The sum of all matrix elements for $\bar{\Pi}^{(0)}_{(x,y)=(1,0)}$ is $0.01000200160$. One clearly recognizes the coefficients $1$, $2$ and $16$ for the first-, third- and fifth-order contributions
to the bare series.
Contributions from the $V_3$ vertexes can be estimated from multiplying
$\bar{\Pi}^{(0)}_{ (x,y)=(0,0)}(1,2) \times \bar{\Pi}^{(0)}_{ (x,y)=(1,0)}(0,2)$ which yields $ \approx 10^{-10}$. There are four different diagrams, each with weight $-2$,
resulting in the above-mentioned
coefficient $-8$.
On a $16 \times 16$ lattice, the propagators obtained in Sec.~\ref{sec:two} ({\it i.e.}, to second order) are
\begin{eqnarray}
\bar{\Pi}_{ (x,y)=(1,0)}(0, 2) & = & 9.99999980\times 10 ^{-03} , \\
\bar{\Pi}_{ (x,y)=(1,0)}(1, 1) & = & 1.00009999\times 10 ^{-06} , \\
\bar{\Pi}_{ (x,y)=(1,0)}(1, 2) & = & 1.00120089\times 10 ^{-10} , \\
\bar{\Pi}_{ (x,y)=(0,0)}(1, 2) & = & 1.00020005\times 10 ^{-08} .
\end{eqnarray}
The sum of all matrix elements for $\bar{\Pi}_{(x,y)=(1,0)}$ is $0.01000200120$. One clearly recognizes the coefficients $1$, $2$ and $12$ for the first-, third- and fifth-order contributions to the bare series. For the fifth-order contribution, we now obtain $12$ instead of $16$ thanks to the Grassmann exchange contribution that is accounted for properly at this level of approximation.
By adding the $V_3$ diagrams in the way described above we recover the correct result to this
order in $\zeta$ (which is +4).
The first instance of a $V_4$ vertex occurs in order $\zeta^6$ in the bare series. The relevant bare diagrams are the ones for $\rho_{(1,1)}$ with a $V_4$ vertex on site $(1,0)$ (and all cases related by the lattice symmetry). Our bold expansion can correctly account for this contribution if we put a $V_4$ vertex on this site and connect all unpaired legs with $\bar{\Pi}$ propagators. However, with the propagators obtained in Sec.~\ref{sec:two} we are not supposed to account for all possible diagrams in the bare series to order 6 because our bold expansion in Sec.~\ref{sec:two} is only accurate
up to order $\zeta^3$: Consider again $\rho_{(1,1)}$ and the bare diagrams where exchanges are possible on the links between the sites $(0,0) - (1,0)$ and $(1,0)-(1,1)$. Then there are irreducible non-local contributions that are not accounted for in Sec.~\ref{sec:two} with a positive weight that involves exchanges on both links in a correlated fashion. These contributions would obviously be accounted for in higher order corrections to $\Psi$, when $\Pi$ becomes non-local. This is also seen in the numerics: the $G^2W$ approach to second order yields a coefficient of $6$ for $\zeta^6$ contribution to $\rho_{(1,1)}$,
which is below the correct value of $10$.
\section{The Ising model in one dimension}
\label{sec:VIII}
Let us show that the proposed approach solves the 1D Ising model exactly,
both in the bare formulation as well as in the $G^2W$ skeleton formulation.
\subsection{Bare series}
In 1D, the only allowed vertex is $RL$ (the last one in the second line of Fig.~\ref{fig:prediagrams}). It has weight +1. The only allowed endpoints are $L$ and $R$ (the second and fourth vertexes
shown in the first line of Fig.~\ref{fig:prediagrams}). As expected, this means that there are no loops, no fermionic exchanges, and no minus signs in 1D. At order $n$ of the expansion for the spin correlator there is only one contributing diagram with weight $\zeta^n$
(up to the lattice symmetry). The susceptibility is hence
\begin{equation}
T \chi = 1 +2 ( \zeta + \zeta^2 + \ldots) = 1 + 2 \frac{\zeta}{1-\zeta},
\label{eq:1d_bare}
\end{equation}
reproducing the exact solution
with asymptotic behavior $\chi \propto \beta \exp(2 \beta)$ as $T \to 0$.
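This can be cross-checked by brute-force enumeration on a short open chain, where the exact spin correlator is $\langle s_0 s_r \rangle = \zeta^r$ with $\zeta = \tanh\beta$, precisely the weight of the single surviving diagram at order $r$:

```python
from itertools import product
from math import exp, tanh

# Brute-force check of the 1D statement: on an open chain the exact correlator
# is <s_0 s_r> = zeta^r, matching the single contributing diagram per order.
def correlator(beta, N, i, j):
    Z = corr = 0.0
    for spins in product([-1, 1], repeat=N):
        w = exp(beta * sum(spins[k] * spins[k + 1] for k in range(N - 1)))
        Z += w
        corr += w * spins[i] * spins[j]
    return corr / Z

beta, N = 0.4, 10
zeta = tanh(beta)
print(all(abs(correlator(beta, N, 0, r) - zeta ** r) < 1e-12
          for r in range(1, 5)))   # True
```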
\subsection{$G^2W$ formulation}
The $G^2W$ skeleton expansion becomes exact already in 0th order,
\begin{eqnarray}
\Pi & = & \Pi^0 = \zeta \\
\Sigma &= & 0
\end{eqnarray}
which yields $G = G_0 =\sqrt{\zeta}$, $W = V / (1 - V \Pi) = 1/(1-\zeta)$, and also $\Pi = \zeta/(1- \zeta)$. This immediately leads to the same result as in Eq.~(\ref{eq:1d_bare}) when adding the end-point vertexes $L$ and $R$ to $\Pi$.
\section{Conclusion}
\label{sec:IX}
We have developed a general scheme for mapping a broad class of classical statistical
link (plaquette) models onto interacting Grassmann-field theories that can be studied
by taking full advantage of the diagrammatic technique. This mapping, in particular,
would allow one to formulate an all-diagrammatic approach to $(d+1)$-dimensional lattice
gauge theories with finite density of fermions. The resulting field-theory looks very
complex because it contains a large number of Grassmann variables with numerous multi-point
interaction vertexes. Moreover, it is generically strongly-coupled at low temperature
meaning that an accurate solution using diagrammatic methods is only possible when
calculations are performed to high order and extrapolated to the infinite-order limit.
The complexity of the problem should not be taken as an indication that the entire idea
is hopeless. Monte Carlo methods were designed to deal with configuration spaces
of overwhelming size and complexity and arbitrary weights. In this sense, diagrammatic Monte Carlo methods simulating the configuration space
of irreducible connected Feynman graphs are based on the same general principles
and one should not be surprised that they can evaluate the sum of millions of bare
(or skeleton) graphs, enough to attempt an extrapolation to the infinite-order limit.
What makes diagrammatic Monte Carlo distinctly unique (apart from working with an ever-changing number
of continuous variables without systematic errors) is the radical transformation of
the sign problem. It is completely eliminated in the conventional sense because the
thermodynamic limit is taken first. Given that the number of diagrams increases
factorially with their order, a finite convergence radius in $\zeta$ is only possible
if same-order diagrams cancel each other to such a degree that at high order
their combined contribution is not increasing factorially. In other words,
non-positive weights are {\it required} for the entire approach to work and we call
it the ``sign-blessing" phenomenon. Diagram weights for Grassmann/fermion fields
alternate in sign depending on the diagram topology; this leads to the sign-blessing
phenomenon for lattice models.
We illustrated the proposed approach by considering the 2D Ising model
as a prototypical example. We have deliberately chosen to work with the
generic formulation to avoid model specific simplifications because our
goal was not to solve the model but to demonstrate how one would proceed
in the general case. The ultimate goal is to explore how this field-theoretical
approach can help with understanding properties of lattice gauge models.
\section{Acknowledgement}
\label{sec:X}
We are grateful to A. J. Guttmann for providing us with references to the high-temperature series expansions and U. Wolff for drawing our attention to the work by S. Samuel. This work was supported by the National Science Foundation under the grant PHY-1314735, FP7/Marie-Curie Grant No. 321918 (``FDIAGMC"), and FP7/ERC Starting Grant No. 306897 (``QUSIMGAS").
\bibliographystyle{apsrev4-1}
\section{Introduction}
Ultracold dipolar gases have been attracting great attention in recent years. In these systems, long-range anisotropic dipole-dipole interaction gives rise to a plethora of
novel effects which makes them useful for various applications in quantum simulation~\cite{baranov2012condensed}. Different aspects of such systems can be studied using a variety of experimental platforms including magnetic atoms~\cite{Goral2000,Lahaye:2009}, polar molecules~\cite{bohn2017cold} and Rydberg atoms~\cite{saffman2010quantum},
allowing access to different regimes of interaction strength, geometry, particle number and quantum statistics. On the mean field level, a dilute gas of dipolar bosons
can become unstable towards a collapse caused by the partially attractive nature of the interaction~\cite{Lahaye:2009}. This can be seen also in the excitation spectrum as the
Bogoliubov mode frequencies become imaginary. Depending on the external confinement, the instability can be tuned to occur at different values of the dipolar interaction
strength and change its character.
The development of experimental techniques allowing to produce Bose condensed clouds of highly magnetic lanthanide atoms such as erbium and
dysprosium~\cite{lu2011strongly,aikawa2012bose} has led to rapid progress in this field. Most notably, the experiments highlighted the role of beyond-mean-field effects
for the dynamics of the gas with the unexpected discovery of the droplet state~\cite{Kadau2016,Barbut2016,Chomaz2016,Schmitt2016,Wenzel2017}. It turns out that close to
the instability, the mean field contribution to the energy of the gas vanishes and the beyond-mean-field corrections, which typically have higher power-law dependence on
the density, become important. The positive correction to the chemical potential can be interpreted as a source of effective repulsion in the gas. This
results in formation of a long-lived finite size droplet with liquid properties. This kind of quantum droplet was originally suggested to occur in Bose-Bose
mixtures~\cite{Petrov2015,Petrov2016}, and was later observed as well~\cite{Semeghini2018,Cabrera2018}.
The discovery of quantum droplets resulted in renewed theoretical interest in calculations of the beyond-mean-field corrections, pioneered already many years
ago~\cite{Lee1957,Beliaev1958,Schick1971,hugenholtz1959ground}. For dilute Bose gas with short-range interactions the first results have been provided by Lee, Huang and
Yang (LHY)~\cite{Lee1957}. For the case of dipolar interactions in free space, the correction turns out to have the same dependence on the density of the gas, but its
magnitude is enhanced~\cite{Pelster2012}. The presence of an external confining potential enriches the problem by introducing a new length scale and allowing for the
effective reduction of the system dimensionality, which modifies the functional form of the beyond mean field terms~\cite{Popov,Mora2009}. For anisotropic interactions,
the relative orientation between the dipoles and trap geometry can be exploited to qualitatively change the excitation spectrum, developing a roton
mode~\cite{Santos2003,Fischer2006,Boudjemaa2013,Chomaz2018,Petter2018,Kora2019}.
The possibility to modify the properties of the roton mode allows one to explore novel phases of matter.
As demonstrated recently, by carefully tuning the parameters it was possible to bring the ground state of the system from a
single droplet to an array of phase-coherent droplets~\cite{Bombin2017,Cinti2017,Tanzi2018,Roccuzzo2019,Bottcher2019,Zhang2019}, featuring broken translational
symmetry along with superfluid order and, thereby, having the properties of a supersolid~\cite{Boninsegni2012,li2017stripe,leonard2017supersolid}. Quantum droplets
were also predicted to occur in dipolar bosonic mixtures \cite{Smith2021,Bisset2021} and in Rabi-coupled Bose mixtures \cite{Cappellaro2017}.
Theoretical studies of the droplet physics were so far largely restricted to solving an extended Gross-Pitaevskii (GP) equation with an additional term accounting for the
LHY correction~\cite{Barbut2016,Wachtler2016,bisset2016ground} taken from free-space three-dimensional calculation~\cite{Pelster2012}. The validity of this approach
relies on the local density approximation (LDA), whereas external confinement is known to modify the beyond-mean-field
corrections~\cite{Edler2017,zin2018droplets,buechler2018crossover,jachymski2018nonuniversal}. Predictions of the extended GP equation have so far been rather successful
in interpreting the experimental results, and have been to some extent supported by Monte Carlo calculations~\cite{Macia2016droplets,Cinti2017superfluid,Bottcher2019a}.
The aim of this paper is to rigorously study the LHY term in a confined dipolar system, and to provide the form of the LHY correction for an effectively two-dimensional
system, as well as to check the validity of LDA. The calculation can be performed mostly analytically provided that the confining potential is assumed to be of a box
type with periodic boundary conditions.
This work is structured as follows. In Section~\ref{sec:description}, we introduce the system.
In Section~\ref{sec:LHY} we perform the calculation of the LHY correction for the uniform system.
In Section~\ref{sec:droplet}, we show that the calculated correction can give rise to formation of droplets. Conclusions are drawn in Section~\ref{sec:conclusion}.
Technical details of the calculations are given in the Appendices.
\section{Description of the system}\label{sec:description}
The system under study is an ultradilute gas of polarized bosonic dipoles confined in a highly anisotropic trap. For simplicity, we consider the system to be placed between two infinite planes separated by a distance $L$ from each other.
We choose the coordinate system such that $z$ is the direction perpendicular to the planes.
The dipoles' polarization direction is taken to be tilted by an angle $\theta$ from the $z$ direction
(see Fig.~\ref{fig:coord}).
We assume that the system forms a Bose-Einstein condensate, and study its properties using the standard Bogoliubov method. Then the energy of the system reads
\begin{eqnarray} \nonumber
E[\psi]\!=\!&&\int\!\! d{\bf r}_\perp \int_{-L/2}^{L/2} d z \, \frac{\hbar^2}{2m} |\nabla \psi|^2
\\ \label{energia}
&&
+\! \frac{1}{2}\int \!\!d{\bf r}_\perp d{\bf r}_\perp'\, \int_{-L/2}^{L/2}\!\!\! dz dz' \, v({\bf r}-{\bf r}')
|\psi({\bf r})|^2 |\psi({\bf r}')|^2 \! + E_{LHY}[\psi]
\end{eqnarray}
where $v({\bf r}-{\bf r}')$ denotes the interaction potential and $E_{LHY}$ the LHY energy term.
We additionally assume that the gas has a constant density in the $z$ direction. This naturally restricts our considerations to systems where the changes of the density in the
$x$-$y$ plane take place on distances much larger than $L$. Below, when we derive the profile of the droplet, we exploit a variational approach for the wave function in the
$x$-$y$ plane, while the $z$ component remains position independent~\cite{martin2017vortices}. With such an approximation Eq.~(\ref{energia}) takes the form
\begin{eqnarray} \nonumber
E[\psi_\perp]\!=&&\!\int\!\! d{\bf r}_\perp \, \frac{\hbar^2}{2m} |\nabla_\perp \psi_\perp|^2
\\ \label{energia2}
&&
+\! \frac{1}{2}\int \!\!d{\bf r}_\perp d{\bf r}_\perp'\, v_{2d}({\bf r}_\perp-{\bf r}_\perp')
|\psi_\perp({\bf r}_\perp)|^2 |\psi_\perp({\bf r}_\perp')|^2 \! + E_{LHY}[\psi_\perp]
\end{eqnarray}
where by ${\bf r}_\perp = x {\bf e}_x + y {\bf e}_y$ we denote the two-dimensional position vector,
$\psi_\perp = \sqrt{L} \psi$ and
\begin{equation}\label{eq:v2d}
v_{2d}({\bf r}_\perp-{\bf r}_\perp') = \int_{-L/2}^{L/2}\!\!\! dz dz' \, v({\bf r}-{\bf r}').
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{fig1.pdf}
\caption{Graphical illustration of the system geometry. The dipoles (thick orange arrow) lie in the $x$--$z$ plane and are tilted by the angle $\theta$ with respect to the $z$-axis.
The gas is confined in the $z$-direction in a box of length $L$.
}\label{fig:coord}
\end{figure}
\section{Beyond mean field energy correction}\label{sec:LHY}
After describing the system, we proceed to the calculation of the energy $E_{LHY}$.
To make the system analytically tractable, we use the local density approximation, i.e., we calculate the LHY energy density of a uniform
system $\frac{\epsilon_0}{L^2} e_{LHY}^{2d}$ and then approximate
\begin{equation}\label{LDA}
E_{LHY}[\psi_\perp] \simeq \int\!\! d{\bf r}_\perp \,
\frac{\epsilon_0}{L^2} e_{LHY}^{2d}\!\left[\frac{2}{\pi} aL |\psi_\perp({\bf r}_\perp)|^2 \right].
\end{equation}
Here, we introduced an energy scale $\epsilon_0 \equiv \frac{\hbar^2}{2m} \left( \frac{2\pi}{L} \right)^2$. We separate out the factors
$\frac{\epsilon_0}{L^2}$ and $\frac{2}{\pi} aL$ for convenience and brevity of further formulas.
In order to evaluate the energy $ \frac{\epsilon_0}{L^2} e_{LHY}^{2d}$, we work with periodic boundary conditions (PBC) along $z$, which is an approximation that significantly
simplifies the calculations. Although convenient for analytical considerations, they introduce a spurious interaction between the Fourier copies of the
system in adjacent cells of the periodic lattice since the dipole-dipole interaction has a long-range tail. However, it was shown in Ref.~\cite{Ronen2006} that this spurious
interaction modifies the mean field energy by less than one percent as compared to the case without PBC. Since in the 3D limit, the local density approximation was
successfully used with the LHY energy in the form of the generalized Gross-Pitaevskii equation, we expect that the LHY energy, being more local in nature, will also be
little affected by the spurious interaction.
\subsection{Interaction potential}\label{sec:potential}
To proceed, we first focus on the interaction potential $v({\bf k})$.
The bare potential consists of two parts. The first is the anisotropic dipole-dipole
interaction, and the second is a strong, short-range potential which we assume to be isotropic and to dominate over the dipolar part at small distances. However, as long
as only low energies and large distances are considered, a pseudopotential of the following form can be used~\cite{Yi2000,Oldziejewski2016}:
\begin{equation} \label{v}
v(\mathbf{r})=g\bigg[\delta(\mathbf{r})+\frac{3\epsilon_{dd}}{4\pi r^3}\left(1-3 ({\bf e}_{\bf r} \cdot {\bf e})^2 \right)\bigg].
\end{equation}
Here the interaction strength $g$ is determined by the scattering length of the total potential $a$ and atomic mass $m$ by $g=\frac{4\pi\hbar^2 a}{m}$ and $\epsilon_{dd}$
parametrizes the relative strength of the dipolar part of the interaction with respect to the short-range one, which can be expressed as a ratio $a_{\rm dd}/a$ with $a_{\rm dd}$
being a characteristic dipolar length. In addition, ${\bf e}_{\bf r} = {\bf r}/|{\bf r}|$ and ${\bf e}$ denotes the polarization direction. We emphasize that $a$ is the scattering length of the
total potential, i.e., the sum of the short-range and the dipolar potentials, and it may differ from the scattering length of the short-range potential
significantly~\cite{Ronen2006druga,Bortolotti2006}.
Since we calculate the LHY energy correction in a system with periodic boundary conditions, we
shall make use of the Fourier transform of the potential $v({\bf r})$, given by
\begin{equation}\label{gvk}
v({\bf k}) = \int_{-L/2}^{L/2}\!\!dz \int \!\!d{\bf r}_\perp \, e^{-i {\bf k} {\bf r}} \frac{v({\bf r})}{g}
\end{equation}
where ${\bf r}_\perp = x {\bf e}_x + y {\bf e}_y$. Due to periodic boundary conditions
$k_z$ is quantized, i.e., $k_z = \frac{2\pi}{L} q_z$ where $q_z$ is an integer.
To simplify further formulas, we use the $1/g$ prefactor in the above.
We obtain the following analytical form of the potential
\begin{equation} \label{potdd}
\fl
v({\bf q}) = 1
+ \epsilon_{dd} \bigg\{
3 \frac{ q_x^2 \sin^2 \theta + q_x q_z \sin 2 \theta - q_\perp^2 \cos^2 \theta }{q^2} \,
\big[ 1 - (-1)^{q_z} e^{-\pi q_\perp} \big] +3 \cos^2 \theta -1 \bigg\},
\end{equation}
where ${\bf q} = {\bf k} L /2\pi$, $q_\perp = \sqrt{q_x^2+q_y^2}$, and we assume, without loss of generality, the dipoles to be oriented within the $x$--$z$ plane. Here, $\theta$ is the angle
between the polarization direction and the $z$-axis, see Fig.~\ref{fig:coord}. Since the dipole-dipole potential scales as $1/r^3$ with the distance, some caution needs
to be taken at the two limiting cases: $r \to 0$ and $r \to \infty$. We discuss this issue in \ref{app0} for the sake of completeness.
The above formula has also been calculated in \cite{Ronen2006,Baillie2015}. However, there $L$ had the meaning
of a cutoff in position space. Therefore, the values of $k_z$ appearing in these works are not quantized.
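As a cross-check of Eq.~(\ref{potdd}), one can implement it directly. The sketch below (our own helper function, with $q_z$ restricted to integers as required by the PBC quantization) verifies two limits: for $\epsilon_{dd}=0$ the potential reduces to the contact value $v=1$, while for in-plane dipoles ($\theta=\pi/2$) at $q_z=0$ the mode with ${\bf q}$ perpendicular to the dipoles softens to $v = 1-\epsilon_{dd}$, vanishing at $\epsilon_{dd}=1$.

```python
import numpy as np

def v_q(qx, qy, qz, theta, eps_dd):
    """Dimensionless interaction v(q) of Eq. (potdd); qz must be an integer
    because of the periodic-boundary-condition quantization, q = k L / (2 pi)."""
    q2 = qx**2 + qy**2 + qz**2
    qperp = np.hypot(qx, qy)
    aniso = 3.0 * (qx**2 * np.sin(theta)**2
                   + qx * qz * np.sin(2.0 * theta)
                   - qperp**2 * np.cos(theta)**2) / q2
    aniso *= 1.0 - (-1.0)**qz * np.exp(-np.pi * qperp)
    return 1.0 + eps_dd * (aniso + 3.0 * np.cos(theta)**2 - 1.0)

# contact limit and the soft in-plane mode at the critical point
print(v_q(0.3, 0.4, 2, 0.7, 0.0))        # eps_dd = 0: exactly 1
print(v_q(0.0, 1.0, 0, np.pi / 2, 1.0))  # ~ 0: phonon softening at eps_dd = 1
```

The softening of the $q_z=0$, $q_x=0$ mode is precisely the phonon instability analysed in the next subsection.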
\subsection{Critical point}\label{sec:critical}
We now proceed to the analysis of the critical point of the uniform system defined above.
We define it as the instability point of the spatially uniform solution within the
mean field approximation. We assume that initially the gas is homogeneous and search for the instability with increasing $\epsilon_{dd}$. This
instability can be found by analysing the spectrum of the Bogoliubov quasiparticles
$\varepsilon({\bf q}) = \sqrt{ q^2[q^2 + 2 \xi v({\bf q})] } = \sqrt{q^2 f({\bf q})}$ where $f({\bf q}) = q^2 + 2 \xi v({\bf q}) \geq 0$. To simplify the notation we introduce $\xi = gn/\epsilon_0$ as the dimensionless parameter measuring the strength of the interactions.
In the stable phase, all the quasiparticle energies are real. At the critical point,
the function $f({\bf q})$ still satisfies $f({\bf q}) \geq 0$ everywhere, while for at least a single value of ${\bf q} = {\bf q}_c$ we have $f({\bf q}_c) = q_c^2 + 2 \xi v({\bf q}_c) = 0$. It is straightforward
to find that for $|\delta \theta | \ll 1 $, where $\delta \theta = \theta - \pi/2$, the above holds for $q_z=0$. In other words, the instability occurs in the plane,
where dipolar attraction is the strongest. In such a case, according to Eq.~(\ref{potdd}) we have
\begin{eqnarray*}
f({\bf q}) &=&
q^2
+ 2 \xi \left( 1 + \epsilon_{dd,crit} ( 3 \cos^2 \theta -1 ) \right)
\\
&+&
6\xi \epsilon_{dd,crit} (\sin^2 \phi \sin^2 \theta - \cos^2 \theta) \,
\big[ 1 - e^{-\pi q_\perp} \big],
\end{eqnarray*}
where $\sin \phi = q_x/q_\perp$. The minimal value is achieved for $\phi =0$. At the critical point ${\bf q}={\bf q}_c$ we have $f({\bf q}) =0$ and $\nabla_{\bf q} f({\bf q}) = 0$, which leads to
\begin{eqnarray*}
q_c^2 +2 \xi (1 - \epsilon_{dd,crit})
+
6 \xi \epsilon_{dd,crit} \sin^2 \delta \theta \, e^{-\pi q_c} = 0
\end{eqnarray*}
and
\begin{eqnarray*}
q_c e^{\pi q_c} = 3 \pi \xi \epsilon_{dd,crit} \sin^2 \delta \theta \, .
\end{eqnarray*}
The above can be rewritten as
\begin{equation}\label{epcrit}
\epsilon_{dd,crit} - 1 = \frac{1}{2\xi} \left( q_c^2 + \frac{2}{\pi} q_c \right) \, ,
\end{equation}
where $q_c$ is a solution of
\begin{equation}\label{qceq}
q_c e^{\pi q_c} = 3 \pi \xi \sin^2 \delta \theta
\left( 1 + \frac{1}{2\xi} \left( q_c^2 + \frac{2}{\pi} q_c \right) \right).
\end{equation}
As we can see, in general $\epsilon_{dd,crit}$ depends on $\delta \theta$ and $\xi$.
However, there are special cases that we want to address.
Firstly, for $\delta \theta =0$ the solution simplifies to $q_c = 0$ and $\epsilon_{dd,crit} = 1$ which does not depend on the value of $\xi$.
There is also another interesting limit where $\delta \theta \ll 1$ and simultaneously $ \delta \theta^2 \xi \ll 1$, where we find that
\begin{equation}\label{epp_crit}
\epsilon_{dd,crit} \simeq 1 + 3 \delta \theta^2
\end{equation}
which again shows lack of $\xi$ dependence.
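Equations (\ref{epcrit}) and (\ref{qceq}) are transcendental but easy to solve numerically. The sketch below (our own fixed-point iteration, convergent for the small $q_c$ relevant here) confirms the limit of Eq.~(\ref{epp_crit}): for $\delta\theta \ll 1$ and $\xi\,\delta\theta^2 \ll 1$ the critical strength approaches $1 + 3\,\delta\theta^2$.

```python
import math

def eps_crit(xi, dtheta, n_iter=200):
    """Solve Eqs. (epcrit)-(qceq) by iterating
    q_c <- 3 pi xi sin^2(dtheta) * [1 + (q_c^2 + 2 q_c/pi)/(2 xi)] * exp(-pi q_c),
    which is a contraction for the small q_c of interest."""
    s2 = math.sin(dtheta)**2
    qc = 0.0
    for _ in range(n_iter):
        eps = 1.0 + (qc**2 + 2.0 * qc / math.pi) / (2.0 * xi)
        qc = 3.0 * math.pi * xi * s2 * eps * math.exp(-math.pi * qc)
    # Eq. (epcrit)
    return 1.0 + (qc**2 + 2.0 * qc / math.pi) / (2.0 * xi)

print(eps_crit(0.01, 0.05))   # close to 1 + 3*0.05**2 = 1.0075
```

For larger tilts or stronger interactions the full $\xi$ dependence of $\epsilon_{dd,crit}$ reappears, as expected from Eq.~(\ref{qceq}).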
\subsection{The beyond-mean-field term calculation}\label{sec:lhy}
We proceed to the calculation of the analogue of the Lee-Huang-Yang term for the trapped gas interacting with the potential given by Eq.~(\ref{v}) in the case of a uniform system.
We express the Lee-Huang-Yang 3D energy density of the system as $\frac{\epsilon_0}{L^3} e_{LHY}^{2d}(\epsilon_{dd},\theta,\xi)$, with~\cite{HP1959}
\begin{equation}
\label{lhydef}
-\frac{2 e_{LHY}^{2d}}{\xi^2} \!=\! \sum_{q_z}\!\! \int\!\! d{\bf q}_\perp \frac{v^2({\bf q})}{ \varepsilon({\bf q}) + q^2 +
\xi v({\bf q})} \!-\! \int\!\! d{\bf q} \frac{ v_{3\mathrm{d}}^2({\bf q})}{2q^2}\, ,
\end{equation}
where ${\bf q}_\perp = q_x {\bf e}_x + q_y {\bf e}_y$, $v({\bf q})$ is given by Eq.~(\ref{potdd}), $\varepsilon({\bf q}) = \sqrt{ q^2[q^2 + 2 \xi v({\bf q})] }$ and
$v_{3d}({\bf q}) = \int \mbox{d} {\bf r} \, \exp \left( - i \frac{2\pi}{L} {\bf q} {\bf r} \right) v({\bf r})/g = 1 + \epsilon_{dd} \left(3 \frac{({\bf q} \cdot {\bf e})^2}{q^2} - 1 \right)$ is the three-dimensional Fourier transform of the potential $v({\bf r})$. Here ${\bf e} = \sin \theta {\bf e}_x + \cos \theta {\bf e}_z$ is the direction of the dipoles' polarization (see Fig.~\ref{fig:coord}).
The second term in Eq.~(\ref{lhydef}) results from the standard high momenta renormalization procedure which we describe in \ref{appR}.
For $\xi \gg 1$, the atoms occupy many excitation levels in the confined direction and the system should behave as three-dimensional. Indeed, we find that for $\xi\gg1$, the
beyond-mean-field energy $e_{LHY}^{2d}$ recovers the 3D result~\cite{Pelster2012}
\begin{eqnarray}
\label{lhy3d}
e_{LHY}^{2{d}}(\xi) \stackrel{\xi\to\infty}{\longrightarrow}e_{LHY}^{3{d}}= \xi^{5/2} \frac{8\pi \sqrt{6}}{5}\, .
\end{eqnarray}
\subsubsection{Critical point calculation}
We now focus on the properties of $e_{LHY}^{2d}(\xi)$ for $\delta \theta =0 $ and $\epsilon_{dd}=1$, which as described in Section~\ref{sec:critical}, is the critical
point. Fig.~\ref{fig2} shows the numerically calculated results for the corrections described by Eq.~(\ref{lhydef}) as a function of~$\xi$. The beyond-mean-field term
approaches the 3D limit with increasing $\xi$. As can be seen from the figure, the value of $e_{LHY}^{2{d}}$ is already close to the limiting case of $e^{3{d}}_{LHY}$ for
$\xi \gtrsim 0.1$. This means that a rather strongly confined gas can still be reasonably well described using the free space results with the trap incorporated in the
LDA fashion.
The $\xi\ll 1$ regime describes the quasi-2D limit in which the collisions are 3D in character but the interaction energy is too low to populate the excited states in the
confined direction. As we explain in detail in \ref{app1}, in this limit the two lowest orders of the expansion in $\xi$ take the form
\begin{eqnarray}
\label{lhy2d}
e_{LHY}^{2{d}}(\xi) \simeq c_2 \xi^2 + c_3 \xi^3\, ,
\end{eqnarray}
where $c_2 \simeq 0.1974$, $c_3 \simeq 108$. Comparing with the numerical result, we find that this approximation works well as long as $\xi \lesssim 0.002$. The first term of the expansion in Eq.~(\ref{lhy2d}) is proportional to the square of the density and provides a correction to the mean field energy of the BEC. This correction originates from the effect of the confinement on the two-body scattering amplitude and could also be derived from the two-body problem employing the Born
expansion, as observed also in Refs.~\cite{zin2018droplets,buechler2018crossover}. The second term proportional to $n^3$ can be interpreted as an emergent three-body
term in the energy functional stemming from quantum fluctuations. This effect is distinct from three-body forces induced by confinement or internal structure, which have been studied e.g. in~\cite{Johnson2009,PetrovPRL2014}. In our case, this term turns out to be repulsive ($c_3>0$), providing a possible stabilization mechanism for the gas close to the instability, and indicating that dipolar droplets may exist in a quasi-2D system. We note that a similar effective term can be calculated using perturbation theory on a weakly interacting few-body system~\cite{Petrov2021}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{fig2a.pdf}
\includegraphics[width=0.49\textwidth]{fig2b.pdf}
\caption{Left panel: the LHY energy $e_{LHY}^{2{d}}$ as a function of $\xi$ at the critical point. The energy (solid line) is compared to the 3D limit from Eq.~(\ref{lhy3d}) (dotted line).
The dashed blue line shows the result of the approximation from Eq.~(\ref{lhy2d}).
Right panel: a magnified view of the comparison of the LHY energies calculated for $\delta \theta = 0$, $\epsilon_{dd} = 1$ (black), $\delta \theta = 0$, $\epsilon_{dd} =1 - 0.01$ (green dashed) and $\delta \theta = 0.1$, $\epsilon_{dd} = 1$ (red dotted).}
\label{fig2}
\end{figure}
The expansion in Eq.~(\ref{lhy2d}) has a vastly different structure than the beyond-mean-field term in quasi-2D Bose-Bose mixtures~\cite{zin2018droplets} or in a single
quasi-2D BEC~\cite{buechler2018crossover} with contact interactions. The origin of this discrepancy can be traced back to the fact that the Fourier transform of the
potential $v({\bf q})$ near the critical point is linear in $q_\perp$ (for $q_z =0$). This is the reason why the logarithmic terms that are usually present in two dimensions do not appear here.
Let us now briefly discuss the typical experimental conditions and the parameters needed to reach the 2D regime in the LHY term. Most experiments in this field are performed using dysprosium and erbium~\cite{Boettcher2020} at gas densities $n\sim10^{14}$cm$^{-3}$ with scattering length being of the order of $100\,a_0$. Assuming the box size of $1\mu$m, we then obtain $\xi\approx 1$ for a typical experiment. In order to study the 2D regime one would need to increase the confinement strength by more than an order of magnitude, which could be realized using subwavelength traps~\cite{Wang2018} or a potential shaped by a digital micromirror device~\cite{Tajik19}. Equivalently, one could decrease the gas density to about $10^{11}$cm$^{-3}$, which would on the other hand require much lower temperatures and longer operation times.
\subsubsection{Before the critical point }
In the considerations above, we discussed the LHY term calculated at the critical point $\theta = \pi/2$ and $\epsilon_{dd} = 1$. Below we show that the LHY energy in the region of parameters $\delta \theta \ll 1$
and $\delta \epsilon_{dd} \ll 1 $ does not change much.
We note that, strictly speaking, the LHY energy cannot be calculated
{\it after} crossing the critical point. Therefore we calculate numerically the
LHY energy {\it before} and at the critical point in three cases: $\delta \theta = 0$, $\epsilon_{dd} = 1$; $\delta \theta = 0$, $\epsilon_{dd} =1 - 0.01$; and $\delta \theta = 0.1$, $\epsilon_{dd} = 1$. The results are plotted in the right panel of Figure~\ref{fig2}. We notice that all the studied cases show the same trend and differ by at most a few percent. We have verified that the correction changes continuously as we depart from the $\delta \theta = 0$, $\epsilon_{dd} = 1$ point.
Therefore, in the calculations of the droplet state below we take the value of LHY energy
calculated at the critical point $\epsilon_{dd} =1$ and $\delta \theta =0$.
Finally, we note that for the chosen orientation of the dipoles, for which the phonon instability occurs, we were able to provide a universal result in the sense that the
expansion coefficients in Eq.~(\ref{lhy2d}) do not depend on the box width. This is related to the fact that for dipoles oriented in plane the condensate depletion
converges, while for perpendicular orientation, where the roton instability occurs, the condensate depletion calculated within Bogoliubov theory diverges and the system
becomes nonuniversal~\cite{Boudjemaa2013,Fedorov2014,jachymski2018nonuniversal}.
\section{The droplet state}
\label{sec:droplet}
So far we have shown that the quasi-2D dipolar gas can in principle support droplet solutions, as the LHY correction provides the mechanism for stabilization.
In this section, we find the droplet density in the limit of large atom number and numerically investigate the density profile of the droplets in the finite system case.
From Eqs.~(\ref{energia2}) and (\ref{LDA}) we obtain the energy of the considered system, where the
LHY energy correction is incorporated using the local density approximation,
\begin{eqnarray} \nonumber
E[\psi_\perp({\bf r})]\!=\!&&\int\!\! d{\bf r}_\perp \, \frac{\hbar^2}{2m} |\nabla_\perp \psi_\perp|^2 \!+ \frac{1}{2} \! \int \!\!d{\bf r}_\perp d{\bf r}_\perp'\, v_{2{d}}({\bf r}_\perp-{\bf r}_\perp')|\psi_\perp({\bf r}_\perp)|^2 |\psi_\perp({\bf r}_\perp')|^2
\\ \label{dlugie}
&&+\int d {\bf r}_\perp \, \frac{\epsilon_0}{L^2} e_{LHY}^{2{d}}\!\left[\frac{2}{\pi} aL |\psi_\perp({\bf r}_\perp)|^2 \right],
\end{eqnarray}
where by ${\bf r}_\perp = x {\bf e}_x + y {\bf e}_y$ we denote the two-dimensional position vector.
We note that the coefficient multiplying the square of the wave function in the LHY part is
introduced because we have calculated the correction as a function of $\xi$ instead of the density alone (we recall that here $a$ is the scattering length).
Now we focus on the $\delta \theta =0$ case.
To calculate the energy we need to have $e_{LHY}^{2{d}}$ as the function of $\epsilon_{dd}$.
In the previous Section we have presented calculation for $\epsilon_{dd} = 1$ and
$\epsilon_{dd} = 1 -0.01$. We found that $e_{LHY}^{2{d}}$ weakly depends on this change of
$\epsilon_{dd}$. The calculation of the LHY correction in the uniform case
is not possible within the standard Bogoliubov approach above the
critical point, i.e., for $\epsilon_{dd} > 1$, due to the imaginary frequencies of the low-lying modes.
Still, it has been shown, using more sophisticated techniques, that $e_{LHY}^{2{d}}$ generally
changes very weakly after crossing the critical point \cite{Ota2020,Hu2020}.
Therefore in what follows in the vicinity of $\epsilon_{dd} =1 $ we approximate
$e_{LHY}^{2{d}}$ by its value at $\epsilon_{dd} =1 $.
We now point out an interesting property, which follows from the analysis
of the interaction energy; in Fourier space it takes the form
\begin{eqnarray*}
\int \!\!d{\bf r}_\perp d{\bf r}_\perp'\, v_{2d}({\bf r}_\perp-{\bf r}_\perp')
|\psi_\perp({\bf r}_\perp)|^2 |\psi_\perp({\bf r}_\perp')|^2
= \frac{g}{(2\pi)^2} \int\!\!d {\bf k}_\perp \, v_{2d}({\bf k}_\perp) [n_{2d}({\bf k}_\perp)]^2,
\end{eqnarray*}
where on the right-hand side $n_{2\mathrm{d}}$ and $v_{2\mathrm{d}} $ are the Fourier transforms of the two-dimensional density, i.e., $n_{2d}({\bf k}_\perp) =\int d {\bf r}_\perp
\, e^{-i {\bf k}_\perp {\bf r}_\perp}|\psi_\perp({\bf r}_\perp)|^2$ and interaction potential
\begin{eqnarray*}
v_{2d}({\bf k}_\perp) = \int d {\bf r}_\perp \, e^{-i {\bf k}_\perp {\bf r}_\perp} \frac{v_{2d}({\bf r}_\perp)}{g}.
\end{eqnarray*}
The latter can be calculated analytically and for $\delta \theta =0$ reads
\begin{eqnarray}\label{v2dFT}
v_{2d}({\bf k}_\perp) \!=\! \frac{1}{L} \left\{ 1 \! +\! \epsilon_{dd} \left[ \frac{3 k_x^2}{k_\perp^3L}
\left( e^{ - k_\perp L} \!+\! k_\perp L \!-\!1 \right) \!-\!1 \right] \right\}. \quad
\end{eqnarray}
The form of the potential from Eq.~(\ref{v2dFT}) suggests it can be split into two parts. The first one is $v_{2{d},loc}({\bf k}_\perp) = \frac{1 - \epsilon_{dd}}{L}$, whose inverse Fourier transform yields a contact (local) potential $v_{2{d},loc}({\bf r}_\perp) = \frac{1 - \epsilon_{dd}}{L} g \delta({\bf r}_\perp)$.
The second part is the nonlocal potential
\begin{equation}\label{vnon}
v_{2{d},non}({\bf k}_\perp) = \frac{\epsilon_{dd}}{L} \frac{3 k_x^2}{k_\perp^3L}
\left( e^{ - k_\perp L} + k_\perp L -1 \right).
\end{equation}
We notice that this potential goes to zero for $k_\perp \rightarrow 0$, and so we expect it to be less important as the size of the droplets grows.
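The small-$k_\perp$ behaviour claimed here can be checked symbolically. The sketch below (sympy, taking ${\bf k}_\perp$ along $x$ so that $k_x = k_\perp = k$) expands Eq.~(\ref{vnon}) and shows that $v_{2d,non} \simeq \frac{3}{2}\epsilon_{dd} k - \frac{1}{2}\epsilon_{dd} L k^2 + \ldots$, i.e., the nonlocal part vanishes linearly as $k \to 0$.

```python
import sympy as sp

k, L, eps_dd = sp.symbols('k L epsilon_dd', positive=True)

# Eq. (vnon) with k_perp along x, i.e. k_x = k_perp = k
v_non = eps_dd / L * 3 * k**2 / (k**3 * L) * (sp.exp(-k * L) + k * L - 1)

# low-momentum expansion: the apparent 1/k prefactor is cured by the bracket
series = sp.expand(sp.series(v_non, k, 0, 3).removeO())
print(sp.simplify(series))
```

The leading term $\frac{3}{2}\epsilon_{dd}k$ is also the origin of the linear-in-$q_\perp$ behaviour of $v({\bf q})$ near the critical point noted above.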
Having the above, we now focus on the quasi-2D limit,
making use of the expansion given by Eq.~(\ref{lhy2d}).
Firstly, we calculate the equilibrium density $|\psi_\perp|^2 =n^{eq}$ of a large droplet.
Here we model the droplet as having a constant density over a two-dimensional area denoted as $V_\perp$. We assume that the droplet has a sharp edge and neglect the boundary energy.
In such a case, the energy of the droplet is
\begin{equation}
E = \frac{g}{2L} (1-\epsilon_{dd}) (n^{eq})^2 V_\perp +
\frac{\epsilon_0}{L^2} \left( c_2 \left(\frac{2}{\pi} aL n^{eq }\right)^2 +
c_3 \left(\frac{2}{\pi} aL n^{eq}\right)^3 \right) V_\perp.
\label{Ee}
\end{equation}
Note that here we use the approximation described above and neglect $v_{2{d},non}$.
Substituting here the normalization condition, i.e., $N = n^{eq} V_\perp$,
and requiring $(\partial E/\partial V_\perp)_N = 0$, we find the equilibrium density
of the droplet
\begin{eqnarray*}
\label{eq:eq}
n^{eq} = \frac{1}{ a^2} \frac{\pi^2 \epsilon}{16 c_3}.
\end{eqnarray*}
Here $ \epsilon = \epsilon_{dd} - 1 - \frac{4}{\pi} \frac{a}{L} c_2$ is the stability parameter, and the droplet is formed when $\epsilon>0$. We notice that the critical point is shifted from the usual condition $\epsilon_{dd} - 1>0$ by the correction to the mean field coupling strength due to confinement.
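The minimization leading to $n^{eq}$ can be verified symbolically. The sketch below (sympy; the variable names are ours) substitutes $n = N/V_\perp$ into Eq.~(\ref{Ee}), solves $(\partial E/\partial V_\perp)_N = 0$, and checks the result against $n^{eq} = \pi^2\epsilon/(16 c_3 a^2)$ with $\epsilon = \epsilon_{dd} - 1 - \frac{4}{\pi}\frac{a}{L}c_2$.

```python
import sympy as sp

N, Vp = sp.symbols('N V_perp', positive=True)
a, L, eps_dd, c2, c3, hbar, m = sp.symbols(
    'a L epsilon_dd c_2 c_3 hbar m', positive=True)

g = 4 * sp.pi * hbar**2 * a / m                  # g = 4 pi hbar^2 a / m
eps0 = hbar**2 / (2 * m) * (2 * sp.pi / L)**2    # eps_0 = (hbar^2/2m)(2 pi/L)^2

n = N / Vp                                       # constant-density droplet
x = 2 * a * L * n / sp.pi                        # argument of e_LHY in Eq. (LDA)
E = (g / (2 * L) * (1 - eps_dd) * n**2
     + eps0 / L**2 * (c2 * x**2 + c3 * x**3)) * Vp

V_eq = sp.solve(sp.diff(E, Vp), Vp)[0]           # (dE/dV_perp)_N = 0
n_eq = sp.simplify(N / V_eq)                     # N drops out

eps = eps_dd - 1 - 4 * a * c2 / (sp.pi * L)      # stability parameter
# should simplify to 0, confirming n_eq = pi^2 eps / (16 c_3 a^2)
print(sp.simplify(n_eq - sp.pi**2 * eps / (16 * c3 * a**2)))
```

The equilibrium density is independent of $N$, as expected for a liquid-like droplet.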
With the equilibrium density at hand, we rewrite the energy functional in the convenient dimensionless units, i.e., the unit of length $d = \frac{\sqrt{2 c_3}}{\pi^{3/2} \epsilon
} \sqrt{aL}$, the unit of energy $E_0 = \frac{g}{L} \epsilon ( n^{eq})^2 d^2$, and we set $\psi_\perp/\sqrt{n^{eq}} \rightarrow \psi_\perp $.
Such a transformation results in
\begin{equation}\label{gpe2d}
E \!=\! \int \!\!d{\bf r}_\perp \bigg[|\nabla_\perp \psi_\perp({\bf r}_\perp) |^2 \!-\! \frac{1}{2} |\psi_\perp ({\bf r}_\perp)|^4
\!+\! \frac{1}{4} |\psi_\perp({\bf r}_\perp)|^6 \bigg] \!+\! \delta E,
\end{equation}
where the first integral contains the local terms and $\delta E$ is the contribution to the total energy from the nonlocal part of the dipole-dipole term
\begin{eqnarray*}
\delta E = \frac{L}{8\pi^2 \epsilon} \int d {\bf k}_\perp \, v_{2\mathrm{d},non}\left(\frac{{\bf k}_\perp}{d} \right) |n_{2d}({\bf k}_\perp) |^2.
\end{eqnarray*}
We note that here $n_{2d}({\bf k}_\perp)$ is the Fourier transform of the density (defined as before but now in the new units), and $v_{2\mathrm{d},non}$ is given by Eq.~(\ref{vnon}).
The physical number of atoms in the droplet is
\begin{eqnarray*}
N_{at} = n^{eq} d^2 N = \frac{L}{a} \frac{1}{8\pi \epsilon} N,
\end{eqnarray*}
where $N = \int d {\bf r}_\perp \, |\psi_\perp({\bf r}_\perp)|^2$.
We have minimized the functional from Eq.~(\ref{gpe2d}) numerically and found the density profiles of the droplet for different total number of atoms.
In the numerical calculations we take $a = 10\ \mathrm{nm}$, the length $L = 1\ \mu\mathrm{m}$, and we set $\xi^{eq} = 0.001$.
These numbers lead to $\epsilon = \xi^{eq} \frac{a}{L} \frac{8c_3}{\pi} \simeq 0.0027 $
and $ n^{eq} d^2 = \frac{L}{a} \frac{1}{8\pi \epsilon} \simeq 1400 $.
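As a quick sanity check, the conversion from the dimensionless norm $N$ to the physical atom number $N_{at}$ can be evaluated directly from the relation above (a minimal sketch using the parameter values quoted in the text; the quoted $n^{eq} d^2 \simeq 1400$ corresponds to rounding $\epsilon$):

```python
import math

# Parameter values quoted in the text (assumed here for illustration).
a = 10e-9        # scattering length, 10 nm
L = 1e-6         # confinement length, 1 um
epsilon = 0.0027 # stability parameter

def atoms_from_norm(N):
    """Physical atom number N_at = (L/a) * N / (8*pi*epsilon)."""
    return (L / a) / (8.0 * math.pi * epsilon) * N

# The conversion factor itself is n^eq d^2; the text quotes ~1400
# after rounding epsilon, the unrounded value is slightly larger.
print(f"n_eq * d^2 ~ {atoms_from_norm(1.0):.0f}")
for N in (15, 300, 500):
    print(f"N = {N:3d}  ->  N_at = {atoms_from_norm(N):.2e}")
```

The printed atom numbers agree with the values quoted below up to the rounding of $\epsilon$.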
In order to find the density that minimizes the functional, we calculate the functional derivative of
$E - \mu N$ with respect to $\psi_\perp(x,y)$; here, we impose an additional constraint on the total norm, so a Lagrange multiplier $\mu$ appears. This
procedure leads to a Gross-Pitaevskii-type equation for $\psi_\perp$. In the first step of our numerical approach, we drop the term $\delta E$ from the functional and
solve the resulting equation by the imaginary-time method. This yields a spatially symmetric profile $\psi_\perp$, which is then used as the initial point for the full problem including the term $\delta E$.
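A schematic of this imaginary-time step for the local part of the functional, whose Gross-Pitaevskii-type equation reads $\mu \psi_\perp = -\nabla_\perp^2 \psi_\perp - |\psi_\perp|^2 \psi_\perp + \tfrac{3}{4}|\psi_\perp|^4 \psi_\perp$, is sketched below (a toy illustration, not the production solver: the nonlocal term $\delta E$ is omitted, the grid is coarse, periodic boundaries are assumed, and all numerical parameters are illustrative):

```python
import math

# Illustrative grid, step, and iteration count (not the paper's values).
M, h, dt, steps = 20, 0.7, 0.03, 300
Nnorm = 15.0  # target two-dimensional norm N

def laplacian(psi):
    """Periodic finite-difference Laplacian on the M x M grid."""
    return [[(psi[(i+1) % M][j] + psi[(i-1) % M][j]
              + psi[i][(j+1) % M] + psi[i][(j-1) % M]
              - 4.0 * psi[i][j]) / h**2
             for j in range(M)] for i in range(M)]

def normalize(psi):
    """Rescale psi so that sum |psi|^2 h^2 equals the target norm."""
    s = sum(p * p for row in psi for p in row) * h * h
    c = math.sqrt(Nnorm / s)
    return [[c * p for p in row] for row in psi]

def energy(psi):
    """Local energy: |grad psi|^2 - 0.5 |psi|^4 + 0.25 |psi|^6."""
    e = 0.0
    for i in range(M):
        for j in range(M):
            gx = (psi[(i+1) % M][j] - psi[i][j]) / h
            gy = (psi[i][(j+1) % M] - psi[i][j]) / h
            n = psi[i][j] ** 2
            e += gx * gx + gy * gy - 0.5 * n * n + 0.25 * n ** 3
    return e * h * h

# Gaussian initial guess centred in the box, normalized to Nnorm.
psi = [[math.exp(-(((i - M / 2) ** 2 + (j - M / 2) ** 2) * h * h) / 8.0)
        for j in range(M)] for i in range(M)]
psi = normalize(psi)

E0 = energy(psi)
for _ in range(steps):
    lap = laplacian(psi)
    # Gradient-flow step: mu psi = -lap psi - psi^3 + (3/4) psi^5,
    # followed by re-normalization to the fixed norm.
    psi = [[psi[i][j] - dt * (-lap[i][j]
                              - psi[i][j] ** 3
                              + 0.75 * psi[i][j] ** 5)
            for j in range(M)] for i in range(M)]
    psi = normalize(psi)
E1 = energy(psi)
print(f"E: {E0:.4f} -> {E1:.4f}")
```

The energy decreases monotonically toward the symmetric droplet profile that would then seed the full minimization with $\delta E$.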
\begin{figure}[tb]
\centering \includegraphics[width=0.6\columnwidth]{fig3.pdf}
\caption{The cuts through the two-dimensional density of the droplet along the $x$ and $y$ directions taken through the centre of the cloud $|\psi_\perp(x,0)|^2$ and $|\psi_\perp(0,y)|^2$.
The black solid line is calculated for $N=15$; here the two cuts are indistinguishable and no anisotropy can be seen.
The red dotted (dotted-dashed) and blue dotted (dotted-dashed) lines correspond to $N=300$ and $500$, respectively, and show the cuts along the $y$ ($x$) direction;
the norm $N$ is also indicated by arrows.
In each case, the double-dotted-dashed line between the two cuts shows the result of the calculation without the anisotropic contribution $\delta E$ which gives a symmetric profile.
The inset (the units on the axes are the same as in the main panel) shows a zoom-in on the cloud centre.
The symmetric solution (without $\delta E$) always overestimates the central density. With increasing norm $N$, the full solution approaches the homogeneous limit.
In the main panel, the horizontal, dotted gray line indicates the homogeneous limit~$|\psi_\perp|^2=n^{eq}$.
}
\label{fig_droplet}
\end{figure}
Fig.~\ref{fig_droplet} displays the numerically calculated density profiles of
the droplets for three values of the two-dimensional norm, $N = 15$, 300, and 500, corresponding to atom numbers $N_{at} = 2.1 \times 10^4$, $4.2 \times 10^5$, and $7.0\times 10^{5}$, respectively.
We note that for these parameters the three-dimensional
density of atoms is on the order of $10^{11} \mathrm{cm}^{-3}$, and so the three-body losses not included in the theory should still be moderate.
As can be seen in Fig.~\ref{fig_droplet}, due to the effect of $\delta E$, the droplet shape that we obtain is not cylindrically symmetric. The observed anisotropy, however, is not
prominent. For larger atom numbers, for instance $N = 300$ or 500, the density in the middle is almost constant and close to the analytically predicted equilibrium value
$n^\mathrm{eq}$ (see also the inset of Fig.~\ref{fig_droplet}). Finally, we see that with an increasing number of atoms, the droplet grows in volume while keeping an almost constant
central density, accommodating the additional atoms mainly at its surface. This effect indicates that the system has liquid properties.
Now we shortly discuss the validity of the assumptions made in Sections~\ref{sec:description} and~\ref{sec:LHY}. In Sec.~\ref{sec:description}, we assumed that the length on which
the density changes in the $x$ and $y$ directions is much larger than $L$. From our analysis, we find that this characteristic lengthscale is given by $d$, and the required
condition is $d \gg L$. In the quasi-2D limit analysed in this section, we have $\frac{L}{d} = \sigma^{eq} \sqrt{\frac{a}{L}}\, 4 \sqrt{2 \pi c_3 }$, and using the values of
the parameters taken from numerical simulations, we obtain $L/d \simeq 0.01$, which confirms the separation of lengthscales.
Additionally, in Sec.~\ref{sec:LHY} we assumed the validity of the LDA. When we calculated the LHY energy in the uniform case, we integrated and summed over the wave vectors ${\bf k}$ with a
characteristic cutoff $k_c$. In the quasi-2D geometry, $k_c$ is of the order of a few $1/L$, but the coefficients $c_2$ and $c_3$ have their values approximately the same as for
$k_c = \infty$. The LDA is justified, when $k_c$ is much larger than the inverse of the length on which the density changes in the $x$-$y$ plane, which we found to be equal to
$d$. Therefore, in the quasi-2D limit, we arrive at $d \gg L$, which is the same condition as obtained above where we analyzed the assumptions stated in Sec.~\ref{sec:description}.
Finally, we discuss the droplet formation in the case $\delta \theta \neq 0$ with $\delta \theta \ll 1$.
This problem can be solved by minimizing the energy functional given by Eq.~(\ref{dlugie});
above, we did so for $\delta \theta = 0$.
For $\delta \theta \ll 1$, the potential $v_{2d}({\bf r}_\perp)$ should not change much with respect to the
$\delta \theta = 0$ case, as it is a continuous function of $\delta \theta$.
In addition, we have shown that the change of the LHY energy term is negligible for $\delta \theta \ll 1$.
The droplet state arises from the interplay between the interaction and LHY energies.
As neither of these changes much for $\delta \theta \ll 1$, the properties of the
droplet state should also be close to those analysed above for the $\delta \theta = 0$ case.
\section{Conclusions}
\label{sec:conclusion}
We analysed the beyond-mean-field behaviour of a dipolar Bose gas subject to quasi-two-dimensional confinement. Under several simplifying assumptions, we determined analytically the Lee-Huang-Yang correction to the mean-field energy. We showed that close to
the phonon instability the correction can become decisive for the properties of the system, preventing it from collapse. The ground state of the system in this case is a
finite-size self-bound droplet, similar to the three-dimensional case. Crucially, we found that for moderate confinement strengths the magnitude of the correction is close to its free-space limit, validating the use of the local density approximation.
\ack
P.Z. and Z.I. acknowledge the support from the Polish National Science Centre Grant No. 2015/17/B/ST2/00592.
M.P. acknowledges support from grant No. 2017/25/B/ST2/01943.
\section{Spatial perception}
\label{sec:sensory_perception}
The equipment on board \ac{uav} platforms within our research group is modular and replaceable to support a wide spectrum of research areas~\cite{hert2022hardware}.
In the proposed system for agile subterranean navigation, however, the aerial platform is fixed to ease fine-tuning of the on-board-running algorithms.
From the point of perception, it relies heavily on 3D \ac{lidar} from Ouster (SLAM, dense mapping, and artifact localization), and utilizes vertically-oriented \ac{rgbd} cameras for filling space out of \ac{fov} of the primary \ac{lidar} sensor, and uses two \ac{rgb} Basler cameras for artifact detection, supported by powerful LEDs illuminating the scene.
The flow of sensory data within the entire system is shown directly in~\autoref{fig:system}.
\subsection{Sensors calibration}
\label{sec:sensors_calibration}
The intrinsics of the \ac{lidar} sensor and the \ac{rgbd} cameras are factory-calibrated, whilst the monocular \ac{rgb} cameras are calibrated with standard OpenCV calibration tools, assuming the pinhole camera model.
To find the extrinsics of the sensors, the cameras are calibrated one-by-one with respect to the \ac{lidar} sensor, whereas the extrinsics of the \ac{lidar} with respect to the flight control unit are given by the CAD model of the robot.
The camera-to-lidar extrinsics are calibrated using a checkerboard camera calibration pattern with known dimensions.
The calibration pipeline detects the pattern in both modalities (\ac{lidar} data and the \ac{rgb} image), finds mutual correspondences, and estimates the extrinsics by formulating the problem as a perspective-n-point optimization that minimizes the reprojection error of the mutual correspondences.
The calibration process for a single camera is described in~\autoref{alg:calibrate_camera_to_lidar}.
\begin{algorithm}
\algdef{SE}[SUBALG]{Indent}{EndIndent}{}{\algorithmicend\ }%
\algtext*{Indent}
\algtext*{EndIndent}
\algnewcommand\AND{\textbf{and}~}
\algnewcommand\Not{\textbf{not}~}
\algnewcommand\Or{\textbf{or}~}
\algnewcommand\Continue{\textbf{continue}~}
\algnewcommand\Input{\State{\textbf{Input:~}}}%
\algnewcommand\Output{\State{\textbf{Output:~}}}%
\algnewcommand\Parameters{\State{\textbf{Parameters:~}}}%
\algnewcommand\Begin{\State\textbf{Begin:~}}%
\algnewcommand{\LineComment}[1]{\State \(\triangleright\) #1}
\caption{Calibrating extrinsic parameters of \ac{rgb} cameras with respect to \ac{lidar} sensor.
}\label{alg:calibrate_camera_to_lidar}
\begin{algorithmic}[1]
\footnotesize
\Input
\Indent
\State $\set{P} = \left\{\mathcal{L}_i,\mathcal{I}_i\right\}$
\Comment{set of synchronized pairs of \ac{lidar} and camera frames}
\State $\mathbf{K} \in \mathbb{R}^{3\times3}$
\Comment{camera matrix of intrinsic parameters}
\State $\mathbf{D} \in \mathbb{R}^{5}$
\Comment{camera distortion coefficients}
\EndIndent
\Output
\Indent
\State $\mathbf{R},~\mathbf{t}$
\Comment{extrinsics (rotation and translation) of the camera in the \ac{lidar} frame}
\EndIndent
\Parameters
\Indent
\State $\mathcal{O}$
\Comment{checkerboard camera calibration pattern (dimensions, square count, square size)}
\EndIndent
\Begin
\Indent
\State $\set{S} \coloneqq \emptyset$
\Comment{initialize set of camera-\ac{lidar} correspondences}
\For{$ \left\{\mathcal{L}_i,\mathcal{I}_i\right\} \in \set{P} $}
\State $ \set{C}_{L} \coloneqq \algfunc{findCalibPatternCornersInLidar}\left(\mathcal{L}_i, \mathcal{O}\right) $
\Comment{determine four square-corners of calibration pattern in \ac{lidar} data}
\State $ \set{C}_{I} \coloneqq \algfunc{findCalibPatternCornersInImage}\left(\mathcal{I}_i, \mathcal{O}, \mathbf{K}, \mathbf{D}\right) $
\Comment{determine four square-corners of calibration pattern in \ac{rgb} image}
\If{$\Not~|\set{C}_{L}| = 4~\Or~\Not~|\set{C}_{I}| = 4$}
\State \Continue
\Comment{skip pair}
\EndIf
\State $\set{S} \coloneqq \set{S} \cup \left\{ \set{C}_{L},~\set{C}_{I} \right\}$
\EndFor
\State $\hat{\mathbf{R}},~\hat{\mathbf{t}} \coloneqq \algfunc{solvePnP}\left(\set{S}\right)$
\Comment{compute rough estimate of extrinsics~\cite{lepetit2008EPnPAA}}
\State $\mathbf{R},~\mathbf{t} \coloneqq \algfunc{refinePnPWithLM}\left(\set{S}, \hat{\mathbf{R}}, \hat{\mathbf{t}}\right)$
\Comment{numerically optimize estimate of extrinsics with Levenberg-Marquardt method}
\EndIndent
\end{algorithmic}
\end{algorithm}
\subsection{Filtering observation noise}
\label{sec:filtering_observation_noise}
The aerodynamic influence of a multi-rotor \ac{uav} on the environment is not negligible, particularly in confined settings.
The fast-rotating propellers generate airflow that lifts light dust particles and whirls them up into clouds.
In environments where these clouds are not blown away but instead rebound back towards the \ac{uav}, the effect on sensory performance might be crippling.
To minimize deterioration in perception and its dependent systems (e.g., mapping, localization), the incident noise is filtered out from local \ac{lidar} data.
The idea of robust filtering of dust is based on the method presented in~\cite{kratky2021exploration} in which \ac{lidar} data are sorted by the intensity field (measured intensity of the reflected light for a given point) and \SI{10}{\percent} of the lowest-intensity data in a local radius from the sensor are removed.
In contrast to the baseline method, simpler thresholding is adopted such that a subset $\mathcal{P}_F \subseteq \mathcal{P}$ of \ac{lidar} data $\mathcal{P}$ is preserved.
The absence of data sorting lowers the computational load and reduces delay in data processing.
The set is given as $\mathcal{P}_F = \mathcal{P}_D \cup \mathcal{P}_I$, where
\begin{align}
\mathcal{P}_{D} &= \left\{\pnt{p}\,|\, \norm{\pnt{p}} \geq \kappa,\,\pnt{p} \in \mathcal{P}\right\},\\
\mathcal{P}_{I} &= \left\{\pnt{p}\,|\,\mathcal{I}(\pnt{p}) > \Upsilon,\,\pnt{p} \in \mathcal{P} \setminus \mathcal{P}_{D}\right\}.
\end{align}
$\mathcal{I}(\pnt{p})$ $\left(\si{\watt\per\meter\squared}\right)$ is the intensity of the reflected light from a point $\pnt{p}$, $\kappa$ $\left(\si{\meter}\right)$ is a local radius of a filtering sphere with \ac{lidar} data origin at its center, and $\Upsilon$ $\left(\si{\watt\per\meter\squared}\right)$ is the minimal intensity of preserved data points.
With $n$ data points within the radius $\kappa$, the computational complexity is reduced from the baseline $\mathcal{O}\left(n\log n\right)$ to $\mathcal{O}\left(n\right)$.
Although the method requires calibration to the given environmental conditions to achieve optimal performance, a set of reasonable parameters ($\kappa = \SI{5}{\metre}$ and $\Upsilon = \SI{30}{\watt\per\meter\squared}$ throughout many of our real-world deployments in the harshest dust conditions) suffices in the majority of applications.
The performance of the dust filtering is analyzed in \autoref{fig:dust_filtering} on an example \ac{uav} flight in the mine part (the dustiest zone) of the \ac{darpa} \ac{subt} finals environment.
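The two preserved subsets translate into a single linear pass over the point cloud. The sketch below (the synthetic point layout and helper names are our own; only the thresholds $\kappa$ and $\Upsilon$ follow the text) illustrates the $\mathcal{O}(n)$ filter:

```python
import math
import random

KAPPA = 5.0     # filtering-sphere radius kappa [m]
UPSILON = 30.0  # minimal preserved intensity Upsilon [W/m^2]

def filter_dust(points):
    """Single O(n) pass: keep points outside the local sphere (P_D)
    plus near points whose intensity exceeds the threshold (P_I)."""
    kept = []
    for x, y, z, intensity in points:
        if math.sqrt(x * x + y * y + z * z) >= KAPPA or intensity > UPSILON:
            kept.append((x, y, z, intensity))
    return kept

# Synthetic scene: a distant wall, a close strongly-reflective
# obstacle, and low-intensity dust swirling near the sensor.
random.seed(1)
wall = [(10.0, y * 0.1, 0.0, 120.0) for y in range(50)]
obstacle = [(2.0, 0.0, z * 0.1, 90.0) for z in range(10)]
dust = [(random.uniform(-3, 3), random.uniform(-3, 3),
         random.uniform(-1, 1), random.uniform(0, 10))
        for _ in range(200)]
kept = filter_dust(wall + obstacle + dust)
print(f"{len(kept)} of {len(wall + obstacle + dust)} points preserved")
```

All 200 low-intensity dust points within the sphere are dropped, while the wall (outside the sphere) and the bright nearby obstacle survive the filter.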
\begin{figure}
\centering
\begin{subfigure}[t]{0.275\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/dust_filtering/dust_in_rgb.png}
\caption{Dense cloud dust around the \ac{uav} as viewed in onboard \ac{rgb} camera at time \SI{330}{\second}.}
\label{fig:dust_filtering_rgb}
\end{subfigure}%
\begin{subfigure}[t]{0.325\textwidth}
\centering
\includegraphics[trim=0 50 0 85,clip,width=\textwidth]{./fig/dust_filtering/dust_filtering_traj.pdf}
\caption{%
Top view on the \ac{uav} trajectory.}
\label{fig:dust_filtering_traj}
\end{subfigure}%
\begin{subfigure}[t]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/dust_filtering/dust_filtering_classification.pdf}
\caption{%
Performance of noise classification in \ac{lidar} data in \SI{3}{\metre} local radius from the sensor.
Average recall reached \SI{99}{\percent}.}
\label{fig:dust_filtering_classification}
\end{subfigure}%
\caption{%
\ac{lidar}-data noise filtration running onboard a \ac{uav} during a \SI{154}{\metre} flight in the mine part (the dustiest part) of the \ac{darpa} \ac{subt} finals environment.
The true positive classification in (c) denotes the ratio of correctly classified noise whereas the false negative represents the ratio of noise preserved after the filtration process (i.e., the unfiltered noise) to the size of the point cloud.
The data for the classification analysis (c) were obtained by spatially comparing the sensor measurements with the map of the environment provided by the organizers.
}
\label{fig:dust_filtering}
\end{figure}
\subsubsection{Detecting artificial fog in the virtual environment}
\label{sec:fog_detection}
The virtual competition contained a fog emitter plugin (see \autoref{fig:fog_detection}) to mimic environmental perception degradation arising from observing smoke, dust, and fog.
The plugin spawned a fog cloud when a robot reached the proximity of the emitter.
Although our localization pipeline was able to cope with local noise, the inability to robustly filter out the fog particles led to a degradation of the local DenseMap and, consequently, to blocked local planning, which respects strict requirements on collision-free path planning.
Thus, in our setup for the virtual challenge, the navigation stack did not attempt to pass through fog areas, but instead detected them, maneuvered out of them, and blocked the areas for global planning.
To detect the presence of the \ac{uav} within such a fog cloud, a discretized occupancy voxel grid is built from a set of data within a local radius (example data within a radius are shown in \autoref{fig:fog_detection_infog}).
Within this radius, the occupancy ratio $r$ (the number of occupied voxels relative to all voxels in the local grid) is compared with the maximum occupancy $R$ given by the field of view of the sensor producing the data.
Each \ac{lidar} or depth sensor is classified as being in fog if
\begin{equation}
r > \lambda R,
\end{equation}
where $\lambda \in [0, 1]$ is a unitless multiplier that scales $R$ into the maximal occupancy ratio threshold $\lambda R$.
The multiplier was set empirically to $\lambda = 0.7$ in our final setup.
For depth cameras that are not used for self-localization of the \ac{uav}, the in-fog classification solely controls whether the depth data are integrated within the mapping pipeline.
However, if a localization-crucial 3D \ac{lidar} is classified to be in fog, a backtracking behavior is triggered within the mission supervisor (see~\autoref{sec:mission_control}).
The primary purpose of the backtracking is to prevent the \ac{uav} from becoming stuck in fog; the \ac{uav} is thus blindly navigated out of the fog through the recent history of collision-free poses, ignoring occupied cells in the DenseMap (including possible noise from fog measurements).
Lastly, detection of fog in a 3D \ac{lidar} blocks the area in global planning.
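A minimal sketch of this occupancy-ratio test follows (the voxel size, local radius, and value of $R$ are illustrative assumptions; only the rule $r > \lambda R$ with $\lambda = 0.7$ comes from the text):

```python
import math
import random

def in_fog(points, radius=3.0, voxel=0.5, R=0.6, lam=0.7):
    """Classify a sensor as in-fog when the occupancy ratio r of a
    voxelized local grid exceeds lambda * R (R being the maximum
    occupancy reachable given the sensor FOV; values here are
    illustrative, not calibrated)."""
    occupied = set()
    for x, y, z in points:
        if math.sqrt(x * x + y * y + z * z) <= radius:
            occupied.add((math.floor(x / voxel),
                          math.floor(y / voxel),
                          math.floor(z / voxel)))
    cells_per_axis = int(2 * radius / voxel)
    total = cells_per_axis ** 3  # voxels in the local cube grid
    r = len(occupied) / total
    return r > lam * R, r

random.seed(0)
# Fog: uniform-distribution noise filling the local neighborhood.
fog = [(random.uniform(-3, 3), random.uniform(-3, 3), random.uniform(-3, 3))
       for _ in range(20000)]
# Clear scene: returns only from a flat wall slice.
wall = [(2.0, -3 + 0.05 * i, -1 + 0.05 * j)
        for i in range(120) for j in range(40)]
print("fog:", in_fog(fog)[0], " wall:", in_fog(wall)[0])
```

Dense uniform noise fills most local voxels and trips the threshold, whereas a wall occupies only a thin slab of the grid and is classified as fog-free.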
\begin{figure}
\centering
\begin{subfigure}[t]{0.33\textwidth}
\centering
\captionsetup{width=1.0\textwidth}
\adjincludegraphics[width=\textwidth,trim={0.1\width} {0\height} {0.1\width} {0\height},clip,]{./fig/fog_detection/fog_5.jpg}
\caption{Visualization of virtual fog in Ignition Gazebo.}
\end{subfigure}%
\begin{subfigure}[t]{0.33\textwidth}
\centering
\captionsetup{width=1.0\textwidth}
\includegraphics[width=\textwidth]{./fig/fog_detection/out_of_fog.png}
\caption{Example 3D \ac{lidar} data outside fog.}
\end{subfigure}%
\begin{subfigure}[t]{0.33\textwidth}
\centering
\captionsetup{width=1.0\textwidth}
\includegraphics[width=\textwidth]{./fig/fog_detection/inside_fog.png}
\caption{Example 3D \ac{lidar} data inside fog (fog colored locally in red).}
\label{fig:fog_detection_infog}
\end{subfigure}%
\caption{%
Simulated fog and its effect on sensory perception in the virtual environment.
A fog cloud (a) spawns when a robot reaches its proximity.
The cloud then affects the sensory inputs such that a uniform-distribution noise emerges in \ac{lidar} data corresponding to the fog (c).}
\label{fig:fog_detection}
\end{figure}
\subsection{Detecting spots safe for landing}
\label{sec:landing_spot_detection}
Depth data from the downward-facing \ac{rgbd} camera are used to locate areas safe for landing throughout the \ac{uav} flight.
The depth data are fitted with a plane model whose coefficients are used in the binary classification of safe or unsafe landability, respecting the plane-fit quality and the deviation of the plane's normal vector from the gravitational vector.
The process of deciding on safe landability given a single depth-data frame is visualized in~\autoref{fig:landing_spot_detection} and described in~\autoref{alg:landing_spot_detection}.
The classification assumes that the data frame can be transformed into a gravity-aligned world coordinate frame.
\begin{figure}
\centering
\begin{subfigure}[t]{0.31\textwidth}
\centering
\captionsetup{width=0.99\textwidth}
\includegraphics[width=\textwidth]{./fig/landing_spot_detection/uav.pdf}
\caption{Downward-facing \ac{rgbd} camera used for landability detection mounted on our \ac{uav} platform.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.22\textwidth}
\centering
\captionsetup{width=0.95\textwidth}
\includegraphics[width=\textwidth]{./fig/landing_spot_detection/safe.png}
\caption{Even planar surface: safe for landing.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.22\textwidth}
\centering
\captionsetup{width=0.95\textwidth}
\includegraphics[width=\textwidth]{./fig/landing_spot_detection/unsafe_nonplanar.png}
\caption{Non-planar surface (rails): unsafe for landing.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.22\textwidth}
\centering
\captionsetup{width=0.95\textwidth}
\includegraphics[width=\textwidth]{./fig/landing_spot_detection/unsafe_uneven.png}
\caption{Uneven surface: unsafe for landing.}
\end{subfigure}%
\caption{%
Deciding on landability of a \ac{uav} from downward-facing depth data --- binary classification to safe (b) and unsafe (c-d) landing areas.
In (b-d), the \ac{uav} is represented by Cartesian axes whereas the depth data are colored in black.
The blue sphere in the safe classification (b) denotes the centroid of the plane-inliers (colored in green) passed as a feasible landing position to LandMap (see~\autoref{sec:landmap}).}
\label{fig:landing_spot_detection}
\end{figure}
\begin{algorithm}
\algdef{SE}[SUBALG]{Indent}{EndIndent}{}{\algorithmicend\ }%
\algtext*{Indent}
\algtext*{EndIndent}
\algnewcommand\AND{\textbf{and}~}
\algnewcommand\Not{\textbf{not}~}
\algnewcommand\Or{\textbf{or}~}
\algnewcommand\Input{\State{\textbf{Input:~}}}%
\algnewcommand\Output{\State{\textbf{Output:~}}}%
\algnewcommand\Parameters{\State{\textbf{Parameters:~}}}%
\algnewcommand\Begin{\State\textbf{Begin:~}}%
\algnewcommand\RETURN{\State\textbf{return:~}}%
\algnewcommand{\LineComment}[1]{\State \(\triangleright\) #1}
\caption{Detecting spots safe for \ac{uav} landing in downward-facing \ac{rgbd} camera}
\label{alg:landing_spot_detection}
\begin{algorithmic}[1]
\footnotesize
\Input
\Indent
\State $\set{D}$
\Comment{Depth-data frame in sensor coordinate frame}
\EndIndent
\Output
\Indent
\State $\set{L}$
\Comment{Binary classification for landing: $\{\text{SAFE}, \text{UNSAFE}\}$}
\State $\mathbf{p}_W$
\Comment{Position of landing area in the world coordinate frame}
\EndIndent
\Parameters
\Indent
\State $s$
\Comment{Square-size of safe landing spot in meters}
\State $I_{min}$
\Comment{Minimal ratio of inliers in plane fitting}
\State $N_{min}^z$
\Comment{Minimal z-axis component of the normalized plane-normal vector}
\EndIndent
\Begin
\Indent
\State{$\set{S} \coloneqq \algfunc{cropFrameAtCenter}\left(\set{D}, s\right)$}
\Comment{Crop frame-centered square with size $s$}
\If{$\algfunc{height}\left(\set{S}\right) < s~\Or~\algfunc{width}\left(\set{S}\right) < s$}
\RETURN{$\{\set{L} = \text{UNSAFE},~\mathbf{p}_W = \text{N/A}\}$}
\Comment{Not safe to land: too close to the ground to decide}
\EndIf
\State{$\set{P} \coloneqq \algfunc{fitPlaneWithRANSAC}\left(\set{S}\right)$}
\Comment{Fit data with plane using RANSAC}
\If{$\algfunc{inliers}\left(\set{P}\right)~/~\algfunc{count}\left(\set{S}\right) < I_{min}$}
\RETURN{$\{\set{L} = \text{UNSAFE},~\mathbf{p}_W = \text{N/A}\}$}
\Comment{Not safe to land: data are not planar}
\EndIf
\State $\set{P}_W \coloneqq \algfunc{transformToWorldFrame}\left(\set{P}\right)$
\Comment{Transform plane to gravity-aligned frame}
\If{$|\algfunc{normal}\left(\set{P}_W\right).z| < N_{min}^z$}
\RETURN{$\{\set{L} = \text{UNSAFE},~\mathbf{p}_W = \text{N/A}\}$}
\Comment{Not safe to land: ground is too steep for landing}
\EndIf
\State{$\set{S}_W \coloneqq \algfunc{transformToWorldFrame}\left(\set{S}\right)$}
\State $\mathbf{p}_W \coloneqq \algfunc{centroid}\left(\set{S}_W\right)$
\Comment{Express landing spot as the centroid of the depth data in the world}
\RETURN{$\{\set{L} = \text{SAFE},~\mathbf{p}_W\}$}
\EndIndent
\end{algorithmic}
\end{algorithm}
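The decision logic of~\autoref{alg:landing_spot_detection} can be sketched as follows (a simplified stand-in: the RANSAC fit is replaced by an ordinary least-squares plane fit, the world-frame transform is assumed to be the identity, and all thresholds are illustrative):

```python
import math

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c via the 3x3 normal
    equations (a simple stand-in for the onboard RANSAC fit)."""
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    n = len(points)
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    b = [sxz, syz, sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    coeffs = []
    for k in range(3):  # Cramer's rule, one column at a time
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = b[i]
        coeffs.append(det3(Ak) / d)
    return coeffs  # a, b, c

def classify_landing(points, inlier_dist=0.05, I_min=0.8, N_min_z=0.9):
    """Binary landability decision mirroring the algorithm's checks:
    plane-fit inlier ratio, then steepness of the plane normal."""
    a, b, c = fit_plane(points)
    norm = math.sqrt(a * a + b * b + 1.0)
    inliers = sum(1 for x, y, z in points
                  if abs(a * x + b * y + c - z) / norm <= inlier_dist)
    if inliers / len(points) < I_min:
        return "UNSAFE"           # data are not planar enough
    if 1.0 / norm < N_min_z:      # z-component of unit normal (-a,-b,1)/norm
        return "UNSAFE"           # ground is too steep for landing
    return "SAFE"

grid = [(0.1 * i, 0.1 * j) for i in range(-5, 6) for j in range(-5, 6)]
flat = [(x, y, -2.0) for x, y in grid]                  # even planar surface
slope = [(x, y, -2.0 + 1.2 * x) for x, y in grid]       # steep surface
rails = [(x, y, -2.0 + (0.3 if k % 4 == 0 else 0.0))
         for k, (x, y) in enumerate(grid)]              # uneven surface
for name, pts in (("flat", flat), ("slope", slope), ("rails", rails)):
    print(name, "->", classify_landing(pts))
```

The three synthetic patches reproduce the cases in the figure above: an even plane passes both checks, a steep plane fails the normal-vector check, and a rail-like uneven surface fails the inlier-ratio check.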
\subsection*{Support materials}
\vspace{-1em}
\label{sec:multimedia_materials}
The paper is supported by the multimedia materials available at \href{http://mrs.felk.cvut.cz/fr2022darpa}{\texttt{mrs.felk.cvut.cz/fr2022darpa}}.
Open-source implementation of the core of the \acs{uav} system is available at \href{https://github.com/ctu-mrs/mrs_uav_system}{\texttt{github.com/ctu-mrs/mrs\_uav\_system}}.
The \acs{slam} datasets are available at \href{https://github.com/ctu-mrs/slam_datasets}{\texttt{github.com/ctu-mrs/slam\_datasets}}.
The visual detection datasets are available at \href{https://github.com/ctu-mrs/vision_datasets}{\texttt{github.com/ctu-mrs/vision\_datasets}}.
\pagebreak
\section{Introduction}
The research of new robotic technologies and solutions is accelerating at an unprecedented rate, particularly in the case of aerial robotics.
Technological development is improving many areas of our lives and, hopefully, even the future of humanity.
The authors of~\cite{shakhatreh2019unmanned} reviewed current research trends and future insights on potential \ac{uav} use for reducing risks and costs in civil infrastructure.
The survey of \ac{uav} applications is accompanied by a discussion of arising research challenges and possible ways to approach them.
Identifying research paths leading to disruptive technologies that can achieve success in the near future is the goal of the US organization \ac{darpa}.
The most famous technologies that began as \ac{darpa} research are the Internet and \ac{gps}.
The recent advent of self-driving cars also began as a competition organized by \ac{darpa} --- the \ac{darpa} Grand Challenge and the \ac{darpa} Urban Challenge.
The latest competition, the \ac{darpa} \ac{subt} focuses on the development of robotic systems to autonomously search subterranean environments.
The motivation behind searching subterranean environments is to gain situational awareness and assist specialized personnel in specific missions.
Such missions may include: assessing the structural integrity of collapsed buildings, tunnels, or mines; exploration of a newly discovered branch in a cave network; or searching for lost persons.
These tasks can often be life-threatening to human workers as many hazards are present in subterranean environments.
In order to reach survivors quickly in unstable caves or partially collapsed burning buildings, first responders, such as emergency rescuers and firefighters, may potentially put their lives at risk.
In firefighting tasks, fires can be either localized and reported to personnel by robots or the robots can even directly begin extinguishing flames if the presence of human firefighters is too risky~\cite{spurny2021autonomous,pritzl2021autonomous,martinez2022skyeye}.
In such scenarios, ceilings can suddenly collapse, toxic gas can appear in a mine, flames can extend to an escape corridor, or a cave cavity can flood with water.
In distress situations, it is essential to swiftly coordinate the rescue operation as the survivors of a catastrophe might need acute medical assistance or have a limited amount of resources available, namely oxygen and water.
However, without conducting a proper reconnaissance of the environment and assessing the potential risks prior to the rescue mission, the involved rescuers are exposed to a much higher probability of injury.
To reduce the possibility of bodily harm or to avoid risks altogether, a robotic system can be sent on-site before the rescuers in order to either quickly scout the environment and report any hazards detected by the onboard sensors, or directly search for the survivors.
The rescue mission can be further sped up by deploying a team of robots capable of covering larger areas and offering redundancy in case some robot units are lost in harsh environments.
Multi-robot teams can also consist of heterogeneous agents with unique locomotion modalities to ensure traversability of various terrains, including muddy ground, stairs, and windows, which is discussed in the overview of collaborative \ac{sar} systems~\cite{queralta2020collaborative}.
Similarly, sensing modalities can be distributed among individual robots to detect various signs of hazards, such as increased methane levels or the potential presence of survivors deduced from visual or audio cues.
Mounting all sensors on a single platform would negatively affect its dimensions and, consequently, its terrain traversability as it may not be able to fit into narrow passages, such as crawlspace-sized tunnels or doorways.
It would also mean a single point of failure for the rescue operation.
On the other hand, the operation of a single robot can be managed by just one person, while commanding a robot team may be unfeasible for a single operator.
Assigning each robot to an individual operator would also be an ineffective allocation of resources.
Moreover, the range of the robot would be limited by the communication link to the operator.
To provide a valuable tool for the rescue team, the robots must be able to move through the environment on their own and infer about the environment using their sensor data.
The rescuer can then also act as an operator, providing only high-level inputs to the robotic system to bias its behavior based on a-priori information (e.g., someone was last seen on the east side of the third floor).
The research and development of such autonomous systems for assisting first-responders is the primary focus of the \ac{sar} robotics, and also the motivation for the \ac{sar} \ac{uav} system presented in this paper.
The robotic platforms typically considered for \ac{sar} tasks are categorized into wheeled, tracked, legged, marine, and aerial platforms~\cite{delmerico2019current}.
Among these locomotive modalities, aerial robots are considered to have the highest traversal capabilities since they can fly over most obstacles which are untraversable by other platforms.
One example of an autonomous aerial research platform for \ac{sar} is found in~\cite{tomic2012toward}.
The mobility of \acp{uav} also surpasses that of other robot types thanks to dynamic flight, which can achieve large velocities and accelerations.
These qualities make \acp{uav} ideal for swift environmental scouting for gaining initial knowledge about a situation.
As such, the aerial platform is a natural choice for the first robot deployed during the opening minutes of the rescue operation.
A team deployed in an outdoor multi-\ac{uav} disaster response task~\cite{alotaibi2019lsar} can effectively cover a large search area and minimize the time to find and reach survivors.
On the other hand, \acp{uav} cannot operate for extended periods of time due to their limited flight time, and the sensory equipment is limited by the maximum payload of the \ac{uav}.
Some sensing modalities might even be unsuitable for use on aerial robots because of their propulsion system: e.g., gas detection is hampered by the aerodynamic effects of the propellers, and sound detection by the noisy operation.
Due to the aforementioned pros and cons of \ac{uav} platforms, it is convenient to combine the capabilities of other robot types to form a heterogeneous robotic team.
This manuscript proposes an autonomous cooperative \ac{uav} system for \ac{sar} as part of the CTU-CRAS-NORLAB team, which participated in the \ac{darpa} \ac{subt}.
The team consisted of \ac{ctu} and \acl{laval}.
\subsection{DARPA SubT challenge}
\label{sec:darpa}
After major success in accelerating the development of self-driving cars in the Grand Challenges of 2004 and 2005 and the Urban Challenge in 2007, \ac{darpa} announced the \acf{subt}~\cite{orekhov2022darpa} for the years 2017-2021 to advance the state of the art of \ac{sar} robotics.
Participants had to develop robotic solutions for searching subterranean environments for specific objects that would yield points if reported with sufficient accuracy.
To achieve the task at hand, the competitors had to develop complex multi-robot systems spanning nearly all research areas of mobile robotics, from design of the robotic platforms to high-level mission planning and decision making.
The rules of the competition can be summarized in a few points.
Each team has a dedicated time slot, or \emph{run}, to send their robots into a previously unvisited course and search for specific objects, referred to as artifacts~(\autoref{fig:artifacts}).
Each run starts at a predefined time and ends exactly one hour later.
A single team is present on the course at a time during which they can deploy an unconstrained number of robots of arbitrary size.
The movement of team personnel and their handling of robots is allowed only in the area in front of the entrance to the course, as shown in~\autoref{fig:staging_area}.
Only robots can enter the course and only one human operator/supervisor can command the robots and access the data they acquire during the run.
These conditions should mimic the conditions of a real \ac{sar} robotic mission.
The operator can report the type and position of an artifact. If the type is correct and the reported position is not further than \SI{5}{\meter} from the true position, the team is awarded one point.
The team with the highest score wins the prize according to~\autoref{tab:prize_table}.
For a more detailed description of the challenge, see~\cite{orekhov2022darpa}.
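The artifact scoring rule can be condensed into a short sketch. The helper function and its name are illustrative, but the condition itself (correct type and a reported position within 5 m of the true position) is the one stated above:

```python
import math

# Illustrative check of the SubT scoring rule: a report scores one point
# iff the artifact type matches and the reported position lies within
# 5 m (Euclidean distance) of the surveyed ground-truth position.
SCORING_RADIUS_M = 5.0

def report_scores(reported_type, reported_xyz, true_type, true_xyz,
                  radius=SCORING_RADIUS_M):
    """Return True if the artifact report would be awarded a point."""
    if reported_type != true_type:
        return False
    return math.dist(reported_xyz, true_xyz) <= radius
```

Note that the 5 m tolerance applies to the report in the global (DARPA-defined) frame, so it must absorb both the detection error and the localization drift of the reporting robot.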
\begin{figure}
\newcommand\scale{0.09}
\centering
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/1_survivor.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/2_phone.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/3_backpack.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/4_drill.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/5_extinguisher.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/6_gas.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/7_vent.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/8_helmet.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/9_rope.png}
\includegraphics[width=\scale\textwidth,trim={0 0 0 0},clip]{./fig/artifacts/10_cube.png}
\caption{\label{fig:artifacts}
All 10 artifacts searched for in the final event of \ac{darpa} \ac{subt} (image courtesy of \acs{darpa}).
The operator had to submit the position of the identified artifact with accuracy better than \SI{5}{\meter}.
While the first three artifacts (survivor, cellphone, and backpack) were present in all circuits, the drill and the fire extinguisher were tunnel-specific.
Similarly, the gas and vent were located in the urban environment, and the helmet with rope could be found in the caves.
The last artifact (the cube) was introduced only for the final event.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.495\textwidth]{./fig/screenshots/staging_area_sketch.png}
\includegraphics[width=0.495\textwidth]{./fig/photos/finals_staging_team.jpg}
\caption{\label{fig:staging_area}
The bounded staging area (image courtesy of \acs{darpa}) is the only place where the human crew members can handle the robots.
The person sitting behind the displays is the operator who is the only one allowed to issue commands to the team of robots, and also to view and interpret mission data.
}
\end{figure}
\begin{table}
\parbox{.45\linewidth}{
\centering
\caption{\label{tab:prize_table}
The prize money awarded for achieving the first three places in the final event.
}
\centering
\small
\begin{tabular}{ccc}
\toprule
\tablehdg{Place} & \tablehdg{Systems Track} & \tablehdg{Virtual Track} \\
\midrule
1. & \$2M & \$750K \\
2. & \$1M & \$500K \\
3. & \$500K & \$250K \\
\bottomrule
\end{tabular}
}
\hfill
\parbox{.45\linewidth}{
\centering
\caption{\label{tab:env_size}
Approximate distribution of the environment cross-section as announced by the organizers before the final event.
}
\centering
\small
\begin{tabular}{cc}
\toprule
\tablehdg{Cross-section (\si{\meter\squared})} & \tablehdg{Distribution} \\
\midrule
$<$5 & 65\% \\
5-100 & 20\% \\
$>$100 & 15\% \\
\bottomrule
\end{tabular}
}
\end{table}
To encourage the development of high-level components without worrying about the resilience of the hardware in harsh subterranean conditions and also to enable teams without many resources and/or physical robots to compete, a virtual version (Virtual Track) of the competition was run in parallel to the physical Systems Track.
The solutions of the Virtual Track were uploaded as Docker images (one image per robot) to the Gazebo-based Cloudsim simulation environment, where the entire run was simulated.
Every team could use the Cloudsim simulator to test their approaches in practice worlds prior to the actual competition.
The competition was further subdivided into individual circuits, which were events in the specific subterranean environments of a tunnel, cave, and urban space. Examples of each environment are shown in~\autoref{fig:circuit_envs}.
The environments were chosen to resemble typical real \ac{sar} sites to ensure the applicability of the systems developed during the competition.
Every type of environment differs in size, geometric dimensions, traversability conditions, and requirements on perception modalities.
The specifics of tunnel-like environments are summarized in~\cite{tardioli2019ground}, which builds on 10 years of experience in research of ground \ac{sar} robots.
The role of mobile robots in rescue missions after mine disasters is discussed in~\cite{murphy2009mobile}.
The final event combined all of the previous environments for the ultimate challenge.
\begin{figure}
\centering
\includegraphics[width=0.325\textwidth]{./fig/photos/finals_tunnel_gopro.png}
\includegraphics[width=0.325\textwidth]{./fig/photos/finals_urban_gopro.png}
\includegraphics[width=0.325\textwidth,trim={0 45pt 0 0},clip]{./fig/photos/finals_cave_gopro_2.png}
\includegraphics[width=0.325\textwidth]{./fig/screenshots/virtual_tunnel.jpg}
\includegraphics[width=0.325\textwidth,trim={0 70pt 0 0},clip]{./fig/screenshots/virtual_urban.jpg}
\includegraphics[width=0.325\textwidth,trim={0 55pt 0 0},clip]{./fig/screenshots/virtual_cave.jpg}
\caption{\label{fig:circuit_envs}
Three types of subterranean environments found in the competition, each challenging for the robot team in a different way.
From left to right: tunnel, urban, and cave.
The top row shows examples of environments from the system track of the final event, while the virtual worlds are pictured in the bottom row.
}
\end{figure}
We participated in the competition first as a non-sponsored team.
In the Tunnel Circuit, we won \nth{1} place among the non-sponsored teams and \nth{3} place in total, which earned us \$200,000.
This success was repeated in the Urban Circuit, where we achieved the same placement, but this time with larger prize money of \$500,000.
Thanks to consistent performance in both circuits, \ac{darpa} awarded our team the funding for the Final Event, which allowed us to acquire more capable hardware.
The approach presented in this paper is the result of \ac{uav} research, development, and testing over the whole 3-year-long period.
\section{Related work}
\label{sec:related_work}
The state of the art in rescue robotics is coherently summarized in the survey~\cite{delmerico2019current}, which concerns both hardware and software.
On the hardware side, different robot morphologies, locomotion types, and platform designs are categorized. Regarding software, the survey concerns perception and control algorithms.
The authors interviewed experts on disaster response and humanitarian aid to understand the situation and needs of rescuers.
Here, we provide an overview of the solutions for perception in adverse conditions of the underground environments, methods of localization and mapping for precise and reliable navigation, and techniques for safe traversal of narrow corridors.
A summary of systems deployed in previous circuits of \ac{darpa} \ac{subt} follows.
Finally, relevant datasets are referenced in order to prompt further research effort in the \ac{sar} area.
\subsection{Degraded sensing}
\label{sec:sota_perception}
Perception in subterranean environments faces constant degradation of the sensor outputs due to the harsh conditions of such places.
The underground climate is often filled with pervasive dust (particularly in mines), where any movement agitates the settled layer of fine dirt and mineral particles.
On the other hand, caves are typically humid ecosystems, where dense mud replaces the dust layer found in mines.
However, the elevated humidity forms droplets of fog, which corrupt the measurements of most visible or \ac{nir} light-based sensor modalities, and also causes frequent reflections on wet surfaces.
Radars can reliably penetrate smoke, dust, and fog, and after post-processing using, e.g., \acp{gan}~\cite{goodfellow2014generative}, a 2D occupancy grid for navigation~\cite{lu2020see} can be constructed.
Thermal imaging is another reliable sensing modality for situations when images from \ac{rgb} cameras are polluted by dust or fog; in~\cite{khattak2019robust}, it is used for the localization of robots in areas with airborne obscurants.
Our approach goes beyond these works by employing intensity-based filtering of the \ac{lidar} data, and thus no additional sensors are necessary even in dense clouds of dust.
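The principle of such intensity-based filtering can be illustrated with a short sketch. The threshold value and the point-cloud layout are illustrative assumptions, not the parameters of our actual pipeline:

```python
import numpy as np

# Sketch of intensity-based dust filtering. Airborne dust typically yields
# diffuse, low-intensity LiDAR returns, so points whose return intensity
# falls below a threshold are discarded before scan matching and mapping.
def filter_dust(points_xyzi, min_intensity=10.0):
    """points_xyzi: (N, 4) array of x, y, z, intensity; returns the kept points."""
    mask = points_xyzi[:, 3] >= min_intensity
    return points_xyzi[mask]

scan = np.array([
    [1.0, 0.0, 0.0, 120.0],   # solid wall return: kept
    [0.3, 0.1, 0.0,   2.5],   # likely dust particle: dropped
    [2.0, 1.0, 0.5,  80.0],   # solid return: kept
])
filtered = filter_dust(scan)
```

The appeal of this design is that it reuses data already produced by the \ac{lidar}, so no additional sensor is needed even in dense clouds of dust.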
\subsection{Localization and mapping}
\label{sec:sota_slam}
Recent developments in \ac{sar} robotics sparked the research of more precise local pose estimation algorithms (also referred to as odometry), as well as long-term globally-consistent trajectory and multi-robot map fusion of all agents of the robotic team.
The state-of-the-art methods are surveyed in~\cite{cadena2016past}, where the challenges and future directions of \ac{slam} development are also identified.
The demands on low control error and robustness to degraded sensor data in the narrow subterranean environments present in the \ac{darpa} \ac{subt} pushed all contesting teams to either adapt and improve an existing method to be usable in the extreme conditions, or to develop a new \ac{slam} tailored to this specific domain.
Team CoSTAR developed a \ac{lidar} odometry solution~\cite{palieri2020locus} based on \ac{gicp} matching of \ac{lidar} scans with initialization from \ac{imu} and wheel odometry, including the possibility of extension to other odometry sources, such as \ac{vio}.
The method is shown to outperform state-of-the-art localization methods on the datasets from Tunnel and Urban circuits. An ablation study presents the influence of individual components on the total \ac{ape}.
All presented experiments are conducted with ground robots.
The localization of aerial vehicles is handled by a resilient HeRo state estimation system~\cite{santamaria2019towards}.
The state estimation stack considers heterogeneity and redundancy in both sensing and state estimation algorithms in order to ensure safe operation, even under the failure of some modules.
Failures are detected by performing confidence tests on both data and algorithm health.
If a check does not pass successfully, the resiliency logic switches to the algorithm with the best confidence, similar to our previous solution published in~\cite{baca2021mrs}.
The local odometry of~\cite{palieri2020locus,santamaria2019towards} is accompanied by loop closure detection and pose graph optimization locally on each robot, as well as globally on the base station. This optimizes the trajectories of all robots for a multi-robot centralized \ac{slam} solution~\cite{ebadi2020lamp}.
A decentralized \ac{slam} solution for \acp{uav}~\cite{lajoie2020door} performs distributed outlier-resilient pose graph optimization when another agent is within communication range.
This method can be used with either a stereo camera or a \ac{lidar}, and is evaluated on a dataset from the Tunnel Circuit.
The long, featureless corridors often present in man-made tunnels render the motion along the degenerate direction unobservable, which leads to significant drift.
Promising approaches, such as~\cite{liosam2020shan,xu2022fast}, constrain the solution of the optimization problem using the preintegrated \ac{imu} measurements. This helps to reduce the localization drift under unfavorable environmental geometry.
Nevertheless, the vibrations induced by spinning propellers degrade the inertial measurements, and can thus negatively affect the localization precision.
Approaches, such as those seen in~\cite{ebadi2021dare}, detect the geometrical degeneracy using the ratio of the most observable and the least observable directions.
This ratio is then used to determine loop closure candidates to reduce the drift along the degenerate direction.
Similarly,~\cite{zhang2016degeneracy} handles environment degeneracy in state estimation by not updating the solution in detected degenerate directions.
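The degeneracy test used by these methods can be sketched as follows, assuming the scan-matching problem is summarized by an approximate Hessian J^T J; the ratio threshold is an illustrative assumption:

```python
import numpy as np

# Sketch of geometric-degeneracy detection in the spirit of the cited works.
# In a long featureless corridor, the scan-matching optimization is poorly
# constrained along the corridor axis; this appears as a small eigenvalue of
# the approximate Hessian J^T J, and the ratio of the largest to the smallest
# eigenvalue flags the degeneracy.
def degenerate_directions(JtJ, ratio_threshold=100.0):
    """Return (is_degenerate, unit vector of the least-observable direction)."""
    eigvals, eigvecs = np.linalg.eigh(JtJ)          # ascending eigenvalues
    ratio = eigvals[-1] / max(eigvals[0], 1e-12)
    return ratio > ratio_threshold, eigvecs[:, 0]

# Well-constrained case (structure in all directions) vs. a corridor-like
# case with almost no constraint along the x-axis.
well = np.diag([50.0, 40.0, 45.0])
corridor = np.diag([0.1, 40.0, 45.0])
```

Once the degenerate direction is known, the solution can be frozen along it, as in~\cite{zhang2016degeneracy}, or loop closure candidates can be sought to correct the accumulated drift.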
Another possibility is to combine the 3D \ac{lidar} method with a direct visual odometry method (e.g.,~\cite{alismail2016direct}), which tracks image patches by minimizing the photometric error.
This approach, which is shown in~\cite{shin2020dvl}, has the advantage over feature-based methods like that of~\cite{zhang2015visual} in that it provides low drift, even when salient image and geometric features are lacking.
The disadvantage is that localization performance is worsened when whirling dust is present in the camera image, as reported in~\cite{petrlik2020robust}.
Team CERBERUS developed a complementary multi-modal sensor fusion~\cite{khattak2020complementary}.
The odometry estimated by visual/thermal inertial odometry is used as a prior for \ac{lidar} scan-to-scan and scan-to-map matching.
The \ac{vio}/\acs{tio} priors constrain the scan matching optimization problem, thus reducing drift in a degenerate environment significantly, which is demonstrated in an experiment conducted in a self-similar environment.
Another multi-modal approach is the Super Odometry~\cite{zhao2021super} of team Explorer, which was deployed on aerial robots in the tunnel and urban circuits of \ac{darpa} \ac{subt}.
The core of the method is the \ac{imu} odometry with biases constrained by \ac{vio} and \ac{lio}, which are initialized with preintegrated inertial measurements of the constrained \ac{imu}.
The relative pose factors of \ac{vio} and \ac{lio} are weighted based on the visual and geometrical degradation, respectively.
Team MARBLE first relied on visual \ac{slam}~\cite{kramer2021vi}, but after \ac{stix}, they transitioned to the \ac{lidar}-based Cartographer~\cite{hess2016real} due to unstable tracking of motion under poor illumination, reflections, dust, and other visual degradation.
Wildcat SLAM~\cite{hudson2021heterogeneous} of the \acl{csiro} team is a multi-agent decentralized solution, where each agent computes a global map using the currently available data shared among the robots.
The odometry of each agent is based on the work of~\cite{bosse2012zebedee}.
Our approach is similar to the other teams' as we also use primarily \ac{lidar} for localization and mapping.
An improvement over the state of the art is the compensation of the delay~\cite{pritzl2022repredictor} caused by the \ac{lidar} scan processing and the delay of the localization itself.
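The principle of such delay compensation can be illustrated with a 1-D toy model. The class structure and the constant-velocity model are our simplifications for illustration, not the implementation of the cited repredictor:

```python
# 1-D sketch of delay compensation by re-prediction. Control inputs are
# buffered; when a LiDAR pose measurement stamped in the past arrives, the
# state at that stamp is corrected and the newer buffered inputs are
# replayed so the estimate "catches up" to the present.
class Repredictor:
    def __init__(self, x0=0.0):
        self.x = x0
        self.history = []            # list of (timestamp, velocity, dt)

    def predict(self, stamp, velocity, dt):
        self.history.append((stamp, velocity, dt))
        self.x += velocity * dt

    def correct(self, meas_stamp, measured_x):
        # Reset to the delayed measurement, then replay newer inputs.
        self.x = measured_x
        for stamp, velocity, dt in self.history:
            if stamp > meas_stamp:
                self.x += velocity * dt
        # Inputs older than the measurement are no longer needed.
        self.history = [h for h in self.history if h[0] > meas_stamp]
```

Without the replay step, a measurement delayed by the \ac{lidar} scan processing would be fused as if it described the current state, biasing the estimate used by the controller.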
\subsection{Mobility}
\label{sec:sota_mobility}
Deploying aerial robots has one great advantage over ground robots due to their full terrain traversability.
A \ac{uav} can fly over terrain that would compromise the safety of an \ac{ugv}, e.g., steep decline, mud, water, etc.
The only movement constraint of aerial platforms flying through an enclosed environment is the minimum size of a passage that the robot can safely pass through.
The dimensions of such passages depend largely on the size of the \ac{uav}, but also on the precision of the pose estimation, the control error of onboard regulators, the map quality, and the reactive behavior in close vicinity of obstacles.
Some platforms also tolerate contact with obstacles in the sense that the contact does not endanger the continuation of the mission~\cite{huang2019duckiefloat}.
Other types of platforms adapt their morphology and/or locomotion modality to their current surroundings and obstacles~\cite{fabris2021soft}.
In voxel-based map representations, the size of a narrow passage is represented too conservatively, i.e., the size of the narrow passage in the voxel map is the lower bound of the true size.
However, in practice, the narrow passage can be up to twice the map resolution larger than its voxel representation, which prevents traversing passages that are well within the physical limits of the \ac{uav}.
To better approximate the true shape of the narrow passage, the authors of~\cite{o2018variable} propose a continuous representation based on a \ac{gmm}, which is converted to a voxel map of arbitrary resolution when queried.
We take a different approach: locally increasing the resolution of the occupancy voxel map where the size of the environment requires it.
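The conservativeness of the voxel representation can be quantified in a 1-D toy example; the function below is illustrative, not our mapping code:

```python
import math

# 1-D illustration of how a voxel map underestimates a narrow passage.
# Voxels partially overlapped by a wall are marked occupied, so only fully
# free voxels count towards the passage width in the map. A partially free
# cell may be lost at each end, hence up to ~2x the resolution is lost.
def voxelized_free_width(wall_left, wall_right, resolution):
    """True opening is (wall_left, wall_right); returns its width in the map."""
    first_free = math.ceil(wall_left / resolution)    # first fully free cell
    end_free = math.floor(wall_right / resolution)    # one past the last free cell
    return max(0, end_free - first_free) * resolution

# A 0.95 m opening with 0.5 m voxels appears only 0.5 m wide in the map.
res = 0.5
width_in_map = voxelized_free_width(0.30, 1.25, res)
true_width = 1.25 - 0.30
```

A passage that is physically traversable for the \ac{uav} may thus look impassable at coarse resolution, which motivates refining the map locally instead of raising the resolution (and memory cost) globally.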
\subsection{DARPA SubT approaches}
\label{sec:sota_darpa}
This paper primarily focuses on the approach developed for and experimentally verified in the final event of \ac{darpa} \ac{subt}.
As mentioned, these results are built upon the experience in using the approaches developed for the tunnel and urban circuits.
The practical verification of the developed solutions in challenging environments justifies the robustness of these algorithms. Valuable insights on the future of \ac{sar} robotics can be drawn from lessons learned by the teams.
Team CoSTAR relied on their uncertainty-aware framework, NeBula, in the tunnel and urban circuits~\cite{agha2021nebula}.
The framework supports multi-modal perception and localization including radar, sonar, and thermal cameras.
Aerial robots were part of their heterogeneous team in \ac{stix} and the tunnel circuit, mainly for exploring areas inaccessible to ground robots and data muling with distributed data sharing~\cite{ginting2021chord}.
A reactive autonomy approach COMPRA~\cite{lindqvist2021compra} was also proposed for \ac{uav} underground \ac{sar} missions.
Their solution earned \nth{2} and \nth{1} place in the tunnel and urban circuits, respectively.
Team Explorer developed a system~\cite{scherer2022resilient} that achieved \nth{1} place in the tunnel circuit and \nth{2} place in the urban circuit.
Their collision-tolerant platform ``DS'' with a flight time of~\SI{13}{\minute} was carried on top of a \ac{ugv} and could be launched by the operator when needed.
The authors identified the challenge of the combined exploration and coverage problem when their \acp{uav} with limited camera \ac{fov} missed some artifacts along their flight path.
The frontier-based exploration pipeline used a custom OpenVDB mapping structure~\cite{museth2013vdb} for sampling frontier-clearing viewpoints.
Paths to the identified viewpoints were planned using bidirectional RRT-Connect.
Team CERBERUS deployed legged ANYMAL robots and aerial DJI Matrice M100 robots in the tunnel circuit.
Their graph-based system for the autonomous exploration of subterranean environments called GBPlanner was deployed in multiple locations.
The exploration of Edgar mine during \ac{stix} and the \ac{niosh} mine during the tunnel circuit are documented in~\cite{dang2020graph}.
Specifically, the exploration method for aerial robots~\cite{dang2019explore} consists of a local fast-response layer for planning short collision-free paths and a global layer that steers the exploration towards unvisited parts of the map.
This method is part of the solution for underground search by aerial robots found in~\cite{dang2020autonomous}.
A mapping and navigation approach~\cite{papachristos2019autonomous} for autonomous aerial robots based on the next-best-view planner~\cite{papachristos2017uncertainty, bircher2016receding} was also proposed, but was later outperformed by the GBPlanner~\cite{dang2020graph}.
The uncertainty in localization and mapping is taken into account during the planning in~\cite{papachristos2019localization} in such a way that among all trajectories arriving to the reference waypoint, the one that minimizes the expected localization and mapping uncertainty is selected.
To unify the exploration framework across both legged and aerial platforms, the authors of~\cite{kulkarni2021autonomous} revised~\cite{dang2020graph} and added a cooperation framework that identifies global frontiers in a global graph built from the sub-maps of individual robots.
The unified strategy for subterranean exploration using legged and aerial robots in the tunnel and urban circuits is presented in~\cite{tranzatto2022cerberus}.
Team MARBLE presents their system deployed to \ac{stix}, the tunnel circuit, and the urban circuit in~\cite{ohradzansky2021multi}.
The aerial robots relied on direct vision-based local reactive control and map-based global path planning.
Global path planning is common with ground and aerial robots.
Viewpoints are selected based on the frontier voxels covered by the camera \ac{fov} and the approximate travel time.
In the tunnel circuit, the local reactive control generates velocity commands by steering the \ac{uav} towards a look-ahead point from the global path, while being repulsed by nearby obstacles.
With this planner, traversing narrow passages was problematic due to noise in the depth image. Thus, a new planner was developed for the urban circuit based on voxel-based probabilistic tracking of obstacles~\cite{ahmad2021end}.
A heterogeneous team of robots including \acp{uav} was also deployed by team \acl{csiro}~\cite{hudson2021heterogeneous}, both in the tunnel and urban circuits.
The aerial part of the team consisted of a DJI M210 equipped with the commercially available payload of Emesent Hovermap, and a custom gimballed camera.
To explore the environment of the urban circuit, the autonomy utilized an approach based on the direct point cloud visibility~\cite{williams2020online}.
Although team NCTU did not participate in the final event, their solution~\cite{chenlung2022heterogeneous} to the tunnel and urban circuits showcased originality in the form of autonomous visually-localized blimps~\cite{huang2019duckiefloat}.
Their navigation was based on policies learned by deep reinforcement learning with simulation-to-world transfer.
Our CTU-CRAS-NORLAB team first participated in the \ac{stix} event with a hexarotor platform localized by optic flow~\cite{walter2017mesas} of the downward-facing camera.
The reactive navigation used \ac{lidar} scans to stay in the middle of the tunnel and move forward in a preferred direction at an intersection.
The predictive controller~\cite{baca2016embedded} was forgiving to the imprecise localization caused by the difficult optic flow estimation in the whirling dust of the tunnels.
The heterogeneous team that secured \nth{3} place in the tunnel circuit~\cite{roucek2019darpa} consisted of wheeled, tracked, and aerial robots with different sensor payloads.
Instead of unreliable optic flow, the localization of the \ac{uav} system~\cite{petrlik2020robust} was revamped to rely on 2D \ac{lidar}, HectorSLAM~\cite{kohlbrecher2011flexible}, and state estimation~\cite{petrlik2021lidar}.
The hardware platform was also downscaled to a \SI{450}{\milli\meter} diameter quadrotor.
The vertical element of the urban circuit called for upgrading the \ac{lidar} to a 3D one, which consequently required a redesign of the whole navigation pipeline~\cite{kratky2021exploration} to allow for six \ac{dof} mobility through the 3D environment.
Physically, the platform was based on the same frame as the one used in the tunnel circuit; however, propeller guards were added to reduce the chance of a destructive collision while flying through doors.
The CTU-CRAS-NORLAB approach to the urban circuit, which we completed in \nth{3} place, is described in~\cite{roucek2020urban}.
Although the cave circuit was canceled, extensive preparations were still performed in the sizable Bull Rock cave in South Moravia~\cite{petracek2021caves}.
The exploration depth of the \ac{uav} team was greatly extended by a multi-robot coordinated homing strategy that focused on extending the communication range of the base station by landing the returning \acp{uav} on the edge of the signal.
Based on the lessons learned during these competition and testing deployments (over the 3 years of development, the \acp{uav} of the CTU-CRAS-NORLAB team performed $>400$ flights and traveled $>$\SI{50}{\kilo\meter} in demanding real-world environments), the new approaches presented in this paper were designed.
\subsection{Datasets}
\label{sec:sota_datasets}
Due to the challenging nature of the subterranean environments, such as narrow passages, degenerate geometry, and perception degradation, datasets that were collected by the competing teams are valuable to the community as the algorithms can be evaluated on demanding data degraded by the previously mentioned issues.
In contrast to the verification often conducted under artificially ideal lab conditions, these datasets present a fair way to compare algorithms in realistic conditions.
A \ac{slam} dataset~\cite{rogers2020test} collected during the tunnel circuit and \ac{stix} consists of \ac{lidar} scans, images from a stereo camera and thermal camera, \ac{imu} measurements, and \ac{rssi}, together with a professionally surveyed ground truth map and measured artifact positions.
The dataset from the urban circuit~\cite{rogers2020darpa} was recorded using the same sensors with the exception of an added \ac{co2} sensor and the lack of a thermal camera.
Another dataset~\cite{kasper2019benchmark} for the comparison of \ac{vio} methods contains outdoor, indoor, tunnel, and mine sequences, with ground truth poses obtained by laser tracking of the sensor rig.
Aerial datasets consisting of unsynchronized \ac{lidar} scans and \ac{imu} measurements from \acp{uav} flying in the cave, tunnel, and mine environments are included in this paper\footnote{\href{https://github.com/ctu-mrs/slam_datasets}{\texttt{github.com/ctu-mrs/slam\_datasets}}}, with ground truth poses estimated using a professionally surveyed ground truth map.
We also publish the labeled visual detection datasets\footnote{\href{https://github.com/ctu-mrs/vision_datasets}{\texttt{github.com/ctu-mrs/vision\_datasets}}} consisting of images from both \ac{uav} and \ac{ugv} cameras that were used for training of the artifact detection \ac{cnn}.
Images from the Tunnel and Urban circuits, Bull Rock Cave, and industrial buildings are included.
\section{Contributions}
\label{sec:contributions}
An approach for cooperative exploration of demanding subterranean environments by a team of fully autonomous \acp{uav} in \ac{sar} tasks is presented in this paper.
Deployment of this approach in the \ac{darpa} \ac{subt} virtual competition was awarded \nth{2} place. The simulation model of the \ac{uav} platform designed by our team was used by seven out of nine teams.
The crucial contributions of the developed system can be summarized in the following list:
\begin{itemize}
\item
A complex approach that can serve as a guide for building a system for \ac{gps}-denied operations.
The proposed approach was extensively verified in numerous simulated worlds and real physical environments ranging from vast caves, industrial buildings, tunnels, and mines to large outdoor openings.
Most importantly, the \acp{uav} were deployed into the intentionally harsh conditions of the \ac{darpa} \ac{subt} to push them to their limits.
The experience gained from hundreds of flights in such conditions is condensed into the lessons learned presented in this paper, which we deem valuable for the field robotics community.
\item Novel mapping structures are proposed for safety-aware reactive planning over large distances, for compact volumetric inter-robot information sharing, for storing coverage of surfaces by onboard sensors, and for finding a suitable landing spot.
\item Maximization of the probability of detecting a nearby artifact by searching not only the unexplored space, but also visually covering known surfaces while respecting the limited field of view of the onboard sensors.
The detection is coupled with probabilistic estimation of artifact positions based on multi-target tracking and detection-to-hypothesis association, which improves the precision of artifact localization while the robot is moving around the artifact.
\item A novel safety-aware approach to planning that considers the risk of planned trajectories in addition to the path length in the optimized cost function.
In contrast to the state-of-the-art methods, longer paths are selected if the estimated risk of collision is lower than the risk of a shorter path.
\item Full autonomy of the \ac{uav} allows for scalability of the size of the deployed fleet without placing additional workload on the operator.
Nevertheless, the operator can override the autonomy with one of the available special commands to change the default search behavior when the \ac{uav} is in communication range.
\item The multi-robot autonomous search benefits from a higher number of deployed \acp{uav} that share their topological representations of the environment to cooperatively cover a larger area by biasing the search towards parts unvisited by other agents.
\end{itemize}
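The safety-aware cost from the contributions above can be sketched minimally as follows; the linear length-plus-risk combination and the clearance-based risk model are illustrative assumptions, not the exact cost function used onboard:

```python
# Schematic safety-aware path cost: instead of always taking the shortest
# path, the planner minimizes length plus a risk term, so a longer path
# with more obstacle clearance can be preferred over a shorter risky one.
def path_cost(length_m, min_clearance_m, risk_weight=5.0):
    risk = 1.0 / max(min_clearance_m, 1e-3)   # risk grows near obstacles
    return length_m + risk_weight * risk

# A short path squeezing 0.2 m from obstacles vs. a longer path with 1 m
# of clearance: the longer, safer path wins under this cost.
short_risky = path_cost(length_m=10.0, min_clearance_m=0.2)
long_safe = path_cost(length_m=14.0, min_clearance_m=1.0)
```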
\section{System architecture overview}
\label{sec:architecture}
The whole autonomous system of a single \ac{uav} consists of software modules, each with different inputs, outputs, and purpose.
These modules and their interconnections are depicted in~\autoref{fig:system} with the individual modules grouped into more general logical categories.
The first category includes the physical \textit{Sensors}~(\autoref{sec:sensory_perception}) of the \ac{uav} --- the \ac{imu}, \ac{lidar}, \acs{rgb}, and \acs{rgbd} cameras.
The description of the important parameters of the used sensors is available in~\autoref{sec:hardware}.
Measurements from \ac{imu} and \ac{lidar} enter the \textit{Localization} group~(\autoref{sec:localization}), where a full-state estimate of the \ac{uav} is obtained.
\ac{lidar} is also used in combination with the \acs{rgbd} camera for building maps in the \textit{Mapping} module group~(\autoref{sec:mapping}).
The \textit{Perception}~(\autoref{sec:object_detection}) category focuses on detection and localization of artifacts using all the available sensor data.
Autonomous search through the environment is governed by the \textit{Mission control} category~(\autoref{sec:mission_control}), which selects goals~(\autoref{sec:exploration}) based on the current state of the state machine, models of the environment from the \textit{Mapping} group, and possibly also commands from the operator.
A coarse path consisting of waypoints to the selected goals is found by the \textit{Navigation}~(\autoref{sec:navigation}) and further refined and time-parametrized in the \textit{Planning} modules~(\autoref{sec:planning}) in order to produce a safe and dynamically feasible trajectory.
The \textit{Control} blocks~\cite{baca2021mrs} track the trajectory and generate attitude rate references for the low-level \textit{Autopilot} that controls the actuators~(\autoref{sec:hardware}).
The operator receives crucial mission status data, topological maps, and, most importantly, detected artifacts through the \textit{Communication} layer \cite{roucek2020urban}. This also allows the operator to influence or override the autonomous behavior of the \ac{uav}.
All transmitted data is received by other \acp{uav} (or other robots, in the case of a heterogeneous team) in the communication range, which serves two purposes:
first, the receiving agent can propagate the message further down the network; second, the topological maps allow penalizing goals already visited by other robots to better allocate resources over a large area.
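The goal-penalization idea can be sketched as follows; the exponential penalty and its parameters are illustrative assumptions, not the actual cost used onboard:

```python
import math

# Sketch of biasing goal selection away from areas visited by other robots.
# Shared topological maps provide the nodes other agents have already
# visited; a candidate goal close to those nodes receives a penalty that
# decays with distance, steering each UAV towards unvisited regions.
def goal_utility(goal_xy, info_gain, others_visited_xy,
                 penalty=10.0, radius=5.0):
    score = info_gain
    for node in others_visited_xy:
        d = math.dist(goal_xy, node)
        score -= penalty * math.exp(-d / radius)
    return score

visited_by_others = [(0.0, 0.0), (5.0, 0.0)]
near = goal_utility((1.0, 0.0), info_gain=20.0,
                    others_visited_xy=visited_by_others)
far = goal_utility((40.0, 0.0), info_gain=20.0,
                   others_visited_xy=visited_by_others)
```

With equal information gain, the distant goal scores higher, so the fleet naturally spreads over the environment without central assignment.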
\begin{figure}
\includegraphics[width=1.0\textwidth]{./fig/diagrams/system_architecture.pdf}
\caption{\label{fig:system}
The diagram shows individual modules of the \ac{uav} system architecture grouped into logical categories.
Hardware modules are filled with gray, and red distinguishes open source modules not developed by us.}
\end{figure}
\input{chapters/sensory_perception.tex}
\section{Localization}
\label{sec:localization}
Accurate and reliable localization is critical for most other parts of the system.
The ability of the reference controller to track the desired state depends largely on the quality of the available state estimate.
In the narrow passages often present in subterranean environments (see~\autoref{tab:env_size} for the cross-section distribution in the final event), minimizing the control error is crucial to avoid collisions.
Multi-robot cooperation assumes the consistency of maps created by individual robots.
If the maps of two robots are not consistent due to errors in localization, the multi-robot search might be suboptimal.
For example, an unvisited goal can be rejected as already reached by a robot with an inconsistent map.
Moreover, the localization accuracy influences the position error of a reported artifact.
Even a \ac{uav} with localization drift over \SI{5}{\meter} can detect an artifact and perfectly estimate its relative position; nevertheless, the report may never score a point since the estimated position of the \ac{uav} itself is incorrect.
Our approach relies on a \ac{lidar} sensor for localization as the laser technology proved to be more robust to the harsh conditions of the subterranean environment than the vision-based methods.
We have been using \ac{lidar} since the Tunnel circuit~\cite{petrlik2020robust} where a lightweight 2D \ac{lidar} aided by a rangefinder for measuring \ac{agl} height was sufficient for navigation in tunnels with a rectangular cross-section.
The more vertical environment of the urban circuit required redesigning the localization system to use 3D \ac{lidar} for navigating in 3D space~\cite{kratky2021exploration}.
The localization system deployed in the final event and presented in this manuscript builds upon the solution proposed in~\cite{kratky2021exploration} and is divided into two modules: the localization algorithm and the state estimation method.
\autoref{fig:diagram_localization} shows the data flow in the localization pipeline.
We have based the localization on the \acs{aloam} implementation of the \ac{loam} algorithm~\cite{zhang2014loam} for the Systems Track and the \ac{liosam}~\cite{liosam2020shan} for the Virtual Track.
Our implementation\footnote{\href{https://github.com/ctu-mrs/aloam}{\texttt{github.com/ctu-mrs/aloam}}} has been tested in a real-time \ac{uav} control pipeline throughout multiple experimental deployments as part of our preliminary works~\cite{kratky2021exploration,petracek2021caves} and in the \ac{darpa} \ac{subt} competition.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{./fig/diagrams/darpa_localization_both.pdf}
\caption{\label{fig:diagram_localization}
The diagram shows the flow of data among individual localization modules for the Systems Track (left) and Virtual Track (right).
The 3D \ac{lidar} supplies \acs{aloam} or \acs{liosam} with the laser scans in the point cloud form $\mathcal{P}$.
Assisted by the orientation $\mathbf{R}$ from the \ac{imu}, \acs{aloam} produces a position estimate $\mathbf{r}=\left[x, y, z\right]^T$ that is fed into the \textit{State estimation} block, which outputs the full state estimate.
In the case of the virtual pipeline, the \ac{imu} data fusion is executed in \ac{liosam}, and thus the state estimation module is not needed thanks to the sufficient accuracy of both lateral and heading components.
}
\end{figure}
\subsection{A-LOAM}
\label{sec:aloam}
The \acs{aloam} implementation of the \ac{loam}~\cite{zhang2014loam} algorithm utilizes the laser scans from a multi-line \ac{lidar} to obtain its 6-\ac{dof} pose.
To achieve real-time performance and accurate pose estimation at the same time, the method is divided into two parts.
The first part of the algorithm processes the incoming data at the rate of their arrival and estimates the rigid motion between the consecutive point clouds $\mathcal{P}_k$ and $\mathcal{P}_{k+1}$ obtained at the timestamps $t_k$ and $t_{k+1}$, respectively.
The process starts with finding geometric features in the input point cloud $\mathcal{P}_{k+1}$.
The points are first sorted by the smoothness of their local neighborhood, and then those which are the least and most smooth are selected as edge and planar features, respectively.
To achieve a more uniform distribution of features, the point cloud is divided into regions of the same size, and each region can contain only a limited number of edge and planar feature points.
A point cannot be chosen as a feature point if there is already a feature point in its local neighborhood.
A correspondence is found in $\mathcal{P}_k$ for each edge/planar point from $\mathcal{P}_{k+1}$.
These correspondences are then weighted by their inverse distance, and correspondences with a distance larger than a threshold are discarded as outliers.
Finally, the pose transform $\mathbf{T}^L_{k+1}$ between $\mathcal{P}_{k+1}$ and $\mathcal{P}_k$ is found by applying the Levenberg-Marquardt method to align the correspondences.
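The smoothness-based feature selection described above can be sketched as follows (a simplified, hypothetical illustration in the spirit of \ac{loam}; the smoothness metric, region handling, and thresholds of the deployed implementation differ):

```python
import numpy as np

def smoothness(points, k=5):
    """LOAM-style local smoothness: norm of the summed differences between
    a point and its k neighbors on each side along the scan line."""
    n = len(points)
    c = np.full(n, np.inf)
    for i in range(k, n - k):
        diff = np.sum(points[i - k:i + k + 1] - points[i], axis=0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(points[i]))
    return c

def select_features(points, k=5, n_edge=2, n_planar=4):
    """Pick the least smooth points as edge features and the smoothest as
    planar features, suppressing candidates near already chosen points."""
    c = smoothness(points, k)
    order = np.argsort(c[k:len(points) - k]) + k  # valid indices, smooth -> rough
    picked = np.zeros(len(points), dtype=bool)

    def pick(candidates, limit):
        chosen = []
        for i in candidates:
            # a point cannot be chosen if a feature already exists nearby
            if picked[max(0, i - k):i + k + 1].any():
                continue
            picked[i] = True
            chosen.append(i)
            if len(chosen) == limit:
                break
        return chosen

    edges = pick(order[::-1], n_edge)   # least smooth first
    planars = pick(order, n_planar)     # most smooth first
    return edges, planars
```

A spike in an otherwise straight scan line is then selected as an edge feature, while planar features are drawn from the flat remainder.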
The second part estimates the pose of the sensor in the map $\mathcal{M}_k$, which is continuously built from the feature points found by the first part of the algorithm.
First, $\mathcal{P}_{k+1}$ is projected into the map coordinate system to obtain $\mathcal{P}_{k+1}^W$.
Then, feature points are searched for similarly to the first part, with the difference that 10 times more features are found.
Their correspondences are found in $\mathcal{M}_k$, which is divided into cubes with \SI{10}{\meter} edges.
The correspondences are searched for only in the cubes intersected by the $\mathcal{P}_{k+1}^W$ to keep the run-time bounded.
The transform $\mathbf{T}^W_{k+1}$ between $\mathcal{P}_{k+1}^W$ and $\mathcal{M}_k$ is obtained with the same steps as in the first part.
Due to the 10-times greater number of correspondences and the search through a potentially large map, this is a much slower process than the first part.
Thanks to the combination of both parts, the algorithm outputs the pose estimate at the rate of the \ac{lidar}, with drift bounded by the slower corrections that snap the pose to the map.
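The interplay of the fast scan-to-scan odometry and the slower map-based corrections can be illustrated with the following sketch (2D poses and hypothetical class names for brevity; the actual implementation composes full 6-\ac{dof} transforms):

```python
import numpy as np

def se2(x, y, theta):
    """Build an SE(2) pose as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

class OdometryMappingFusion:
    """Publish poses at the lidar rate by composing the latest (slow)
    map-aligned pose with the (fast) odometry accumulated since then."""

    def __init__(self):
        self.T_map = np.eye(3)         # last pose snapped to the map
        self.T_odom_since = np.eye(3)  # odometry accumulated since that snap

    def on_odometry(self, T_rel):
        """Fast path: integrate a scan-to-scan increment, output a pose."""
        self.T_odom_since = self.T_odom_since @ T_rel
        return self.T_map @ self.T_odom_since

    def on_map_correction(self, T_world):
        """Slow path: replace the accumulated drift with a map-aligned pose."""
        self.T_map = T_world
        self.T_odom_since = np.eye(3)
```

Between two map corrections the output drifts with the odometry; each correction resets the accumulated error, which bounds the overall drift.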
\begin{figure}
\includegraphics[width=1.0\textwidth]{./fig/aloam_runtime/aloam_runtime.pdf}
\caption{\label{fig:aloam_runtime}
The computation time of the most demanding parts of the \ac{aloam} algorithm is plotted with respect to the time in the mission.
The total time is the sum of all three parts.
The darkest colors depict moving mean, the medium dark bands represent the moving standard deviation, and raw data is shown by the lightest colors.
The moving statistics are calculated over a \SI{1}{\second} long time window.
On average, the feature extraction takes \SI{1}{\milli\second}, the laser odometry \SI{19}{\milli\second}, the map optimization \SI{91}{\milli\second}, and, in total, the pose estimate is obtained in \SI{111}{\milli\second}.
}
\end{figure}
\subsection{LIO-SAM}
\label{sec:liosam}
\ac{liosam}~\cite{liosam2020shan} utilizes \ac{imu} integration on top of dual factor-graph optimization.
The first factor-graph optimization is similar to the \ac{aloam} mapping pipeline as it first extracts geometrical features out of raw \ac{lidar} data and registers them to a feature map, with the motion prior given by the second optimization pipeline.
The second factor-graph optimization fuses the mapping output with \ac{imu} measurements and outputs fast odometry used in the state estimation pipeline.
The first graph is maintained consistently throughout the run, whereas the second graph optimization is reset periodically to maintain real-time properties.
In a simulated environment, \acs{liosam} yields greater accuracy than \acs{aloam} thanks to its fusion of inertial measurements with precisely modeled and known noise characteristics.
A comparison of both methods within the simulated environment is summarized in \autoref{fig:slam_accuracy}.
In the real world, the measurements of an \ac{imu} rigidly mounted on board a \ac{uav} contain a wide spectrum of large stochastic noise.
During empirical testing, the integration method in \ac{liosam} proved not to be robust to this unfiltered noise, while filtering the noise out in the frequency domain induced significant time delays, destabilizing the pipeline completely.
Due to the inability to accurately model the noise, the real-world laser-inertial fusion is instead performed by smoothing over a short history of past measurements (see~\autoref{sec:estimation}).
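The smoothing over a short history of past measurements can be sketched as a simple sliding-window average (the window length and uniform weighting are illustrative assumptions, not the deployed parameters):

```python
from collections import deque
import numpy as np

class HistorySmoother:
    """Smooth noisy measurements by averaging over a short sliding window
    of past samples — a stand-in for the manual laser-inertial smoothing."""

    def __init__(self, window=5):
        self.buf = deque(maxlen=window)  # oldest samples drop out automatically

    def update(self, z):
        """Insert a new measurement and return the smoothed value."""
        self.buf.append(np.asarray(z, dtype=float))
        return np.mean(self.buf, axis=0)
```

Unlike a frequency-domain filter, the window introduces only a short, fixed lag of roughly half the window length.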
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\textwidth]{./fig/slam_accuracy/slam_accuracy.pdf}
\caption{\label{fig:slam_accuracy}
The performance of \acs{aloam} and \acs{liosam} during a single flight within \textit{Finals Prize Round World 01} (see~\autoref{fig:virtual_worlds}) of the \ac{darpa} \ac{subt} virtual environment.
\acs{aloam} does not fuse the inertial measurements that assist \acs{liosam} during \ac{lidar}-scan matching in areas of the environment where the matching suffers from geometric degeneracy of the underlying optimization problem.
The selected environment contains a variety of narrow vertical passages where the performance of narrow-\acs{fov} \ac{lidar} perception is limited, leading to drift in the ego-motion estimation that is clearly visible in the \acs{aloam} method.
The \ac{liosam} method was shown to achieve sufficient accuracy and low drift during long-term and arbitrary 3D navigation within a simulated environment.%
}
\end{figure}
\subsection{State estimation}
\label{sec:estimation}
For precise and collision-free navigation through a cluttered narrow environment, which typically appears in subterranean \ac{sar} scenarios, the control stack requires a smooth and accurate state estimate at a high rate (\SI{100}{\hertz}).
The \textit{State estimation} module provides such an estimate through the fusion of data from \ac{aloam} and the \ac{imu}, complemented by filtering, outlier rejection, and prediction techniques.
We provide only a brief description of the estimation process as it is not viewed as the primary contribution and has already been presented in~\cite{baca2021mrs}.
The state vector of the \ac{uav} is defined as $\mathbf{x}=[\mathbf{r}, \mathbf{\dot{r}}, \mathbf{\ddot{r}}, \mathbf{R}, \mathbf{\dot{R}}]^T$.
The position $\mathbf{r}=[x, y, z]^T$, its first two derivatives $\mathbf{\dot{r}}$ and $\mathbf{\ddot{r}}$, the orientation in the world frame $\mathbf{R}$, and the angular velocities $\mathbf{\dot{R}}$ cover all the dynamics required by other onboard algorithms.
Even though the position $\mathbf{r}$ is provided by the \ac{aloam} algorithm, the rate of the position updates is too low for the control loop.
Furthermore, the velocity and acceleration vectors are not known and must thus be estimated.
A \ac{lkf} of a point mass model with position, velocity, and acceleration states is employed to estimate the unknown variables at the desired rate.
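A minimal single-axis version of such a point-mass \ac{lkf} might look as follows (the noise parameters are illustrative placeholders; the deployed filter covers all axes and is tuned empirically):

```python
import numpy as np

class PointMassLKF:
    """Linear Kalman filter over [position, velocity, acceleration] for a
    single axis: it predicts at a high rate and fuses low-rate position
    corrections, estimating the unmeasured velocity and acceleration."""

    def __init__(self, dt, q=1.0, r=0.01):
        self.x = np.zeros(3)                       # state [p, v, a]
        self.P = np.eye(3)
        self.A = np.array([[1, dt, 0.5 * dt**2],   # constant-acceleration model
                           [0, 1,  dt],
                           [0, 0,  1]])
        self.H = np.array([[1.0, 0.0, 0.0]])       # only position is measured
        self.Q = q * np.eye(3)                     # process noise (illustrative)
        self.R = np.array([[r]])                   # measurement noise (illustrative)

    def predict(self):
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x

    def correct(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x
```

Feeding the filter position-only corrections lets it reconstruct the velocity state required by the controller.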
While the \ac{imu} of the onboard autopilot provides the orientation $\mathbf{R}$, the heading\footnote{Heading is the angle between the heading vector and the first world axis.
The heading vector is the direction of the forward-facing body-fixed axis projected onto the plane formed by the horizontal axes of the world frame, as formally defined in~\cite{baca2021mrs}.} $\eta${} is prone to drift due to the bias of the gyroscopes in \ac{mems} \acp{imu}.
We correct this drift in a standalone heading filter, which fuses $\mathbf{\dot{R}}$ gyro measurements with \ac{aloam} $\eta${} corrections.
Corrections from the magnetometer are not considered, due to the often-occurring ferromagnetic materials and compounds in subterranean environments.
The processing of a large quantity of points from each scan and matching them into the map takes \SI{111}{\milli\second} on average on the onboard \ac{cpu} (see~\autoref{fig:aloam_runtime} for run-time analysis).
The empirical evaluation shows that the controller of the \ac{uav} becomes increasingly less stable when the state estimate is delayed for more than \SI{300}{\milli\second}.
To reduce the negative effect of the delay on the control performance, we employ the time-varying delay compensation technique~\cite{pritzl2022repredictor}.
We define the delay as $\tau=t_{\mathbf{T}_{k+1}} - t_{\mathcal{P}_{k+1}}$, {i.e.,}{} the time it took \ac{loam} to compute the pose transform after receiving the point cloud from \ac{lidar}.
The core of the method is a buffer $\mathbf{Q_x}$ containing the past states $\mathbf{x}_{\langle t_0-\tau_{\text{max}}, t_0\rangle}$ and a buffer $\mathbf{Q_z}$ containing the past corrections $\mathbf{z}_{\langle t_0-\tau_{\text{max}}, t_0\rangle}$ of the filter.
The length of the buffer is not fixed, but data older than the expected maximum delay $\tau_{\text{max}}$ are discarded to keep the buffer size bounded.
When a new delayed measurement $\mathbf{z}_{t_0-\tau}$ arrives at time $t_0$, it is applied as a correction to the state $\mathbf{x}_{t_0-\tau}$ in $\mathbf{Q_x}$.
The corrected state $\mathbf{\bar{x}}_{t_0-\tau}$ replaces $\mathbf{x}_{t_0-\tau}$.
All subsequent states $\mathbf{x}_{(t_0-\tau,t_0\rangle}$ are discarded from $\mathbf{Q_x}$, and replaced by the states $\mathbf{\bar{x}}_{(t_0-\tau,t_0\rangle}$ propagated from $\mathbf{\bar{x}}_{t_0-\tau}$, using regular prediction steps of the filter with all corrections from $\mathbf{Q_z}$.
\autoref{fig:repredictor} visualizes the sequence of performed actions.
Thus, we acquire a time-delay compensated state estimate which, when used in the feedback loop of the \ac{uav} controller, allows for stable flight with a delay of up to \SI{1}{\second}.
The effect that increasing the delay has on the control error is plotted in~\autoref{fig:delay_analysis}.
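The buffering scheme can be sketched in one dimension as follows (a scalar state with a fixed blending gain stands in for the full filter of~\cite{pritzl2022repredictor}; for clarity, the sketch replays the whole buffer instead of re-propagating only from the corrected state):

```python
class DelayCompensatedFilter:
    """Keep a bounded history of prediction inputs and corrections; a
    delayed measurement is fused at its true time step and the newer
    states are then recomputed."""

    def __init__(self, gain=1.0, horizon=100):
        self.gain = gain         # correction blending gain (illustrative)
        self.horizon = horizon   # buffered steps, ~ expected maximum delay
        self.x0 = 0.0            # state at the oldest buffered time
        self.inputs = []         # buffered prediction inputs (Q_x analogue)
        self.corrections = []    # buffered corrections, None if absent (Q_z)

    def _step(self, x, u, z):
        x = x + u                          # prediction step
        if z is not None:
            x = x + self.gain * (z - x)    # correction step
        return x

    def predict(self, u):
        """Append a regular prediction step; drop data older than the horizon."""
        self.inputs.append(u)
        self.corrections.append(None)
        if len(self.inputs) > self.horizon:
            self.x0 = self._step(self.x0, self.inputs.pop(0),
                                 self.corrections.pop(0))

    def correct_delayed(self, z, steps_ago):
        """Insert measurement z at its true time step (0 = the newest step)."""
        self.corrections[len(self.inputs) - 1 - steps_ago] = z

    def estimate(self):
        """Replay predictions and late-inserted corrections to the present."""
        x = self.x0
        for u, z in zip(self.inputs, self.corrections):
            x = self._step(x, u, z)
        return x
```

A correction arriving several steps late thus still removes the drift of all states that were predicted after its timestamp.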
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{./fig/diagrams/repredictor.pdf}
\caption{\label{fig:repredictor}
The left time sequence shows the situation in the filter after the arrival of delayed correction $\mathbf{z}_{t_0-\tau}$ at time $t_0$.
The green arrows represent corrections applied at the correct time.
The delayed $\mathbf{z}_{t_0-\tau}$ would be fused at $t_0$ in a traditional filter, resulting in a suboptimal state estimate.
However, thanks to the buffering of state and correction history, it is fused into the correct state at time $t_0-\tau$.
The states after $t_0-\tau$ had to be recalculated to reflect the correction $\mathbf{z}_{t_0-\tau}$, which is shown by the blue color in the right time sequence.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{./fig/plots/delay_analysis.pdf}
\caption{\label{fig:delay_analysis}
The box plot shows the median with lower and upper quartiles of the control error with respect to the delay of the position estimate used in the feedback loop.
The data was obtained in simulation by artificially increasing the delay of ground truth position in \SI{50}{\milli\second} increments.
Without compensation, the system becomes unstable after exceeding \SI{300}{\milli\second} delay, which results in oscillation-induced control error at \SI{350}{\milli\second}.
The control error for longer delays is not shown because the high amplitude of oscillations led to a collision of the \ac{uav}.
The highest delay with compensation is \SI{1000}{\milli\second}, at which the system exhibits a control error of over \SI{5}{\centi\meter} but remains stable.
The \ac{uav} stability is lost at \SI{1050}{\milli\second} delay.
}
\end{figure}
\section{Mapping}
\label{sec:mapping}
In this section, we present our approach to mapping the explored environments.
As each task has specific requirements on the map properties, we designed multiple spatial representations, each of which is structured for a particular task.
In particular, DenseMap~(\autoref{fig:map_types}a) is utilized for short-distance path planning; SphereMap~(\autoref{fig:map_types}b) for fast and safe long-distance path planning; FacetMap~(\autoref{fig:map_types}c) for surface coverage tracking; \ac{ltvmap}~(\autoref{fig:map_types}d) for compressed, topological, and mission-specific information sharing between robots in low-bandwidth areas; and LandMap~(\autoref{fig:landmap}) for representing feasible spots for safe \ac{uav} landing.
These maps and the methods for building them are presented in this section.
\begin{figure} [ht]
\newcommand{12.50em}{10.8em}
\newcommand{0.95em}{1.0em}
\newcommand{0.8em}{-1.0em}
\newcommand{0.3}{0.3}
\centering
\begin{tikzpicture}
\node[anchor=north west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.0\width} {0.0\height} {0.35\width} {0.0\height}},clip]{./fig/maps/top_densemap.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(a)}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=north west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.1\width} {0.1\height} {0.4\width} {0.05\height}},clip]{./fig/maps/top_spheremap.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(b)}};
\end{scope}
\label{fig:spheremap_spheres}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=north west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.1\width} {0.05\height} {0.40\width} {0.15\height}},clip]{./fig/maps/top_facetmap.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(c)}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=north west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.1\width} {0.10\height} {0.45\width} {0.00\height}},clip]{./fig/maps/top_ltvmap.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(d)}};
\end{scope}
\end{tikzpicture}%
\caption{
Top view of the used mapping structures from the intersection of the final event map.
DenseMap (a) is used for short-distance planning, SphereMap (b) for safety-aware long-distance planning, FacetMap (c) for storing surface coverage, and LTVMap (d) for compact topological information sharing among robots.
}
\label{fig:map_types}
\end{figure}
\subsection{DenseMap}
\label{sec:densemap}
The local spatial information perceived by the \ac{uav} is combined within a dense map that serves as the basis for the entire navigation stack, as described in~\cite{kratky2021exploration}.
The map integrates information in a dense, probabilistic manner using an efficient octree structure implemented within the OctoMap~\cite{hornung2013octomap} library.
During the map update, the data of each input modality producing spatial measurements are used to update the map with respect to the pose estimate corresponding to the timestamp of the respective measurement.
The data to be integrated are first cleared of any observation noise (see~\autoref{sec:sensory_perception}). The ray of each remaining spatial measurement is then integrated within a discretized representation of the environment using the Bayes rule and ternary classification into unknown, free, and occupied voxels.
The output of dense mapping is convertible to other navigation representations and serves as the fundamental structure for local planning and dynamic obstacle detection.
To retain maximum information under constraints on real-time performance, the voxelization resolution is selected such that a scan insertion is processed at \SI{5}{\hertz}, at worst.
The resolution can be locally increased if path planning demands a decrease in discretization errors. This is a useful feature for improving safety and repeatability in navigating highly narrow passages.
To maintain the map structure, the local resolution is controlled by a factor $n$ such that the local resolution equals ${r}/{2^n}$ with $r$ being the default resolution of the dense map.
In our sensory and computation setup, the default resolution is empirically set to \SI{20}{\centi\meter}, reduced by a factor of $n = 2$ to \SI{5}{\centi\meter} for navigating narrow passages, if required.
The integrated data consist of \ac{lidar} measurements and depth estimates of two \ac{rgbd} cameras.
These sensors are mounted on-board \acp{uav} so that the spatial observations cover roughly all directions around the robot, enabling almost arbitrary \ac{uav}-motion planning in collision-free 3D space.
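The probabilistic integration with ternary classification can be sketched with a dictionary-based voxel grid standing in for the OctoMap octree (the hit/miss probabilities and classification thresholds below are illustrative, not the deployed values):

```python
import math

# Illustrative log-odds increments and classification thresholds.
L_HIT, L_MISS = math.log(0.7 / 0.3), math.log(0.4 / 0.6)
L_OCC, L_FREE = math.log(0.97 / 0.03), math.log(0.12 / 0.88)

class VoxelGrid:
    def __init__(self, resolution=0.2):
        self.res = resolution
        self.logodds = {}  # voxel key -> accumulated log-odds

    def key(self, p):
        return tuple(int(math.floor(c / self.res)) for c in p)

    def _traverse(self, origin, endpoint):
        """Voxels crossed by the ray, excluding the endpoint voxel."""
        n = max(1, int(math.dist(origin, endpoint) / (self.res / 2)))
        end_key = self.key(endpoint)
        cells = set()
        for i in range(n):
            t = i / n
            p = [o + t * (e - o) for o, e in zip(origin, endpoint)]
            if self.key(p) != end_key:
                cells.add(self.key(p))
        return cells

    def integrate_ray(self, origin, endpoint):
        """Bayesian update: cells along the ray get a miss, the endpoint a hit."""
        for cell in self._traverse(origin, endpoint):
            self.logodds[cell] = self.logodds.get(cell, 0.0) + L_MISS
        end = self.key(endpoint)
        self.logodds[end] = self.logodds.get(end, 0.0) + L_HIT

    def classify(self, p):
        """Ternary classification into unknown / free / occupied."""
        l = self.logodds.get(self.key(p))
        if l is None:
            return "unknown"
        return "occupied" if l > L_OCC else "free" if l < L_FREE else "unknown"
```

Repeated observations accumulate evidence, so a single spurious return does not immediately flip a voxel's classification.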
\subsection{SphereMap}
\label{sec:spheremap}
To enable the UAV to quickly evaluate the travel time and risk caused by flying near obstacles while also pursuing any given goal, we developed a multi-layer graph structure that uses volumetric segmentation and path caching, called SphereMap~\cite{musil2022spheremap}.
All three layers of the SphereMap are updated near the UAV in every update iteration, which runs at approximately \SI{2}{\hertz}.
Path planning in the SphereMap depends on only one parameter $c_R$, which we call \textit{risk avoidance}. It is used to trade path safety for path length.
For long-distance planning, we disregard UAV dynamics and only take into account the path length and obstacle clearance along the path.
We define the path cost between points $\mathbf{p}_1$ and $\mathbf{p}_2$ as
\begin{equation}
\label{eq:spheremap_path_cost}
D(\mathbf{p}_1,\mathbf{p}_2) = L + c_R R,
\end{equation}
where $L$ is the path Euclidean length summed over all edges of the path in the sphere graph, and $R\in[0,L]$ is a risk value computed by examining the radii of the spheres along the path.
For example, a path with all spheres with radii at the minimal allowed distance from obstacles would have $R=L$, and a path through open space with large sphere radii would have $R=0$.
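The path cost of~\autoref{eq:spheremap_path_cost} can be sketched as follows (the linear mapping from sphere radii to the risk term, including the hypothetical safe radius \texttt{r\_safe}, is an illustrative choice; the exact risk computation of the SphereMap differs):

```python
import math

def sphere_path_cost(centers, radii, c_r, r_min, r_safe):
    """Risk-weighted path cost D = L + c_R * R over a chain of spheres.
    Each edge adds its length to L; its contribution to R scales from
    the full edge length (spheres at the minimum radius r_min) down to
    zero (spheres at least r_safe wide, i.e., open space)."""
    L, R = 0.0, 0.0
    for (a, ra), (b, rb) in zip(zip(centers, radii),
                                zip(centers[1:], radii[1:])):
        edge = math.dist(a, b)
        L += edge
        closeness = (r_safe - min(ra, rb)) / (r_safe - r_min)
        R += edge * max(0.0, min(1.0, closeness))  # risk share in [0, edge]
    return L + c_r * R
```

A path hugging obstacles therefore reproduces the $R=L$ extreme, while a wide corridor yields $R=0$ and the cost reduces to the plain length.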
The lowest layer of the SphereMap is a graph of intersecting spheres, shown in~\autoref{fig:map_types}b.
It is constructed by filling the free space of an obstacle $k$-d tree built from the DenseMap with spheres at randomly sampled points.
The graph is continuously expanded with newly sampled intersecting spheres and pruned of spheres that become unsafe or redundant.
The radii of the spheres carry obstacle clearance information, which is used for path risk evaluation.
The second layer of the SphereMap is a graph of roughly convex segments of the sphere-graph. It is updated after every update of the sphere graph by creating and merging segments until every sphere in the graph belongs to a segment.
The third and last layer of the SphereMap is a navigation graph.
For every two adjacent segments, we store one sphere-sphere connection, which we call a \textit{portal} between the segments, as in~\cite{blochtlinger2018topomap}.
These portals form the vertices of the navigation graph.
At the end of every SphereMap update iteration, we compute which paths are optimal according to the path cost from~\autoref{eq:spheremap_path_cost} between all pairs of portals of a given segment.
The paths are computed only inside that given segment. If the segments are kept small (tens of meters in length), the recomputation is reasonably fast.
The optimal portal-portal paths form the edges of the navigation graph.
The UAV uses the navigation graph to quickly find long-distance paths between any two points in the known space by planning over the edges of the navigation graph, and then by only planning over the sphere graph in the first and last segments of the path.
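The two-level query can be sketched as follows (a heavily simplified model in which the fine sphere-graph searches in the start and goal segments are abstracted into given costs to the segments' portals):

```python
import heapq

def navgraph_path_cost(start_portals, goal_portals, edges):
    """Hierarchical long-distance query: Dijkstra over the cached
    portal-portal edges of the navigation graph, seeded with the costs
    from the start position to the portals of its segment and closed
    with the costs from portals of the goal segment to the goal."""
    dist, pq = {}, [(c, p) for p, c in start_portals.items()]
    heapq.heapify(pq)
    while pq:
        d, p = heapq.heappop(pq)
        if p in dist:
            continue
        dist[p] = d
        for q, w in edges.get(p, []):
            if q not in dist:
                heapq.heappush(pq, (d + w, q))
    return min((dist[p] + c for p, c in goal_portals.items() if p in dist),
               default=float("inf"))
```

Because the portal-portal edge costs are precomputed during map updates, the query itself only searches a small graph.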
\subsection{FacetMap}
\label{sec:facetmap}
\begin{figure}
\centering
\includegraphics[width=0.495\textwidth]{./fig/screenshots/facetmap_showcase_1.png}
\includegraphics[width=0.495\textwidth]{./fig/screenshots/facetmap_showcase_2.png}
\caption{\label{fig:facetmap_demo}
Illustration of the FacetMap as described in~\autoref{sec:facetmap}.
The map is built from the DenseMap (left) by finding normals of sampled points.
The orientation of the visualization discs (right) is determined by the facet's normal, and the color by whether the facet was covered by the UAV's front-facing cameras or not.
}
\end{figure}
The occupancy octree and SphereMap maps are sufficient for volumetric exploration.
However, the goal of the DARPA SubT challenge was to locate artifacts, most of which could be detected only by cameras. Because the \ac{fov} of our UAVs' cameras did not cover the entire \ac{fov} of the \ac{lidar} and depth cameras, not all occupied voxels in the occupancy map could be considered as ``covered by cameras''.
For this reason, we developed another map, called FacetMap, illustrated in~\autoref{fig:facetmap_demo}.
This map is a simple surfel map, with the facets stored in an octree structure, each having an orientation, a coverage value, and a fixed size.
The FacetMap is built by computing the normals of the occupancy map at sampled occupied points, and creating facets with a set resolution if there are no existing facets with a similar normal nearby.
The facets are updated (i.e. added or deleted) periodically at approximately \SI{2}{\hertz} in a cube of pre-defined size around the \ac{uav}.
Each facet holds a coverage value that is, for simplicity, defined as binary.
A facet is marked as covered if the facet center falls into the \ac{fov} of any camera and the ray from the camera to the facet center is within a certain angle of the facet's normal, so that surfaces viewed at a very skewed angle are not marked as covered.
The covered facets stay in the map even if the underlying occupancy map shifts (e.g. when an obstacle moves).
As described in~\autoref{sec:viewpoint_path_enhancement}, one strategy used in our system uses this map to cover as much of the surface as possible while flying between volumetric exploration viewpoints. The strategy in~\autoref{sec:dead_end_inspection} uses this map to completely cover surfaces of certain areas.
Coverage of entire regions of the SphereMap can also be easily computed and then stored in the \ac{ltvmap}, as described in~\autoref{sec:lsegmap}.
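The binary coverage test can be sketched as follows (the symmetric \ac{fov} cone model and all thresholds are illustrative assumptions, not the deployed camera parameters):

```python
import math

def _angle_deg(u, v):
    """Angle between two unit vectors in degrees, clamped for safety."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def facet_covered(center, normal, cam_pos, cam_dir,
                  fov_deg=90.0, max_view_angle_deg=70.0, max_range=10.0):
    """A facet counts as covered when its center is within range, inside
    the camera's FOV cone, and the viewing ray is not too skewed with
    respect to the facet normal."""
    ray = [f - c for f, c in zip(center, cam_pos)]
    dist = math.sqrt(sum(r * r for r in ray))
    if dist == 0.0 or dist > max_range:
        return False
    ray = [r / dist for r in ray]               # unit viewing ray
    if _angle_deg(ray, cam_dir) > fov_deg / 2:  # outside the FOV cone
        return False
    # angle between the viewing direction and the surface normal
    return _angle_deg([-r for r in ray], normal) <= max_view_angle_deg
```

The check assumes unit-length `normal` and `cam_dir` vectors; an actual implementation would also ray-cast against the occupancy map to handle occlusions.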
\subsection{\acs{ltvmap}}
\label{sec:lsegmap}
Distributing all of the maps described in this chapter among the \acp{uav} would be highly demanding for the communication network.
As such, we have developed the \acf{ltvmap}, which combines the necessary mission-related information from the other maps and can be quickly extracted from the SphereMap and sent at any time.
This map consists of an undirected graph, where each vertex is created from a free-space segment in the original SphereMap and the edges are added for all of its adjacent segments.
Each vertex holds an approximation of the segment's shape.
In our implementation, we use 4-\ac{dof} bounding boxes (with variable size and rotation about the vertical axis) for the shape approximation, though any other shape could be used.
For cooperative exploration purposes, the frontier viewpoints (described in~\autoref{sec:viewpoint_computation}) found by a given UAV are also sent in the \ac{ltvmap}, with each viewpoint being assigned an information value and segment from which the viewpoint is reachable.
For surface coverage purposes, every segment in the \ac{ltvmap} also holds a single numerical value representing the percentage of relevant surfaces covered in that segment.
This value is computed by projecting points from the facets of the FacetMap and counting the points that fall into every segment.
Further description and analysis of \acp{ltvmap} can be found in~\cite{musil2022spheremap}.
These \acp{ltvmap} are shared among robots, and are used for cooperative search planning onboard \acp{uav}, as described in~\autoref{sec:coop_ss}.
\subsection{LandMap}
\label{sec:landmap}
As described in \autoref{sec:landing_spot_detection}, a downward-facing \ac{rgbd} camera detects areas safe for landing.
These areas are continuously collected within an unconnected set and stored as a sparse point cloud at a resolution usually coarse enough to keep the LandMap memory-light.
An example of the LandMap is shown in~\autoref{fig:landmap}.
During the homing phase of the mission, the \ac{uav} navigates to an area connected to the ground station via the communication network (see~\autoref{sec:communication}).
After reaching this area, the \ac{uav} navigates towards the safe landing spot in the LandMap that is closest to its current pose (see the mission state machine in~\autoref{fig:mission_sm}).
While flying towards the LandMap-selected spot, the \ac{uav} lands sooner if the ground below the \ac{uav} is classified as safe-for-landing in the current \ac{rgbd} data.
The landing spots previously identified as safe are, once more, verified before landing in order to ensure safety in dynamic environments.
If the spot is no longer safe for landing, it is invalidated and the \ac{uav} is navigated to the next closest landing spot.
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\textwidth]{./fig/screenshots/landmap_finals_example.pdf}
\caption{\label{fig:landmap}
Example of the LandMap with a resolution of \SI{5}{\meter} built at the beginning of the \ac{darpa} \ac{subt} final event after \SI{70}{\second} of a \ac{uav} flight. The \ac{uav} is represented by the Cartesian axes with its trajectory colored in red.
The LandMap incorporates the spots classified as safe for \ac{uav} landing (green circles) which are used during the \ac{uav} homing phase of the mission to ensure safety during the landing procedure.%
}
\end{figure}
\section{Autonomous search}
\label{sec:exploration}
Since communication between robots in subterranean environments can never be ensured, the UAVs in our system operate completely autonomously and only use information from other robots to update their goal decision (e.g. blocking frontiers leading to areas explored by other robots).
The system can also be controlled at a very high level by the human operator, which is described in~\autoref{sec:operator_commands}.
This section describes the high-level search autonomy of our system.
\subsection{Informative viewpoint computation and caching}
\label{sec:viewpoint_computation}
For exploration purposes, the UAVs in our system do not consider the information gain along trajectories, but rather sequences of discrete viewpoints, so that we can have a unified goal representation for both local and global search planning.
These viewpoints are divided into places at which a UAV could obtain some volumetric information, called \textit{frontier viewpoints}, and the points at which a UAV could cover some not-yet-covered surfaces with its cameras, called \textit{surface coverage viewpoints}.
Each viewpoint $\xi$, comprising a position $\vec{p}_\xi$ and heading $\varphi_\xi$, is therefore assigned an information value $I(\xi)$.
In our approach, the information gain of frontier viewpoints $\xi_F$ and surface viewpoints $\xi_S$ is computed as
\begin{equation}
{I}( \xi_F) = c_{F} \frac{n_{\textrm{unk}}}{n_{\textrm{rays}}}, \quad \quad I(\xi_S) =c_{S}n_{\textrm{unc}} + c_{SF},
\end{equation}
where ${n_{\textrm{unk}}}/{n_{\textrm{rays}}}$ is the ratio of rays cast in the UAV's depth cameras' and LIDAR's FOVs that hit an unknown cell of the occupancy map before hitting an occupied one or going out of range.
Similarly, $n_{\textrm{unc}}$ is equal to the number of uncovered facets of the FacetMap, hit by rays that are cast in the UAV's RGB cameras' FOVs.
The constants $c_F$, $c_S$, $c_{SF} $ are empirically tuned to alter the UAV's next viewpoint selection and hence, its behavior.
The UAV does not sample and evaluate viewpoints on demand after reaching a viewpoint; rather, it continually samples viewpoints in its vicinity at a given rate and stores them in a map of cached viewpoints.
Only viewpoints that have $I(\xi)$ above a threshold, are safe, are not too close to another informative viewpoint, and are not blocked by mission control are stored.
The viewpoints are also pruned from the map if they become uninformative or if a better viewpoint is nearby.
Lastly, viewpoints that were found in a previous update and now lie outside the local update box are kept as global goals and are pruned more aggressively than the local goals.
This approach continually produces a map of informative viewpoints that is denser near the UAV and sparse in the rest of the environment.
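The caching and pruning of informative viewpoints can be sketched as follows (the gain threshold, separation distance, and pruning policy are illustrative; headings are stored but not used by the sketch):

```python
import math

class ViewpointCache:
    """Continually maintained map of informative viewpoints: new samples
    are stored only if informative and not redundant, and cached ones
    are pruned once the growing map makes them uninformative."""

    def __init__(self, min_gain=0.1, min_separation=1.0):
        self.min_gain = min_gain              # I(xi) storage threshold
        self.min_separation = min_separation  # redundancy distance
        self.viewpoints = []                  # list of (position, heading, gain)

    def _too_close(self, p):
        return any(math.dist(p, q) < self.min_separation
                   for q, _, _ in self.viewpoints)

    def add(self, position, heading, gain):
        """Store a sampled viewpoint only if informative and not redundant."""
        if gain < self.min_gain or self._too_close(position):
            return False
        self.viewpoints.append((position, heading, gain))
        return True

    def reevaluate(self, gain_fn):
        """Prune viewpoints whose re-evaluated gain dropped below the threshold."""
        self.viewpoints = [(p, h, gain_fn(p)) for p, h, _ in self.viewpoints
                           if gain_fn(p) >= self.min_gain]
```

Periodic re-evaluation with the current map removes viewpoints whose surroundings have meanwhile been explored.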
\subsection{Single-UAV autonomous search planning}
\label{sec:single_uav_goal_eval}
In our approach, the UAV can be in three states of autonomous search --- \textit{locally searching}, \textit{traveling to goal} or \textit{returning}, and the goal planning and evaluation is divided into local and global planning, as in~\cite{dang2019gbplanner}.
The transitions between these states are fairly simple --- if there are informative and reachable viewpoints near the UAV, the UAV is in the locally searching state and tries to always keep a sequence of two viewpoints.
These are given to the trajectory planning pipeline so that the UAV does not have to stop at each viewpoint to compute the next best one.
This is achieved by locally replanning the sequence whenever the UAV approaches a viewpoint.
When there are no reachable viewpoints near the UAV or when new information is received from the operator or other robots, a global replanning is triggered.
The global replanning computes paths to all stored informative viewpoints (not only in the local search box) and evaluates them.
The best viewpoint is then set as a goal to the long-distance navigation pipeline described in~\autoref{sec:long_distance_navigation}.
Finally, the \textit{returning} state is triggered when the global planning does not find any reachable goals, when the operator demands it, or when $t_{\textrm{home}} > c_R t_{\textrm{battery}}$, where $t_{\textrm{home}}$ is the estimated time of flight needed to return to the base station, $t_{\textrm{battery}}$ is the estimated remaining flight time, and $c_R$ is an empirically tuned safety constant.
The value of $t_{\textrm{home}}$ is computed from the UAV's average flight speed, and a path found through the SphereMap to the base station.
If there is no path to the base station, the UAV instead tries to return along a tree of visited positions, built specifically for this purpose; for example, if a path is only temporarily blocked, the UAV flies to the roadblock and, once it is removed, continues flying to the base station.
The UAV can also recover from this state: if it is returning because no goals were reachable and some goals become reachable again, it resumes the search.
When the UAV gets close to the goal, it switches back to the \textit{locally searching} state.
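One reasonable reading of the battery-based return condition can be sketched as follows; the value of $c_R$ and its interpretation as a safety fraction of the remaining flight time are assumptions for illustration:

```python
def should_return(t_home, t_battery, c_r=0.7, reachable_goals=True,
                  operator_demand=False):
    # Trigger the returning state when no goals are reachable, the operator
    # demands it, or the estimated time to fly home exceeds a safety
    # fraction c_r of the estimated remaining flight time.
    return operator_demand or not reachable_goals or t_home > c_r * t_battery
```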
The reward functions used to evaluate goals govern the behavior of the UAV while searching the environment, and as such, they define the search strategy of the UAV.
For simplicity, the local planner and global planner use the same reward function in a given strategy, with one difference: the local planner can add a penalty to local goals based on the UAV's current momentum and heading, allowing for smoother local search.
The following strategies and their corresponding reward functions were used in the challenge:
\subsubsection{Greedy search strategy (GS)}
The simplest reward function for selecting the next best viewpoint $\xi$ from the current UAV viewpoint $\xi_{\textrm{UAV}}$ (the UAV's current position and heading) can be written as
\begin{equation}
R_{\textrm{GS}}(\xi_{\textrm{UAV}}, \xi) = {I(\xi)} - { D(\xi_{\textrm{UAV}}, \xi)},
\end{equation}
where $I(\xi)$ is the information value of the viewpoint (described in~\autoref{sec:viewpoint_computation}) and $D$ is the best path cost computed in the SphereMap (described in~\autoref{sec:spheremap}).
This reward function for controlling the next best goal selection thus depends on the constants $c_F$, $c_S$, $c_{SF}$ described in~\autoref{sec:viewpoint_computation} and on the risk-awareness constant $c_R$ used in path planning, which can be used to tune the search based on the user's needs.
This reward function is very simple and can take the UAV in various directions, leaving behind uncovered surfaces in faraway places.
The next strategy aims to solve this.
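A minimal sketch of the greedy selection, assuming the information values and SphereMap path costs have already been computed:

```python
def reward_gs(info, path_cost):
    # R_GS(xi_uav, xi) = I(xi) - D(xi_uav, xi)
    return info - path_cost

def next_best_viewpoint(viewpoints):
    # viewpoints: list of (info_value, path_cost_from_uav) tuples;
    # returns the index of the viewpoint maximizing the greedy reward.
    return max(range(len(viewpoints)), key=lambda i: reward_gs(*viewpoints[i]))
```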
\subsubsection{Dead end inspection strategy (DEI)}
\label{sec:dead_end_inspection}
A more thorough reward function can be written as
\begin{equation}
R_{\textrm{DEI}}(\xi_{\textrm{UAV}}, \xi) = I(\xi) - D(\xi_{\textrm{UAV}}, \xi) + (D(\mathbf{p}_{\textrm{HOME}}, \xi) - D(\mathbf{p}_{\textrm{HOME}}, \xi_{\textrm{UAV}})).
\end{equation}
This strategy adds the difference in path costs to the base station position $\mathbf{p}_{\textrm{HOME}}$ from the evaluated viewpoint and from the UAV.
This greatly increases the value of viewpoints that are deeper in the environment, relative to the UAV.
Using this reward function, the UAV will most likely first explore frontiers until reaching a dead-end, and then thoroughly cover surfaces from the dead end back to the base, analogous to a depth-first search.
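The DEI reward follows directly from the formula; the distance arguments are assumed to be SphereMap path costs computed elsewhere:

```python
def reward_dei(info, d_uav_to_vp, d_home_to_vp, d_home_to_uav):
    # R_DEI = I(xi) - D(uav, xi) + (D(home, xi) - D(home, uav)):
    # the last term rewards viewpoints deeper in the environment.
    return info - d_uav_to_vp + (d_home_to_vp - d_home_to_uav)
```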
\subsubsection{Viewpoint path enhancement strategy (VPE)}
\label{sec:viewpoint_path_enhancement}
The third strategy used on the UAVs is not a change of the reward function, but rather a simple way to increase surface coverage when the UAV is flying through long stretches of explored but not perfectly covered space, either in the DEI or GS strategy.
If VPE is enabled and the UAV is flying to a distant goal, then we periodically take the short-distance trajectory from the local path planner (described in~\autoref{sec:planning}), sample it into multiple viewpoints, and try to perturb these viewpoints to increase surface coverage, while not increasing the flight time too much.
Thus, we fully utilize the agility of quadcopter UAVs, as they can easily turn from side to side while flying in a given direction.
\subsection{Probabilistic cooperative search planning}
\label{sec:coop_ss}
Our approach to multi-UAV search planning was to make the UAVs completely autonomous and decentralized by default, while also being able to share important information and use it for their own planning.
Each UAV always keeps the latest version of the LTVMap (described in~\autoref{sec:lsegmap}) received from a given UAV.
When a new LTVMap is received, each of the stored maps is updated using every other stored map, as well as using the LTVMap constructed from the UAV's own SphereMap.
\begin{figure}
\centering
\includegraphics[clip, trim=4cm 4.0cm 0cm 0cm, width=0.80\textwidth]{fig/diagrams/coop_exploration.pdf}
\caption{%
Diagram illustrating the computation of the cooperative exploration reward function, as described in~\autoref{eq:coop_planning}.
The image shows a UAV evaluating a frontier viewpoint $\xi_L$ (orange) in its local occupancy map (black lines).
The UAV has received two LTVMaps $M_1, M_2$ from two other UAVs.
As the local map frontier $\xi_L$ falls into one of the free space segments $\sigma_{ML,M_1}$ of $M_1$, it is assigned as belonging to that segment and acts as an edge in planning paths between the local map and the received map $M_1$.
Therefore, the frontier viewpoints $\xi_{1,M_1}, \xi_{2,M_2}$ should be reachable through $\xi_L$. A path to them is estimated across the centroids of the segments of $M_1$.
The viewpoints $\xi_{3,M_1}, \xi_{1,M_2}$ (black) are marked as having $l(\xi \in V_{\mathrm{exp}}) = 1$, since they fall deep into the explored space of the other received map, and are therefore not considered.
}
\label{fig:coop_schematic}
\end{figure}
The updating is done so that frontier viewpoints sent along with each LTVMap that fall into the explored space of other LTVMaps are blocked.
This is difficult to do in a deterministic manner due to map drift and other inaccuracies. Therefore, each frontier viewpoint in any LTVMap is assigned a likelihood $l(\xi \in V_{\textrm{exp}})$ to represent how likely it is that the viewpoint has already been visited by any other UAV.
The $l(\xi \in V_{\textrm{exp}})$ of any viewpoint is the maximum of a function describing the likelihood that the point lies in a given segment's bounding box, computed over all segments of all the other received LTVMaps.
This likelihood function can be selected arbitrarily; in our approach, we selected a function that is equal to 0 outside the segment's bounding box and grows linearly to 1 towards the center of the bounding box.
The updates of these $l(\xi \in V_{\textrm{exp}})$ values for a three-\ac{uav} mission can be seen in~\autoref{fig:use_of_received_maps}.
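One such likelihood function for an axis-aligned bounding box can be sketched as below; this particular piecewise-linear shape is an assumption consistent with the description (0 outside the box, rising linearly to 1 at its center):

```python
def explored_likelihood(point, box_min, box_max):
    # Per axis: 0 at the box faces (and outside), 1 at the center,
    # linear in between; the per-axis minimum gives the box-wide value.
    l = 1.0
    for p, lo, hi in zip(point, box_min, box_max):
        center = 0.5 * (lo + hi)
        half = 0.5 * (hi - lo)
        l = min(l, max(0.0, 1.0 - abs(p - center) / half))
    return l
```

The final $l(\xi \in V_{\textrm{exp}})$ of a viewpoint is then the maximum of this value over all segments of all other received LTVMaps.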
For a frontier viewpoint $\xi_L$ in the UAV's local map, which has $l(\xi_L \in V_{\textrm{exp}}) > 0$, the reward function changes into
\begin{equation}
R(\xi_{\textrm{UAV}}, \xi_L, \mathbb{M}) = l(\xi_L \in V_{\textrm{exp}})R_R (\xi_{\textrm{UAV}}, \xi_L, \mathbb{M}) + (1 - l(\xi_L \in V_{\textrm{exp}})) R_L (\xi_{\textrm{UAV}}, \xi_L),
\end{equation}
where $R_L$ is the reward function defined by the employed single-\ac{uav} search strategy described in~\autoref{sec:single_uav_goal_eval}, which does not take into account any information from other UAVs, and $R_R$ is a reward function that considers other frontiers in received LTVMaps that could be reachable through $\xi_L$.
This function is defined as
\begin{equation}
\label{eq:coop_planning}
R_{R}(\xi_{\textrm{UAV}}, \xi_L, \mathbb{M}) = \max_{M \in \mathbb{M}} \max_{\xi_R \in M} \left[ I(\xi_R) - D(\xi_{\textrm{UAV}}, \xi_L) - \frac{D_R(\xi_L, {\xi_{R}}, \sigma_{ML,M})}{1- l(\xi_R \in V_{\textrm{exp}} )} \right],
\end{equation}
where $\mathbb{M}$ is the set of all received LTVMaps, and $\sigma_{ML,M}$ is the most likely segment that $\xi_L$ belongs to in a map $M$.
The function $D_R$ is a special path cost function computed as a sum of Euclidean distances of segment centers in a given map, spanning from $\xi_L$, through the center of $\sigma_{ML,M}$, and towards a given frontier viewpoint $\xi_R$.
The value of $D_R$ is also scaled by a user-defined parameter in order to increase the cost of viewpoints in received maps, as there is more uncertainty about the paths to these viewpoints.
The division by $1-l(\xi_R \in V_{\textrm{exp}})$ serves to gradually decrease the reward of exploring the viewpoint up to $-\infty$ when the viewpoint was surely explored by another \ac{uav}.
Computation of this reward function is illustrated in~\autoref{fig:coop_schematic}.
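The blending of the local and cooperative rewards, and the likelihood penalty on remote frontiers, can be sketched as follows (all distances and the remote path cost $D_R$ are assumed precomputed):

```python
def blended_reward(l_exp, r_local, r_remote):
    # R = l * R_R + (1 - l) * R_L: the more likely xi_L is already
    # explored, the more weight the cooperative reward receives.
    return l_exp * r_remote + (1.0 - l_exp) * r_local

def remote_reward(info_remote, d_uav_to_local, d_remote, l_remote_exp):
    # I(xi_R) - D(uav, xi_L) - D_R / (1 - l): the cost of a remote
    # frontier diverges as the likelihood it was explored approaches 1.
    return info_remote - d_uav_to_local - d_remote / (1.0 - l_remote_exp)
```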
The percentage of covered surfaces inside segments received in the LTVMap is used to block the surface coverage viewpoints in segments where the percentage is above a user-defined threshold.
The segments with low surface coverage could be used as additional goals in a similar manner as the shared frontiers in~\autoref{fig:coop_schematic}; however, for simplicity, this was not implemented.
\begin{figure}
\newcommand{12.50em}{12.50em}
\newcommand{0.95em}{0.95em}
\newcommand{0.8em}{0.8em}
\newcommand{0.3}{0.3}
\newcommand{0.11}{0.11}
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.245\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.11\height}},clip]{./fig/screenshots/multiuav_lsegmap_1.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(a)}};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.245\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.11\height}},clip]{./fig/screenshots/multiuav_lsegmap_2.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(b)}};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.245\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.11\height}},clip]{./fig/screenshots/multiuav_lsegmap_3.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(c)}};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.245\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.11\height}},clip]{./fig/screenshots/multiuav_lsegmap_4.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(d)}};
\end{scope}
\end{tikzpicture}
\caption{\label{fig:use_of_received_maps}
Illustration of \ac{ltvmap} sharing and utilization during a cave exploration mission in simulation with three UAVs running the DEI strategy (described in~\autoref{sec:dead_end_inspection}).
The heatmap color of the \ac{ltvmap} segments shows surface coverage of the individual segments, with purple signifying complete coverage. The colors of the exploration viewpoints signify their $l(\xi \in V_{\textrm{exp}})$ value, with white having a value equal to 1 and black being 0.
Image (a) shows the \ac{ltvmap} sent by UAV1 after returning to communication range with the base station.
This map is given to UAV2, which then launches and chooses to explore the nearest unexplored frontier in the map of UAV1.
Image (b) shows the \ac{ltvmap} sent by UAV2 when it is returning.
Image (c) then shows how the maps are co-updated onboard UAV3, which launches after receiving the \ac{ltvmap} from UAV2.
The only non-explored viewpoint remaining is in the top part of the image.
Image (d) shows the maps received by the base station from all three UAVs at the end of the mission with no unexplored viewpoints remaining.
}
\end{figure}
\subsection{Autonomy robustness enhancements}
\label{sec:autonomy_robustness}
One important problem is that dark and non-reflective surfaces (common on the DARPA SubT Finals course) do not return the LiDAR beam with enough energy.
Such surfaces are never marked as occupied and essentially become permanent frontiers, which means that some viewpoints deemed informative, as defined in~\autoref{sec:single_uav_goal_eval}, are in fact non-informative.
To solve this, the UAV builds a map of visited positions.
With time spent near a visited position, we linearly decrease the value of nearby viewpoints.
After some time, sampling near those positions is blocked completely.
Another problem arises due to highly dynamic obstacles in the occupancy map, such as other robots, fog, or very narrow corridors where the discretization of occupancy can oscillate; as a result, the reachability of a given viewpoint can also oscillate.
This was solved by putting a timeout on trying to reach a given viewpoint, triggered if the \ac{uav} did not get closer to the goal within a defined time.
After this timeout, an area around the viewpoint is blocked until the end of the mission, or until a manual reset by the operator.
This approach may cause the \ac{uav} to block some goals that are only temporarily blocked by another robot in narrow passages, but it was deemed preferable rather than having the \ac{uav} permanently oscillate in such passages.
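The timeout mechanism can be sketched as a small guard that tracks progress towards the goal; the timeout value and the interface are placeholders:

```python
class GoalTimeoutGuard:
    """Blocks a goal if the UAV fails to get closer to it within a timeout."""

    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s
        self.best_dist = float("inf")
        self.last_progress = None

    def update(self, dist_to_goal, now):
        # Returns True once the goal should be blocked: no improvement in
        # the best distance to the goal for longer than the timeout.
        if self.last_progress is None or dist_to_goal < self.best_dist:
            self.best_dist = dist_to_goal
            self.last_progress = now
            return False
        return now - self.last_progress > self.timeout_s
```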
The autonomy system can be easily controlled by operator commands (described in~\autoref{sec:operator_commands}) which can block viewpoints in a set cylinder in space, force the \ac{uav} to explore towards some goal, or simply move to a given position and stay there.
In this way, problematic situations not covered by our solution, such as organizing multiple robots in a tight corridor, can be resolved by the operator.
\section{Path planning, trajectory generation and tracking}
\label{sec:planning}
Planning collision-free paths and generating dynamically feasible trajectories is another vital component of the presented \ac{uav} system operating in a constrained environment.
The sequence of waypoints that efficiently guides the \ac{uav} through the environment is produced by the long-distance navigation module, described in~\autoref{sec:navigation}.
Given the navigation waypoints, a computationally undemanding multi-stage approach is applied to obtain a trajectory lying at a safe distance from obstacles, while also respecting dynamic constraints and minimizing the time of trajectory following.
In particular, the solution can be divided into three modules: path planning to obtain the local reference path, path processing to increase the safety margin of the path, and the trajectory generation to obtain a time-parametrized trajectory respecting the dynamic constraints of the \ac{uav}.
\subsection{Safety-aware long-distance navigation}
\label{sec:long_distance_navigation}
When a goal, or a sequence of goals, is set to the navigation stack, the long-distance navigation module computes a path through the SphereMap, optimal according to~\autoref{eq:spheremap_path_cost}.
The module then keeps this path and utilizes the trajectory planning and tracking modules to follow it.
This is done with a simple ``carrot and stick'' approach, where the trajectory planning module is given a near point on the path (at most approximately \SI{20}{\meter} from the \ac{uav}, to keep planning time short). This point is then slid along the path towards the goal.
If the trajectory planning and tracking modules cannot advance along the SphereMap path for a specified amount of time, which can be caused by a dynamic obstacle such as a rockfall, fog, or another robot, the SphereMap path following is stopped and an unreachability flag is raised.
The \ac{uav} then chooses a different goal or tries to find a new path to the same goal based on the current state of mission control.
When the search planning requires the \ac{uav} to fly through multiple nearby viewpoints --- such as when covering the surfaces in a room with cameras, or when visiting multiple viewpoints while traveling with the VPE strategy described in~\autoref{sec:viewpoint_path_enhancement} --- the local planning is instead given a sequence of viewpoints and plans a trajectory that visits each of them, up to a maximum distance, in order to keep planning time short.
Thus, the output of this module is always a sequence of one or more waypoints, which may or may not require heading alignment, through which the local path planning module should find a path in a short time; this time can be controlled by changing the look-ahead distance.
\label{sec:navigation}
\subsection{Local path planning}
The grid-based path planner coupled with iterative path processing was adopted from~\cite{kratky2021exploration} to obtain the primary reference path.
The proposed approach presents a path planning and processing algorithm, which is based on the traditional A* algorithm applied on a voxel grid with several modifications to decrease the computational demands.
The first modification lies in avoiding the computationally demanding preprocessing of the map representation (e.g., obstacle dilation by Euclidean distance field), which often requires more time than the actual planning on the grid.
This holds true especially for shorter direct paths that leave a significant portion of the previously processed environment unexploited.
For this reason, the presented approach builds a $k$-d tree representation of the environment, which is then used to determine the feasibility of particular cells based on their distance to the nearest obstacle.
As a result, the computational demands are partially moved from the preprocessing phase to the actual planning phase.
This approach is particularly efficient in the case of paths that do not require exploiting a significant part of the environment.
The second important modification is applying node pruning, similar to the jump point search algorithm.
This modification helps to decrease the number of unnecessarily expanded nodes and thus lowers the computational time required for obtaining the solution.
A detailed analysis of the influence of particular modifications on the performance of the planning algorithm is provided in~\cite{kratky2021exploration}.
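The on-demand feasibility test can be sketched as below; the onboard planner queries a $k$-d tree for the nearest obstacle, while this self-contained sketch uses a linear scan with the same semantics:

```python
import math

def cell_feasible(cell, obstacles, safety_dist):
    # A grid cell is traversable if its nearest mapped obstacle lies
    # farther away than the required safety distance.
    nearest = min(math.dist(cell, o) for o in obstacles)
    return nearest >= safety_dist
```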
To allow the generated paths to lead through narrow passages, the limits on safety distance are set to the dimension of the narrowest opening that is supposed to be safely traversable by the \ac{uav}.
However, setting this distance to a value that ensures safety in the event of the maximum possible deviation from the path caused by any external or internal source would lead to the preclusion of entering narrow passages of the environment.
On the contrary, setting this distance to a minimum value without considering safety margins would increase the probability of collision along the whole path.
To balance the traversability and safety of the generated path, the minimum required \ac{uav}-obstacle distance applied in the planning process is set to the lowest traversability limit, and iterative path post-processing is applied to increase the \ac{uav}-obstacle distance in wider parts of the environment.
The employed post-processing algorithm proposed in~\cite{kratky2021exploration} iteratively shifts the path towards the free part of the environment, while continually maintaining the path's connectivity.
As such, this anytime algorithm increases the average \ac{uav}-obstacle distance throughout the flight, which significantly improves the reliability of the navigation with respect to imprecisions in the reference trajectory tracking.
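A highly simplified sketch of this kind of iterative shifting is given below, nudging interior waypoints away from their nearest obstacle while keeping the endpoints fixed; the actual algorithm additionally maintains the path's connectivity:

```python
import math

def shift_path(path, nearest_obstacle, step=0.1, iters=10):
    # path: list of waypoints; nearest_obstacle(p) returns the closest
    # obstacle point to p. Interior waypoints are pushed away from it.
    pts = [list(p) for p in path]
    for _ in range(iters):
        for i in range(1, len(pts) - 1):
            obs = nearest_obstacle(pts[i])
            d = math.dist(pts[i], obs)
            if d > 0.0:
                pts[i] = [x + step * (x - o) / d for x, o in zip(pts[i], obs)]
    return pts
```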
The generated path is periodically replanned at a rate of \SI{0.5}{\hertz} to exploit the newly explored areas of the environment and handle dynamic obstacles.
The continuous path following is achieved by using the predicted reference generated by the MPC tracker~\cite{baca2018model} to identify the starting position for the planner at time $T_s$ in the future.
Apart from the periodic replanning, the planning is also triggered by the detection of a potential collision on the prediction horizon of the trajectory reference produced by the MPC tracker.
The potential collisions are checked at a rate of \SI{5}{\hertz} by comparing the distance of particular transition points of the predicted trajectory to the nearest obstacle in the most recent map of the environment.
Depending on the time remaining until the potential collision, the \ac{uav} is either requested to perform a stopping maneuver or to trigger replanning with the most up-to-date map.
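The decision logic of this collision check can be sketched as follows; the stopping-time threshold and the distance oracle are placeholders:

```python
def check_prediction_horizon(pred_points, obstacle_dist, safety_dist,
                             dt, t_stop=1.0):
    # Walk the MPC prediction horizon; on the first safety violation,
    # stop if the collision is imminent, otherwise request replanning.
    for i, p in enumerate(pred_points):
        if obstacle_dist(p) < safety_dist:
            return "stop" if i * dt < t_stop else "replan"
    return "ok"
```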
\subsection{Trajectory generation}
The path generated by the path planning pipeline is a series of waypoints, each consisting of a 3D position and heading.
A trajectory (a series of dense time-parameterized waypoints) is generated for each new path, so that the motion of the \ac{uav} satisfies translational dynamics and dynamic constraints up to the 4th derivative of position.
The trajectory generation system is based on the polynomial trajectory generation approach~\cite{richter2016polynomial, burri2015realtime}, but it was significantly extended to perform in a constrained, real-world environment~\cite{baca2021mrs}.
This approach was modified to minimize the total flight time while still satisfying the dynamic constraints.
Furthermore, an iterative sub-sectioning algorithm was added to force the resulting trajectory into a feasible corridor along the original path.
Moreover, a fallback solver was added to cope with invalid QP solver results caused by numerical instabilities.
Finally, a dynamic initialization mechanism and a time-outing system were added to cope with the non-zero trajectory generation and path planning computation times.
Even though the path planning and the trajectory generation can last for several hundred milliseconds, the resulting trajectory always smoothly connects to the currently tracked trajectory; therefore, no undesired motion of the \ac{uav} is produced.
The updated trajectory generation approach was released and is maintained as part of the MRS UAV System~\cite{baca2021mrs}.
\subsection{Trajectory tracking and feedback control}
The low-level guidance of the \ac{uav} is provided by a universal \ac{uav} control system, as developed by the authors of~\cite{baca2021mrs}.
The onboard control system supports modular execution of \ac{uav} reference generators, feedback controllers, and state estimators.
During the SubT Finals, the system exclusively utilized the geometric tracking control on \emph{SE(3)}~\cite{lee2010geometric} to follow the desired states generated by the MPC Tracker~\cite{baca2018model}.
\section{Artifact detection, localization and reporting}
\label{sec:object_detection}
Objects of interest (artifacts) in the explored area are detected visually using a \ac{cnn} that processes images from several onboard RGB cameras covering the frontal, top, and bottom sectors of the \ac{uav}.
The \ac{cnn} detector is trained on our manually-labeled dataset and outputs predicted bounding boxes and corresponding classes of the artifacts in the input images.
To estimate the 3D positions of the detections, we have leveraged the onboard 3D \ac{lidar} sensor and the mapping algorithm described in~\autoref{sec:mapping}.
These positions are processed by an artifact localization filter based on our previous work~\cite{vrba_ral2019}, which fuses the information over time to filter out sporadic false positives and improve the localization precision.
The artifact detection, localization, and filtering pipeline is illustrated in~\autoref{fig:detection_schematic}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{fig/diagrams/uav_detection_schematic.pdf}
\caption{%
Schematic of the artifact detection and localization pipeline.%
}
\label{fig:detection_schematic}
\end{figure}
\subsection{Artifact detection}
The artifact detection is executed in parallel on image streams from all cameras at the same time, which would normally require a dedicated \ac{gpu} onboard the \ac{uav}.
Therefore, we have chosen the lightweight MobileNetV2 \ac{cnn}~\cite{mobilenet} in order to achieve a high detection rate while keeping the load on the onboard computer as low as possible.
The \ac{cnn} is running on the Intel UHD \ac{gpu} that is integrated within the onboard \ac{cpu} of the \ac{uav}.
The integrated Intel \ac{gpu} interfaces with our pipeline using the OpenVino\footnote{\href{https://docs.openvino.ai/latest/index.html}{\texttt{docs.openvino.ai/latest/index.html}}} framework.
The OpenVino framework together with the Intel \ac{gpu} achieves a detection rate of more than \SI{5}{\hertz} on 4 cameras in parallel, but due to fixed resource allocation, we lock the camera rates to \SI{5}{\hertz}.
This artificial throttling of the detection rate avoids issues where the integrated \ac{gpu} locks memory resources needed by the \ac{cpu}, which might lead to lag in the control pipeline.
\begin{table}
\setlength{\tabcolsep}{3pt}
\centering
\footnotesize
\begin{tabular}{l c c c c c c c c c c c}
\toprule
\tablehdg{Input} & $384^2$×3 & $112^2$×32 & $112^2$×16 & $56^2$×24 & $28^2$×32 & $14^2$×64 & $14^2$×96 & $7^2$×160 & $7^2$×320 & $7^2$×1280 & 1×1×1280 \\
\tablehdg{Type} & conv2d & bottleneck & bottleneck & bottleneck & bottleneck & bottleneck & bottleneck & bottleneck & conv2d 1×1 & avgpool 7×7 & conv2d 1×1 \\
\tablehdg{$n$} & 1 & 1 & 2 & 3 & 4 & 3 & 3 & 1 & 1 & 1 & - \\
\bottomrule
\end{tabular}
\caption{The architecture of MobileNetV2.
Each column of the table describes a sequence of 1 or more identical layers, repeated $n$ times.}
\label{table:mobilenet}
\end{table}
The MobileNetV2 base model (see~\autoref{table:mobilenet}) is modified for training using the OpenVino open-source tools.
The evaluation of the model is based on the mean average precision metric (mAP) and recall.
The mAP metric is a standard metric for object detection models, since it provides information about how accurate the predictions are.
Recall expresses the ratio between the number of true positive predictions and the total number of positive samples in the dataset.
The main challenge for the model is to adapt to different domains --- mine, urban, and cave environments have different lighting and backgrounds (see~\autoref{fig:pics_exa}), which affect the detection performance.
Moreover, the angle from which the images were taken is different as part of the images in the dataset were taken by ground vehicles and the rest by UAVs.
As the model was trained only on part of the dataset, we had to train it incrementally whenever data from a new type of environment or camera angle was gathered to ensure all cases were represented uniformly in the training data.
For training the model on a growing dataset, we used a variety of learning schedulers from the MMdetection toolbox~\cite{mmdetection}.
The Cosine scheduler designed by~\cite{cosineLR} is used for warm-restarts of the training pipeline to overcome the loss of learned features.
The main challenge of transfer learning is to overcome the loss of learned distribution on the previous dataset when training the model on the new dataset (in this case the new dataset is a combination of the previous dataset and newly collected data).
In our experience, different learning rate schedulers should be used depending on the size of newly added data:
\begin{itemize}
\item \textit{Cosine scheduler} is used during clean model training on the initial dataset.
\item \textit{Cyclic scheduler} \cite{cyclicLR} is used when the size of new data is more than \SI{15}{\percent} of the size of the initial dataset.
\item \textit{Step decay scheduler} is used when a small portion of data is added.
\end{itemize}
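The scheduler choice described above can be sketched as a simple rule; the \SI{15}{\percent} threshold comes from the text, while treating it as a parameter is our addition:

```python
def pick_scheduler(n_new, n_initial, clean_training=False, thresh=0.15):
    # Cosine for training from scratch, cyclic when the newly added data
    # exceeds ~15% of the initial dataset, step decay for small additions.
    if clean_training:
        return "cosine"
    if n_new > thresh * n_initial:
        return "cyclic"
    return "step_decay"
```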
This method resulted in a score of \SI{49.1}{\percent} mAP on the whole dataset.
Such a value is acceptable for the current solution since it is a trade-off between accuracy and speed of detection.
The dataset was collected using the off-the-shelf objects that were specified by the organizers, see~\autoref{fig:artifacts}.
The data has been recorded from the onboard cameras on the UAVs and UGVs, in particular:
\begin{itemize}
\item Intel RealSense D435
\item Basler Dart daA1600
\item Bluefox MLC200w
\end{itemize}
The Basler cameras do not have an IR filter installed, in order to maximize the amount of captured light.
Altogether, the dataset contains 37820 labeled images, sometimes with multiple objects in one frame; example images from the dataset are shown in~\autoref{fig:pics_exa}.
We publish the labeled detection datasets that were used for training of the neural network at \href{https://github.com/ctu-mrs/vision_datasets}{\texttt{github.com/ctu-mrs/vision\_datasets}}.
In addition, we also publish the tools to convert it into the PASCAL VOC or COCO formats for immediate usage with most open-source models.
\begin{figure}
\centering
\newcommand{0.95em}{0.95em}
\newcommand{0.8em}{0.8em}
\newcommand{0.3}{0.3}
\newcommand{12.50em}{10em}
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/artifacts/cave.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(a)}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.00\width} {0.0\height} {0.10\width} {0.00\height}},clip]{./fig/artifacts/mine.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(b)}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.00\width} {0.0\height} {0.10\width} {0.00\height}},clip]{./fig/artifacts/urban.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(c)}};
\end{scope}
\end{tikzpicture}%
\caption{Training images containing artifacts captured by the onboard cameras in cave (a), tunnel (b), and urban (c) environments.}
\label{fig:pics_exa}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/diagrams/artifact_localization_weights.pdf}
\caption{%
Example of an artifact detection with an overlay visualization of the sample weighting function $f_{\text{w}}$.
}
\label{fig:det_pos_weights}
\end{subfigure}%
~
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/diagrams/artifact_localization1.pdf}
\caption{%
Model of the camera and the point cloud-based sampling method. Rays $r_1, r_2, r_3, r_4$ are projections of $c_1, c_2, c_3, c_4$, respectively. Only the points within the area defined by these rays are selected. The selected points are colored based on their weight. Non-selected points are not drawn for clarity.
}
\label{fig:det_pos}
\end{subfigure}%
\caption{
Illustration of the point sampling for 3D position estimation of detected artifacts with an example detection of a backpack.
}
\end{figure}
\subsection{Estimation of 3D position of detections}
Positions of the detected objects are estimated using data from the onboard \ac{lidar} sensor and the mapping algorithm.
Each detection is represented by four corner points $\pnt{c}_1, \pnt{c}_2, \pnt{c}_3, \pnt{c}_4$ of its bounding rectangle in the image plane of the corresponding camera, as estimated by the detector (see~\autoref{fig:det_pos_weights}).
These points are expressed as undistorted pixel coordinates in the image frame $\mathcal{I}$.
The mathematical projection model of the camera $f_{\text{proj}} : \mathbb{R}^3 \to \mathbb{R}^2$ is assumed to be known.
In our case, we have used the standard pinhole camera model formulated as
\begin{equation}
k \bemat{ u \\ v \\ 1 } = \bemat{ f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 } \bemat{ x \\ y \\ z },
\end{equation}
where $f_u, f_v, u_0, v_0$ are parameters of the model (focal lengths and image center), $\bemat{ x, y, z }^{\intercal}$ is a 3D point in the camera coordinate frame $\mathcal{C}$, and $u, v$ are distortion-free pixel coordinates in the image frame $\mathcal{I}$, corresponding to the 3D point (see~\autoref{fig:det_pos} for illustration).
To model the distortion of the real-world camera, we have used a standard radial-tangential polynomial distortion model.
It is worth noting that the output of $f_{\text{proj}}^{-1}$ is a 3D ray and not a single point, which is represented in the model by the free scaling factor $k \in \mathbb{R}$.
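To make this concrete, the following minimal sketch implements the distortion-free pinhole projection and its inverse up to the free scaling factor $k$; the intrinsic values are hypothetical and not the calibrated parameters of the onboard cameras:

```python
import numpy as np

def project(p, fu, fv, u0, v0):
    """Project a 3D point in the camera frame C to pixel coordinates in I."""
    x, y, z = p
    # the free scaling factor k equals the depth z and is lost by the projection
    return np.array([fu * x / z + u0, fv * y / z + v0])

def backproject(uv, fu, fv, u0, v0):
    """Invert the projection up to scale: return a unit direction of the 3D ray."""
    u, v = uv
    d = np.array([(u - u0) / fu, (v - v0) / fv, 1.0])
    return d / np.linalg.norm(d)

# hypothetical intrinsics, for illustration only
fu, fv, u0, v0 = 400.0, 400.0, 320.0, 240.0
p = np.array([1.0, 0.5, 2.0])
uv = project(p, fu, fv, u0, v0)        # -> pixel coordinates [520., 340.]
ray = backproject(uv, fu, fv, u0, v0)  # unit direction of the corresponding ray
```

The back-projected ray passes through the original 3D point, illustrating that only the depth (the scale $k$) is lost by the projection.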
The input \ac{lidar} scan is represented as a set of 3D points $\set{S} = \left\{ \pnt{p}_i \right\}$ expressed in the camera coordinate frame~$\mathcal{C}$.
The occupancy map is represented using the DenseMap data structure that is described in~\autoref{sec:mapping}, and which provides a raycasting function $f_{\text{raycast}} : \set{R} \to \mathbb{R}^3$ where $\set{R}$ is the set of all 3D rays.
The function $f_{\text{raycast}}$ returns the point corresponding to the first intersection of the specified ray with an obstacle in the environment (or nothing if there is no such intersection).
The position of each detected object is estimated from a number of points that are sampled using two methods: a primary one that utilizes the latest available point cloud from the \ac{lidar} and a secondary backup method using the latest DenseMap estimated by the mapping algorithm.
The primary method is more accurate and less computationally intensive, while the secondary method ensures that 3D positions of artifacts lying outside of the \ac{fov} of the \ac{lidar} scan can still be estimated.
For each sampled point $\pnt{s}_i \in \set{S}$, its weight $w_i$ is calculated.
The position estimate $\pnt{d}$ and its corresponding uncertainty covariance matrix $\mat{Q}_{\pnt{d}}$ are obtained as a weighted mean and covariance of the sampled points:
\begin{align}
\pnt{d} = \sum_{i = 1}^{|\set{S}|} \pnt{s}_i w_i, && \mat{Q}_{\pnt{d}} = \frac{1}{1 - \sum_{i = 1}^{|\set{S}|} w_i^2} \sum_{i = 1}^{|\set{S}|} w_i \left( \pnt{s}_i - \pnt{d} \right)\left( \pnt{s}_i - \pnt{d} \right)^{\intercal},
\end{align}
where $\set{S}$ is the set of sampled points and the weights $w_i$ are normalized so that $\sum_{i=1}^{|\set{S}|}w_i = 1$.
The weight of a point $\pnt{s}$ is obtained based on the distance of its reprojection to the image coordinates $\pnt{s}' = \bemat{s_u, s_v}^{\intercal} = f_{\text{proj}}\left( \pnt{s} \right)$ from the center of the detection's bounding box $\pnt{c}_0 = \bemat{c_u, c_v}^{\intercal}$ using the function
\begin{equation}
f_{\text{w}}\left(\pnt{s}', \pnt{c}_0\right) = \left( 1 - \frac{2\abs{s_u - c_u}}{w_{\text{bb}}} \right)^2 \left( 1 - \frac{2\abs{s_v - c_v}}{h_{\text{bb}}} \right)^2,
\end{equation}
where $w_{\text{bb}}, h_{\text{bb}}$ are the width and height of the bounding box, respectively.
The weighting function serves to suppress points further from the center of the bounding box.
This is based on our empirical observation that the center provides the most reliable estimate of the detected object's position, while the bounding box's corners typically correspond to the background and not the object, as illustrated in~\autoref{fig:det_pos_weights}.
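A possible implementation of the weighting function $f_{\text{w}}$ and the weighted mean and covariance is sketched below with plain NumPy arrays; the function and variable names are illustrative, not those of the onboard implementation:

```python
import numpy as np

def f_w(s_img, c0, w_bb, h_bb):
    """Weight of a reprojected point based on its distance from the bbox center."""
    du = 1.0 - 2.0 * abs(s_img[0] - c0[0]) / w_bb
    dv = 1.0 - 2.0 * abs(s_img[1] - c0[1]) / h_bb
    return du**2 * dv**2

def weighted_position(points, weights):
    """Weighted mean and unbiased weighted covariance of the sampled 3D points."""
    points = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize: weights sum to one
    d = (w[:, None] * points).sum(axis=0)        # weighted mean
    diff = points - d
    Q = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(axis=0)
    Q /= 1.0 - (w**2).sum()                      # unbiasedness correction factor
    return d, Q
```

The weight is exactly one at the bounding-box center and falls to zero at its edges, matching the quadratic form of $f_{\text{w}}$.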
The whole 3D position estimation algorithm is presented in~\autoref{alg:posest}.
The \texttt{sampleRectangle} routine used in~\autoref{alg:posest} is described in~\autoref{alg:sampleRectangle}.
The estimated positions and the corresponding covariance matrices serve as an input to the \textit{artifact localization filter} described in the next section (refer to~\autoref{fig:detection_schematic}).
To avoid bias and numerical singularities in the filter, some special cases of the covariance calculation have to be handled.
Namely, these are:
\begin{enumerate}
\item \textit{All extracted points lie on a plane.}
This happens, e.g., when all the cast rays of the secondary position estimation method intersect the same voxel of the DenseMap.
The covariance matrix is then singular, which causes numerical problems with integrating the measurement.
\item \textit{All extracted points are too close to each other.}
This typically happens when the detected object is too far or too small.
The covariance matrix's eigenvalues are then too small, biasing the fused position estimate of the artifact.
\end{enumerate}
To avoid these problems, the estimated covariance matrix is rescaled, so that all eigenvalues conform to a specified minimal threshold before being processed by the artifact localization filter.
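One way to implement such a rescaling is to floor the eigenvalues of the covariance matrix via a symmetric eigendecomposition; this sketch assumes eigenvalue flooring is the intended form of rescaling:

```python
import numpy as np

def enforce_min_eigenvalues(Q, min_eig):
    """Return Q with all eigenvalues floored at min_eig (Q symmetric PSD)."""
    vals, vecs = np.linalg.eigh(Q)       # symmetric eigendecomposition
    vals = np.maximum(vals, min_eig)     # floor eigenvalues at the threshold
    return vecs @ np.diag(vals) @ vecs.T
```

This leaves well-conditioned covariances untouched while inflating degenerate directions just enough to keep the filter numerically stable.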
\begin{algorithm}
\algdef{SE}[SUBALG]{Indent}{EndIndent}{}{\algorithmicend\ }%
\algtext*{Indent}
\algtext*{EndIndent}
\algnewcommand\AND{\textbf{and}~}
\algnewcommand\Not{\textbf{not}~}
\algnewcommand\Or{\textbf{or}~}
\algnewcommand\Input{\State{\textbf{Input:~}}}%
\algnewcommand\Output{\State{\textbf{Output:~}}}%
\algnewcommand\Parameters{\State{\textbf{Parameters:~}}}%
\algnewcommand\Begin{\State\textbf{Begin:~}}%
\algnewcommand{\LineComment}[1]{\State \(\triangleright\) #1}
\caption{Algorithm for the estimation of a detection's position and covariance.}\label{alg:posest}
\begin{algorithmic}[1]
\footnotesize
\Input
\Indent
\State $\set{D} = \left\{ \pnt{c}_1, \pnt{c}_2, \pnt{c}_3, \pnt{c}_4 \right\},~\pnt{c}_i \in \mathbb{R}^2 $
\Comment{undistorted coordinates of the detection's bounding box}
\State $f_{\text{proj}} : \mathbb{R}^3 \to \mathbb{R}^2 $
\Comment{the projection model of the camera}
\State $\set{P} = \left\{ \pnt{p}_1, \pnt{p}_2, \dots, \pnt{p}_{|\set{P}|} \right\},~\pnt{p}_i \in \mathbb{R}^3 $
\Comment{the latest point cloud from the \ac{lidar}}
\State $f_{\text{raycast}} : \set{R} \to \mathbb{R}^3 $
\Comment{the raycasting function of the occupancy map}
\State $n_{\text{desired}} \in \mathbb{N}$
\Comment{the desired number of sampled points}
\EndIndent
\Output
\Indent
\State $\pnt{d} \in \mathbb{R}^3$
\Comment{estimated position of the detection}
\State $\mat{Q}_{\pnt{d}} \in \mathbb{R}^{3\times 3}$
\Comment{covariance matrix of the position estimate}
\EndIndent
\Begin
\Indent
\LineComment{First, the desired number of points is sampled using the primary and secondary methods.}
\State $r_1 \coloneqq f_{\text{proj}}^{-1}\left(\pnt{c}_1\right),~r_2 \coloneqq f_{\text{proj}}^{-1}\left(\pnt{c}_2\right),~r_3 \coloneqq f_{\text{proj}}^{-1}\left(\pnt{c}_3\right),~r_4 \coloneqq f_{\text{proj}}^{-1}\left(\pnt{c}_4\right)$
\Comment{project the corners of the bounding box to 3D rays}
\State $\set{S}_1 \coloneqq \left\{ \pnt{p}_i \in \set{P} \mid \pnt{p}_i \text{ within the area defined by edges } r_1, r_2, r_3, r_4 \right\}$
\State $\set{S}_2 \coloneqq \text{sampleRectangle}\left( \left\{ \pnt{c}_1, \pnt{c}_2, \pnt{c}_3, \pnt{c}_4 \right\}, n_{\text{desired}} - |\set{S}_1|, f_{\text{proj}}, f_{\text{raycast}} \right)$
\Comment{sample any remaining points from the occupancy map}
\State $\set{S} \coloneqq \set{S}_1 \cup \set{S}_2$
\LineComment{Then, the weight of each sampled point is calculated using the weighting function $f_{\text{w}}$.}
\State $\vec{c}_0 \coloneqq \text{mean}\left( \pnt{c}_1, \pnt{c}_2, \pnt{c}_3, \pnt{c}_4 \right)$
\Comment{calculate the center of the bounding box}
\For{each $ \pnt{s}_i \in \set{S} $}
\State $\vec{s}_i' \coloneqq f_{\text{proj}}\left( \pnt{s}_i \right) $
\Comment{project the point back to the image frame $\mathcal{I}$}
\State $w_i \coloneqq f_{\text{w}}\left( \vec{s}_i', \vec{c}_0 \right)$
\Comment{calculate its weight}
\EndFor
\LineComment{Finally, the position and its uncertainty are calculated as a weighted mean and covariance and returned.}
\State $\pnt{d} \coloneqq \sum_{i = 1}^{|\set{S}|} \pnt{s}_i w_i $
\State $\mat{Q}_{\pnt{d}} \coloneqq \frac{1}{1 - \sum_{i = 1}^{|\set{S}|} w_i^2} \sum_{i = 1}^{|\set{S}|} w_i \left( \pnt{s}_i - \pnt{d} \right)\left( \pnt{s}_i - \pnt{d} \right)^{\intercal}$
\State\textbf{return~} $\pnt{d},~\mat{Q}_{\pnt{d}}$
\EndIndent
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\algdef{SE}[SUBALG]{Indent}{EndIndent}{}{\algorithmicend\ }%
\algtext*{Indent}
\algtext*{EndIndent}
\algnewcommand\AND{\textbf{and}~}
\algnewcommand\Not{\textbf{not}~}
\algnewcommand\Or{\textbf{or}~}
\algnewcommand\Input{\State{\textbf{Input:~}}}%
\algnewcommand\Output{\State{\textbf{Output:~}}}%
\algnewcommand\Parameters{\State{\textbf{Parameters:~}}}%
\algnewcommand\Begin{\State\textbf{Begin:~}}%
\algnewcommand{\LineComment}[1]{\State \(\triangleright\) #1}
\caption{The \texttt{sampleRectangle} routine for sampling a number of 3D points from the occupancy map.}\label{alg:sampleRectangle}
\begin{algorithmic}[1]
\footnotesize
\LineComment{This routine samples points within a rectangle in the image plane $\mathcal{I}$ by raycasting pixels on inscribed ellipses with an increasing radius.}
\State\textbf{Routine } $\text{sampleRectangle}$\textbf{:}
\Indent
\Input
\Indent
\State $\left\{ \pnt{c}_1, \pnt{c}_2, \pnt{c}_3, \pnt{c}_4 \right\},~\pnt{c}_i \in \mathbb{R}^2 $
\Comment{corners of the rectangle to be sampled in the image frame $\mathcal{I}$}
\State $n_{\text{remaining}} \in \mathbb{N}$
\Comment{the desired number of samples}
\State $f_{\text{proj}} : \mathbb{R}^2 \to \set{R} $
\Comment{the projection model of the camera}
\State $f_{\text{raycast}} : \set{R} \to \mathbb{R}^3 $
\Comment{the raycasting function of the occupancy map}
\EndIndent
\Output
\Indent
\State $\set{S} = \left\{ \pnt{s}_i \right\}$
\Comment{a set of sampled points in the image frame $\mathcal{I}$ such that $|\set{S}| \le n_{\text{remaining}}$}
\EndIndent
\Parameters
\Indent
\State $n_r \in \mathbb{N}$, $n_{\alpha} \in \mathbb{N}$
\Comment{number of radial sampling steps and number of circumferential steps per unit circumference}
\EndIndent
\Begin
\Indent
\State $ w \coloneqq \abs{c_{1,u} - c_{3,u}},~h \coloneqq \abs{c_{1,v} - c_{3,v}} $
\Comment{calculate the width and height of the rectangle}
\State $ \pnt{c}_0 \coloneqq \text{mean}\left( \pnt{c}_1, \pnt{c}_2, \pnt{c}_3, \pnt{c}_4 \right) $
\Comment{calculate the center of the rectangle}
\State $ r_{\text{step}} \coloneqq 1/n_{r} $
\For{$ r \in \left\{ 0, r_{\text{step}}, 2r_{\text{step}}, \dots, 1 \right\} $}
\State $ \alpha_{\text{step}} \coloneqq r / n_{\alpha} $
\State $ {}_\Delta\alpha \coloneqq u,~u \sim \mathcal{U}\left(-\pi, \pi\right) $
\Comment{generate a random angular offset to avoid biasing some directions}
\For{$ \alpha \in \left\{ 0, \alpha_{\text{step}}, 2\alpha_{\text{step}}, \dots, 2\pi \right\} $}
\State $\pnt{s}' \coloneqq \pnt{c}_0 + \bemat{ w r\cos\left( \alpha + {}_\Delta\alpha \right)/2,~h r\sin\left( \alpha + {}_\Delta\alpha \right)/2 }^{\intercal}$
\Comment{calculate a sample point on an ellipse}
\State $r_{\text{ray}} \coloneqq f_{\text{proj}}\left( \pnt{s}' \right)$
\Comment{project the point to a 3D ray}
\State $\set{S} \coloneqq \set{S} \cup f_{\text{raycast}}\left( r_{\text{ray}} \right)$
\Comment{find an intersection of the ray with an obstacle and add it to $\set{S}$}
\If{$|\set{S}| = n_{\text{remaining}}$}
\State\textbf{return~} $\set{S}$
\EndIf
\EndFor
\EndFor
\State\textbf{return~} $\set{S}$
\EndIndent
\EndIndent
\end{algorithmic}
\end{algorithm}
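For illustration, the sampling pattern of the \texttt{sampleRectangle} routine can be sketched in Python as follows. Raycasting is omitted, the samples are centered on the bounding box, the inner angular step is guarded against the degenerate $r = 0$ case, and the parameter values are illustrative:

```python
import math
import random

def ellipse_samples(c1, c3, n_remaining, n_r=4, n_alpha=8):
    """Pixel sample positions on inscribed ellipses of increasing radius.

    c1 and c3 are opposite corners of the bounding rectangle in the image frame;
    raycasting each sample into 3D is omitted and would follow as in the routine.
    """
    w, h = abs(c1[0] - c3[0]), abs(c1[1] - c3[1])
    cu, cv = (c1[0] + c3[0]) / 2.0, (c1[1] + c3[1]) / 2.0
    samples = []
    for i in range(n_r + 1):
        r = i / n_r                                   # radial steps 0, 1/n_r, ..., 1
        steps = max(1, round(n_alpha * r))            # more samples on larger ellipses
        offset = random.uniform(-math.pi, math.pi)    # random offset avoids directional bias
        for j in range(steps):
            a = offset + 2.0 * math.pi * j / steps
            samples.append((cu + w * r * math.cos(a) / 2.0,
                            cv + h * r * math.sin(a) / 2.0))
            if len(samples) >= n_remaining:
                return samples
    return samples
```

The first sample always lies at the bounding-box center (the $r = 0$ ellipse), and later samples spread outward, mirroring the center-weighted preference of $f_{\text{w}}$.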
\subsection{Artifact localization filter}
\label{sec:artifact_localization}
Artifact detections are filtered using an approach based on our previous work, where a multi-target tracking algorithm was employed for detection, localization, and tracking of micro aerial vehicles~\cite{vrba_ral2019}.
The filtering serves to improve the precision of the artifacts' estimated positions and to reject false positives.
Only artifacts that are consistently detected multiple times with sufficient confidence are confirmed, and only the confirmed artifacts are then reported to the operator to save the limited communication bandwidth.
A single step of the algorithm is illustrated in~\autoref{fig:art_filter}.
The filter keeps a set of hypotheses about objects in the environment.
Each hypothesis $\mathcal{H}$ is represented by an estimate of the object's position $\hat{\vec{x}}$, its corresponding covariance matrix $\mat{P}$, and a probability distribution of the object's class $p_{\mathcal{H}} : \mathcal{C} \to \interval{0}{1}$, where $\mathcal{C}$ is the set of considered classes.
For every hypothesis $\mathcal{H}$, up to one detection $\mathcal{D}_\mathcal{H}$ is associated according to the rule
\begin{equation}
\mathcal{D}_\mathcal{H} = \begin{cases}
\argmax_{\mathcal{D}} l\left(\mathcal{D} \mid \mathcal{H} \right), &\text{ if }\max_{\mathcal{D}} l\left(\mathcal{D} \mid \mathcal{H} \right) > l_{\text{thr}}, \\
\emptyset, &\text{ else,}
\end{cases} \label{eq:art_assoc}
\end{equation}
where $l\left(\mathcal{D} \mid \mathcal{H}\right)$ is the likelihood of observing $\mathcal{D}$ given that it corresponds to $\mathcal{H}$, and $l_{\text{thr}}$ is a likelihood threshold.
The associated detections are used to update the corresponding hypotheses. The detections that are not associated initialize new hypotheses.
The position estimate $\hat{\vec{x}}$ of a hypothesis $\mathcal{H}$ and its covariance $\mat{P}$ are updated using the Kalman filter's update equation and an associated detection $\mathcal{D}_\mathcal{H}$ at time step $t$ as
\begin{align}
\mat{K}\tstep{t} &= \mat{P}\tstep{t} \mat{H}^{\intercal} \left( \mat{H} \mat{P}\tstep{t} \mat{H}^{\intercal} + \mat{Q}\tstep[\vec{d}]{t} \right)^{-1}, \label{eq:art_kf1}\\
\hat{\vec{x}}\tstep{t+1} &= \hat{\vec{x}}\tstep{t} + \mat{K}\tstep{t}\left( \vec{d}\tstep{t} - \mat{H}\hat{\vec{x}}\tstep{t} \right), \\
\mat{P}\tstep{t+1} &= \left( \mat{I} - \mat{K}\tstep{t} \mat{H} \right)\mat{P}\tstep{t},\label{eq:art_kf3}
\end{align}
where $\mat{K}\tstep{t}$ is the Kalman gain, $\mat{I}$ is the identity matrix, $\mat{H}$ is the observation matrix (in our case, equal to $\mat{I}$), and $\vec{d}\tstep{t}$ and $\mat{Q}\tstep[\vec{d}]{t}$ are the estimated position of $\mathcal{D}\tstep[\mathcal{H}]{t}$ and its corresponding covariance matrix, respectively.
The class probability distribution $p_\mathcal{H}$ is updated as
\begin{equation}
p\tstep[\mathcal{H}]{t+1}\left(c\right) = \frac{ n\tstep[\text{dets}]{t} p\tstep[\mathcal{H}]{t}\!\left(c\right) + p\tstep[\mathcal{D}_\mathcal{H}]{t}\!\left(c\right) }{ n\tstep[\text{dets}]{t} + 1},
\end{equation}
where $c \in \mathcal{C}$ is an object's class and $n\tstep[\text{dets}]{t}$ is the number of detections associated with $\mathcal{H}$ thus far.
Because the artifacts are assumed to be immobile, the Kalman filter's prediction step is not performed, which has the effect that the uncertainty of a hypothesis (represented by $\mat{P}$) can decrease without bound.
This can cause the likelihood $l\left(\mathcal{D} \mid \mathcal{H} \right)$ of new measurements corresponding to the same object to be below the association threshold, breaking the association algorithm.
To avoid this, the covariance matrix $\mat{P}$ is rescaled after each update so that its eigenvalues are larger than a specified minimal value, which enforces a lower bound on the position uncertainty of the hypotheses.
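A minimal sketch of the hypothesis update equations \eqref{eq:art_kf1} to \eqref{eq:art_kf3} and of the class-distribution update, assuming $\mat{H} = \mat{I}$ as in our case (names are illustrative):

```python
import numpy as np

def update_hypothesis(x_hat, P, d, Q_d):
    """Kalman update of a static hypothesis (H = I, no prediction step)."""
    K = P @ np.linalg.inv(P + Q_d)              # Kalman gain with H = I
    x_new = x_hat + K @ (d - x_hat)             # corrected position estimate
    P_new = (np.eye(len(x_hat)) - K) @ P        # corrected covariance
    return x_new, P_new

def update_class_distribution(p_H, p_D, n_dets):
    """Running average of class probabilities over the associated detections."""
    return {c: (n_dets * p_H[c] + p_D.get(c, 0.0)) / (n_dets + 1) for c in p_H}
```

With equal position and measurement covariances, a single update moves the estimate halfway toward the detection and halves the covariance, which is the expected Kalman behavior.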
\begin{figure}
\centering
\begin{subfigure}[t]{0.44\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/diagrams/artifact_filter.pdf}
\caption{%
Situation before the update step.
The detections $\mathcal{D}_1$ and $\mathcal{D}_2$ are associated to the hypotheses $\mathcal{H}_1$ and $\mathcal{H}_3$, respectively.
The detection $\mathcal{D}_3$ is not associated to any hypothesis. The hypothesis $\mathcal{H}_2$ has no detection associated.
}
\end{subfigure}%
~~
\begin{subfigure}[t]{0.44\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/diagrams/artifact_filter2.pdf}
\caption{%
Situation after the update step.
The detections $\mathcal{D}_1$ and $\mathcal{D}_2$ updated the hypotheses $\mathcal{H}_1$ and $\mathcal{H}_3$, respectively.
The detection $\mathcal{D}_3$ initialized a new hypothesis $\mathcal{H}_4$ and the hypothesis $\mathcal{H}_2$ remained unchanged.
}
\end{subfigure}%
\caption{%
Illustration of one step of the artifact localization filter (a top-down view).
Hypotheses $\mathcal{H}_i$ are shown as covariance ellipsoids with the mean $\hat{\vec{x}}_i$ marked by an `×' symbol.
Detections $\mathcal{D}_i$ are represented in the same way using dashed lines.
Associations between hypotheses and detections are highlighted using color.
}
\label{fig:art_filter}
\end{figure}
\subsubsection{Association likelihood}
To calculate the likelihood $l\left( \mathcal{D}\tstep{t} \mid \mathcal{H}\tstep{t} \right)$ of observing a detection $\mathcal{D} \equiv \left\{ \vec{d},~ \mat{Q}_{\vec{d}} \right\}$ given that it corresponds to a hypothesis $\mathcal{H} = \left\{ \hat{\vec{x}},~ \mat{P} \right\}$ at time step $t$, we use a measurement model
\begin{align}
\vec{d}\tstep{t} = \mat{H} \vec{x} + \vec{\xi}\tstep{t}, && \vec{\xi}\tstep{t} \sim \mathcal{N}\left(\vec{0},~\mat{Q}\tstep[\vec{d}]{t}\right), \label{eq:art_meas_model}
\end{align}
where $\mat{H}$ is the observation matrix, $\vec{x}$ is a hidden state (the real position of the artifact), $\vec{\xi}\tstep{t}$ is measurement noise, and $\mathcal{N}\left(\vec{0},~\mat{Q}\tstep[\vec{d}]{t}\right)$ denotes the Gaussian probability distribution with zero mean and covariance matrix $\mat{Q}\tstep[\vec{d}]{t}$.
Using this model, the probability density function of the expected measurement given $\vec{x}$ is
\begin{align}
p\left( \vec{d}\tstep{t} \mid \vec{x} \right) = f\left( \vec{d}\tstep{t} \mid \mat{H}\vec{x}, \mat{Q}\tstep[\vec{d}]{t} \right),
\end{align}
where $f\left(~\cdot \mid \vec{\mu}, \mat{\Sigma} \right)$ denotes the density function of the Gaussian distribution with mean $\vec{\mu}$ and covariance matrix $\mat{\Sigma}$.
The Kalman filter described by equations \eqref{eq:art_kf1} to \eqref{eq:art_kf3} can be interpreted as an estimator of the probability density of the hidden state given previous measurements.
This probability density is Gaussian:
\begin{equation}
p\left( \vec{x} \mid \vec{d}\tstep{1}, \dots, \vec{d}\tstep{t} \right) = f\left( \vec{x} \mid \hat{\vec{x}}\tstep{t}, \mat{P}\tstep{t} \right). \label{eq:art_state_model}
\end{equation}
The likelihood $l\left(\vec{d}\tstep{t}\right)$ of observing a new measurement $\vec{d}\tstep{t}$ given previous measurements $\vec{d}\tstep{1}, \dots, \vec{d}\tstep{t-1}$ is the value of a probability density function $p\left( \vec{d} \mid \vec{d}\tstep{1}, \dots, \vec{d}\tstep{t-1} \right)$ at $\vec{d}\tstep{t}$.
By combining equations \eqref{eq:art_meas_model} and \eqref{eq:art_state_model}, the likelihood may be expressed as
\begin{equation}
\begin{split}
l\left(\vec{d}\tstep{t}\right) = p\left( \vec{d}\tstep{t} \mid \vec{d}\tstep{1}, \dots, \vec{d}\tstep{t-1} \right) &= \int p\left( \vec{d}\tstep{t} \mid \vec{x} \right) p\left( \vec{x} \mid \vec{d}\tstep{1}, \dots, \vec{d}\tstep{t-1} \right) d\vec{x}, \\
&= \int f\left( \vec{d}\tstep{t} \mid \mat{H}\vec{x}, \mat{Q}\tstep[\vec{d}]{t} \right) f\left( \vec{x} \mid \hat{\vec{x}}\tstep{t-1}, \mat{P}\tstep{t-1} \right) d\vec{x}, \\
&= f\left( \vec{d}\tstep{t} \mid \mat{H}\hat{\vec{x}}\tstep{t-1}, \mat{Q}\tstep[\vec{d}]{t} + \mat{H}\mat{P}\tstep{t-1}\mat{H}^{\intercal} \right), \label{eq:art_likelihood}
\end{split}
\end{equation}
which is the value of the probability density function of a Gaussian distribution with mean $\mat{H}\hat{\vec{x}}\tstep{t-1}$ and covariance $\mat{Q}\tstep[\vec{d}]{t} + \mat{H}\mat{P}\tstep{t-1}\mat{H}^{\intercal}$ at $\vec{d}\tstep{t}$.
This expression is used to determine the detection-to-hypothesis association at each step according to equation~\eqref{eq:art_assoc}, as described in the previous section.
\subsection{Arbiter for artifact reporting}
\label{sec:arbiter_for_artifact_reporting}
In contrast to the system part of the competition, the Virtual Track requires substituting the human operator with an autonomous arbiter for artifact reporting.
The main functionality of the autonomous base station resides in collecting the hypotheses from the robots and reporting the location of artifacts.
The number of reports in each run is limited and usually lower than the number of hypotheses collected from all robots. Therefore, a subset of hypotheses needs to be chosen so that the expected score is maximized.
The implemented reporting strategy is based on filtering the collected hypotheses by considering their location and artifact type, followed by evaluating the performance index of particular hypotheses.
The entire workflow is illustrated in~\autoref{fig:automatic_reporting_scheme}.
\begin{figure}
\centering
\adjincludegraphics[width=1.0\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/diagrams/artifact_reporting_scheme.pdf}
\caption{\label{fig:automatic_reporting_scheme}
Illustration of the automatic reporting process.
}
\end{figure}
The autonomous base station collects the hypotheses from individual robots throughout the entire run.
The predefined reporting scheme specifies the maximum allowed number of reports at particular time instants of the mission.
Most of the reports are saved to the last minutes of the mission when the base station holds most of the information collected from the robots.
However, some reports are allowed sooner during the mission to tackle the problem of unreliable communication and prevent a failure to report all hypotheses before the time limit exceeds.
When the reporting scheme allows for submitting a report, the collected hypotheses are processed to obtain the best available hypothesis $h^*$ in a set of all collected hypotheses $\mathbf{H}$.
First, the hypotheses are filtered using information about previous reports, their validity, location, and per robot limits on the number of reports and minimum success rate.
The final set of filtered hypotheses is obtained as
\begin{equation}
\mathbf{H}_f = \mathbf{H} \setminus \{\mathbf{H}_{\mathrm{area}} \cup \mathbf{H}_{\mathrm{succ}} \cup \mathbf{H}_{\mathrm{unsucc}} \cup \mathbf{H}_{r}\},
\end{equation}
where $\mathbf{H}_{\mathrm{area}}$ stands for the hypotheses located outside of the competition course, $\mathbf{H}_{\mathrm{succ}}$ stands for hypotheses in the vicinity of the successful reports of the same artifact class, $\mathbf{H}_{\mathrm{unsucc}}$ contain hypotheses in the vicinity of the unsuccessful reports of the same artifact class, and $\mathbf{H}_{r}$ represents the hypotheses of robots that have exceeded their own limit on reports and concurrently have a low success rate of their submitted hypotheses.
The performance index for a hypothesis $h_i$ is computed as
\begin{equation}
P(h_i) = \alpha p_r + \beta p_c + \gamma p_n + \delta p_a,
\end{equation}
where the values $p_r$, $p_c$, $p_n$, $p_a$ represent the percentile of particular performance indices of hypothesis $h_i$ among all hypotheses in $\mathbf{H}_f$, and $\alpha$, $\beta$, $\gamma$, $\delta$ are the weight coefficients.
The particular performance indices are related to the number of robots with a similar hypothesis ($p_r$), the overall confidence of the detections assigned to the hypothesis ($p_c$), the number of detections assigned to the hypothesis ($p_n$), and the apriori probability of detection of a particular object ($p_a$).
The next hypothesis to be reported $h^*$ is chosen based on the following equation:
\begin{equation}
h^* = \arg \max_{h_i \in \mathbf{H}_f} P(h_i).
\end{equation}
The distribution of successful reports over particular reporting attempts during all runs of the SubT Virtual Track Prize Round is shown in~\autoref{fig:successful_reports_distribution}.
\begin{figure}
\centering
\adjincludegraphics[width=0.6\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/reporting/successful_reports_distribution.pdf}
\caption{\label{fig:successful_reports_distribution}
The distribution of successful reports over particular reporting attempts during all runs of the SubT Virtual Track Prize Round. The lower success rate of the first attempt in comparison to later attempts is caused by the early time of the first report, which was allowed \SI{100}{\second} after the start of the mission. By this time, only a single \ac{uav} had already entered the course, and thus the number of available hypotheses to choose from was low.
}
\end{figure}
\section{Mission control}
\label{sec:mission_control}
The proposed system is designed for fully autonomous operation, so that the rescue team can benefit from the autonomous reconnaissance of the \ac{uav} without the need for any additional personnel operating the \ac{uav}.
The DARPA SubT competition reflects this requirement on autonomy by allowing only robots without human operators to enter the course.
In theory, the robots could be teleoperated~\cite{moniruzzaman2022teleoperation}.
However, this is not scalable with the number of robots.
Moreover, for teleoperation, a reliable communication link between the robot and the operator is required, but is often not available, especially deeper in the subterranean environment where impenetrable walls diminish signal propagation.
Thus, the correct execution of an autonomous mission relies on a state machine that governs the high-level actions of the \ac{uav}.
\subsection{State machine}
The state machine applied in the SubT System Finals consists of 12 fundamental states.
In the first state, the status of components that are vital to the mission is checked to ensure that the mission will be accomplished.
Both the software components (\textit{localization, mapping, planning, artifact detection, artifact localization, database}) and hardware components (\textit{LiDAR, RGB cameras, depth cameras, mobilicom unit}) are checked prior to the mission.
This component health check is crucial as, while still in the staging area, any potential component failures can be addressed, but it is not possible when the UAV is already flying.
When all components are running correctly, the UAV enables the output of the reference controller, transits to \textit{WAITING FOR TAKEOFF} state, and waits for approval from the safety operator to start the mission.
The approval required to guarantee the safety of the personnel moving in the vicinity of the UAV is given by arming the \ac{uav} and transferring the control of the UAV fully to the onboard computer by toggling the RC switch.
After the approval to start, the \ac{uav} waits for a specified safety timeout in the \textit{READY FOR TAKEOFF} state while signaling the imminent takeoff by flashing LEDs.
In this state, the approval can be taken back by the safety operator.
After the timeout elapsed, the \textit{PERFORMING TAKEOFF} state is entered, during which the UAV ascends until reaching the desired takeoff height.
In the next state (\textit{FLYING THROUGH GATE}), the UAV is navigated to a position inside the area to be explored.
Once this position is reached, the space behind the UAV is virtually closed to prevent flight back towards the rescue personnel.
If the rescuers have some prior knowledge about the environment, e.g., they see a door to which they want to send the UAV, they can optionally specify this first position to steer the UAV in that direction.
After reaching this first position or if the flight to the first position is not requested, the UAV enters the \textit{EXPLORATION} state.
In this state, the UAV fulfills the primary mission goals until the upper bound of the estimated time to return is equal to the remaining flight time.
Then the \ac{uav} initiates returning to the takeoff position in the state \textit{FLYING BACK TO START}.
The return position is the takeoff position by default, but the operator can request any other position (e.g., to serve as a communication retranslation node) to which the UAV tries to return.
After the position is reached, the UAV flies to the nearest safe landing spot as described in~\autoref{sec:landing_spot_detection}, and the \textit{LANDING} state is entered.
The landing is also triggered when the flight time is elapsed during the \textit{FLYING BACK TO START} or \textit{FLYING TO LANDING SPOT} states.
When the UAV lands, it enters the \textit{FINISHED} state, in which it turns off the motors, \acp{led}, \ac{lidar}, and other components except the communication modules to conserve battery power for retranslating communications.
The required communication between the \ac{uav} and its operator during the start of the mission is limited to signals provided by the RC controller and visual signals provided by flashing LEDs.
This enables very fast deployment of the \ac{uav} that automatically starts all necessary software components once the onboard computer is powered on and provides the information about being prepared to start by a single long flash of LEDs.
After that, the operator can approve the mission by the remote controller without the need for any additional communication or commanding of the UAV.
Following this automated procedure, the \acp{uav} are prepared to start one minute after the battery is plugged in.
The state machine applied in the Virtual Track of the SubT Challenge differs only in a few states given by the specifics of the simulation environment.
First, it does not contain the operator commands states that are not available in a virtual environment.
Second, it contains two additional states, \textit{BACKTRACKING} and \textit{AVOIDING COLLISIONS}.
The \textit{BACKTRACKING} state is entered when the \ac{uav} is stuck in a fog and tries to escape from it by backtracking to the most recent collision-free poses, ignoring the occupied cells in the current map (see~\autoref{sec:fog_detection} for details).
In the \textit{AVOIDING COLLISIONS} state, the \ac{uav} is avoiding collision with the \acp{uav} of higher priority by stopping the lateral motion and decreasing its altitude.
We have decided against using collision avoidance in the Systems Track due to the low probability of collision, and high probability of deadlocks in narrow corridors.
\begin{figure}
\centering
\adjincludegraphics[width=1.0\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/diagrams/mission_control_scheme.pdf}
\caption{\label{fig:mission_sm}
Simplified version of the state machine governing the autonomous mission in SubT System Finals.
}
\end{figure}
\subsection{Operator commands}
\label{sec:operator_commands}
While the \ac{uav} is capable of carrying out the mission on its own in the fully autonomous mode, the operator can intervene by issuing an operator command to influence the behavior of the \ac{uav}.
All operator commands can be activated only in the \textit{EXPLORATION} state and in the operator command states, in which the \ac{uav} performs its primary goal.
Allowing operator commands in other states would interfere with the takeoff, returning, and landing processes.
The commands are transmitted from the operator's base station to the \ac{uav} through the available communication modalities described in~\autoref{sec:communication}.
The following commands are available for the operator:
\begin{itemize}
\item \textbf{Explore to position} ---
The operator can bias the automatic goal selection process by issuing the \textit{Explore to position} command.
After the command is received by the \ac{uav}, the currently used reward function for evaluating viewpoints is extended by a term that penalizes the Euclidean distance of the viewpoint from the desired position $\mathbf{p}_D$.
The term added to the reward function for a viewpoint $\xi$ is simply
\begin{equation}
\label{eq:vp_criterion}
\Delta R(\xi_\textrm{UAV}, \xi, \mathbf{p}_D) = -c_{oc} \left| \mathbf{p}_{\xi} - \mathbf{p}_D \right|.
\end{equation}
Such a modification of the reward function causes viewpoints closer to the desired position to be preferred over farther ones.
The assertiveness of reaching the desired position is controlled by the coefficient $c_{oc}$. If set too high, it may force the selection of viewpoints with minimal distance from obstacles and low information value.
\item \textbf{Plan to position} ---
The \textit{Plan to position} command bypasses the viewpoint selection process and requests the planner to find a path directly to the specified position.
When the requested position is not reachable, i.e., it is in an occupied or unknown space, the planner will find the path to the closest point using the Euclidean distance heuristic function.
Thus, this command should be used primarily for reaching an already visited position, e.g., to land there and relay communication from robots that are already deeper in the environment, or to approach a stuck robot to retrieve its data.
\item \textbf{Set return position} ---
Under normal operation, the \ac{uav} returns to the staging area when its battery is depleted.
The operator can change the return position by issuing the \textit{Set return position} command.
This can save valuable flight time of the \ac{uav} when a communication chain is already established.
\item \textbf{Stop} ---
The operator can also halt the movement of the \ac{uav} by issuing the \textit{Stop} command.
This command is useful when the operator wants to inspect an interesting area in more detail, prevent the \ac{uav} from entering a non-informative or dangerous area, or temporarily relay communications.
Moreover, this command is a prerequisite for calling the \textit{Land} command.
\item \textbf{Land} ---
It is possible to land the \ac{uav} prematurely before the end of the mission by issuing the \textit{Land} command.
The expected use case involves landing the \ac{uav} at a position advantageous for extending the communication network.
Before calling the \textit{Land} command, the \textit{Stop} command must be called to prevent an accidental landing at an incorrect location, due to the arbitrary delay of the command sent through an unreliable network.
The system does not guarantee landing at the exact specified position, as a safe landing spot is found in the vicinity of the requested position.
\item \textbf{Return home} ---
The \textit{Return home} command switches the \ac{uav} to the \textit{returning} state, as defined in~\autoref{sec:single_uav_goal_eval}.
In this state, the \ac{uav} uses the navigation module to get as close as possible to the specified return position.
\item \textbf{Resume autonomy} ---
The last operator command cancels the behavior that was forced by previous operator commands (except \textit{Land} and \textit{Set return position}).
This causes the \ac{uav} to resume autonomous exploration, start its return, or land (depending on the flight time left).
\end{itemize}
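The distance-penalty term of the \textit{Explore to position} command can be sketched as follows (a minimal illustration of the equation above; the function name is ours, not from the actual codebase):

```python
import math

def explore_to_position_penalty(p_xi, p_d, c_oc):
    """Reward term added for a viewpoint at position p_xi when the operator
    requests exploration toward p_d: Delta R = -c_oc * ||p_xi - p_d||."""
    return -c_oc * math.dist(p_xi, p_d)
```

A larger `c_oc` biases the selection more strongly toward the desired position, at the cost of the viewpoints' information value.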
\subsection{Communication}
\label{sec:communication}
The developed system assumes an unreliable bidirectional low-bandwidth communication network with intermittent dropouts.
It should be mentioned that two meshing-capable wireless technologies are used at the hardware level --- \SI{2.3}{\giga\hertz} Mobilicom and \SI{900}{\mega\hertz} motes, with details of both available in~\cite{roucek2020urban}.
This paper focuses on high-level usage of the communication network, which is used as a black box, and as such the low-level layers of the communication protocol are not discussed.
The developed system benefits from available connections to other agents and the base station in multiple ways.
First, when a robot detects an artifact, the detection with its estimated position is shared over the network instead of the robot physically returning to the base station, saving time valuable for the success of the mission.
Second, the individual agents can share the information about the already explored volume in the form of a topological-volumetric map (\acs{ltvmap}) introduced in \autoref{sec:lsegmap}.
The knowledge of the other agents' topological-volumetric maps penalizes regions already explored by other robots, which encourages splitting of the robot team and covering a larger volume over the same time period, as shown in~\autoref{fig:coop_virtual}.
Third, each robot shares its position with the base station, so that the operator has an overview of where all robots are located.
The operator can then influence the future behavior of any robot in the communication range by sending an operator command (\autoref{sec:operator_commands}).
Last, the positions of the communication nodes (breadcrumbs or landed \acp{uav}) forming the communication network shown in \autoref{fig:comm_chain} are shared, so that a \ac{uav} can return into communication range when its remaining flight time is low.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\columnwidth]{fig/virtual_coop/virtual_coop.pdf}
\caption{Example of the dispersed exploration of a tunnel system during the first run in world 1 of the virtual track.
Only the LTVMap from \ac{uav}1 is shown for clarity; the other \acp{uav} received this map along with the maps from all other \acp{uav}.
Instead of re-exploring the same places as \ac{uav}1, both \ac{uav}2 and \ac{uav}4 explore previously unvisited corridors.
Dark parts of the LTVMap in this figure are not yet fully explored, so \ac{uav}3 flies to inspect these areas so as not to miss any potentially hidden artifacts.
}
\label{fig:coop_virtual}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\columnwidth]{fig/virtual_coop/virtual_comm.pdf}
\caption{A communication network consisting of a base station and 8 breadcrumbs (black) deployed by the \acp{ugv} and 2 \acp{uav} from the \nth{3} run in world 1 of the virtual track.
\ac{uav}3 with its trajectory shown in blue could explore further thanks to the deployed communication nodes.
Without the communication network, the \ac{uav} would have to return to the staging area, thus traveling additional \SI{500}{\meter} from its final landing position.
}
\label{fig:comm_chain}
\end{figure}
\subsection{Calibrating global reference frame}
The entire navigation system of heterogeneous robots within the CTU-CRAS-NORLAB team is decentralized under the assumption of a shared coordinate frame --- the world coordinate frame $O_W$.
To obtain the transformation of a robot's local origin within the world frame, the staging area of the competition environment provides a set of visual tags and a set of reflective markers, both with precisely known poses within the world (see the markers mounted on the entrance to the environment in~\autoref{fig:staging_area_calibration}).
The reflective markers are used within our 6-\ac{dof} calibration procedure in which a Leica TS16 total station is employed to measure 3D points with sub-millimeter accuracy.
The pose $\mathbf{T}_{TS}^{W}$ of the total station in the world is derived by measuring markers with known in-world poses, and is then used in deriving the pose $\mathbf{T}_{B}^{W}$ of a robot $B$.
To calibrate the pose of a single robot $B$ once $\mathbf{T}_{TS}^{W}$ is known, four known points on the robot's frame are measured and used to estimate $\mathbf{T}_{B}^{W}$, which is then sent to the information database (see \autoref{sec:communication}) or directly to the robot.
As the number of robots in the CTU-CRAS-NORLAB team deployments reached up to 9 robots per run (see~\autoref{fig:staging_area_calibration}), the overhead of robot-to-world calibration slowed down the rate of robot deployments and limited the possibilities for quick in-situ decision making.
To speed up the calibration pipeline for \acp{uav} with limited flight distance (and hence with greater room for calibration errors), only a single \ac{uav} $A$ is calibrated with the total station, while the initial poses of the remaining \acp{uav} $B$ are estimated from onboard \ac{lidar} data.
The known transformation $\mathbf{T}_A^W$ and pre-takeoff \ac{lidar} data $\mathbf{D}_A$ of robot $A$ are shared among the robots and used to estimate $\mathbf{T}_B^W$.
The transformation $\mathbf{T}_B^A$ is estimated by registering source \ac{lidar} data $\mathbf{D}_B$ onto target data $\mathbf{D}_A$ using \ac{icp} with extremely tight constraints in matching the rotation component of the transformation.
The tight rotation constraints are important as frame-orientation misalignments are the largest source of absolute error during deep deployments.
The pose of robot $B$ in the world is then given by $\mathbf{T}_B^W = \mathbf{T}_B^{A}\mathbf{T}_A^{W}$.
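The chaining of the calibration transformations can be sketched with plain 4x4 homogeneous matrices (a pure-Python illustration; the multiplication order below follows the column-vector convention $p_W = \mathbf{T}_A^W\,\mathbf{T}_B^A\,p_B$, and the onboard implementation may order the factors differently depending on its convention):

```python
def mat4_mul(A, B):
    """Compose two 4x4 homogeneous transformation matrices (row-major lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Homogeneous transform that purely translates by (x, y, z)."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

# world pose of A (from the total station) chained with the ICP-estimated
# pose of B relative to A yields the world pose of B
T_A_W = translation(1, 0, 0)
T_B_A = translation(0, 2, 0)
T_B_W = mat4_mul(T_A_W, T_B_A)
```

The rotation blocks are identities here only for brevity; in the real pipeline they come from the tightly constrained \ac{icp} registration.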
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{fig/calibrating_global_frame/staging_area_calibration.pdf}
\caption{Example robot distribution (7 \ac{ugv} robots in blue, 2 \ac{uav} robots in green) of team CTU-CRAS-NORLAB within the staging area of System Finals environment of \ac{darpa} \ac{subt} Challenge, 2021.
The right figure highlights the reference frames of interest --- the world origin $O_W$ together with the origin of the Leica total station $O_{TS}$ used for calibrating the local robot origins $O_A$ and $O_B$ within the world.}
\label{fig:staging_area_calibration}
\end{figure}
\section{Hardware platform}
\label{sec:hardware}
The components of our \ac{sar} \ac{uav} were carefully selected to optimize the flight time and perception capabilities based on years of experience with building aerial robots for research~\cite{ahmad2021autonomous}, competitions~\cite{walter2022fr}, inspection~\cite{silano2021powerline}, documentation~\cite{kratky2021documentation} and aerial filming~\cite{kratky2021aerialfilming}.
All platforms we have designed for diverse tasks and purposes including \ac{darpa} \ac{subt} are presented in~\cite{hert2022hardware}.
Our platform is built upon the Holybro X500 quadrotor frame.
The \SI{500}{\milli\meter} frame is made entirely of carbon fiber, making it both stiff and light.
Moreover, the arm length can be changed to accommodate different propellers.
Our team designed and manufactured a custom \ac{pcb} that replaced the top board of the X500 frame.
This PCB integrates power supplies for onboard sensors and \ac{led} lights, facilitates communication with our \ac{fcu}, and integrates the XBee-based e-stop receiver.
We selected MN3510 KV700 motors from T-motor and paired them with \SI{13}{\inch} carbon fiber propellers for large payload capacity and propulsion efficiency.
The 3D \ac{lidar} was upgraded to the OS0-128 model, which features 128 scanning lines and a wide \SI{90}{\degree} vertical field of view, allowing the \ac{uav} to perceive its surroundings even in the challenging underground environments.
Despite the wide coverage of the \ac{lidar} sensor, there are still blind spots above and below the \ac{uav} when mounted horizontally.
To cover these spots, we use two Intel Realsense D435 \ac{rgbd} cameras, facing up and down.
This enables the \ac{uav} to fly directly upwards, even in cluttered vertical shafts, without risking collision.
Both of the \ac{rgbd} cameras are also used for mapping and artifact detection.
Additionally, the bottom facing \ac{rgbd} camera is used for landing site detection.
The platform is equipped with two (left and right) dedicated artifact detection cameras, the Basler Dart daA1600, and sufficient lighting provided by \ac{led} strips.
All algorithms run on the onboard Intel NUC i7-10710U \ac{cpu} with 6 physical cores and the detection \ac{cnn} utilizes the integrated Intel UHD \ac{gpu}.
The high-power Mobilicom MCU-30 wireless communication module provides long-range connection between robots and the base station.
In some topologically complex areas, even the high-power Mobilicom cannot ensure a reliable connection between the units, so it is supported by smaller \SI{900}{\mega\hertz} communication motes, which are also dropped as breadcrumbs by the \acp{ugv} to extend the signal range.
Finally, the large payload capacity of the \ac{uav} allowed us to extend the flight time by using a larger battery.
We used two 4S \SI{6750}{\milli\ampere\hour} Li-Po batteries in parallel instead of a single larger battery, due to the \SI{100}{\watt\hour} per-battery limit for air transportation.
This gave the \ac{uav} a flight time of \SI{25}{\minute}.
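The choice of two packs can be checked against the transport limit using the nominal Li-Po cell voltage of \SI{3.7}{\volt} (a back-of-the-envelope sketch, not a calculation from the paper):

```python
def battery_energy_wh(cells: int, capacity_mah: float, cell_v: float = 3.7) -> float:
    """Nominal pack energy in watt-hours: cells * cell voltage * capacity."""
    return cells * cell_v * capacity_mah / 1000.0

# each 4S 6750 mAh pack: 4 * 3.7 V * 6.75 Ah = 99.9 Wh, just under 100 Wh,
# so two such packs in parallel comply while a single double-size pack would not
pack_wh = battery_energy_wh(4, 6750)
```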
The X500 platform~(\autoref{fig:uav_x500_labeled}) is capable of flying in dense indoor environments, even in tight vertical shafts, while being able to localize itself with the required accuracy.
It has four different cameras for artifact detection, is able to communicate and form mesh networks with other robots, and possesses a long flight time.
Furthermore, this platform was also replicated in the virtual competition with the same parameters as the physical counterpart.
All but two of the teams used the X500 platform in the Virtual Track due to its long flight time, sufficient sensor suite, and agile dynamics.
\begin{figure}[!htb]
\centering
\adjincludegraphics[height=14em,trim={{0.0\width} {0.01\height} {0.0\width} {0.01\height}},clip]{fig/x500/x500_labeled.pdf}
\adjincludegraphics[height=14em,trim={{0.0\width} {0.01\height} {0.0\width} {0.01\height}},clip]{fig/x500/x500_virtual_labeled.pdf}
\caption{X500 platform used in the Systems Track (left) and Virtual Track model counterpart (right).
}
\label{fig:uav_x500_labeled}
\end{figure}
\section{Technical details}
\label{sec:technical_details}
With a few exceptions, the components of the \ac{uav} software stack deployed in the Virtual and Systems tracks are identical, yet the available processing power is not.
The Virtual Track runs with a low real-time simulation factor which, together with the computational capacity allotted to each simulated robot, provides almost unlimited computational resources for running all algorithms at any desired resolution or maximal settings.
On the other hand, the simulation-to-world transition requires the algorithms to run on the onboard processing units, which imposes hard requirements on the algorithms' optimization, as well as on minimizing the amount of data transfers and their latency.
These requirements force us to
\begin{itemize}
\item compromise between accuracy and real-time performance in the system design (e.g., omitting global optimization in the onboard \ac{slam}),
\item ensure real-time properties for systems handling critical factors of the mission (e.g., \ac{uav} control),
\item optimize the data flow and the priorities of processing order within the software stack, and
\item prevent any possible deadlocks arising from outages of both synchronous and asynchronous data.
\end{itemize}
Ensuring real-time properties for all systems of a robotic deployment is infeasible, particularly in complex robotic-research projects where the stack design must allow the system to function as a whole under limited real-world conditions.
We summarize the specific aspects of the proposed ROS-based software stack that allow transferring all components to onboard processing capacities, thus providing full decentralization within a \ac{uav} team.
Software based on ROS 1 allows for connecting components under a \textit{nodelet manager} in order to group \textit{nodelet} plugins.
In contrast to the \textit{node} configuration, the \textit{nodelets} under a \textit{manager} share memory and do not require copying data, which is particularly useful when passing large maps within the navigation stack.
Our deployment stack consists of several \textit{managers}, each of which handles a distinctive part of the system.
These include \ac{uav} control, pre-processing of \ac{lidar} data and \ac{slam}, pre-processing of \ac{rgbd} data and dense mapping, navigation and path planning, and perception.
The data flowing between these \textit{managers} are copied, and thus the rate of sharing is kept to a minimum.
To decrease the temporal and memory demands of algorithms, the resolution of input data and the output maps is decreased as much as possible within the scope and requirements of the desired application.
The rate of saving data for after-mission analyses is also limited as much as possible, with data that can be reconstructed offline not being recorded at all.
In contrast to the system designs for \ac{ugv} platforms, the delays in state estimation and control inputs must be kept to a minimum.
This is because excessive delays lead to destabilization of a multi-rotor aerial platform (see analysis on delay feasibility in~\autoref{fig:delay_analysis}) as it is a dynamically unstable system requiring frequent feedback, even for simple hovering.
The \textit{nodelet managers} handling such critical parts of the system are prioritized at the CPU level, utilizing the negative \texttt{nice} values that prioritize the related processes during CPU scheduling.
To decrease asynchronous demands on the CPU, non-prioritized components are penalized with positive \texttt{nice}.
Furthermore, their scheduling is restricted to a predetermined set of threads of the multi-threaded CPU.
The primary subject of scheduling restriction is the perception pipeline containing a computationally heavy \ac{cnn}, where static allocation reduces its asynchronous influence on the rest of the system at the cost of a limited processing rate.
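On Linux, this kind of deprioritization and thread pinning can be sketched with the standard library (an illustrative helper of our own; the actual stack configures \texttt{nice} values and affinity at the \textit{nodelet manager} level):

```python
import os

def deprioritize(nice_increment: int = 10, allowed_cpus=None) -> int:
    """Penalize the current process in CPU scheduling and optionally restrict
    it to a subset of logical CPUs (Linux-specific)."""
    # a positive increment lowers the priority and needs no privileges;
    # negative values (prioritization) require elevated rights
    new_nice = os.nice(nice_increment)
    if allowed_cpus is not None:
        os.sched_setaffinity(0, allowed_cpus)  # 0 = the calling process
    return new_nice
```

A heavy perception process would call, e.g., `deprioritize(10, {4, 5})` to keep it off the cores reserved for control and state estimation.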
The effect of switching on the perception pipeline is visible in~\autoref{fig:cpu_load}, showing the CPU load of the three deployed \acp{uav} during the \ac{darpa} \ac{subt} System Finals.
In other validation tests, the CPU load reached up to \SI{90}{\percent} in \SI{1500}{\second} long missions within vast underground environments.
Such an overloaded CPU results in frequent asynchronous delays, culminating in unpredictable and destructive behavior.
To limit the power consumption and hence increase the maximum flight time, unneeded hardware and software components can be temporarily powered off.
These include switching off onboard lights when they provide no benefit, disabling \ac{cnn} processing when it is not needed, or powering off the \ac{lidar} in the after-landing phase when the \ac{uav} serves solely as a communication relay.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{./fig/cpu_loads/cpu_load.pdf}
\caption{\label{fig:cpu_load}
The CPU load of onboard computers of individual \acp{uav} (\textit{red}{}, \textit{green}{}, \textit{blue}{}) during the prize round of SubT System Finals.
The highlighted parts of the graph correspond to the start of processing onboard images by the object detection pipeline.}
\end{figure}
\section{System deployment}
\label{sec:experiments}
Throughout the development of the system presented in this paper, the individual components were extensively tested before integration.
Deployments of the whole system were less frequent, but allowed testing the interaction of individual modules and verifying the ability to fulfill the primary objective of finding objects of interest in subterranean environments.
\subsection{Continuous field verification}
\label{sec:field_testing}
The \ac{sar} \ac{uav} system was continuously tested to empirically verify the correctness and reliability of the developed algorithms, strategies, and hardware.
The \acp{uav} were deployed into diverse types of environments, including historical and industrial buildings at varied levels of disrepair, humid unstructured caves, a decommissioned underground military fortress, and vast outdoor rural areas.
Some of these environments are shown in~\autoref{fig:field_testing}.
Such tests are critical for evaluating the performance under the stochastic influence of real-world conditions, which are typically not modeled in simulations.
In particular, each perception modality is degraded to some extent by ambient lighting or the lack thereof, fog of microscopic condensed water droplets, smoke or dust particles, reflections on water or smooth surfaces, etc.
The filtration of \ac{lidar} and depth data from~\autoref{sec:filtering_observation_noise} therefore had to be tuned correctly to prevent the integration of false positives into the map, while keeping the actual obstacles.
Moreover, the artifact detection system needed to work under a wide range of visibility conditions and chromatic shifts, for which it was necessary to collect artifact datasets from the mentioned environments.
\begin{figure} [ht]
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.49\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/photos/x500_pilsen_dust.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(a)}};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.49\textwidth,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/photos/x500_pilsen_vertical.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(b)}};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/photos/x500_byci.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(c)}};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/photos/x500_byci_multi.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(d)}};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.00\width} {0.0\height} {0.00\width} {0.00\height}},clip]{./fig/photos/x500_outside.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(e)}};
\end{scope}
\end{tikzpicture}
\caption{
The verification of localization and perception in the following scenarios: data degraded by insufficient lighting and whirling dust (a), traversal of a narrow vertical passage (b), performance in humid caves (c), multi-robot exploration (d), and scalability with the environment size (e).
}
\label{fig:field_testing}
\end{figure}
\subsection{DARPA SubT Final Event Systems Track}
\label{sec:darpa_systems}
The final event, which was the culmination of the \ac{darpa} \ac{subt} competition, was organized in the Louisville Mega Cavern in Kentucky on September 23, 2021.
The course consisted of all three environments from the previous circuits and contained all artifacts from previous events plus \textit{the cube}, which was a new artifact for the final event.
This section reports on the results achieved by the aerial part of the CTU-CRAS-NORLAB team.
A total of 40 artifacts were distributed over the \SI{880}{\meter} long course, which was divided into 28 smaller sectors to track the team's progress.
Every robot starts in the staging area, from which a single corridor leads to an intersection that branches in three directions.
Each of the branches leads to one of the three specific environment types (tunnel, urban, and cave).
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{./fig/mapping_accuracy/trajectories_over_map_overlay.pdf}
\caption{\label{fig:finals_trajs_over_map_overlay}
\ac{uav} trajectories and on-board-built maps of the environment from all flights during the prize round (colored in red) and the post-event testing (colored in blue) overlaid over the ground truth map (colored in black).
The photos from the onboard camera highlight the diversity and the narrow confines of the environment.%
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{./fig/mapping_accuracy/mapping_accuracy.pdf}
\caption{\label{fig:finals_mapping_errors_histogram}
Distribution of mapping errors throughout the prize round and the post-event testing flights (colored in red and in blue in~\autoref{fig:finals_trajs_over_map_overlay}) of \ac{darpa} \ac{subt}.
The absolute mapping error denotes the distance between the ground truth map and the concatenation of DenseMaps built onboard with a resolution of \SI{20}{\centi\meter} during the particular \ac{uav} flights.
The error metric is the Euclidean distance between a point from the onboard maps and its closest point in the ground truth map.}
\end{figure}
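The error metric of the histogram can be sketched as a nearest-neighbor distance (brute force for clarity; a k-d tree over the ground truth cloud would be used for real map sizes):

```python
import math

def mapping_errors(map_points, gt_points):
    """For every point of the onboard-built map, the Euclidean distance to its
    closest point in the ground truth map."""
    return [min(math.dist(p, g) for g in gt_points) for p in map_points]
```

Aggregating these per-point distances over all flights yields the error distribution plotted in the histogram.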
\begin{table}
\caption{\label{tab:mission_stats}
The mission statistics from the prize round of the final event.
The row \textit{mission start} marks the time of takeoff in the \SI{60}{\minute} long SubT mission.
\textit{Mission end} is the time of landing with the \textit{landing cause} in the last row.%
}
\centering
\small
\begin{tabular}{l c c c}
\toprule
\tablehdg{UAV} & \tablehdg{Red} & \tablehdg{Green} & \tablehdg{Blue} \\
\midrule
\tablehdg{Mission start} & 2:00 & 46:30 & 36:00 \\
\tablehdg{Mission end} & 5:05 & 52:00 & 58:40 \\
\tablehdg{Flight time} & \SI{180}{\second} & \SI{310}{\second} & \SI{1345}{\second} \\
\tablehdg{Flight distance} & \SI{69}{\meter} & \SI{119}{\meter} & \SI{304}{\meter} \\
\tablehdg{Localization accuracy:} & & & \\
\tablehdg{~~~avg{\textbar}max error in translation (\si{\meter})} & 0.38\,{\textbar}\,0.63 & 0.97\,{\textbar}\,2.66 & - \\
\tablehdg{~~~avg{\textbar}max error in heading (\si{\degree})} & 0.64\,{\textbar}\,4.06 & 1.48\,{\textbar}\,5.37 & - \\
\tablehdg{Sectors explored} & 1 & 3 & 4 \\
\tablehdg{Sectors entered} & 4 & 6 & 4 \\
\tablehdg{Safety clearance} & \SI{0.4}{\meter} & \SI{0.11}{\meter} & \SI{0.4}{\meter} \\
\multirow[t]{2}{*}[0em]{\tablehdg{Landing cause}} &
Collision with \acs{ugv} &
\multirow[c]{2}{*}{\shortstack[c]{Collision with a metal rod\\protruding from the wall}} &
\multirow[c]{2}{*}{\shortstack[c]{Drained battery after being\\trapped in degraded map}}\\\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.90\textwidth]{./fig/obstacle_dist_systems/obstacle_dists_red.pdf}
\includegraphics[width=0.90\textwidth]{./fig/obstacle_dist_systems/obstacle_dists_green.pdf}
\includegraphics[width=0.90\textwidth]{./fig/obstacle_dist_systems/obstacle_dists_blue.pdf}
\caption{\label{fig:uav_obstacle_dists}
Distance between the center of the \acp{uav} and the measured nearest obstacle during the prize round of the SubT System Finals. The moving mean and standard deviation are computed over a \SI{10}{\second} long time window.
}
\end{figure}
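The moving statistics shown in the figure can be reproduced with a simple trailing time window (an illustrative sketch, not the plotting code used in the paper):

```python
def moving_stats(times, values, window=10.0):
    """Moving mean and (population) standard deviation of `values` over a
    trailing time window of `window` seconds."""
    out = []
    for t in times:
        w = [v for tv, v in zip(times, values) if t - window <= tv <= t]
        mean = sum(w) / len(w)
        std = (sum((v - mean) ** 2 for v in w) / len(w)) ** 0.5
        out.append((mean, std))
    return out
```

Applied to the timestamped nearest-obstacle distances, this yields the mean and standard-deviation bands of the plot.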
\subsubsection{UAV deployment summary}
Three \acp{uav} in total (\textit{red}{}, \textit{green}{}, and \textit{blue}{}) were deployed in the \SI{60}{\minute} long run.
The \ac{uav} performance is summarized in \autoref{tab:mission_stats} and the flight trajectories are plotted in~\autoref{fig:paths}.
The first \ac{uav} (\textit{red}{}) took off just after the first \ac{ugv}, arrived at the first intersection, explored \SI{10}{\meter} of the tunnel section, returned to the intersection, and flew to the cave branch, where it collided with the Spot \ac{ugv}~(\autoref{fig:landing_events}a).
The chronologically second deployed \ac{uav} was \textit{blue}{}, which went into the urban branch, where it traveled to a vertical alcove with a phone artifact.
Then it returned to the start of the urban section, where it hovered until exhausting its battery~(\autoref{fig:landing_events}c), because all viewpoints were blocked in its map, corrupted by drift in the featureless urban corridor.
The last deployed \ac{uav} was \textit{green}{}, which explored the tunnel section, where it was blocked by a dynamically added artificial wall~(\autoref{fig:dynamic_wall}).
After flying through a cluttered tunnel corridor, the \ac{uav} collided with a metal rod protruding from the wall~(\autoref{fig:landing_events}b).
The maps and the trajectories of all our \ac{uav} flights during the prize round and the post-event testing are shown in~\autoref{fig:finals_trajs_over_map_overlay}, together with a summary of the mapping errors from these flights in~\autoref{fig:finals_mapping_errors_histogram}.
The distance of the \acp{uav} from the nearest obstacle during all flights in the prize round is shown in~\autoref{fig:uav_obstacle_dists}.
\begin{figure} [ht]
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\includegraphics[height=15em]{./fig/screenshots/artificial_wall_blocked_2.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(a)}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\includegraphics[height=15em]{./fig/screenshots/artificial_wall_basler_right.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(b)}};
\end{scope}
\end{tikzpicture}
\caption{
The artificial wall that blocked the way back for \ac{uav} \textit{green}{} in the map (a) and in the camera image (b).
}
\label{fig:dynamic_wall}
\end{figure}
\begin{figure} [ht]
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=15.5em,trim={{0.20\width} {0.0\height} {0.25\width} {0.00\height}},clip]{./fig/screenshots/end_red.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(a)}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=15.5em,trim={{0.30\width} {0.0\height} {0.15\width} {0.00\height}},clip]{./fig/screenshots/end_green.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(b)}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=15.5em,trim={{0.25\width} {0.0\height} {0.2\width} {0.00\height}},clip]{./fig/screenshots/end_blue.jpg}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(c)}};
\end{scope}
\end{tikzpicture}
\caption{
The landing events of all three \acp{uav}.
The \ac{uav} \textit{red}{} (a) collided with the Spot \ac{ugv}, \ac{uav} \textit{green}{} (b) hit a metal rod protruding from the wall, and \ac{uav} \textit{blue}{} (c) landed after its battery was exhausted by hovering while being trapped in a map corrupted by drift in the featureless corridor.
}
\label{fig:landing_events}
\end{figure}
\subsubsection{Artifact detection discussion}
The performance of the artifact detection and localization system is summarized in~\autoref{tab:detection_details}, and the number of artifacts detected by each \ac{uav} in~\autoref{tab:detection_stats}.
A total of seven artifacts appeared in the camera images, and six artifacts were detected by the \ac{cnn}.
The detections with estimated bounding boxes from all \acp{uav} are shown in~\autoref{fig:detections}.
The survivor \textit{s2} was seen in three frames of the bottom camera. However, only a small part of the survivor sleeve was visible and the images were further degraded by motion blur, as can be seen in~\autoref{fig:survivor_undetected}. Thus, the \ac{cnn} did not manage to detect the artifact.
From the six detections, the cellphone artifact \textit{p1} was detected only on one image frame when the \ac{uav} \textit{blue}{} peeked into the vertical shaft in the urban part.
However, as explained in~\autoref{sec:object_detection}, a total of four detections are necessary to create a hypothesis and to confirm the position, and thus this single detection was discarded.
Another missed point was the survivor \textit{s1}, which was detected and localized within the \SI{5}{\meter} limit, but the artifact was labeled as a cube instead of a survivor.
The hypothesis was merged with a high number of false positives and, consequently, the correct image was not sent to the operator, who could not determine the correct class to report.
Both vent \textit{v1} and drill \textit{d1} were detected, localized, and correctly labeled.
The drill \textit{d4} was incorrectly classified as a backpack; nevertheless, the operator reported the correct class based on the detection image.
All three \acp{uav} detected the \textit{d4} drill, but \ac{uav} \textit{green}{} provided the highest accuracy, which is reported in~\autoref{tab:detection_details}.
In total, four artifact hypotheses arrived at the base station with sufficient information for obtaining a point for the report.
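The confirmation rule discussed above (a single frame, such as the cellphone \textit{p1}, is discarded; four detections are needed to establish a hypothesis) can be sketched as follows. This is an illustrative sketch only: the class names, the centroid-based association, and the \texttt{MERGE\_RADIUS} value are assumptions, not the team's actual implementation; only the four-detection threshold comes from the text.

```python
from dataclasses import dataclass, field

MIN_DETECTIONS = 4   # per the text: four detections confirm a hypothesis
MERGE_RADIUS = 2.0   # metres (assumed association radius)

@dataclass
class Hypothesis:
    positions: list = field(default_factory=list)

    def centroid(self):
        n = len(self.positions)
        return tuple(sum(c) / n for c in zip(*self.positions))

    def confirmed(self):
        # Reported only once enough detections support it, so single
        # spurious frames are discarded.
        return len(self.positions) >= MIN_DETECTIONS

def add_detection(hypotheses, pos):
    """Associate a 3-D detection with a nearby hypothesis, or start a new one."""
    for h in hypotheses:
        d = sum((a - b) ** 2 for a, b in zip(pos, h.centroid())) ** 0.5
        if d < MERGE_RADIUS:
            h.positions.append(pos)
            return h
    h = Hypothesis([pos])
    hypotheses.append(h)
    return h
```

With this rule, one isolated frame never produces a report, while repeated consistent detections of the same artifact accumulate into a single confirmed hypothesis.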
\begin{figure}
\centering
\includegraphics[width=0.33\textwidth]{./fig/hypotheses/full_img/s2_undetected_1.png}%
\hfill
\includegraphics[width=0.33\textwidth]{./fig/hypotheses/full_img/s2_undetected_2.png}%
\hfill
\includegraphics[width=0.33\textwidth]{./fig/hypotheses/full_img/s2_undetected_3.png}
\caption{\label{fig:survivor_undetected}
The only three image frames of the survivor \textit{s2} captured by the downward-facing camera.
The artifact was not detected as there is only a small part of the survivor's sleeve visible in the image, which is also degraded by motion blur.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{./fig/screenshots/rviz_paths_artifacts_hyps_zoomed}
\caption{\label{fig:paths}
The map of the final event course was obtained by the organizers by scanning the course with a laser scanner station.
The paths traveled by all three \acp{uav} (\textit{red}{}, \textit{green}{}, and \textit{blue}{}) during the final event are depicted by their respective colors.
The ground truth positions of artifacts are surrounded by a yellow sphere in order to visualize the \SI{5}{\meter} limit for the reported artifact to be counted as a point in the competition.
The five artifacts that were detected and localized within this \SI{5}{\meter} limit are shown as squares colored by the detecting UAV and highlighted in the magnified sections with red arrows.
}
\end{figure}
\begin{table}
\caption{\label{tab:detection_stats}
Statistics of artifact detection for each deployed \ac{uav} from the prize round of the final event.
The \textit{seen} column yields the number of artifacts that appeared in the image of one of the on-board cameras.
If the artifact was detected by the \ac{cnn}, it is listed in the \textit{detected} column and the detection is shown in~\autoref{fig:detections}.
Artifacts that were \textit{confirmed} had enough consistent detections to establish a hypothesis.
\textit{Confirmed unique} artifacts were not detected by any other robot, including the \acp{ugv}.
}
\centering
\small
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}[0em]{\tablehdg{UAV}} & \multicolumn{4}{c}{\tablehdg{Artifacts}}\\\cmidrule(r){2-5}
& \tablehdg{seen} & \tablehdg{detected} & \tablehdg{confirmed} & \tablehdg{confirmed unique}\\
\midrule
\tablehdg{Red} & 1 & 1 & 1 & 0\\
\tablehdg{Green} & 4 & 3 & 3 & 1\\
\tablehdg{Blue} & 4 & 4 & 3 & 1\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\newcommand{12.80em}{12.80em}
\newcommand{10.60em}{10.60em}
\begin{tikzpicture}
\node[anchor=south west, inner sep=0] at (0,0) (images) {
\begin{minipage}{\linewidth}
\includegraphics[height=12.80em]{./fig/hypotheses/red/detection_right_28.jpeg}
\hfill
\includegraphics[height=12.80em]{./fig/hypotheses/green/detection_right_9.jpeg}
\hfill
\includegraphics[height=12.80em]{./fig/hypotheses/green/detection_right_80.jpeg}
\hfill
\includegraphics[height=12.80em]{./fig/hypotheses/green/detection_left_138.jpeg}
\vspace{-10pt}
\\
\includegraphics[height=10.60em]{./fig/hypotheses/blue/detection_right_24.jpeg}
\hfill
\includegraphics[height=10.60em]{./fig/hypotheses/blue/detection_left_46.jpeg}
\hfill
\includegraphics[height=10.60em]{./fig/hypotheses/blue/detection_left_356.jpeg}
\hfill
\includegraphics[height=10.60em]{./fig/hypotheses/blue/detection_right_431.jpeg}
\end{minipage}};
\begin{scope}
\node[align=center] at (0.4, 4.2) {\color{white}d4};
\node[align=center] at (4.4, 4.2) {\color{white}d4};
\node[align=center] at (8.3, 4.2) {\color{white}f1};
\node[align=center] at (10.9, 4.2) {\color{white}d1};
\node[align=center] at (0.4, 0.4) {\color{white}d4};
\node[align=center] at (3.9, 0.4) {\color{white}v1};
\node[align=center] at (9.5, 0.4) {\color{white}s1};
\node[align=center] at (13.1, 0.4) {\color{white}p1};
\node[align=center] at (3.4, 4.2) {\color{white}2:46};
\node[align=center] at (7.2, 4.2) {\color{white}47:26};
\node[align=center] at (9.8, 4.2) {\color{white}48:42};
\node[align=center] at (15.9, 4.2) {\color{white}49:54};
\node[align=center] at (2.9, 0.4) {\color{white}36:52};
\node[align=center] at (8.4, 0.4) {\color{white}38:01};
\node[align=center] at (12.0, 0.4) {\color{white}42:10};
\node[align=center] at (15.9, 0.4) {\color{white}42:51};
\draw[draw=red,line width=1pt] (0.03,3.84) rectangle (3.932,8.35);
\draw[draw=ForestGreen,line width=1pt] (3.985,3.84) rectangle (16.51,8.35);
\draw[draw=blue,line width=1pt] (0.03,0.0) rectangle (16.51,3.75);
\end{scope}
\end{tikzpicture}
\caption{\label{fig:detections}
Images of artifacts detected by the \acp{uav} in the final event.
The color of the rectangle indicates which \ac{uav} detected the artifact; the mission time of each detection is shown in the bottom right corner.
}
\end{figure}
\begin{table}
\caption{\label{tab:detection_details}
Unique artifacts detected by the lightweight \ac{cnn} running on board the \acp{uav} in real time.
The total error $\mathbf{e_{tot}}$ of the artifact position is the sum of the \ac{uav} localization drift error $\mathbf{e_{loc}}$ and the error of estimating the artifact position $\mathbf{e_{est}}$ from the detected bounding box.
Artifacts detected by multiple \acp{uav} are listed only once, with values from the most accurate hypothesis among the \acp{uav}.
The hypothesis was \textit{Confirmed} when at least four images were associated with it.
Some artifacts were correctly detected and localized, but the wrong label was assigned to them. This is documented in the \textit{Correct class} column.
Even with a wrong label, the operator could still deduce the correct class by looking at the image sent with the hypothesis.
Only one image was sent with each hypothesis, and if it was possible to deduce the correct class, then the image was listed as \textit{Correct image}.
}
\centering
\small
\begin{tabular}{crcccrrr}
\toprule
\tablehdg{Artifact} & \tablehdg{Frames detected} & \tablehdg{Confirmed} & \tablehdg{Correct class} & \tablehdg{Correct image} & $e_{loc}$ (\si{\meter}) & $e_{est}$ (\si{\meter}) & $e_{tot}$ (\si{\meter})\\
\midrule
v1 & 27 & $\checkmark$ & $\checkmark$ & $\checkmark$ & 1.94 & 4.61 & 3.08\\
s1 & 60 & $\checkmark$ & $\times$ & $\times$ & 2.93 & 4.57 & 2.89\\
p1 & 1 & $\times$ & $\times$ & $\times$ & - & - & -\\
d4 & 11 & $\checkmark$ & $\times$ & $\checkmark$ & 0.77 & 1.61 & 1.30\\
f1 & 13 & $\checkmark$ & $\checkmark$ & $\checkmark$ & 0.85 & 1.33 & 1.31\\
d1 & 9 & $\checkmark$ & $\checkmark$ & $\checkmark$ & 1.46 & 2.30 & 1.55\\
\bottomrule
\end{tabular}
\end{table}
\subsection{DARPA SubT Final Event Virtual Track}
\label{sec:darpa_virtual}
In parallel to the Systems Track, the competition also ran in simulated form as the Virtual Track.
The teams had to submit a solution consisting of Docker images of a robotic team assembled within a limited budget for purchasing the robots and their sensory packages.
The Systems Track included a single run (with two preliminary rounds) conducted in a single world and was therefore focused on the reliability of the robots, which had to overcome challenging terrain with narrow passages and adverse conditions for perception.
On the other hand, the virtual teams were deployed three times in each of the eight worlds, ranging from vast models of artificially created environments to scanned courses from the previous events, including the final event course.
Moreover, in the Virtual Track, the whole mission had to be fully autonomous and no human interventions were possible.
The purpose of the virtual event was to evaluate the high-level planning, cooperation, decision-making, and efficient coverage of the large worlds.
Since the cooperative searching strategy is one of the core contributions of this work, we present the results from the Virtual Track here, as most of the worlds allowed for efficient deployment and cooperation of multi-robot teams.
\subsubsection{Differences from the Systems Track}
The simulation model of the \ac{imu} provides much better data than the real sensor with the same parameters. This is due to the measurements in the simulation not being corrupted by propeller-induced vibrations, wind gusts, or saturation, as well as the \ac{imu} being rigidly attached to the \ac{uav} body with known extrinsic parameters.
The higher quality of the simulated data allows for the use of \ac{lidar}-inertial odometry. In addition to the \ac{lidar}, it also relies on the \ac{imu} preintegration in its optimization process, thus providing a smooth and drift-free position estimate, even when there are few geometrically rich features present.
Specifically, the \ac{liosam}~\cite{liosam2020shan} algorithm was chosen for its low drift and high precision over the \ac{aloam} deployed in the Systems Track.
Both algorithms are detailed in~\autoref{sec:localization}.
Reporting of the found artifacts is handled by the operator in the Systems Track, which is not possible in the fully autonomous Virtual Track.
A virtual artifact reporter algorithm was developed to gather artifact hypotheses from all robots and decide which hypotheses are the most likely to score a point (described in detail in~\autoref{sec:arbiter_for_artifact_reporting}).
The control interface of the simulated \ac{uav} also differed from the real one.
While the \ac{fcu} of the real \ac{uav} accepted attitude rate commands generated by the \ac{se3} controller, the simulated \ac{uav} was controlled at a higher level by velocity commands, which did not allow for as precise control of the \ac{uav} motion as the low-level attitude rate control.
\subsubsection{Virtual Track results}
In the virtual deployment, our team consisted of five \acp{uav} and two \acp{ugv}. The \acp{uav} were the superior platform in the Virtual Track due to their greater movement speed, smaller form-factor, and better mobility to fly over terrain untraversable by the \acp{ugv}.
We deployed two \acp{ugv} to build a communication network consisting of breadcrumbs dropped at the edges of the wireless signal range. This allowed the \acp{uav} to maximize the time spent searching for artifacts, as they could return to the nearest breadcrumb instead of to the base station back at the staging area.
Our solution achieved \nth{2} place with a total of 215 scored points.
\autoref{tab:virtual_score} summarizes the points scored by the top three teams on each world of the Virtual Track~(\autoref{fig:virtual_worlds}).
The lower number of points on worlds 4, 5, 6, and 8 can be explained by the fact that these worlds were not made of the tiles that were used in the qualification and practice worlds.
The details on traveled distance and collected hypotheses by particular \acp{uav} during all runs of the SubT Virtual Finals are provided in~\autoref{fig:virtual_travel_dist_and_time} and \autoref{fig:virtual_reports_and_hypotheses} respectively.
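The breadcrumb strategy described above can be sketched in a few lines. The RSSI threshold and the node-selection rule are illustrative assumptions, not the values used by the team.

```python
# Hedged sketch of the communication-network strategy: drop a breadcrumb at
# the edge of the wireless range, and let UAVs return to the nearest network
# node instead of the base station. All constants are assumed.
RSSI_DROP_THRESHOLD = -80.0  # dBm (assumed edge-of-range value)

def should_drop_breadcrumb(rssi_to_network, breadcrumbs_left):
    """Drop a node when the UGV reaches the edge of the wireless range."""
    return breadcrumbs_left > 0 and rssi_to_network < RSSI_DROP_THRESHOLD

def nearest_comm_node(uav_pos, node_positions):
    """A UAV returns to the closest network node, not the base station,
    maximizing the time available for searching."""
    return min(node_positions,
               key=lambda n: sum((a - b) ** 2 for a, b in zip(uav_pos, n)) ** 0.5)
```

The farther the breadcrumb network extends into the course, the shorter the return leg a \ac{uav} must reserve battery for.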
\begin{figure} [ht]
\newcommand{12.50em}{12.50em}
\newcommand{0.95em}{1.5em}
\newcommand{0.8em}{1.0em}
\newcommand{0.3}{0.3}
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.25\textwidth,trim={{0.30\width} {0.40\height} {0.30\width} {0.15\height}},clip]{./fig/virtual_worlds/01.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{1}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.25\textwidth,trim={{0.30\width} {0.40\height} {0.30\width} {0.15\height}},clip]{./fig/virtual_worlds/02.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{2}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.25\textwidth,trim={{0.30\width} {0.40\height} {0.30\width} {0.15\height}},clip]{./fig/virtual_worlds/03.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{3}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.25\textwidth,trim={{0.30\width} {0.40\height} {0.25\width} {0.15\height}},clip]{./fig/virtual_worlds/04.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{4}};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.25\textwidth,trim={{0.30\width} {0.40\height} {0.30\width} {0.15\height}},clip]{./fig/virtual_worlds/05.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{5}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.25\textwidth,trim={{0.30\width} {0.40\height} {0.30\width} {0.15\height}},clip]{./fig/virtual_worlds/06.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{6}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.25\textwidth,trim={{0.30\width} {0.40\height} {0.25\width} {0.10\height}},clip]{./fig/virtual_worlds/07.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{7}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[width=0.25\textwidth,trim={{0.05\width} {0.25\height} {0.05\width} {0.15\height}},clip]{./fig/virtual_worlds/08_edit.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{8}};
\end{scope}
\end{tikzpicture}%
\caption{
All eight worlds used in the Virtual Track of the \ac{darpa} \ac{subt} Finals.
The worlds 1, 2, 3, and 7 are built from tiles that were used in the preliminary and practice rounds.
World 4 is the model of the NIOSH research mine, where the tunnel circuit was held.
Similarly, world 5 corresponds to the model of the location of the urban circuit --- the unfinished Satsop nuclear power plant.
World 6 is a model of a narrow cave system. World 8 is modeled after the Systems Track Finals course.
}
\label{fig:virtual_worlds}
\end{figure}
\begin{table}
\caption{\label{tab:virtual_score}
The score achieved by the top three teams on each world of the Virtual Track.
The reported values are the sums of three runs on each world.
}
\centering
\small
\begin{tabular}{lccccccccc}
\toprule
\tablehdg{World} & \tablehdg{1} & \tablehdg{2} & \tablehdg{3} & \tablehdg{4} & \tablehdg{5} & \tablehdg{6} & \tablehdg{7} & \tablehdg{8} & \tablehdg{total} \\
\cmidrule{2-10}
\tablehdg{Dynamo} & 21 & 52 & 48 & 18 & 15 & 11 & 44 & 14 & 223\\
\tablehdg{CTU-CRAS-NORLAB} & 31 & 39 & 45 & 16 & 18 & 13 & 36 & 17 & 215 \\
\tablehdg{Coordinated Robotics} & 44 & 41 & 27 & 23 & 17 & 14 & 26 & 20 & 212 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{./fig/virtual_stats/distance_travelled_virtual.pdf}
\includegraphics[width=0.48\textwidth]{./fig/virtual_stats/active_time_virtual.pdf}
\includegraphics[width=0.96\textwidth]{./fig/virtual_stats/average_speed.pdf}
\caption{\label{fig:virtual_travel_dist_and_time}
Overall traveled distance, time of active motion, and average velocity of particular \acp{uav} in all runs of the SubT Virtual Finals. The maximum traveled distance throughout all runs was achieved by UAV5 in run 1c (\SI{3560}{\meter}). The maximum active time was achieved by UAV2 in run 5b (\SI{1539}{\second}). The presented average velocity incorporates the entire flight, including hovering states.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.96\textwidth]{./fig/reporting/reports_distribution_among_uavs.pdf}
\includegraphics[width=0.96\textwidth]{./fig/reporting/hypotheses_distribution_among_uavs.pdf}
\caption{\label{fig:virtual_reports_and_hypotheses}
Distribution of successful reports among the \acp{uav} in particular runs of SubT Virtual Finals (top) and the number of valid hypotheses collected throughout particular runs by individual robots (bottom). The number of successful reports of individual robots is mostly influenced by the ordering of robots and their delayed starts in the mission.
}
\end{figure}
\section{Lessons learned and future work}
\label{sec:lessons_learned}
In this section, we present our view on the state of \ac{sar} \acp{uav}: the lessons learned, the problems we consider solved, and the areas that require more research to achieve reliable performance suitable for deployment as a tool assisting rescue workers.
These findings were collected throughout the preparation for as well as during the DARPA SubT Competition, which aimed to push the state of the art of \ac{sar} robotics. Furthermore, this discussion should be of some interest to the community as we highlight aspects that could be explored in future research and development.
In general, most of the individual subproblems, such as localization, mapping, detection, and communication, are solved to the point of being capable of performing an autonomous mission in extremely challenging conditions.
The developed algorithms are now used in actual field deployment instead of just laboratories and simulations, which introduces disturbances, noise, dust, and other detrimental effects that negatively impact the algorithms' performance and reliability.
It is essential to focus on the reliability of the employed methods to make the \acp{uav} a valuable asset to the \ac{sar} team.
The localization method based on 3D \ac{lidar} provides precise position estimates, even under severe degradation by dust.
However, as demonstrated by the \ac{uav} \textit{blue}{}, the estimate can begin to drift when the solved optimization is ill-conditioned due to low-variance geometry, typically in long corridors with straight walls.
The unpredictable nature of subterranean environments requires a localization method that is reliable and drift-free under arbitrary conditions.
Solutions based on detecting geometrical degeneracy and on multi-modal fusion of \ac{lidar} and visual methods were described in~\autoref{sec:sota_slam}.
The results seem promising, but due to the high unpredictability and challenges of subterranean environments, more research into localization algorithms is still required for truly robust pose estimation under arbitrary conditions.
In addition to map drift caused by errors in the localization, the volumetric occupancy grid did not contain smaller obstacles such as ropes, cables, and thin poles, which led to the collision of \ac{uav} \textit{green}{} as seen in~\autoref{fig:landing_events}b.
Although some \ac{lidar} rays hit these thin obstacles, the occupied cells generated by these rays were often changed to free when multiple rays that passed through these cells hit the wall behind them.
As a result, the navigation pipeline planned a path through these cells that appeared free, but contained a thin metal pole, causing a collision.
The ability to traverse narrow passages is also impaired since the passages appear narrower than they really are due to grid discretization.
We propose to locally increase the resolution of the grid of DenseMap on demand to approximate the free space more accurately, while keeping the scan integration times bounded.
This approach is however only a partial solution as the need for a more granular resolution might not always be reliably detected.
Consequently, the need arises for a flexible map that is not bound by fixed cell size, similarly to the SphereMap, possibly based on surfel mapping as seen in~\cite{behley2018efficient}, or based on \ac{gmm}~\cite{o2018variable}.
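The erasure mechanism behind this collision can be reproduced with a minimal 1-D log-odds occupancy update. The constants below are generic textbook values, not the actual DenseMap parameters: one ray hits the thin pole, but several neighbouring rays pass through the pole's cell on their way to the wall behind it, and the accumulated "miss" updates outweigh the single "hit".

```python
# Minimal 1-D log-odds occupancy demo of thin-obstacle erasure.
# L_HIT / L_MISS are generic textbook constants (assumed).
L_HIT, L_MISS = 0.85, -0.4
log_odds = {}

def integrate_ray(cells_passed, hit_cell):
    for c in cells_passed:                       # cells the ray traversed
        log_odds[c] = log_odds.get(c, 0.0) + L_MISS
    log_odds[hit_cell] = log_odds.get(hit_cell, 0.0) + L_HIT

POLE, WALL = 5, 10

# One ray actually hits the thin pole at cell 5 ...
integrate_ray(range(0, POLE), POLE)
# ... but four neighbouring rays miss it and hit the wall behind,
# decrementing the pole cell each time they pass through it.
for _ in range(4):
    integrate_ray(range(0, WALL), WALL)

pole_occupied = log_odds[POLE] > 0.0   # the pole has been "erased"
wall_occupied = log_odds[WALL] > 0.0
```

The pole cell ends at $0.85 - 4 \cdot 0.4 = -0.75$ and is classified as free, so a planner would route a path straight through it, exactly the failure mode observed.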
We experienced a surprising issue when our \ac{uav} equipped with the Ouster OS0-128 \ac{lidar} was passing around a \ac{ugv} with LeiShen C16 \ac{lidar}.
The rays emitted by the LeiShen corrupted some of the Ouster measurements, which manifested as points at random distances within the \ac{fov} of the \ac{lidar}.
These false positives were not filtered out by the intensity filter from~\autoref{sec:filtering_observation_noise}, because their intensities fell within the same range of values as those of true returns.
As a result, the points were integrated into the map, as shown in~\autoref{fig:lidar_interference}. Nevertheless, the performance of the \ac{uav} was not degraded, as the navigation pipeline is robust to such sparse noise.
This experience highlights the importance of testing the compatibility of robotic platforms deployed in heterogeneous teams.
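The limitation of a pure intensity filter in this situation can be illustrated with a small sketch. The pass-band values and point fields below are illustrative assumptions; the point is that an interference-induced ghost point whose intensity overlaps the band of true returns passes through unchanged.

```python
# Sketch of an intensity-based point filter and why it cannot reject
# cross-LiDAR interference. VALID_INTENSITY is an assumed pass-band.
VALID_INTENSITY = (5.0, 200.0)

def intensity_filter(points):
    lo, hi = VALID_INTENSITY
    return [p for p in points if lo <= p["intensity"] <= hi]

true_return = {"range_m": 4.2, "intensity": 80.0}
ghost_point = {"range_m": 1.3, "intensity": 75.0}  # interference-induced

kept = intensity_filter([true_return, ghost_point])
# Both points survive: intensity alone does not separate the ghost from
# genuine returns, so it ends up integrated into the map.
```

Rejecting such points would require a different cue, e.g., temporal consistency across scans, rather than a per-point intensity threshold.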
\begin{figure} [ht]
\newcommand{12.50em}{12.50em}
\newcommand{0.95em}{0.95em}
\newcommand{0.8em}{0.8em}
\newcommand{0.3}{0.3}
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.0\width} {0.2\height} {0.3\width} {0.1\height}},clip]{./fig/lidar_interference/before_marmotte.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(a)}};
\end{scope}
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.0\width} {0.0\height} {0.0\width} {0.0\height}},clip]{./fig/lidar_interference/marmotte.png}};%
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(b)}};
\end{tikzpicture}%
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (b) at (0,0) {\adjincludegraphics[height=12.50em,trim={{0.0\width} {0.2\height} {0.4\width} {0.1\height}},clip]{./fig/lidar_interference/after_marmotte_2.png}};%
\begin{scope}[x={(b.south east)},y={(b.north west)}]
\node[fill=black, fill opacity=0.3, text=white, text opacity=1.0] at (0.95em, 0.8em) {\textbf{(c)}};
\end{scope}
\end{tikzpicture}%
\caption{
DenseMap (a) before approaching the \ac{ugv} carrying the LeiShen C16 \ac{lidar} (b), and (c) after it was corrupted by random points within the \ac{fov} of the \ac{lidar} mounted on the \ac{uav}, which had flown in close vicinity ($\approx$~\SI{1}{\meter}) of the \ac{ugv}.
Notice that a few false positives were integrated into the map even when the \ac{uav} was \SI{8}{\meter} away from the \ac{ugv} (a).
}
\label{fig:lidar_interference}
\end{figure}
A \ac{uav} flight time of over \SI{20}{\minute} was achieved by limiting the payload to only the crucial components.
However, the presence of only a single computational unit without \ac{cnn} acceleration or dedicated \ac{gpu} led to compromises in the artifact detection \ac{cnn}.
Large-size models such as YOLOv3~\cite{redmon2018yolov3} were too slow for achieving satisfactory frame rates on the \ac{cpu}, so lighter models had to be used.
As explained in~\autoref{sec:object_detection}, the MobileNetV2 architecture allowed for lightweight models (\SI{7}{\mega \byte}) that could fit into the cache of the \ac{cpu}.
Furthermore, the OpenVino framework supports accelerating the \ac{cnn} on the \ac{gpu} integrated with the \ac{cpu}, which helped to achieve sufficient frame rates.
Although the lightweight model successfully detected all artifact types, the labeling was not very reliable and many false positives were detected.
This impacted the artifact localization process, as the false positives were fused into the artifact hypotheses, which shifted the estimate further from the true position.
Also, the images of these false positives were sometimes sent as the representative image of the hypothesis.
Thus, the operator could not correctly decide the artifact class when the label produced by the \ac{cnn} was incorrect.
When payload capacity prevents the use of more capable hardware, the issue must be compensated by the sensing strategy.
In contrast to \acp{ugv}, the mobility of \acp{uav} allows them to move closer to an artifact to verify the detection.
Perception-driven navigation approaches can thus improve the performance of a lightweight detector by planning a trajectory that inspects the artifact from a closer distance and from other angles after the initial detection.
Although our platform is quite compact, it could not pass through all of the narrow passages, even during the post-event testing.
Apart from the discrete map and conservatively set distance from obstacles, the size of the \ac{uav} prevented flying through some of the narrower passages of the circuit.
As even smaller passages are to be expected during deployment of robots in real \ac{sar} scenarios, the \ac{uav} platforms should be further miniaturized to allow safe traversal of narrow passages.
When such miniaturization is not possible due to, e.g., the insufficient payload of smaller platforms, a heterogeneous aerial team consisting of both large and small platforms can be deployed.
In such a case, the large platform carrying a \ac{lidar} can command and send position corrections to smaller visually localized \acp{uav} that can inspect tight passages unreachable by the large \ac{uav}.
A mutual collision avoidance module is a necessity for any application where multiple robots share the same workspace.
The developed priority-based module uses the already shared information about the robots' positions when communication is available, which it typically is, since the risk of collision arises when the robots are in close proximity and thus within communication range.
This module prevented collisions in the Virtual Track, where, despite the vastness of most of the worlds, collisions happened often in practice runs before the collision avoidance was implemented.
We decided against using the collision avoidance module in the Systems Track.
This was done because the robots could easily become deadlocked in tight corridors, and because the collision probability was quite low thanks to the delay between the launches of individual \acp{uav}.
Additionally, the operator could override the full autonomy to prevent collision, if necessary.
Nevertheless, the \ac{uav} \textit{red}{} collided with a Spot \ac{ugv} shortly after the start of the run, which could have been prevented had collision avoidance been enabled.
A deadlock-free solution based on agent theory approaches can be devised for situations when communication is available, and behavior prediction methods can provide a backup when communication is not possible.
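The priority-based rule described above can be sketched as follows. The priority ordering (lower ID wins) and the safety radius are assumptions made for illustration, not the module's actual parameters.

```python
# Hedged sketch of priority-based mutual collision avoidance using the
# positions already shared over the network. SAFETY_RADIUS and the
# lower-ID-wins ordering are assumptions.
SAFETY_RADIUS = 3.0  # metres (assumed)

def should_yield(my_id, my_pos, other_robots):
    """Return True if this robot must stop or avoid: a higher-priority
    robot (lower ID here) is within the safety radius."""
    for other_id, other_pos in other_robots.items():
        d = sum((a - b) ** 2 for a, b in zip(my_pos, other_pos)) ** 0.5
        if other_id < my_id and d < SAFETY_RADIUS:
            return True
    return False
```

Note that in a tight corridor this rule alone can leave the lower-priority robot blocked indefinitely, which is exactly the deadlock risk that motivated disabling the module in the Systems Track.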
\section{Conclusion}
\label{sec:conclusion}
This paper has presented the complex \ac{uav} system deployed in the final round of the \ac{darpa} \ac{subt} Challenge after 3 years of development and testing in numerous demanding real-world environments (including a gold mine, a coal mine, an abandoned nuclear power plant, caverns, a military fortress, natural caves, an old factory hall, and a subway station).
Based on these unique opportunities and experience, we have designed both the hardware \ac{uav} platform and the multi-\ac{uav} software with a focus on the exploration of such vast, complicated, and varying environments.
In the Systems Track of \ac{darpa} \ac{subt} Challenge, three \acp{uav} were deployed alongside ground robots into the competition course consisting of a heterogeneous environment of the tunnel, urban, and cave sections, where the aerial team detected and localized four artifacts and traveled \SI{492}{\meter} in total.
The austere conditions of the circuit, such as narrow passages, dust, featureless corridors, and dynamic obstacles, tested the reliability of the system as a whole, including the hardware design of a compact platform with a considerable flight time of \SI{25}{\minute}.
Most of the testing was realized in environments where the system performed exceptionally well, including a former brewery, where the \ac{uav} had to explore an abandoned building with a partially collapsed floor and ceiling, and the Byci Skala (Bull Rock Cave) in the Moravian Karst cavern system.
Compared to ground robots, the \acp{uav} could search a larger volume of space, as they could easily fly over problematic terrain such as mud, water, and rubble, and thus had an advantage in the exploration of unknown terrain with unexpected obstacles.
Furthermore, the worlds of the Virtual Track of the competition were also very large; even with a \SI{25}{\minute} flight time and fast dynamics, our \acp{uav} were not able to reach the furthest parts of some worlds.
Although our system was designed primarily for these large-scale environments, its performance in the challengingly tight corridors of the prize round was also impressive.
The difficulty of \ac{uav} deployment in such adverse environments motivated numerous achievements beyond the state of the art that are summarized in this paper.
Many lessons were learned in the process that could facilitate and support designing complex robotic systems in similar applications in the future.
A larger team of five aerial robots was deployed in the Virtual Track, alongside two \acp{ugv}.
By employing the proposed cooperative exploration strategies based on topological map sharing, the exploration effort of our team was spread out over a wider area. Together with dynamic flight and reliable artifact detection/localization, this helped to achieve the \nth{2} place with 215 scored points.
Moreover, seven out of the nine participating teams used our X500 \ac{uav}, which was modeled according to the specification of the physical platform, thanks to its long flight time, wide array of sensors, modest size, and reasonable price.
Based on the successful deployment in the \ac{darpa} \ac{subt}, which focused on providing challenging conditions typically encountered during rescue missions in underground environments, we conclude that the presented \ac{uav} system is a valuable addition to teams of first responders, as it can provide situational awareness and even find survivors after a catastrophe without risking the lives of rescuers in dangerous environments.
\subsubsection*{Acknowledgements}
\label{sec:acknowledgements}
We would like to thank the members of the CTU-CRAS-NORLAB team who participated in the design and development of the \ac{uav} and \ac{ugv} hardware platforms, software development, simulations, testing and general support.
Namely:
Ruslan Agishev,
Afzal Ahmad,
Teymur Azayev,
Jan Bayer,
Tommy Bouchard-Lebrun,
Petr Čížek,
Simon-Pierre Deschênes,
Jan Faigl,
Olivier Gamache,
Alexandre Guénette,
Bedřich Himmel,
Jakub Janoušek,
Tomáš Krajník,
Vladimír Kubelka,
Denis Ouellet,
Tomáš Petříček,
François Pomerleau,
Miloš Prágr,
Tomáš Rouček,
Vojtěch Šalanský,
Martin Škarytka,
Vojtěch Spurný,
Arsen Tkachev,
Maxime Vaidis,
Volodymyr Zabulskyi,
Karel Zimmermann,
and Martin Zoula.
This work was partially funded
by the \acf{darpa},
by the CTU grant no. SGS20/174/OHK3/3T/13,
by the Czech Science Foundation (GAČR) under research project no. 20-29531S,
by TAČR project no. FW03010020,
by the OP VVV funded project CZ.02.1.01/0.0/0.0/16 019/0000765 ``Research Center for Informatics'',
and by the NAKI II project no. DG18P02OVV069.
\bibliographystyle{apalike}
\section{Introduction}
Persistent currents (PC) in normal and superconducting meso- and nanorings
are a fundamental ground-state property of such systems. The basic physical reason
behind the existence of PC is quantum coherence of electron wave functions
which may survive at long distances provided the temperature remains low.
A lot is known about the average value of PC in such systems. E.g., in normal rings
this quantity was analyzed in detail both theoretically \cite{thy} and experimentally \cite{exp}.
At the same time, only few studies of equilibrium fluctuations of PC are available.
While non-vanishing thermal fluctuations of PC can easily be expected, the issue
of quantum fluctuations of PC is somewhat less trivial since at
$T \to 0$ the system approaches its non-degenerate ground state.
Recently it was demonstrated \cite{SZ10} that in this case PC does not fluctuate
provided the current operator $\hat I$ commutes with the total Hamiltonian $\hat H$ of the
system. In all other cases PC fluctuates even though the system remains in its ground state. For
instance, quantum fluctuations of PC in mesoscopic rings may persist down to $T=0$ provided such
systems are coupled to an external dissipative bath \cite{Buttiker}.
Interestingly enough, quantum fluctuations of PC may also occur in the absence of
any dissipation. For instance, it is easy to verify that
the operators $\hat I$ and $\hat H$ do not commute for a quantum particle on a ring
with some (periodic) potential \cite{SZ10} and, hence, PC fluctuations do
not vanish even in the ground state at $T=0$. Since quantum coherence remains fully
preserved in this case, the magnitude of PC fluctuations should depend on
the external magnetic flux \cite{SZ10}. It follows immediately that
by measuring the equilibrium current noise in meso- and nanorings it is possible
to effectively probe quantum coherence and decoherence in such systems.
The goal of this paper is to theoretically analyze persistent
current noise in both dissipative and non-dissipative systems.
In the presence of dissipation the time-reversal symmetry is violated and PC noise
in metallic nanorings may be affected by decoherence. If, however, no source of dissipation is available,
the time-reversal symmetry is preserved and PC fluctuations remain fully coherent, as we already
explained above. The first situation will be described within a model of a quantum particle
on a ring interacting with some quantum dissipative environment \cite{GZ98,Paco,GHZ} which
could be, e.g., a bath of Caldeira-Leggett oscillators or electrons in a disordered
conductor. Within this model PC fluctuations will be analyzed in section 2 both in the
perturbative and non-perturbative in the interaction regimes. The second situation will be
represented by a model of superconducting nanorings with quantum phase slips \cite{AGZ} which tend
to suppress PC in sufficiently large rings. PC noise in superconducting nanorings will be studied in
section 3. In section 4 we will briefly summarize our main observations.
\section{Particle on a ring in a dissipative environment}
\subsection{The model and effective action}
Let us consider a quantum particle with mass $M$ and electric charge $e$
moving in a 1d ring of radius $R$ pierced by magnetic flux $\Phi_x$. This quantum particle
interacts with some collective variable $V$ describing voltage fluctuations
in our dissipative bath. The total Hamiltonian for this system reads
\begin{equation}
\hat H=\frac{(\hat \phi -\phi_x)^2}{2MR^2}+\hat H_{\rm env}(V)+\hat H_{\rm int}(\theta,V),
\label{H}
\end{equation}
where $\theta$ is the angle variable which controls the position of the particle
on the ring, $\hat \phi =-i\frac{\partial}{\partial\theta}$ defines the
angular momentum operator, $\phi_x=\Phi_x/\Phi_0$ and $\Phi_0=2\pi c/e$ is the
flux quantum (here and below we set Planck's constant to unity, $\hbar =1$).
The first term in Eq. (\ref{H}) is just the particle kinetic energy, $\hat H_{\rm env}(V)$
is the Hamiltonian of the bath, and
the term
\begin{equation}
\hat H_{\rm int}=e\hat V, \label{eV}
\end{equation}
accounts for Coulomb interaction between the particle and
the bath.
In what follows we will model a dissipative bath by a 3d
diffusive electron gas \cite{Paco,GHZ} with the inverse dielectric function
\begin{equation}
\frac{1}{\epsilon (\omega , k)}\approx \frac{-i\omega +Dk^2}{4\pi \sigma}.
\label{diel}
\end{equation}
Fluctuations of the electric potential $V$ in this dissipative environment are described by the correlator
\begin{equation}
\langle VV\rangle_{\omega , k}= -\coth \frac{\omega}{2T}{\rm Im}\frac{4\pi}{k^2\epsilon (\omega , k)}.
\end{equation}
Here $\sigma$ is the Drude conductivity of this gas, $D=v_Fl/3$
is the electron diffusion coefficient, $v_F$ is Fermi velocity
and $l$ is the electron elastic mean free path which is assumed to obey
the condition $k_F l \gg 1$ but to remain much
smaller than the ring radius $l \ll R$. We also point out that Eq.
(\ref{diel}) applies at not too high frequencies $\omega\ll \omega_c \sim v_F/l$.
Employing the definition for the current operator
\begin{equation}
\hat I=\frac{e}{2\pi}\dot{\hat \theta}=\frac{ie}{2\pi}[\hat H,\hat\theta ]=\frac{e(\hat\phi -\phi_x)}{2\pi MR^2}
\label{curop}
\end{equation}
and making use of the Heisenberg representation
$\hat I(t)=e^{it\hat H}\hat I e^{-it\hat H}$,
we introduce the current-current correlation function
$\langle\hat I(t)\hat I(0)\rangle$ and define PC noise power \cite{SZ10}
\begin{equation}
S(t)=\frac12\langle\hat I(t)\hat I(0)+\hat I(0)\hat I(t)
\rangle-\langle\hat I\rangle^2=\int\frac{d\omega}{2\pi}S_\omega e^{-i\omega t}.
\label{sdef}
\end{equation}
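Before interactions are switched on, the structure of (\ref{sdef}) is easy to check directly: for the free ring the current operator (\ref{curop}) commutes with $\hat H$, so $S(t)$ is time independent and $S_\omega$ collapses to a single zero-frequency peak of weight $2\pi(\langle \hat I^2\rangle-\langle\hat I\rangle^2)$. A minimal numerical sketch of this thermal variance (the function name and the level cutoff `n_max` are our illustrative choices; we use the convention $I_n=eE_C(n-\phi_x)/\pi$ following from Eq. (\ref{curop})):

```python
import numpy as np

def pc_variance(phi_x, T, E_C=1.0, e=1.0, n_max=50):
    """Thermal variance <I^2> - <I>^2 of the persistent current for a
    free particle on a ring (no bath), levels E_n = E_C (n - phi_x)^2."""
    n = np.arange(-n_max, n_max + 1)
    E = E_C * (n - phi_x) ** 2
    p = np.exp(-(E - E.min()) / T)
    p /= p.sum()                          # Gibbs occupation probabilities
    I = e * E_C * (n - phi_x) / np.pi     # current carried by the state n
    return p @ I**2 - (p @ I) ** 2
```

At $T\to 0$ the variance vanishes, since the system then sits in a non-degenerate eigenstate of $\hat I$; at any finite $T$ it is non-zero and flux dependent.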
In order to evaluate this correlation function we further
introduce the evolution operator
$\hat U(t,t_0)$ and define the density matrix operator
$\hat \rho(t)=\hat U(t,0)\hat\rho_i \hat U^\dag(t,0)$,
where $\rho_i$ is the initial density matrix. Since our goal here is to analyze quantum dynamics of the particle rather than that of
the bath, it will be convenient to employ the standard influence functional technique \cite{FH} and trace out fluctuating potential $V$.
Making use of a simplifying assumption that at the initial time moment the total density matrix is factorized into the product
of the equilibrium bath density matrix and that of a particle $\hat \rho_i$,
one can rewrite the evolution equation for the density matrix in the form of a double path integral
over the angle variables $\theta^F$ and $\theta^B$ defined respectively on
the forward and backward parts of the Keldysh contour
\begin{widetext}
\begin{eqnarray}
\rho(\theta_1,\theta_2;t)=\sum\limits_{m_1,m_2=-\infty}^\infty e^{i(\theta_1+2\pi m_1)\phi_x-i(\theta_2+2\pi m_2)\phi_x} \int\limits_0^{2\pi}d\theta_1'd\theta_2' e^{-i(\theta_1'-\theta_2')\phi_x}\rho_i(\theta_1',\theta_2')\nonumber\\
\times\int\limits_{\theta^F(0)=\theta_1'}^{\theta^F(t)=\theta_1+2\pi m_1}\mathcal D \theta^F\int\limits_{\theta^B(0)=\theta_2'}^{\theta^B(t)=\theta_2+2\pi m_2}\mathcal D \theta^B e^{i\int\limits_{0}^{t}[((\dot \theta^F)^2-(\dot \theta^B)^2)/4E_C]dt'}e^{-iS_R-S_I},
\end{eqnarray}
where $\rho(\theta_1,\theta_2;t)\equiv\langle\theta_1|\hat\rho(t)|\theta_2\rangle$, $E_C=1/(2MR^2)$ and $\exp(-iS_R-S_I)$
is the influence functional.
Calculation of this functional amounts to averaging over the quantum variable $V$ which is also defined on the Keldysh contour.
Such averaging was performed, e.g., in Refs. \onlinecite{GZ1,GZS,GZ2} for a degenerate electron gas where Pauli exclusion
principle should explicitly be accounted for.
The same procedure can be employed in our present situation of a particle on a ring where no Pauli principle needs
to be included. Introducing the new variables $\theta_+=(\theta^F+\theta^B)/2$ and $\theta_-=\theta^F-\theta^B$,
after the standard algebra we obtain
\begin{equation}
S_{R}[\theta_{+},\theta_-]=\pi\alpha \sum\limits_{n=1}^\infty a_n n \int\limits_{0}^{t} dt' \dot \theta_{+}(t')\sin(n\theta_-(t')),
\label{inffunc1b}
\end{equation}
and
\begin{equation}
S_{I}[\theta_{+},\theta_-]=-2\pi\alpha \sum\limits_{n=1}^\infty a_n\int\limits_{0}^{t} dt' \int\limits_{0}^{t} dt''\frac{\pi T^2}{\sinh^2(\pi T(t'-t''))}\cos(n(\theta_{+}(t')-\theta_{+}(t'')))\sin\frac{n\theta_-(t')}{2}\sin\frac{n\theta_-(t'')}{2},
\label{inffunc1a}
\end{equation}
\end{widetext}
where $\alpha=3/(8 k_F^2l^2)$ is the effective coupling constant in our problem and $a_n$ are the Fourier coefficients equal
to $a_n=(2/(\pi r))\ln(r/n)$ for $n<r\equiv R/l \gg 1$ and to zero $a_n=0$ otherwise. The weak disorder condition $k_Fl \gg 1$ implies
a small effective coupling constant $\alpha \ll 1$. It is also worth pointing out that the above influence functional reduces
to one derived within the Caldeira-Leggett model provided one chooses $\alpha =\eta R^2/\pi$ and $a_n=\delta_{1n}$, where
$\eta$ defines effective friction produced by the bath and $\delta_{ij}$ is the Kronecker symbol.
In the case of the Caldeira-Leggett environment decoherence of
a quantum particle on a ring was investigated with the aid of
both real-time \cite{GZ98} and Matsubara \cite{Paco} techniques which
yield similar results, i.e. exponential suppression of quantum
coherence down to $T=0$ at sufficiently large ring radii.
This result is by no means surprising since the model is
exactly equivalent to that of Coulomb blockade in a
single electron box where exponential reduction of the effective charging energy at
large conductances is also well established \cite{pa91,HSZ}.
The model of a particle in a diffusive electron gas
was employed by different authors
\cite{Paco,GHZ,HlD,CH,KH,SZ09} investigating
the effect of interaction-induced decoherence on the average
value of PC. Below we will make use of this model in order
to analyze PC fluctuations in the presence of quantum
decoherence.
\subsection{Perturbation theory}
Provided the ring radius $r$ is sufficiently small one can proceed perturbatively in $\alpha$. Consider the kernel
of the evolution operator $\mathcal U$ which establishes a relation between the density matrix elements at different moments of time
\begin{equation}
\tilde \rho (m_1,m_2;t)=\sum\limits_{m_1',m_2'}\mathcal U_{m_1,m_1'}^{m_2,m_2'}(t)\tilde \rho (m_1',m_2';0),
\end{equation}
where
\begin{equation}
\tilde \rho (m,n;t)=\int\limits_0^{2\pi}\frac{d\theta_1}{2\pi}\int\limits_0^{2\pi}\frac{d\theta_2}{2\pi}\rho (\theta_1,\theta_2;t) e^{-im\theta_1+in\theta_2}
\end{equation}
is the density matrix in the momentum representation which remains diagonal for the problem in question.
Therefore one can rewrite the above evolution equation as a matrix one for the diagonal elements of the density matrix.
In order to account for interaction effects we will employ the perturbation theory similar to that developed for the Coulomb blockade problem
\cite{GZ94,SS}. Expanding the influence functional $\exp(-iS_R-S_I)$ in powers of the coupling constant $\alpha$
and taking into account first order diagrams depicted in Fig. \ref{fig1} we arrive at the evolution kernel
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{fig1.eps}
\caption{First-order self-energy diagrams.}
\label{fig1}
\end{figure}
\begin{equation}
\tilde{\mathcal U}_{m,m'}^{m,m'}(\omega)\equiv\int\limits_0^\infty dt e^{i\omega t}\mathcal U_{m,m'}^{m,m'}(t)=\left[\frac{i}{\omega+i\tilde\Gamma_\omega}\right]_{m,m'},
\end{equation}
where
\begin{multline}
[\tilde \Gamma_\omega]_{m+n,m}=-\frac{\pi\alpha a_{|n|}}{2}\left(\frac{E_{m+n}-E_m+\omega}{e^{\frac{E_{m+n}-E_m+\omega}{T}}-1}\right.\\
\left.+\frac{E_{m+n}-E_m-\omega}{e^{\frac{E_{m+n}-E_m-\omega}{T}}-1}\right),
\end{multline}
\begin{equation}
[\tilde \Gamma_\omega]_{m,m}=-\sum\limits_{n=1}^\infty\left([\tilde \Gamma_\omega]_{m+n,m}+ [\tilde \Gamma_\omega]_{m-n,m}\right).
\end{equation}
Within the same approximation for PC noise power one finds
\begin{equation}
S_\omega=2\Re\sum\limits_{m,n}I_m\left(\tilde{\mathcal U}_{m,n}^{m,n}(\omega)-\frac{iP_m}{\omega+i0}\right)I_nP_n
\end{equation}
where $I_n=2E_C(n-\phi_x)$ and $P_n=e^{-E_C(n-\phi_x)^2/T}/\mathcal Z$ define respectively the current and the distribution function in the absence of interactions. The latter quantity also involves the partition function of our system, $\mathcal Z=\sum_n e^{-E_C(n-\phi_x)^2/T}$.
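The rate matrix elements above can be tabulated directly. A short sketch (Python; the helper names, the default $\phi_x=0.25$, and the explicit $x\to 0$ regularization of the Bose-type factor are our illustrative choices):

```python
import math

def fourier_a(n, r):
    """Bath Fourier coefficients: a_n = (2/(pi r)) ln(r/n) for 0 < n < r,
    and zero otherwise (r = R/l >> 1)."""
    n = abs(n)
    return (2.0 / (math.pi * r)) * math.log(r / n) if 0 < n < r else 0.0

def bose_factor(x, T):
    """x / (exp(x/T) - 1), continued to its limit T at x -> 0."""
    return T if abs(x) < 1e-12 * T else x / math.expm1(x / T)

def gamma_offdiag(m, n, omega, T, alpha, r, E_C=1.0, phi_x=0.25):
    """First-order rate matrix element [Gamma_omega]_{m+n,m} for the
    levels E_k = E_C (k - phi_x)^2."""
    dE = E_C * ((m + n - phi_x) ** 2 - (m - phi_x) ** 2)
    return -0.5 * math.pi * alpha * fourier_a(n, r) * (
        bose_factor(dE + omega, T) + bose_factor(dE - omega, T))
```

The rates vanish for $|n|\ge r$, reflecting the short-distance cutoff of the diffusive bath, and reduce to golden-rule transition rates at $\omega=0$.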
We have numerically evaluated both the kernel of the evolution operator and PC noise power. Our results are
displayed in Fig. \ref{fig2} at different values of temperature and the magnetic flux.
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{fig2.eps}
\caption{PC noise power for $\pi\alpha =0.05$ and $r=5$ at different values of $T$ and $\phi_x$. The frequency $\omega$ and noise power $S_\omega$ are normalized respectively by $E_C$ and by $e^2E_C/(4\pi^2)$.}
\label{fig2}
\end{figure}
One observes a strong dependence of PC noise power on the external magnetic flux $\phi_x$. This feature
indicates the coherent nature of PC noise \cite{SZ10} which clearly persists also in the presence of dissipation provided the effect of the latter is
sufficiently weak. It turns out that PC noise grows with increasing $\phi_x$ and formally diverges in the vicinity of the point $\phi_x=0.5$.
This divergence has to do with the fact that the distance between the two lowest energy levels $\delta E(\phi_x)=E_C(1-2|\phi_x|)$
becomes small in this limit. Hence, the system undergoes rapid transitions between these states corresponding to different PC values.
As a result, PC fluctuations in our system get greatly enhanced as soon as the flux approaches the value $\phi_x=0.5$.
We also emphasize that PC noise does not vanish even in the limit $T \to 0$. In this case $S_\omega$ equals to zero only at frequencies
below the inter-level distance $\omega <\delta E(\phi_x)$ and remains non-zero at higher values of $\omega$. We also point out that PC noise
vanishes at $\phi_x=0$ if evaluated perturbatively in the lowest order in $\alpha$. However, non-zero PC noise at $\phi_x=0$ is recovered
if one goes beyond perturbation theory in the interaction, as it will be demonstrated below. The latter property also follows from
the general expressions formulated in terms of the exact eigenstates of the total Hamiltonian \cite{SZ10}.
Finally, we observe that at non-zero $T$ an additional zero-frequency peak appears in $S_\omega$. This peak grows with increasing temperature and eventually merges with all other peaks, forming a wide hump at
sufficiently high $T$. In this case quantum coherence gets essentially suppressed and PC noise becomes flux-independent.
\subsection{Non-perturbative regime}
Let us now go beyond perturbation theory in the effective coupling constant $\alpha$.
At first sight this step might be considered unnecessary since within the applicability range of our model
this coupling constant always remains small $\alpha \ll 1$. However, it turns out \cite{GHZ} that the actual parameter that controls the strength
of interaction effects is $\alpha r$ rather than $\alpha$. Hence, should the ring radius be sufficiently large, i.e.
\begin{equation}
4\pi\alpha r \gg 1,
\label{npl}
\end{equation}
perturbation theory in the interaction fails and non-perturbative analysis of the problem becomes inevitable.
In the limit (\ref{npl}) and not too low temperature one can employ the semiclassical approximation which amounts to expanding the action
(\ref{inffunc1b}), (\ref{inffunc1a}) up to quadratic in $\theta_{-}$ terms. The resulting effective action can be exactly
reformulated in terms of the quasiclassical Langevin equation \cite{Schmid,AES,GZ92}
for the ``center-of-mass'' variable $\theta_+$. For the model under consideration this equation reads
\begin{eqnarray}
-\frac{1}{2E_C}\ddot \theta_+(t)-\frac{\gamma}{2}\dot \theta_+(t)=\sum\limits_{n=1}^\infty (\xi_n(t)\cos(n\theta_+(t))\quad\nonumber\\+\lambda_n(t)\sin(n\theta_+(t))),
\label{langev}
\end{eqnarray}
where we introduced the parameter
\begin{equation}
\gamma=2\pi\alpha \sum\limits_{n=1}^\infty a_n n^2=4\pi\alpha r^2
\end{equation}
and defined Gaussian stochastic fields $\xi_n(t)$ and $\lambda_n(t)$ with the correlators
\begin{eqnarray}
\langle\xi_n(t)\xi_m(t')\rangle_{\xi,\lambda}=\langle\lambda_n(t)\lambda_m(t')\rangle_{\xi,\lambda}=\qquad\nonumber\\=-\delta_{m,n}\pi\alpha a_n n^2\frac{\pi T^2}{\sinh^2(\pi T(t-t'))},
\label{cor1}
\end{eqnarray}
\begin{equation}
\langle\xi_n(t)\lambda_m(t')\rangle_{\xi,\lambda}=0.
\label{cor2}
\end{equation}
At high temperatures the white noise limit is realized,
\begin{equation}
\langle\xi_n(t)\xi_m(t')\rangle_{\xi,\lambda}=2\delta_{m,n}\pi\alpha a_n n^2T\delta(t-t'),
\label{wn}
\end{equation}
and Eq. (\ref{langev}) can be solved exactly. In this limit we obtain
\begin{equation}
S_\omega=\frac{e^2\gamma T E_C^2}{\pi^2(\omega^2+(\gamma E_C)^2)}.
\label{htpn}
\end{equation}
At lower values of $T$ the approximation (\ref{wn}) fails and more accurate Eqs. (\ref{cor1}), (\ref{cor2}) should be employed.
Treating the noise terms in Eq. (\ref{langev}) perturbatively \cite{GZ92} and taking into account only the zeroth and the first order
contributions we arrive at the result
\begin{equation}
\theta_+(t)=\theta_+^{(0)}+\theta_+^{(1)}(t),
\label{fior}
\end{equation}
where $\theta_+^{(0)}$ is some physically irrelevant constant and $\theta_+^{(1)}(t)$ obeys the equation
\begin{eqnarray}
-\frac{1}{2E_C}\ddot \theta_+^{(1)}(t)-\frac{\gamma}{2}\dot \theta_+^{(1)}(t)=\sum\limits_{n=1}^\infty \xi_n(t),
\label{langev1}
\end{eqnarray}
which allows us to immediately recover the noise power
\begin{equation}
S_\omega=\frac{e^2\gamma E_C^2}{2\pi^2(\omega^2+(\gamma E_C)^2)}\omega\coth\frac{\omega}{2T}.
\label{noiseHT}
\end{equation}
Obviously, this result reduces back to Eq. (\ref{htpn}) in the limit $T \gg \omega$.
At $\omega \ll \gamma E_C$ the parameter $E_C$ drops out from the expression for the noise power and we get
\begin{equation}
S_\omega =\frac{e^2\omega}{2\pi^2\gamma}\coth\frac{\omega}{2T},
\end{equation}
i.e. in this case $S_\omega \propto 1/\alpha$. For $\omega \to 0$ this expression further reduces to $S_0 \propto T/\gamma$.
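Eq. (\ref{noiseHT}) and its limiting forms can be checked with a few lines of code (the function name and parameter defaults are our illustrative choices; $e=1$):

```python
import math

def noise_power(omega, T, alpha, r, E_C=1.0, e=1.0):
    """PC noise power of Eq. (noiseHT): a Lorentzian of width gamma*E_C
    times the quantum factor omega*coth(omega/2T), gamma = 4*pi*alpha*r^2."""
    gamma = 4.0 * math.pi * alpha * r ** 2
    if abs(omega) < 1e-12:
        quantum = 2.0 * T                 # omega*coth(omega/2T) -> 2T
    else:
        quantum = omega / math.tanh(omega / (2.0 * T))
    return (e**2 * gamma * E_C**2 * quantum
            / (2.0 * math.pi**2 * (omega**2 + (gamma * E_C) ** 2)))
```

At $\omega=0$ this reproduces $S_0=e^2T/(\pi^2\gamma)$, i.e. $S_0\propto T/\gamma$, and for $T\gg\omega$ it reduces to the white-noise result (\ref{htpn}).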
The function $S_\omega$ (\ref{noiseHT}) is displayed in Fig. \ref{f5} at different temperatures.
\begin{figure}[t]
\includegraphics[width=0.97\columnwidth]{fig3.eps}
\caption{Noise power at different temperatures for $\pi\alpha =0.05$ and $r=10$. Frequency $\omega$ and PC noise power are normalized respectively by $E_C$ and by $e^2E_C/(4\pi^2)$.}
\label{f5}
\end{figure}
Comparing these results with those derived perturbatively in Sec. 2.B we observe that, while at weak interactions
the PC noise remains coherent and, hence, depends on the external magnetic flux $\Phi_x$, in the limit of strong interactions (\ref{npl})
this dependence is practically absent. This is due to strong decoherence effect produced by our dissipative bath. As a result, in the limit
(\ref{npl}) the average value of PC gets exponentially suppressed, while PC noise does not vanish but becomes essentially incoherent.
Note that, strictly speaking, the flux-dependent contribution to $S_\omega$ survives also in this limit, but it remains exponentially small, as
demonstrated by the analysis of Ref. \onlinecite{SZ11}. Technically, the presence of such a $\phi_x$-dependent correction to the result (\ref{noiseHT})
is related to the fact that the angle variable $\theta$ is defined on a ring, i.e. is compact. The Langevin equation approach employed here
``decompactifies'' this variable, thereby capturing only the $\phi_x$-independent contributions to PC noise power.
In order to estimate the leading $\phi_x$-dependent correction to Eq. (\ref{noiseHT}) one can follow the analysis initially developed
for the problem of weak Coulomb blockade in metallic quantum dots \cite{GZ96}.
This approach allows one to obtain the relation between the density matrices and expectation values evaluated for the problems described by
the same Hamiltonian but respectively compact and non-compact variables. Without going into corresponding details here we only quote the
result \cite{SZ11}
\begin{eqnarray}
S_\omega=\frac{e^2\gamma E_C^2\omega\coth(\omega/2T)}{2\pi^2(\omega^2+(\gamma E_C)^2)}\qquad\qquad\qquad\qquad\nonumber\\
\times\left(1-A e^{-4\pi \alpha r}\cos(2\pi\phi_x)\right),
\label{noiseTot}
\end{eqnarray}
where $A \propto \alpha r$. Thus, in the non-perturbative limit (\ref{npl}) the coherent flux-dependent contribution is indeed
exponentially small and can be safely neglected as compared to the main incoherent term (\ref{noiseHT}).
\section{PC noise in thin superconducting nanorings}
Let us now turn to the analysis of persistent current noise in a dissipationless system, namely a superconducting nanoring.
As long as the ring remains sufficiently thick superconducting fluctuations can be ignored and, hence, there exists no physical mechanism
that could cause PC fluctuations. If, however, the ring becomes thin, superconductivity may be disrupted in various places in the ring
due to fluctuations of the order parameter. At low temperatures most important fluctuations of that kind are quantum phase slips (QPS) \cite{AGZ}.
Below we will demonstrate that the properties of superconducting nanorings in the presence of QPS are described by the effective theory
equivalent to that for a quantum particle on a ring in a periodic potential.
The starting point of our derivation is the expression for the grand partition function $\mathcal Z$.
This expression can be represented in terms of a path integral over the phase $\varphi$ of the superconducting order parameter.
Employing the low-energy effective action for a quasi-1d superconducting wire \cite{ZGOZ,GZ01,anne} one finds
\begin{equation}
\mathcal Z=\sum\limits_{m,n}\int \mathcal D\varphi e^{-\frac{\lambda}{2\pi}\int dxd\tau\left(v(\partial_x\varphi)^2+v^{-1}(\partial_\tau\varphi)^2\right)},
\label{partf}
\end{equation}
where we defined $\lambda =\frac{\pi^{2} N_{0}D\Delta s}{2v}$, $s$ is the ring cross section, $N_{0}$ is the density of states at Fermi level, $\Delta$ is
the absolute value of the superconducting order parameter, $D$ is the diffusion coefficient and $v=\sqrt{\pi\sigma \Delta s/C}$ is the velocity
of the low-energy plasmon mode propagating along the wire (the so-called Mooij-Sch\"on mode). Here $\sigma$ is the Drude normal-state conductivity of our metallic ring and
$C$ is the wire capacitance per unit length.
The path integral in Eq. (\ref{partf}) should be performed with periodic (in imaginary time) boundary conditions
$\varphi(x,0)=\varphi(x,\beta)+2\pi m$, where $m$ is an arbitrary integer number (the so-called winding number).
The boundary conditions should also be periodic with respect to the spatial coordinate along the ring and, in addition, should
be sensitive to the magnetic flux piercing the ring, i.e. $\varphi(L,\tau)=\varphi(0,\tau)+2\pi(\phi_x+n)$.
Here $L=2\pi R$ is the ring perimeter, $\phi_{x}=\Phi/\Phi_{sc0}$ and $\Phi_{sc0}=\Phi_0/2$ is the superconducting flux quantum.
The partition function (\ref{partf}) can be evaluated semiclassically. As usual, in the main approximation it suffices
to take into account all relevant saddle point configurations of the phase variable $\varphi$ which satisfy the equation
\begin{equation}
(\partial_\tau^2+v^2\partial_x^2)\varphi(x,\tau)=0.
\label{speq}
\end{equation}
Apart from trivial solutions of this equation linear in $\tau$ and $x$
there exist nontrivial ones which correspond to virtual phase jumps by $\pm 2\pi$ at various points of a superconducting ring.
These quantum topological objects can be viewed as vortices in space-time and represent QPS events \cite{AGZ,ZGOZ,GZ01}.
All these configurations can be effectively summed up with the aid of the approach involving the so-called duality transformation.
In order to proceed let us express the general solution of Eq. (\ref{speq}) in the form
\begin{equation}
\varphi(x,\tau)=a_{m}\tau+b_{n}x+\varphi^{qps}(x,\tau),
\end{equation}
where $a_m$ and $b_n$ are some constants fixed by the boundary conditions. We also introduce the vorticity field $\varpi(x,\tau)$ with the aid of the relations
\begin{equation}
v\partial_x\varpi=\partial_\tau\varphi^{qps},\quad \partial_\tau\varpi=-v\partial_x\varphi^{qps}.
\end{equation}
This field is single-valued and it obeys the equation
\begin{equation}
\partial^2_\tau\varpi+v^2\partial^2_x\varpi=2\pi v\sum\limits_j\nu_j\delta(x-x_j)\delta(\tau-\tau_j),
\end{equation}
where $x_j$, $\tau_j$ and $\nu_j$ denote respectively the space and time coordinates of the $j$-th phase slip and its topological charge (phase winding). The partition function for a given saddle point solution can be rewritten as a path integral over the $\varpi$-field containing the functional delta function which follows from the above equation. Performing a summation over all possible QPS configurations we obtain
\begin{widetext}
\begin{multline}
\mathcal Z=\sum\limits_{N=0}^\infty\sum\limits_{\nu_1,..,\nu_N=\pm1}\int\limits_0^{2\pi}\frac{dz}{2\pi}\int dx_1d\tau_1...dx_Nd\tau_N\sum\limits_{m,n=-\infty}^\infty e^{2\pi in\phi_x-\frac{\pi v\beta m^2}{2g L}-\frac{\pi L n^2}{2g \beta v}} \left(\frac{\gamma_{QPS}}{2}\right)^N e^{2\pi i m\sum\limits_j\nu_j\frac{x_j}{L} -2\pi i n\sum\limits_j\nu_j\frac{\tau_j}{\beta}}
\\\int\mathcal D\varpi
e^{-\frac{g}{2\pi}\int dxd\tau\left(v(\partial_x\varpi)^2+v^{-1}(\partial_\tau\varpi)^2\right)}\ e^{iz\sum\limits_j\nu_j}
\delta\left(\partial^2_\tau\varpi+v^2\partial^2_x\varpi-2\pi v\sum\limits_j\nu_j\delta(x-x_j)\delta(\tau-\tau_j)\right).
\label{zgen}
\end{multline}
\end{widetext}
Here $\beta =1/T$
\begin{equation}
\gamma_{QPS} \sim (g_\xi \Delta /\xi )\exp (-a g_\xi )
\label{gqps}
\end{equation}
is the QPS rate \cite{GZ01}, $g_\xi = 4\pi N_0Ds/\xi $ is the dimensionless conductance of the wire segment of length equal to the
superconducting coherence length $\xi$ and $a$ is a numerical prefactor of order one. Eqs. (\ref{zgen}), (\ref{gqps}) are applicable
provided $g_\xi$ is sufficiently large, i.e. $g_\xi e^{-ag_\xi /2}\ll 1$.
Rewriting the delta function in Eq. (\ref{zgen}) as a path integral of an exponent and performing summation over all
QPS configurations as well as integration over $\varpi$ we get
\begin{equation}
\mathcal Z =\sum\limits_{m,n=-\infty}^\infty e^{2\pi in\phi_x}\int\mathcal D\theta e^{-S_{\rm eff}[\theta ]},
\end{equation}
where
\begin{multline}
S_{\rm eff}=\int dxd\tau\left(\frac{(\partial_\tau\theta )^2+v^2(\partial_x\theta )^2}{8\pi v \lambda}-\gamma_{QPS}\cos\theta \right).
\label{efacchi}
\end{multline}
In contrast to the original problem here the path integration is performed over the single-valued field $\theta $ with periodic boundary conditions
\begin{equation}
\theta (x,\beta)-\theta (x,0)=2\pi n,\quad \theta (L,\tau)-\theta (0,\tau)=2\pi m.
\end{equation}
PC noise power can be evaluated directly making use of the above equations and the expression for the current $I$, which is
just proportional to the phase difference around the ring,
\begin{equation}
I(\tau)=\frac{2\pi e v\lambda}{L}\left(\varphi(L,\tau)-\varphi(0,\tau)\right).
\end{equation}
Having evaluated the Matsubara current-current correlation function one performs analytic continuation to real times and,
taking into account the fluctuation-dissipation theorem, arrives at PC noise power spectrum $S_\omega$.
In the limit of low temperature $T \ll v/L$ and provided the ring perimeter is not too large
one can ignore the spatial dependence of the field $\theta$ and, hence, neglect the term $v^2(\partial_x\theta)^2$ in the effective action (\ref{efacchi}).
After that our problem becomes effectively zero-dimensional and exactly equivalent to that of a particle on a ring in the presence of the cosine
external potential. In other words, we have mapped our problem onto that described by the Hamiltonian
\begin{equation}
\hat H=\frac{(\hat \phi -\phi_x)^2}{2MR^2}+U_0 (1-\cos (\kappa \theta )) ,
\label{H1}
\end{equation}
where one should now identify $\kappa=1$,
\begin{equation}
\frac{1}{MR^2}\to E_R\equiv\frac{\pi^2 N_0 D\Delta s}{R}\sim \frac{g_{\xi}\Delta\xi}{R}
\end{equation}
and
\begin{equation}
U_0\to 2\pi R\gamma_{QPS} \sim \frac{g_{\xi}\Delta R}{\xi} e^{-ag_\xi}.
\end{equation}
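These identifications fix the scale at which phase slips start to dominate: the ratio $U_0/E_R=(R/\xi)^2 e^{-ag_\xi}$ crosses unity at $R\sim\xi e^{ag_\xi/2}$. A short numerical sketch (order-one prefactors suppressed; the function names are our illustrative choices, and the constant $a$ is model dependent):

```python
import math

def u0_over_er(R, xi, g_xi, a=1.0):
    """Ratio of the QPS potential U0 ~ (g_xi*Delta*R/xi)*exp(-a*g_xi) to the
    kinetic scale E_R ~ g_xi*Delta*xi/R (order-one prefactors dropped)."""
    return (R / xi) ** 2 * math.exp(-a * g_xi)

def crossover_radius(xi, g_xi, a=1.0):
    """Radius at which U0 ~ E_R; for larger rings phase slips dominate."""
    return xi * math.exp(a * g_xi / 2.0)
```

The exponential dependence on $g_\xi$ means the crossover radius is extremely sensitive to the wire cross section.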
The analysis of PC fluctuations for the model (\ref{H1}) in the limit of strong external potential $U_0 \gg E_R$ was carried
out in Ref. \onlinecite{SZ10}. In the case of superconducting nanorings it yields
\begin{multline}
S_\omega=\frac{e^2\Omega^3}{2\pi U_0} \left(\delta(\omega-\Omega-\Lambda\cos(2\pi\phi_{x}))\right.\\\left.+\delta(\omega+\Omega+\Lambda\cos(2\pi\phi_{x}))\right),
\label{scnol}
\end{multline}
where $\Omega=\pi\sqrt{\pi N_0 D\Delta s \gamma_{QPS}} \sim g_{\xi}\Delta e^{-ag_{\xi}/2}$ is the frequency of oscillations of the ``particle'' near the bottom of the cosine potential and
\begin{equation}
\Lambda=256\sqrt{\frac{U_{0}^{3}}{\pi\Omega}}e^{-\frac{8U_{0}}{\Omega}}.
\end{equation}
The result (\ref{scnol}) demonstrates that in the low temperature limit PC noise power spectrum $S_\omega$ differs from zero due to the effect of
QPS and has the form of two sharp peaks
at frequencies well below the superconducting gap $\Delta$. The exact positions of these peaks can be tuned by the flux $\phi_x$ piercing the ring,
though only weakly, since in this case $\Omega \gg \Lambda$. Note that the condition $U_0 \gg E_R$ is equivalent to $R \gg R_c \sim \xi \exp (ag_\xi /2)$
in which case the average value of PC is exponentially suppressed, $\langle I\rangle \propto \exp (-R/R_c)$ \cite{AGZ}. The PC noise power spectrum peaks
also remain small in this case, though they decrease only as $S_\omega \propto R_c/R$ with increasing ring radius.
In the opposite case $E_R\gg U_0$ the cosine potential term remains small as compared to the particle kinetic energy. Hence,
in this limit the effect of QPS may be considered perturbatively in $\gamma_{QPS}$. In the absence of QPS the noise power $S_\omega$
shows only one peak at zero frequency with the amplitude equal to the current dispersion, i.e.
\begin{equation}
S_\omega=2\pi (\langle \hat I^2\rangle-\langle\hat I\rangle^2)\delta(\omega).
\end{equation}
The amplitude of this peak decreases with temperature and tends to zero in the limit $T \to 0$. Employing the standard quantum mechanical perturbation theory,
in the lowest non-vanishing order in $U_0$ one recovers additional peaks at frequencies corresponding to the transitions between neighboring energy levels.
These peaks survive even at zero temperature $T=0$ in which case one finds
\begin{widetext}
\begin{multline}
S_{\omega}=\frac{e^{2}U_{0}^{2}}{\pi (1+2\phi_{x})^{2}}\left(\delta(\omega-E_{R}(1/2+\phi_{x}))+\delta(\omega+E_{R}(1/2+\phi_{x}))\right) \\ +\frac{e^{2}U_{0}^{2}}{\pi (1-2\phi_{x})^{2}}\left(\delta(\omega-E_{R}(1/2-\phi_{x}))+\delta(\omega+E_{R}(1/2-\phi_{x}))\right).
\end{multline}
\end{widetext}
This equation is applicable as long as the condition
\begin{equation}
\xi g_\xi \ll R\ll\min\left(R_c,\frac{v}{2\pi T}\right)
\end{equation}
remains fulfilled. We observe that in this case PC noise power $S_\omega$ has the form of four sharp peaks at frequencies which strongly depend on the external flux $\phi_x$.
E.g., by tuning the flux to a value close to one half of the superconducting flux quantum, $\phi_x \approx \pm 1/2$, one should observe a strong enhancement of
the noise peaks occurring in the vicinity of zero frequency $\omega =0$. The physical reason for this enhancement is the same as that already
discussed in Sec. 2B: The energies of two lowest levels become close to each other at such values of $\phi_x$ implying the
possibility of rapid transitions between these states. Such intensive transitions, in turn, imply strong fluctuations of PC.
Exactly at resonance $\phi_x = \pm 1/2$ second-order perturbation theory fails and a more accurate treatment becomes necessary.
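For orientation, assuming the standard spectrum of a charged particle on a ring (with the normalization of $E_R$ fixed by the peak positions above), the relevant levels and gaps are
$$
E_n=\frac{E_R}{2}(n+\phi_x)^2, \qquad E_1-E_0=E_R\Bigl(\frac12+\phi_x\Bigr), \qquad E_{-1}-E_0=E_R\Bigl(\frac12-\phi_x\Bigr),
$$
so that at $\phi_x\to\mp 1/2$ one of the two gaps closes and the corresponding pair of noise peaks merges towards $\omega=0$.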
\section{Conclusions}
In this paper we analyzed fluctuations of persistent current in nanorings with and without dissipation.
Specifically, we restricted our attention to PC noise and evaluated symmetric current-current correlation function.
Comparing the results obtained within two different models analyzed in Sec. 2 and 3 we observe both similarities and
important differences in the behavior of these systems.
To begin with, in the absence of interactions and dissipation in the model of a particle on a ring (Sec. 2) as well as in the absence of phase slips
in superconducting nanorings (Sec. 3) PC fluctuates only at non-zero $T$, and
no such fluctuations occur provided the system remains in its ground state at $T=0$.
In the presence of interactions in the first model or quantum phase slips in the second model
the current operator does not anymore commute with the total Hamiltonian of the system and
fluctuations of PC do not vanish down to zero temperature. Yet another qualitative similarity
between these systems is that in both cases PC noise decreases with increasing ring radius $R$.
The most important physical difference between the models considered in Sec. 2 and 3 is the presence of dissipation and, hence, decoherence
in the first model and their total absence in the second model. Accordingly, at low temperatures
PC noise always remains coherent in the second case, which implies
that PC noise power spectrum essentially depends on the magnetic flux $\Phi_x$ piercing the ring. In the absence of dissipation at $T \to 0$ PC noise
has the form of sharp peaks at frequencies corresponding to energy differences between the system states for which quantum mechanical transitions
are possible. At flux values close to one half of the flux quantum some energy levels also become close to each other which means strong
enhancement of PC fluctuations.
Coherent fluctuations of PC are also possible in the presence of dissipation provided its effect remains sufficiently weak
and the ring radius remains small. In this limit decoherence effect of the external dissipative bath is still insignificant.
Narrow peaks in PC noise get somewhat broadened even at $T \to 0$ due to the presence of dissipation, but the dependence of $S_\omega$ on the magnetic flux
persists also in this case. In rings with larger radii, on the contrary, fluctuations in the dissipative bath strongly suppress quantum
coherence down to $T=0$ and induce incoherent $\Phi_x$-independent
current noise in the ring which persists even at $\Phi_x=0$ when the average PC is absent.
Thus, quantum coherence and its suppression by interactions in meso- and nanorings can be
experimentally investigated by measuring PC noise and its dependence on the external
magnetic flux. It would be interesting to carry out such experiments in the near future.
\section{Introduction}
An {\it almost K{\"a}hler structure} on a manifold $M^{2n}$ is an
almost Hermitian structure
$(g, J, \Omega)$ with a closed, and
therefore symplectic fundamental 2-form $\Omega$.
If additionally the almost complex structure $J$
is integrable, then $(g, J, \Omega)$ is a K{\"a}hler structure.
Almost K{\"a}hler metrics for which the almost complex structure is not
integrable
will be called {\it strictly} almost K{\"a}hler metrics.
Much effort has been devoted to finding curvature
conditions on the metric which ensure the integrability of the
almost complex structure. For example, an old, still open
conjecture of Goldberg \cite{Go} says that a compact almost
K{\"a}hler, Einstein manifold is necessarily K{\"a}hler.
Important progress was made by K. Sekigawa who proved that the
conjecture is true if the scalar curvature is non-negative
\cite{Se2}. The case of negative scalar curvature is still wide
open, despite of recent progress in dimension 4. Nurowski and
Przanowski \cite{NuP} and K.P.Tod \cite{Arm1,Tod} constructed
4-dimensional local examples of Einstein (in fact, Ricci flat),
strictly almost K{\"a}hler manifolds. Thus, it is now known that
compactness must play an essential role, should the Goldberg
conjecture be true. In all these examples the structure of the
Weyl tensor is unexpectedly special --- the anti-self-dual part
of the Weyl tensor vanishes and the fundamental form is an
eigenform of the self-dual Weyl tensor (equivalently, $W^- =0$
and $W^+_2=0$, see below). Conversely, a recent result of
\cite{Arm1} states that any 4-dimensional strictly almost
K{\"a}hler, Einstein manifold is obtained by
Nurowski-Przanowski-Tod construction, provided that the
fundamental form is an eigenform of the Weyl tensor. It follows
that such a manifold can never be compact. Some other positive
partial results on the Goldberg conjecture in dimension 4 have
been obtained by imposing additional assumptions on the structure
of Weyl tensor, cf. \cite{AA,Arm0,Arm1,Arm,OS2}.
For an oriented four-dimensional Riemannian manifold, the
${\rm{SO}}(4)$-decomposition of the Weyl tensor $W$ into its
self-dual and anti-self-dual parts, $W^+$ and $W^-$, is well
known. Moreover, for
every almost-Hermitian $4$-manifold $(M, g, J, \Omega)$ the self-dual
part of the Weyl
tensor decomposes further under the action of the unitary group ${\rm{U}}(2)$.
To see this, consider
$W^+$ as a trace-free, self-adjoint endomorphism of the bundle of self-dual
2-forms $\Lambda^+M$. Since $\Lambda^+M$ decomposes under
${\rm{U}}(2)$ as ${\Bbb R}\Omega \oplus [\hbox{\hspace{-0.15em}}[ \Lambda^{0,2} M]\hbox{\hspace{-0.15em}}]$, we
can write $W^+$ as a matrix with
respect to this block decomposition as follows:
\[
\left( \begin{array}{c|c} \frac{\kappa}{6} & W^+_2 \\ \hline
(W^+_2)^* & W^+_3 - \frac{\kappa}{12}
{\rm Id}_{|\Lambda^{0,2} M}
\end{array}
\right),
\]
where the smooth function $\kappa$ is the so-called conformal scalar curvature,
$W^+_2$ corresponds to the part of $W^+$ that interchanges the two
factors of the ${\rm{U}}(2)$-splitting of $\Lambda^+M$, and $W^+_3$ is a
trace-free, self-adjoint endomorphism of the real vector
bundle $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}]$ underlying the anti-canonical
bundle $\Lambda^{0,2}M$. Also, the traceless part of the
Ricci tensor ${\rm Ric}_0$ decomposes under $\rm{U}(2)$ into two
irreducible components
--- the invariant part and
the anti-invariant part with respect to $J$, ${\rm Ric}_0^{\rm inv}$ and
${\rm Ric}_0^{\rm anti}$.
Correspondingly, there are several interesting types of
almost Hermitian $4$-manifolds, each imposing the vanishing of certain
${\rm U}(2)$-components of ${\rm Ric}_0$ and $W$, cf. \cite{TV}.
The curvature of a K{\"a}hler metric
$(g,J)$, for instance, satisfies any of the following three conditions:
\vspace{0.2cm}
\noindent
\begin{center}
(i) ${\rm Ric}_0^{\rm anti}=0$; \ (ii) $W^+_2=0$; and (iii) $W^+_3=0$.
\end{center}
\vspace{0.2cm}
\noindent
These three conditions are equivalent to the fact that
the curvature (considered as a ${\mathbb C}$-linear symmetric
endomorphism of the bundle of complex 2-forms) preserves the type
decomposition of 2-forms with respect to $J$, a property commonly
referred to as the {\it second Gray condition of the curvature}, cf.
\cite{Gr}.
Of course, the curvature of an arbitrary almost
K{\"a}hler metric may have none of these algebraic symmetries.
It is natural, therefore, to wonder if the integrability of the
almost complex structure is implied by the conditions (i)-(iii) above.
In \cite{ADK} and \cite{AD} an affirmative answer to this question is
given for {\it compact} almost K{\"a}hler 4-manifolds,
by using some powerful global arguments
coming from the Seiberg-Witten theory and Kodaira
classification of compact complex surfaces. One is then motivated to ask
what local rigidity, if any, the conditions (i)-(iii) impose
on almost K\"ahler 4-manifolds. The goal of our paper is to answer
this question.
\vspace{0.2cm} We first provide a family of strictly almost K{\"a}hler
4-manifolds satisfying, more generally, the conditions (i) and
(ii), see Proposition \ref{prop1} below. Note that the strictly
almost K{\"a}hler, Ricci-flat examples of Nurowski,
Przanowski \cite{NuP} and Tod \cite{Arm1,Tod} satisfy (i) and
(ii) (but not (iii)), and our examples appear as a generalization
of Tod's construction \cite{Tod,Arm1}; instead of the
Gibbons-Hawking ansatz, we consider its generalized version
introduced by LeBrun in \cite{LeB}, and observe that appropriate
variable reductions lead to strictly almost K{\"a}hler metrics with
$J$-invariant Ricci tensor and with special structure of the Weyl
tensor. While the Nurowski-Przanowski-Tod examples are just
particular metrics in this family, it turns out that for other
distinguished metrics the conditions (i)-(iii) are fulfilled.
Looking more carefully at the metrics satisfying conditions
(i)-(iii) from our family, one can further see that all of them
are, in fact, (locally) isometric to the unique 4-dimensional
proper (i.e. non-symmetric) {\it 3-symmetric space} described by
Kowalski \cite{kow} (see Section 4 below); as a homogeneous space
it is isomorphic to $({\rm Isom}({\mathbb E}^2)\cdot
Sol_2)/SO(2)$ equipped with a left-invariant metric, or, by
introducing an invariant complex structure compatible with the
opposite orientation, it becomes isomorphic to the irreducible
homogeneous K{\"a}hler surface corresponding to the ${\bf
F_4}$-geometry of \cite{wall}. It might be also interesting to
note that this same example was discovered in yet a different
context by R. Bryant \cite{Br} (see also Remark 1).
Although one consequence of the existence of this example is that the
conditions
(i)-(iii) are not enough to ensure the local integrability of an almost K{\"a}hler
structure, we prove that, in fact, this is the only
such example in dimension four.
\begin{theo}\label{th1}
Any strictly almost K{\"a}hler 4-manifold whose curvature satisfies
${\rm Ric}_0^{\rm anti}=0, \ W^+_2=0, \ W^+_3=0$ is locally isometric to
the (unique) 4-dimensional proper 3-symmetric space.
\end{theo}
\noindent {\it Remarks.} 1.--- It follows by Theorem \ref{th1}
and the general theory of 3-symmetric spaces \cite{gray} that
any complete, simply connected strictly almost K{\"a}hler 4-manifold
satisfying the conditions (i)-(iii) is {\it globally} isometric
to the proper 3-symmetric 4-space.
2.--- Since
any 3-symmetric 4-space is almost K{\"a}hler and satisfies (i)-(iii)
\cite{gray}, Theorem
\ref{th1} in turn provides a differential geometric proof of the
existence and the uniqueness of the proper 3-symmetric 4-space (see,
however,
\cite{kow} for more general results obtained by using Lie algebra
techniques).
3.--- Combining Theorem \ref{th1} with Wall's classification of
compact locally homogeneous complex surfaces \cite{wall}, one
sees that there are no {\it compact} strictly almost K{\"a}hler
4-manifolds whose curvature satisfies the conditions (i)-(iii). This
provides an alternative proof of the integrability result
in \cite{ADK} (see also Corollary 3 below).
\vspace{0.2cm}
Although the main goal of this paper is the study of almost K{\"a}hler
4-manifolds which satisfy the three conditions (i)-(iii), Theorem
\ref{th1} is derived from the local classification of a larger
class of strictly almost K{\"a}hler 4-manifolds (Theorem
\ref{th2}), including as particular cases both the Einstein
metrics of \cite{NuP,Arm1} and the almost K{\"a}hler 4-manifold
satisfying the conditions (i)-(iii) (see Remark 2). Our results
therefore generalize those in \cite{Arm1}.
The proof of our results relies on the strategy already developed
in \cite{Arm1} for finding out whether a given Riemannian metric
locally admits a compatible almost K{\"a}hler structure, which
allows us, as in \cite{Arm1}, to reduce the problem to an
integrable Frobenius system. However, the more general class of
almost K{\"a}hler 4-manifolds that we consider in the current paper
leads to more involved proofs and makes the spinorial approach
invented in \cite{Arm1} somewhat less adequate. We thus prefer to
use classical tensorial notations, which we hope will ease the
task of the reader in following the technical parts.
\vspace{0.2cm} The paper is organized as follows: In Sections 2
and 3, we prepare the necessary background of almost K{\"a}hler geometry,
with a detailed analysis of the Riemannian curvature and its
covariant derivative, based on some representation theory. In
Section 4, we introduce our main examples of strictly almost K{\"a}hler
4-manifolds satisfying conditions (i) and (ii), and describe
those which satisfy conditions (i)-(iii); we show that the latter
are isometric to the unique proper 3-symmetric 4-space. The last
section is devoted to the proof of our main result which is
stated in Theorem 2; Theorem 1 is then just a particular case.
\section{The curvature tensor of almost K{\"a}hler 4-manifolds}
Let $(M,g)$ be an oriented, 4-dimensional Riemannian manifold. The involutive
action of the Hodge
operator $*$ on the bundle of 2-forms $\Lambda ^2M$ induces the decomposition
$\Lambda^{2}M = \Lambda^{+}M \oplus \Lambda^{-}M$ into the sub-bundles
of self-dual, resp. anti-self-dual 2-forms,
corresponding to the $+1$, resp. $-1$ eigenspaces of $*$.
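Concretely, in a positively oriented local orthonormal coframe $\{e^1,e^2,e^3,e^4\}$ these sub-bundles admit the standard local frames
$$
\Lambda^{\pm}M={\rm span}\{\,e^1\wedge e^2\pm e^3\wedge e^4,\ e^1\wedge e^3\pm e^4\wedge e^2,\ e^1\wedge e^4\pm e^2\wedge e^3\,\}.
$$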
We will implicitly identify vectors and covectors via the metric $g$ and,
accordingly, a 2-form $\phi$ with the corresponding skew-symmetric
endomorphism of the tangent bundle $TM$, by putting: $g(\phi(X),Y) =
\phi(X,Y)$ for any vector fields $X,Y$. Also, if $\phi,
\psi
\in TM^{\otimes 2}$, by $ \phi \circ \psi$ we understand the
endomorphism
of $TM$ obtained by the composition of the endomorphisms corresponding to
the two tensors. The inner product on $\Lambda^2M$ induced by $g$ will be
denoted by $\langle \cdot , \cdot \rangle$, so that the induced norm
differs by a factor of
$\frac{1}{2}$ from the usual tensor norm of $TM^{\otimes 2}$.
Considering the Riemannian curvature tensor $R$ as a symmetric
endomorphism of $\Lambda^2M$ we have the following well known
$\rm{SO}(4)$-splitting
\begin{equation}\label{so4}
R = \frac{s}{12}{\rm Id}_{| \Lambda^2M} + \widetilde{{\rm Ric}_{0}} + W^{+} +
W^{-},
\end{equation}
where $s$ is the scalar curvature, $\widetilde{{\rm Ric}_{0}}$ is the
Kulkarni-Nomizu
extension of the traceless Ricci tensor ${\rm Ric}_0$ to an endomorphism of
$\Lambda^2M$ (anti-commuting with $*$), and
$W^{\pm}$ are
respectively the self-dual and anti-self-dual parts of the Weyl tensor $W$.
The self-dual Weyl tensor $W^{+}$
is viewed as a section of the bundle $S_{0}^2(\Lambda^{+}M)$ of symmetric,
traceless endomorphisms of $\Lambda^{+}M$ (also considered as a sub-bundle of
the tensor product $\Lambda^{+}M \otimes \Lambda^{+}M$).
\vspace{0.2cm}
Let $(M,g,J)$ be an almost Hermitian 4-manifold, {\it i.e.}, an
oriented Riemannian 4-manifold $(M,g)$ endowed with a
$g$-orthogonal almost complex structure $J$ which induces the
chosen orientation of $M$. We denote by $\Omega$ the corresponding
fundamental 2-form, defined by $\Omega(X,Y) = g(JX,Y)$.
The action of $J$ extends to the cotangent bundle $\Lambda^1M$ by putting
$(J\alpha)(X) = -\alpha(JX)$, so as to be compatible with the Riemannian
duality between $TM$ and $\Lambda^1M$. This action defines an {\it
involution}, $\imath_{J}$,
on $\Lambda ^2M$ by putting $\imath_J(\phi)(X,Y) = \phi(JX,JY)$, which
in turn gives rise to
the following orthogonal splitting of $\Lambda^{+}M$:
\begin{equation}\label{2}
\Lambda^{+}M = {\Bbb R} \Omega \oplus [\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}] ,
\end{equation}
where $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}] $ denotes the
bundle of $J$-anti-invariant real 2-forms, {\it i.e.}, the 2-forms
$\phi$ such that
$\imath_{J}(\phi)= - \phi$. Note that $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}] $
is
the real underlying bundle of the anti-canonical bundle $(K_J)^{-1}=
\Lambda^{0,2}M$ of $(M, J)$; the induced complex structure $J$ on
$[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}] $ acts by $(J\phi)(X,Y)=-\phi(JX,Y)$.
Consequently, the vector bundle ${\cal W^+}=S_0^2(\Lambda^+M)$ of the
symmetric traceless
endomorphisms of $\Lambda ^+ M $ decomposes into the sum of three sub-bundles,
${\cal W}_{1}^+$, ${\cal W}_{2}^+$, ${\cal W}_{3}^+$, defined as follows,
see \cite{TV}:
\begin{enumerate}
\item[$\bullet$] ${\cal W}_1 ^+ = {\Bbb R} \times M $ is the sub-bundle of
elements
preserving the
decomposition (\ref{2}) and acting by homothety on the two factors;
hence it is
the trivial line
bundle generated by the element $ \frac{1}{8} \Omega \otimes \Omega - \frac{1}{12}
{\rm Id}_{| \Lambda^+M}$.
\item[$\bullet$] ${\cal W}_2 ^+ = [\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}] $ is the
sub-bundle of elements which
exchange the
two factors in (\ref{2}); the real isomorphism with $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M
]\hbox{\hspace{-0.15em}}] $ is seen
by identifying each
element $\phi$ of $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}] $ with the element $
\frac{1}{2} (\Omega \otimes
\phi + \phi \otimes \Omega )$ of ${\cal W}_2 ^+$.
\item[$\bullet$] ${\cal W}_3 ^+ = S_0^2([\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}] )$ is
the sub-bundle of
elements preserving the splitting (\ref{2}) and acting trivially on
the first factor ${\Bbb R} \Omega $.
\end{enumerate}
We then obtain the following $\rm{U}(2)$-splitting of the Riemannian curvature
operator, cf. \cite{TV}:
\begin{equation}\label{u(2)}
R = \frac{s}{12} {\rm Id}_{| \Lambda^2M} + ({\widetilde {{\rm Ric}_0}})^{\rm
inv} + ({\widetilde {{\rm Ric}_0}})^{\rm anti}
+
W_1 ^+ + W_2 ^+ + W_3 ^+ + W^- ,
\end{equation}
\noindent
where $ ({\widetilde {{\rm Ric}_0}})^{\rm inv} $ and $ ({\widetilde {{\rm
Ric}_0}})^{\rm anti} $ are
the
Kulkarni-Nomizu extensions of the $J$-invariant and the
$J$-anti-invariant
parts of
the traceless Ricci tensor, respectively, and $W_i ^+$ are the projections
of $W^+$ on
the spaces ${\cal W}_i ^+, \; i = 1,2,3$. The component $W_1^+$
is given by
\begin{equation}\label{w^+1}
W_1^+ = \frac{\kappa}{8} \Omega \otimes \Omega - \frac{\kappa}{12} {\rm Id}_{|
\Lambda^+M} ,
\end{equation}
where the smooth function $\kappa $ is the so-called {\it conformal
scalar curvature} of $(g, J)$;
\begin{equation}\label{w^+2}
W^+_2=-\frac{1}{4}(\Psi\otimes \Omega + \Omega\otimes \Psi),
\end{equation}
for a section $\Psi$ of $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}] $.\\
For any (local) section $\phi$ of $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M
]\hbox{\hspace{-0.15em}}] $ of square-norm 2,
the component in ${\cal W}^+_3$ is given by
\begin{equation}\label{w^+3}
W^+_3 = \frac{\lambda}{2}[\phi\otimes \phi - J\phi\otimes J\phi]
+ \frac{\mu}{2}[\phi\otimes J\phi + J\phi\otimes\phi],
\end{equation}
where $\lambda $ and $\mu$ are (locally defined) smooth functions.
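Collecting (\ref{w^+1})--(\ref{w^+3}), the self-dual Weyl tensor can thus be written, with respect to the splitting (\ref{2}), in the block form
$$
W^+=\left(\begin{array}{c|c}
\frac{\kappa}{6} & W^+_2\\ \hline
(W^+_2)^{*} & W^+_3-\frac{\kappa}{12}\,{\rm Id}
\end{array}\right),
$$
where the top-left entry is the eigenvalue of $W^+$ on ${\Bbb R}\Omega$, computed from (\ref{w^+1}) using $\langle \Omega,\Omega\rangle =2$.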
\vspace{0.2cm}
For any almost K{\"a}hler structure $(g,J,\Omega)$, the covariant derivative
$\nabla \Omega$ of the fundamental form is identified with the {\it
Nijenhuis} tensor of $(M,J)$, the obstruction for the
integrability of the almost complex structure $J$. Moreover,
$\nabla \Omega$ can be viewed as a section of the real vector bundle
underlying $\Lambda^{0,1}M\otimes \Lambda^{0,2}M$, which allows us to write with
respect to any section $\phi$ of $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M ]\hbox{\hspace{-0.15em}}]$:
\begin{equation}\label{na-om}
\nabla\Omega = a\otimes \phi - Ja \otimes J\phi.
\end{equation}
The 1-form $a$ satisfies $|\nabla \Omega|^2 = 4|a|^2$, provided that
$\phi$ is of square-norm 2. Consequently,
the covariant derivatives of $\phi$ and $J\phi$ are given
by
\begin{equation}\label{na-phi}
\nabla \phi = - a\otimes \Omega + b\otimes J\phi; \ \nabla J\phi = Ja\otimes \Omega -
b\otimes \phi,
\end{equation}
for some 1-form $b$.
Observe that
we have an $S^1$-freedom for
the choice of $\phi$ in the formulas (\ref{w^+3}) and (\ref{na-om}).
We shall refer to this as a {\it gauge
dependence} and any local section $\phi$ of $[\hbox{\hspace{-0.15em}}[ \Lambda^{0,2}M
]\hbox{\hspace{-0.15em}}] $ of square-norm 2
will be called a {\it gauge}.
\vspace{0.2cm}
\noindent
{\bf Convention.} From now on, $\phi$ will be assumed to be an
eigenform of $W^+_3$,
{\it i.e.}, the function $\mu$ in (\ref{w^+3}) identically
vanishes.
\vspace{0.2cm}
Note that the above assumption can be locally arranged (for a
smooth gauge $\phi$ !) on the open dense subset of points, $x$,
where either
$W^+_3(x)
\neq 0$, or $W^+_3\equiv 0$ in a neighbourhood of $x$; however, by
continuity, all gauge independent properties will hold everywhere on
$M$.
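To see how this is arranged, note that under the gauge rotation $\phi_t=\cos t\,\phi+\sin t\,J\phi$ (so that $J\phi_t=-\sin t\,\phi+\cos t\,J\phi$) the two generators appearing in (\ref{w^+3}) transform as
\begin{eqnarray*}
\phi_t\otimes\phi_t - J\phi_t\otimes J\phi_t &=& \cos 2t\,\big(\phi\otimes\phi - J\phi\otimes J\phi\big) + \sin 2t\,\big(\phi\otimes J\phi + J\phi\otimes \phi\big),\\
\phi_t\otimes J\phi_t + J\phi_t\otimes \phi_t &=& -\sin 2t\,\big(\phi\otimes\phi - J\phi\otimes J\phi\big) + \cos 2t\,\big(\phi\otimes J\phi + J\phi\otimes \phi\big),
\end{eqnarray*}
so the pair $(\lambda,\mu)$ rotates by twice the gauge angle; wherever $W^+_3\neq 0$ one may therefore choose $t$ locally smoothly so that $\mu$ vanishes identically.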
\vspace{0.2cm}
Once the gauge $\phi$ is fixed as above, one can
determine the smooth functions
$\kappa$ and $\lambda$ and the 2-form $\Psi$
in terms of the 1-forms $a$ and $b$ and the 2-form
$\phi$, or, equivalently in terms of $2$-jets of $J$.
For that, we first make use of the {\it Weitzenb{\"o}ck formula} for
self-dual 2-forms, cf. {\it e.g.} \cite{bourg}:
\begin{equation}\label{weitz}
\Delta \psi = \nabla^*\nabla \psi + \frac{s}{3}\psi - 2W^+(\psi).
\end{equation}
Since the fundamental form $\Omega$ is a self-dual, closed 2-form, it is
therefore harmonic and (\ref{weitz}) implies
$$|\nabla \Omega|^2 + \frac{2}{3}s-2\langle W^+(\Omega), \Omega \rangle = 0,$$
which, by (\ref{w^+1})--(\ref{w^+3}), is equivalent to
\begin{equation}\label{kappa}
\kappa - s = 6|a|^2 = \frac{3}{2}|\nabla \Omega|^2.
\end{equation}
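Indeed, by (\ref{w^+1})--(\ref{w^+3}) and $\langle\Omega,\Omega\rangle=2$ one has
$$
\langle W^+(\Omega),\Omega\rangle=\langle W^+_1(\Omega),\Omega\rangle=\frac{\kappa}{8}\langle\Omega,\Omega\rangle^2-\frac{\kappa}{12}\langle\Omega,\Omega\rangle=\frac{\kappa}{3},
$$
which, combined with $|\nabla\Omega|^2=4|a|^2$, yields (\ref{kappa}).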
Formula (\ref{kappa}) shows that the smooth
function $\kappa - s$ is everywhere non-negative on $M$; it
vanishes exactly at the points where the Nijenhuis tensor is
zero. Observe also that applying (\ref{weitz}) to $\Omega$ we involve
the 2-jets of $J$. Thus (\ref{kappa}) can be considered
as an ``obstruction''
to lifting the 1-jets of $J$ to 2-jets (see \cite{Arm1}), although
eventually it
takes the form of a condition on the 1-jets.
\vspace{0.2cm}
\noindent
In order to express $W^+_2$ and $W^+_3$ we make use of the Ricci
identity
\begin{equation}\label{ricid}
(\nabla^2_{X,Y} - \nabla ^2_{Y,X})(\Omega)(\cdot,\cdot) =
-R_{X,Y}(J\cdot,\cdot)-R_{X,Y}(\cdot,J\cdot).
\end{equation}
From (\ref{na-om}) we get
$$\nabla ^2|_{\Lambda^2 M}\Omega =(da - Ja\wedge b)\otimes \phi -
(d(Ja) + a \wedge b)\otimes J\phi, $$
so, (\ref{ricid}) can be rewritten as
\begin{equation}\label{*}
da - Ja\wedge b = - R(J\phi); \ d(Ja) + a\wedge b = - R(\phi).
\end{equation}
Projecting on $\Lambda^+M$ and using (\ref{u(2)})--(\ref{w^+3}) and (\ref{kappa}),
the equalities in (\ref{*}) give
\begin{eqnarray}\label{ricident}\label{lambda}
\lambda &=& - \frac{1}{2}\big{(} |a|^2 - \langle da,J\phi \rangle
+ \phi(a,b)\big{)}; \\ \label{mu}
\mu &=& - \frac{1}{2}\big{(} \langle da,\phi\rangle + J\phi(a, b)
\big{)} =0;
\end{eqnarray}
\begin{equation}\label{Psi}
\Psi = \big{(} \langle d(Ja),\Omega \rangle +
\Omega(a,b)\big{)}\phi + \big{(} \langle da,\Omega \rangle +
g(a,b)\big{)} J\phi.
\end{equation}
We observe again that the relations
(\ref{lambda})--(\ref{Psi}) are conditions on the
2-jets of the compatible almost K{\"a}hler structure $J$, and can be viewed as a
further ``obstruction'' to lifting the $1$-jets to $2$-jets, see \cite{Arm1}.
Similarly, projecting formulae (\ref{*}) on $\Lambda^-M$ we completely
determine the $J$-anti-invariant part of the Ricci tensor.
In order to determine its $J$-invariant part one
needs the 3-jets of $J$, involved in
the Ricci identity for the Nijenhuis tensor (viewed as a section of
$\Lambda^1M\otimes \Lambda^2M$).
Writing the Ricci identity with respect
to $\nabla \Omega$
is nothing but adding to (\ref{*}) one more relation coming from
$$(\nabla^2_{X,Y} - \nabla ^2_{Y,X})(\phi)(\cdot,\cdot) =
-R_{X,Y}(\phi\cdot,\cdot)-R_{X,Y}(\cdot,\phi\cdot).$$
Using (\ref{na-om}),(\ref{na-phi}) and
(\ref{u(2)})--(\ref{w^+3}) we eventually obtain
\begin{equation}\label{**}
db = a\wedge Ja -R(\Omega) = a\wedge Ja - \frac{(s+2\kappa)}{12}\Omega
-J\circ ({\rm Ric}_0^{\rm inv}) + \frac{1}{2}\Psi.
\end{equation}
The closed 2-form $db$ is gauge independent and is thus defined
on the whole of $M$; in fact, up to a factor $-\frac{1}{2\pi}$, $db$ is a De
Rham representative of the first Chern class of $(M,J)$, see e.g.
\cite{H-F}.
Note that the relations (\ref{*}) and (\ref{**})
completely determine the Ricci tensor and the self-dual Weyl tensor of
$(M,g,J)$ in terms of the
$3$-jets of $J$. One can further see that
the remaining part of the curvature, the anti-self-dual Weyl tensor,
is determined by the $4$-jets of $J$. But we shall show in Section 5
that when the metric satisfies some additional properties, the relations
(\ref{*}) and (\ref{**}) are sufficient to write down
the whole Riemannian curvature of $g$. A careful
analysis of the above mentioned ``obstructions'' to lifting the $1$, $2$ and
$3$-jets of $J$ will eventually permit us to apply
the Frobenius theorem in
order to obtain the desired classification.
\section{Almost K{\"a}hler 4-manifolds and Gray conditions. Preliminary results}
For a 4-dimensional almost Hermitian manifold, the relations
(i)--(iii) mentioned in the introduction are closely related to
the following conditions on the curvature defined by A. Gray
\cite{Gr} (not necessarily in the 4-dimensional context).
\newline $ (G_1) \; \; \; \; R_{XYZW} = R_{XYJZJW} $
;
\newline $ (G_2) \; \; \; \; R_{XYZW} - R_{JXJYZW} = R_{JXYJZW} + R_{JXYZJW}
$
;
\newline $ (G_3) \; \; \; \; R_{XYZW} = R_{JXJYJZJW}.$
\newline
Identity $(G_i)$ will be called the $i$-th Gray condition. Each imposes on
the curvature
of the almost Hermitian structure a certain degree of resemblance to that
of a K{\"a}hler one.
A simple application of the first Bianchi identity
yields the implications $(G_1) \Rightarrow (G_2) \Rightarrow (G_3)$.
Also elementary is
the fact that a K{\"a}hler structure satisfies relation $(G_1)$ (hence, all
of the
relations $(G_i)$). Following \cite{Gr}, if ${\cal AK}$ is the class of
almost
K{\"a}hler
manifolds, let ${\cal AK}_i$ be the subclass of manifolds whose curvature
satisfies
identity $(G_i)$. We have the obvious inclusions
$$ {\cal AK} \supseteq {\cal AK}_{3} \supseteq {\cal AK}_2 \supseteq
{\cal AK}_1 \supseteq {\cal K} , $$
where ${\cal K}$ denotes the class of K{\"a}hler manifolds. In \cite{Go}
it was observed that the equality $ {\cal AK}_1 = {\cal K}$ holds
locally (this fact is an immediate consequence of
(\ref{kappa})).
From the examples of Davidov
and Mu\u{s}karov \cite{DM}, multiplied by compact K{\"a}hler manifolds,
it follows that the inclusion $ {\cal AK}_2 \supset {\cal K}$ is
strict in dimension $2n \ge 6$, even in the compact case. This is
no longer true in dimension 4; it was proved in \cite{ADK} that
the equality ${\cal AK}_2 = {\cal K}$ holds for compact
4-manifolds (see also Corollary \ref{cor3} in Section 5 for a
partially different proof of this result).
\vspace{0.2cm}
Let us first observe that the conditions $(G_i)$ fit in with
the $\rm{U}(2)$-decomposition (\ref{u(2)}) of the curvature in the
following manner:
\begin{Lemma}\label{lem1}
An almost Hermitian 4-manifold $(M, g, J)$ satisfies the property $(G_3)$
if and only if the Ricci tensor is $J$-invariant and $W_2 ^+ = 0$. It
satisfies $(G_2)$ if moreover $W_3 ^+ = 0$.
\end{Lemma}
\noindent
{\it Proof:} A consequence of (\ref{u(2)}), see \cite{TV}. ${\bf Q.E.D.}$
\vspace{0.2cm}
Denote by ${\cal D}=\{ X \in TM: \nabla_X \Omega =0\}$ the {\it K{\"a}hler nullity}
of $(g,J)$ and by
${\cal D}^{\perp}$ its $g$-orthogonal
complement. According to (\ref{na-om}), ${\cal D}$ is
$J$-invariant at every point and has rank 4 or 2,
depending on whether or not the Nijenhuis tensor $N$ vanishes at that point.
As an easy consequence of (\ref{*}), we have the following useful
observation:
\begin{Lemma}\label{lem2} A non-K{\"a}hler, almost K{\"a}hler 4-manifold with
$J$-invariant
Ricci tensor belongs to the class ${\cal AK}_3$ if and only if the K{\"a}hler nullity
${\cal D}$ is a rank 2 involutive distribution on the open set of points
where the Nijenhuis tensor does not vanish.
\end{Lemma}
\noindent
{\it Proof:}
Let $\{ B,JB\}$ be any (local)
orthonormal frame of ${\cal D}$ and let
$\{A, JA\}$ be an orthonormal frame of ${\cal D}^{\perp}$, so that $A$
and $JA$ are the dual orthonormal frame of $\{ a,Ja\}$, see
(\ref{na-om}). Then the fundamental form can be written as
\begin{equation}\label{om}
\Omega = A\wedge JA + B\wedge JB.
\end{equation}
By (\ref{*}) we see that ${\cal D}$
is involutive if and only if
\begin{equation}\label{temp1}
R(\phi)(B,JB)=0, \ \ \ R(J\phi)(B,JB)=0.
\end{equation}
On the other hand, as the Ricci tensor is $J$-invariant, it follows by
(\ref{u(2)})--(\ref{w^+3}) and (\ref{om}):
$$R(\phi)(B,JB)=-\frac{1}{4}\langle \Psi, \phi \rangle; \
R(J\phi)(B,JB)=-\frac{1}{4}\langle \Psi, J\phi \rangle,$$
{\it i.e.}, according to (\ref{temp1}), we obtain that ${\cal D}$ is
involutive if and only if $W^+_2=0$ (see (\ref{w^+2})). The claim now
follows by
Lemma \ref{lem1}. ${\bf Q.E.D.}$
\vspace{0.2cm}
We shall further use the following refined version
of the differential Bianchi identity \cite{AA}:
\begin{Lemma}\label{lem3}{\rm {\bf (Differential Bianchi
identity)}}
Let $(M, g, J)$ be an almost K{\"a}hler 4-manifold
in the class ${\cal AK}_3$. Then the following relations hold:
\begin{equation}\label{lem3-1}
d(\kappa -s) = - 12\lambda J\phi(a);
\end{equation}
\begin{equation}\label{lem3-2}
{\rm Ric}_0(a)= \frac{\kappa}{4}a + 2\lambda\phi(b) - J\phi(d\lambda);
\end{equation}
\begin{equation}\label{lem3-3}
\Delta (\kappa - s) = -\frac{\kappa}{2}(\kappa - s) - 24\lambda^2 + 12
{\rm Ric}_0(a,a).
\end{equation}
\end{Lemma}
\noindent
{\it Proof:} The co-differential $\delta W^+$ of the self-dual Weyl
tensor of $(M, g)$
is a section of the rank 8 vector bundle
$ {\cal V} = {\rm Ker} ({\it trace} : \Lambda^1M \otimes \Lambda^+M \rightarrow
\Lambda^1M ),$ where the trace is defined by $ {\it trace} (\alpha \otimes
\phi) =
\phi(\alpha) $ on decomposable elements. For every almost-Hermitian
4-manifold the vector bundle
${\cal V}$ splits as
${\cal V} = {\cal V}^+ \oplus {\cal V}^-$, see \cite{AG}, where
${\cal V}^+$
is identified with the cotangent bundle $\Lambda^1 M$ by
\begin{equation}
\label{alpha}
\Lambda^1M \ni \alpha \mapsto A = J\alpha \otimes \Omega -
\frac{1}{2}\sum_{i=1}^{4}e_{i} \otimes (\alpha \wedge e_{i} - J\alpha \wedge
Je_{i}),
\end{equation}
while $\cal V^{-}$ is identified (as a real vector bundle) with
$\Lambda^{0,1}M \otimes \Lambda^{0,2}M$. For any gauge
$\phi$ the vector bundle ${\cal V}^-$
can be again identified with $\Lambda^1 M$ by
\begin{equation}\label{beta}
\Lambda^1M \ni \beta \mapsto B = J\beta \otimes \phi + \beta\otimes
J\phi.
\end{equation}
We denote by $(\delta W^{+})^{+}$, resp. $(\delta W^{+})^{-}$, the
component of $\delta W^{+}$ on $\cal V^{+}$, resp. on $\cal
V^{-}$, and, for any gauge $\phi$
satisfying the Convention of Section 2 we consider the corresponding 1-forms
$\alpha$ and $\beta$. By (\ref{alpha}),
(\ref{beta}) and (\ref{w^+1})--(\ref{w^+3}) one directly calculates:
\begin{equation}\label{alpha1}
\alpha= -\frac{1}{2}J\langle \delta W^+, \Omega \rangle=
-\frac{d\kappa}{12} -
\lambda J\phi(a);
\end{equation}
\begin{eqnarray}\label{beta1}
\beta &=& \frac{1}{2}\big{(}-J\langle \delta W^+, \phi \rangle +
\frac{1}{2}\phi\langle \delta W^+, \Omega \rangle \big{)}\\ \nonumber
&=& -\frac{\kappa}{8}a +\lambda\phi(b) - \frac{1}{2}J\phi(d\lambda).
\end{eqnarray}
Recall that the {\it Cotton-York tensor} $C$ of $(M, g)$ is defined by:
$$ C_{X,Y,Z} = \frac{1}{2} \Big[ \nabla_{Z} (\frac{s}{12} g + {\rm Ric}_0) (Y,X) -
\nabla_{Y} (\frac{s}{12} g + {\rm Ric}_0)(Z,X) \Big],$$
for any vector fields $X,Y,Z$. Considering $C$ as a 2-form with values
in $\Lambda^1M$, the {\it second Bianchi identity} reads as
$\delta W = C$. In dimension 4 we have also the
``half'' Bianchi identity
\begin{equation}\label{half}
\delta W^+ = C^+,
\end{equation}
\noindent
where $C^+$ denotes the self-dual part of $C_X$, $X \in TM$.
When the Ricci tensor is $J$-invariant, we
make use of (\ref{half}) to give an equivalent expression for the
1-forms $\alpha$ and $\beta$ in terms of the Ricci tensor and the
1-form $a$. According to (\ref{alpha}) we get
$$ \alpha(X) = -\frac{1}{2}J\langle C^+,\Omega \rangle = - \frac{1}{4}
\sum_{i=1}^{4} \nabla_{e_{i}} (\frac{s}{12} g + {\rm Ric}_0)(Je_i, JX) = $$
$$ = - \frac{1}{4} \Big[ \frac{ds}{12}(X) - (\delta {\rm Ric}_0)(X) +
\sum_{i=1}^{4} {\rm Ric}_0(e_i, J(\nabla_{e_i} J)(X)) \Big], $$
which, by the contracted second Bianchi identity $\delta {\rm Ric}_0 =
-\frac{ds}{4}$, equals
$$ - \frac{1}{4} \Big[ \frac{ds}{3}(X) +
\sum_{i=1}^{4} {\rm Ric}_0(e_i, J(\nabla_{e_i} J)(X))\Big].$$
Using (\ref{na-om}) and the fact that the Ricci tensor is
$J$-invariant,
we obtain
$$ \sum_{i=1}^{4} {\rm Ric}_0(e_i, J(\nabla_{e_i} J)(X)) = 0,$$
and then
\begin{equation}\label{bianchi-a}
\alpha = -\frac{ds}{12}.
\end{equation}
Regarding the component of $C^+$ in ${\cal V}^-$, we have by (\ref{beta}):
$$\beta = \frac{1}{2}\big{(}-J\langle C^+, \phi \rangle
+\frac{1}{2}\phi\langle C^+,\Omega \rangle \big{)}.$$
To compute $J\langle C^+, \phi \rangle $ we proceed in the same way as in
the computation of $J\langle C^+, \Omega \rangle$; instead of $J$ we
consider the almost complex structure $I_{\phi}$
whose K{\"a}hler form is $\phi$. Observe that ${\rm Ric}_0$ is now
$I_{\phi}$-anti-invariant. By (\ref{na-om}),
(\ref{na-phi}) and (\ref{bianchi-a}) we eventually get
\begin{equation}\label{bianchi-b}
\beta = -\frac{1}{2}{\rm Ric}_0(a).
\end{equation}
Comparing (\ref{bianchi-a}) and (\ref{bianchi-b}) with (\ref{alpha1})
and (\ref{beta1}) we obtain the equalities (\ref{lem3-1}) and
(\ref{lem3-2}). Finally, taking co-differential of
both sides of (\ref{lem3-1})
and using (\ref{lem3-2}) and (\ref{kappa}) we derive
\begin{eqnarray}\nonumber
\Delta (\kappa - s) &=& -12 J\phi(d\lambda, a) - 12\lambda \delta(J\phi(a)) \\
\nonumber
&=& 12{\rm Ric}_0(a,a) -\frac{\kappa}{2}(\kappa -s) + \\
\nonumber
& & 12\lambda \big(2\phi(a,b) - \langle d a, J\phi
\rangle + \delta (J\phi)(a)\big).
\end{eqnarray}
By (\ref{lambda}) and (\ref{na-phi}) we calculate
$$ 12\lambda \big(2\phi(a,b) - \langle da, J\phi \rangle + \delta
(J\phi)(a)\big) = -24\lambda^2,$$
and we reach the equality (\ref{lem3-3}). ${\bf Q.E.D.}$
\vspace{0.2cm}
\noindent
We have the following consequence of Lemma \ref{lem3}
(see also \cite[Prop.2]{AD} and \cite[Prop.4]{Jel}):
\begin{cor}\label{cor1} A 4-dimensional almost K{\"a}hler structure $(g, J,
\Omega)$ in the
class ${\cal AK}_3$ belongs to ${\cal AK}_2$
if and only if the norm of $\nabla \Omega$ is constant.
Moreover, if
$(g,J, \Omega)$ is an ${\cal AK}_2$, non-K{\"a}hler structure,
then the traceless Ricci tensor ${\rm Ric}_0$ is given by
$$ {\rm Ric}_0=\frac{\kappa}{4}[-g^{\cal D} + g^{{{\cal D}}^{\perp}}],$$
where $g^{\cal D}$ (resp. $g^{{\cal D}^{\perp}}$) denotes the restriction
of $g$ on ${\cal D}$ (resp. on ${\cal D}^{\perp}$).
\end{cor}
\noindent
{\it Proof:}
According to (\ref{kappa}), we have
$|\nabla \Omega|^2=\frac{\kappa - s}{6}$. We then get
by Lemma \ref{lem3} the equality $d(|\nabla \Omega|^2) = -2\lambda J\phi(a),$
and the first part of the claim
follows by Lemma \ref{lem1} and (\ref{w^+3}). Since $W^+_3 \equiv 0$
({\it i.e.} $\lambda \equiv 0$ according to (\ref{w^+3})),
the second relation stated in Lemma \ref{lem3} reads as
${\rm Ric}_0(a)=\frac{\kappa}{4}a.$
Since ${\rm Ric}_0$ is a symmetric, traceless, $J$-invariant tensor,
in the case when $(g,J)$ is not K{\"a}hler the
expression above implies the second part of the corollary. ${\bf Q.E.D.}$
\section{Examples of almost K{\"a}hler 4-manifolds satisfying Gray conditions}
\subsection{3-symmetric spaces} In this subsection we briefly describe
an already known example of a strictly almost K\"ahler 4-manifold satisfying
the condition ($G_2$). This example comes from works of Gray
\cite{gray} and Kowalski \cite{kow} on {\it 3-symmetric spaces} and we
refer to their papers for more details on the subject.
A Riemannian 3-symmetric space is a manifold $(M,g)$ such that for each point
$p \in M$ there
exists an isometry $\theta_p : M \rightarrow M$ of order 3
(i.e. $\theta_p^3 = 1$), with $p$ as an isolated fixed point.
Any such manifold has a naturally defined (canonical) $g$-orthogonal
almost complex structure $J$, and we further require that each $\theta_p$
is a holomorphic map
with respect to $J$. Moreover, the canonical almost Hermitian structure
$(g,J)$ of a 3-symmetric space always satisfies the second Gray condition
and, in dimension 4, is automatically almost K\"ahler
(it is K\"ahler if and only if the manifold is Hermitian symmetric,
see \cite{gray}).
The only remaining question is whether there exists a 4-dimensional
example of a 3-symmetric space with a non-integrable almost complex
structure (we shall call such a space a {\it proper} 3-symmetric space).
This is solved by Kowalski, who constructs such an example
and, moreover, shows that this is the only proper 3-symmetric space
in dimension 4 (in fact, this is the only proper {\it generalized}
symmetric space in dimension 4, \cite[Theorem VI.3]{kow}).
Explicitly, up to a homothety, Kowalski's example
is defined on ${\Bbb R}^4=\{(u_1,v_1,u_2,v_2 )\}$ with the metric
\begin{eqnarray} \label{kowalm}
g &=& \Big(-u_1 + \sqrt{u_1^2 + v_1^2 + 1}\Big)du_2^2 +
\Big(u_1 + \sqrt{u_1^2 + v_1^2 + 1}\Big)dv_2^2 \\
\nonumber
& & - 2v_1 du_2\odot dv_2 \\ \nonumber
& & + \frac{1}{(u_1^2 + v_1^2 +1)}\Big[(1+ v_1^2)du_1^2 +
(1 + u_1^2)dv_1^2
- 2u_1v_1~ du_1 \odot dv_1 \Big],
\end{eqnarray}
where, as usual, $\odot$ stands for the symmetric tensor product.
\subsection{Generalized Gibbons-Hawking Ansatz}
We now present a different and more general approach
to obtaining examples of almost K{\"a}hler 4-manifolds satisfying the Gray
conditions
$(G_3)$ and $(G_2)$, which is based on the idea of generalizing Tod's
construction
of Ricci-flat strictly almost K{\"a}hler
4-manifolds \cite{Arm1,Tod}. For this purpose, we consider,
instead of the Gibbons-Hawking ansatz, its
generalized version introduced by LeBrun \cite{LeB} to construct
scalar-flat K{\"a}hler surfaces. Following \cite{LeB}, let $w>0$ and $u$ be
smooth real-valued
functions on an open, simply-connected set $V \subset {\Bbb R}^3=\{(x,y,z)\}$,
which satisfy
\begin{equation}\label{1}
w_{xx} + w_{yy} + (we^u)_{zz}=0.
\end{equation}
Let $M = {\Bbb R}\times V$ and $\omega$ be a 1-form on $M$
non-vanishing when restricted to the ${\Bbb R}$-factor and
determined (up to gauge equivalence) by
\begin{equation}\label{dom}
d\omega = w_xdy\wedge dz + w_y dz\wedge dx + (we^u)_z
dx\wedge dy .
\end{equation}
It is shown in \cite{LeB} that the metric
\begin{equation}\label{g}
g = e^{u}w(dx^2 + dy^2) + w dz^2 + w^{-1}\omega^2
\end{equation}
admits a K{\"a}hler structure $I$, defined by its fundamental form
\begin{equation}\label{I}
\Omega_{I}=dz\wedge \omega + e^{u}w dx \wedge dy.
\end{equation}
Moreover, if we denote by $\frac{\partial}{\partial t}$
the dual vector field of $w^{-1}\omega$ with respect to $g$,
then $\frac{\partial}{\partial t}$ is
Killing and preserves $I$. Conversely, every K{\"a}hler metric admitting a
hamiltonian Killing field locally arises by this construction \cite{LeB}.
\vspace{0.2cm}
\noindent
Besides the K{\"a}hler
structure $I$, we shall consider the almost Hermitian structure $J$
whose fundamental form is
\begin{equation}\label{J}
\Omega_J= - dz\wedge \omega + e^{u}w dx \wedge dy.
\end{equation}
Clearly, the almost complex structures $I$ and $J$ commute
and yield different orientations on $M$.
Our objective is the following generalization of \cite{Tod}:
\begin{prop}\label{prop1} Let $w>0$ and $u$ be smooth functions
satisfying (\ref{1}). Then
the almost Hermitian structure $(g,J)$
defined via (\ref{g}) and (\ref{J}) is almost K{\"a}hler if and only if
$u$ and $w$ satisfy
\begin{equation}\label{2'}
(e^{u}w)_z=0.
\end{equation}
It is K{\"a}hler if moreover $w$ does not depend on $x$ and $y$.
Furthermore, the following are true:
\begin{enumerate}
\item[{\rm (i)}] The almost
Hermitian manifold $(M,g,J)$ is non-K{\"a}hler and belongs to ${\cal AK}_3$
if and only if $w$ is a non-constant, positive harmonic
function of $x$ and $y$, and $u(x,y)$ is any function defined on
$U = V \cap {\Bbb R}^2$.
\item[{\rm (ii)}] The manifold $(M,g,J)$ belongs to ${\cal
AK}_2$ if and only if, in addition, $w$ has no critical points on $U$ and
$u$ is given by
\begin{equation}\label{u}
u= \ln(w_x^2 + w_y^2) - 3 \ln w + const.
\end{equation}
\end{enumerate}
\end{prop}
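As an easy consistency check (not part of the proof), the conditions of
Proposition \ref{prop1} can be verified symbolically. The following SymPy
sketch uses the hypothetical choice $w=x$, a positive, non-constant harmonic
function on $\{x>0\}$, with $u$ given by (\ref{u}) (constant $=0$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# hypothetical choice (illustration only): w = x is positive,
# non-constant and harmonic in (x, y) on the half-space {x > 0}
w = x
# u given by eq. (u) with const = 0
u = sp.log(w.diff(x)**2 + w.diff(y)**2) - 3*sp.log(w)

# LeBrun's equation (1): w_xx + w_yy + (w e^u)_zz = 0
print(sp.simplify(w.diff(x, 2) + w.diff(y, 2) + (w*sp.exp(u)).diff(z, 2)))  # 0

# almost Kaehler condition (2'): (e^u w)_z = 0
print(sp.simplify((sp.exp(u)*w).diff(z)))  # 0

# here e^u = w^{-3}, the normalization appearing in Remark 1(b)
print(sp.simplify(sp.exp(u) - w**(-3)))  # 0
```

Both defining conditions hold identically, and this particular choice of
$(w,u)$ lands in the family of Remark 1(b) below.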
\noindent
{\bf Remark 1.} (a) If $w$ is a non-constant harmonic function of $(x,y)$,
then the holomorphic function $h$ of $x+iy$ such that ${\rm Re}(h)=w$
can be used as a holomorphic coordinate in place of $x+iy$. Up to a change
of the smooth function $u$ and the transversal coordinate $t$, the
metrics described in Proposition \ref{prop1}(i) are then all
isometric to
\begin{equation}\label{ak3}
g = e^ux(dx^2 + dy^2) + xdz^2 + \frac{1}{x}(dt + ydz)^2,
\end{equation}
which is therefore defined on $M =\{(x,y,z,t) \in {\Bbb R}^4, x>0\}$
for any smooth function $u$ of $(x,y)$.
It is easily checked \cite{LeB} that the Ricci tensor
of the metrics (\ref{ak3}) has two vanishing eigenvalues while the
scalar curvature $s$ is given by
$s=\frac{u_{xx} + u_{yy}}{4xe^u}.$
It thus follows that the Ricci-flat Tod's examples are obtained precisely
when $u$ is a harmonic function.
(b) Concerning the metrics given in Proposition \ref{prop1}(ii), by
(\ref{u}) we obtain
$e^u = const.\frac{1}{x^3}$, so that
(up to homothety of $(z,t)$) all these metrics
are homothetic to
\begin{equation}\label{canonic}
g = \frac{dx^2}{x^2} + \frac{1}{x^2}\sigma_1^2 + x\sigma_2^2 +
\frac{1}{x}\sigma_3^2,
\end{equation}
where $\sigma_1=dy~; \sigma_2=dz~; \sigma_3=dt + ydz$ are the standard
generators of the Lie algebra of the three dimensional Heisenberg group
${\rm Nil}^3$. It turns out that (\ref{canonic}) defines a complete
metric, in fact a homogeneous one,
which is nothing other than the (unique)
proper 3-symmetric metric (\ref{kowalm}) mentioned in Sec. 4.1.
To see this directly, one should do the change of variables
\begin{equation}\label{chvar}
u_1 = \frac{x^2 + y^2 - 1}{2x} , \; \; v_1 = - \frac{y}{x} , \; \; u_2 = t
, \; \; v_2 = z ,
\end{equation}
and after a straightforward calculation it can be seen that
the metric of Kowalski defined by (\ref{kowalm}) reduces exactly to
(\ref{canonic}).
In fact, we were motivated to look for, and were able to find, this change of
variables
only {\it after} we realized that the uniqueness stated in
Theorem 1 must hold (see also Remark 4).
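Since the change of variables (\ref{chvar}) is stated without computation,
the reduction of (\ref{kowalm}) to (\ref{canonic}) can be verified
symbolically. The SymPy sketch below treats the differentials as formal
commuting symbols, so that each metric becomes a quadratic polynomial in
them (with $a \odot b \mapsto ab$):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', positive=True)
dx, dy, dz, dt = sp.symbols('dx dy dz dt')   # formal differentials

# the change of variables (chvar); u2 = t, v2 = z
u1 = (x**2 + y**2 - 1) / (2*x)
v1 = -y / x
du1 = u1.diff(x)*dx + u1.diff(y)*dy
dv1 = v1.diff(x)*dx + v1.diff(y)*dy
du2, dv2 = dt, dz

# on {x > 0}: sqrt(u1^2 + v1^2 + 1) = (x^2 + y^2 + 1)/(2x), since
# (x^2+y^2-1)^2 + 4x^2 + 4y^2 = (x^2+y^2+1)^2
S = (x**2 + y**2 + 1) / (2*x)
assert sp.simplify(S**2 - (u1**2 + v1**2 + 1)) == 0

# Kowalski's metric (kowalm) as a quadratic form
g_kow = ((-u1 + S)*du2**2 + (u1 + S)*dv2**2 - 2*v1*du2*dv2
         + ((1 + v1**2)*du1**2 + (1 + u1**2)*dv1**2
            - 2*u1*v1*du1*dv1) / (u1**2 + v1**2 + 1))

# the metric (canonic)
g_can = dx**2/x**2 + dy**2/x**2 + x*dz**2 + (dt + y*dz)**2/x

print(sp.simplify(g_kow - g_can))  # 0
```

The difference of the two quadratic forms simplifies to zero identically,
confirming the isometry claimed above.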
(c) One can easily write down the whole Riemannian curvature of
the metric (\ref{canonic}): it turns out that it is completely
determined by the (constant) scalar curvature $s=\frac{u_{xx} +
u_{yy}}{4xe^u}=-\frac{3}{4}$. Indeed, it is easily checked that
the conformal scalar curvature (which determines $W^+$) is equal
to $\frac{3}{4}$, the Ricci tensor has constant eigenvalues
$(0,0,-\frac{3}{8},-\frac{3}{8})$, and as $g$ is K{\"a}hler with
respect to $I$ (see (\ref{I})), the anti-self-dual Weyl tensor is
also determined by $s$, see {\it e.g.} \cite{Ga}. The metric
(\ref{canonic}), together with its negative K{\"a}hler structure $I$,
therefore provides a non-symmetric, homogeneous K{\"a}hler
surface which corresponds to the ${\bf F_4}$-geometry of
\cite{wall}; it is thus a complete irreducible K{\"a}hler metric with
two distinct {\it constant} eigenvalues of the Ricci tensor. From
this point of view, the metric (\ref{canonic}) has been
independently discovered by R. Bryant in \cite{Br}. Note that
many other (in general non-homogeneous) K\"ahler metrics with
constant eigenvalues of the Ricci tensor arise from (\ref{ak3}),
provided that $u$ is a smooth solution to the elliptic equation
$$u_{xx} + u_{yy} = 4sxe^u,$$
where $s$ is a non-zero constant, the scalar curvature of the metric.
\vspace{0.1cm}
\noindent
{\it Proof of Proposition \ref{prop1}:}
By (\ref{J}) and (\ref{dom}) one readily sees that $\Omega_J$ is closed
if and only if (\ref{2'}) holds.
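Indeed, using only (\ref{dom}) and $d(dz\wedge \omega) = -\,dz\wedge d\omega$,
one computes
$$ d\Omega_J = -\,d(dz\wedge \omega) + d(e^{u}w\, dx \wedge dy)
= dz\wedge d\omega + (e^{u}w)_z\, dz\wedge dx\wedge dy
= 2\,(e^{u}w)_z\, dz\wedge dx\wedge dy, $$
which vanishes exactly when (\ref{2'}) holds.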
In order to determine the K{\"a}hler nullity ${\cal D}$ we consider the
$J$-anti-invariant 2-forms
$$\phi = e^{\frac{u}{2}}(wdz\wedge dx + \omega\wedge dy),$$
$$J\phi = e^{\frac{u}{2}}(wdz\wedge dy -\omega\wedge dx).$$
They are both of square-norm 2 and we then have
$$d\phi = \tau_{\phi}\wedge \phi; \ \ d (J\phi) = \tau_{J\phi}\wedge
J\phi,$$
where, according to (\ref{na-phi}), the 1-forms $\tau_{\phi},
\tau_{J\phi}$ are given by
\begin{equation}\label{tau-phi}
\tau_{\phi}= -Jb - J\phi(a); \ \ \ \tau_{J\phi}= -Jb + J\phi(a).
\end{equation}
On the other hand, computing $d \phi$ and $d (J\phi)$ directly by
making use of (\ref{dom}) we get
$$\tau_{\phi}= \frac{du}{2} + 2 (\ln w)_y dy; \ \ \tau_{J\phi} =
\frac{du}{2} + 2 (\ln w)_x dx.$$
We conclude by (\ref{tau-phi}) that
$ J\phi(a)= (\ln w)_x dx - (\ln w)_y dy.$
But we know from (\ref{na-om}) that $J\phi(a)$ belongs to ${\cal D}$;
the latter implies the following relations:
\begin{enumerate}
\item[(a)] $(g,J)$ is K{\"a}hler if and only if $w$ does not depend on $x$
and $y$;
\item[(b)] if $(g,J)$ is not K{\"a}hler, then ${\cal D}= span \{
\frac{\partial}{\partial x},
\frac{\partial}{\partial y} \}$;
\item[(c)] $|\nabla (\Omega_J)|_g^2 = \frac{w_x^2
+ w_y^2}{4e^u w^3}$.
\end{enumerate}
The Ricci form of the K{\"a}hler structure $(g,I)$ is given by
$\frac{1}{2}dd^c_{I}u$ (see \cite{LeB}). Here, and in the rest of the
paper, the operator
$d^c_I$ denotes the composition $ I\circ d $, where $d$ is the usual
differential.
Clearly, the Ricci tensor of $g$ is
$J$-invariant if and only if $dd^c_I u$ is a $(1,1)$-form with respect to $J$.
One easily checks that
the latter is equivalent to
$$\big(\frac{u_z}{w}\big)_x= \big(\frac{u_z}{w}\big)_y=0.$$ Thus $u_z
= f w$ for some function $f$ of $z$.
By (\ref{2'}) we get moreover $w=\frac{1}{F+ h},$ where $F$ is a
primitive of $f$, i.e.,
$\frac{d}{dz}F= f$, and $h$ is a function of $x$ and $y$. According to
the relation (a), we know that $h$ is constant if and only if $(g,J)$
is K{\"a}hler. Substituting
into (\ref{1}) we obtain that if $h$ is not constant, then $F$ is
constant, or equivalently, $w_z=0, \
u_z=0.$ Thus, if $(g,J)$ is not K{\"a}hler, then $u$ and $w$ are
functions of $x$ and $y$ and the
equation (\ref{1}) simply means that $w$ is a harmonic function of
$x$ and $y$. The
Ricci tensor is then given by
$${\rm Ric}=(u_{xx} + u_{yy})[dx^2 + dy^2].$$
Therefore, according to Corollary \ref{cor1}, the implication in (b) gives
$(g,J) \in {\cal AK}_3$, while according to Lemma
\ref{lem3}, the equality stated in (c) shows that
$(g,J) \in {\cal AK}_2$ if and only if
$e^u = const.\frac{w_x^2 + w_y^2}{w^3}.$ ${\bf Q.E.D.}$
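The constancy in claim (c) under the choice (\ref{u}) can also be checked
symbolically. The SymPy sketch below uses the hypothetical harmonic function
$w = e^x\cos y$, positive on the strip $|y|<\pi/2$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# hypothetical choice: w = e^x cos y, positive and harmonic on |y| < pi/2
w = sp.exp(x) * sp.cos(y)
print(sp.simplify(w.diff(x, 2) + w.diff(y, 2)))   # 0: w is harmonic

grad2 = sp.simplify(w.diff(x)**2 + w.diff(y)**2)  # = e^{2x} > 0: no critical points
eu = grad2 / w**3    # e^u with u given by eq. (u), const = 0

# claim (c): |nabla Omega_J|^2 = (w_x^2 + w_y^2)/(4 e^u w^3)
norm2 = sp.simplify(grad2 / (4*eu*w**3))
print(norm2)         # 1/4: constant, so (g, J) lies in AK_2
```

Because $w_x^2 + w_y^2 = e^{2x}$ never vanishes, the hypotheses of
Proposition \ref{prop1}(ii) are met and the norm of $\nabla \Omega_J$ is
identically $1/4$.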
\begin{cor} \label{cor2}
The inclusions ${\cal K} \subset {\cal AK}_2 \subset {\cal AK}_3$
are strict in any dimension $2n, n\ge 2$.
\end{cor}
{\it Proof:} Taking products of the examples obtained via Proposition
\ref{prop1} with Riemann surfaces, one obtains appropriate examples in
any dimension. ${\bf Q.E.D.}$
\section{Classification results}
The proof of Theorem \ref{th1} stated in the introduction
will be a consequence of a slightly more general classification
that we shall prove in Theorem \ref{th2} (see below).
The key idea of the proof is to investigate the properties of
the negative almost complex structure that we define as follows:
\vspace{0.1cm}
\noindent
{\bf Definition.} Let $(M,g,J)$ be a strictly almost-K{\"a}hler 4-manifold.
On the open set of points where the Nijenhuis tensor of $(g, J)$
does not vanish, let $I$ be the almost complex
structure defined to be equal to $J$ on ${\cal D}$ and to $-J$
on ${\cal D}^{\perp}$.
\vspace{0.1cm} \noindent Clearly, the almost complex structure
$I$ is $g$-orthogonal and induces on the manifold the orientation
opposite to the one given by $J$. We show that curvature
symmetry properties of the almost K{\"a}hler structure $(g,J, \Omega)$ have a
strong effect on the negative almost Hermitian structure
$(g,I,{\bar \Omega})$, where ${\bar \Omega}$ denotes the fundamental
form of $(g, I)$.
\vspace{0.1cm}
Let us assume that $(M, g, J, \Omega)$ is a 4-dimensional, strictly almost K{\"a}hler
manifold of the class ${\cal AK}_3$. We use the same notations as in
the previous
sections, in particular
for the 1-forms $a$ and $b$ defined by (\ref{na-om}) and (\ref{na-phi})
under the
same convention for the choice of the gauge $\phi$. Our first goal is
to show that the negative almost Hermitian structure $(g, I, {\bar
\Omega})$
is almost K{\"a}hler, and then to determine
the 1-forms ${\bar a}, {\bar b}$ corresponding to
the negative gauge
\begin{equation} \label{bargauge}
{\bar \phi} = \phi + \frac{12}{(\kappa - s)} Ja \wedge J\phi(a),
\end{equation}
see (\ref{kappa}).
This is summarized in the following
\begin{Lemma}\label{lem4} Let $(M,g,J,\Omega)$ be a strictly almost K{\"a}hler
4-manifold in the
class ${\cal AK}_3$ and let $I$ be the negative, orthogonal, almost complex
structure defined as above. Then $(g,I, {\bar \Omega})$ is an almost K{\"a}hler
structure compatible
with the reversed orientation of $M$. Moreover,
${\cal D}^{\perp}$ belongs to
the K{\"a}hler nullity of $(g,I)$ and, with the choice of the negative gauge
as above,
\begin{equation} \label{bar-b}
{\bar b} = 3b + \frac{12\lambda}{(\kappa - s)} \phi(a) .
\end{equation}
\end{Lemma}
{\it Proof:} Defining the 1-forms $m_i, n_i, \ i=1,2$, by
\begin{equation}\label{na-a-0}
\nabla a = m_1\otimes a + n_1\otimes Ja + m_2\otimes \phi(a) +
n_2\otimes J\phi(a),
\end{equation}
we use (\ref{na-om}) and
(\ref{na-phi}) to derive the next three equalities:
\begin{eqnarray}\nonumber
\nabla(Ja) &=& -n_1 \otimes a + m_1 \otimes Ja + (a-n_2)\otimes \phi(a)
\\ \nonumber
& & + (m_2 - Ja) \otimes J\phi(a); \\ \label{na-a-1}
\nabla (\phi(a)) &=& -m_2 \otimes a + (n_2 - a) \otimes Ja + m_1 \otimes
\phi(a) \\ \nonumber
& & + (b - n_1) \otimes J\phi(a); \\ \nonumber
\nabla (J\phi(a)) &=& -n_2 \otimes a + (Ja - m_2) \otimes Ja +
(n_1-b) \otimes \phi(a) \\ \nonumber
& & + m_1 \otimes J\phi(a).
\end{eqnarray}
From (\ref{na-a-0}), (\ref{kappa}) and Lemma
\ref{lem3}-(\ref{lem3-1}) we obtain
\begin{eqnarray}\nonumber
m_1 &=& \frac{1}{|a|^2}g(\nabla a,a) = \frac{1}{2}d(\ln{(\kappa -s)})\\
\label{m1}
&=&- \frac{6 \lambda}{(\kappa -s)} J\phi(a).
\end{eqnarray}
We further use the Ricci relations (\ref{*}) in order
to
determine the 1-forms $n_1, m_2$, and $n_2$.
For that we replace the left-hand sides of the two equalities (\ref{*})
respectively by
$$da = m_1\wedge a + n_1\wedge Ja + m_2\wedge \phi(a) + n_2\wedge
J\phi(a),$$
$$d(Ja) = -n_1 \wedge a + m_1 \wedge Ja + (a-n_2)\wedge \phi(a) +
(m_2 - Ja) \wedge J\phi(a),$$
(see (\ref{na-a-1})), and also take into account that under the ${\cal
AK}_3$ assumption we have
$$R(\phi)= (\frac{s-\kappa}{12} + \lambda)\phi; \ \
R(J\phi)=(\frac{s-\kappa}{12} - \lambda)J\phi,$$
see Lemma \ref{lem1} and (\ref{u(2)})--(\ref{w^+3}).
After comparing the components of both sides, we obtain
\begin{equation}\label{part}
n_1 = -b -\frac{6\lambda}{(\kappa -s)}\phi(a); \ m_2 = \frac{1}{2}Ja + Jm_0;
\ n_2 =\frac{1}{2}a + m_0,
\end{equation}
where $m_0$ is a 1-form which belongs to ${\cal D}$.
With relations (\ref{na-a-0})--(\ref{part}) in hand, we
can now compute $\nabla {\bar \Omega}$, starting from ${\bar \Omega} = \Omega -
\frac{12}{(\kappa - s)} a \wedge Ja $ (see (\ref{kappa})), and also
using (\ref{na-om}). We get:
\begin{equation} \label{na-bar-Om}
\nabla {\bar \Omega} = 2m_0 \otimes {\bar \phi} -
2{I}m_0 \otimes {I}{\bar \phi} .
\end{equation}
This proves that $(g, I, {\bar \Omega})$ is an almost K{\"a}hler structure, since
$d{\bar \Omega} = 0$ is immediate from (\ref{na-bar-Om}).
The claim about the
K{\"a}hler nullity of $(g,I)$ follows from ${\bar a} = 2 m_0 \in {\cal D}$.
Similarly, starting from (\ref{bargauge}) and using (\ref{na-phi}),
(\ref{na-a-0})--(\ref{part}) we obtain
\begin{equation} \label{na-bar-phi}
\nabla {\bar \phi} = (3b + \frac{12\lambda}{(\kappa -s)} \phi(a))
\otimes {I}{\bar \phi}
- 2m_0 \otimes {\bar \Omega} ,
\end{equation}
and the relation (\ref{bar-b}) follows. ${\bf Q.E.D.}$
\vspace{0.2cm}
As our statements are purely local,
for brevity we now introduce the following
\vspace{0.1cm}
\noindent
{\bf Definition.} Let $(M,g,J)$ be a strictly almost K{\"a}hler 4-manifold
in the class ${\cal AK}_3$, and suppose that the Nijenhuis tensor
of $(g,J)$ does not vanish anywhere.
We say that $(M,g,J)$ is a
{\it doubly ${\cal AK}_3$} manifold, if the almost K{\"a}hler structure $(g,I)$
defined above belongs to the class ${\cal AK}_3$ as well.
\vspace{0.1cm}
\noindent
{\bf Remark 2.} Every non-K{\"a}hler 4-manifold in the class ${\cal
AK}_3$, which is
Einstein, or belongs to class ${\cal AK}_2$ is a doubly ${\cal AK}_3$
manifold. Indeed, this is an immediate consequence of Lemma \ref{lem2}
and Corollary \ref{cor1}.
Note also that all the examples arising
from Proposition 1 are doubly ${\cal AK}_3$ manifolds --- the
negative almost K{\"a}hler
structure $(g,I)$ is in fact K{\"a}hler for all these examples.
\vspace{0.2cm}
To anticipate, the end result of this section, slightly more general
than Theorem \ref{th1}, will be that every
non-K{\"a}hler, doubly
${\cal AK}_3$ 4-manifold is necessarily given by Proposition \ref{prop1}.
Getting closer to this goal, we now prove
\begin{prop}\label{prop2} Let $(M,g,J)$ be a non-K{\"a}hler,
doubly ${\cal AK}_3$
4-manifold. Then the negative almost K{\"a}hler structure $(g,I)$ is
K{\"a}hler. Moreover, the Ricci tensor is given by
$$ {\rm Ric} = \frac{s}{2}g^{\cal D}, $$
where $g^{\cal D}$ denotes the restriction
of the metric to the K{\"a}hler nullity ${\cal D}$ of $(g,J)$. \end{prop}
\noindent
{\it Proof of Proposition \ref{prop2}:}
To begin, we assume only that
$(M, g, J)$ is a strictly almost K{\"a}hler manifold of the class ${\cal AK}_3$.
We use the Bianchi identity (\ref{lem3-1}), together with
(\ref{lem3-2}) rewritten as
\begin{equation}\label{lem3-2'}
d\lambda = 2 \lambda Jb - \frac{\kappa}{4} J\phi(a) + J\phi({\rm
Ric}_0(a)),
\end{equation}
and the relation (see (\ref{na-a-1})--(\ref{part}))
\begin{equation}\label{na-J-phi-a}
d(J\phi(a)) = - 2b\wedge \phi(a) - m_0\wedge a - Jm_0\wedge Ja .
\end{equation}
Differentiating (\ref{lem3-1}), we get by (\ref{lem3-2'}) and
(\ref{na-J-phi-a}):
\begin{eqnarray}\label{frob1}
0 &=& 2\lambda (b \wedge \phi(a) - Jb \wedge J\phi(a))
+ \lambda ( m_0 \wedge a + Jm_0 \wedge Ja) \\ \nonumber
& & -\; J\phi({\rm Ric}_0(a)) \wedge J\phi(a).
\end{eqnarray}
Taking various components, the relation (\ref{frob1}) can be seen to be
equivalent to:
\begin{equation} \label{nicefrob1}
\lambda m_0 = 2\lambda \phi( b^{{\cal D}^{\perp}}) =
\frac{1}{2} ({\rm Ric}_0(a))^{{\cal D}} ,
\end{equation}
where the superscripts ${\cal D}$ and ${\cal D}^{\perp}$ denote the
projections
onto those spaces.
Now we shall consider separately the following two cases:
\vspace{0.2cm}
\noindent
{\it Case 1.} $(M,g,J)$ is a doubly ${\cal AK}_3$ manifold which does
not belong to ${\cal AK}_2$. Then by Corollary \ref{cor1} we have
$\lambda \neq 0$. Since, by assumption, the Ricci tensor is both $J$ and $I$
invariant, it follows that ${\cal D}$ and ${\cal D}^{\perp}$ are eigenspaces
for the traceless Ricci tensor ${\rm Ric}_0$. In other words, we have
\begin{equation}\label{ric0}
{\rm Ric}_0 = \frac{f}{4}[-g^{\cal D} + g^{{{\cal D}}^{\perp}}],
\end{equation}
where $f$ is a smooth function. This implies that $({\rm
Ric}_0(a))^{{\cal D}} = 0$. Since
$\lambda \neq 0$, from (\ref{nicefrob1}) it follows that $m_0 = 0$, {\it i.e.},
$(g, I)$ is K{\"a}hler, see (\ref{na-bar-Om}). Also, from
(\ref{nicefrob1}) it follows that
$ b \in {\cal D}$. Under the doubly ${\cal AK}_3$ assumption, the
Ricci relation (\ref{**}) takes
the form
$$db = a\wedge Ja - \frac{(s+2\kappa)}{12}\Omega + \frac{f}{4}{\bar
\Omega},$$
or further (see (\ref{kappa}))
\begin{equation}\label{**''}
db = - \frac{(s+f)}{4}A \wedge JA
+ \frac{(3f-s-2\kappa)}{12} B\wedge JB ,
\end{equation}
where $\{B, JB \}$ is an orthonormal basis for ${\cal D}$ and $\{ A, JA \}$
is an orthonormal basis for ${\cal D}^{\perp}$.
Similarly, the Ricci relation (\ref{**}), written with respect to the
K{\"a}hler structure $(g,I)$, reads as
\begin{equation}\label{**'''}
d{\bar b} = \frac{(f+s)}{4}A\wedge JA + \frac{(f-s)}{4}B\wedge JB.
\end{equation}
On the other hand, using Lemma \ref{lem3}-(\ref{lem3-1}), the
equality (\ref{bar-b}) can be rewritten as
$${\bar b} = 3b + d^c_J\ln(\kappa -s),$$
where, we recall, $d^c_J = J \circ d$.
After differentiating we obtain the gauge
independent equality
\begin{equation}\label{db-db}
d{\bar b}= 3db + dd^c_J(\ln(\kappa
-s)).
\end{equation}
For computing $dd^c_J(\ln (\kappa -s))$, we remark first that by
Lemma \ref{lem3}-(\ref{lem3-1}) the vector field dual to
$d^c_J(\ln (\kappa -s))$ belongs to the kernel ${\cal D}$ of the
Nijenhuis tensor of $J$, so that $dd^c_J(\ln (\kappa -s))$ is a
(1,1)-form with respect to $J$. Furthermore, from Lemma
\ref{lem3}-(\ref{lem3-1}) it also follows that $d^c_J\ln(\kappa
-s) = d^c_I (\ln (\kappa -s))$, and then
\begin{equation}\label{explain}
dd^c_J(\ln (\kappa -s))=
dd^c_I (\ln (\kappa -s)),
\end{equation}
where $d^c_I = I \circ d$ stands for the $d^c$ operator with
respect to $I$. Since $I$ is integrable, the latter equality
shows that the 2-form $dd^c_J(\ln (\kappa -s))$ is of type
(1,1) with respect to $I$ as well. Finally, keeping in mind that
$I$ is K{\"a}hler and $J$ is almost K{\"a}hler , from (\ref{explain}),
(\ref{lem3-3}) and (\ref{lem3-1}) we compute
\begin{eqnarray}\nonumber
\langle dd^c_J(\ln
(\kappa -s)), {\bar \Omega}\rangle &=& \langle dd^c_J(\ln (\kappa -s)), \Omega
\rangle \\ \nonumber
&=& -\Delta \ln(\kappa -s) \\ \nonumber &=& -\frac{\Delta (\kappa
-s)}{(\kappa -s)}
+\frac{|d(\kappa -s)|^2}{(\kappa -s)^2} \\ \nonumber
&=& \frac{\kappa -f}{2}.
\end{eqnarray}
Since $dd^c_J(\ln(\kappa -s))$ is a (1,1)-form with respect to both $J$
and $I$, the latter equality shows that
\begin{equation}\label{ddc}
dd^c_J(\ln(\kappa -s))= \frac{(\kappa -f)}{2}
B\wedge JB.
\end{equation}
By (\ref{**''}), (\ref{**'''}) and (\ref{ddc}), the equality
(\ref{db-db}) finally reduces to $f+s=0$ which, together with
(\ref{ric0}), imply the claimed expression of the Ricci tensor.
\vspace{0.2cm}
\noindent
{\it Case 2.} $(M,g,J)$ is a non-K{\"a}hler manifold in the class ${\cal
AK}_2$. Now $\lambda =0$ by Lemma \ref{lem1},
so the equality (\ref{nicefrob1}) is not useful anymore, as all terms
vanish trivially. However, applying Case 1 to the structure $(g, I)$, we
conclude that it must be itself in the class
${\cal AK}_2$, since otherwise it would follow that $(g, J)$ is K{\"a}hler,
a contradiction.
With the same choices of the gauge as in Lemma \ref{lem4},
we have in this case ${\bar b} = 3b$. This leads to the gauge independent
relation $d{\bar b} = 3db$. Assuming that $(g,I)$ is not K{\"a}hler, we
interchange
the roles of $J$ and $I$ to also get
$db = 3d{\bar b}$, \ {\it i.e.}, $db =0$ holds. But this leads to a
contradiction.
Indeed, according to
Corollary \ref{cor1} we have $f= \kappa$, so from the Ricci relation
(\ref{**''}) we get
$\kappa - s = 0$, {\it i.e.}, $(g,J)$ is K{\"a}hler which contradicts the
assumption.
Thus $(g,I)$ must be K{\"a}hler and
(\ref{**'''}) holds. It is easily checked that $d{\bar b} = 3db$ is,
in this case,
equivalent to $\kappa + s = 0$.
This and Corollary 1 imply the desired form of the Ricci tensor. ${\bf Q.E.D.}$
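For completeness, in both cases the final substitution is the same: putting
$f=-s$ into (\ref{ric0}) gives
$$ {\rm Ric} = \frac{s}{4}\,g + {\rm Ric}_0
= \frac{s}{4}\big(g^{\cal D} + g^{{\cal D}^{\perp}}\big)
+ \frac{s}{4}\big(g^{\cal D} - g^{{\cal D}^{\perp}}\big)
= \frac{s}{2}\,g^{\cal D}, $$
the expression stated in the proposition.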
\begin{prop}\label{prop4} Let $(M,g,J)$ be a non-K{\"a}hler,
doubly ${\cal AK}_3$ 4-manifold. Then ${\cal D}^{\perp}$ is spanned by
commuting Killing vector fields.
\end{prop}
\noindent
{\it Proof of Proposition \ref{prop4}:}
For any smooth functions $p$ and
$q$ we consider the vector field $X_{p,q}$ in ${\cal D}^{\perp}$, the dual to
the 1-form $pa + qJa$. The condition that $X_{p,q}$ is Killing
is equivalent to $\nabla (pa + qJa)$ being a section of $\Lambda^2M$. To
write explicitly the equations on $p$ and $q$ that arise from the
latter condition, we need the covariant derivatives of $a$
and $Ja$. But we know
already
by Proposition \ref{prop2} that $(g,I)$ is K{\"a}hler, {\it i.e.}, the 1-form
$m_0$ defined in (\ref{part}) vanishes (see (\ref{na-bar-Om})).
We thus have by (\ref{na-a-0})--(\ref{part})
\begin{eqnarray}\label{na-a}
\nabla a &=& -\frac{6\lambda}{(\kappa -s)}J\phi(a)\otimes a -
\frac{6\lambda}{(\kappa -s)} \phi(a)\otimes Ja \\ \nonumber
& & - b\otimes Ja + \frac{1}{2}Ja\otimes \phi(a) +
\frac{1}{2}a\otimes J\phi(a); \\ \label{na-J-a}
\nabla (Ja) & = & \frac{6\lambda}{(\kappa - s)} \phi(a)\otimes a -
\frac{6\lambda}{(\kappa -s)} J\phi(a)\otimes Ja \\ \nonumber
& & + b\otimes a + \frac{1}{2} a\otimes\phi(a) + \frac{1}{2}
Ja\otimes J\phi(a).
\end{eqnarray}
Using (\ref{na-a}) and (\ref{na-J-a}) the
condition that $\nabla(pa + qJa)$ belongs to $\Lambda^2M$ can be
rewritten as
\begin{equation}\label{system}
\left.
\begin{array}{c@{ \ = \ }c}
dp & -qb - \frac{p}{2}(1-\frac{12\lambda}{(\kappa - s)})J\phi(a) -
\frac{q}{2}(1 + \frac{12\lambda}{(\kappa -s)})\phi(a) \ + rJa; \\
dq & pb - \frac{p}{2}(1-\frac{12\lambda}{(\kappa - s)}) \phi(a) \ +
\frac{q}{2}(1 + \frac{12\lambda}{(\kappa -s)})J\phi(a) - ra,
\end{array}
\right\}
\end{equation}
where $r$ is a smooth function. Since we are looking for
commuting Killing fields, we have $r\equiv 0$, and we thus obtain
a Frobenius-type system. To show that (\ref{system}) has a solution
in a neighborhood of a point $x\in M$ for any given values
$(p(x), q(x))$, we apply the Frobenius theorem. Accordingly, we
have to check
\begin{equation}\label{edno}
d\big(2qb + p(1-\frac{12\lambda}{(\kappa - s)})J\phi(a) +
q(1 + \frac{12\lambda}{(\kappa -s)})\phi(a)\big)= 0;
\end{equation}
\begin{equation}\label{dve}
d\big(-2pb + p(1-\frac{12\lambda}{(\kappa - s)})\phi(a) - q(1 +
\frac{12\lambda}{(\kappa -s)})J\phi(a)\big)=0.
\end{equation}
For that we further specify the relations (\ref{na-a-1}) and
(\ref{**''}), taking into account that $m_0=0$ and
$f=-s$ (see Proposition \ref{prop2}). We thus get:
$$d(J\phi(a)) = -2 Jb \wedge J\phi(a),$$
$$d(\phi(a)) = 2b\wedge J\phi(a)+ 2\lambda B\wedge JB,$$
$$db = -\frac{(2s+ \kappa)}{6} B\wedge JB,$$
where $B=\frac{1}{|a|}\phi(a)$ and $JB=\frac{1}{|a|}J\phi(a)$ is an
orthonormal frame of ${\cal D}$. By Lemma \ref{lem3} and (\ref{ddc})
we also have
$$ d\ln(\kappa -s) = -\frac{12\lambda}{(\kappa - s)}J\phi(a); \ \
d^c_J\ln(\kappa -s)= \frac{12\lambda}{(\kappa -s)}\phi(a),$$
$$
dd^c_J(\ln(\kappa -s))= \frac{(\kappa +s)}{2}B\wedge JB.$$
Using the above equalities, together with (\ref{system}) and
(\ref{kappa}), it is
now straightforward to check (\ref{edno})
and (\ref{dve}). ${\bf Q.E.D.}$
\vspace{0.2cm}
\noindent
{\bf Remark 3.} The miraculous
cancellation that appears when checking the equalities (\ref{edno}) and
(\ref{dve}) can be explained by simply observing that, had the cancellation
not occurred, we would derive an integrability condition depending
on $\lambda$ and $\kappa -s$. But these take arbitrary values for the
examples provided by Proposition 1. We thus conclude that the integrability
conditions (\ref{edno}) and (\ref{dve}) must be satisfied.
\begin{theo}\label{th2} Any 4-dimensional non-K{\"a}hler, doubly ${\cal
AK}_3$ metric
is locally isometric to one of the metrics described by Proposition
{\rm \ref{prop1}(i)} {\rm (}or equivalently, by {\rm (\ref{ak3})}{\rm )}.
\end{theo}
\noindent {\it Proof of Theorem 2:} Let $(M,g,J)$ be a
non-K{\"a}hler, doubly ${\cal AK}_3$ 4-manifold. By Proposition
\ref{prop2}, there exists a K{\"a}hler structure $I$, which yields the
opposite orientation of $M$. Moreover, we know by Proposition
\ref{prop4} that in a neighborhood of any point there exists a
Killing vector field $X \in {\cal D}^{\perp}$, determined by a
solution of the system (\ref{system}). It is not difficult to
check that $X$ preserves $I$. Indeed, we have to verify
$${\cal L}_X {\bar \Omega} = d(I(pa + q Ja)) = d(qa -pJa)=0.$$
The latter equality is a consequence of (\ref{system})
and the Ricci identities (\ref{*}) (If the manifold is not Ricci flat,
the invariance of $I$ also follows from the fact that
$I$ is determined up to sign by the two eigenspaces of ${\rm Ric}$).
According to
\cite{LeB}, the metric
$g$ has the form (\ref{g}), where the functions $w$ and $u$ satisfy
(\ref{1}) and
$X=\frac{\partial}{\partial t}$. From Proposition \ref{prop2} we also know
that ${\rm Ric}(X)=0$. But the Ricci form of the K{\"a}hler structure $(g,I)$
is given by
$\frac{1}{2}dd^c_{I}u$ (see \cite{LeB}); we thus obtain $w={\rm const}\cdot u_z$ and
then
$${\rm Ric}=(u_{xx} + u_{yy} + (e^u)_{zz})[dx^2 + dy^2].$$
The above equality shows that either $g$ is Ricci flat (then $g$ is
given by Tod's ansatz, see \cite{Arm1}),
or else, according to Proposition \ref{prop2},
the K{\"a}hler nullity ${\cal D}$ of $(g,J)$ is spanned by the (Riemannian)
duals of
$dx$ and $dy$. The latter means that the K{\"a}hler form $\Omega$ of $(g,J)$
is given by (\ref{J}), and the result follows by Proposition \ref{prop1}
and Remark 1.
${\bf Q.E.D.}$
\vspace{0.2cm}
\noindent
Theorem 1 is now just a particular case.
\vspace{0.2cm}
\noindent
{\bf Proof of Theorem 1.} By Remark 2 we know that every strictly
almost-K\"ahler 4-manifold $(M,g,J,\Omega)$ satisfying ($G_2$) is
doubly ${\cal AK}_3$;
it follows by Theorem 2 and Proposition 1 that $(M,g,J,\Omega)$ arises from
Proposition
\ref{prop1}(ii). According to Remark 1(b) the metric $g$ is
locally isometric to
(\ref{canonic}) which, in turn, is isometric to Kowalski's metric via
the change
of variables (\ref{chvar}). ${\bf Q.E.D.}$
\vspace{0.1cm}
\noindent
{\bf Remark 4.} Avoiding the use of the change of variables (\ref{chvar}),
one could have
completed the proof of Theorem 1 as follows: as above one shows that
any strictly almost-K\"ahler 4-manifold $(M,g,J,\Omega)$ satisfying ($G_2$) is
locally isometric to (\ref{canonic}). On the other hand,
A. Gray \cite{gray} showed that any Riemannian 3-symmetric
space has a canonical almost-Hermitian structure, which, in dimension 4, is
necessarily almost-K{\"a}hler (K{\"a}hler iff the manifold is
symmetric) and satisfies the condition ($G_2$).
It thus follows that the proper 3-symmetric metric of Kowalski \cite{kow}
is isometric to (\ref{canonic}) as well. In particular, this
provides a differential geometric proof of existence and uniqueness
of proper
3-symmetric 4-dimensional manifolds, a result proved by Kowalski
using Lie algebra techniques \cite{kow}.
\begin{cor}\label{cor3} {\rm { (\cite{ADK})}} Every compact almost
K{\"a}hler 4-manifold satisfying the second curvature condition of Gray is
K{\"a}hler.
\end{cor}
{\it Proof of Corollary \ref{cor3}:} Suppose for contradiction that
$(M,g,J)$ is a compact,
non-K{\"a}hler, almost K{\"a}hler 4-manifold in the class ${\cal AK}_2$.
According to Corollary \ref{cor1},
the distributions ${\cal D}$ and ${\cal D}^{\perp}$ are globally
defined on $M$, and by Proposition 2 they give rise to a
negative K{\"a}hler structure $(g,I)$. We
know by Theorem 1 that $(g,J,I)$ locally arise from Proposition 1. Then
the whole curvature of $g$ is completely determined
by the (negative constant) scalar curvature $s$, cf. Remark 1. More precisely,
the conformal
curvature $\kappa$ is given by $\kappa= -s$ (Corollary \ref{cor1} and
Proposition \ref{prop2}). Since
$(g,I)$ is K{\"a}hler, we also have $|W^-|^2 = \frac{s^2}{24}$, see {\it
e.g.} \cite{Ga}. As
$(g,J)$ is in the class ${\cal AK}_2$, the self-dual Weyl tensor satisfies
$W^+_2=0$, $W^+_3=0$ and then $|W^+|^2 = \frac{\kappa^2}{24}$ (see
(\ref{w^+1})); by $\kappa = -s$ we conclude $|W^+|^2 = |W^-|^2
=\frac{s^2}{24}$.
We then get by the Chern-Weil formula
$$\sigma(M)=\frac{1}{12\pi^2}\int_M
\big(|W^+|^2 - |W^-|^2\big)\, dV_g$$
that the signature $\sigma(M)$ vanishes.
Similarly, the Euler characteristic $e(M)$ is given
by
$$e(M) = \frac{1}{8\pi^2}\int_M \big(|W^+|^2 + |W^-|^2 + \frac{s^2}{24} -
\frac{1}{2}|{\rm Ric}_0|^2\big)\, dV_g.$$
But we know that the Ricci tensor of $g$ has
eigenvalues $(0,0,\frac{s}{2},\frac{s}{2})$ (Proposition \ref{prop2}) and
then $|{\rm Ric}_0|^2=\frac{s^2}{4}$; we thus readily see that $e(M)=0$.
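For the reader's convenience, the vanishing of $e(M)$ can be traced term by
term: with $|W^+|^2 = |W^-|^2 = \frac{s^2}{24}$ and
$|{\rm Ric}_0|^2=\frac{s^2}{4}$ as computed above, the integrand equals
$$\frac{s^2}{24} + \frac{s^2}{24} + \frac{s^2}{24} -
\frac{1}{2}\cdot\frac{s^2}{4} = \frac{s^2}{8} - \frac{s^2}{8} = 0$$
pointwise.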
Furthermore, since $({\bar M},g,I)$ is a K{\"a}hler surface of
(constant) negative
scalar curvature, we have $H^0({\bar M}, K^{\otimes - m})=0$, where
$K$ denotes the canonical bundle of $({\bar M},I)$. The
conditions
$\sigma({\bar M})=-\sigma(M)=0$, $e({\bar M})=e(M)=0$ then imply that
the Kodaira dimension of $({\bar M}, I)$ is necessarily equal to 1,
cf. {\it e.g.}
\cite{BPV}. Thus $({\bar M},I)$
is a minimal
properly elliptic surface with vanishing Euler
characteristic. Using
an argument from \cite{ADK}, we conclude that, up to a finite cover, $({\bar
M},I)$ admits a
non-vanishing holomorphic vector field $X$. Now the well-known
Bochner formula for holomorphic fields and the fact that the
Ricci tensor of $({\bar M},g,I)$ is semi-negative with kernel the
distribution ${\cal D}^\perp$ (Proposition \ref{prop2}) imply
that $X$ is
parallel and belongs to ${\cal D}^{\perp}$. Then ${\cal
D}^{\perp}$ (hence
also ${\cal D}$) is parallel. Since $(g,I)$ is a K{\"a}hler structure,
$I$ is parallel, and consequently, the almost
complex structure
$J$ must be parallel as well, {\it i.e.}, $(g,J)$ is K{\"a}hler,
which contradicts our assumption. ${\bf Q.E.D.}$
\vspace{0.1cm}
\noindent
{\bf Remark 5.} For obtaining a contradiction
in the proof of Corollary \ref{cor3}
one can alternatively argue as follows: We
know by Theorem 1 that $(g,J,I)$ locally arise from Proposition 1.
The metric $g$ is therefore locally homogeneous and the complex structure $I$
is invariant as being determined by the eigenspaces of the Ricci
tensor. It thus follows that $(M,g,I)$ is a {\it compact} locally homogeneous
K\"ahler surface; it is well known that any such surface is locally
(Hermitian) symmetric (cf. {\it e.g.} \cite{wall}), while the metric
$g$ given by Proposition 1(ii) is not.
\vspace{0.1cm}
\noindent
{\bf Remark 6.} Using the method of ``nuts and bolts'' \cite{GH},
C. LeBrun \cite{LeB1} successfully ``compactified'' certain K{\"a}hler
metrics arising from (\ref{g}) and obtained explicit examples
of compact scalar-flat K{\"a}hler surfaces admitting a circle
action. The idea is the following: Starting from an open (incomplete)
manifold $M_0$
where the metric $g$ has the form (\ref{g}), one adds points and (real)
surfaces in order to obtain a larger, complete manifold $M$, such that $M_0$ is
a dense open subset of $M$, and the circle action on $M_0$ generated
by the Killing vector field $X=\frac{\partial}{\partial t}$ extends to
$M$; the added points and surfaces become the fixed points of this
action.
It is thus natural to wonder if similar ``compactification'' exists
for the metrics given by Proposition \ref{prop1}, providing
compact
examples of non-K{\"a}hler, almost K{\"a}hler
4-manifolds in the class ${\cal AK}_3$. (The interest in such compact examples
is motivated by some variational problems on compact symplectic
manifolds \cite{Bl,BI}). Corollary \ref{cor3} shows that this is
impossible if we
insist that (\ref{u}) is satisfied. Unfortunately, even in the case when
$(\ref{u})$ does not hold, the variable reduction we have for the
functions $u$ and $w$ does not permit us to obtain compact
examples directly following LeBrun's approach. Indeed, if $(M,g,J)$ were
a compactification of $(M_0,g,J)$ with
extended circle action generated by a Killing vector field
$X= \frac{\partial}{\partial t}$, then by
Propositions \ref{prop2} and \ref{prop4}, we would have
${\rm Ric}(X,X)=0$ on $M_0$, hence
also, on $M$ as $M_0$ is a dense subset; by the Bochner formula
$X$ would then be parallel. In particular, the $g$-norm of $X$ would be
constant, hence also, the smooth function $w=\frac{1}{g(X,X)}$. Therefore,
$(g,J)$ would be K{\"a}hler by Proposition \ref{prop1}, a contradiction.
\vspace{0.2cm} \noindent As a final note, it is tempting to
conjecture that the local classification obtained in Theorem 2
could be further extended to the general case of strictly ${\cal
AK}_3$ 4-manifolds (in other words, we believe that the doubly
${\cal AK}_3$ assumption in Theorem 2 could be removed). To
this end a further analysis of the higher jets of $J$ would be
needed, with the computations becoming more involved, but it is
possible that some nice cancellations might still take place.
\vspace{0.2cm}
\noindent {\bf Acknowledgements}: The first author thanks the
Mathematical Institute of Oxford for hospitality during the preparation
of an important part of this paper.
The authors are
grateful to R. Bryant, G. Gibbons, C. LeBrun, S. Salamon and P. Tod for
their interest and some stimulating discussions. We would also like to
express our thanks to O. Mu\u{s}karov whose comments essentially improved
the presentation of the results in Section 4, to D. Blair for his friendly
assistance in reading the manuscript and suggesting several improvements,
and to A. Moroianu for bringing to our attention the unpublished work
\cite{Br}.
\label{sec:1}
The angular and spectral dependence of coma polarization are signatures of the size, composition, and structure of coma grains \cite{hanner02, hadamcik03c, kolokolova04}. The degree of polarization typically increases with phase angle and with wavelength; the latter trend is called a red polarimetric color for the dust coma \cite{kolokolova04, kolokolova97, lev03}. A few comets, such as 21P/Giacobini-Zinner, have shown a blue polarimetric color \cite{kiselev00}. The polarization and polarimetric color are influenced by the size, composition, and geometry of the individual scattering monomers that make up an aggregate dust particle. On 2005 July 4 at 5:52:02 UT comet 9P/Tempel 1 was impacted by a 364 kg spacecraft as part of the NASA Deep Impact mission \cite{ahearn05}. The event presented a unique opportunity to compare spectropolarimetric measurements with the other measurements performed simultaneously during the encounter to further constrain the dust properties.
For comets at a phase angle of 40.9$^\circ$, average coma polarization in a red filter is 4\% to 10\% and perpendicular to the scattering plane, usually increasing towards 1$\mu$m \cite{lev03}. Some exceptions are known, however \cite{kiselev00, lev99, lev03}. Some dependence on comet activity was also found and explained as a consequence of the influence of scattering particles of differing size and composition \cite{hadamcik03a, hadamcik03b, hadamcik03c}. A change in polarization due to a change of composition and size is also seen in scattering models \cite{kolokolova97, kolokolova04, kimura06, las06}.
The polarization and color of cometary dust scattering have been studied theoretically and experimentally in depth over a wide range of variables such as size, composition, scattering geometry, particle roughness, and aggregation process \cite{gus99, kolokolova04, kimura06, las06}. Models that fit the typical angle and red wavelength dependence of the polarization best use aggregates of more than 1000 monomers having small radii ($\sim$100nm). Model aggregates that fit the typical polarization measurements have a high index of refraction ($m=n + ik$, $n=1.8$ to $2.0$, $k\sim0.4$ to $0.6$). This corresponds to volume fractions of one-third silicates, two-thirds carbonaceous materials, and a small amount of iron-bearing sulfides \cite{kimura06}. This material is quite dark (high imaginary part of the complex index of refraction) and similar to the composition of 1P/Halley's dust \cite{mann04}. The carbonaceous material in the model is roughly two-thirds amorphous carbon and one-third organic-refractory material. The type of aggregation (cluster-cluster vs. particle-cluster) did not play a major role in the polarization models, but it does influence the thermal IR spectra through the porosity \cite{kimura04, kolokolova04}. A decrease in polarization with wavelength, called a blue polarization gradient or slope, was not extensively modeled because of its rarity. However, some models demonstrated the tendency of polarimetric color to decrease, {\it i.e.} get more blue, with increasing size, albedo ({\it e.g.} icy), and transparency of the material (crystallinity), whereas more absorbing materials typically show an increase in polarization with wavelength, called a red polarization slope \cite{kimura06}. Multiple scattering in optically thick dust clouds depolarizes the scattered light.
With all this information contained in the polarization of scattered light, we expected the polarimetric data to shed light on the change in dust properties resulting from the Deep Impact encounter.
\section{The Deep Impact Spectropolarimetry}
The AEOS telescope is a 3.67m, altitude-azimuth telescope. The HiVIS spectrograph is a cross-dispersed echelle spectrograph using the f/200 coud\`e optical path \cite{thornton03}. Since non-normal incidence reflections change the polarization state of the incident light, a careful calibration of the telescope has been performed \cite{harrington06}. We used a 1.5$''$$\times$7$''$ slit (970$\times$4530 km at a geocentric distance of $\Delta$=0.89 AU) and the ``red'' setting (nominally 637.5-968.0 nm) for all comet observations. The spectropolarimetry module for the AEOS spectrograph consists of a rotating achromatic half-wave plate and a calcite Savart plate. The Savart plate separates an incoming beam into two parallel, orthogonally polarized beams separated in the spatial direction at the focal plane.
A common definition in planetary science is that the degree of polarization is the difference between the intensity polarized parallel and perpendicular to the scattering plane \cite{kolokolova04}. Since the HiVIS image rotator was not used in order to simplify the polarization calibration, the projection of the slit's position angle onto the sky was not constant, and knowledge of the scattering plane is difficult to extract. In this paper, we will be using an alternative definition that does not require knowledge of the scattering plane, simply calculating the polarization in the instrumental reference frame (see \cite{harrington07} for details).
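As an illustrative sketch only (not the actual HiVIS reduction, which is detailed in the cited calibration papers), the instrumental-frame polarization can be formed from dual-beam intensities; the beam values and wave-plate positions below are hypothetical:

```python
import math

def normalized_diff(i_ord, i_ext):
    """Normalized difference of the two orthogonally polarized beams
    produced by the Savart plate."""
    return (i_ord - i_ext) / (i_ord + i_ext)

# Hypothetical beam intensities with the half-wave plate at positions
# modulating the Stokes q and u parameters, respectively.
q = normalized_diff(1.04e5, 0.96e5)   # Stokes q/I
u = normalized_diff(1.01e5, 0.99e5)   # Stokes u/I

# Degree of polarization in the instrumental reference frame (no rotation
# to the scattering plane, as in the definition adopted in the text).
p = math.sqrt(q**2 + u**2)
```

A rotation into the scattering-plane frame would additionally require the slit position angle on the sky, which, as noted above, was not constant without the image rotator.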
Comet 9P/Tempel 1 was bright enough to obtain useful spectropolarimetry only on the night of impact. We acquired two complete data sets, from 6-7 and 7-8 UT. The data were reduced using the AEOS-specific reduction pipelines developed for this instrument \cite{harrington06}. Calibration of comet spectropolarimetry is performed by interpolating a sky-map of unpolarized standard star observations to the pointing of the comet to create a telescope-induced polarization calibration. The corrections were typically 2-3\% with mild wavelength dependence ($\leq1$\%) and they did not alter the slope-change seen in comet 9P/Tempel 1 \cite{harrington07}. In order to present high signal-to-noise measurements of the polarization, the spectra were averaged 1000:1, giving 19 independent polarization measurements. The spectropolarimetry is plotted in Fig. \ref{fig:1}. The 6-7 UT data set, started 8 minutes after impact, shows a slightly anomalous blue-sloped degree of polarization of 4\% falling to 3\% from 650 to 950nm (-0.9$\pm$0.2\%/10$^3$\AA). In contrast, the 7-8 UT data set, started 75 minutes after impact, shows a more pronounced blue slope from 7\% at 650nm to 2\% at 950nm (-2.3$\pm$0.3\%/10$^3$ \AA). This is an indication of the change in particle scattering properties. The leading edge of the ejecta plume was moving out from the nucleus at $\sim$200 m s$^{-1}$ \cite{meech05}. A body moving at this speed could move across the slit in 40 minutes, setting the timescale for significant change in scattering by ejected gas and dust, consistent with other observations \cite{meech05, jehin06, sugita05, schleicher06}.
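The quoted slopes come from linear fits to the binned polarization points. The sketch below reproduces the procedure in spirit; the data values are synthetic stand-ins, not the actual measurements of Fig. \ref{fig:1}:

```python
def fit_slope(wavelengths_angstrom, polarization_percent):
    """Least-squares slope of polarization (%) against wavelength (Angstrom),
    returned in %/10^3 Angstrom as quoted in the text."""
    n = len(wavelengths_angstrom)
    mx = sum(wavelengths_angstrom) / n
    my = sum(polarization_percent) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(wavelengths_angstrom, polarization_percent))
    sxx = sum((x - mx) ** 2 for x in wavelengths_angstrom)
    return (sxy / sxx) * 1e3  # convert per-Angstrom slope to per 10^3 A

# Synthetic example: polarization falling linearly from 7% at 6500 A to
# 2% at 9500 A, i.e. a slope of -5%/3000 A, about -1.67%/10^3 A.
wl = [6500 + 500 * i for i in range(7)]                 # 6500..9500 A
pol = [7.0 - (5.0 / 3000.0) * (w - 6500) for w in wl]
slope = fit_slope(wl, pol)
```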
For comparison, Comet C/2004 Q2 Machholz was observed on November 27, 2004 at a phase angle of 30$^\circ$. Two complete data sets (8 images at 1200s, roughly double the flux) were reduced and calibrated in the same way as the comet 9P/Tempel 1 data. The measured degree of polarization was in the usual range and showed a typical red polarization slope which did not change very significantly between the two image sets. This result gives us confidence in our instrument calibration and reduction techniques.
\section{Discussion and Conclusions}
Our observation of 4\% and 7\% polarization at 650nm is typical for comets at these wavelengths and phase angles, 4\% being somewhat low. However, the 1\% and 3\% polarization at 950nm is not at all typical. The few computer or laboratory simulations that have modeled blue polarization slopes show it can result from larger particles or a predominance of transparent particles (larger crystalline silicates or ices) \cite{hadamcik02, kolokolova04, kimura04, kimura06, las06, kiselev00}. At this point, we must discuss other observations of silicates, ices, and particle sizes to fully interpret the polarization.
The infrared observations of comet 9P/Tempel 1 right after the impact showed a strong and complex silicate feature developing by 1 hour after impact, fading 1.8 hours after impact \cite{harker06, sugita05, lisse06}. The size of the silicate particles also evolved strongly. The dust particles were reported to have size distribution peaks (of the aggregates, not monomers) increasing after impact \cite{harker06, lisse06, lisse06s}. These models showed that pre-impact there was an absence of sub-$\mu$m silicates, and the spectrum was mineralogically dominated by larger (0.9$\mu$m) amorphous olivine with no carbon or crystalline silicates. They reported an increase in the number of sub-$\mu$m silicates, an emission from relatively transparent Mg-rich crystalline olivine, as well as a doubling of the silicate-to-carbon ratio and a 4-fold increase in the crystalline-to-amorphous silicate ratio between the first and second hour after impact. A relative lack of organics was also seen, with the amorphous carbon being roughly 20\% of the silicates (30\% to 50\% is typical) \cite{lisse06}.
There was evidence for ice, but in a smaller amount than necessary to produce a blue polarization slope. There was a change in color of the dust in the near-IR post-impact, interpreted as icy grains, or icy grain mantles being liberated in the impact event \cite{fernandez06}. There was direct observation of icy grains in the ejecta plume from the main spacecraft in the form of the 3 $\mu$m ice absorption from immediately after impact through lookback 46 min later \cite{sunshine06}. Evidence for icy grains in the inner 600km of the coma was seen, with the sublimation maximized 1.5 hours after impact \cite{schulz06}. Particle disintegration was suggested from the different spatial evolution of CN, [OI], and dust continuum flux in spectrophotometric measurements \cite{hodapp06}. The Spitzer spectra also required ice covering 3\% of the dust surface area in their 10" FOV \cite{lisse06}. Thus, ice is also a possibility as a contribution to the transparency of the particles. However, the amount of ice necessary to dominate the scattering is much more than is suggested by the Spitzer and spacecraft data.
Multiple scattering depolarizes light in cometary dust and may be responsible for the low polarization just after impact. An optically thick plume was seen, thinning to $\tau\sim0.4$ 20-25 minutes after impact \cite{schleicher06}. The look-back images from the fly-by spacecraft also showed an optically thick ejecta plume after impact \cite{ahearn05}. Since the optical depth was a strong function of time, the influence of multiple scattering is assumed to change strongly as well.
We can explain our observations by a depolarization due to multiple scattering in the first hour and the subsequent domination of larger and more transparent monomers (silicates), and possibly ices, in the ejected dust aggregates of comet 9P/Tempel 1. This is consistent with the infrared data on Deep Impact, which indicate a high amount of silicates in the DI dust, whereas in situ data for comets 1P/Halley and 81P/Wild 2, typical red polarization comets, show that their dust contained two-thirds carbonaceous materials and organics \cite{lisse06, harker06, sugita05, kimura06, kis04}. We can speculate that subsurface materials in comet 9P/Tempel 1 had more volatile organics and ices that quickly evaporated or decayed, leaving the disrupted and fragmenting dust, rich in silicates, to produce the observed blue polarization slope \cite{fernandez06, mumma05, ahearn05}. Harker et al. have suggested that smaller particles in the ejecta moved faster, size-sorting the cloud, leaving larger and more crystalline silicates behind \cite{harker06}. This could also contribute to the anomalous blue polarization slope we detected for the inner coma.
\begin{figure}
\centering
\includegraphics[height=14cm, angle=90]{fig1.eps}
\caption{Polarization for comet 9P/Tempel 1 with 3-$\sigma$ error bars. The data were taken on impact night from 6-7 UT (diamonds) and 7-8 UT (triangles) at a phase angle of 40.9$^\circ$. A linear fit is plotted to guide the eye. A single point at 805nm (order 11) in the 6-7 UT data set has been replaced by the average of neighboring points, and has no error bars. The 6-7 UT polarization spectrum had a shallow negative slope of -0.9$\pm$0.2\%/10$^3$ \AA. The 7-8 UT spectrum had a slope of -2.3$\pm$0.3\%/10$^3$ \AA.}
\label{fig:1}
\end{figure}
\label{intro}
The mission of natural language processing (NLP) as a computational research field is to enable machines to function in human-oriented environments where language is the medium of communication.
We want them to understand our utterances, to connect these utterances with the objects and concepts of the surrounding world, to produce language which is meaningful to us and helps us navigate a task or satisfy an emotional need.
Over the years of its existence, the mainstream of NLP has undergone shifts motivated by developments in computation, in linguistics, in foundational artificial intelligence, and in learning theory.
Since the mid-2010's, the clear dominant framework for tackling NLP tasks, and an undeniably powerful one, has been that of deep neural networks (DNNs).
This connectionist approach was originally motivated by the workings of the human brain, but has since developed its own characteristics, and formed a well-defined landscape for exploration which includes constraints stemming from the fundamental properties of its design.
This survey focuses on one of these built-in constraints, which I believe to be central to DNNs in the context of natural language, and specifically of text processing, namely that of \textbf{representations}.
DNNs \say{live} in metric space: their operation manipulates real numbers organized into vectors and matrices, propagating function applications and calculated values within instantiations of pre-defined architectures.
This mode of existence is very well-suited to problem domains that inhabit their own metric space, like the physical realms of vision and sound.
In stark contrast to these, the textual form of linguistic communication is built atop a discrete alphabet and hinges on notions such as symbolic semantics, inconsistent compositionality, and the arbitrariness of the sign~\cite{saussure1916cours}.
The example in (\ref{ex:sent}) exhibits all of these: the symbol \textit{dog} refers to two distinct objects bearing no semantic resemblance; \textit{large} and \textit{white} each describe the (canine) dog's physical properties, while \textit{dining} categorizes the table based on its function, and \textit{hot} does not modify (the second) \textit{dog} at all, but rather joins it to denote a distinctive atomic concept.
\vspace{8pt}
\begin{example}
\label{ex:sent}
\textit{The large white dog ate the hot dog left on the dining table.}
\end{example}
\vspace{8pt}
Given these properties of language, it is far from straightforward to decide the means by which to transform raw text into an input for a neural NLP system tasked with a goal which requires a grasp on the overall communicative intent of the text, such that this initial representation does not lose basic semantics essential to the eventual outcome.
This transformation process is known as embedding, after which its artifacts are themselves known as \textbf{embeddings}, often used synonymously in context with \say{vectors} or \say{distributed representations}.
Indeed, the choice of default representations has undergone several shifts within the short DNN era, motivated in part by advances in computational power but also by a collective coming to terms with the limitations of the preceding methods.
The great challenge of representation is compounded by the unboundedness of it all --- human concept space is ever-expanding, and each new concept may be assigned an arbitrary sign (e.g.,~\textit{zoomer}); within an existing concept space, associations capable of inspiring new utterances occupy a combinatorial magnitude which is essentially infinite; and even the form-meaning relationship itself exhibits malleability by humans' interaction with text input devices and various cognitive biases.\footnote{As a case in point, over the course of writing this survey I have manually added dozens of new terms to the Overleaf editor's spell-check dictionary, two in the referring sentence alone.}
Each of these sources of expansion weighs any proposed representational method with the additional burden of generalizing to novel inputs while maintaining consistency in the manner by which they are represented in the system.
In the NLP literature, the surface manifestation of the expanding spaces of concept and form, and of the more locally-constrained disparity between text available at different points in time of a model's training and deployment, is known as the \textbf{out-of-vocabulary} problem, and the unseen surface forms themselves are termed OOVs.
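To make the problem concrete, OOV incidence can be quantified against a fixed training-time vocabulary. The sketch below is purely illustrative, with toy token lists standing in for real corpora:

```python
def oov_rate(train_tokens, test_tokens):
    """Fraction of test tokens whose surface form never appeared in
    the training data, i.e. the OOVs discussed above."""
    vocab = set(train_tokens)
    oov = sum(1 for tok in test_tokens if tok not in vocab)
    return oov / len(test_tokens)

train = "the large white dog ate the hot dog".split()
test = "the zoomer fed the dog".split()
rate = oov_rate(train, test)  # 'zoomer' and 'fed' are unseen: 2 of 5 tokens
```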
In this survey, I consider three central approaches to representing the fundamental units of natural language text in its input stage and the consequences of each approach's selection on the goals of the systems they are applied in.
The first, most popular, and most successful one when used in isolation, is the \textbf{distributional} approach where the representation function is trained to embed textual units which appear in similar contexts close to each other in vector space.
The second is the \textbf{compositional} approach which seeks to assemble embeddings for workable textual units by breaking them down into more fundamental elements and applying functions over their own representations, less committed to semantic guarantees.
The last is the \textbf{relational} approach which makes use of large semantic structures curated manually or in a semi-supervised fashion, leveraging known connections between text and concepts and among concepts in order to create embeddings manifesting humans' notions of \say{meaning}.
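A toy instance of the compositional approach, loosely in the spirit of character n-gram embedding methods (the hashing trick, the dimensionality, and the n-gram scheme here are invented for illustration, not any particular system's choices):

```python
import hashlib

DIM = 8  # toy embedding dimensionality

def char_ngrams(word, n=3):
    """Character n-grams of a word padded with boundary markers."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def ngram_vector(ngram):
    """Deterministic pseudo-embedding for an n-gram; a hash-based
    stand-in for a learned lookup table."""
    h = hashlib.md5(ngram.encode()).digest()
    return [b / 255.0 for b in h[:DIM]]

def compose(word):
    """Embed a word as the average of its n-gram vectors, so unseen
    surface forms still receive a representation."""
    vecs = [ngram_vector(g) for g in char_ngrams(word)]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

v = compose("zoomer")  # defined even though 'zoomer' was never 'seen'
```

Because every string decomposes into n-grams, novel words such as \textit{zoomer} always receive a vector, at the cost of the weaker semantic commitment noted above.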
\textbf{The OOV problem} features heavily in the motivation and analysis of the work presented, as it poses challenges to each of the approaches described; yet the exact definitions of vocabularies and of OOV-ness are themselves challenged by the advent of the NLP systems that became mainstream following the processes described in this work, namely \textbf{contextualized subword embeddings}.
\section{The Atoms of Language}
\label{int:atoms}
Natural language is ultimately a system for conveying meaning, information, and social cues from the realm of human experience into a discrete linear form by encoding them as auditory, visual, and/or textual symbols, which are then iteratively composed into more complex units.
In order to process such a system's outputs by computational means, it seems fitting to identify those symbols which carry the basic units of meaning, and then find the proper ways to map those meanings into representations for a program which can compose them.
The first step, that of identifying linguistic atoms, proves to be a formidable challenge.
From the surface output perspective, the common wisdom is that the basic semantic unit of language is what is known as a \textbf{morpheme}.
The English word \textit{unbelievable}, for example, is composed of a stem morpheme \textit{believe}, a semantic-syntactic suffix \textit{-able} recasting the verb into an adjective pertaining to potential, and a semantic prefix \textit{un-} denoting negation.
But this morpheme = atom stipulation is not unassailable.
Processes below the morpheme level have been documented across languages, for example the sound symbolism phenomenon known as phonaesthesia, where arbitrary sound patterns correlate with a concept or conceptual properties, such as /gl/ in the English light/shine-related words \textit{glow}, \textit{glitter}, and \textit{glare}~\cite{blake2017sound}.
Less arbitrarily, patterns and even individual sounds in names are known to evoke semantic qualities based on their acoustic properties~\cite{kohler1947gestalt,bergh1984sound}.
In English-language informal communication modes, writers sometimes employ the practice of expressive lengthening, where a single character in a word is repeated in order to amplify its referent's extension on some scale.
For example, \textit{looooong} would be used to describe a particularly long object or period of time.
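A common normalization heuristic for expressive lengthening, sketched here with an assumed cap of two repeats (real systems differ in the cap and in dictionary checks), collapses long character runs:

```python
import re

def squeeze_lengthening(word, max_run=2):
    """Collapse runs of a repeated character down to at most `max_run`
    occurrences, e.g. 'looooong' -> 'loong'."""
    return re.sub(r'(.)\1{%d,}' % max_run,
                  lambda m: m.group(1) * max_run, word)

squeeze_lengthening("looooong")  # -> 'loong'
```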
In addition to these sub-morpheme phenomena, the morpheme symbolism and the atoms of our conceptual space relate at neither a univalent nor a one-to-one relation.
Certain stem morphemes, like \textit{star}, denote multiple types of concepts or objects (\textbf{polysemy} and \textbf{homonymy}), while some concepts may be referred to using different morphemes like the relevant meanings of \textit{room} and \textit{space} (\textbf{synonymy}).
The suffix \textit{-s} can denote either a third-person present verb or a plural noun (\textbf{polyexponence}), and both are replaced by \textit{-es} under certain local conditions (\textbf{flexivity}).
Theoretical quibbles notwithstanding, NLP is a practical field, and from its nascence it was clear that finding the most appropriate way to break text down to its purest elements should not set back our efforts to perform sequence-level tasks and develop useful applications.
Thus, concessions must be made in the form of selecting a unit easily extractable from text and working with it.
This necessity coincides with the reality of having English as the overwhelmingly central target of NLP applications and easiest source of data.
The focus on a language with mostly isolating morphology, where morphemes often occupy distinct word forms that are related through sentence-level syntax, conspired with the technical ease of detecting whitespace in text and led to an inevitable starting point for the community in using the \textbf{space-delimited word} as the basic unit of text analysis.\footnote{I will continue throughout to use \say{space-delimited} to describe a family of simple string tokenization techniques which typically also include minimal heuristics for punctuation separation and a handful of language-specific rules like separating English contractions based on a short closed list, in partial accommodation of the difference between grammatical words and orthographic words~\cite{dixon2002word}.}
The very name of the fundamental bag-of-words approach (BoW) illustrates the implicit synonymity of \say{word} and \say{basic unit of representation} in NLP jargon.
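A minimal sketch of treating the space-delimited word as the unit of representation, in the bag-of-words spirit (not any particular library's implementation):

```python
from collections import Counter

def bag_of_words(text):
    """Map a text to counts over its space-delimited words, the implicit
    'basic units of representation' discussed above."""
    return Counter(text.lower().split())

bow = bag_of_words("The large white dog ate the hot dog left on the dining table.")
# Both occurrences of 'dog' from example (1) collapse into one symbol,
# and 'hot dog' is not treated as a unit at all.
```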
Although subword- and multiword-level systems were designed and developed outside this paradigm, mostly citing a non-English motivation, when the neural revolution came the predominant methods again anchored the field to the space-delimited word as the atom.
The most obvious advantage of this approach is its simplicity, considering how difficult it is in practice to extract correct sub-word morphemes directly from text.
Historically-entrenched orthographic conventions and local-context phonological processes lead to variance in morpheme form across instantiations, such as the disappearance of the stem's final \textit{e} in \textit{unbelievable} or the \textit{s}-\textit{t} alternation in derivations like \textit{Mars}-\textit{Martian}, making a deterministic mapping from surface form to morpheme sequence impossible.
The lack of overt textual marking of morpheme boundaries (except for the uncommon case of hyphenation) also leads to ambiguous segmentation in words like \textit{unionize}, and the general property of our sound and writing systems' inventory being relatively small leads to the incidence of affix-identical sequences in single-morpheme words like \textit{reply} (cf. \textit{shortly}) and \textit{bring} (cf. \textit{lying}).
Automatic detection of morphemes can be achieved today by unsupervised data-driven systems like Morfessor~\cite{creutz-lagus-2002-unsupervised,creutz2007unsupervised}, which rely on large amounts of training data and provide no guarantee of finding the true morphemes in all cases or for all downstream applications.
\section{Neural Representations}
\label{int:reps}
The idea of breaking down concepts in language into numerically-valued axes has played a role in the formation of the modern research landscape in linguistics.
\citet{osgood1952nature} proposed a low-dimensional space in which nominal objects and concepts are represented by values associated with characteristics which may describe them, such that \say{eager} and \say{burning} share a value along the \textit{weak} $\Leftrightarrow$ \textit{strong} dimension, while differing along the \textit{cold} $\Leftrightarrow$ \textit{hot} dimension.
The values were elicited from human subjects.
Scaling this very linguistically-motivated approach manually over an entire language is at the very least impractical, and over the years some relaxations of this scheme to define representations for words which are \textbf{distributed} along dimensions gave rise to more automation-friendly processing techniques.
Most crucial was the realization that the individual dimensions in the representation space do not have to be meaningful in and of themselves.
Liberating the dimensions from their labels allowed the number of dimensions to be governed by concerns of data availability and computational memory and power, rather than by the precision of our semantic theory and ontological thoroughness; it allows for the discovery of unnamed but possibly useful similarities and distinctions between concepts; and it \say{leaves room} for new properties to be learned if, for example, a domain shift occurs during the process of applying an embedding-based system to a downstream task.
Embedding concepts into a \say{blank} vector space using learning methods turns the implied causal direction that motivated Osgood's framework on its head: instead of creating the embeddings based on what we know about language and the relations between concepts, the latter become the proxy target by which we can measure whether or not the embeddings learned by our model are useful to us.
Starting with an arbitrary metric space with well-known properties such as $\mathds{R}^d$ becomes a great advantage, as the space comes with metrics and operations which are easy to conceptualize and imagine as the necessary proxies.\footnote{One heroic departure from the shackles of euclidean space is the line of work on embeddings in hyperbolic space~\cite{nickel2017poincare}, touted as a more suitable representation framework for hierarchical structures, including the semantic structure of a language.}
The formative instance of this realization was the ability to score the relative directionality of two vectors using the cosine similarity function, which can be compared to annotations in word similarity resources such as WordSim-65~\cite{rubenstein1965contextual}, where human subjects were asked to score word pairs without the hassle of decomposing them into their semantic properties first.
Metric space also affords the intuitive parallelogram metaphor of word analogy, haunting every introductory text and presentation on embeddings with the equation \texttt{king $-$ man $+$ woman $\approx$ queen}.
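Both operations can be demonstrated concretely. The sketch below uses invented 4-dimensional vectors (real embeddings are learned and far higher-dimensional), and the analogy query excludes the three input words from the candidate set, as is standard practice.

```python
# Toy demonstration of cosine similarity and the parallelogram analogy.
# The 4-dimensional vectors are invented for illustration only.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

emb = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.1, 0.8, 0.2],
    "man":   [0.1, 0.9, 0.1, 0.1],
    "woman": [0.1, 0.1, 0.9, 0.1],
    "apple": [0.0, 0.0, 0.0, 0.9],
}

# king - man + woman, compared against all words except the query terms.
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
candidates = [w for w in emb if w not in {"king", "man", "woman"}]
best = max(candidates, key=lambda w: cosine(target, emb[w]))
```

On these toy vectors the nearest remaining candidate to the translated point is indeed \say{queen}; with learned embeddings, the ranking is over the entire vocabulary.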
\section{Distributional Semantics}
\label{int:dist}
The development of the distributed view of representation for linguistic objects accompanied the rise of methodologies making use of the distributional hypothesis, traditionally attributed to~\citet{harris1954distributional} and famously framed by Firth as \say{you shall know a word by the company it keeps}.
The maximalist interpretation of this adage as \say{a word is defined by applying a combination function to the set of its contexts}, put to use before the modern neural era in influential methods such as Brown Clustering~\cite{brown-etal-1992-class}, is an appealing principle to the embedding movement for good reason:
breaking words down into contexts provides us with just the distributed fixed dimensions we seek.
Once we decide exactly what \say{context} means to us, we can programmatically extract all contexts for all target words given only a corpus, and base our latent dimensions (whose number is limited to hundreds or thousands for practical reasons) on them.
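The extraction step itself is mechanical once the definition is fixed. The sketch below uses a symmetric window of two tokens; the window size and the toy corpus are arbitrary choices for illustration.

```python
# Sketch of distributional context extraction: "context" here means
# words within a +/-2 token window of each target occurrence.
from collections import defaultdict

def extract_contexts(tokens, window=2):
    contexts = defaultdict(list)
    for i in range(len(tokens)):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        contexts[tokens[i]].extend(tokens[j] for j in range(lo, hi) if j != i)
    return contexts

contexts = extract_contexts("the cat sat on the mat".split())
```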
The two methods which ended up dominating the distributional embeddings landscape share a definition of context, essentially \say{words that appear near the target word}, but translate this decision into embedding differently.
In SkipGram~\cite{mikolov2013efficient}, dimension significance is built \say{bottom-up} from a random initialization and a traversal of the corpus; in GloVe~\cite{pennington-etal-2014-glove}, dimensions are the result of an implicit reduction of the full $V \times V$ co-occurrence matrix, where $V$ is the number of words in our vocabulary.
The former approach was inspired by early embedding systems~\cite{bengio2003neural} developed around the task of language modeling, which is defined with an expectation based in distributional signals, while the latter has origins in latent semantic analysis~\cite[LSA;][]{deerwester1990indexing}.
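The GloVe starting point can be made concrete by constructing the raw co-occurrence counts that the method implicitly reduces; the toy corpus and window size of one below are chosen for brevity.

```python
# Raw co-occurrence counts over a toy corpus: the V x V object that a
# GloVe-style method implicitly factorizes into low-dimensional vectors.
from collections import Counter

def cooccurrence(tokens, window=1):
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                counts[(w, tokens[j])] += 1
    return counts

counts = cooccurrence("the cat sat on the mat".split())
```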
Evaluation on intrinsic tasks such as similarity datasets and analogy benchmarks~\cite[e.g.,][]{finkelstein2001placing,mikolov-etal-2013-linguistic,hill-etal-2015-simlex} cemented distributional word embeddings as the representation go-to and an accessible replacement to one-hot encodings for a host of applications, while performance on \textbf{downstream} tasks within deep learning systems advanced the understanding of the utility that \textbf{pre-training} can afford end-to-end systems which include an embedding layer~\cite{collobert2008unified,collobert2011natural}.
\section{Out-of-Vocabulary Words}
\label{int:oov}
The choice of space-delimited words as the basic unit for representation, and the large resource investment necessary to pre-train a distributional model over a large corpus, in both money and time, create a situation where vectors can mostly be trusted \textbf{as long as the words they represent are present in the pre-training corpus}.
The models so far discussed have no intrinsic ability to represent words not present in their lookup table, known as out-of-vocabulary words or \textbf{OOV}s~\cite{brill-1995-transformation,brants2000tnt,plank2016non,heigold2017robust,young2018recent}.
Empirical analyses such as the one in \citet{mimick} show that indeed, the overwhelming majority of downstream datasets contain words not present in the pre-training corpora.
\citet{pinter-etal-2020-nytwit} present a diachronic dataset showcasing the volume of novel terms entering a large, steady daily publication in English over time; but even a snapshot of a language at a given moment contains unlimited domain-specific terms, morphological derivations, named entities, potential loanwords, typographical errors, and other sources of OOVs which would appear very reasonably in text analysis tasks and which the downstream model should be given the faculty to handle.
In fact, according to \citet{kornai2002many}, statistical reasoning leads us to conclude that languages have an infinite vocabulary.
But even if a language's word set were finite, and all present in some corpus, practical memory and lookup constraints would still limit embedding tables to non-exhaustive vocabularies.
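The practical gap can be quantified as an OOV rate; in the sketch below, two toy token lists stand in for a pre-training corpus and a downstream dataset.

```python
# Sketch: out-of-vocabulary rate of a downstream text relative to the
# vocabulary extracted from a (toy) pre-training corpus.
pretrain_tokens = "the cat sat on the mat".split()
downstream_tokens = "the cats sat on the rugs".split()

vocab = set(pretrain_tokens)
oovs = [w for w in downstream_tokens if w not in vocab]
oov_rate = len(oovs) / len(downstream_tokens)
```

Note that the inflection \textit{cats} counts as an OOV here despite \textit{cat} being in-vocabulary, previewing the compositional motivation discussed below.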
To overcome the intrinsic limits of corpus-learned embedding tables, the distributional system has begotten some heuristics that try to initialize embeddings for OOVs beyond the trivial random-initialization fallback.
If one were to stay true to Firth's maxim, one possible strategy would be to keep SkipGram's context embedding table as well as the main table (for \say{target} words), and initialize OOV embeddings based on the context in which they are first encountered~\cite{horn-2017-context}.
This approach has not caught on, and instead most practitioners took to the use of a special \texttt{<UNK>} embedding, named as an abbreviation of \textit{unknown}~\cite{bengio2003neural}.
In a pre-training stage, such an embedding is learned by replacing a small percentage of the corpus with a dedicated \texttt{<UNK>} token, thus gaining at least some prior for an initialization, in some sense an average over possible contexts for encountering \emph{any} word.
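A minimal sketch of this trick is given below; it replaces tokens by a frequency threshold rather than by a random percentage so that the outcome is deterministic, and the threshold value is arbitrary.

```python
# <UNK> pre-training trick (sketch): tokens rarer than a threshold are
# replaced by a shared <UNK> symbol, whose embedding thus averages over
# the contexts of many different rare words.
from collections import Counter

def unkify(tokens, min_count=2):
    freq = Counter(tokens)
    return [t if freq[t] >= min_count else "<UNK>" for t in tokens]

unked = unkify("the cat sat on the mat".split())
```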
This approach is brutally simplistic;
it assumes not only that all novel words are representable using the same approximation technique, but that they are all \textit{exactly the same}.
The first assumption alone is easy to dispute:
a careful observation of any taxonomy of word formation processes~\cite{lieber2005english,plag2018word} suggests that embedding new words into an existing space must involve considering multiple approaches in parallel.
\begin{itemize}
\item Words created by processes at the multi-word level, such as compounding or blending, require means of extracting the underlying constructed words and composing the semantic contribution from each word.
For example, \textit{brunch} is a blend of \textit{breakfast} and \textit{lunch}; a reasonable initial embedding can be the mean vector for these two words, hopefully keeping it at a high similarity with other meals and the appropriate time of day.
\item Words that are inflections of known words, for example \textit{ameliorating}, can benefit from a morphological analysis which finds its stem and syntactic suffix, placing the new vector at the sum of the verb \textit{ameliorate} and the generalized notion of \textit{-ing} verbs, if one is realized in the embedding space (arguably, in a good space it should at least be reliably extractable).
\item Novel named entities such as \textit{Lyft} or \textit{SARS-COV-2}, more often than not, reflect arbitrary naming practices and cultural primitives, and even recognition of their type (person / organization / location, etc.) might well be impossible without access to knowledge bases covering the appropriate domain, noting explicitly where in concept space the novel word should be embedded.
\item Some OOVs are the result of unpredictable subword processes such as typographical errors (typos) and stylistic variation, like the aforementioned expressive lengthening.
In such cases, it is sometimes best to opt out of creation of a new embedding at all and simply map the new form to the existing embedding of its intended canonical word form.
This choice will depend on the intended application; in certain cases like sentiment analysis, the stylistic information itself is essential.
\item Loanwords like \textit{vespa} originate in a different language than the one the embedding was produced for, but in some cases we have access to an embedding space for the origin language and a function which translates between the two languages' space.
A system which can detect the word and its origin, perhaps overcoming processes like writing-system transliteration and phonological adaptation, can start by embedding the target language word in a position projected from the source language's embedding for the equivalent word form.
\end{itemize}
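The first item above can be sketched concretely: assuming the blend's source words have already been detected, a mean vector serves as the initial embedding. The 3-dimensional vectors are invented for illustration.

```python
# Blend heuristic sketch: initialize "brunch" at the mean of the
# (toy) embeddings for "breakfast" and "lunch".
emb = {
    "breakfast": [0.8, 0.2, 0.1],
    "lunch":     [0.6, 0.4, 0.3],
}
emb["brunch"] = [(a + b) / 2 for a, b in zip(emb["breakfast"], emb["lunch"])]
```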
This is not a comprehensive list.
More types of novel words are identified in \citet{pinter-etal-2020-nytwit}, and not all suggestions in the taxonomy above correspond to actual existing work.
Limiting this discussion to a strict interpretation of written-form uniqueness also prevents us from considering as OOVs concepts which are spelled in the same way as other words, either by chance (homography, for example \textit{row} as a line or a fight), by naming (e.g., \textit{Space Force}), or by processes such as zero-derivation (the verb \textit{smoke}, derived from the noun).
In languages other than English, some OOV-creating forces may be more dominant in word formation than in English.
Morphologically-rich languages, as one edge case, feature large percentages of OOVs in novel texts for a given task's text size compared to English, and this property is often compounded by the fact that many of these are low-resource languages, possessing a relatively small corpus-extracted vocabulary to begin with.
The richness and unpredictability of the OOV problem calls for complementing the word representation systems obtained distributionally with additional approaches, which is the focus of this survey.
\section{Subword Compositionality}
\label{int:comp}
The first approach considered is an attempt to break the space-delimited word paradigm and get at the finer atomic units of meaning, which can then either be used as the fundamental representation layer, or induce better representations at the word level.
This perspective, known as the \textbf{compositional} approach, is motivated mostly by cases where insufficient generalizations are made over morphological word formation processes.
Under the compositional framework, an ideal representation for \textit{unbelievable} can be obtained by (1) detecting its three morphological components \textit{un-}, \textit{believe}, and \textit{-able}, (2) querying reliable representations learned for each of them, distributionally or otherwise, and (3) properly assembling them via some appropriate function.\footnote{I will use the term \textbf{subword} to denote textual units which are largely between the character level and the word level, when no guarantee of their morphological soundness is attempted. In appropriate contexts, this can also denote word-long or character-long elements which are nevertheless obtained by a subword tokenizer.}
Each of these three steps is a challenge in itself and open to various implementational approaches.
Learning representations for subword units is usually done by considering the subword elements in unison with the full word while applying a distributional method~\cite[e.g.,][]{bojanowski-etal-2017-enriching}, but some have opted for pre-processing the pre-training corpus such that only lemma forms exist as raw text and the other tokens are explicit representations of the morphological attributes attached to each lemma~\cite{avraham-goldberg-2017-interplay,tan-etal-2020-mind}, inducing the production of more consistent vocabularies.
Others yet leave the learning to the downstream task itself, feeding off the backpropagated signal from the training instances~\cite{sutskever2011generating,ling-etal-2015-finding,lample-etal-2016-neural,garneau2019attending}; while others train a compositional network based on the word embedding table in an intermediate phase between pre-training and downstream application~\cite{mimick,zhao-etal-2018-generalizing}.
The composition function from subwords to the word level is also open to many different approaches:
prior work has opted for construction techniques as diverse as using the subword strings as one-hot entries to represent the words themselves~\cite{huang2013learning};
summing morpheme embeddings to produce word embeddings~\cite{botha2014compositional};
traversing a possibly deep morphological parse tree using a recursive neural network~\cite{luong-etal-2013-better};
positing probabilistic word embeddings for which the morpheme embeddings act as a prior distribution~\cite{bhatia-etal-2016-morphological};
side-by-side training of both word-level and character-level modules followed by concatenating the resulting representations, to allow the downstream model to learn from both levels independently and control the interaction terms directly~\cite{plank-etal-2016-multilingual};
assembling a hierarchical recurrent net that progressively encodes longer portions of text in each layer~\cite{chung2019hierarchical};
or dispensing with the word level altogether and just representing text with a single atomic layer of characters or subwords~\cite{sennrich-etal-2016-neural}.
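Among these, the additive composition function is the simplest to sketch; the morpheme segmentation and 2-dimensional vectors below are assumed inputs rather than learned quantities.

```python
# Additive subword composition (sketch): the word vector is the sum of
# its morpheme vectors. Segmentation and values are assumed inputs.
morpheme_emb = {
    "un-":     [-1.0, 0.0],
    "believe": [0.5, 0.9],
    "-able":   [0.1, 0.3],
}

def compose(morphemes):
    dim = len(next(iter(morpheme_emb.values())))
    total = [0.0] * dim
    for m in morphemes:
        total = [a + b for a, b in zip(total, morpheme_emb[m])]
    return total

word_vec = compose(["un-", "believe", "-able"])
```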
Most challenging of all is the detection of the subwords themselves.
As noted above, morphemes are hard to detect from the surface form of a word.
For the default setting where no curated resources exist to allow correct morpheme extraction from a word's form, as is the case in nearly all languages in the world, the mainstream of compositional representation research has centered on the raw character sequence, the unarguable atom of text,\footnote{At least in languages using the Latin script, like English. Chinese text analysis has benefitted from decomposing characters into strokes or radicals; Hebrew and Arabic include diacritical marks that are not character-intrinsic; and elsewhere, treatment of individual bytes from the Unicode representation of characters has also shown merit.} which is used either via direct operation or as a basis for heuristics that define subword units based on statistical objectives.
The great advantage of using characters or primitive character n-grams as the atomic unit for the model~\cite{santos2014learning,kim2016character,wieting-etal-2016-charagram,bojanowski-etal-2017-enriching,peters-etal-2018-deep} is that it rids us of the need to explicitly designate morphemes altogether; the challenge is to still capture the information they convey, somehow.
In contrast, heuristically learning a subword vocabulary from information-theoretic notions~\cite{sennrich-etal-2016-neural,kudo-richardson-2018-sentencepiece} or character-sequence unigram distribution~\cite{kudo-2018-subword} may find us many true morphemes, but there is no guarantee of either precision or recall: corpus collection effects are significant in determining the ultimate vocabulary, orthographic norms may still obfuscate many useful generalized morphemes, and many frequent character sequences may enter the subword vocabulary as the result of coincidental quirks.
For example, the character sequence \textit{eva} might contribute to the representation of \textit{unbeli\textbf{eva}ble}, passing along signals learned from unrelated words such as \textit{Eva} or \textit{\textbf{eva}luate}.
The ever-growing popularity of systems which use such vocabularies in conjunction with the null composition function that ignores sub-word hierarchy and passes the downstream model embeddings corresponding to the raw subword sequence (see~\S\ref{int:contextual}) prevents any possibility of correcting incorrect subword tokens at the word level: in this scenario, the next processing layer of the model will use the embedding for \textit{eva} as if it were part of the input equally important to a frequent word like \textit{house}.
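The n-gram extraction underlying such vocabularies, and the accidental \textit{eva} match itself, can be reproduced in a few lines; the boundary markers follow the fastText convention, while fixing n at 3 is a simplification (fastText uses a range of lengths).

```python
# fastText-style character n-gram extraction (sketch): the word is
# padded with boundary markers and every n-gram contributes to its
# representation, including accidental matches like "eva".
def char_ngrams(word, n=3):
    padded = "<" + word + ">"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

grams = char_ngrams("unbelievable")
```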
\section{Relational Semantics}
\label{int:rel}
Another way to complement distributionally-trained embeddings is to incorporate signals from curated type-level \textbf{relational} resources.
The prominent category of such resources is semantic graphs, such as \WN{}~\cite{wordnet} and BabelNet~\cite{navigli-ponzetto-2010-babelnet}, which encode the structural qualities of language as a representation of human knowledge.
The core goal of semantic graphs is to describe connections between referents in the perceived and conceived world, and to this end they make an explicit distinction between words as character sequences and an internal semantic primitive which we can call \textbf{concepts}.
Concepts form the chief node type in the semantic graph, connected by individual edges typed into relations such as hypernymy (\textit{elm} \say{is a} \textit{tree}) or meronymy (\textit{branch} \say{is part of a} \textit{tree}), as well as linguistic facts about concept names (\textit{shop.verb} \say{is derivationally related to} \textit{shop.noun}) which make use of the word-form partition of the graph's node set.
In similar vein, relations which straddle the divide between form and function, like synonymy, are extractable from the bipartite subgraph relating word forms and their available meanings.
In the context of language representation, these structures offer a notion of atomicity stemming from our conceptual primitives, an attractive premise.
They may not answer all needs arising from inflectional morphology (since syntactic properties do not explicitly denote concepts) or some of the other word formation mechanisms, but the rich ontological scaffolding offered by the graph and the prospects of assigning separate embeddings for homonyms in a model-supported manner, assuming sense can be disambiguated in usage, seem much \say{cleaner} than relying on large corpora and heuristics to statistically extract linguistic elements and their meaning.
In addition to this conceptual shift, as it were, the graph structure itself provides a learning signal not present in linear corpus text, relating the basic units to each other through various types of connections and placing all concepts within some quantifiable relation of each other (within each connected component, although lack of any relation path is also a useful signal).
The structure can also occupy the place of the fragile judgment-based word similarity and analogy benchmarks, allowing more exact, refined, well-defined relations to be used for both learning the representations and evaluating them.
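A toy hand-built fragment illustrates this structural signal: the hypernymy relation forms chains (and, more generally, a hierarchy) that can be traversed and quantified, unlike linear corpus text. The three edges below are invented examples in the spirit of \WN{}.

```python
# Toy hypernym fragment of a semantic graph and a transitive-closure
# query; edges are hand-picked illustrations, not a real resource.
hypernym = {"elm": "tree", "tree": "plant", "plant": "organism"}

def hypernym_path(concept):
    path = [concept]
    while path[-1] in hypernym:
        path.append(hypernym[path[-1]])
    return path

path = hypernym_path("elm")
```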
Methods which embed nodes and relations from general graph structures before even considering any semantics attached to individual nodes and edges, like Node2vec~\cite{grover2016node2vec} and graph convolutional nets~\cite{gcn}, indeed serve as a basis and inspiration for many of the works in this space.
The relational paradigm complements the distributional one in a fundamentally different manner than the compositional one does, and this difference has bearing on the OOV problem, which can be viewed from several perspectives.
First is the potential of semantic graphs to improve representation of words that are rare or not present in a large corpus used to initialize distributional embeddings.
This has proven to be a powerful direction by methods such as retrofitting~\cite{faruqui-etal-2015-retrofitting}, where embeddings of related concepts are pushed together in a post-processing learning phase, showcasing \WN{}'s impressive coverage of English domain-specific taxonomies such as classical natural sciences.
Elsewhere, properly modelling hypernymy, for example, has been found to help understand text with rare words whose hypernyms are well-represented in the pre-training corpus~\cite{shwartz-etal-2017-hypernyms}.\footnote{A tangential but noteworthy approach considers relations that are not curated in large graphs, but rather corpora annotated for inter-word relations such as syntactic dependencies~\cite{madhyastha-etal-2016-mapping}.
Their system creates a mapping between a distributionally-obtained embedding table and one trained on the annotated parses, and generalizes this mapping to words which are now out-of-vocabulary for a further downstream task (e.g., sentiment analysis).
In this case, the reference vocabulary (for defining OOV-ness) is not the unsupervised corpus, but rather an intermediate downstream task.}
Still, semantic graphs provide only a partial solution to the overall goal of OOV impact mitigation, given their limited scope and heavy reliance on expert annotation.
From the other direction, systems relying on semantic graphs for applications such as question answering and dialogue generation are likely to encounter \say{OOVs} of their own, i.e.~words and concepts not present in the underlying graph.
Unlike the corpus-OOV problem, which cannot be quantified convincingly without selecting a specific downstream task first, coping with graph-OOVs can be examined through tasks intrinsic to the graph structure itself.
One such task is \textbf{relation prediction}, where we assume a concept has a known connection with \textit{some} other concept, and need to figure out which one.
Depending on our perspective, either the source or target of the relation may be the OOV concept; for example, on first encounter of the concept \textit{indian lettuce}, we wish to know its hypernym from our set of known concepts.
This task is also useful for a similar class of graphs known as \textbf{knowledge graphs} (KGs), such as Freebase~\cite{bollacker2008freebase}\footnote{Now defunct.} and WikiData~\cite{wikidata}, which differ from semantic graphs in several aspects.
While \WN{} curates connections between semantic concepts and dictionary entries, including certain aspects of the physical world (e.g. \say{an elm is a tree}), KGs focus on real-world entities and often time-sensitive encyclopedic knowledge (e.g. \say{Satya Nadella is the CEO of Microsoft}).
\WN{} is a manually-crafted resource created by language and domain experts, whereas many KGs are either crowdsourced or automatically extracted from databases and large text corpora.
As a result, KGs are typically disconnected, shallow, and sparse, boasting areas of hubness and areas of isolation; this contrasts with semantic graphs, where systematic connectedness and hierarchy have been observed~\cite{sigman2002global}.
KGs are also distinguished by the richness of their relation type variety, in the hundreds or thousands, compared to \WN{}'s 18 relation types (including seven pairs of relations reciprocal to each other).
Nevertheless, much of the work on the relation prediction problem has been developed and evaluated on both semantic and knowledge graphs, as well as on derived tasks like \textbf{graph completion}, where the entirety of a node's connections are to be inferred at once, imitating real-world scenarios of knowledge discovery.
Over the years, distributional methods have been used to feed increasingly complex neural nets predicting relations by embedding both concept nodes and relation edges based on corpus-trained tables, to a large degree of success~\cite[e.g.][]{nickel2011three,socher2013reasoning,bordes2013translating,yang2014embedding,toutanovachen2015,neelakantan2015compositional,ji-etal-2015-knowledge,shi2017proje,dettmers2018conve,nathani-etal-2019-learning}.
The basic idea calls for embedding concepts into a metric space and modeling relations by some operator that induces a score for an embedding pair input, either by translating the concept vectors, combining them via bilinear operators, projecting them onto a \say{scoring scale}, or designing an intricate deep system that finds complex relationships.
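The translation variant of this idea, in the spirit of the TransE model cited above, can be sketched with toy 2-dimensional vectors: a triple scores well when the head vector translated by the relation vector lands near the tail vector.

```python
# Translation-based relation scoring sketch (TransE-style): a triple
# (h, r, t) is plausible when emb[h] + rel[r] is close to emb[t].
# All vector values are toy inputs.
import math

emb = {"elm": [0.1, 0.2], "tree": [0.6, 0.9], "shop": [0.9, 0.1]}
rel = {"hypernym": [0.5, 0.7]}

def score(h, r, t):
    translated = [a + b for a, b in zip(emb[h], rel[r])]
    return -math.dist(translated, emb[t])  # higher (closer to 0) is better

good = score("elm", "hypernym", "tree")
bad = score("elm", "hypernym", "shop")
```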
While these systems achieve impressive results, they all build on an implicit assumption that relation prediction is a \textbf{strictly local} task: the fit of an edge can be estimated from the nodes it connects and the intended label alone.
In KGs, where structure is of secondary concern, this assumption may go a long way before its limitations begin to degrade performance; in the much more structure-crucial semantic graphs, it is increasingly likely that connections will be predicted which violate structural constraints that should be enforceable, e.g.~that the hypernym graph cannot contain cycles.
Some systems indeed go beyond the individual edge to embed and predict relations, for example the idea of a path prediction task~\cite{guu-etal-2015-traversing} which demands more structure reliance, or embedding methods leveraging local neighborhoods of relation interactions and automatic detection of relations from syntactically parsed text in an iterative manner~\cite{riedel-etal-2013-relation,toutanova-etal-2015-representing,gcn}.
Others have constructed prediction models where an adversary produces examples which violate structural constraints such as symmetry and transitivity~\cite{minervini-riedel-2018-adversarially}.
\citet{pinter-eisenstein-2018-predicting} present a system which improves \WN{} prediction by augmenting the distributionally-obtained signal with features (motifs) representing the global structure of the semantic edifice.
In addition to the task benefit, the emerging feature weights lead to discovery of some general properties of English semantics.
\section{Contextualized Representations}
\label{int:contextual}
Recent developments in NLP have brought about a shift in the balance depicted so far with respect to the atomic level chosen to represent language in applications and the approaches taken to create these representations.
Advances in multi-task learning and transfer learning, both in non-neural NLP and in non-NLP deep methods, matured well enough to allow deep NLP to use them effectively as well.
The increase of available computation power and the extreme utility found to lie in recurrent nets, most notably the Long Short-Term Memory cell~\cite[LSTM;][]{hochreiter1997long}, led to a series of works suggesting the incorporation of instance-specific context into the feature extraction part of a model, before applying any task-specific elements, beginning with simple prediction tasks~\cite{melamud-etal-2016-context2vec}, followed by near-full coverage of core NLP~\cite{peters-etal-2018-deep}.
The next step was to continue training the shared-architecture context learner, which we can now safely call a language model, during the downstream step, in a process known as fine-tuning~\cite{howard-ruder-2018-universal}.
Design and processing power considerations, but also downstream performance, fueled the shift~\cite{radford2018improving} from recurrent net infrastructure to transformer models~\cite{vaswani2017attention}, which in turn facilitated another major conceptual innovation where autoregressive token prediction was replaced by masked language modeling, where sequence-medial tokens are hidden from the representation layer and must be predicted based on the remaining context~\cite{devlin-etal-2019-bert,liu2019roberta}.
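The masked-language-modeling input transformation reduces to a simple replacement, sketched below; position selection is fixed here for determinism, whereas real pre-training samples roughly 15\% of positions at random.

```python
# Masked LM input transformation (sketch): chosen positions are hidden
# behind a [MASK] symbol and must be predicted from the remaining
# context. Positions are fixed here; real models sample them randomly.
def mask_tokens(tokens, positions):
    return [("[MASK]" if i in positions else t) for i, t in enumerate(tokens)]

masked = mask_tokens("the cat sat on the mat".split(), positions={1, 5})
```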
Throughout this evolution, one main principle remained stable: the language prediction task acts as the pre-training step, providing a scaffolding model which is capable of representing tokens within a sequence at a level of effectiveness that allows downstream tasks to begin training with meaningful \textbf{contextualized} representations.
The heart of contextualization lies in the distributional approach.
The design of these pre-training tasks meant they could no longer tolerate OOV tokens at the rate encountered by static embedding algorithms, as that might render the models unusable for any words that appear in context with OOVs downstream, rather than just the OOVs themselves.
On the other hand, the prediction layer creates a computational bottleneck which scales with the size of the vocabulary, since every token must be available for prediction at all model steps.
Therefore, these models resorted to compositional techniques for the bottom layer where the input sequence is processed into tokens.
The character convolution net selected for ELMo~\cite{peters-etal-2018-deep} did not gain traction, possibly because it didn't provide an adequate method for predicting text from the output layer, and so subsequent models, particularly those relying on transformers, operate over a sequence of equal-status tokens, each representing a word or a subword, from a mid-size vocabulary (tens of thousands) built in a pre-pre-training phase using statistical heuristic techniques mentioned in~\S\ref{int:comp}.
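A common way such a fixed subword vocabulary is applied at inference time is greedy longest-match segmentation, sketched below; the vocabulary is invented, and real tokenizers add details such as continuation markers and byte-level fallbacks.

```python
# Greedy longest-match subword tokenization over a fixed vocabulary
# (sketch). The vocabulary is invented; the single letters guarantee
# that segmentation never fails for this example word.
VOCAB = {"un", "believ", "able"} | set("unbelievable")

def tokenize(word):
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # longest match first
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            raise ValueError("unsegmentable suffix: " + word[i:])
    return pieces

pieces = tokenize("unbelievable")
```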
These models inherit the problems endemic to these methods, such as inadequacy for certain OOV classes, morphological unsoundness, and length imbalance, as well as the added burden they impose on already length-limited token sequences.
Common wisdom seems to hold that they make up for these shortcomings within the depths of their fully-connected transformer layers, and end up with satisfactory top-layer representations.
Recent work challenging these models with truly novel word forms suggests otherwise~\cite{pinter-etal-2020-nytwit,pinter-etal-2020-will}, while work on either incorporating the compositional signal into subword-vocabulary transformers~\cite{ma-etal-2020-charbert,aguilar2020char2subword,el-boukkouri-etal-2020-characterbert,pinter2021learning}, or replacing the subwords with characters or bytes altogether~\cite{clark2021canine,xue2021byt5}, is rapidly gaining traction as well.
\section*{Acknowledgments}
This survey is an adapted version of the introduction to my PhD thesis.
I thank my committee for helping to shape it: my advisor, Jacob Eisenstein; Mark Riedl, Dan Roth, Wei Xu, and Diyi Yang.
\section{Linear Problem}\label{supp:SectionLinearProblem}
In this section we provide the equations of motion for the short-range network. For convenience we use a bra-ket notation which differs slightly from the main text. Due to the nature of the evolution it is more convenient to represent the vector $\vec \Psi$ as a wave function constructed of $N/2$ unit cells with two components in each cell:
\begin{eqnarray}
&&\ket{\Psi(t)} = \sum_{n = 1}^{N/2} \Big[\psi^A_n(t)\ket{A} + \psi^B_n(t)\ket{B}\Big] \otimes \ket{n} \nonumber \\
&&= \sum_{n,\; p = A, B}\psi_n^p \ket{n, p}
\end{eqnarray}
We also rewrite the map $\hat U$ as an evolution operator built from the shift operator $\hat T$ and the mixing operator $\hat C$:
\begin{equation}
|\Psi(t+1)\rangle = \hat U^{(0)}|\Psi(t)\rangle=\hat T^{\dagger} \hat C \hat T \hat C |\Psi(t)\rangle
\label{evol}
\end{equation}
Here the mixing operator $\hat C$ acts locally on the two components within each cell and is built from $2\times 2$ unitary matrices parametrized by the rotation angle $\theta$:
\begin{eqnarray}
\hat{C}=\sum_n \hat c \otimes \ket{n}\bra{n} = \sum_n \begin{pmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{pmatrix}\otimes\ket{n}\bra{n}
\end{eqnarray}
and $\hat T$ is the shifting operator moving all components of the lattice to the left:
\begin{eqnarray}
\hat{T} = \sum_n \ket{A, n}\bra{B, n} + \ket{B, n}\bra{ A,n + 1}
\end{eqnarray}
The resulting equations of motion of the linear evolution are as follows:
\begin{eqnarray}
&& \psi_n^{A}(t + 1) = \cos^2\theta\psi_n^A(t)- \cos\theta\sin\theta\psi_{n-1}^B(t)\nonumber \\
&&+ \sin^2\theta\psi^A_{n +1}(t) + \cos\theta\sin\theta\psi_n^B(t) \nonumber \\
\nonumber\\
&& \psi_n^{B}(t + 1) = \sin^2\theta\psi_{n - 1}^B(t)- \cos\theta\sin\theta\psi_{n}^A(t) \nonumber \\
&&+ \cos^2\theta\psi^B_{n}(t) + \cos\theta\sin\theta\psi_{n + 1}^A(t)
\label{eomsLinear}
\end{eqnarray}
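As a quick numerical sanity check (our own sketch, not part of the original supplement), one step of the update \eqref{eomsLinear} can be implemented directly on a periodic lattice and verified to conserve the total norm, as expected for a composition of unitaries:

```python
import numpy as np

def linear_step(A, B, theta):
    """One step of the linear circuit map for the cell amplitudes psi^A_n, psi^B_n.
    Periodic boundaries; np.roll(X, 1)[n] = X[n-1], np.roll(X, -1)[n] = X[n+1]."""
    c, s = np.cos(theta), np.sin(theta)
    A_new = c**2 * A - c*s*np.roll(B, 1) + s**2*np.roll(A, -1) + c*s*B
    B_new = s**2 * np.roll(B, 1) - c*s*A + c**2*B + c*s*np.roll(A, -1)
    return A_new, B_new

rng = np.random.default_rng(0)
A = rng.normal(size=8) + 1j*rng.normal(size=8)
B = rng.normal(size=8) + 1j*rng.normal(size=8)
norm_before = np.sum(np.abs(A)**2 + np.abs(B)**2)
A, B = linear_step(A, B, theta=0.3)
norm_after = np.sum(np.abs(A)**2 + np.abs(B)**2)
```

Since the map is $\hat T^\dagger \hat C \hat T \hat C$ with all factors unitary, the norm is conserved to machine precision.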
The solution comes in terms of plane waves $\vec{\psi}_n = e^{i k n}\vec{\psi}_k$, where $\vec{\psi}_k$ is a two-component vector and $k$ is a wave number. Diagonalization leads to the dispersion relation (see Fig. \ref{fig:spectrum})
\begin{equation}
\cos\omega = \cos^2\theta + \sin^2\theta\cos k
\label{supp:dispersion}
\end{equation}
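The dispersion relation can be checked numerically. Fourier transforming Eqs.~\eqref{eomsLinear} gives a $2\times2$ Bloch matrix (our own derivation, written out below); the phases of its unimodular eigenvalues satisfy Eq.~\eqref{supp:dispersion}:

```python
import numpy as np

def bloch_matrix(k, theta):
    """One-step evolution matrix in k-space, obtained by substituting
    psi_n = exp(i*k*n) * psi_k into the real-space equations of motion."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c**2 + s**2*np.exp(1j*k), c*s*(1 - np.exp(-1j*k))],
                     [c*s*(np.exp(1j*k) - 1),   c**2 + s**2*np.exp(-1j*k)]])

theta, k = 0.7, 1.1
lam = np.linalg.eigvals(bloch_matrix(k, theta))
# eigenvalues are exp(+i w) and exp(-i w); compare cos(w) with the dispersion relation
cos_w = np.cos(np.angle(lam))
target = np.cos(theta)**2 + np.sin(theta)**2*np.cos(k)
```

Note that $\det$ of the Bloch matrix equals one, so the two eigenvalue phases are $\pm\omega$ with $2\cos\omega$ given by the trace.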
\begin{figure}
\centering
\includegraphics[width = 0.45\textwidth]{supp_fig1.pdf}
\caption{The dispersion relation corresponding to Unitary Circuits (see eq. \eqref{supp:dispersion}). The parameter $\theta$ is varied to showcase a dispersionless flat band (red), a case of constant group velocity (black) and the generic case (green).}
\label{fig:spectrum}
\end{figure}
\section{Short Range Network}\label{supp:SectionSRN}
For convenience we rewrite the linear part of the evolution as follows:
\begin{eqnarray}
&& \psi_n^{A}(t + 1) = \alpha^A_n(\Psi(t)) \nonumber \\
&& \psi_n^{B}(t + 1) = \alpha^B_n(\Psi(t))
\end{eqnarray}
The nonlinearity-inducing operator $\hat G$ is applied after the linear part of the evolution:
\begin{eqnarray}
\hat U_\text{nonlin} = \hat G \hat U^{(0)}
\label{nonlinEvol}
\end{eqnarray}
The nonlinearity is induced through an additional norm-dependent phase rotation depending on the result of the local linear evolution:
\begin{eqnarray}
\hat G = \sum_{n, p} e^{i g |\alpha^p_n|^2}\otimes\ket{n , p}\bra{n, p}
\end{eqnarray}
The final equations of motion are:
\begin{eqnarray}
&& \psi_n^{A}(t + 1) = e^{i g |\alpha^A_n|^2}\big[\cos^2\theta\psi_n^A(t)- \cos\theta\sin\theta\psi_{n-1}^B(t)\nonumber \\
&&+ \sin^2\theta\psi^A_{n +1}(t) + \cos\theta\sin\theta\psi_n^B(t)\big] \nonumber \\
\nonumber\\
&& \psi_n^{B}(t + 1) = e^{i g |\alpha^B_n|^2}\big[\sin^2\theta\psi_{n - 1}^B(t)- \cos\theta\sin\theta\psi_{n}^A(t) \nonumber \\
&&+ \cos^2\theta\psi^B_{n}(t) + \cos\theta\sin\theta\psi_{n + 1}^A(t)\big]
\label{eomsNonLinear}
\end{eqnarray}
The equations of motion couple nearest and next-to-nearest neighbors, which falls under the definition of a short-range network. The integrable limit is reached for $\theta = 0$, where the equations of motion decouple into on-site maps:
\begin{eqnarray}
&& \psi_n^{A}(t + 1) = e^{i g |\psi^A_n|^2}\psi_n^A(t) \nonumber \\
&& \psi_n^{B}(t + 1) = e^{i g |\psi^B_n|^2}\psi_n^B(t)
\label{eomsSRNintegrable}
\end{eqnarray}
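In this limit the evolution is a pure on-site phase rotation, so every local norm $|\psi_n^p|^2$ is an integral of motion. This is easy to confirm numerically (a sketch with arbitrary parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=6) + 1j*rng.normal(size=6)  # one sublattice, theta = 0
g = 0.8
# integrable map: each site only acquires a norm-dependent phase
psi_next = np.exp(1j*g*np.abs(psi)**2) * psi
```

The site-wise moduli of `psi_next` and `psi` coincide, so the $N$ local norms are conserved, matching the integrability claim.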
\section{Long range network}\label{supp:SectionLRN}
The long range network of observables is obtained in the normal mode space of the model. The wave function can be represented as a sum of normal modes:
\begin{equation}
\ket{\Psi(t)} = \sum_{k, r} e^{i \omega_k^r t} c_k^r(t) \ket{\psi_k^r},
\label{wfNormalModeRepr}
\end{equation}
where the index $r$ corresponds to one of the two bands and $\ket{\psi_k^r}$ is a corresponding normal mode. In the linear setup the normal mode coefficients are constant in time, $c_k^r(t) = \mathrm{const}$, and as such are integrals of motion. In reciprocal space the network consists of disconnected nodes, with $c_k^r$ associated to each node.
Let us expand the nonlinear evolution operator eq.~\eqref{nonlinEvol} to first order in the parameter $g$:
\begin{eqnarray}
\hat U_\text{nonlin} \approx \hat U^{(0)} + i g \sum_{n,p} |\alpha_n^p|^2 \ket{n,p}\bra{n,p}\,\hat U^{(0)},
\label{nonlinEvolExpanded}
\end{eqnarray}
where $|\alpha_n^p|^2$ can be represented as:
\begin{eqnarray}
|\alpha_n^p|^2 = \bra{\Psi(t)}\hat U^{(0)\dagger}\ket{n, p}\bra{n, p}\hat U^{(0)}\ket{\Psi(t)}.
\end{eqnarray}
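The first-order expansion can be checked against the exact map. Using the linear update of Eqs.~\eqref{eomsLinear} (reimplemented below as a sketch), the difference between the exact nonlinear step and its first-order-in-$g$ approximation shrinks quadratically with $g$:

```python
import numpy as np

def linear_step(A, B, theta):
    """Same update as Eqs. (eomsLinear), periodic boundaries."""
    c, s = np.cos(theta), np.sin(theta)
    A2 = c**2*A - c*s*np.roll(B, 1) + s**2*np.roll(A, -1) + c*s*B
    B2 = s**2*np.roll(B, 1) - c*s*A + c**2*B + c*s*np.roll(A, -1)
    return A2, B2

rng = np.random.default_rng(2)
A = rng.normal(size=16) + 1j*rng.normal(size=16)
B = rng.normal(size=16) + 1j*rng.normal(size=16)
errs = []
for g in (1e-2, 1e-3):
    aA, _ = linear_step(A, B, theta=0.4)
    exact  = np.exp(1j*g*np.abs(aA)**2) * aA     # full nonlinear step (A sublattice)
    approx = aA + 1j*g*np.abs(aA)**2 * aA        # first order in g
    errs.append(np.max(np.abs(exact - approx)))
```

Reducing $g$ tenfold cuts the residual by roughly a factor of one hundred, as expected for an $O(g^2)$ error.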
Using the normal mode representation of the wave function \eqref{wfNormalModeRepr} we obtain the evolution equations of normal mode coefficients:
\begin{eqnarray}
&& c_k^r(t+1) = e^{i \omega^r_k} c_k^r(t) + \nonumber \\
&& ig\sum_{r_1,r_2,r_3}\sum_{k_1,k_2,k_3} I_{k,k_1,k_2,k_3}^{r,r_1,r_2,r_3}c^{r_1}_{k_1}(t)c^{r_2}_{k_2}(t)\left(c^{r_3}_{k_3}(t)\right)^* ,
\label{EoMs_EOs}
\end{eqnarray}
where the overlap integrals $I$ are given by the following expression:
\begin{eqnarray}
&&I_{k,k_1,k_2,k_3}^{r,r_1,r_2,r_3} = e^{i \omega_{k_1}^{r_1}}\sum_{n,p}\bra{n,p}\hat U^{(0)}\ket{\psi_{k_2}^{r_2}}\bra{\psi_{k_3}^{r_3}}\hat U^{(0)\dagger}\ket{n, p} \nonumber \\
&&= e^{i (\omega_{k_1}^{r_1} + \omega_{k_2}^{r_2} - \omega_{k_3}^{r_3})}\sum_{n,p}\langle n,p \ket{\psi_{k_2}^{r_2}}\langle \psi_{k_3}^{r_3}\ket{n, p}.
\end{eqnarray}
The number of triplet terms induced by nonlinearity in equation \eqref{EoMs_EOs} is proportional to $N^3$. These equations correspond to a long range network.
\section{Deviation vectors}\label{supp:SectionDeviationVectors}
To compute the set of Lyapunov exponents we follow the evolution of tangent vectors $\lbrace \vec w_i \rbrace$, $2N$ in total; each vector corresponds to a direction of exponential growth or shrinking of the deviation from the reference trajectory. The tangent vectors are evolved using the corresponding equations of motion derived below. We measure the growth factor $\gamma(t) = |\vec w(t)|$ of each tangent vector and compute transient Lyapunov exponents $X_i(t) = \frac{1}{t}\sum_{\tau=1}^{t} \log \gamma(\tau)$, after which the tangent vectors are orthonormalized using a Gram-Schmidt procedure. The evolution of the positive transient Lyapunov exponents $X(t)$ is shown in Fig. \ref{supp:fig2}. After an initial decay the transient Lyapunov exponents saturate, and the saturated values are taken as the final values of the Lyapunov exponents $\Lambda$. Due to the conservation of the norm two exponents are expected to attain zero value. In the figure we see one of them (bottommost purple line) tending to zero with increasing time and no saturation.
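The tangent-vector procedure described above is the standard Benettin algorithm, with the Gram-Schmidt step implemented here via QR factorization. As an illustration on a toy system (the Hénon map, not the circuit model of this paper), the transient exponents converge to the known spectrum:

```python
import numpy as np

def henon(x, a=1.4, b=0.3):
    """Henon map (x, y) -> (1 - a x^2 + y, b x)."""
    return np.array([1 - a*x[0]**2 + x[1], b*x[0]])

def henon_jac(x, a=1.4, b=0.3):
    """Jacobian of the Henon map at x."""
    return np.array([[-2*a*x[0], 1.0], [b, 0.0]])

x = np.array([0.1, 0.1])
W = np.eye(2)                      # tangent vectors as columns
logsum = np.zeros(2)
T = 20000
for t in range(T):
    W = henon_jac(x) @ W           # evolve tangent vectors with the Jacobian
    x = henon(x)                   # evolve the trajectory
    W, R = np.linalg.qr(W)         # Gram-Schmidt orthonormalization via QR
    logsum += np.log(np.abs(np.diag(R)))
lyap = logsum / T                  # transient exponents X_i(T)
```

The largest exponent approaches the literature value ($\approx 0.42$), while the sum of exponents equals $\log|\det J| = \log b$ exactly, a useful consistency check for any implementation.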
\begin{figure}
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{supp_fig2a.pdf}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{supp_fig2b.pdf}
\end{subfigure}
\caption{The evolution of positive transient Lyapunov exponents. a) SRN case with angle $\theta = 0.1$ and nonlinearity $g = 1.0$, b) LRN case with angle $\theta = 0.33\pi$ and $g = 0.1$. For both cases system size $N = 200$. }
\label{supp:fig2}
\end{figure}
\subsection{Equations of Motion}
We start from the nonlinear EoM \eqref{eomsNonLinear}:
\begin{eqnarray}
&& \psi_n^{A}(t + 1) = e^{ i g |\alpha_n^A(\Psi(t))|^2} \alpha_n^A(\Psi(t)) \nonumber \\
&& \psi_n^{B}(t + 1) = e^{ i g |\alpha_n^B(\Psi(t))|^2} \alpha_n^B(\Psi(t)),
\end{eqnarray}
where $\alpha^{A,B}_n$ are linear functions of the local components of the wave function $\ket{\Psi(t)}$ according to Eqs.~\eqref{eomsLinear}. We consider a small deviation $\vec{\varepsilon}(t)$ from the reference trajectory $\vec{x}(t)$:
\begin{eqnarray}
\vec{\psi} = \vec{x} + \vec{\varepsilon}
\end{eqnarray}
Substituting into \eqref{eomsNonLinear}:
\begin{eqnarray}
&& \psi_n^{A}(t + 1) = e^{ i g |\alpha^A_n[\vec{x}(t)+\vec{\varepsilon}(t)]|^2} \alpha^A_n\left[\vec{x}(t)+\vec{\varepsilon}(t)\right] \nonumber \\
&& \psi_n^{B}(t + 1) = e^{ i g |\alpha^B_n[\vec{x}(t)+\vec{\varepsilon}(t)]|^2} \alpha^B_n\left[\vec{x}(t)+\vec{\varepsilon}(t)\right].
\label{nonlinEomInsertedSols}
\end{eqnarray}
Expanding the nonlinear term and keeping only terms of first order in $\vec{\varepsilon}$ results in
\begin{eqnarray}
&& |\alpha^p_n[\vec{x}(t)+\vec{\varepsilon}(t)]|^2 = |\alpha^p_n[\vec{x}(t)]+\alpha^p_n[\vec{\varepsilon}(t)]|^2 = \nonumber \\ &&\alpha^p_n(\vec{x}(t))[\alpha^p_n(\vec{x}(t))]^* + \alpha^p_n(\vec{\varepsilon}(t))[\alpha^p_n(\vec{\varepsilon}(t))]^* + \nonumber \\ &&\alpha^p_n(\vec{\varepsilon}(t))[\alpha^p_n(\vec{x}(t))]^* + \alpha^p_n(\vec{x}(t))[\alpha^p_n(\vec{\varepsilon}(t))]^* \approx \nonumber \\
&& |\alpha^p_n(\vec{x}(t))|^2 + \Delta^p_n(t),
\end{eqnarray}
where
\begin{eqnarray}
&&\Delta^p_n(t) = \alpha_n^p(\vec{x}(t))[\alpha_n^p(\vec{\varepsilon}(t))]^* + c.c. \nonumber \\
\end{eqnarray}
Thus we can rewrite the exponential term by expanding $e^{i g \Delta_n^{p}(t)}$ to first order:
\begin{eqnarray}
e^{ i g |\alpha_n^p[\vec{x}(t)+\vec{\varepsilon}(t)]|^2} \approx e^{ i g |\alpha_n^p(\vec{x}(t))|^2}\left[1 + i g \Delta_n^p(t)\right]
\end{eqnarray}
With \eqref{nonlinEomInsertedSols} and using the linearity of $\alpha_n^p$ we finally arrive at the
following linear equations:
\begin{eqnarray}
&&\varepsilon_n^p(t+1) = e^{ i g |\alpha_n^p(\vec{x}(t))|^2}\Big\lbrace \alpha_n^p[\vec{\varepsilon}(t)] + i g \Delta_n^p(t)\alpha_n^p[\vec{x}(t)]\Big\rbrace. \nonumber \\
\end{eqnarray}
\end{document}
\section{Introduction}
\label{intro}
Turbulence subjected to rotation is a commonly occurring phenomenon
in geophysical and astrophysical flows, such as those in the
oceans and atmospheres of planetary bodies.
The effects of rotation on decaying homogeneous
turbulence are typically studied by considering either an isotropic or an anisotropic state as the initial condition \cite{camb97,mans91}.
In direct numerical simulations,
different external forcing mechanisms have also been considered in order to
vary the degree of anisotropy and the typical correlation time
\cite{SM14,SM15} of the flow.
In most cases, it is customary to fix the orientation of the rotation axis.
The general consensus is that when rotation is strong enough,
the forward energy cascade from the large scales to the
small scales is inhibited and an inverse cascade develops resulting in
a quasi-two-dimensional behavior characterized by columnar structures
along the fixed rotation axis \cite{Pouquet2010,sen12}.
The dominance of the inverse energy cascade entails the presence of a
large scale energy sink in order to reach stationarity \cite{xia11}.
In this work, we
consider the effects of instantaneous changes to the orientation
of the rotation axis on homogeneous turbulence.
The interest in the orientation of the rotation axis
is engendered by the phenomenon of precession,
which is the rotational motion of the spin axis of
a rotating body \cite{kida11}.
{
Even a weakly precessing container is known to
sustain turbulence at a sufficiently large Reynolds number,
due to viscous forces on the container
walls \cite{goto07,Malkus}.
}
For instance, the turbulent convection of
liquid metals in the Earth's outer core is influenced by its slow
precession.
However details of the flow structure and
the energy transfer dynamics in turbulence subjected to precession
remain unclear
partly due to the fact that experiments as well as simulations
are difficult to conduct \cite{goto14}.
As a recourse, we compare the spectral transfer properties of a system
perturbed by changing the orientation
of the axis at a given time instant
and consequently allowed to relax,
with another system
which is perturbed at regular time intervals by repeatedly
changing the orientation
of the axis. We show that the latter has different
large scale properties and transfer dynamics
as compared with the former. The remainder
of this work is organized as follows. In Sec.~\ref{sec2} we briefly
review the equations involved, simplifying assumptions made
and details about the simulations
performed. Results are given in Sec.~\ref{sec3}, with three
subsections which focus respectively on (\ref{sec3a}) the evolution
of the kinetic energy and dissipation rate, (\ref{sec3b})
energy spectra and flux and (\ref{sec3c}) large scale structure evolution.
In Sec.~\ref{sec4} we summarize our results and discuss briefly the
possible implications of this work.
\section{Numerical method}
\label{sec2}
The fluctuating velocity
$ \mathbf{u} ( \mathbf{x} ,t)$ for a constant density flow in the co-ordinate system
rotating with angular
velocity $\mathbf{\Omega} \equiv \mathbf{\Omega} (t)$ is given by \cite{gspan}
\begin{eqnarray}
\nonumber
\mkern-18mu \!\!\!
\frac{\partial \mathbf{u} }{\partial t} + \mathbf{u} \cdot \mathbf{\nabla} \mathbf{u} + 2\mathbf{\Omega}\times \mathbf{u} +
\frac{d (\mathbf{\Omega} \times \mathbf{r} )}{dt} & = & \\
\label{ns.eq}
- \mathbf{\nabla} p + \mathbf{f} + \nu \mathbf{\Delta} \mathbf{u} - \gamma \mathbf{\Delta} ^{-1} \mathbf{u} \;,
& &
\end{eqnarray}
where $\mathbf{f}$ is the
forcing stirring the fluid,
$\nu$ is the constant viscosity, $\gamma$ is the large scale
damping constant needed to remove energy at large scale and $ \mathbf{\Delta} $ denotes the Laplacian operator.
Here $|\mathbf{r}|=r$ is the distance of a fluid particle from the
rotation axis, which itself precesses around a fixed axis.
In Eq.~\ref{ns.eq} the pressure $p$ accounts for the
centrifugal acceleration $\mathbf{\Omega} \times
(\mathbf{\Omega} \times \mathbf{r} )$ in the usual manner.
In Eq.~\ref{ns.eq}, the precession term
$d(\mathbf{\Omega} \times \mathbf{r} )/dt \sim \Omega L/\tau_p$, where $\tau_p$
is the precession time scale.
If $\tau_p$ is
large enough,
the precession term is negligible compared to the (time dependent)
Coriolis term
since
$\Omega L/\tau_p \ll \Omega L/T_E$
where $L$ and $T_E$ denote the integral scale and the
large eddy timescale of the flow.
Neglecting the precession term assuming
$\tau_p \gg T_E$ is a reasonable approximation in many
geodynamo applications which are characterized by large precession
time periods
\cite{nore,triana}. Another relevant scenario is that of a sudden change
in the orientation of the
rotation axis at a given time instant, say at $t=0$,
acting as an instantaneous perturbation.
In this case,
one might expect the non-homogeneous precession term to become less and less important for the late-time dynamics, i.e. for $t \gg 0$.
Accordingly, we solve the following equation numerically under the assumption that
the precession term $d(\mathbf{\Omega} \times \mathbf{r} )/dt$ can be neglected:
\begin{equation}
\label{nsolve.eq}
\frac{\partial \mathbf{u} }{\partial t} + \mathbf{u} \cdot \mathbf{\nabla} \mathbf{u} + 2\mathbf{\Omega} \times \mathbf{u}
=
- \mathbf{\nabla} p +\mathbf{f} + \nu \mathbf{\Delta} \mathbf{u} - \gamma \mathbf{\Delta} ^{-1} \mathbf{u} \;.
\end{equation}
{
Equation \ref{nsolve.eq} is valid for weakly precessing flows which
are characterized by a large precession time scale $\tau_p$.
Alternatively, for sub-volumes of the flow close to the rotation axis,
such that
$|d(\mathbf{\Omega} \times \mathbf{r} )/dt| \sim \Omega r/\tau_p \to 0$
as $r \to 0$,
the precession
term can be neglected in comparison to the Coriolis term.}
The aim of this paper is to understand the evolution of the
rotating flow under such sudden changes to the orientation of the rotation axis.
It also allows us
to assess the robustness and universality of the large scale structures
in strongly rotating turbulence.
We solve Eq.~\ref{nsolve.eq} using a
Fourier pseudo-spectral method for spatial
discretization and a second order
Adams-Bashforth scheme for the time integration.
The domain is a
periodic cube with edge length
$L_0 = 2\pi$ with $N$ grid points to a side. The smallest
wave number in the domain is $k_0 = 1$.
The stochastic forcing applied to a shell
around the forcing wave number $k_f/k_0 = 4$ is based on a
second-order Ornstein-Uhlenbeck process \cite{saw91}.
The hypo-viscous mechanism used to damp the large scales
is applied to wave numbers $k \in [0.5,2.5]$, with a
large scale damping constant $\gamma = 0.1$ (refer Eqs.~\ref{ns.eq} and
\ref{nsolve.eq}).
Simulations wherein energy was depleted from different large scale ranges
were also performed to test whether the large scale properties
depend systematically
on the particular choice of wave numbers at which the energy is removed;
this choice is discussed further in Sec.~\ref{sec3c}.
Aliasing errors from the nonlinear
term are effectively controlled by removing all coefficients
with wave number magnitude greater than $k_{max}/k_0 = N/3$.
\section{Results and Analysis}
\label{sec3}
\begin{figure}
\center
\resizebox{0.5\textwidth}{!}{
\includegraphics{versar_rev2.pdf}
}
\begin{picture}(1,1)
\put(106,62){$\scriptstyle{t < 0}$}
\put(-22,62){$\scriptstyle{t< 0}$}
\put(-50,73){$\scriptstyle{\Delta\theta}$}
\put(85,73){$\scriptstyle{\Delta\theta}$}
\put(-12,70){$\scriptstyle{X}$}
\put(120,70){$\scriptstyle{X}$}
\put(104,110){$\scriptstyle{t=0}$}
\put(-36,110){$\scriptstyle{t > 0}$}
\put(70,130){$\scriptstyle{t_1}$}
\put(26,105){$\scriptstyle{t_2}$}
\put(4,70){$\scriptstyle{t_3}$}
\put(30,28){$\scriptstyle{t_4}$}
\put(70,0){$\scriptstyle{t_5}$}
\put(104,30){$\scriptstyle{t_6}$}
\put(75,120){$\scriptstyle{Y}$}
\put(-75,120){$\scriptstyle{Y}$}
\end{picture}
\caption{Schematic of the orientation of the rotation vector
$\mathbf{\Omega}$ in the $X$-$Y$ plane for simulations
(left) R1 and (right) R2, $\Delta \theta = \pi/4$. In run R1
$\mathbf{\Omega}$ is rotated only at $t=0$ while in run R2 it is rotated
at $t_i/\tau_{\Omega} = 10 i$, $i=0,1,2,\ldots$
}
\label{omgv.fig}
\end{figure}
We present results from two direct numerical simulations,
at grid resolution $1024^3$ and constant rotation magnitude $\Omega = 10$.
The rotation vector $\mathbf{\Omega}$ lies in the $X$-$Y$ plane and is defined
by the angle $\theta$ it makes with the $X$-axis (Fig.~\ref{omgv.fig}).
A rotating, stationary
flow at $1024^3$, $\Omega=10$, $\theta=0$
is used as the initial condition ($t = 0$)
for both the simulations.
Statistical stationarity for $t \le 0$ is achieved using the
same large scale friction mechanism that is used in
simulations R1 and R2.
\begin{table}
\centering
\caption{Initial and final values of Rossby numbers and related parameters
in the $1024^3$, $\Omega=10$ simulations.
The forcing wave number $k_f/k_0 = 4$. Large scale
dissipation is applied to wave number
$k \in [0.5,2.5]$.
Notes:
(1) $T_{E}^{0} \equiv K(0)/\epsilon(0)$ is the turbulence
time scale at $t=0$.
(2) $\tau_{\eta} = (\nu/\epsilon)^{1/2}$ is the
Kolmogorov time scale and $\eta = (\nu^3/\epsilon)^{1/4}$ is
the Kolmogorov length scale.
}
\label{dns.tab}
\begin{tabular}{c|c|c|c}
\hline\noalign{\smallskip}
& $t=0$ & R1 & R2 \\
\noalign{\smallskip}\hline\noalign{\smallskip}\\
$t/T_{E}^{0}$ & $0.0$ & $11.75$ & $6.95$\\
$ {Ro}_T = \epsilon/(2K\Omega)$ & $0.0063$ & $0.0050$ & $0.0218$\\
$ Ro^{\omega} = 1/(2\tau_{\eta}\Omega)$ & $1.4231$ & $1.3151$ & $1.6863$\\
$k_{\Omega} \eta$ & $0.2083$ & $0.2344$ & $0.1615$\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Simulation
R1 is performed using $\theta=\pi/4$, while R2 is performed by
instantaneously incrementing
$\theta$ by $\Delta \theta = \pi/4$ every $10 \tau_{\Omega}$, where
$\tau_{\Omega} \equiv 1/\Omega$ is the
rotation time scale. A schematic of the orientation of
$\mathbf{\Omega}$ in the two runs is depicted in Fig.~\ref{omgv.fig}.
A conventional
measure of the strength of the rotation is given by comparing the rotation
time scale
to the turbulence time scale ($K/\epsilon$),
giving the turbulent Rossby number $Ro_T \equiv \epsilon/(2K\Omega)$. Here
$K$ is the kinetic energy and $\epsilon$ the mean dissipation rate,
defined in Eqs.~\ref{tke.eq} and \ref{diss.eq} respectively.
The definitions of the Rossby numbers including
initial and final values of these quantities
are summarized in Tab.~\ref{dns.tab}.
The initial large eddy time scale is defined as $T_{E}^{0} \equiv K/\epsilon$ at
$t=0$.
The Rossby numbers for run R2 at the end of the simulation
have increased compared to their initial values,
indicating that the effect of rotation has likely weakened.
{In contrast, for run R1 the turbulent Rossby number $Ro_T$
and the micro-Rossby number $Ro^{\omega}$ have decreased
indicating that effects of rotation are still significant.}
A characteristic wave number of rotation ($k_\Omega$)
which delimits the region of the spectrum where rotation
effects are important ($k < k_\Omega$) is given by
$k_\Omega = ({\Omega}^3/\epsilon)^{1/2}$ \cite{zee94}.
The non-dimensional rotation wave number
$k_{\Omega} \eta \sim 1$ in the simulations (Tab.~\ref{dns.tab}),
indicating that the rotation effects extend to the viscous
scales in the flow.
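The tabulated dimensionless numbers are mutually consistent: since $\tau_{\eta} = (\nu/\epsilon)^{1/2}$, one has $k_\Omega \eta = (\Omega \tau_\eta)^{3/2} = (2\,Ro^{\omega})^{-3/2}$ (our own algebra, not stated in the text), which the values in Tab.~\ref{dns.tab} satisfy:

```python
# Consistency check of Table 1: k_Omega*eta = (Omega^3/eps)^(1/2) * (nu^3/eps)^(1/4)
# reduces to (Omega*tau_eta)^(3/2), and Ro_omega = 1/(2*tau_eta*Omega),
# hence k_Omega*eta = (2*Ro_omega)**(-1.5) independently of nu and eps.
ro_w  = {"t0": 1.4231, "R1": 1.3151, "R2": 1.6863}   # micro-Rossby numbers
k_eta = {"t0": 0.2083, "R1": 0.2344, "R2": 0.1615}   # tabulated k_Omega * eta
predicted = {run: (2*ro)**(-1.5) for run, ro in ro_w.items()}
```

All three rows agree to the number of digits quoted in the table.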
\subsection{Energy and dissipation}
\label{sec3a}
\begin{figure*}
\resizebox{1.0\textwidth}{!}{
\includegraphics{ee_withhist_2axis.pdf}
\includegraphics{diss_withhist_2axis.pdf}
}
\begin{picture}(1,1)
\put(120,2){$t/T_{E}^{0}$}
\put(120,187){${t/\tomg}$}
\put(380,2){$t/T_{E}^{0}$}
\put(380,187){${t/\tomg}$}
\put(3,70){\rotatebox{90}{$K(t)/K(0)$}}
\put(255,70){\rotatebox{90}{$\epsilon(t)/\epsilon(0)$}}
\end{picture}
\caption{Evolution of kinetic energy (left) and
dissipation (right) in normalized time,
compared with respective values at $t=0$.
Solid curves corresponds to run R1, curves with symbols ($\bigtriangleup$)
correspond to run R2. Dashed curves show
histories of $K(t)$ and $\epsilon(t)$ prior
to start ($t < 0$) of the perturbation.
Time axis is normalized by
large eddy time scale at $t=0$ ($T_{E}^{0}$) in bottom horizontal
axis and by rotation time scale ($\tau_{\Omega}$) in top horizontal axis.
}
\label{eediss.fig}
\end{figure*}
The mean turbulent kinetic energy and dissipation provide important global
measures of the state of the turbulence under rotation. In homogeneous
turbulence, the mean turbulent kinetic energy is
defined as
\begin{equation}
\label{tke.eq}
K = \frac{1}{2} \Big{\langle} \sum_{i=1}^3 u_i^2 \Big{\rangle} \;,
\end{equation}
and the mean dissipation rate is given by
\begin{equation}
\label{diss.eq}
\epsilon=\nu \Big{\langle} \sum_{i,j=1}^3 \Big(\frac{\partial u_i}{\partial x_j}\Big)^2\Big{\rangle}\;,
\end{equation}
where,
$\langle \cdot \rangle$ denote space averages.
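As a minimal one-dimensional analogue (our own illustration, not the paper's solver), the velocity-gradient average entering the dissipation can be evaluated with the same Fourier pseudo-spectral differentiation the method relies on; for $u = \sin x$ on $[0, 2\pi)$ one recovers $\langle (u')^2 \rangle = 1/2$ to machine precision:

```python
import numpy as np

N = 64
x = 2*np.pi*np.arange(N)/N
u = np.sin(x)                                  # test field with known gradient
k = np.fft.fftfreq(N, d=1.0/N)                 # integer wave numbers
dudx = np.real(np.fft.ifft(1j*k*np.fft.fft(u)))  # spectral derivative, = cos(x)
grad_sq_mean = np.mean(dudx**2)                # 1D analogue of <(du/dx)^2>
```

Multiplying by the viscosity then gives the (1D analogue of the) mean dissipation rate.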
In Fig.~\ref{eediss.fig}
we show the evolution of these quantities normalized by their
initial values.
Despite the
considerable statistical variability,
it is clear that the kinetic
energy initially decreases for simulation R1,
but subsequently
increases with time. On the other hand, the energy
for simulation R2 drops in the early stages and then remains nearly
constant at a value well below that of R1.
The steep drop in the energy
at early times is accompanied by a sharp increase in
mean dissipation. The dissipation rates then drop
to nearly constant values with that of simulation R2 stabilizing
at a value larger than that of R1.
Furthermore, the time histories of energy and dissipation prior
to the start of the perturbation
($t < 0$ in Fig.~\ref{eediss.fig})
indicate that the initial changes in energy and dissipation caused by the
change in orientation of $\mathbf{\Omega}$ are
significant. The previous observations indicate that a single perturbation in the
form of an
instantaneous change to the orientation of the rotation axis
results in a slow recovery of the
``universal''
inverse energy transfer mechanism. In contrast, if the system is
subjected to such perturbations repeatedly, the
inverse energy transfer is eventually destroyed as the dynamical
reconstruction of the large scale structures is too slow to survive.
Indeed, the dissipation of run R2 attains
a quasi-stationary state which is higher than its initial value,
while its energy stabilizes at a lower value.
This indicates a net energy transfer
from large scales to the small scales in simulation R2.
\begin{figure*}
\resizebox{1.0\textwidth}{!}{
\includegraphics{velvar_exp3_2axis.pdf}
\includegraphics{velvar_exp1_2axis.pdf}
}
\begin{picture}(1,1)
\put(120,2){$t/T_{E}^{0}$}
\put(120,187){$({t/\tomg}){10}^{-2}$}
\put(380,2){$t/T_{E}^{0}$}
\put(380,187){$({t/\tomg}){10}^{-2}$}
\put(3,70){\rotatebox{90}{$\langle u_\alpha^2 \rangle/2$}}
\end{picture}
\protect\caption{(Color online) Evolution of the three components of the turbulent kinetic energy
$\langle u_\alpha^2 \rangle, \; \alpha=1,2,3$ as a function of normalized time
for run
(left) R1 and (right) R2. Open symbols ($\Circle$), ($\bigtriangleup$) and
($\square$)
correspond to $\langle u_1^2 \rangle$, $\langle u_2^2 \rangle$ and $\langle u_3^2 \rangle$
respectively in the simulations.
Closed symbols ($\CIRCLE$), ($\blacklozenge$) and ($\blacksquare$) correspond
to time histories ($t < 0$) of $\langle u_1^2 \rangle$, $\langle u_2^2 \rangle$ and $\langle u_3^2 \rangle$
respectively prior to the start of the perturbation.
Time axis is normalized by
large eddy time scale at $t=0$ ($T_{E}^{0}$) in bottom horizontal
axis and by rotation time scale ($\tau_{\Omega}$) in top horizontal axis.
}
\label{vvar.fig}
\end{figure*}
The evolution of the Cartesian components of the turbulent kinetic
energy are indicative of large scale anisotropy in the flow.
Figure \ref{vvar.fig} compares the variance of the
velocity components $\langle u_\alpha^2 \rangle$
in the two simulations. Prior to the start of the perturbations,
the velocity fluctuations perpendicular to the rotation axis
($\theta = 0$ for $t<0$ in Fig.~\ref{omgv.fig}) are dominant due to
an inverse energy cascade in the $Y$-$Z$ plane. The
instantaneous rotation of $\mathbf{\Omega}$ at $t=0$
disrupts the spectral transfer to the largest scales in the $Y$-$Z$
plane. With time, in run R1 an inverse cascade in the plane normal to the
new rotation axis is established resulting in increasing
energy in the $Z$-direction. The variance of velocity components in
the $X$ and $Y$ directions evolve similarly at a lower value than
the $Z$ component.
In run R2, by contrast, the disruption of the inverse cascade at
$t=0$ is sustained by the regular change in the orientation of the
rotation axis. As a result, the energy components reach a stationary isotropic
state.
\vspace{-1.0em}
\subsection{Energy spectra and flux }
\label{sec3b}
\begin{figure*}
\resizebox{1.\textwidth}{!}{
\includegraphics{inset/comega_1.75/gmetaexp3.pdf}
\includegraphics{inset/comega_1.75/gmetaexp1.pdf}
}
\begin{picture}(1,1)
\put(120,2){$k/k_\Omega$}
\put(145,45){$\scriptstyle{k/k_\Omega}$}
\put(200,115){$\scriptstyle{k^{1/3}}$}
\put(68,47){$k_f/k_\Omega$}
\put(380,2){$k/k_\Omega$}
\put(410,42){$\scriptstyle{k/k_\Omega}$}
\put(455,112){$\scriptstyle{k^{1/3}}$}
\put(325,47){$k_f/k_\Omega$}
\put(89,25){\vector(0,1){20}}
\put(344,25){\vector(0,1){20}}
\put(-12,70){\rotatebox{90}{$E(k,t)(\epsilon(t)\Omega)^{-1/2}k^2$}}
\put(85,70){\rotatebox{90}{$\scriptstyle{E(k)(\epsilon\Omega)^{-1/2}k^2}$}}
\put(340,70){\rotatebox{90}{$\scriptstyle{E(k)(\epsilon\Omega)^{-1/2}k^2}$}}
\end{picture}
\protect\caption{Energy spectrum normalized by rotation scaling at
different times for (left) simulation R1 and (right) simulation R2.
Symbols $A$-$G$ correspond to $t/T_{E}^{0} = 1/4,1/2,1,2,3,4,5$
respectively.
Curve with (long) dashes corresponds to $t=0$.
Dashed horizontal lines at $C_\Omega = 1.75$ for reference.
Forcing wave number magnitude $k_f/k_\Omega \approx 0.1 $ for
simulations R1 and R2 at all times shown. {Inset shows blow-up of the
intermediate scale range $k_f \le k \le k_\Omega$ at late-time.
Dashed horizontal line at $C_\Omega = 1.75$ corresponds to
$k^{-2}$ scaling, while long-dashed
line corresponds to Kolmogorov $-5/3$ spectrum \cite{krs95}.}
}
\label{ek.fig}
\end{figure*}
In the rotation-modified inertial range, theoretical arguments
\cite{zhou95} suggest
that for $k_f \ll k \ll k_\Omega$ the
energy spectrum is of the form
\begin{equation}
\label{spec.eq}
E(k) = C_\Omega (\epsilon \Omega)^{1/2} k^{-2}\;,
\end{equation}
where the constant $C_\Omega = 1.22-1.87$ \cite{zhou95}.
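The normalization used in Fig.~\ref{ek.fig} follows directly from this form: multiplying a spectrum obeying Eq.~\eqref{spec.eq} by $(\epsilon\Omega)^{-1/2}k^{2}$ yields a plateau at $C_\Omega$ (the parameter values below are illustrative only, not those of the simulations):

```python
import numpy as np

eps, Omega, C = 0.1, 10.0, 1.75                 # illustrative values only
k = np.linspace(2.0, 40.0, 50)                  # nominal inertial-range wave numbers
E = C * np.sqrt(eps*Omega) * k**-2.0            # model spectrum, Eq. (spec.eq)
compensated = E * k**2 / np.sqrt(eps*Omega)     # normalization used in Fig. 4
```

A flat compensated spectrum at height $C_\Omega$ therefore signals $k^{-2}$ scaling, which is how the plateau in run R1 is identified.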
In Fig.~\ref{ek.fig} we show the development of the compensated
energy spectrum $E(k)(\epsilon \Omega)^{-1/2} k^{2}$ at
different times for both simulations R1 and R2. The energy at low
wave numbers ($k < k_f$) initially decreases ($t/T_{E}^{0} < 1$)
and then increases for run R1
whereas the energy at the largest scales in run R2 monotonically
decreases with time. These results are
consistent with energy evolution trends of the two
simulations shown in Fig.~\ref{eediss.fig}.
{At the intermediate
wave numbers $k_f \ll k \ll k_\Omega$,
run R1 exhibits a $k^{-2}$ behavior with an inertial range
plateau of $C_\Omega = 1.75$.}
{
In contrast, run R2 shows a greater tendency towards
a transition to the classical $k^{-5/3}$ scaling in the inertial range
which is
typical of flows without strong rotation \cite{min12}.
On the other hand, the energy at the high wave numbers ($k > k_\Omega$)
is greater in run R2 than in run R1, indicating a
stronger forward cascade in the former.}
The results
confirm that the
transfer dynamics in the case with ``precession-like'' perturbations
are inherently different than the case with a fixed rotation axis.
\begin{figure*}
\resizebox{1.0\textwidth}{!}{
\includegraphics{flxgmetaexp3.pdf}
\includegraphics{flxgmetaexp1.pdf}
}
\begin{picture}(1,1)
\put(120,165){$t=0$}
\put(124,163){\vector(1,-1){20}}
\put(120,2){$k/k_f$}
\put(380,165){$t=0$}
\put(384,163){\vector(1,-1){20}}
\put(380,2){$k/k_f$}
\put(-10,80){\rotatebox{90}{$\Pi(k,t)/K(t)$}}
\end{picture}
\caption{Spectral flux (Eq.~\ref{flx.eq}) normalized
by kinetic energy at
different times for (left) run R1 and (right) run R2.
Different curves correspond to different instants of time
(refer Fig.~\ref{ek.fig}).
Dashed horizontal line at $0$ for reference.
The maximum (negative) amplitude of flux increases with time (curves $A$-$G$) for
run R2. Positive spectral flux corresponds to an inverse energy cascade while negative
flux indicates a forward cascade (see Eq.~\ref{flx.eq}).
}
\label{flux.fig}
\end{figure*}
The direction of energy transfer can be conveniently studied by
examining the contribution of the nonlinear terms
in Eq.~\ref{nsolve.eq}
to the rate of change of energy in $k$-space.
Following \cite{MY.II} we define the spectral
flux as
{
\begin{equation}
\label{flx.eq}
\Pi(k) = \!\! \int\displaylimits_0^k \!{\textrm{Im}\Big{\langle}{
k^{'}_m P_{ij}(\mathbf{k^{'}})\hat{u}_i^{*}(\mathbf{k^{'}})
\mkern-18mu \!\!\!
\int\displaylimits_{\mathbf{k^{'}}=\mathbf{p}+\mathbf{q}}{
\mkern-18mu
\hat{u}_j(\mathbf{p})
\hat{u}_m (\mathbf{q}) }}\Big{\rangle} d \mathbf{p} \; dk^{'} }.
\end{equation}
}
Here $\textrm{Im}(\cdot)$ denotes imaginary part of $(\cdot)$,
overcarets represent Fourier coefficients, $(\cdot)^{*}$ is the
complex conjugate of $(\cdot)$ and the tensor
$P_{ij}(\mathbf{k})=k_i k_j/k^2 - \delta_{ij}$ represents projections
onto the plane perpendicular to
$\mathbf{k}$ in wave number space ($\delta_{ij}$ is
the Kronecker delta tensor).
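To make the bookkeeping in Eq.~\ref{flx.eq} concrete, the following is a minimal one-dimensional sketch in which a Burgers-type advection term $u\,\partial_x u$ stands in for the full triadic integral of the rotating Navier--Stokes equations; the grid size and the toy velocity field are arbitrary choices of ours, not parameters of the simulations.

```python
import numpy as np

# One-dimensional sketch of a spectral energy flux: the nonlinear transfer
# T(k') is accumulated from k' = 0 up to k, mimicking Eq. (flx.eq). The
# Burgers-type term u du/dx stands in for the full triadic integral; the
# toy field and the grid size are arbitrary.
N = 256
x = 2*np.pi*np.arange(N)/N
u = np.sin(x) + 0.5*np.sin(3*x + 0.4)

k = np.fft.rfftfreq(N, d=1.0/N)              # integer wavenumbers 0..N/2
u_hat = np.fft.rfft(u)/N
dudx = np.fft.irfft(1j*k*np.fft.rfft(u), n=N)
nl_hat = np.fft.rfft(u*dudx)/N               # advection term in k-space

T = -2.0*np.real(np.conj(u_hat)*nl_hat)      # transfer; 2 counts -k modes
T[0] *= 0.5                                  # k = 0 has no conjugate pair
Pi = np.cumsum(T)                            # flux through wavenumber k
print(abs(Pi[-1]))                           # total transfer: round-off only
```

The cumulative sum plays the role of the $k'$-integral up to $k$; because the advective term only redistributes energy among scales, the flux through the largest resolved wavenumber vanishes to round-off.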
Figure \ref{flux.fig} shows the flux normalized by kinetic
energy $K(t)$ at various times for both
simulations.
The positive plateau at low wave numbers ($k < k_f$)
accompanied by a decrease in the
flux magnitude at higher
wave numbers ($k > k_f$)
at later times is a clear indication of large-scale
energy transfer in simulation R1.
In contrast, the low wave number flux is almost zero for simulation R2
at later times,
indicating that the inverse cascade is suppressed by the repeated
change in the rotation axis orientation.
Furthermore, the flux magnitude
at higher wave numbers ($k > k_f$) increases with
time in run R2 indicating that the
perpetual change in the orientation of the rotation axis enhances the
forward energy cascade.
\begin{figure*}
\resizebox{1.0\textwidth}{!}{
\includegraphics{vizcropped_new/T1.pdf}
\includegraphics{vizcropped_new/exp3/T10.pdf}
\includegraphics{vizcropped_new/exp3/T92.pdf}
}
\caption{
(Color online) Iso-contours of magnitude of velocity fluctuations
($\sqrt{K(t)}$) in run R1 for (left-right) $t/T_{E}^{0}=0,1,11.5$.
}
\label{exp3.fig}
\end{figure*}
\begin{figure*}
\resizebox{1.0\textwidth}{!}{
\includegraphics{vizcropped_new/T1.pdf}
\includegraphics{vizcropped_new/exp1/T11.pdf}
\includegraphics{vizcropped_new/exp1/T55.pdf}
}
\caption{
(Color online) Iso-contours of magnitude of velocity fluctuations
($\sqrt{K(t)}$) in run R2 for (left-right) $t/T_{E}^{0}=0,1,6.95$.
Note that the iso-contours at $t=0$ for R2 are the same as that
for R1 at $t=0$ (left panel in Fig.~\ref{exp3.fig}) as both runs
start from same initial conditions.
}
\label{exp1.fig}
\end{figure*}
\subsection{Large scale structure}
\label{sec3c}
Independent of the realizability of the numerical experiments shown here,
an important consideration in turbulence subjected to rotation is
the robustness and universality of the large scale structures under sudden perturbations
of the large scale set-up \cite{SM14}.
Figures \ref{exp3.fig} and \ref{exp1.fig} show the iso-contours of the
velocity magnitude at three different time instants in simulations
R1 and R2 respectively. For $t \le 0$, the inverse cascade in the
plane normal to the rotation axis manifests itself as columnar structures
along the axis of rotation \cite{map09}.
Shortly after the change in the orientation of the rotation axis
at $t=0$, the inverse cascade is suppressed for both
runs R1 and R2, but their subsequent evolution differs.
In run R1 after a transient, the energy flux becomes positive again at the
large scales and diminishes in magnitude at the small scales (Fig.~\ref{flux.fig}).
On the other hand, in run R2 the inverse cascade dynamics has insufficient time
to recover and the direct cascade is stronger.
This can be illustrated by the time evolution of the maximum (negative)
amplitude of the flux, which is non-monotonic only for run R1 (Fig.~\ref{flux.fig}).
The large scale structure evolution can be associated with the life-time
of the individual eddies versus
the time required for the build-up of the forward cascade. At the
low Rossby numbers considered,
the time between switching the rotation axis is comparable with the
rotation time scale. In such a scenario,
the inverse cascade does not have the time to rebuild as evidenced by the
lack of large scale structures for $t > 0$ in Fig.~\ref{exp1.fig}.
Thus, the columnar structures characteristic of turbulence subjected
to rotation in a fixed direction
(Fig.~\ref{exp3.fig})
are absent when the orientation of the rotation axis
is changed fast enough (Fig.~\ref{exp1.fig}).
The velocity field structure in the case where the
rotation axes is regularly changed
resembles that of an isotropic field, a visual proof that the
inverse energy cascade is not sustained.
\begin{figure}
\center
\resizebox{0.5\textwidth}{!}{
\includegraphics{twoshellcomp.pdf}
}
\begin{picture}(1,1)
\put(-0,4){$t/T_{E}^{0}$}
\put(-20,95){$\sst/T_{E}^{0}$}
\put(-120,80){\rotatebox{90}{$K(t)/K(0)$}}
\put(-70,120){\rotatebox{90}{$\scriptstyle{L_{22,1}/(\frac{1}{2}L_0)}$}}
\end{picture}
\caption{
Evolution of energy for simulation R1 for two different cases
of large-scale damping. Solid curve corresponds to energy removal
for wave number $k \in [0.5,2.5]$, while curve with symbols
($\times$) corresponds to $k \in [0.5,1.5]$. Inset shows
late-time evolution of transverse integral length scale
$L_{22,1}$ normalized by half the length
of the domain ($L_0$).
}
\label{damp.fig}
\end{figure}
Another question we attempt to address is the robustness of
the large scale structures as a function of the
energy sink mechanism applied at large scales.
In simulations R1 and R2 a hypo-viscous mechanism
is used to prevent
energy condensation that can occur because of upscale energy transfer
owing to
finite domain considerations in rotating flows \cite{xia11,SY93}.
It is reasonable to
expect that the large scale statistics are influenced
by the details of the friction mechanism that is used at the low
wave numbers.
In order to verify this,
we changed the
energy removed from the largest scales. For instance, in
run R1 we removed energy from different
wave number shells at the large scales,
thus depleting the system differently.
Figure \ref{damp.fig} shows two such scenarios where
the energy
is removed from shells $0.5 \le k \le 2.5$ and
$0.5 \le k \le 1.5$, respectively. Removing energy from a thinner shell
results in a steeper increase in energy initially, but ultimately the
energy decreases due to the action of large-scale viscosity
\cite{cher07}.
Eventually the energies for the two large-scale friction mechanisms
become approximately equal and evolve similarly with time.
We have also verified that the large scale structures for these two
cases (not shown) are qualitatively the same.
The inset of Fig.~\ref{damp.fig} reports the late time ($t > 7.5$)
evolution
of the transverse integral length
scale $L_{22,1}$ defined in terms of the two-point
correlation as
(with no sum over Greek subscripts)
\begin{equation}
\label{lint.eq}
L_{\alpha\alpha,\beta}=
\frac{1}{\langle u_\alpha^2 \rangle}
\int_0^\infty \Big{\langle} u_\alpha( \mathbf{x} )
u_\alpha ( \mathbf{x} +r \mathbf{e}_\beta) \Big{\rangle} \, dr \;,
\end{equation}
along the direction of the unit vector
$\mathbf{e}_\beta$. The integral scales for the case where
energy is removed from the thicker shell are smaller than for the case
where energy is removed from the thinner shell and are thus contaminated
by the periodic boundary conditions to a lesser extent.
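A minimal sketch of how Eq.~\ref{lint.eq} is evaluated on a periodic field follows; truncating the integral at the first zero crossing of the correlation is a common convention in a finite box and is an assumption of ours, not a detail stated above.

```python
import numpy as np

# Sketch of the transverse integral scale of Eq. (lint.eq): correlate a
# field component with itself along one direction, normalize by the
# variance, and integrate in r up to the first zero crossing of the
# correlation (assumed to exist).
def integral_scale(u, dx):
    u = u - u.mean()
    R = np.fft.irfft(np.abs(np.fft.rfft(u))**2, n=u.size)
    R /= R[0]                            # normalized autocorrelation
    zero = np.argmax(R < 0.0)            # index of first zero crossing
    Rp = R[:zero]
    return dx*(Rp.sum() - 0.5*(Rp[0] + Rp[-1]))   # trapezoidal rule

# single-mode check: u = cos(k0 x) gives R(r) = cos(k0 r), whose integral
# up to the first zero r = pi/(2 k0) is 1/k0
n, L = 1024, 2*np.pi
x = L*np.arange(n)/n
print(integral_scale(np.cos(4*x), L/n))  # ~ 0.25
```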
\section{Conclusions}
\label{sec4}
In this study we have used direct numerical simulations
to study the response of rotating turbulence to ``precession-like''
perturbation.
A major emphasis has been to examine the large scale structure
and energy transfer characteristics
when the orientation of the rotation axis is repeatedly
changed.
In the case of uniform solid-body rotation
with a fixed rotation direction, the
spectral transfer and hence dissipation
is greatly reduced by rotation.
If the orientation of the rotation axis is changed with a
time scale comparable with the rotation time scale,
the down scale energy transfer and hence dissipation is shown
to increase. After a transient period the kinetic energy reaches
a quasi-stationary state as the energy input by forcing is
balanced by the dissipation at the small scales. The large scales
are devoid of columnar structures typically seen in
rotating flows and resemble that of an isotropic state.
A quantitative assessment of the
degree of isotropization due to the change in the
orientation of the rotation axis
will require a systematic projection onto the eigenfunctions of the
group of rotation and will be reported
elsewhere \cite{LP05}.
This work is a first step in studying the influence of precession-like perturbation
on rotating turbulence. We have neglected the time dependent
precession term $d(\mathbf{\Omega} \times \mathbf{r} )/dt$ under the assumption
that the instantaneous perturbation induced by the sudden change in the
rotation axis does not affect the long-time dynamical evolution and/or
the evolution of the fluid region close enough to the rotation axis.
{Using a penalization technique the non-homogeneous precession term
$d(\mathbf{\Omega} \times \mathbf{r} )/dt$ can be taken into account exactly \cite{kai05}.}
Another potential source of spurious effects is due to periodic boundary
conditions, which force the large-scale columns to wrap around the lattice,
something that would not be possible in presence of a solid boundary \cite{fabien}.
The robustness of these approximations will be quantified in a study presented elsewhere.
\section{Acknowledgments}
We acknowledge P. Mininni for useful discussions.
Annick Pouquet is thankful to LASP for its hospitality.
This work was funded by the European Research Council under the
European Community’s Seventh Framework Program,
ERC Grant Agreement No.~$339032$.
We acknowledge the CINECA initiatives INF14\_fldturb and
IscrC\_RotEuler for the availability of
high performance computing resources and support.
\noindent
All authors contributed equally to the paper.
\bibliographystyle{epj}
\section{Introduction}
Most theoretical work on the switching of STT-MRAM\cite{MRAM} has used the single-macrospin model\cite{chap}\cite{ButlerEtal}. This is adequate for very small elements in which the exchange interaction keeps the local magnetizations parallel, but when the volume $V$ is small, the stability parameter (energy barrier/$k_B T$, or $KV/k_B T$, where $K$ is the anisotropy energy density) is small. For elements large enough to be stable, incoherent switching is possible.
One reason why it has been difficult to understand incoherent switching is that multi-macrospin switching simulations lead to a bewildering variety of motions -- precession can nucleate locally (perhaps in more than one place at the same time), precession or reversed domains can grow and shrink in an apparently random way, especially if the element is overdriven (\textit{i. e.}, the applied spin torque is much higher than the critical spin torque for onset of precession). We have tried to simplify the problem by starting with the infinitesimal normal modes of oscillation about an initial uniform state, and continuing them to finite amplitude (Sec. \ref{section:norm}).
\section{Model}
We assume a cylindrical STT-MRAM element of thickness $t$ and radius $R$, with perpendicular anisotropy, stacked next to a pinned polarizing layer such that there is a spin torque proportional to the current in the LLG (Landau-Lifshitz-Gilbert) equation:
\begin{equation}\label{LLG}
\frac{d\mathbf{M}}{dt} = -\gamma \mathbf{M} \times \mathbf{H} - \frac{\gamma \alpha}{M_s} \mathbf{M} \times \mathbf{M} \times \mathbf{H}
- \frac{\gamma J}{M_s} \mathbf{M} \times \mathbf{M} \times \mathbf{\hat{m_p}}
\end{equation}
Here $ \mathbf{H}$ is the total field, including the exchange, anisotropy, and magnetostatic fields; $M_s$, $\gamma$, $\alpha$ are the saturation magnetization, gyromagnetic factor, and LL damping. The coefficient $J$ of the spin torque is proportional to current, and has units of magnetic field (kA/m).
The anisotropy field is just $H_K M_z/M_s$, normal to the plane, where $H_K \equiv 2K/ \mu_0 M_s$. The simulations in this paper were done with our public-domain micromagnetic finite-difference simulator\cite{alamag} -- the magnetizations are defined on a cubic lattice, the exchange field is a linear combination of neighboring magnetizations, and the magnetostatic field is computed using the Fast Multipole Method (FMM). We have omitted terms in $\alpha ^2$ since $\alpha$ is small.
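A single-macrospin sketch of Eq.~\ref{LLG} is given below, with $\gamma = 1$, unit $|\mathbf{M}|$, the total field reduced to the anisotropy field $H_K M_z/M_s$, and the polarizer set antiparallel to the initial state so that the spin torque destabilizes it; all parameter values are illustrative, not those of the simulations reported here.

```python
import numpy as np

# Macrospin sketch of the LLG equation (1): unit magnetization m, gamma = 1,
# total field reduced to H = H_K m_z z_hat, spin torque of strength J along
# the polarizer mp. Euler step with renormalization; parameters illustrative.
def llg_step(m, HK, alpha, J, mp, dt):
    H = np.array([0.0, 0.0, HK*m[2]])
    dm = (-np.cross(m, H)
          - alpha*np.cross(m, np.cross(m, H))
          - J*np.cross(m, np.cross(m, mp)))
    m = m + dt*dm
    return m/np.linalg.norm(m)           # stay on the unit sphere

HK, alpha = 1.0, 0.1
J = 3.0*alpha*HK                         # overdriven: above J_c = alpha*H_K
mp = np.array([0.0, 0.0, -1.0])          # polarizer antiparallel to m(0)
m = np.array([np.sin(0.01), 0.0, np.cos(0.01)])   # small initial tilt
for _ in range(20000):
    m = llg_step(m, HK, alpha, J, mp, dt=0.01)
print(m[2])                              # near -1: the macrospin has switched
```

With $|J|$ below $\alpha H_K$ the damping wins and the tilt decays instead; the sketch illustrates only the competition between the damping and spin-torque terms, not the spatially resolved dynamics of the finite-difference simulator.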
\section{Normal modes }
\label{section:norm}
To study the statistics of switching, we must first characterize the initial state. At zero temperature, this is the minimum energy state, which in an infinitely thin layer (or in a discretization with only one layer vertically) has magnetization vectors exactly along the normal (z) direction. We will refer to it as the "flower" state, because in a system with nonzero thickness the magnetization near the tops of the edges bends outward (and inward at the bottoms). At low temperatures, we will have only small fluctuations from this state, and the system can be characterized by a complete set of normal modes. We have classified all of these normal modes and calculated many of them\cite{normodes}, but for the present purpose we need only the lowest-frequency modes, which have different symmetries that can be classified by an integer winding number $w$: the magnetization winds (about a vertical axis) $w$ times when we move around the element circumference once. It turns out that these have the form\cite{normodes}
\begin{equation}\label{ansatz}
p(r,\theta) = ( \Re e^{iw\theta} r^w F(r), \Im e^{iw\theta}r^w F(r), 0)
\end{equation}
where F(r) is some smooth function. Magnetostatic interactions make it impossible to calculate $F(r)$ analytically -- the best way we have found to compute the low-frequency normal modes is to start with a simple \textit{Ansatz} with the correct symmetry (Eq. \ref{ansatz} with F(r) = constant) and let the system evolve according to the LLG equation. In the case of the lowest-frequency mode ($w=0$, a quasi-uniform state) the higher modes (which will initially be present with low amplitudes because the \textit{Ansatz} is not exact) will also have higher damping, and will gradually disappear, leaving the exact normal mode. We keep the lowest mode from disappearing by re-normalizing it after each cycle, or by applying a spin torque (current) to counter the effects of damping. The same can be done with the $w=1$ ("vortex") and $w=-1$ ("antivortex") modes, except that the known lower mode must be projected out to keep it from growing.
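The winding-number symmetry of Eq.~\ref{ansatz} can be checked directly: the sketch below samples the \textit{Ansatz} with $F(r)$ constant on a circle of radius $r>0$ and counts how many times the in-plane vector winds about the vertical axis; the sampling density is an arbitrary choice.

```python
import numpy as np

# Check of the winding-number symmetry of Eq. (ansatz): sample the Ansatz
# with F(r) = const on a circle of radius r > 0 and count how many times
# the in-plane vector p winds. ntheta is an arbitrary sampling density.
def ansatz_circle(w, r=1.0, ntheta=64):
    theta = np.linspace(0.0, 2*np.pi, ntheta, endpoint=False)
    z = (r**w)*np.exp(1j*w*theta)            # Re z, Im z give p_x, p_y
    return z.real, z.imag

def winding(px, py):
    ang = np.arctan2(py, px)
    ang = np.append(ang, ang[0])             # close the loop
    return int(round(np.diff(np.unwrap(ang)).sum()/(2*np.pi)))

print([winding(*ansatz_circle(w)) for w in (-1, 0, 1, 2)])   # -> [-1, 0, 1, 2]
```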
Of course, it is not possible to study switching using infinitesimal perturbations of the initial state. We have been able to continue these normal modes to finite amplitudes, preserving the symmetry, by applying a current slightly higher than the critical value, so the amplitude drifts upward slowly. The amplitude cannot be characterized by the precession angle $\theta$, as in a single-macrospin model, since the angle varies over the element, so we will characterize it by the total moment $m_z$ (which is $m_s \cos \theta$ in the single-macrospin model) instead.
The critical current to maintain precession (Fig. \ref{figure:Jc}) decreases as the precession amplitude increases (because the anisotropy field it must overcome decreases), so if the current is held constant the amplitude will increase uncontrollably. Thus we must decrease the current slightly at each time to ensure that the precession grows slowly (quasi-statically).
The result of this process is a unique exactly periodic orbit for each amplitude (each $m_z$ in Fig. \ref{figure:Jc}). The orbit can be continued past symmetry-breaking instabilities, by projecting the magnetization configuration onto the symmetry of the desired mode. In particular, the coherent mode transforms as the $l=1$ representation of the rotation group\cite{normodes} while the instability discussed in the next section is a combination of $l=0$ and $l=2$, so by imposing $l=1$ symmetry, we can suppress the instability in Fig. \ref{figure:Jc}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=3 in]{Jc.eps}
\end{center}
\caption{\label{figure:Jc} "Critical current" J of normal modes, continued to finite amplitude by the numerical method described in the text, labeled by winding number (circles are positive $w$, line is negative $w$, but these seem to be nearly degenerate.) Precession amplitude increases to the left; "MI" indicates magnetostatic instability.}
\end{figure}
\section{Magnetostatic instability of quasi-uniform mode} \label{section:inst}
The calculation leading to Fig. \ref{figure:Jc} is very stable for small amplitudes. But for
larger amplitudes, there is a magnetostatic instability, shown schematically in Fig. \ref{figure:MI}(a).
One can see that there must be an instability before the magnetization becomes in-plane (Fig. \ref{figure:MI}(b), which shows the case $\theta = \pi/2$ with a perturbation in which the magnetization tilts upward at the right and downward at the left.) This clearly lowers the anisotropy energy \textbf{and} the magnetostatic energy, analogously to stripe domains in an extended film\cite{fujiwara}, so it is clearly unstable if exchange is weak.
The instability is also related to the Suhl instability of FMR precession in bulk systems\cite{suhl}.
It is not obvious at what angle this instability will occur, but we find numerically that it is unstable for $m_z$ below a critical value $m_{MI} \approx 0.875 m_s$ (Fig. \ref{eig}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=3in, height=1.0 in]{MI.eps}
\caption{\label{figure:MI} Cartoon of (a) quasi-uniform state (solid arrows and precession circle) and perturbed by largest-eigenvalue eigenvector (dashed arrows and precession circle), for small precession angle; (b) the same for $90^\circ $ precession angle, where instability is easier to understand.}
\end{center}
\vspace{-3mm}
\end{figure}
To study this instability, we first determine the exact "unperturbed" orbit $\mathbf{M}_0(\mathbf{r},t)$ for a specific amplitude ($m_z$), which is periodic with period $T$ (\textit{i. e., }$\mathbf{M}_0(\mathbf{r},t+T) = \mathbf{M}_0(\mathbf{r},t)$).
Then we add a perturbation $\mathbf{p}(\mathbf{r})$ and evolve $\mathbf{M}_0(\mathbf{r},0) + \mathbf{p}(\mathbf{r})$ for one cycle to some configuration $\mathbf{M}'(\mathbf{r})$, defining the evolved perturbation $\mathbf{p}'(\mathbf{r}) \equiv \mathbf{M}'(\mathbf{r})- \mathbf{M}_0(\mathbf{r},T)$. The map from $\mathbf{p}$ to $\mathbf{p'}$ is the Lyapunov map. The system is stable if the eigenvalues of this map are all $< 1$. Since this is a many-dimensional map, we can only determine its eigenvalues approximately. One approach which works well is to assume that the eigenvectors with the largest eigenvalues are near the subspace spanned by the low-frequency modes (Eq. \ref{ansatz}) with $ w = 0, \pm 1$. The eigenvectors with the same symmetry ($w=0$) as $M_0$ are easy to find -- one corresponds to shifting the phase of the orbit ($p = d\mathbf{M}_0/dt$) and the other to increasing the amplitude. The $ w = \pm 1$ modes are linear in $x$ and $y$ -- it turns out that they mix, but if we use the basis $\mathbf{b}_{xmx}(x,y) = (x,0)$, $\mathbf{b}_{xmy}(x,y) = (0,x)$, $\mathbf{b}_{ymx}(x,y) = (y,0)$, $\mathbf{b}_{ymy}(x,y) = (0,y)$, in the first two ("$b_{xm}$") there is a vertical nodal line, to the left of which the magnetization tilts to the left (for $\mathbf{b}_{xmx}$) and to the right of which it tilts to the right. The other two ("$b_{ym}$") have a horizontal nodal line, and don't mix with "$b_{xm}$". Thus our $4\times4$ Lyapunov matrix $L$ is block-diagonal, and the two $2\times2$ blocks are identical by rotational symmetry. Its elements are obtained by perturbing by a basis function $\mathbf{p} = \mathbf{b}_\beta$ ($\beta = xmx$, etc.): $L_{\alpha \beta} = \mathbf{b}_\alpha \cdot \mathbf{p'}$. The eigenvalues are plotted in Fig. \ref{eig}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=3 in, height=2 in]{eig.eps}
\end{center}
\vspace{-0.2 in}
\caption{\label{eig}Lyapunov eigenvalues vs. precession amplitude. Since the matrix is real, they are either real ($\lambda_1$, $\lambda_2$) or complex conjugate pairs $\lambda_\pm = re^{\pm i \phi}$, in which case $r$ and $\phi$ are plotted. Eigenvalue $\lambda_1=1.17$, whose eigenvector is shown in Fig. \ref{evol}(a), is marked with "x". In these simulations $M_s = 500$ kA/m, $\alpha = 0.1$ for rapid convergence, $H_K = 1000$ kA/m, exchange $A = 10^{-11}$ J/m, $R = 30$ nm, $t = 4$ nm, cell size = 4 nm.}
\end{figure}
The highest-eigenvalue eigenvectors can also be obtained by a more exact method (which does not assume it is in the 6D subspace described above.) If we start with an arbitrary perturbation $p$ and simply iterate the Lyapunov map (re-normalizing $p$ each time), components along eigenvectors with smaller eigenvalues will disappear, leaving us with the correct eigenvector, shown in Fig. \ref{evol}(a).
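This iterate-and-renormalize procedure is ordinary power iteration; the sketch below applies it to an explicit $2\times2$ stand-in for the (implicitly defined, high-dimensional) Lyapunov map, with the dominant eigenvalue set to $1.17$ purely for illustration.

```python
import numpy as np

# Power-iteration sketch of the iterate-and-renormalize procedure: apply
# the cycle map to a perturbation, renormalize, and repeat until only the
# dominant eigenvector survives. The 2x2 matrix is a stand-in for the true
# Lyapunov map; its dominant eigenvalue is set to 1.17 for illustration.
rng = np.random.default_rng(0)
L_map = np.array([[1.17, 0.30],
                  [0.00, 0.60]])
p = rng.standard_normal(2)               # arbitrary initial perturbation
for _ in range(50):
    p_new = L_map @ p
    lam = np.linalg.norm(p_new)/np.linalg.norm(p)
    p = p_new/np.linalg.norm(p_new)      # renormalize each cycle
print(lam)                               # growth factor -> dominant 1.17
```

In the actual calculation the matrix--vector product is replaced by one full cycle of the micromagnetic evolution, but the convergence mechanism (components along smaller-eigenvalue directions dying out geometrically) is the same.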
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.2 in]{evol.eps}
\vspace{0.6 in}
\caption{\label{evol}
(a) Exact larger-eigenvalue eigenvector $p$ near the magnetostatic instability, at the point marked in Fig. \ref{eig} ($m_z/m_s= 0.8729$, $\lambda_1 = 1.17$). Addition of $\mathbf{p}$ to the rightward-tilting $\mathbf{M}_0$ increases the tilt at the right and decreases it at the left (b). In (b-d), color of vectors encodes $M_z$ (positive, out of the paper, as on left, is red, into the paper is blue). Subsequently the right edge reverses (c), forming a domain wall (with magnetizations precessing in the plane of the paper) that moves to the left (d). Switching is complete when the wall reaches the left edge (not shown). Only relative times are meaningful, because the initial amplitude is arbitrary.}
\end{center}
\end{figure}
\section{Switching mechanism}
Armed with this understanding of the magnetostatic instability, we can describe the switching mechanism and predict the rate. Clearly at high temperatures there will be many modes excited, so it is conceptually useful to consider the low-temperature limit in which the average energy $k_B T$ of the higher modes in a switching ensemble is much less than the energy in the quasi-uniform mode, which is biased in this ensemble to be near a switching energy $E_{sw}$. Thus we will consider the limit in which the stability factor $E_{sw}/k_BT \rightarrow \infty$, although a realistic finite value probably behaves similarly. Then in a switching trajectory, the quasi-uniform amplitude will perform a random walk, until $m_z \approx m_{MI}$, where the eigenvalue (Fig. \ref{eig}) passes 1. At this point the system no longer evolves quasi-statically -- the eigenvalue $\lambda_1$ rises so quickly that the system will shoot out along the corresponding eigenvector, shown in Fig. \ref{evol}(a). (Actually there are two degenerate eigenvectors differing by a $90^\circ$ rotation -- linear combinations produce nucleation at different points around the perimeter.) As long as this always leads to switching, the details may not matter. We have looked at the evolution of this eigenvector -- except for an initial latency\cite{latency} (time lag), it is independent of the initial amplitude of the eigenvector -- thus the switching trajectory in this limit is essentially deterministic and unique (except for spatial rotations and time translations). Fig. \ref{evol} shows several configurations along this trajectory\cite{mov} -- the left half of the system [where the perturbation (Fig. \ref{figure:MI}) narrows the precession cone] returns to the initial direction, but a reversed domain is formed in the other side, mostly at the edge. This reversed domain expands by domain-wall motion, until the system is entirely switched.
\section{Activation energy and switching rate} \label{section:act}
Within the macrospin approximation, one can write a Fokker-Planck equation\cite{V&A,Zhang} for the evolution of the probability distribution. In steady state the probability $\propto \exp(-E_{eff}/k_BT)$, where the effective energy\cite{V} satisfies
\begin{equation}\label{DE}
\frac{dE_{eff}(E)}{dE} = (1 - \frac{J}{J_c u})
\end{equation}
where we use the variable $u \equiv m_z/m_s = \cos \theta$ so we can generalize to a multi-macrospin system in which $\theta$ is not uniform.
Here $E = - KV u^2$, and $J_c = \alpha H_K$ is the critical current at which the initial state (with $\theta = 0, u=1$) is unstable against precession.
The solution to Eq. \ref{DE} is
\begin{equation}\label{Eeff}
E_{eff}/KV = 2 u J/J_c - u^2
\end{equation}
Determining the switching rate from the steady-state probability is not trivial\cite{rate}, but in the thermally activated regime it is proportional to the probability of being at the switching point $\theta_{sw}$, relative to the probability at the initial state: $\sim \exp(-E_b/k_BT)$ where the effective-energy barrier (activation energy) $E_b \equiv E_{eff}(u_{sw}) - E_{eff}({u=1})$. In the macrospin model, $u_{sw} = u_{max}$, the value for which $E_{eff}$ is maximum, which is $J/J_c$, giving
\begin{equation}\label{Ebm}
E_b^{macrospin}/KV = (1 - J/J_c)^2.
\end{equation}
When we generalize to a multi-macrospin model, $E(u)$ won't change much -- the main change is that there is an instability. We can construct a simple theory by keeping the single-macrospin $E_{eff}(u)$ but, if $u$ reaches $u_{MI}$ before it reaches $u_{max}$, i.e. $u_{MI} > u_{max}$, using $u_{sw} = u_{MI}$. Then the energy barrier becomes
\begin{equation}\label{Eb}
E_b/KV = 1 - u^2_{MI} -2 (1-u_{MI}) J/J_c
\end{equation}
The energy landscape is shown in the inset to Fig. \ref{barrier}, which also shows the magnetostatic-instability angle $\theta_{MI} \approx 0.5$.
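The resulting piecewise barrier can be written down directly from Eqs.~\ref{Ebm} and \ref{Eb}; the helper below (variable names are ours) evaluates whichever branch applies for a given reduced current.

```python
# Piecewise barrier from Eqs. (Ebm) and (Eb): the macrospin result applies
# while the maximum of E_eff at u = J/J_c lies below the instability
# threshold; once u_MI > J/J_c the walk is cut off at u_MI and the barrier
# becomes a straight line in J/J_c. Here j = J/J_c.
def barrier(j, u_mi):                   # returns E_b / KV
    if u_mi > j:                        # instability reached before u_max
        return 1.0 - u_mi**2 - 2.0*(1.0 - u_mi)*j
    return (1.0 - j)**2                 # single-macrospin result, Eq. (Ebm)

u_mi = 0.875
print(barrier(0.0, u_mi))               # 0.234375: well below KV at J = 0
print(barrier(0.0, 0.0))                # 1.0: the full macrospin barrier
# the straight line meets the macrospin parabola tangentially at j = u_MI
print(barrier(u_mi - 1e-9, u_mi) - (1.0 - u_mi)**2)
```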
\begin{figure}[t]
\begin{center}
\includegraphics[width=3 in]{barrier.eps}
\caption{\label{barrier}The energy barrier as a function of spin-torque current, showing (from top) the conventional linear formula, the quadratic formula resulting from the single-macrospin theory, and the results of assuming various values for the instability threshold $u_{MI}$ -- each is a straight line, becoming tangent to the "macrospin" curve when the maximum in the energy landscape (inset) passes $\theta_{MI}$, an average angle at the instability defined by $\cos \theta_{MI}=u_{MI}$.}
\end{center}
\end{figure}
Clearly if we take $E_{sw}$ to be the instability-onset energy, the barrier (shown as double arrows in the inset) will be much smaller.
Several approximations to the barrier are shown in Fig. \ref{barrier}. The parabola is the macrospin result; because experiments give a straighter line, often the exponent "2" is omitted and the straight line labeled "conventional" is used\cite{taniguchi}. But also, the low-current energy barrier often appears\cite{lowbar} to be much less than $KV$, by a factor as small as $ \approx 1/6$; it has been suggested that $V$ should be replaced by a smaller "activation volume"\cite{actvol}. However, it can be seen from Fig. \ref{barrier} that both of these problems (straightness and size) can be resolved by using the instability energy to determine the barrier. In particular, the value $u_{MI} \approx 0.875$ we have found (the lowest line) gives a result close to the observed activation energies. (Of course, this will depend on the particular parameters assumed, but can be calculated as in Sec. \ref{section:inst} above.)
\section{Conclusion}
In this paper we have presented a simple model for accounting for incoherence in STT-MRAM switching, which may resolve the problem that the observed activation energy is much lower than the single-macrospin prediction. Our model assumes that incoherence can be neglected until the precession reaches a certain critical amplitude, at which a magnetostatic instability occurs and the system deterministically switches. It is hoped that this simple theory can be improved by including the incoherent degrees of freedom explicitly.
\section{Acknowledgements}
This work was supported by Samsung Corporation. We acknowledge useful conversations with Dmytro Apalkov and Alexey Khvalkhovskiy.
\section{Introduction}
Long Short Term Memory (LSTM) units are very effective when working on sequential data \cite{hochreiter1997long, gers1999learning}. For some Natural Language Processing (NLP) tasks, we often need to find a distributed representation of phrases and sentences \cite{le2014distributed, lin2017structured, dasgupta2018evaluating}. One obvious way of doing this is to use a sequential LSTM which captures word order in a sentence \cite{zhang2018sentence,palangi2016deep}. But we can also obtain information about sentence structure from a dependency parse tree or about phrase structure from a constituency tree \cite{klein2004corpus}. Although RNN-based models work well with sequential information, they frequently fail to capture semantic compositionality when the information is structured rather than sequential \cite{socher2011parsing}. For example, the syntactic principles of natural language are known to be recursive, with noun phrases containing relative clauses that themselves contain noun phrases, e.g., \textit{``I went to the church which has nice windows''}\cite{socher2012semantic}. The term compositionality can also be explained in terms of a \textit{car}: a \textit{car} can be recursively decomposed into smaller parts, for example, tires and windows, and these parts can occur in different contexts, like tires on airplanes or windows in houses.
Attention \cite{bahdanau2014neural, luong2015effective, vaswani2017attention} was first introduced for doing machine translation where the target word generated by the decoder at each time step is aligned with all the words in the source sentence. In its general form, attention allows a model to put importance on certain parts of the sentence for doing any specific downstream task \cite{yang2016hierarchical, du2018text}. In a dependency tree, the relationship between the entities (head and dependent) are organized as a structure where a head word can have multiple dependents under it. In the case of a constituency tree, a phrase is represented by one of the subtrees with the root being the phrase type and words or subtrees being the children. In both tree structured LSTMs, the derivation of the vector representation of the entire tree does not depend on all of the subtree components uniformly. Some parts of the tree have a larger influence on the root vector and some parts may have less. This contribution from subtrees for the building of the whole tree depends on the underlying task that the model is performing. For example, in a sentiment analysis task the sentiment of a tree depends on the sentiment of all of its children and how this information propagates. There may be scenarios where a single word (such as ``not'') flips the sentiment of the whole subphrase. These words should get more attention when deciding the sentiment of a subphrase containing them. On the other hand, when the problem is a regression problem with the task of assigning a score based on the semantic similarity of two sentences, this attention can be calculated as a cross sentence attention. In this case the representation of one sentence can guide the structural encoding of the other sentence on the dependency as well as constituency parse tree \cite{zhou2016modelling}.
Capturing semantic relatedness means recognizing the textual entailment between the hypothesis and the premise \cite{marelli2014sick}. The general approach of modeling sentence pairs (i.e., measuring the relatedness between sentences) using neural networks includes two steps: represent both of the sentences as vectors via a sentence encoder and then initializing a classifier with these vectors to do the classification. The sentence encoder can be viewed as a compositional function which maps a sequence of words in a sentence to a vector. Some of the common compositional functions are sequential LSTM \cite{zhou2016modelling}, Tree-LSTM \cite{tai2015improved, zhou2016modelling} and CNN \cite{he2015multi}.
In this paper, we propose two models to encode attention inside tree structured LSTM cells and verify their effectiveness by evaluating them on the semantic relatedness task, where the model needs to give a score depending on how similar two sentences are. The tree data structure allows a set of dependents in the dependency tree or constituents in the constituency tree to be children of an immediately higher level (parent) tree node. Our tree attention model applies attention over the set of children in a subtree and decides which of them are important to reconstruct their parent node vector. We apply this attention with respect to four pieces of information: the vector representation of the sentence currently being represented as a tree, the vector representation of the sentence being compared with, dependent vectors (dependency tree) or phrase vector (binary constituency tree), and concatenated vectors of the dependents or the constituents. Our extensive evaluation demonstrates the effectiveness of our attentive Tree-LSTMs compared to the plain Tree-LSTM models as well as to some top performing models on the benchmark dataset.
\section{Related work}\label{relatedwork}
Socher et al.\ \cite{socher2011parsing} propose a number of recursive neural network (rNN) based models which take phrases as input rather than entire sentences. Phrases are represented as a vector as well as a parse tree. Vectors for higher level nodes in the tree are computed using a tensor-based composition function. Their best model was Matrix Vector rNN (MVrNN) where each word is represented as a vector as well as a matrix. In this model the children in a subtree interact more through their vectors rather than being influenced by some weights during the calculation of the parent's vector and matrix representation.
Tai et al.\ \cite{tai2015improved} developed two different variants of standard linear chain LSTMs: the child sum Tree-LSTM and the N-ary Tree-LSTM. The underlying concept of using input, output, update and forget gates in these variants is quite similar to how these gates are used in standard LSTMs; however, there are a few important changes. The standard LSTM works over sequence data whereas these variants are compatible with tree structured data (constituency tree or dependency tree). Also, unlike standard LSTMs, the hidden and cell states of a word at the current time step do not depend on the entire sequence seen before. Instead, the hidden and cell state of a parent node depend only on its children's hidden and cell states. Recently, Chen et al.\ \cite{chen2016enhanced} combined the LSTM with the Tree-LSTM for the natural language inference task and empirically showed that these two models complement each other very well.
Zhou et al.\ \cite{zhou2016modelling} extend the concept of standard Tree-RNNs and propose a number of attention based Tree-RNN models to perform the semantic relatedness task. Their insight was quite novel: in order to compute the semantic similarity of two sentences, one can encode attention in the tree structure of one sentence with respect to the vector representation of the other sentence. However, their proposed attention model only works with child sum Tree-LSTMs and GRUs. Attention with Tree-LSTMs has also been studied by Liu et al.\ \cite{liu2017attention} for the text summarization task, where they use two different kinds of alignment: block alignment for aligning phrases and word alignment for aligning words within phrases.
Turning to machine translation, the attention mechanism is used to align the source and target sentences in the decoding phase. More formally, the attention mechanism allows the model to attend to, and thereby emphasize, selected elements of the input. The well-known attention models from \cite{bahdanau2014neural} and \cite{luong2015effective} use recurrent models to attend over a set of source words during the generation of each target word. Using recurrent models to generate an attention score incorporates a memory mechanism inside the network which helps the model at run time to traverse and decide what to attend over. This recurrence also preserves positional information in the sequence, which helps order the generated words.
Parikh et al.\ \cite{parikh2016decomposable} propose a decomposable attention model for natural language inference tasks by removing the modules with recurrent behavior during the calculation of attention. First, they pick a single vector from the set of vectors representing the source sentence and compare its point-wise similarity with every word vector from the target sentence. Following this, they compare these alignments using a function which is a feed forward neural network and finally perform an aggregation through summation before doing the final classification. Gehring et al.\ \cite{gehring2017convolutional} propose a sequence to sequence learning framework utilizing a convolutional neural network which completely avoids recurrent models, allowing their architecture to be parallelizable. In order to capture positional information, they include a positional embedding layer which gives their model a sense of which portion of the input or output sequence it is currently dealing with. They encode \textit{sine} and \textit{cosine} frequencies for each dimension of every position in the sentence to create the positional embeddings and finally combine them with word embeddings. Vaswani et al.\ \cite{vaswani2017attention} combine the previous two works and propose a powerful machine translation framework utilizing attention without recurrence, together with positional embeddings. They also extend the decomposable attention mechanism by attending over the input sequence multiple times, calling it multi-head attention, where the goal is to extract different features with different attention heads.
\section{The Model}
In this section, we describe our work in detail. We first explain how the two variants of Tree-LSTM work. Following this, we describe our universal attention mechanism that is applicable for these two Tree-LSTM variants. Additionally, we give an in-depth analysis of adding this attention with respect to various information as discussed in Section \ref{relatedwork}.
\subsection{Incompatibility of standard LSTM and Tree structured data}
Recurrent neural networks (RNNs) are the best known and most widely used neural network (NN) model for sequence data as they sequentially scan the entire sequence and generate a compressed form of it. Although in theory RNNs are capable of remembering long distance dependencies, practically, as the sequence becomes longer, RNNs suffer from the
vanishing gradient problem \cite{bengio1994learning, pascanu2013difficulty}. To overcome this drawback some RNN variants have been introduced such as Long Short Term Memory (LSTM) \cite{hochreiter1997long} and Gated Recurrent Unit (GRU) \cite{cho2014learning}. These variants use a gating mechanism to propagate new information further and at the same time to forget some previous information allowing the gradients to propagate further.
In terms of accuracy, LSTMs are often superior to GRUs because they have more parameters, but in terms of computational cost GRUs often surpass LSTMs.
Even though these gating variants effectively solve the vanishing gradient problem of RNNs, they are limited to sequential data; however, a natural language sentence encodes more than a sequence of words. This extra information is usually represented in a tree structure. The tree structure shows how the words combine through different sub-phrases to reflect the overall meaning. If a sentence is traversed by a standard LSTM, the latter part of the sentence gets comparatively more importance as the traversal moves left to right. But if the tree structure of the sentence is traversed from bottom to top, the information from the different constituents or dependents is first combined to represent their parent node, and these parent nodes are in turn combined as children to represent the node at the next level, and so on. So in both cases an LSTM cell forgets previous information, which for the plain LSTM is related to the length of the sentence and for the Tree-LSTM is related to the depth of the tree. Also, in a plain LSTM, the hidden and cell state of a word at time step $t$ depend on the hidden and cell states of all the words from time steps $1 \ldots t-1$. But in a Tree-LSTM, the hidden and cell state of a root word depend only on the hidden and cell states of its children rather than on all the words before it.
\begin{figure*}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{c_lstm}
\caption{Child Sum Tree-LSTM}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{n_lstm}
\caption{Binary Tree-LSTM}
\label{fig:sub-second}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{ca_lstm}
\caption{Attentive Child Sum Tree-LSTM}
\label{fig:sub-third}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{na_lstm}
\caption{Attentive Binary Tree-LSTM}
\label{fig:sub-fourth}
\end{subfigure}
\caption{Illustrations of different Tree-LSTM architectures}
\label{fig:fig}
\end{figure*}
\subsection{Tree-LSTM}\label{treelstms}
There are two possible tree representations of a sentence: the dependency tree and the constituency tree \cite{chen2014fast}. As previously discussed, the standard linear chain LSTM cannot correctly analyze this structured information. To properly deal with structured data, Tai et al.\ \cite{tai2015improved} proposed two LSTM models which can analyze a tree structure while preserving every property of the standard LSTM gating mechanisms. They called the first one the child sum Tree-LSTM and the second one the N-ary Tree-LSTM. The child sum Tree-LSTM fits well with dependency trees as it is well suited for high branching, child-unordered trees. On the other hand, the N-ary Tree-LSTM (with $N=2$) works better with binarized (Chomsky Normal Form) constituency trees.
The traditional LSTM generates new hidden and cell states from the previous hidden state $h_{t-1}$, previous cell state $c_{t-1}$ and current sequential input $x_t$. In the child sum Tree-LSTM, a component node state is generated based on the states of its children in the tree, as shown in Fig.\ \ref{fig:sub-first}. To do this, the internal gates (i.e., the input, output and intermediate cell states) are updated using the sum of the hidden states of the children of the component node as follows:
\begin{equation}\label{hsum}
\Tilde{\textbf{h}}_{j} = \sum_{k \in C(j)}^{}h_{jk}
\end{equation}
where $C(j)$ denotes the children of node $j$. Next, using this modified hidden state, $\Tilde{h}$, the input, output and intermediate cell states are calculated as follows:
\begin{equation}
\textbf{i}_{j} = \sigma (\textbf{W}^{(i)} x_{j} + \textbf{U}^{(i)} \Tilde{h}_{j} + \textbf{b}^{(i)})
\end{equation}
\begin{equation}
\textbf{o}_{j} = \sigma (\textbf{W}^{(o)} x_{j} + \textbf{U}^{(o)} \Tilde{h}_{j} + \textbf{b}^{(o)})
\end{equation}
\begin{equation}
\Tilde{\textbf{c}}_{j} = \textit{tanh} (\textbf{W}^{(c)} x_{j} + \textbf{U}^{(c)} \Tilde{h}_{j} + \textbf{b}^{(c)})
\end{equation}
where $W^{(i)}$, $W^{(o)}$, $W^{(c)}$, $U^{(i)}$, $U^{(o)}$, $U^{(c)}$, $b^{(i)}$, $b^{(o)}$, and $b^{(c)}$ are the parameters to be learned. Instead of having just a single forget gate, child sum Tree-LSTMs have $k$ forget gates where $k$ is the number of children of the target node. This multiple forget gate allows child sum Tree-LSTM to incorporate individual information from each of the children in a selective manner. Each forget gate is calculated as follows:
\begin{equation}
\textbf{f}_{jk} = \sigma (\textbf{W}^{(f)} x_{j} + \textbf{U}^{(f)} h_{jk} + \textbf{b}^{(f)})
\end{equation}
Next, the individual forget gate outputs are multiplied with corresponding cell state values and then combined to get a single forget vector which is used to get the final cell state of the model as follows:
\begin{equation}
\tilde{\textbf{f}}_{j} = \sum_{k \in C(j)}^{}f_{jk} \cdot {c_{jk}}
\end{equation}
\begin{equation}\label{cell}
\textbf{c}_{j} = i_{j} \cdot \tilde{c}_{j} + \tilde{f}_{j}
\end{equation}
Finally, the update equation for the hidden state of a child sum Tree-LSTM cell is similar to the traditional LSTM:
\begin{equation}\label{hidden}
\textbf{h}_{j} = o_{j} \cdot \texttt{tanh}(c_{j})
\end{equation}
Each of the parameter matrices represents a correlation among the component vector, input $x_j$ and the hidden state $h_k$ of the $k^{th}$ child of the component unit. For example, the \texttt{sigmoid} function at the input gate represents semantically important words at input by giving values close to 1 (e.g., a verb) and relatively unimportant words by giving values close to 0 (e.g., a determiner).
Since the hidden state and cell state values of the parent node are generated based on the hidden state and the cell state of its children, child sum Tree-LSTM is well suited for dependency trees.
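The child sum update above can be sketched in pure Python. This is only an illustrative toy (the function names \texttt{child\_sum\_node}, \texttt{matvec} and the tiny dimensions are ours, not the paper's PyTorch implementation); it follows Eqns. \ref{hsum}--\ref{hidden} term by term.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def vadd(*vs):
    return [sum(xs) for xs in zip(*vs)]

def child_sum_node(x, children, W, U, b):
    """One child sum Tree-LSTM update.
    x: input word vector of the node; children: list of (h_k, c_k) pairs;
    W, U, b: parameter dicts keyed by gate name ('i', 'o', 'c', 'f')."""
    d = len(x)
    # h_tilde: sum of the children's hidden states (Eqn. 1)
    h_tilde = [sum(h[i] for h, _ in children) for i in range(d)] if children else [0.0] * d
    i_g = [sigmoid(v) for v in vadd(matvec(W['i'], x), matvec(U['i'], h_tilde), b['i'])]
    o_g = [sigmoid(v) for v in vadd(matvec(W['o'], x), matvec(U['o'], h_tilde), b['o'])]
    c_tilde = [math.tanh(v) for v in vadd(matvec(W['c'], x), matvec(U['c'], h_tilde), b['c'])]
    # one forget gate per child, applied to that child's cell state, then summed
    f_sum = [0.0] * d
    for h_k, c_k in children:
        f_k = [sigmoid(v) for v in vadd(matvec(W['f'], x), matvec(U['f'], h_k), b['f'])]
        f_sum = [fs + f * c for fs, f, c in zip(f_sum, f_k, c_k)]
    # final cell and hidden states
    c = [i * ct + fs for i, ct, fs in zip(i_g, c_tilde, f_sum)]
    h = [o * math.tanh(cv) for o, cv in zip(o_g, c)]
    return h, c
```

Note that the number of forget gates grows with the number of children, while a single set of $U$ matrices is shared across all children; this is what makes the update order-insensitive.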
The N-ary Tree-LSTM is used where there are at most $N$ ordered children. Unlike the child sum Tree-LSTM, it has a different set of parameters for each child, with each child having its own cell and hidden state, as shown in Fig.\ \ref{fig:sub-second}. The update equations for deriving the input, output and update gate values are as follows:
\begin{equation}
\textbf{i}_{j} = \sigma (\textbf{W}^{(i)} x_{j} + \sum_{l=1}^{N}\textbf{U}^{(i)}_l h_{jl} + \textbf{b}^{(i)})
\end{equation}
\begin{equation}
\textbf{o}_{j} = \sigma (\textbf{W}^{(o)} x_{j} + \sum_{l=1}^{N}\textbf{U}^{(o)}_l h_{jl} + \textbf{b}^{(o)})
\end{equation}
\begin{equation}
\Tilde{\textbf{c}}_{j} = \textit{tanh} (\textbf{W}^{(c)} x_{j} + \sum_{l=1}^{N}\textbf{U}^{(c)}_l h_{jl} + \textbf{b}^{(c)})
\end{equation}
where $W^{(i)}$, $W^{(o)}$, $W^{(c)}$, $U^{(i)}_l$, $U^{(o)}_l$, $U^{(c)}_l$, $b^{(i)}$, $b^{(o)}$, and $b^{(c)}$ are the parameters to be learned. As can be seen, for each gate, the N-ary Tree-LSTM has a set of $N$ parameter matrices associated with the $N$ hidden states whereas the child sum Tree-LSTM has just one. Next, for each of the children, forget gate values are calculated separately, as done in the child sum Tree-LSTM as follows:
\begin{equation}
\textbf{f}_{jk} = \sigma (\textbf{W}^{(f)} x_{j} + \sum_{l=1}^{N}\textbf{U}^{(f)}_{kl} h_{jl} + \textbf{b}^{(f)})
\end{equation}
Similar to the child sum Tree-LSTM, these new forget gate values are multiplied with corresponding cell state values and then summed to get the final values for the forget gate:
\begin{equation}
\tilde{\textbf{f}}_{j} = \sum_{l=1}^{N}f_{jl} \cdot {c_{jl}}
\end{equation}
Finally, the cell state and new hidden state values are updated using Equations \ref{cell} and \ref{hidden}.
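The N-ary update can be sketched the same way as the child sum case. This illustrative pure-Python toy (function and parameter names are ours, not the paper's implementation) highlights the key difference: every gate carries one parameter matrix per child position, and the forget gate additionally carries a matrix for every (gate, child) pair.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def vadd(*vs):
    return [sum(xs) for xs in zip(*vs)]

def nary_node(x, children, W, U, Uf, b):
    """One N-ary Tree-LSTM update over an ordered list of N (h, c) children.
    U[g][l] is the matrix pairing gate g with the l-th child's hidden state;
    Uf[k][l] pairs the k-th forget gate with the l-th child's hidden state."""
    N, d = len(children), len(x)

    def gate(g, act):
        z = vadd(matvec(W[g], x), b[g],
                 *[matvec(U[g][l], children[l][0]) for l in range(N)])
        return [act(v) for v in z]

    i_g = gate('i', sigmoid)
    o_g = gate('o', sigmoid)
    c_tilde = gate('c', math.tanh)
    # k-th forget gate sees all N child hidden states but scales child k's cell
    f_sum = [0.0] * d
    for k in range(N):
        f_k = [sigmoid(v) for v in vadd(matvec(W['f'], x), b['f'],
                                        *[matvec(Uf[k][l], children[l][0]) for l in range(N)])]
        f_sum = [fs + f * c for fs, f, c in zip(f_sum, f_k, children[k][1])]
    c = [i * ct + fs for i, ct, fs in zip(i_g, c_tilde, f_sum)]
    h = [o * math.tanh(cv) for o, cv in zip(o_g, c)]
    return h, c
```

Because the $U^{(g)}_l$ matrices are indexed by child position, this update is order-sensitive, which is why it suits binarized constituency trees rather than child-unordered dependency trees.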
\subsection{Attention}
The two tree structured LSTM models described in Section \ref{treelstms} treat every child within a sub-tree equally. More specifically, in an N-ary Tree-LSTM, every word contributes uniformly to the building of the higher-level constituent. Likewise, the child sum Tree-LSTM architecture assumes that, within a dependency tree branch, a head word is influenced by all of its dependent words in a similar way. When viewing the tree as a semantic representation of a sentence, this may not be the case in many scenarios. For a constituency tree, if a sub-tree contains some negative sentiment words, it is not necessarily the case that the sentiment of that particular constituent is negative. If the negative sentiment word is preceded by a negation, then the higher-level constituent becomes semantically positive because of the location of the negation word. To capture this type of information, attention is applied over the sub-tree components to apportion the importance of each component when building the entire tree either semantically or syntactically. In this study, we are interested in applying semantic attention over the sub-tree components to see how they contribute to building a sub-tree.
The attentive Tree-LSTM was proposed by \cite{zhou2016modelling} for the semantic relatedness task.
They state that the effect of semantic relevance can be implemented as part of the sentence representation construction process using a Tree-LSTM where each child is assigned a different weight. In their proposed model, a soft attention mechanism assigns an attention weight to each child in a subtree. Given a collection of hidden states $h_1, h_2, \cdots, h_n$ and an external vector $s$, their attention mechanism assigns a weight $\alpha_k$ to each of these hidden states and produces a weighted vector $g$. To achieve this, they first perform an affine transformation on each of the child hidden states and calculate a vector $m_k$ as follows:
\begin{equation}\label{mk}
m_k = \texttt{tanh}(\textbf{W}^{(m)}h_k + \textbf{U}^{(m)}s),
\end{equation}
where $W^{(m)}$ and $U^{(m)}$ are parameter matrices of size $d \times d$ and $s$ is the vector representation of the sentence learned by a sequential LSTM. Next, using these transformed hidden states $m_k$, the attention probabilities $\alpha_k$ are calculated as follows
\begin{equation}\label{alphak}
\alpha_k = \frac{\exp(\texttt{w}^Tm_k)}{\sum_{j=1}^{n}\exp(\texttt{w}^Tm_j)}
\end{equation}
where $w$ is a parameter vector of size $1 \times d$. Following this, a weighted combination of the hidden states is calculated using,
\begin{equation}\label{g}
g = \sum_{1 \leq k \leq n}^{}{\alpha_k}h_k
\end{equation}
This $g$ is of size $1 \times d$. Finally, an affine transformation is applied on this $g$ to get the new hidden state $\tilde{h}$ as follows:
\begin{equation}\label{htilde1}
\tilde{h} = \texttt{tanh}(\textbf{W}^{(a)}g + \textbf{b}^{(a)})
\end{equation}
This soft attention mechanism from \cite{zhou2016modelling} introduces four new parameters to derive the final attentive hidden state: two matrices in Eqn. \ref{mk}, one vector in Eqn. \ref{alphak} and one matrix in Eqn. \ref{htilde1}. This attention mechanism is only applicable to the child sum Tree-LSTM. It is not possible to apply this attention to N-ary Tree-LSTMs since the structure of the N-ary Tree-LSTM is such that it needs $N$ separate hidden states to work with, whereas the child sum Tree-LSTM collapses all the hidden states to a single vector through summation. In this study, we develop two generalized attention models by adopting the decomposable attention framework proposed by \cite{parikh2016decomposable} and the soft attention mechanism proposed by \cite{zhou2016modelling}.
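The soft attention mechanism above can be sketched in pure Python. This is an illustrative toy, not the authors' implementation; the helper names are ours, and we compute the normalized weights with a numerically stable softmax.

```python
import math

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def soft_attention(hs, s, Wm, Um, w, Wa, ba):
    """Soft attention over child hidden states hs, guided by sentence vector s.
    Returns the attentive hidden state h_tilde and the weights alpha_k."""
    # m_k = tanh(Wm h_k + Um s)
    ms = [[math.tanh(a + b) for a, b in zip(matvec(Wm, h), matvec(Um, s))] for h in hs]
    # alpha_k: scores w^T m_k normalized with a stable softmax
    scores = [sum(wi * mi for wi, mi in zip(w, m)) for m in ms]
    mx = max(scores)
    exps = [math.exp(sc - mx) for sc in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    # g = sum_k alpha_k h_k
    d = len(s)
    g = [sum(a * h[i] for a, h in zip(alphas, hs)) for i in range(d)]
    # h_tilde = tanh(Wa g + ba)
    h_tilde = [math.tanh(v + bb) for v, bb in zip(matvec(Wa, g), ba)]
    return h_tilde, alphas
```

The four extra parameters ($W^{(m)}$, $U^{(m)}$, $w$, $W^{(a)}$) appear as function arguments, which makes it easy to see why this mechanism only fits the child sum variant: it returns a single collapsed vector rather than per-child hidden states.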
\textbf{Model 1:} Our first model is based on the self attention mechanism, with some subtle changes to calculate the attention probability with respect to different segments of the sentence. Calculating attention in this way involves three matrices: \textit{key}, \textit{query}, and \textit{value}. The \textit{key} matrix determines which children to attend over, the \textit{query} matrix determines with respect to what the attention is applied, and the \textit{value} matrix produces the final attended vectors using the attention probabilities. The \textit{key} matrix is calculated as follows:
\begin{equation} \label{key}
\textit{key} = \textbf{W}^{(k)}M^{(k)}
\end{equation}
where, $W^{(k)}$ is a parameter matrix of size $d \times d$ and $M^{(k)}$ is the matrix on which to attend over. For child sum Tree-LSTMs, this matrix is the concatenation of the vectors of all the words under a particular head word. For N-ary Tree-LSTMs, it is the concatenation of all the word vectors in a constituent. So in both cases the formal representation is $M^{(k)} = [h_1; h_2; \dots ; h_n]$. In order to encode self attention in the sub-tree, the \textit{query} and \textit{value} matrices also get calculated with respect to $M^{(k)}$ ($M^{(k)}=M^{(q)}=M^{(v)}$) but with a different set of parameter matrices $W^{(q)}$ and $W^{(v)}$ as follows:
\begin{equation}\label{query}
\textit{query} = \textbf{W}^{(q)}M^{(q)}
\end{equation}
\begin{equation}\label{value}
\textit{value} = \textbf{W}^{(v)}M^{(v)}
\end{equation}
Once the \textit{key} and \textit{query} get calculated, the next step is to align each of them by looking at the similarity at each dimension of their representation. This is done using:
\begin{equation}\label{align}
\textit{align} = (\textit{query})^T\textit{key}\cdot\frac{1}{\sqrt{d}}
\end{equation}
where the \textit{align} matrix is of size $n \times n$, with $n$ representing the number of children within this sub-tree. The factor $\sqrt{d}$ normalizes the dot products. Finally, the attention probability is calculated by applying \texttt{softmax} over each row as follows:
\begin{equation}\label{alpha}
\alpha = \texttt{softmax}(\textit{align})
\end{equation}
Here $\alpha$ is the matrix of attention probabilities, where each row represents how much attention needs to be given to each of the children within that sub-tree according to the word at that row. As there are $n$ children within a sub-tree, the size of this matrix is $n \times n$. Finally, we calculate a new attention encoded hidden state $\tilde{h}$ through a batch-wise matrix multiplication between the $\alpha$ and \textit{value} matrices as follows:
\begin{equation}\label{htilde}
\tilde{h} = \texttt{bmm}(\alpha, \textit{value})
\end{equation}
The shape of this new $\tilde{h}$ is $n \times d$. It contains attention encoded hidden state values of all the children sequentially one on top of another. So in order to locate a specific hidden state value, the row number corresponding to the position of that child in the sub-tree is used. For child sum Tree-LSTMs, all of the hidden state vectors are summed to get a single vector and for N-ary Tree-LSTMs, one row of $\tilde{h}$ is selected as the hidden state of a child.
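Model 1's self-attention step can be sketched in pure Python. This toy (names and toy dimensions are ours) follows the \textit{key}/\textit{query}/\textit{value} equations; note that for the product with the $n \times n$ matrix $\alpha$ to conform, we multiply by the transpose of \textit{value}, which is our reading of the batch-wise multiplication.

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def softmax_rows(A):
    out = []
    for row in A:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        z = sum(e)
        out.append([v / z for v in e])
    return out

def self_attention(M, Wk, Wq, Wv):
    """Scaled dot-product self attention over the children of one sub-tree.
    M is the d x n matrix of stacked child hidden states (one per column).
    Returns (alpha, h_tilde): the n x n probabilities and the n x d states."""
    key, query, value = matmul(Wk, M), matmul(Wq, M), matmul(Wv, M)
    d = len(M)
    align = [[v / math.sqrt(d) for v in row] for row in matmul(transpose(query), key)]
    alpha = softmax_rows(align)
    h_tilde = matmul(alpha, transpose(value))   # n x d attention-encoded states
    return alpha, h_tilde
```

Each row of $\tilde{h}$ is the attention-encoded hidden state of one child, so the child sum variant sums the rows while the binary variant indexes them by child position.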
For the semantic relatedness task, where the objective is to assign a score based on the similarity between two sentences, it is better to calculate the query matrix with respect to the vector representation of the second sentence. Specifically, given a pair of sentences, our generalized attentive encoder uses the representation of one sentence generated via a sequential LSTM to guide the structural encoding of the other sentence on both
the dependency as well as the constituency tree. In that case, $M^{(q)}$ is a vector rather than a matrix thus changing the shape of \textit{query} from Eqn. \ref{query} into $1 \times d$. This results in an alignment vector from Eqn. \ref{align} of size $1 \times n$. When \texttt{softmax} is applied over this vector, a vector of probabilities, $\alpha$, is produced. Finally, instead of doing a matrix multiplication as in Eqn. \ref{htilde}, a point-wise multiplication $\tilde{h} = \alpha * \texttt{value}$ is performed resulting in a new hidden state vector. For child sum Tree-LSTMs, we use this new hidden state vector in place of the one generated in Eqn. \ref{hsum} and for N-ary Tree-LSTMs, we use this hidden state vector as the hidden state of both the left and right children. This way of calculating self attention requires three additional matrices as parameters from Eqn. \ref{key}, \ref{query} and \ref{value}, a smaller number of parameters than found in \cite{zhou2016modelling}. We further continue our experiments by calculating a phrase vector representation using an additional LSTM cell and use it as the \textit{query} vector. Then, we adopt the same procedure as above to calculate the attention probability $\alpha$ and the final hidden state vector $\tilde{h}$. However, this requires more parameters than what is required in \cite{zhou2016modelling}.
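The cross sentence variant can be sketched as follows. This is an illustrative toy under one stated assumption: since the point-wise product of the $1 \times n$ probability vector with the $d \times n$ \textit{value} matrix must yield a single hidden state vector, we assume the $\alpha$-weighted value columns are summed.

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def cross_sentence_attention(M, s, Wk, Wq, Wv):
    """Attention over the children in M (d x n), guided by the other
    sentence's vector s (length d). Returns one d-dimensional hidden state;
    the alpha-weighted value columns are summed (our assumption)."""
    key = matmul(Wk, M)       # d x n
    value = matmul(Wv, M)     # d x n
    query = matvec(Wq, s)     # length d
    d = len(s)
    scores = [sum(q * k for q, k in zip(query, col)) / math.sqrt(d) for col in zip(*key)]
    m = max(scores)
    exps = [math.exp(v - m) for v in scores]
    z = sum(exps)
    alpha = [e / z for e in exps]   # 1 x n attention probabilities
    cols = list(zip(*value))
    return [sum(a * col[i] for a, col in zip(alpha, cols)) for i in range(d)]
```

For the child sum variant this vector replaces the summed children's hidden state; for the binary variant it is copied as the hidden state of both children.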
\textbf{Model 2:} In our second model, we combine the concepts of the decomposable attention mechanism with a soft attention layer. Here, we have two matrices, \textit{key} and \textit{query}, and their derivations are the same as Eqns. \ref{key} and \ref{query}. We further align and transform these matrices into probabilities using the same set of equations, Eqns. \ref{align} and \ref{alpha}. We again make some subtle changes which result in four different versions of this model. In Eqn. \ref{query}, when $M^{(q)} = M^{(k)}$, the dimension of the attention probability becomes $n \times n$, and when $M^{(q)}$ is either a sentence vector $M^{(q)} = \texttt{LSTM}(\textit{sentence}_2)$ or a phrase vector $M^{(q)} = \texttt{LSTM}(M^{(k)})$, the dimension of this attention probability changes to $1 \times n$. Then, $\tilde{h}$ is calculated as follows,
\begin{equation}
\tilde{h} =
\begin{cases}
\texttt{bmm}(\alpha, \textit{M}^{(k)}), & \text{if } \alpha \text{ is a matrix} \\
\alpha * \textit{M}^{(k)}, & \text{if } \alpha \text{ is a vector}
\end{cases}
\end{equation}
Next we perform an affine transformation of this $\tilde{h}$ by multiplying it with a parameter matrix $W$ and passing it through a \texttt{tanh} layer as follows:
\begin{equation}
\hat{h} = \texttt{tanh}(\textbf{W}\tilde{h} + \textbf{b})
\end{equation}
In the case of child sum Tree-LSTMs, if $\hat{h}$ is a matrix, we sum all of its rows and use that as the final vector, and if $\hat{h}$ is a vector, we use it as is. In the case of N-ary Tree-LSTMs, if $\hat{h}$ is a matrix, then each row corresponds to the hidden state of a child, and if $\hat{h}$ is a vector, then we copy this vector as the hidden states of the children.
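Model 2's self-attention case for the child sum variant can be sketched as follows. This illustrative toy (names are ours) drops the \textit{value} matrix, weights $M^{(k)}$ directly by $\alpha$ (transposing $M^{(k)}$ so the product conforms, which is our reading of the batch-wise multiplication), and then applies the affine \texttt{tanh} layer before collapsing the rows by summation.

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def softmax_rows(A):
    out = []
    for row in A:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        z = sum(e)
        out.append([v / z for v in e])
    return out

def model2_self(M, Wk, Wq, W, b):
    """Model 2, self-attention case, child sum variant.
    key/query come from M (d x n); h_tilde = alpha times M transposed;
    an affine tanh layer is applied per row; rows are summed into one vector."""
    key, query = matmul(Wk, M), matmul(Wq, M)
    d = len(M)
    align = [[v / math.sqrt(d) for v in row] for row in matmul(transpose(query), key)]
    alpha = softmax_rows(align)
    h_tilde = matmul(alpha, transpose(M))                                       # n x d
    h_hat = [[math.tanh(v + bb) for v, bb in zip(matvec(W, row), b)] for row in h_tilde]
    return [sum(col) for col in zip(*h_hat)]   # collapse rows by summation
```

In the sentence- or phrase-guided versions, $\alpha$ degenerates to a $1 \times n$ vector and the summation over rows disappears, exactly as in the vector case of the piecewise definition of $\tilde{h}$.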
\section{Experimental Setup and Analysis}
In this section, we describe the detailed experimental setup for the evaluation of our study. We first explain the dataset statistics for evaluating our generalized attention frameworks. Following this, we explain the working environment details along with the hyper-parameter settings of our architecture.
We evaluated our model on the semantic similarity task using the Sentences Involving Compositional Knowledge (SICK) dataset \cite{marelli2014sick}. The task is to predict a similarity score for a pair of sentences, which is then compared to a human-produced score. The SICK dataset contains 9927 sentence pairs configured as: 4500 training pairs, 500 development pairs and 4927 test pairs. Each sentence pair is annotated with
a similarity score ranging from 1 to 5. A high score indicates that the sentence pair is strongly related. All sentences are derived from existing image and video description datasets. The assessment measures are Pearson's $r$ and mean squared error (MSE).
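The two assessment measures are straightforward to compute; a minimal pure-Python sketch (function names are ours) is:

```python
import math

def pearson_r(preds, golds):
    """Pearson correlation coefficient between predicted and gold scores."""
    n = len(preds)
    mp, mg = sum(preds) / n, sum(golds) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(preds, golds))
    sp = math.sqrt(sum((p - mp) ** 2 for p in preds))
    sg = math.sqrt(sum((g - mg) ** 2 for g in golds))
    return cov / (sp * sg)

def mse(preds, golds):
    """Mean squared error between predicted and gold scores."""
    return sum((p - g) ** 2 for p, g in zip(preds, golds)) / len(preds)
```

Higher Pearson correlation and lower MSE both indicate closer agreement with the human-annotated relatedness scores.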
\begin{table}[h]
\caption{\label{hyper} Ranges of different hyper-parameters searched during tuning. }
\centering
\small
\begin{tabular}{ p{1.5in} | p{1.2 in} } \hline
\textbf{Hyper-parameter} & \textbf{Range Selected} \\ \hline
Learning rate & 0.01 / 0.025 / 0.05 \\ \hline
Batch size & 10 / 25 / 30 \\ \hline
Momentum & 0.9 \\ \hline
Memory dimension & 150 \\ \hline
MLP hidden dimension & 50 \\ \hline
Attention layer dimension & 150 \\ \hline
Dropout & 0.5 / 0.2 / 0.1 \\ \hline
Word embedding size & 300 \\ \hline
Gradient clipping & 5 / 20 / 50 \\ \hline
Weight decay & $10^{-5}$\\ \hline
Learning rate decay & 0.05\\ \hline
\end{tabular}
\end{table}
Table \ref{hyper} shows the detailed hyper-parameter settings of our model. We trained our model on an Nvidia GeForce GTX 1080 GPU with the `Adam', `SGD' and `Adagrad' optimizers. All of the results in the next section are reported using `Adagrad' as it gave the best results. The `Learning rate decay' parameter was only used with the `SGD' optimizer. We used PyTorch 0.4 to implement our model under the Linux environment.
\begin{table}[t]
\centering
\caption{\label{cross}Test set results on the SICK dataset. The first group lists previous results, and the remainder are the results of our models. We mark models
that we re-implemented with a $\dagger$.}
\begin{tabular}{|c|c|c|l|l|}
\hline
\multirow{10}{*}{\begin{tabular}[c]{@{}c@{}}Previous\\ Models\end{tabular}} & \multicolumn{2}{c|}{\textbf{Model}} & \textbf{$\mathbf{r}$} & \textbf{MSE} \\ \cline{2-5}
& \multicolumn{2}{l|}{ECNU \cite{zhao2014ecnu}} & 0.8414 & \multicolumn{1}{c|}{---} \\ \cline{2-5}
& \multicolumn{2}{l|}{Combine-skip+COCO \cite{kiros2015skip}} & 0.8655 & 0.2561 \\ \cline{2-5}
& \multicolumn{2}{l|}{ConvNet\cite{he2015multi}} & 0.8686 & 0.2606 \\ \cline{2-5}
& \multicolumn{2}{l|}{Seq-GRU \cite{zhou2016modelling}} & 0.8595 & 0.2689 \\ \cline{2-5}
& \multicolumn{2}{l|}{Seq-LSTM \cite{zhou2016modelling}} & 0.8528 & 0.2831 \\ \cline{2-5}
& \multicolumn{2}{l|}{Dep. Tree-GRU \cite{zhou2016modelling}} & 0.8672 & 0.2573 \\ \cline{2-5}
& \multicolumn{2}{l|}{Dep. Tree-GRU + Attn. \cite{zhou2016modelling}} & 0.8701 & 0.2524 \\ \cline{2-5}
& \multicolumn{2}{l|}{ \multirow{2}{*}{Const. Tree-LSTM \cite{tai2015improved}}}& 0.8582 & 0.2734 \\ \cline{4-5} & \multicolumn{2}{l|}{}& 0.8460 $\dagger$ & 0.2895 $\dagger$ \\ \cline{2-5}
& \multicolumn{2}{l|}{ \multirow{2}{*}{Dep. Tree-LSTM \cite{tai2015improved}}}& 0.8676 & 0.2532 \\ \cline{4-5} & \multicolumn{2}{l|}{}& 0.8663 $\dagger$ & 0.2612 $\dagger$ \\ \cline{2-5}
& \multicolumn{2}{l|}{ \multirow{2}{*}{Dep. Tree-LSTM + Attn. \cite{zhou2016modelling}}}& 0.8730 & 0.2426 \\ \cline{4-5} & \multicolumn{2}{l|}{}& 0.8635 $\dagger$ & 0.2591 $\dagger$ \\ \cline{2-5}
\hline \hline
\multicolumn{1}{|l|}{\multirow{8}{*}{\begin{tabular}[c]{@{}c@{}}Child \\Sum\\ Tree\\
LSTM\end{tabular}}} & \multirow{4}{*}{Model 1} & Self & 0.7466 & 0.4545 \\ \cline{3-5}
\multicolumn{1}{|l|}{} & & Sentence 1 & 0.7305 & 0.4849 \\ \cline{3-5}
\multicolumn{1}{|l|}{} & & Sentence 2 & 0.7939 & 0.3801 \\ \cline{3-5}
\multicolumn{1}{|l|}{} & & Phrase & 0.7889 & 0.3877 \\ \cline{2-5}
\multicolumn{1}{|l|}{} & \multirow{4}{*}{Model 2} & Self & \multicolumn{1}{l|}{0.8577} & \multicolumn{1}{l|}{0.2695} \\ \cline{3-5}
\multicolumn{1}{|l|}{} & & Sentence 1 & 0.8620 & 0.2634 \\ \cline{3-5}
\multicolumn{1}{|l|}{} & & Sentence 2 & 0.8686 & 0.2518 \\ \cline{3-5}
\multicolumn{1}{|l|}{} & & Phrase & 0.8623 & 0.2615 \\ \hline
\multirow{8}{*}{\begin{tabular}[c]{@{}c@{}}Binary\\ Tree\\ LSTM\end{tabular}} & \multirow{4}{*}{Model 1} & Self & 0.8648 & 0.2567 \\ \cline{3-5}
& & Sentence 1 & 0.8692 & 0.2486 \\ \cline{3-5}
& & Sentence 2 & 0.8686 & 0.2507 \\ \cline{3-5}
& & Phrase & 0.8676 & 0.2517 \\ \cline{2-5}
& \multirow{4}{*}{Model 2} & Self & 0.8698 & 0.2476 \\ \cline{3-5}
& & Sentence 1 & 0.8698 & 0.2476 \\ \cline{3-5}
& & Sentence 2 & 0.8720 & 0.2435 \\ \cline{3-5}
& & Phrase & 0.8696 & 0.2479 \\ \hline
\end{tabular}
\end{table}
Table \ref{cross} shows the overall evaluation of our models in terms of Pearson's $r$ and Mean Squared Error (MSE). This table also contains the results of some top-performing models on the SICK dataset. Among these models, \cite{tai2015improved} and \cite{zhou2016modelling} did their evaluation with plain Tree-LSTMs, whereas the rest use different composition functions, such as ConvNet \cite{he2015multi}, ECNU \cite{zhao2014ecnu} and Combine-skip + COCO \cite{kiros2015skip}. \cite{zhou2016modelling} also experimented with attentive Tree-LSTMs and GRUs, but their designs are only compatible with the child sum variant. Of our two proposed models, Model 2 performs very well on both Tree-LSTM variants, showing significant improvements with every configuration. For both the child sum and the binary Tree-LSTM, our second model with cross sentence attention outperforms the plain Tree-LSTM variants, achieving MSEs of $0.2518$ and $0.2435$, respectively. For the child sum Tree-LSTM, Model 1 performs poorly compared to all the other models. This poor performance is due to the hard attention that it applies: if a subtree has $n$ children, the hard attention forces $n-1$ of them to have probability close to $0$, so that a single child hidden state dominates the summation and the rest contribute almost nothing. Model 2, on the other hand, performs better in every configuration with both variants because, even though hard attention pushes one of the children close to $0$, the normalization of the N-ary tree into a binary tree gives the information much more flexibility to flow from bottom to top. During normalization, a branch with $n$ children gets split into $n-1$ full binary trees, resulting in $(n-1)/2$ nodes that are always chosen. 
Our best performing attentive child sum Tree-LSTM model with cross sentence attention achieves a better result ($0.2518$ MSE) than the plain child sum tree variant from \cite{tai2015improved} ($0.2532$ MSE). Our score did not surpass the reported result ($0.2426$ MSE) of the attentive child sum variant from \cite{zhou2016modelling}. However, our implementation of their model with their reported hyper-parameters gave a $0.2591$ MSE, which is significantly worse than their claimed MSE. This suggests that the implementation environment has a strong impact on model performance. Our child sum Tree-LSTM Model 2 with cross sentence attention achieves better performance than our implementation of \cite{zhou2016modelling} using their hyper-parameter settings. To the best of our knowledge, our work is the first to encode attention inside a binary Tree-LSTM cell. For the binary Tree-LSTM, our best performing model with cross sentence attention achieves $0.2435$ MSE, which is significantly better than the $0.2734$ MSE reported in \cite{tai2015improved} for the non-attentive version. In our implementation of the plain binary Tree-LSTM without attention from \cite{tai2015improved}, we were not able to reproduce their reported result and ended up with $0.2895$ MSE, which is much worse than what we obtained with every configuration of our Model 1 and Model 2. This performance analysis demonstrates the effectiveness of our generalized attention model.
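The dominance effect of hard attention described above can be illustrated with a toy example (ours, not the paper's actual model; the hidden states and scores are made up). A sharply peaked softmax over child scores produces a near-one-hot weight vector, so the weighted summation essentially collapses to a single child's hidden state:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n, d = 4, 8                                  # 4 children, hidden size 8 (toy)
H = rng.normal(size=(n, d))                  # made-up child hidden states
scores = np.array([5.0, 0.1, -0.3, 0.2])     # sharply peaked attention logits

alpha = softmax(scores)                      # nearly one-hot: "hard" attention
h_sum = alpha @ H                            # weighted summation of children

print(alpha.round(3))                        # first child receives ~0.98
```

The remaining $n-1$ children contribute only a few percent of the summed representation, matching the behavior discussed above.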
\begin{figure}[b]
\centerline{\includegraphics[scale = 0.8]{dependency}}
\caption{ \label{depend}Probability of each node being selected by attentive child sum Tree-LSTM Model 2 with cross sentence attention (\textit{\textbf{Left}: A man is exercising;
\textbf{Right}: A man is doing physical activity;
\textbf{Label}: Entailment}).}
\end{figure}
\begin{figure}[ht]
\centerline{\includegraphics[scale = 0.7]{constituency}}
\caption{ \label{const}Probability of each node being selected by attentive binary Tree-LSTM Model 2 with cross sentence attention (\textit{\textbf{Left}: A man is playing a violin;
\textbf{Right}: A man is harping on about a play;
\textbf{Label}: Neutral}).}
\end{figure}
Figure \ref{depend} depicts the probability assigned to each node in the dependency tree by our Model 2 with cross sentence attention. Unlike the standard child sum Tree-LSTM, where the hidden states of all the children nodes are combined with a plain summation, our attentive child sum Tree-LSTM assigns a weight to each node and then does a weighted summation. The example used in this figure has ``\textit{A man is exercising}'' as the left sentence, ``\textit{A man is doing physical activity}'' as the right sentence and ``\textit{Entailment}'' as their relationship. As usual, the main verb from both of the sentences is selected as the \textit{root} node. The auxiliary verb (\textit{is}) gets high attention in both the left and right trees because of the word similarity. However, their absolute influence varies because of the presence of semantically related words in other branches, as discussed above. Both of these trees share the same nominal subject (\textit{nsubj}), though with different probabilities (in the left tree its probability is significantly lower). The reason is that the cross sentence attention allows the word \textit{man} from the left sentence to align with two words, \textit{man} and \textit{physical}, from the right sentence. As they share a similar semantic meaning in the vector space, the branch in the left sentence that contains \textit{man} is diminished because the right sentence divides the attention between two branches (left sentence: \textit{exercising} $\xrightarrow[]{\textit{nsubj}}$ \textit{man}; right sentence: \textit{doing} $\xrightarrow[]{\textit{nsubj}}$ \textit{man} and \textit{doing} $\xrightarrow[]{\textit{dobj}}$ \textit{activity}).
Figure \ref{const} depicts the probability assigned to each node in a binary (Chomsky Normal Form) constituency tree using an attentive binary Tree-LSTM with cross sentence attention. In this setting, the attention on the structure of the left sentence is computed with respect to the vector representation of the right sentence and vice versa. As a result, the words in a specific phrase from the left sentence are aligned with very high attention probability if the same words appear anywhere in the right sentence. However, as \texttt{softmax} was operating with small values from Eqn.\ \ref{align}, it forced both children to have the same probabilities ($0.5$). To verify whether this probability has any effect, we confirmed that replacing $\alpha$ in Eqn.\ \ref{htilde} with pairs of equal values other than $0.5$ results in comparatively poor performance. Finally, for the inference of attention probabilities, we replaced \texttt{softmax} from Eqn.\ \ref{alpha} with plain normalization. For the example in Fig.\ \ref{const}, we have ``\textit{A man is playing a violin}'' as the left sentence, ``\textit{A man is harping on about a play}'' as the right sentence and ``\textit{Neutral}'' as their relationship. The phrase \textit{NP} gets almost the same probabilities in both the left ($0.49$) and right ($0.55$) trees because of having the same set of words: ``\textit{A man}''. The sub-phrase \textit{VBZ} under \textit{VP} in both trees gets very high attention due to having the same word ``\textit{is}'' at exactly the same position. Due to the Chomsky normalization, the tree on the right side gets an extra dummy node \textit{X} which contains \textit{VBG} and \textit{RP} as the child nodes. In the vector space, the words ``playing'' and ``harping'' are semantically connected, which allows both models to align them with moderately high and equal probabilities. 
The left tree does not have any particle (``\textit{RP}'') words, which causes the model to put low attention probability on the particle when it appears in the right tree. The left tree has \textit{NP} as the right child of \textit{VP} at level 3 with probability $0.55$, which is quite close to the attention \textit{PP} gets ($0.63$) as the right child of \textit{VP} at the same level in the right tree. Again, in the rightmost branch of both trees, the words ``play'' and ``violin'' share the same semantic space, which causes them to get aligned with almost the same probabilities. The \textit{DT} in this branch gets the same high probability because it appears in both sentences at relatively similar positions.
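The contrast between \texttt{softmax} on small alignment scores and plain normalization can be seen in a toy example (ours; the score values are hypothetical). \texttt{softmax} of two small, nearly equal scores is close to $(0.5, 0.5)$, whereas plain normalization of the same (nonnegative) scores preserves their ratio:

```python
from math import exp

def softmax(xs):
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Hypothetical small alignment scores for the two children of a binary node
scores = [0.012, 0.004]

soft = softmax(scores)                       # ~[0.502, 0.498]: nearly uniform
norm = [s / sum(scores) for s in scores]     # [0.75, 0.25]: ratios preserved

print([round(p, 3) for p in soft], norm)
```

This is why replacing \texttt{softmax} with plain normalization restores a meaningful probability contrast between the two children.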
\section{Conclusion}
Previous attempts to encode the attention mechanism in Tree-LSTMs were only successful for the child sum variant, as the techniques used are not easily adaptable to binary trees such as the Chomsky Normal Form constituency tree. In this paper, we have introduced two different ways of applying attention to tree structures. The second of these two methods gives superior performance for both tree variants. The proposed techniques can be used on both dependency and constituency tree structures. Our experimental results verify the superiority of the attentive variants of Tree-LSTMs over traditional Tree-LSTMs and linear chain LSTMs on the semantic relatedness task. With our extensive in-depth analysis, we showed that our proposed attention models provide a good representation of how a sentence builds up semantically from its words. Our generalized attention framework is adaptable to any tree-like structure.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
The Gross-Neveu (GN) model \cite{gn} is a quantum field theory in $d=2$
spacetime dimensions with an $N$-component massless fermion $\psi_j$,
$j=1,...,N$, defined by the path integral
\begin{equation}
Z = \int \prod_x [{\cal D}\psi][{\cal D}\bar\psi] \,
e^{i\int d^2x \, {\cal L}} \ ,
\label{z}
\end{equation}
with the Lagrangian density \cite{indices}
\begin{equation}
{\cal L} = i\bar\psi \partial\hspace{-0.07in}\slash \psi + \frac{g}{2} (\bar\psi\psi)^2 \ .
\label{lag}
\end{equation}
This model is of interest because it exhibits, albeit in a lower-dimensional,
non-gauge-theory context, some properties of quantum chromodynamics (QCD),
namely asymptotic freedom, dynamical symmetry breaking of a certain chiral
symmetry, and the formation of a massive bound state of fermions. These
properties were shown by an exact solution of the model in \cite{gn} in an $N
\to \infty$ limit that enabled Gross and Neveu to obtain nonperturbative
information about the theory. A semiclassical calculation
of the bound-state spectrum of the model was carried out in \cite{dhn}.
The Gross-Neveu model has also been studied at finite $N$, where it is not, in
general, exactly solvable. In these studies, one again makes use of a property
that the model shares with QCD, namely asymptotic freedom, which allows one to
carry out reliable perturbative calculations at high Euclidean energy/momentum
scales $\mu$ in the deep ultraviolet (UV), where the running four-fermion
coupling, $g(\mu)$, approaches zero. In this context, there is an interesting
and fundamental question: how does this running coupling $g(\mu)$ change as the
scale $\mu$ decreases from the deep UV to the infrared (IR) limit at $\mu = 0$?
This change of $g(\mu)$ as a function of $\mu$ is described by the
renormalization group (RG) \cite{rg} and the associated beta function, $\beta =
dg/dt$, where $dt = d\ln\mu$. The asymptotic freedom property is equivalent to
the fact that $\beta$ is negative in the vicinity of the origin, $g=0$, so that
this point is a UV fixed point (UVFP) of the renormalization group. As $\mu$
decreases from the UV toward the IR, several different types of behavior of a
theory are, {\it a priori}, possible. One is that the (perturbatively
calculated) beta function has no IR zero, so that as $\mu$ decreases, $g(\mu)$
eventually increases beyond the range where perturbative methods can be used to
study its RG evolution. An alternative possibility is that $\beta$ has an IR
zero at sufficiently small coupling so that it can be studied using
perturbative methods. An exact IR zero of $\beta$ would be an IR fixed point
(IRFP) of the renormalization group. In the $N \to \infty$ limit used in
\cite{gn} to solve the model, the resultant beta function (given below in
Eq. (\ref{betagn})) does not exhibit any IR zero. Ref. \cite{schonfeld}
calculated $1/N$ corrections to the $N \to \infty$ limit in the Gross-Neveu
model and excluded the presence of an IR zero to this order. However, to our
knowledge, there has not been an analysis of the beta function of the GN model
for finite $N$ to higher-loop order to address the question of whether it
exhibits evidence for an infrared fixed point.
In this paper we shall carry out this analysis of the beta function of the
finite-$N$ Gross-Neveu model to address and answer the question of whether this
function exhibits an IR zero. We shall investigate the beta function to the
highest loop order to which it has been calculated, namely four loops, making
use of a recent computation of the four-loop term in Ref. \cite{gracey2016}.
This paper is organized as follows. In Section \ref{background_section} we
review some background information about the Gross-Neveu model.
In Section \ref{beta_section} we carry out
our analysis of the beta function of the finite-$N$ Gross-Neveu model up to the
four-loop level. In Section \ref{pade_section} we extend this analysis using
Pad\'e approximants. Section \ref{scheme_section} contains an analysis of the
effect of scheme transformations on the beta function. In Section
\ref{largeN_section} we comment further on the large-$N$ limit.
Our conclusions are given in Section \ref{conc}.
\section{Some Relevant Background on the Gross-Neveu Model}
\label{background_section}
Here we briefly review some relevant background concerning the Gross-Neveu
model. We first comment on some notation. In Ref. \cite{gn}, the coefficient in
front of the $(\bar\psi\psi)^2$ operator was written as a squared coupling,
which we denote as $(g_{GN}^2/2)$, while many subsequent works have written it
as $g/2$, so one has
\begin{equation}
g \equiv g_{GN}^2 \ .
\label{ggn}
\end{equation}
The analysis of the model in \cite{gn} made use of a functional integral
identity to express the path integral as the $m \to \infty$ limit of a path
integral containing an auxiliary real scalar field $\phi$ with a mass $m$ and a
Yukawa interaction
\begin{equation}
{\cal L}_Y = g_{GN}m[\bar\psi \psi]\phi \ .
\label{yuk}
\end{equation}
Since $\phi$ is a real field, the hermiticity of ${\cal L}_Y$ implies that
$g_{GN}$ must be real, which, in conjunction with Eq. (\ref{ggn}), implies that
$g$ must be non-negative:
\begin{equation}
g \ge 0 \ .
\label{gpos}
\end{equation}
For $d=2$ (as more generally, for any even spacetime dimension), one can define
a product of Dirac gamma matrices, denoted $\gamma_5$, that satisfies the
anticommutation relation $\{\gamma_5,\gamma_\mu\}=0$ for all $\gamma_\mu$.
This $\gamma_5$ matrix also satisfies $\gamma_5^2=1$ and
$\gamma_5^\dagger=\gamma_5$. (An explicit representation is $\gamma_0 =
\sigma_1$, $\gamma_1 = \sigma_2$, with $\gamma_0 \gamma_1 = i \gamma_5 =
i\sigma_3$, where $\sigma_j$ are the Pauli matrices.) One can then define
chiral projection operators $P_{L,R} = (1/2)(1 \pm \gamma_5)$.
As usual, one then defines left and right chiral components of the
fermion field as $\psi_L = P_L \psi$ and $\psi_R = P_R \psi$.
The Gross-Neveu model is invariant under a discrete
global ${\mathbb Z}_2$ group generated by the identity and the
chiral transformation
\begin{equation}
\psi \to \gamma_5 \psi \ .
\label{psidiscrete}
\end{equation}
This discrete chiral transformation (\ref{psidiscrete}) takes $\bar\psi\psi \to
-\bar\psi\psi$, and hence this ${\mathbb Z}_2$ symmetry forbids (i) a mass term
in the Lagrangian (\ref{lag}) and (ii) the generation of a nonzero condensate
$\langle \bar\psi\psi\rangle$. This is true to all (finite) orders of
perturbation theory.
The Gross-Neveu model is also invariant under the continuous
global (cg) symmetry group
\begin{equation}
G_{cg} = {\rm U}(N)
\label{cun}
\end{equation}
defined by the transformation
\begin{equation}
\psi \to U \psi \ ,
\label{psitran}
\end{equation}
where $U \in {\rm U}(N)$ (so $\bar\psi \to \bar\psi U^\dagger$). In terms of
the chiral components of the fermion field, the continuous global symmetry
transformation (\ref{psitran}) is $\psi_L \to U \psi_L$, $\psi_R \to U \psi_R$.
In contrast to the discrete $\gamma_5$ symmetry, the continuous symmetry
$G_{cg}$ leaves the operator $\bar\psi\psi$ invariant \cite{lsym}.
An exact solution of the theory was obtained in \cite{gn} in the limit
$N \to \infty$ and $g_{GN} \to 0$ with the product
\begin{equation}
\lambda \equiv g_{GN}^2 N \equiv gN
\label{lambda}
\end{equation}
a fixed and finite function of $\mu$. We shall denote this as the LN limit
(i.e., the large-$N$ limit with the condition (\ref{lambda}) imposed). In this
limit, there is a nonperturbative generation of a nonzero bilinear fermion
condensate, $\langle \bar \psi\psi \rangle$, dynamically breaking the discrete
${\mathbb Z}_2$ chiral symmetry. In this limit, there is also the formation
of a massive bound state of fermions.
The beta function for $g_{GN}$ is
\begin{equation}
\beta_{GN} = \frac{dg_{GN}}{dt} \ ,
\label{betagndef}
\end{equation}
where $dt = d\ln\mu$. (The $\mu$ dependence of the coupling will often be
suppressed in the notation.) This beta function is \cite{gn,otherbeta}
\begin{equation}
\beta_{GN} = -\frac{g_{GN}\lambda}{2\pi} \ .
\label{betagn}
\end{equation}
The fact that this beta function is negative is an expression of the asymptotic
freedom of the theory. This beta function does not exhibit any zero away from
the origin, i.e., any infrared zero. However, since the calculation in
\cite{gn} was performed in the LN limit, this leaves open the possibility that
at finite $N$, there could be an IR zero in the beta function that would
disappear in the LN limit. We discuss this LN limit further in Section
\ref{largeN_section} below.
\section{Beta Function for General $N$}
\label{beta_section}
Although the Gross-Neveu model is not, in general, solvable away from the LN
limit, there has also been interest over the years in analyzing it for finite
$N$. In terms of the coupling $g$, the beta function of the finite-$N$ GN
model is
\begin{equation}
\beta = \frac{dg}{dt} \ ,
\label{betag}
\end{equation}
where, as before, $dt = d\ln\mu$. For our purposes, it will be convenient to
introduce a variable $a$ that includes the factor $1/(2\pi)$ resulting from
Feynman integrals in $d=2$ dimensions, namely
\begin{equation}
a = \frac{g}{2\pi} = \frac{g_{GN}^2}{2\pi} \ .
\label{adef}
\end{equation}
The model defined by the Lagrangian of Eq. (\ref{lag}) can be generalized with
the addition of further four-fermion operators \cite{gn,rossi89}. The
regularization and renormalization of the Gross-Neveu model has been carried
out in this more general context \cite{rossi89}-\cite{rossi91},
\cite{gracey2016}.
As was true of other theories, such as the nonlinear $\sigma$ model
\cite{nlsm}, one may consider this model in spacetime dimension $d > 2$. At
finite $N$, the model is not renormalizable for $d > 2$, since the Maxwellian
(free-field) dimension of a four-fermion operator is $2(d-1)$, which is larger than $d$ if
$d > 2$. As in the case of the nonlinear $\sigma$ model \cite{nlsm}, in the $N
\to \infty$ limit, one can still solve the model and study its properties.
Alternatively, for finite $N$, one can regard it as a low-energy effective
field theory. With this generalization and $d \gsim 2$, $\beta$ has the form
\begin{eqnarray}
\beta & = & g\Big [ d-2 + \sum_{\ell=1}^\infty b_\ell \, \Big ( \frac{g}{2\pi}
\Big )^\ell \ \Big ] \cr\cr
& = & 2\pi a\Big [d-2 + \sum_{\ell=1}^\infty b_\ell \, a^\ell \ \Big ] \ ,
\label{betaseries}
\end{eqnarray}
where $b_\ell a^\ell$ is the $\ell$-loop term. The $n$-loop ($n\ell$) beta
function, denoted $\beta_{n\ell}$, is obtained by the replacement of
$\ell=\infty$ by $\ell=n$ in Eq. (\ref{betaseries}). Early discussions of the
GN model for $d > 2$ include \cite{gn} and \cite{vasiliev}; for more recent
work see, e.g., \cite{gracey2016}, \cite{manashov}, and, for condensed-matter
applications, \cite{cm}, and references therein. In this paper, aside from
some comments in Section \ref{largeN_section}, we will restrict ourselves to
the Gross-Neveu model in $d=2$, where $g$ is dimensionless.
The $\ell=1$ and $\ell=2$ loop terms in $\beta$ are independent of the scheme
used for regularization and renormalization, while the terms at loop order
$\ell \ge 3$ are scheme-dependent. The beta function was calculated up to
two-loop level in \cite{wetzel85}, with the results
\begin{equation}
b_1 = -2(N-1)
\label{b1}
\end{equation}
and
\begin{equation}
b_2 = 2(N-1) \ .
\label{b2}
\end{equation}
(See also \cite{destri} for a two-loop calculation in a related Thirring
model.) The fact that $b_1$ in Eq. (\ref{b1}) is negative means that in $d=2$,
this theory is asymptotically free for any finite $N > 1$ as well as in the $N
\to \infty$ limit considered in \cite{gn}.
The three-loop coefficient, $b_3$, was
calculated in \cite{gracey90,rossi91} in the commonly used scheme with
dimensional regularization and modified minimal subtraction, denoted
$\overline{\rm MS}$ \cite{msbar}, yielding the result
\begin{equation}
b_3 = \frac{(N-1)(2N-7)}{2} \ .
\label{b3}
\end{equation}
Recently, the four-loop coefficient, $b_4$ has been calculated, again in the
$\overline{\rm MS}$ scheme, to be \cite{gracey2016}
\begin{equation}
b_4 = \frac{1}{3}(N-1) \Big [ -2N^2 - 19N + 24 - 6(11N-17)\zeta_3 \Big ] \ ,
\label{b4}
\end{equation}
where $\zeta_s = \sum_{n=1}^\infty n^{-s}$ is the Riemann zeta function.
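As a quick numerical cross-check (ours, not part of the original computation in \cite{gracey2016}), the coefficients in Eqs. (\ref{b1})--(\ref{b4}) can be evaluated directly in Python; the value of $\zeta_3$ is hard-coded to double precision:

```python
ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)

# MS-bar beta-function coefficients of the d=2 Gross-Neveu model
def b1(N): return -2.0 * (N - 1)
def b2(N): return  2.0 * (N - 1)
def b3(N): return 0.5 * (N - 1) * (2 * N - 7)
def b4(N):
    return (N - 1) / 3.0 * (-2 * N**2 - 19 * N + 24
                            - 6 * (11 * N - 17) * ZETA3)

for N in (2, 3, 4, 10):
    print(N, b1(N), b2(N), b3(N), round(b4(N), 3))
```

This confirms numerically the sign pattern discussed in the text: $b_3$ changes sign at $N=7/2$, while $b_4$ is negative for all physical $N \ge 2$.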
We comment on the dependence of the beta function coefficients on $N$. The
property that these coefficients all contain a factor of $(N-1)$ is a
consequence of the fact that for $N=1$ the GN model is equivalent to the
massless abelian Thirring model \cite{thirring}, which has an identically zero
beta function \cite{klaiber,lowenstein}. Note that this statement about the
beta function of the Thirring model is scheme-independent; if a beta function
vanishes in one scheme, then it vanishes in all other schemes reached by
acceptable (nonsingular) scheme transformations \cite{sch}. It follows that all
of the coefficients $b_\ell$ contain a factor of $(N-1)$. Therefore, it is only
necessary to analyze the beta function of the Gross-Neveu model for $N > 1$,
where it is nonvanishing, and we will thus restrict to the physical integral
values $N \ge 2$ henceforth. We next discuss how the $b_\ell$ depend on $N$ in
the relevant range $N > 1$. For this discussion, we consider $N$ to be
extended from the positive integers to the real numbers. The three-loop
coefficient $b_3$ is a monotonically increasing function of $N$ that is
negative for $N < 7/2$, vanishes for $N=7/2$, and is positive for $N > 7/2$.
Thus, for physical, integral values, $b_3 < 0$ if $N=2$ or $N=3$ and $b_3 > 0$
if $N \ge 4$. The coefficient $b_4$ is negative for large $N$ and is positive
for $N$ in the interval
\begin{equation}
N_{b4z,m} < N < N_{b4z,p} \ ,
\label{ninterval}
\end{equation}
where the subscript $b4z$ stands for ``$b_4$ zero'' and
\begin{equation}
N_{b4z,(p,m)} = \frac{-19-66\zeta_3 \pm
\sqrt{553+3324\zeta_3+4356\zeta_3^2}}{4}
\label{Nb4z}
\end{equation}
with $(p,m)$ corresponding to the $\pm$ sign. These have the values
$N_{b4z,m} = -50.616$ and $N_{b4z,p}=1.448$ to the given floating-point
accuracy. Thus, in the physical range $N \ge 2$ under consideration here, $b_4$
is negative.
We proceed to investigate the question of whether the beta function for the
Gross-Neveu model at finite $N$ exhibits evidence for an infrared
zero. We denote an IR zero of the $n$-loop beta function $\beta_{n\ell}$
as $a_{IR,n\ell}$, and the corresponding value of $g$ as $g_{IR,n\ell}=2\pi
a_{IR,n\ell}$. This IR zero of beta is a zero for positive $a$ closest to the
origin (if there is such a zero), which one would thus reach as $\mu$
decreases from the deep UV at large $\mu$ to the IR at small $\mu$ and $a$
increases from 0. At the two-loop level, $\beta_{2\ell}$ has an IR zero at
\begin{equation}
a_{IR,2\ell} = -\frac{b_1}{b_2} = 1 \ ,
\label{air_2loop}
\end{equation}
i.e., $g_{IR,2\ell}=2\pi$. Note that this value is independent of $N$. To
judge whether this constitutes convincing evidence of an IR zero in the
beta function, it is necessary to determine if higher-loop calculations confirm
it. We next carry out this task.
At the three-loop level, the condition that $\beta_{3\ell}=0$ away from the
origin is the quadratic equation $b_1+b_2a+b_3 a^2=0$. This has two solutions,
\begin{equation}
a=\frac{2[-1 \pm \sqrt{2(N-3)} \ ]}{2N-7} \ .
\label{asol3loop}
\end{equation}
If $N < 3$, then these solutions are complex and
hence unphysical. If $N=3$, these roots coincide, so that $a_{IR,3\ell}=2$,
i.e., $g_{IR,3\ell}=4\pi$. For $N \ge 3$, there is only one physical root,
namely
\begin{equation}
a_{IR,3\ell} = \frac{2[-1+\sqrt{2(N-3)} \ ]}{2N-7} \ .
\label{air_3loop}
\end{equation}
However, this is not, in general, close to the two-loop zero of the beta
function at $a_{IR,2\ell}=1$. Furthermore, while $a_{IR,2\ell}=1$ is
independent of $N$, $a_{IR,3\ell}$ has a completely different behavior as a
function of $N$; it decreases monotonically with $N$ in the
interval $N \ge 3$ over which it is physical and approaches zero asymptotically
like
\begin{equation}
a_{IR,3\ell} \sim \sqrt{\frac{2}{N}} - \frac{1}{N} +
O \Big ( \frac{1}{N^{3/2}} \Big ) \quad {\rm as} \ N \to \infty \ .
\label{air_3loop_largeN}
\end{equation}
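The physical root (\ref{air_3loop}) and its asymptotic form (\ref{air_3loop_largeN}) are easy to check numerically; the following Python sketch (our illustration) compares the two for several values of $N$:

```python
from math import sqrt

def a_ir_3loop(N):
    # Physical root of b1 + b2*a + b3*a^2 = 0, Eq. (air_3loop)
    return 2.0 * (-1.0 + sqrt(2.0 * (N - 3))) / (2.0 * N - 7.0)

def a_ir_3loop_asym(N):
    # Leading large-N behavior, Eq. (air_3loop_largeN)
    return sqrt(2.0 / N) - 1.0 / N

for N in (10, 100, 10**6):
    print(N, a_ir_3loop(N), a_ir_3loop_asym(N))
```

At $N=10$ the exact root is $\approx 0.422$, and by $N=10^6$ the asymptotic form agrees with it to better than one part in $10^4$.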
At the four-loop level, the condition that $\beta_{4\ell}=0$ away from the
origin is the cubic equation
\begin{equation}
b_1+b_2a+b_3 a^2 + b_4 a^3=0 \ .
\label{acubic}
\end{equation}
The nature of the roots of this equation is determined by the discriminant,
\begin{equation}
\Delta_3 = b_2^2b_3^2-27b_1^2b_4^2 - 4(b_1b_3^3+b_4b_2^3) + 18b_1b_2b_3b_4 \ .
\label{disc3}
\end{equation}
This discriminant is negative for the relevant range $N \ge 2$ (indeed, it is
negative for all real $N$). This implies that Eq. (\ref{acubic}) has one real
root and a pair of complex-conjugate roots. The real root is negative and
hence is unphysical, since it violates the positivity requirement
(\ref{gpos}). Moreover, since it is negative, it is clearly incompatible with
the values of $a_{IR,2\ell}$ and $a_{IR,3\ell}$, which are positive (discarding
the unphysical complex value of $a_{IR,3\ell}$ at $N=2$). We therefore do not
label this root as $a_{IR,4\ell}$, but instead as $a_{rt,4\ell}$, where $rt$
stands simply for the real root of Eq. (\ref{acubic}). We find that the
magnitude of $a_{rt,4\ell}$ decreases toward zero monotonically as $N$
increases in the relevant interval $N \ge 2$, with the asymptotic behavior
\begin{equation}
a_{rt,4\ell} \sim -\frac{3^{1/3}}{N^{2/3}} + \frac{1}{2N} +
O \Big ( \frac{1}{N^{4/3}} \Big ) \quad {\rm as} \ N \to \infty \ .
\label{art_largeN}
\end{equation}
We list the values of $a_{IR,2\ell}$, $a_{IR,3\ell}$, and
$a_{rt,4\ell}$ in Table \ref{air_nloop_values} for $N$ from 2 to 10 and for
three representative larger values, $N=100$, 300, and $10^3$.
\begin{table}
\caption{\footnotesize{Values of $a_{IR,2\ell}$, $a_{IR,3\ell}$, and
$a_{rt,4\ell}$ for the beta function of the Gross-Neveu model, as a
function of $N$. Here, the three-loop and four-loop coefficients $b_3$ and
$b_4$ are calculated in the $\overline{\rm MS}$ scheme. If $N=2$, then
the zeros of $\beta_{3\ell}$ at nonzero $a$ form an unphysical complex (cmplx)
pair. As indicated, all of the values of $a_{rt,4\ell}$ are negative and hence
unphysical. See text for further details.}}
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline\hline
$N$ & $a_{IR,2\ell}$ & $a_{IR,3\ell}$ & $a_{rt,4\ell}$ \\
\hline
2 & 1 & cmplx & $-0.573$ \\
3 & 1 & 2.000 & $-0.370$ \\
4 & 1 & 0.828 & $-0.302$ \\
5 & 1 & 0.667 & $-0.264$ \\
6 & 1 & 0.580 & $-0.239$ \\
7 & 1 & 0.522 & $-0.220$ \\
8 & 1 & 0.481 & $-0.205$ \\
9 & 1 & 0.448 & $-0.194$ \\
10 & 1 & 0.422 & $-0.184$ \\
100 & 1 & 0.134 & $-0.0567$ \\
300 & 1 & 0.0788 & $-0.0295$ \\
$10^3$ & 1 & 0.0438 & $-0.0138$ \\
\hline\hline
\end{tabular}
\end{center}
\label{air_nloop_values}
\end{table}
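The root structure of Eq. (\ref{acubic}) can be verified directly: for example, at $N=10$ the discriminant (\ref{disc3}) is negative, and \texttt{numpy} returns one real, negative root together with a complex-conjugate pair, reproducing the corresponding entry of Table \ref{air_nloop_values}. The following sketch is our own check, with the $\overline{\rm MS}$ coefficients hard-coded:

```python
import numpy as np

ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)
N = 10

# MS-bar beta-function coefficients of the d=2 Gross-Neveu model
b1 = -2.0 * (N - 1)
b2 =  2.0 * (N - 1)
b3 = 0.5 * (N - 1) * (2 * N - 7)
b4 = (N - 1) / 3.0 * (-2 * N**2 - 19 * N + 24 - 6 * (11 * N - 17) * ZETA3)

# Cubic discriminant, Eq. (disc3); negative => one real root + complex pair
disc = (b2**2 * b3**2 - 27 * b1**2 * b4**2
        - 4 * (b1 * b3**3 + b4 * b2**3) + 18 * b1 * b2 * b3 * b4)

# Zeros of beta away from the origin: b1 + b2*a + b3*a^2 + b4*a^3 = 0
roots = np.roots([b4, b3, b2, b1])           # numpy wants highest power first
real = [r.real for r in roots if abs(r.imag) < 1e-9]
print(disc < 0, real)                        # real root ~ -0.184 for N = 10
```

Repeating this for the other values of $N$ in Table \ref{air_nloop_values} reproduces the $a_{rt,4\ell}$ column.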
In our discussion above, we had stated that in order to judge whether the
result for $a_{IR,2\ell}$ constitutes convincing evidence of an IR zero in the
beta function, it is necessary to determine if higher-loop calculations confirm
it. A necessary condition for the reliability of a perturbative calculation is
that if one calculates some quantity to a given loop order, then there should
not be a large fractional change in this quantity if one computes it to one
higher order in the loop expansion. This condition applies, in particular, to
the calculation of a putative zero of the beta function. Quantitatively, in
order for the perturbative calculation of the IR zero of a beta function to be
reliable, it is necessary that the fractional difference
\begin{equation}
\frac{|a_{IR,(n-1)\ell} - a_{IR,n\ell}|}
{\frac{1}{2}[a_{IR,(n-1)\ell}+ a_{IR,n\ell}]}
\label{fracdif}
\end{equation}
should be reasonably small and should tend to decrease with increasing loop
order, $n$. As is evident both from our analytic formulas and from the
numerical results listed in Table \ref{air_nloop_values}, this
necessary condition is not satisfied in the present case.
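As a concrete numerical illustration (ours) of how badly the criterion (\ref{fracdif}) fails here, the fractional difference between the two- and three-loop zeros at $N=10$ is already of order unity:

```python
from math import sqrt

def frac_diff(x, y):
    # Fractional difference criterion of Eq. (fracdif)
    return abs(x - y) / (0.5 * (x + y))

N = 10
a2 = 1.0                                                   # a_{IR,2l}, N-independent
a3 = 2.0 * (-1.0 + sqrt(2.0 * (N - 3))) / (2.0 * N - 7.0)  # a_{IR,3l}

print(round(frac_diff(a2, a3), 3))                         # ~0.813, order unity
```

Since $a_{IR,3\ell}$ decreases toward zero with increasing $N$ while $a_{IR,2\ell}=1$ is fixed, this fractional difference only grows at larger $N$.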
The reason for this is clear from a plot of the beta functions $\beta_{n\ell}$
at loop orders $n=2$, $n=3$, and $n=4$. This shows that the IR zero in the
two-loop beta function occurs at a value of $a$ that is too large for the
perturbative calculation to be reliable. In Figs. \ref{beta_N3} and
\ref{beta_N10} we plot the two-loop, three-loop, and four-loop beta functions
for the Gross-Neveu model as functions of $a$ for two illustrative values of
$N$, namely $N=3$ and $N=10$. As is evident from these plots, the beta
function does not satisfy the necessary criterion for the reliability of a
calculation of an IR zero. For the IR zero of the two-loop beta function at
$a_{IR,2\ell}=1$ to be reliable, one requires that the curves for the
three-loop and four-loop beta functions should agree approximately with the
curve for the two-loop beta function for $a \simeq 1$, and that these
higher-loop beta functions should thus have respective IR zeros that are close
to the two-loop zero at $a_{IR,2\ell}=1$. But this is not the case; for $N=3$,
$\beta_{3\ell}$ has a double zero at the larger value, $a_{IR,3\ell}=2$ and
then goes negative again, while $\beta_{4\ell}$ has no IR zero in the physical
region, $a > 0$. For $N=10$ the three-loop beta function $\beta_{3\ell}$
vanishes at a smaller value of $a$ than $a=1$ (and this value, $a_{IR,3\ell}$
decreases as $N$ increases), while the four-loop beta function $\beta_{4\ell}$
again has no IR zero in the physical region, $a > 0$. The behavior illustrated
for $N=10$ is generic for other values of $N \ge 4$. Indeed, the curves for
these beta functions at loop order $n=2, \ 3, \ 4$ only agree with each
other close to the origin, and deviate strongly from each other before one gets
to values of $a$ where a zero occurs. Specifically, for $N=3$, $\beta_{2\ell}$
and $\beta_{3\ell}$ only agree with each other for $a$ up to about 0.5, while
$\beta_{4\ell}$ deviates from these lower-loop beta functions as $a$ increases
beyond approximately 0.2. As $N$ increases, these deviations occur for smaller
$a$. Thus, for $N=10$, $\beta_{2\ell}$ and $\beta_{3\ell}$ only agree with
each other for $a$ up to roughly 0.15, while $\beta_{4\ell}$ deviates from
these lower-loop beta functions as $a$ increases beyond about 0.08.
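These statements can be checked numerically. The following Python sketch (an illustrative cross-check, not part of the original analysis; it assumes NumPy and uses only the coefficient ratios $b_2/b_1=-1$, $b_3/b_1=-(2N-7)/4$, and $b_4/b_1=[2N^2+19N-24+6(11N-17)\zeta_3]/6$, which follow from the explicit formulas quoted below in Eqs. (\ref{pade11}), (\ref{j_sr2}), and (\ref{p21pole})) locates the real zeros of the reduced $n$-loop beta function $\beta_{rd,n\ell}=\beta_{n\ell}/(2\pi b_1 a^2)$, whose zeros away from the origin coincide with those of $\beta_{n\ell}$:

```python
import numpy as np

ZETA3 = 1.2020569031595943  # Riemann zeta(3)

def coeff_ratios(N):
    # Ratios b_ell/b_1 read off from the explicit formulas in the text.
    r2 = -1.0
    r3 = -(2 * N - 7) / 4.0
    r4 = (2 * N**2 + 19 * N - 24 + 6 * (11 * N - 17) * ZETA3) / 6.0
    return r2, r3, r4

def reduced_beta_real_roots(N, nloop):
    # Real roots of beta_{rd,n-loop} = 1 + sum_{l=2}^{n} (b_l/b_1) a^{l-1};
    # np.roots expects the highest-degree coefficient first.
    r2, r3, r4 = coeff_ratios(N)
    coeffs = {2: [r2, 1.0], 3: [r3, r2, 1.0], 4: [r4, r3, r2, 1.0]}[nloop]
    return sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-6)
```

This reproduces the two-loop zero at $a=1$, the double three-loop zero at $a=2$ for $N=3$, the three-loop zero at $a \approx 0.42 < 1$ for $N=10$, and the absence of any positive four-loop root, in line with Figs. \ref{beta_N3} and \ref{beta_N10}.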
\begin{figure}
\begin{center}
\includegraphics[height=8cm,width=6cm]{betagn_N3.ps}
\end{center}
\caption{\footnotesize{Plot of the $n$-loop $\beta$ function
$\beta_{a,n\ell}$ of the Gross-Neveu model as a function of $a$ for $N=3$ and
(i) $n=2$ (red), (ii) $n=3$ (green), and (iii) $n=4$ (blue) (colors in
online version). At $a=0.16$, going from bottom to top, the curves are
$\beta_{4\ell}$, $\beta_{2\ell}$, and $\beta_{3\ell}$.}}
\label{beta_N3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=8cm,width=6cm]{betagn_N10.ps}
\end{center}
\caption{\footnotesize{Plot of the $n$-loop $\beta$ function
$\beta_{a,n\ell}$ of the Gross-Neveu model as a function of $a$ for $N=10$ and
(i) $n=2$ (red), (ii) $n=3$ (green), and (iii) $n=4$ (blue) (colors in
online version). At $a=0.2$, going from bottom to top, the curves are
$\beta_{4\ell}$, $\beta_{2\ell}$, and $\beta_{3\ell}$.}}
\label{beta_N10}
\end{figure}
These results are similar to what was found in a search for a UV zero in the
beta function of an IR-free theory, namely the O($N$) $\lambda|{\vec \phi}|^4$
scalar field theory in $d=4$ spacetime dimensions \cite{lam}. In that theory,
although the two-loop beta function exhibits a UV zero, higher-loop
calculations up to five-loop order for general $N$ and up to six-loop order for
$N=1$ do not confirm the two-loop result, and the reason was found to be that
the two-loop UV zero occurs at too large a value of the quartic coupling for
the two-loop perturbative calculation to be applicable and reliable.
\section{Analysis with Pad\'e Approximants}
\label{pade_section}
In this section we carry out a further investigation of a possible IR fixed
point in the renormalization-group flow for the Gross-Neveu model by
calculating and analyzing Pad\'e approximants (PAs) to the beta function at
three-loop and four-loop level. Since we are interested in a possible zero of
the beta function away from the origin, it will be convenient to deal with a
reduced ($rd$) beta function,
\begin{equation}
\beta_{rd} \equiv \frac{\beta}{2\pi b_1 a^2} =
1 + \frac{1}{b_1} \sum_{\ell=2}^\infty
b_\ell a^{\ell-1} \ .
\label{betareduced}
\end{equation}
The $n$-loop reduced beta function with $n \ge 2$, denoted $\beta_{rd,n\ell}$,
is obtained from Eq. (\ref{betareduced}) by replacing $\ell=\infty$ by $\ell=n$
as the upper limit in the summand. This $n$-loop reduced beta function is thus
a polynomial of degree $n-1$ in $a$. The $[p,q]$ Pad\'e approximant to this
polynomial is the rational function
\begin{equation}
[p,q]_{\beta_{rd,n\ell}} =
\frac{1+\sum_{j=1}^p \, n_j a^j}{1+\sum_{k=1}^q d_k \, a^k}
\label{pqx}
\end{equation}
with
\begin{equation}
p+q=n-1 \ ,
\label{pqn}
\end{equation}
where the $n_j$ and $d_k$ are $a$-independent coefficients of the respective
polynomials in the numerator and denominator of
$[p,q]_{\beta_{rd,n\ell}}$. (Our notation follows \cite{smpade}.)
Hence, at a given $n$-loop order, there are $n$
Pad\'e approximants that one can calculate, namely
\begin{equation}
\{ \ [n-k,k-1]_{\beta_{rd,n\ell}} \ \} \quad {\rm with} \
1 \le k \le n \ .
\label{padeset}
\end{equation}
These provide rational-function approximations to the series expansion of
$\beta_{rd,n\ell}$ that match this series to loop order $n$. As in our
earlier work, e.g., \cite{bvh,flir}, these provide an
alternate approach to investigating zeros of a beta function.
We shall label one of the $p$ zeros of a
$[p,q]_{\beta_{rd,n\ell}}$ Pad\'e
approximant as $[p,q]_{zero}$ and one of the $q$ poles of this
approximant as $[p,q]_{pole}$; in each case, the value of $n$ is given by
Eq. (\ref{pqn}) as $n=p+q+1$. At the $n$-loop level, the Pad\'e approximant
$[n-1,0]_{\beta_{rd,n\ell}}$ is equal to the reduced $n$-loop beta function
$\beta_{rd,n\ell}$ itself, which we have already analyzed in the previous
section, and the PA $[0,n-1]_{\beta_{rd,n\ell}}$ has no zeros, and hence is not
useful for our study. Hence, at the $n$-loop level, we focus on the $n-2$
PAs
$[p,q]_{\beta_{rd,n\ell}}$ with $[p,q]=[n-k,k-1]$ having $2 \le k \le n-1$.
At the $n=3$ loop level, we thus consider the $[1,1]_{\beta_{rd,3\ell}}$
Pad\'e approximant. This is
\begin{equation}
[1,1]_{\beta_{rd,3\ell}}=\frac{1+\Big (\frac{b_2}{b_1}-\frac{b_3}{b_2} \Big )a}
{1-\Big ( \frac{b_3}{b_2}\Big ) a} = \frac{1- \Big ( \frac{2N-3}{4} \Big )a}
{1- \Big ( \frac{2N-7}{4} \Big )a} \ ,
\label{pade11}
\end{equation}
where the coefficients $b_1$, $b_2$, and $b_3$ were given in
Eqs. (\ref{b1})-(\ref{b3}) above. This [1,1] PA has a zero at
\begin{equation}
[1,1]_{zero} = \frac{4}{2N-3}
\label{p11zero}
\end{equation}
and a pole at
\begin{equation}
[1,1]_{pole} = \frac{4}{2N-7} \ .
\label{p11pole}
\end{equation}
The pole at $a=[1,1]_{pole}$ is not relevant, since for $N = 2$ and 3 it has the
respective negative and hence unphysical values $-4/3$ and $-4$, while for
$N \ge 4$, it lies farther from the origin than
the zero. This is clear from the fact that the difference
\begin{equation}
[1,1]_{pole} -[1,1]_{zero} = \frac{16}{(2N-3)(2N-7)}
\label{p11polezerodif}
\end{equation}
is positive for this range $N \ge 4$. Since the $[1,1]_{pole}$ lies farther
from the origin than $[1,1]_{zero}$, the coupling $a=a(\mu)$ never reaches the
pole as $\mu$ decreases from large values in the UV to $\mu=0$ and thus
$a(\mu)$ increases from 0 to $[1,1]_{zero}$. We list the values of the zero of
the $[1,1]_{\beta_{rd,3\ell}}$ Pad\'e approximant in Table \ref{pades}. For $N
\ge 3$, the value of $a=[1,1]_{zero}$ is smaller than $a_{IR,3\ell}$ and
decreases more rapidly to zero as $N \to \infty$ than $a_{IR,3\ell}$. If
$N=2$, the comparison cannot be made, since $a_{IR,3\ell}$ is complex. Thus,
this analysis of the [1,1] Pad\'e approximant to the reduced three-loop beta
function, $\beta_{rd,3\ell}$ yields further evidence against a (reliably
calculable) IR zero in the beta function up to the three-loop level.
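As a quick arithmetic check (illustrative, not part of the original text), the second column of Table \ref{pades} follows directly from Eq. (\ref{p11zero}), and the ordering of the pole and zero for $N \ge 4$ can be confirmed exactly with rational arithmetic:

```python
from fractions import Fraction

def pade11_zero(N):
    # Eq. (p11zero): zero of the [1,1] Pade approximant at a = 4/(2N-3).
    return Fraction(4, 2 * N - 3)

def pade11_pole(N):
    # Eq. (p11pole): pole of the [1,1] Pade approximant at a = 4/(2N-7).
    return Fraction(4, 2 * N - 7)
```

For every $N \ge 4$ one finds $[1,1]_{pole} > [1,1]_{zero} > 0$, so the IR flow reaches the zero before the pole, as stated above.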
At the $n=4$ loop level, there are two Pad\'e approximants to analyze,
namely $[2,1]_{\beta_{rd,4\ell}}$ and $[1,2]_{\beta_{rd,4\ell}}$. We calculate
\begin{equation}
[2,1]_{\beta_{rd,4\ell}}=\frac{1+\Big ( \frac{b_2}{b_1}-\frac{b_4}{b_3}\Big )a
+ \Big ( \frac{b_3}{b_1}-\frac{b_2b_4}{b_1b_3} \Big )a^2}
{1- \frac{b_4}{b_3}a} \ ,
\label{pade21}
\end{equation}
where the coefficients $b_n$ were given in Eqs. (\ref{b1})-(\ref{b4}). The
zeros of the numerator occur at $a=[2,1]_{zero,(i,ii)}$, where
\begin{eqnarray}
& & [2,1]_{zero,(i,ii)} = \cr\cr
& & \frac{b_2b_3-b_1b_4 \pm
\Big [ b_1^2b_4^2+b_2^2b_3^2 -4b_1b_3^3+2b_1b_2b_3b_4 \Big ]^{1/2}}
{2(b_2b_4-b_3^2)} \ . \cr\cr
& &
\label{pade21zeros}
\end{eqnarray}
The subscripts $i$ and $ii$ correspond to the $\pm$ sign in front of the
square root. It is straightforward to substitute the explicit expressions for
the coefficients $b_2$, $b_3$, and $b_4$ in Eq. (\ref{pade21zeros}), but the
resultant expressions for these quadratic roots in terms of the explicit
coefficients $b_n$, $1 \le n \le 4$ are somewhat lengthy, so we do not display
them. The pole of the $[2,1]_{\beta_{rd,4\ell}}$ PA occurs at $a=
[2,1]_{pole}$, where
\begin{eqnarray}
& & [2,1]_{pole} = \frac{b_3}{b_4} \cr\cr
& = & -\frac{3(2N-7)}{2[2N^2+19N-24+6(11N-17)\zeta_3]} \ .
\label{p21pole}
\end{eqnarray}
If one has a series expansion of a function that contains $n_{zero}$ zeros
and $n_{pole}$ poles, and one calculates $[r,s]$ Pad\'e approximants to this
series with $r > n_{zero}$ and $s > n_{pole}$, the approximants typically
exhibit sets of nearly coincident zero-pole pairs in addition to
fitting the actual zeros and poles of the function
(e.g., see \cite{smpade,flir}). These nearly coincident zero-pole pairs may
thus be ignored. This happens in the present case. For example, for $N=3$,
the $[2,1]_{\beta_{rd,4\ell}}$ PA has a zero at $a=0.99773$, a zero at
$a=0.009015$ and a pole at $a=0.009015$, and similarly for other values of
$N$. In Table \ref{pades} we list the first zero, denoted $[2,1]_{zero,i}$,
as a function of $N$.
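Since only the ratios $b_\ell/b_1$ enter, the zeros and pole of this approximant can be evaluated without knowing the overall normalization $b_1$. The following Python sketch (illustrative; assumes NumPy, with $b_2/b_1=-1$ and $b_3/b_1=-(2N-7)/4$ read off from Eqs. (\ref{pade11}) and (\ref{j_sr2}), and $b_4/b_3$ from Eq. (\ref{p21pole})) reproduces the $N=3$ values quoted above, including the nearly coincident zero-pole pair:

```python
import numpy as np

ZETA3 = 1.2020569031595943

def pade21_zeros_and_pole(N):
    # Ratios r_l = b_l/b_1 from the explicit formulas in the text.
    r2 = -1.0
    r3 = -(2 * N - 7) / 4.0
    r4 = (2 * N**2 + 19 * N - 24 + 6 * (11 * N - 17) * ZETA3) / 6.0
    # Numerator of the [2,1] approximant:
    #   1 + (r2 - r4/r3) a + (r3 - r2*r4/r3) a^2
    zeros = sorted(np.roots([r3 - r2 * r4 / r3, r2 - r4 / r3, 1.0]).real)
    pole = r3 / r4  # a = b_3/b_4
    return zeros, pole
```

For $N=10$ the larger zero comes out near $0.943$, matching the tabulated $[2,1]_{zero,i}$.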
\begin{table}
\caption{\footnotesize{Values of $[1,1]_{zero}$ from [1,1] Pad\'e approximant
to the reduced three-loop beta function, $\beta_{rd,3\ell}$, and
$[2,1]_{zero,i}$ from the [2,1] Pad\'e approximant to the reduced four-loop
beta function, $\beta_{rd,4\ell}$. See text for further details. }}
\begin{center}
\begin{tabular}{|c|c|c|} \hline\hline
$N$ & $[1,1]_{zero}$ & $[2,1]_{zero,i}$ \\
\hline
2 & 4.000 & 0.940 \\
3 & 1.333 & 0.998 \\
4 & 0.800 & 0.999 \\
5 & 0.571 & 0.992 \\
6 & 0.444 & 0.982 \\
7 & 0.364 & 0.9725 \\
8 & 0.308 & 0.963 \\
9 & 0.267 & 0.953 \\
10 & 0.235 & 0.943 \\
100 & 0.0203 & 0.683 \\
300 & 0.00670 & 0.615 \\
$10^3$& 0.00200 & 0.585 \\
\hline\hline
\end{tabular}
\end{center}
\label{pades}
\end{table}
We calculate the $[1,2]_{\beta_{rd,4\ell}}$ Pad\'e approximant to be
\begin{equation}
[1,2]_{\beta_{rd,4\ell}}=\frac{1+
\Big [\frac{b_1^2b_4+b_2^3-2b_1b_2b_3}{b_1(b_2^2-b_1b_3)} \Big ]a}
{1 + \Big ( \frac{b_1b_4-b_2b_3}{b_2^2-b_1b_3} \Big ) a
+ \Big ( \frac{b_3^2-b_2b_4}{b_2^2-b_1b_3} \Big ) a^2 } \ .
\label{pade12}
\end{equation}
The two poles of the $[1,2]_{\beta_{rd,4\ell}}$ approximant occur at
$a=[1,2]_{pole,(i,ii)}$, where
\begin{widetext}
\begin{equation}
[1,2]_{pole,(i,ii)}= \frac{b_1b_4 - b_2b_3 \pm
\Big [ b_1^2b_4^2-3b_2^2b_3^2+4b_1b_3^3+4b_2^3b_4-6b_1b_2b_3b_4 \Big ]^{1/2}}
{2(b_2b_4-b_3^2)} \ .
\label{p12poles}
\end{equation}
\end{widetext}
The zero of this approximant occurs at $a=[1,2]_{zero}$, where
\begin{eqnarray}
& & [1,2]_{zero} = \frac{b_1(b_1 b_3-b_2^2)}{b_1^2b_4+b_2^3-2b_1b_2b_3} \cr\cr
& = & -\frac{3(2N-3)}{2[2N^2+13N-9+6(11N-17)\zeta_3]} \ .
\label{p12zero}
\end{eqnarray}
Both of the poles $[1,2]_{pole,i}$ and $[1,2]_{pole,ii}$ are negative.
Furthermore, we find that this approximant has nearly coincident zero-pole
pairs, which thus can both be ignored. For example, for $N=3$, the zero occurs
at $a=-0.027540$ while one of the poles occurs at the nearly equal value,
$a=-0.027556$, and the other pole is at $a=-0.97919$. Similar results hold for
other values of $N$, i.e., the $[1,2]_{\beta_{rd,4\ell}}$ PA has a nearly
coincident zero-pole pair (at negative $a$) together with a second unphysical
pole at negative $a$.
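The same ratio-based evaluation applies to the [1,2] approximant; the following Python sketch (illustrative; assumes NumPy) reproduces the $N=3$ zero and poles quoted above from Eqs. (\ref{p12zero}) and (\ref{p12poles}):

```python
import numpy as np

ZETA3 = 1.2020569031595943

def pade12_zero_and_poles(N):
    # Ratios r_l = b_l/b_1; the normalization b_1 cancels throughout.
    r2 = -1.0
    r3 = -(2 * N - 7) / 4.0
    r4 = (2 * N**2 + 19 * N - 24 + 6 * (11 * N - 17) * ZETA3) / 6.0
    zero = (r3 - r2**2) / (r4 + r2**3 - 2 * r2 * r3)  # Eq. (p12zero)
    denom = r2**2 - r3
    # Denominator of the [1,2] approximant:
    #   1 + ((r4 - r2*r3)/denom) a + ((r3**2 - r2*r4)/denom) a^2
    poles = sorted(np.roots([(r3**2 - r2 * r4) / denom,
                             (r4 - r2 * r3) / denom, 1.0]).real)
    return zero, poles
```

Both poles come out negative, and the zero lies within about $2\times 10^{-5}$ of one pole, confirming that this pair is a Pad\'e artifact.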
As we have discussed, the four-loop beta function yields a negative real root,
in strong disagreement with the two-loop and three-loop beta functions. At
this four-loop level, the [1,2] PA does not exhibit any true zero, but only a
zero that is nearly coincident with a pole and hence can be identified as an
artifact. The [2,1] PA yields a zero, but it is at a completely different
value than the only real root of the actual four-loop beta function,
$a_{rt,4\ell}$. Thus, our analysis of the [2,1] and [1,2] Pad\'e approximants
to the four-loop (reduced) beta function yields further evidence against a
robust IR zero in this four-loop beta function.
\section{Analysis Using Scheme Transformations}
\label{scheme_section}
Since the coefficients $b_\ell$ with $\ell \ge 3$ in the beta function are
scheme-dependent, it is necessary to check that the conclusions from our
analysis of the beta function with $b_3$ and $b_4$ calculated in the
$\overline{\rm MS}$ scheme are robust with respect to scheme
transformations. To begin, we study scheme transformations that are designed to
remove higher-loop terms in the beta function. We first review some relevant
background. In \cite{sch}, formulas were derived for the coefficients
$b_\ell'$ resulting from a general scheme transformation $f(a')$ of the form
\begin{equation}
a = a'f(a') \ .
\label{aap}
\end{equation}
Since a scheme transformation has no effect in the case of a free field theory,
$f(a')$ satisfies the condition that $f(0)=1$. Expressing $f(a')$ as a power
series in $a'$, one has
\begin{equation}
f(a') = 1 + \sum_{s=1}^{s_{max}} k_s (a')^s \ ,
\label{faprime}
\end{equation}
where the $k_s$ are constants and $s_{max}$ may be finite or infinite. It
follows that the Jacobian of this transformation, $J=da/da'$, satisfies the
condition $J(0)=1$ and has the expansion
\begin{equation}
J = 1 + \sum_{s=1}^{s_{max}} (s+1)k_s(a')^s \ .
\label{j}
\end{equation}
Then in the transformed scheme, the coefficients of the three-loop and
four-loop terms in the beta function are \cite{sch}
\begin{equation}
b_3' = b_3 + k_1b_2+(k_1^2-k_2)b_1 \ ,
\label{b3prime}
\end{equation}
\begin{equation}
b_4' = b_4 + 2k_1b_3+k_1^2b_2+(-2k_1^3+4k_1k_2-2k_3)b_1 \ ,
\label{b4prime}
\end{equation}
and so forth for higher $b'_\ell$.
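Eqs. (\ref{b3prime}) and (\ref{b4prime}) can be verified by direct series expansion of $\beta(a(a'))/J$ in powers of $a'$. The following self-contained Python sketch (an illustrative check using exact rational arithmetic; no computer-algebra package is assumed) performs this expansion for $\beta(a) = b_1 a^2 + b_2 a^3 + b_3 a^4 + b_4 a^5$ (an overall factor of $2\pi$ cancels between $\beta$ and $\beta'$):

```python
from fractions import Fraction as F

DEG = 6  # keep series coefficients through (a')^5

def mul(p, q):
    # Product of two truncated power series (coefficient lists).
    out = [F(0)] * DEG
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            if i + j < DEG:
                out[i + j] += ci * cj
    return out

def inv(p):
    # Truncated reciprocal of a series with constant term 1.
    out = [F(0)] * DEG
    out[0] = F(1)
    for n in range(1, DEG):
        out[n] = -sum(p[i] * out[n - i]
                      for i in range(1, min(n, len(p) - 1) + 1))
    return out

def transformed_coeffs(b, k):
    # Return (b3', b4'): coefficients of (a')^4 and (a')^5 in beta(a(a'))/J,
    # where a = a'(1 + k1 a' + k2 a'^2 + k3 a'^3) and J = da/da'.
    b1, b2, b3, b4 = b
    k1, k2, k3 = k
    a = [F(0), F(1), k1, k2, k3, F(0)]
    J = [F(1), 2 * k1, 3 * k2, 4 * k3]
    a2 = mul(a, a)
    a3 = mul(a2, a)
    a4 = mul(a3, a)
    a5 = mul(a4, a)
    beta = [b1 * x2 + b2 * x3 + b3 * x4 + b4 * x5
            for x2, x3, x4, x5 in zip(a2, a3, a4, a5)]
    betap = mul(beta, inv(J))
    return betap[4], betap[5]
```

For arbitrary rational inputs, the expansion reproduces $b_3' = b_3 + k_1 b_2 + (k_1^2-k_2)b_1$ and $b_4' = b_4 + 2k_1 b_3 + k_1^2 b_2 + (-2k_1^3+4k_1k_2-2k_3)b_1$ exactly.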
In \cite{sch} a set of conditions was given that should be obeyed by a
nonpathological scheme transformation. Condition C$_1$ was that the scheme
transformation must map a physical (real, positive) $a$ to a real positive
$a'$, since a map that yields a negative or complex value of $a'$ would violate
the unitarity of the theory. As condition C$_2$, we required that the scheme
transformation should preserve perturbativity, and hence should not map a small
or moderate value of $a$ to an excessively large value of $a'$ or vice versa.
Condition C$_3$ stated that the Jacobian $J$ should not vanish or diverge,
since otherwise the transformation would be singular. More generally, if $J$
were to become too small or too large, it could lead to a violation of
condition C$_2$. Finally, condition C$_4$ was that if a beta function
exhibited a zero at a sufficiently small value as to be perturbatively
reliable, then a scheme transformation should not alter this property.
Ref. \cite{sch} also gave the first explicit scheme transformation to set
$b_\ell'=0$ for $\ell \ge 3$, at least in the local vicinity of the origin, but
it also showed that this does not, in general, work to remove these higher-loop
terms at a point located away from the origin, i.e., an IR zero in an
asymptotically free theory or a UV zero in an IR-free theory. The reason, as
shown in \cite{sch} and \cite{sch23}, is that if one attempts to apply such a scheme
transformation to remove these higher-loop terms at a point away from the
origin, then the transformation violates one or more of the conditions
C$_1$-C$_4$ for acceptability. As in \cite{sch23}, we denote the scheme
transformation presented in \cite{sch} (with $s_{max}=m$) that removes the
coefficients in the beta function up to loop order $\ell=m+1$, at least near
the origin, as $S_{R,m}$.
We proceed with our analysis with the $S_{R,m}$ scheme transformation.
The $S_{R,2}$ transformation has \cite{sch}
\begin{equation}
k_2 = \frac{b_3}{b_1}
\label{k2}
\end{equation}
and the $S_{R,3}$ transformation has this $k_2$ and
\begin{equation}
k_3 = \frac{b_4}{2b_1} \ .
\label{k3}
\end{equation}
We begin by determining whether the scheme transformation $S_{R,2}$ can be
applied in the relevant region of $a$ where we need to apply it to set $b_3'=0$
and thus remove the three-loop term in the beta function. Since the
(scheme-independent) two-loop value is $a_{IR,2\ell}=a_{IR,2\ell}' = 1$, the
relevant region is in the neighborhood of $a=1$. This $S_{R,2}$ transformation
is defined by Eq. (\ref{faprime}) with $s_{max}=2$ and $k_2$ given by
Eq. (\ref{k2}). If the application of this $S_{R,2}$ transformation in the
vicinity of $a=1$ were possible, then it would follow from Eq. (\ref{b4prime})
that $b_4'=b_4$. For $S_{R,2}$, Eq. (\ref{aap}) is
\begin{equation}
S_{R,2} \ \Longrightarrow \ a=a'[1+k_2 (a')^2] = a'\Big
[1+\frac{b_3}{b_1}(a')^2 \Big ] \ .
\label{aeq_sr2}
\end{equation}
Solving Eq. (\ref{aeq_sr2}) for $a'$, we obtain three roots, and we require
that at least one of these should be a physical (real, positive) value for $a$
in the relevant range of values comparable to $a_{IR,2\ell}=1$. We find that
this necessary condition, C$_1$, is not satisfied. Instead, two of the
solutions of Eq. (\ref{aeq_sr2}) for $a'$ form a complex-conjugate pair, while
the third is negative. For example, for $a=a_{IR,2\ell}=1$ and $N=4$, the
three solutions for $a'$ are $1.191 \pm 0.509i$ and $-2.383$, while for $N=10$,
the three solutions for $a'$ are $0.4125 \pm 0.450i$ and $-0.825$. The
Jacobian also exhibits pathological behavior; $J$ is given by
\begin{eqnarray}
S_{R,2} \ \Longrightarrow \ J & = & 1 + 3k_2(a')^2
= 1 + \frac{3b_3}{b_1}(a')^2 \cr\cr
& = & 1 - \frac{3(2N-7)}{4} \, (a')^2 \ .
\label{j_sr2}
\end{eqnarray}
For $a_{IR,2\ell}=a_{IR,2\ell}'=1$, $J=(25-6N)/4$, which decreases through zero
as $N$ (continued to the real numbers) increases through the value $N=25/6$,
violating condition C$_3$. It is therefore not
possible to use this scheme transformation to remove the three-loop term in the
beta function in the region of $a$ where we are trying to do this, namely the
neighborhood of the (scheme-independent) value $a=a_{IR,2\ell}=1$.
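The cubic equation (\ref{aeq_sr2}) is easily examined numerically; the following Python sketch (illustrative; assumes NumPy) confirms the root structure quoted above, namely a single negative real root together with a complex-conjugate pair at $a=1$:

```python
import numpy as np

def sr2_roots(N, a):
    # Roots a' of  a = a'[1 + (b3/b1)(a')^2],  with b3/b1 = -(2N-7)/4,
    # i.e. of  -((2N-7)/4)(a')^3 + a' - a = 0.
    return np.roots([-(2 * N - 7) / 4.0, 0.0, 1.0, -a])
```

For $a=a_{IR,2\ell}=1$ and $N=4$ or $N=10$, the only real root is negative, so condition C$_1$ fails, as stated.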
We can also investigate whether the scheme transformation $S_{R,3}$ is
physically acceptable to be applied in the relevant range of values of $a$,
namely $a=a_{IR,2\ell}=1$. This transformation is defined by
Eq. (\ref{faprime}) with $s_{max}=3$ and $k_2$ and $k_3$ given by Eqs.
(\ref{k2}) and (\ref{k3}):
\begin{eqnarray}
S_{R,3} \ \Longrightarrow \ a & = & a'[1+k_2 (a')^2+k_3(a')^3] \cr\cr
& = & a'\Big [1+\frac{b_3}{b_1}(a')^2 + \frac{b_4}{2b_1} (a')^3 \Big ] \ .
\label{aeq_sr3}
\end{eqnarray}
The Jacobian for this transformation is
\begin{eqnarray}
S_{R,3} \ \Longrightarrow \ J & = & 1 + 3k_2(a')^2 + 4k_3(a')^3 \cr\cr
& = & 1 + \frac{3b_3}{b_1}(a')^2 + \frac{2b_4}{b_1}(a')^3 \ .
\label{j_sr3}
\end{eqnarray}
With this $S_{R,3}$ scheme transformation we find that for the
relevant range of $a \simeq 1$, $J$ can deviate excessively far from unity,
violating condition C$_3$. For example, for $a=1$ and $N=10$, we find that
$J=339.8$, much larger than unity.
One can also apply the various scheme transformations that we have devised in
\cite{sch}-\cite{schi} to the beta function calculated in the
$\overline{\rm MS}$ scheme and compare the resulting value(s) of the zero(s) of
the beta function with the value(s) obtained at the three-loop and four-loop
level in the $\overline{\rm MS}$ scheme. Our general analyses in
\cite{sch}-\cite{schi} (see also \cite{graceysch})
have shown that, for moderate values of the parameters
determining these scheme transformations, the resultant values of the zero(s)
are similar to those obtained in the original $\overline{\rm MS}$ scheme.
In particular, the negative, unphysical value of $a_{rt,4\ell}$ will still be
present in the transformed scheme.
Summarizing this section, we have shown that our conclusion, that the
beta function of the finite-$N$ Gross-Neveu model, calculated up to four-loop
order, does not exhibit an IR zero, is robust with respect to scheme
transformations.
\section{Comparison with Results in the LN Limit and Behavior for $d > 2$}
\label{largeN_section}
In this section we discuss how the conventional perturbative beta function
reduces in the LN limit, and we also comment on some properties of the theory
for spacetime dimension $d > 2$. From Eq. (\ref{lambda}), the quantity
that remains finite and nonzero in the LN limit is $\lambda = gN$, and hence
the corresponding beta function that is finite in this limit is
\begin{equation}
\beta_\lambda = \frac{d\lambda}{dt} = \lim_{LN} N \frac{dg}{dt}
= \lim_{LN} N \beta \ .
\label{betalambda}
\end{equation}
With the limit $N \to \infty$ having been taken, $\beta_\lambda$ has the
series expansion, for $d \gsim 2$, with $\epsilon_d = d-2$,
\begin{equation}
\beta_\lambda = \lambda \Big [ \epsilon_d + \sum_{\ell=1}^\infty \hat b_\ell
\xi^\ell \Big ] \ ,
\label{betalambdaseries}
\end{equation}
where
\begin{equation}
\xi = \lim_{LN} Na = \frac{\lambda}{2\pi}
\label{xi}
\end{equation}
and
\begin{equation}
\hat b_\ell = \lim_{LN} \frac{b_\ell}{N^\ell} \ .
\label{bellhat}
\end{equation}
Here we have used the fact that $b_\ell a^\ell = \hat b_\ell \xi^\ell$. We
find
\begin{equation}
\hat b_1 = -2
\label{b1hat}
\end{equation}
and
\begin{equation}
\hat b_\ell = 0 \quad {\rm for} \ \ell \ge 2 \ .
\label{b234hat}
\end{equation}
The latter result follows from the fact that the structure of the bubble graphs
in the calculation of $b_\ell$ in, e.g., the $\overline{\rm MS}$ scheme, means
that, for $\ell \ge 2$, $b_\ell$ is a polynomial in $N$ of degree $\ell-1$.
Although the $b_\ell$ with $\ell \ge 3$ are scheme-dependent, this property is
maintained by scheme transformations that are finite in the LN limit
\cite{sch}. Hence, for $\ell \ge 2$, $\lim_{LN} b_\ell/N^\ell = 0$, which is
the result given in Eq. (\ref{b234hat}). Similarly, although $\hat b_\ell$ with
$\ell \ge 3$ are, in general, scheme-dependent, if they are zero in one scheme,
such as the $\overline{\rm MS}$ scheme, then they are also zero in any other
scheme reached by a scheme transformation function that is finite in the LN
limit \cite{sch}. It follows that in the LN limit, with
$d=2+\epsilon \gsim 2$ (where $\epsilon \equiv \epsilon_d = d-2$),
\begin{equation}
\beta_\lambda = \lambda [ \epsilon - 2\xi ] = \lambda \Big [ \epsilon -
\frac{\lambda}{\pi} \Big ] \ .
\label{betalambdaLN}
\end{equation}
Hence,
\begin{equation}
d=2 \ \Longrightarrow \ \beta_\lambda = -\frac{\lambda^2}{\pi} \ ,
\label{betalambda_d2}
\end{equation}
with only the UV zero in this beta function at $\lambda=0$, and no IR zero. We
can relate this to the beta function that was calculated in \cite{gn} in the LN
limit. From Eqs. (\ref{ggn}) and (\ref{betag}), we have
\begin{equation}
\beta = \frac{dg}{dt} = 2g_{GN} \, \frac{dg_{GN}}{dt} = 2g_{GN}\beta_{GN} \ .
\label{betabetagn}
\end{equation}
Explicitly, in the LN limit, from Eqs. (\ref{betalambda_d2}) and
(\ref{ggn}),
\begin{equation}
\beta_\lambda = -\frac{\lambda^2}{\pi} = -\lim_{LN} \frac{g_{GN}^4 N^2}{\pi} \ .
\label{betarel}
\end{equation}
Combining Eqs. (\ref{betalambda}), (\ref{betabetagn}), and (\ref{betarel})
yields $\beta_{GN} = -g_{GN}^3 N/(2\pi) = -g_{GN}\lambda/(2\pi)$, in agreement
with Eq. (\ref{betagn}) above, or equivalently, Eq. (3.7) in Ref. \cite{gn}.
This agreement was guaranteed,
since the LN limit is a special limit of the result for finite $N$.
Accordingly, our finding that there is no robust evidence for
an IR zero in the finite-$N$ beta function of the ($d=2$) Gross-Neveu model is,
{\it a fortiori}, in agreement with the fact that in the LN limit, the beta
function $\beta_\lambda$ in Eq. (\ref{betalambda_d2}) (equivalently,
$\beta_{GN}$ in Eq. (\ref{betagn}) above), does not exhibit an IR zero.
If $d > 2$, then for small $\lambda$, the GN theory is IR-free, with an IR zero
of $\beta_\lambda$ at the origin, $\lambda=0$, and a UV zero of $\beta_\lambda$
at
\begin{equation}
\lambda_{UV}= \pi \epsilon \quad {\rm for} \ d \gsim 2,
\quad {\rm LN \ limit} \ ,
\end{equation}
which is a UV fixed point of the renormalization group. This is closely
analogous to the result found from an exact solution of the O($N$) nonlinear
$\sigma$ model (NL$\sigma$ M) in $d=2+\epsilon$ dimensions in the $N \to
\infty$ limit \cite{nlsm}. In that theory, denoting the analogous finite
coupling in this limit as
\begin{equation}
x = \lim_{N \to \infty} N \lambda_{NL\sigma M} \ ,
\label{x}
\end{equation}
the exact solution yielded the beta function, for $d \gsim 2$,
\begin{equation}
\beta_x = \frac{dx}{dt} = x\Big [ \epsilon - \frac{x}{2\pi} \Big ] \ .
\label{betax}
\end{equation}
Thus, this nonlinear sigma model is, like the GN model in $d \gsim 2$, IR-free
with a UV fixed point at
\begin{equation}
x_{UV} = 2\pi \epsilon \ .
\label{xuv}
\end{equation}
\section{Conclusions}
\label{conc}
The Gross-Neveu model in $d=2$ spacetime dimensions has long been of value as
an asymptotically free theory which is exactly solvable in the LN limit and, in
that limit, exhibits nonperturbative fermion mass generation and associated
dynamical chiral symmetry breaking. In this paper we have considered the
finite-$N$ Gross-Neveu model. We have addressed and answered a fundamental
question about the UV to IR evolution of this model, as embodied in the beta
function, namely whether this beta function exhibits evidence for an IR
zero. For the purpose of our study, we have analyzed the beta function to the
highest-loop order to which it has been calculated, namely the four-loop
order. Our study used a combination of three methods, namely a direct analysis
of the three-loop and four-loop beta functions, a study of Pad\'e approximants,
and a study of the effect of scheme transformations. We find that in the range
of coupling where the perturbative calculation of the four-loop beta function
is reliable, it does not exhibit robust evidence for an infrared zero.
\begin{acknowledgments}
This research was supported in part by the Danish National
Research Foundation grant DNRF90 to CP$^3$-Origins at SDU (T.A.R.) and
by the U.S. NSF Grant NSF-PHY-16-1620628 (R.S.).
\end{acknowledgments}
\subsection{Comparing Hundreds of Models}
\label{sxn:all_cv_models}
We have performed a large-scale analysis of hundreds of publicly-available models.
This broader analysis covers a much larger set of CV and NLP models, with a more diverse set of architectures, developed for a wider range of tasks; it complements the previous, more detailed analysis of CV and NLP models, in which we analyzed only a single architecture series at a time.
See the Supplementary Information
(and our publicly-available repo)
for details.
To quantify the relationship between quality metrics and the reported test error and/or accuracy metrics, we use ordinary least squares
to regress the metrics on the Top1 (and Top5) reported errors (as dependent variables), and we report the RMSE, the $R^2$ (R2) regression metric, and the Kendall-$\tau$ rank correlation metric.
The reported metrics include Top5 errors for the ImageNet-1K models, percent error for the CIFAR-10/100, SVHN, and CUB-200-2011 models, and Pixel accuracy (Pix.Acc.) and Intersection-Over-Union (IOU) for other models.
We regress them individually on each of the norm-based and PL-based metrics.
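A minimal Python sketch of one such per-metric regression is given below (illustrative only; the released Jupyter notebooks are the authoritative implementation, the function name is ours, and the inputs are placeholders for a column of per-model metric values and the corresponding reported Top1 errors; assumes NumPy and SciPy):

```python
import numpy as np
from scipy import stats

def regression_summary(metric, top1_error):
    # OLS fit of reported Top1 error against a single quality metric;
    # returns (RMSE, R2, Kendall-tau), the three quantities tabulated.
    x = np.asarray(metric, dtype=float)
    y = np.asarray(top1_error, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)
    resid = y - (slope * x + intercept)
    rmse = float(np.sqrt(np.mean(resid**2)))
    r2 = 1.0 - float(np.sum(resid**2)) / float(np.sum((y - y.mean())**2))
    tau, _ = stats.kendalltau(x, y)
    return rmse, r2, tau
```

Applied to each (architecture series, metric) pair, this yields the kind of per-regression RMSE, $R^2$, and Kendall-$\tau$ values that are aggregated in Table~\ref{table:results}.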
\input{table_3}
Results are summarized in Table~\ref{table:results} (and Figures~\ref{fig:summary_regressions_A}--\ref{fig:summary_regressions_I} of the Supplementary Information).
For the mean,
smaller RMSE,
larger $R^2$, and
larger Kendall-$\tau$
are desirable;
and, for the standard deviation, smaller values are desirable.
Taken as a whole, over the entire corpus of data, PL-based metrics are somewhat better for both the $R^{2}$ mean and standard deviation;
and PL-based metrics are much better for RMSE mean and standard deviation.
Model diagnostics (Supplementary Information) indicate many outliers and imperfect fits.
Overall, though, these and other results suggest our conclusions hold much more generally.
\section{Supplementary Information}
\label{sxn:appendix}
\subsection{Supplementary Details}
\paragraph{Reproducing Sections \ref{sxn:cv} and \ref{sxn:nlp}. }
We provide a github repository for this paper that includes Jupyter notebooks that fully reproduce all results (as well as many other results)~\cite{kdd20_sub_repo}.
All results have been produced using the \texttt{WeightWatcher} tool~\cite{weightwatcher_package}.
The ImageNet and OpenAI GPT pretrained models are provided in the current
pyTorch~\cite{pytorch} and Huggingface~\cite{huggingface} distributions.
\begin{table}[t]
\small
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Table & Figure & Jupyter Notebook \\
\hline
1 & \ref{fig:vgg-metrics} & WeightWatcher-VGG.ipynb \\
1 & \ref{fig:resnet-accuracy} & WeightWatcher-ResNet.ipynb \\
1 & \ref{fig:resnet1k-accuracy} & WeightWatcher-ResNet-1K.ipynb \\
1 & \ref{fig:vgg-alpha-layers} & WeightWatcher-VGG.ipynb \\
1 & \ref{fig:resnet-alpha-layer} & WeightWatcher-ResNet.ipynb \\
1 & \ref{fig:densenet-alpha-layer} & WeightWatcher-DenseNet.ipynb \\
\hline
& \ref{fig:resnet204D5L} & WeightWatcher-Intel-Distiller-ResNet20.ipynb \\
\hline
2 & \ref{fig:GPT-hist} & WeightWatcher-OpenAI-GPT.ipynb \\
2 & \ref{fig:gpt-alpha-layers}, \ref{fig:gpt2-histograms} & WeightWatcher-OpenAI-GPT2.ipynb \\
\hline
3,7,8,9 & Appendix & OSMR-Analysis.ipynb \\
\hline
\end{tabular}
\end{center}
\caption{Jupyter notebooks used to reproduce all results in Sections~\ref{sxn:cv} and~\ref{sxn:nlp}.}
\label{table:notebooks}
\end{table}
\paragraph{Reproducing Figure~\ref{fig:resnet204D5L}, for the Distiller Model.}
In the \texttt{distiller} folder of our github repo,
we provide the original Jupyter Notebooks, which use the Intel \texttt{distiller} framework~\cite{distiller}. %
Figure~\ref{fig:resnet204D5L} is from the \texttt{``...-Distiller-ResNet20.ipynb''} notebook (see Table~\ref{table:notebooks}).
For completeness, we provide both the results described here, as well as additional results on other pretrained and distilled models using the \texttt{WeightWatcher} tool.
\paragraph{Reproducing Table~\ref{table:results} in Section~\ref{sxn:all_cv_models}. }
The reader may regenerate all of the \texttt{WeightWatcher} results of Section~\ref{sxn:all_cv_models}
using the Google Colab Jupyter notebooks (in the \texttt{ww-colab} folder)
and the \texttt{WeightWatcher} tool,
and/or simply reproduce
Table~\ref{table:results}, as well as Tables~\ref{table:RMSEresults}-\ref{table:Ktauresults}
and Figures~\ref{fig:summary_regressions_A}-\ref{fig:summary_regressions_I},
using the Jupyter notebooks (shown in Table~\ref{table:notebooks}),
and the pre-computed \texttt{WeightWatcher} datasets (in the \texttt{data/osmr} folder).
The pretrained models, trained on ImageNet-1K and the other datasets,
are taken from the pyTorch models in the \texttt{omsr/imgclsmob}
``Sandbox for training convolutional networks for computer vision'' github repository~\cite{osmr}.
The full \texttt{WeightWatcher} results are provided in the datasets: \texttt{[data/osmr/data...xlsx]},
last generated in January 2020, using the Google Colab notebooks: \texttt{[ww\_colab/ww\_colab...ipynb]},
and \texttt{WeightWatcher} version ww0.2.7. Results can be recomputed using the current version (ww0.4.1)
with the \texttt{ww2x=True} backward-compatibility option, although note that the pretrained models must
be downloaded and may have changed slightly.
The data files currently provided are analyzed with the \texttt{OSMR-Analysis.ipynb} python Jupyter Notebook,
which runs all regressions, tabulates the results presented in
Table~\ref{table:results},
and generates Figures~\ref{fig:summary_regressions_A}--\ref{fig:summary_regressions_I}
and the accompanying Tables.
We attempt to run linear regressions for all pyTorch models for each architecture series for all datasets provided.
There are over $450$ models in all to consider, and we note that the \texttt{osmr/imgclsmob} repository is constantly being updated with new models.
We omit the results for CUB-200-2011, Pascal-VOC2012, ADE20K, and COCO datasets, as there are fewer than 15 models for those datasets.
Also, we filter out regressions with fewer than 5 datapoints.
We remove the following outliers, as identified by visual inspection: \texttt{efficient\_b0,\_b2}.
We also remove the entire \texttt{cifar100} \texttt{ResNeXT} series, which is the only example to show no trends with the norm metrics.
The final architecture series used are shown in Table~\ref{table:architectures}, with the number of models in each.
Tables and figures summarizing this analysis (in a more fine-grained way than provided by Table~\ref{table:results}) are presented next.
\input{table_arch}
\input{table_RMSE}
\input{table_R2}
\input{table_Ktau}
\subsection{Supplementary Tables and Figures}
Here, we present a more detailed discussion of our large-scale analysis of hundreds of models, which were summarized in Table~\ref{table:results} of Section~\ref{sxn:all_cv_models}.
We ran the \texttt{WeightWatcher} tool (version 0.2.7)~\cite{weightwatcher_package}
on numerous pretrained models taken from the \texttt{OSMR/imgclsmob Sandbox} github repository of pretrained CV DNN models~\cite{osmr},
performing OLS (Ordinary Least Squares) regressions for every dataset and architecture series listed in Table~\ref{table:architectures}.
Table~\ref{table:results} summarized the overall results, and Figures~\ref{fig:summary_regressions_A}--\ref{fig:summary_regressions_I} below present a more detailed visual summary, which enables model diagnostics.
For each Figure, each row of subfigures considers a given pretrained model and dataset, depicting the average Norm-based and Power Law
metrics---Log Frobenius norm
($\langle\log\Vert\mathbf{W}\Vert^{2}_{F}\rangle$),
Log Spectral norm
($\langle\log\Vert\mathbf{W}\Vert^{2}_{\infty}\rangle$),
Weighted Alpha
($\hat{\alpha}$),
and Log $\alpha$-Norm
($\langle\log\Vert\mathbf{X}\Vert^{\alpha}_{\alpha}\rangle$)---against
the Top1 Test Accuracy, as reported in the github repository README file~\cite{osmr}, along with a shaded area representing the $95\%$ confidence bound.
For each regression, we report the RMSE, the $R^2$ regression metric, and the Kendall-$\tau$ rank correlation metric in the title of each subfigure.
We also present these same numerical values in Table~\ref{table:RMSEresults}, Table~\ref{table:R2results}, and Table~\ref{table:Ktauresults}, respectively.
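The three fit-quality numbers reported in each subfigure title can be computed for any (metric, accuracy) series with standard \texttt{scipy} routines. Here is a minimal, self-contained sketch; the data points below are hypothetical, not taken from the paper's regressions:

```python
import numpy as np
from scipy import stats

def regression_report(x, y):
    """Fit y ~ a*x + b by OLS; return RMSE, R^2, and Kendall-tau."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept, r, _, _ = stats.linregress(x, y)
    y_hat = slope * x + intercept
    rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))   # root mean squared residual
    r2 = float(r ** 2)                                  # squared Pearson correlation
    tau, _ = stats.kendalltau(x, y)                     # rank correlation
    return rmse, r2, float(tau)

# hypothetical series: a norm metric vs. Top1 accuracy (accuracy falls as the metric grows)
metric = [2.0, 2.5, 3.0, 3.5, 4.0]
top1   = [76.0, 74.5, 73.2, 71.8, 70.1]
rmse, r2, tau = regression_report(metric, top1)
```

A strictly monotone series like this one gives Kendall-$\tau = -1$ even when the linear fit ($R^2$) is imperfect, which is why both numbers are reported.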
To reproduce these Figures and Tables, as well as more fine-grained results, rerun the \texttt{OSMR-Analysis.ipynb} Python Jupyter Notebook (see Table~\ref{table:notebooks}), which analyzes the precomputed data in the \texttt{df\_all.xlsx} file; both are provided in the github repo accompanying this paper.
This repository also contains the original \texttt{Google Colab} notebooks, run in January 2020, which download the pretrained models and run the \texttt{WeightWatcher} tool on them.
The reader may also run the \texttt{WeightWatcher} locally on each of the
pretrained models,
such as the ResNet models, trained on the ImageNet-1K dataset, using the \texttt{WeightWatcher-ResNet-1K.ipynb} notebook.
(We should note, however, that the publicly-available versions of these models may have changed slightly, giving slightly different results.)
Our final analysis includes 108 regressions in all. %
See Figures~\ref{fig:summary_regressions_A}--\ref{fig:summary_regressions_I} for more details.
From these Figures, we recognize fits of varying quality, ranging from remarkably good to completely uncorrelated.
Starting with some of the best, consider the ImageNet-1K PreResNet results.
For example,
Figure~\ref{fig:summary_regressions_A_10} shows the
Log Spectral norm, %
which has a rather large $RMSE=3.93$, a rather small $R^2=0.36$, and a Kendall-$\tau=0.54$, and
which has 6 out of 13 points outside the $95\%$ confidence bands.
In contrast, the
Log Frobenius norm %
in Figure~\ref{fig:summary_regressions_A_12}
has a much smaller $RMSE=1.93$, a much larger $R^2=0.85$, and a Kendall-$\tau=0.87$, and
has only 2 points outside the $95\%$ confidence bands.
For examples of lower quality fits, consider the SqueezeNext results, as shown in Figures~\ref{fig:summary_regressions_C_10} and~\ref{fig:summary_regressions_C_12}.
The
Log Spectral norm %
appears visually anti-correlated with the test accuracies (as it is with ShuffleNet, in Figure~\ref{fig:summary_regressions_B_02}).
It has a very large $95\%$ confidence band, with only 2 points close to the regression line, a large RMSE, $R^2=0.07$ (i.e., near zero), and a small Kendall-$\tau=0.33$.
The
Log Frobenius norm %
is (as always) positively-correlated with test accuracies; with $R^2=0.43$, it shows some linear correlation, and its Kendall-$\tau=0.73$ indicates moderately strong rank correlation.
Many more such conclusions can be drawn by examining these Tables and Figures and reproducing the results from our publicly-available repo.
\input{figs_largeScale_A}
\input{figs_largeScale_B}
\input{figs_largeScale_C}
\input{figs_largeScale_D}
\input{figs_largeScale_E}
\input{figs_largeScale_F}
\input{figs_largeScale_G}
\input{figs_largeScale_H}
\input{figs_largeScale_I}
\subsection{Supplementary Discussion: Additional Details on HT-SR Theory}
The original work on HT-SR Theory~\cite{MM18_TR,MM19_HTSR_ICML,MM20_SDM} considered DNNs including AlexNet and InceptionV3 (as well as DenseNet, ResNet, and VGG), and it showed that for nearly every $\mathbf{W}$, the (bulk and tail) of the ESDs can be fit to a truncated PL and the PL exponents $\alpha$ nearly all lie within the range $\alpha\in(1.5,5)$.
Our meta-analysis, the main results of which are summarized in this paper, has shown that these results are ubiquitous.
For example,
upon examining nearly 10,000 layer weight matrices $\mathbf{W}_{l,i}$ across hundreds of different modern pre-trained DNN architectures, we find that the ESD of nearly every layer matrix $\mathbf{W}$ can be fit to a truncated PL:
$70-80\%$ of the time, the fitted PL exponent $\alpha$ lies in the range $\alpha\in(2,4)$; and
$10-20\%$ of the time, the fitted PL exponent $\alpha$ lies in the range $\alpha< 2$.
Of course, there are exceptions: in any real DNN, the fitted $\alpha$ may range anywhere from $\sim 1.5$ to $10$ or higher (and, of course, larger values of $\alpha$ may indicate that the PL is not a good model for the data).
Still, overall, in nearly all large, pre-trained DNNs, the correlations in the weight matrices exhibit a remarkable Universality, being both Heavy Tailed, and having small---but not too small---PL exponents.
\subsection{Comparison of CV models}
\label{sxn:cv}
Each of the VGG, ResNet, and DenseNet series of models consists of several pretrained DNN models, with a given base architecture, trained on the full ImageNet~\cite{imagenet} dataset, and each is distributed with the current open source pyTorch framework (version 1.4)~\cite{pytorch}.
In addition, we examine a larger set of ResNet models, which we call the ResNet-1K series, trained on the ImageNet-1K dataset~\cite{imagenet} and provided on the OSMR Sandbox~\cite{osmr}.
For these models, we first perform coarse model analysis, comparing and contrasting the four model series, and predicting trends in model quality.
We then perform fine layer analysis, as a function of depth.
This layer analysis goes beyond predicting trends in model quality, instead illustrating that PL-based metrics can provide novel insights among the VGG, ResNet/ResNet-1K, and DenseNet architectures.
\paragraph{Average Quality Metrics versus Reported Test Accuracies.}
We examine the performance of the four quality metrics---Log Frobenius norm
($\langle\log\Vert\mathbf{W}\Vert^{2}_{F}\rangle$),
Log Spectral norm
($\langle\log\Vert\mathbf{W}\Vert^{2}_{\infty}\rangle$),
Weighted Alpha
($\hat{\alpha}$),
and Log $\alpha$-Norm
($\langle\log\Vert\mathbf{X}\Vert^{\alpha}_{\alpha}\rangle$)---applied to each of the VGG, ResNet, ResNet-1K, and DenseNet series.
Figure~\ref{fig:vgg-metrics} plots the four quality metrics versus reported test accuracies~\cite{pytorch},%
\footnote{These test accuracies have been previously reported and made publicly-available by others. We take them as given. We do not attempt to reproduce/verify them, since we do not permit ourselves access to training/test data.}
as well as a basic linear regression line, for the VGG series.
Here, smaller norms and smaller values of $\hat{\alpha}$ imply better generalization (i.e., greater accuracy, lower error).
Quantitatively, Log Spectral norm is the best; but, visually, all four metrics correlate quite well with reported Top1 accuracies.
The DenseNet series has similar behavior.
(These and many other such plots can be seen on our publicly-available~repo.)
To examine visually how the four quality metrics depend on data set size on a larger, more complex model series, we next look at results on ResNet versus ResNet-1K.
Figure~\ref{fig:cv2-accuracy} compares the Log $\alpha$-Norm metric for the full ResNet model, trained on the full ImageNet dataset, against the ResNet-1K model, trained on a much smaller ImageNet-1K data set.
Here, the Log $\alpha$-Norm is much better than the Log Frobenius/Spectral norm metrics (although, as Table~\ref{table:cv-models} shows, it is slightly worse than the Weighted Alpha metric).
The ResNet series has strong correlation (RMSE of $0.66$, $R^2$ of $0.9$, and Kendall-$\tau$ of $-1.0$), whereas the ResNet-1K series also shows good but weaker correlation (much larger RMSE of $1.9$, $R^2$ of $0.88$, and Kendall-$\tau$ of $-0.88$).
\begin{figure}[t]
\centering
\subfigure[Log Frobenius Norm, VGG ]{
\includegraphics[width=4.9cm]{img/VGG_log_norm_accs.png}
\label{fig:vgg-fnorm}
}
\qquad
\subfigure[Log Spectral Norm, VGG ]{
\includegraphics[width=4.9cm]{img/VGG_log_spectral_norm_accs.png}
\label{fig:vgg-snorm}
}
\qquad
\subfigure[ Weighted Alpha, VGG ]{
\includegraphics[width=4.9cm]{img/VGG_alpha_weighted_accs.png}
\label{fig:vgg-walpha}
}
\qquad
\subfigure[Log $\alpha$-Norm, VGG ]{
\includegraphics[width=4.9cm]{img/VGG_log_alpha_norm_accs.png}
\label{fig:vgg-pnorm}
}
\caption{Comparison of Average Log Norm and Weighted Alpha quality metrics versus reported test accuracy for pretrained VGG models:
VGG11, VGG13, VGG16, and VGG19, with and without Batch Normalization (BN),
trained on ImageNet, available in pyTorch (v1.4).
Each metric is fit by linear regression; the RMSE, $R^2$, and Kendall-$\tau$ rank correlation are reported.
}
\label{fig:vgg-metrics}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[ ResNet, Log $\alpha$-Norm ]{
\includegraphics[width=4.9cm]{img/ResNet_log_alpha_norm_accs.png}
\label{fig:resnet-accuracy}
}
\qquad
\subfigure[ ResNet-1K, Log $\alpha$-Norm ]{
\includegraphics[width=6.3cm]{img/ResNet-1K_log_alpha_norm_accs.png}
\label{fig:resnet1k-accuracy}
}
\caption{Comparison of Average $\alpha$-Norm quality metric versus reported Top1 test accuracy for the ResNet and ResNet-1K pretrained (pyTorch) models.
Each metric is fit by linear regression; the RMSE, $R^2$, and Kendall-$\tau$ rank correlation are reported.
}
\label{fig:cv2-accuracy}
\end{figure}
See Table~\ref{table:cv-models} for a summary of results for Top1 accuracies for all four metrics for the VGG, ResNet, ResNet-1K, and DenseNet series.
Similar results are obtained for the Top5 accuracies.
The Log Frobenius norm performs well but not extremely well;
the Log Spectral norm performs very well on smaller, simpler models like the VGG and DenseNet architectures; and,
when moving to the larger, more complex ResNet series, the PL-based metrics, Weighted Alpha and the Log $\alpha$-Norm, perform the best.
Overall, though, these model series are all very well-trodden; and our results indicate that norm-based metrics and PL-based metrics can both distinguish among a series of well-trained versus very-well-trained models, with PL-based metrics performing somewhat (i.e., quantitatively) better on the larger, more complex ResNet series.
\begin{table}[t]
\small
\begin{center}
\begin{tabular}{|p{1in}|c|c|c|c|c|c|}
\hline
Series & \# & Metric & $\langle\log\Vert\mathbf{W}\Vert^{2}_{F}\rangle$ & $\langle\log\Vert\mathbf{W}\Vert^{2}_{\infty}\rangle$ & $\hat{\alpha}$ & $\langle\log\Vert\mathbf{X}\Vert^{\alpha}_{\alpha}\rangle$ \\
\hline
\multirow{3}{4em}{VGG} & \multirow{3}{1em}{6} & RMSE & 0.56 & \textbf{0.23} & 0.48 & 0.34 \\
& & $R^2$ & 0.88 & \textbf{0.98} & 0.92 & 0.96 \\
& & Kendall-$\tau$ & -0.79 & \textbf{-0.93} & \textbf{-0.93} & \textbf{-0.93} \\
\hline
\multirow{3}{4em}{ResNet} & \multirow{3}{1em}{5} & RMSE & 0.9 & 0.97 & \textbf{0.61} & 0.66 \\
& & $R^2$ & 0.92 & 0.9 & \textbf{0.96} & 0.9 \\
& & Kendall-$\tau$ & -1.0 & -1.0 & -1.0 & -1.0 \\
\hline
\multirow{3}{4em}{ResNet-1K} & \multirow{3}{1em}{19} & RMSE & 2.4 & 2.8 & \textbf{1.8} & 1.9 \\
& & $R^2$ & 0.81 & 0.74 & \textbf{0.89} & 0.88 \\
& & Kendall-$\tau$ & -0.79 & -0.79 & \textbf{-0.89} & -0.88 \\
\hline
\multirow{3}{4em}{DenseNet} & \multirow{3}{1em}{4} & RMSE & 0.3 & \textbf{0.11} & 0.16 & 0.21 \\
& & $R^2$ & 0.93 & \textbf{0.99} & 0.98 & 0.97 \\
& & Kendall-$\tau$ & -1.0 & -1.0 & -1.0 & -1.0 \\
\hline
\end{tabular}
\end{center}
\caption{Quality metrics
(for RMSE, smaller is better; for $R^2$, larger is better; and for Kendall-$\tau$ rank correlation, larger magnitude is better)
for reported Top1 test error for pretrained models in each architecture series.
Column \# refers to number of models.
VGG, ResNet, and DenseNet were pretrained on ImageNet.
ResNet-1K was pretrained on ImageNet-1K.
}
\label{table:cv-models}
\end{table}
In particular, the PL-based
Weighted Alpha and Log $\alpha$-Norm
metrics tend to perform better when there is a wider variation in the hyperparameters, going beyond just increasing the depth.
In addition, sometimes the purely norm-based metrics, such as the Log Spectral norm, can be uncorrelated or even anti-correlated with the test accuracy, while the PL-based metrics are positively-correlated.
This is seen in the Supplementary Information
(ShuffleNet in Figure~\ref{fig:summary_regressions_B_02},
SqueezeNext in Figure~\ref{fig:summary_regressions_C_10}, and
WRN in Figure~\ref{fig:summary_regressions_G_06}).
Going beyond coarse averages to examining quality metrics for each layer weight matrix as a function of depth (or layer id), our metrics can be used to perform model diagnostics and to identify fine-scale properties in a pretrained model.
Doing so involves separating $\hat{\alpha}$ into its two components, $\alpha$ and $\lambda_{max}$, and examining the distributions of each.
We provide examples of~this.
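A minimal sketch of such a layer-level diagnostic, using hypothetical per-layer $(\alpha, \log_{10}\lambda_{max})$ values (all numbers below are invented for illustration) and flagging layers whose very large $\alpha$ suggests that even a truncated PL is a poor fit:

```python
import numpy as np

# hypothetical per-layer fits: (layer id, alpha, log10 lambda_max),
# e.g. as produced by a WeightWatcher-style per-layer analysis
details = [
    (0, 2.1, 0.61), (1, 2.3, 0.72), (2, 2.2, 0.55),
    (3, 7.8, 0.12), (4, 2.4, 0.80), (5, 9.1, 0.05),
]

# flag potentially over-parameterized / under-trained layers:
# very large alpha means the PL is probably a poor model for that layer's ESD
suspect = [lid for lid, alpha, _ in details if alpha > 6.0]

# the two components of the weighted-alpha metric, examined separately
alphas   = np.array([a for _, a, _ in details])
log_lmax = np.array([l for _, _, l in details])
alpha_bar = float(alphas.mean())                  # unweighted average alpha
alpha_hat = float(np.mean(alphas * log_lmax))     # weighted alpha
```

Separating $\hat{\alpha}$ into its $\alpha$ and $\lambda_{max}$ components in this way is what enables the depth-wise plots and the Scale Collapse diagnosis discussed below.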
\paragraph{Layer Analysis: Metrics as a Function of Depth.}
Figure~\ref{fig:3models-alpha-layers} plots the PL exponent $\alpha$, as a function of depth, for each layer (first layer corresponds to data, last layer to labels) for the least accurate (shallowest) and most accurate (deepest) model in each of the VGG (no BN), ResNet, and DenseNet series.
(Many more such plots are available at our repo.)
\begin{figure}[t]
\centering
\subfigure[ VGG ]{
\includegraphics[width=4.9cm]{img/VGG_fnl_alpha_depth.png}
\label{fig:vgg-alpha-layers}
}
\qquad
\subfigure[ ResNet ]{
\includegraphics[width=4.9cm]{img/ResNet_fnl_alpha_depth.png}
\label{fig:resnet-alpha-layer}
}
\qquad
\subfigure[ DenseNet ]{
\includegraphics[width=4.9cm]{img/DenseNet_fnl_alpha_depth.png}
\label{fig:densenet-alpha-layer}
}
\qquad
\subfigure[ ResNet (overlaid) ]{
\includegraphics[width=4.9cm]{img/resnet_alpha_overlaid_depth.png}
\label{fig:resnet_alpha_overlaid_depth}
}
\caption{PL exponent ($\alpha$) versus layer id, for the least and the most accurate models in VGG (a), ResNet (b), and DenseNet (c) series.
(VGG is without BN; and note that the Y axes on each plot are different.)
Subfigure (d) displays the ResNet models (b), zoomed in to $\alpha\in[1,5]$, and with the layer ids overlaid on the X-axis, from smallest to largest, to
allow a more detailed analysis of the most strongly correlated layers.
Notice that ResNet152 exhibits different and much more stable behavior of $\alpha$ across layers.
This contrasts with how both VGG models gradually worsen in deeper layers and how the DenseNet models are much more erratic.
In the text, this is interpreted in terms of Correlation Flow.
}
\label{fig:3models-alpha-layers}
\end{figure}
In the VGG models, Figure~\ref{fig:vgg-alpha-layers} shows that the PL exponent $\alpha$ systematically increases as we move down the network, from data to labels, in the Conv2D layers, starting with $\alpha\lesssim 2.0$ and reaching all the way to $\alpha\sim 5.0$; and then, in the last three, large, fully-connected (FC) layers, $\alpha$ stabilizes back down to $\alpha\in[2,2.5]$.
This is seen for all the VGG models (again, only the shallowest and deepest are shown), indicating that the main effect of increasing depth is to increase the range over which $\alpha$ increases, thus leading to larger $\alpha$ values in later Conv2D layers of the VGG models.
This is quite different than the behavior of either the ResNet-1K models or the DenseNet models.
For the ResNet-1K models, Figure~\ref{fig:resnet-alpha-layer} shows that $\alpha$ also increases in the last few layers (more dramatically than for VGG, observe the differing scales on the Y axes).
However, as the ResNet-1K models get deeper, there is a wide range over which $\alpha$ values tend to remain small.
This is seen for other models in the ResNet-1K series, but it is most pronounced for the larger ResNet-1K (152) model, where $\alpha$ remains relatively stable at $\alpha\sim 2.0$, from the earliest layers all the way until we reach close to the final layers.
For the DenseNet models, Figure~\ref{fig:densenet-alpha-layer} shows that $\alpha$ tends to increase as the layer id increases, in particular for layers toward the end.
While this is similar to the VGG models, with the DenseNet models, $\alpha$ values increase almost immediately after the first few layers, and the variance is much larger (in particular for the earlier and middle layers, where it can range all the way to $\alpha\sim 8.0$) and much less systematic throughout the network.
Overall, Figure~\ref{fig:3models-alpha-layers} demonstrates that the distribution of $\alpha$ values among layers is architecture dependent,
and that it can vary in a systematic way within an architecture series.
This is to be expected, since some architectures enable better extraction of signal from the data.
This also suggests that, while performing very well at predicting trends within an architecture series, PL-based metrics (as well as norm-based metrics) should be used with caution when comparing models with very different architectures.
\paragraph{Correlation Flow; or How $\alpha$ Varies Across Layers.}
Figure~\ref{fig:3models-alpha-layers} can be understood in terms of what we will call Correlation Flow.
Recall that the average Log $\alpha$-Norm metric and the Weighted Alpha metric are based on HT-SR Theory~\cite{MM18_TR, MM19_HTSR_ICML, MM20_SDM}, which is in turn based on the statistical mechanics of heavy tailed and strongly correlated systems~\cite{BouchaudPotters03, SornetteBook, BP11, bun2017}.
There, one expects that the weight matrices of well-trained DNNs will exhibit correlations over many size scales, as is well-known in other strongly-correlated systems~\cite{BouchaudPotters03, SornetteBook}.
This would imply that their ESDs can be well-fit by a truncated PL, with exponents $\alpha\in[2,4]$.
Much larger values $(\alpha\gg 6)$ may reflect poorer PL fits, whereas smaller values $(\alpha\sim 2)$, are associated with models that generalize better.
Informally, one would expect a DNN model to perform well when it facilitates the propagation of information/features across layers.
In the absence of training/test data, one might hypothesize that this flow of information leaves empirical signatures on weight matrices, and that we can quantify this by measuring the PL properties of weight matrices.
In this case, smaller $\alpha$ values correspond to layers in which information correlations between data across multiple scales are better captured~\cite{MM18_TR,SornetteBook}.
This leads to the hypothesis that small $\alpha$ values that are stable across multiple layers enable better correlation flow through the network.
This is similar to recent work on the information bottleneck~\cite{TZ15,ST17_TR}, except that here we work in an entirely unsupervised~setting.
\begin{figure}[t]
\centering
\subfigure[$\lambda_{max}$ for ResNet20 layers]{
\includegraphics[width=4.9cm]{img/resnet4d_maxev.png}
\label{fig:resnet204Dmaxev}
}
\qquad
\subfigure[$\alpha$ for ResNet20 layers]{
\includegraphics[width=4.9cm]{img/resnet4d_alphas.png}
\label{fig:resnet204Dalpha}
}
\caption{%
ResNet20, distilled with Group Regularization, as implemented in the \texttt{distiller} (4D\_regularized\_5Lremoved) pretrained models.
Log Spectral Norm ($\log\lambda_{max}$) and PL exponent ($\alpha$) for individual layers, versus layer id, for both baseline (before distillation, green) and fine-tuned (after distillation, red) pretrained models.
}
\label{fig:resnet204D5L}
\end{figure}
\paragraph{Scale Collapse; or How Distillation May Break Models.}
The similarity between norm-based metrics and PL-based metrics may lead one to wonder whether the Weighted Alpha metric is just a variation of more familiar norm-based metrics.
Among hundreds of pretrained models, there are ``exceptions that prove the rule,'' and these can be used to show that fitted $\alpha$ values do contain information not captured by norms.
To illustrate this, we show that some compression/distillation methods \cite{CWZZ17_TR} may actually damage models unexpectedly, by introducing what we call Scale Collapse, where several distilled layers have unexpectedly small Spectral Norms.
By Scale Collapse, we mean that the size scale, e.g., as measured by the Spectral or Frobenius Norm, of one or more layers changes dramatically, while the size scale of other layers changes very little, as a function of some change to or perturbation of a model.
The size scales of different parts of a DNN model are typically defined implicitly by the model training process, and they typically vary in a gradual way for high-quality models.
Examples of changes of interest include model compression or distillation (discussed here for a CV model), data augmentation (discussed below for an NLP model), additional training, model fine-tuning, etc.
Consider ResNet20, trained on CIFAR10, before and after applying the Group Regularization distillation technique, as implemented in the \texttt{distiller} package~\cite{distiller}.
We analyze the pretrained 4D\_regularized\_5Lremoved baseline and fine-tuned models.
The reported baseline test accuracies (Top1$=91.45$ and Top5$=99.75$) are better than the reported fine-tuned test accuracies (Top1$=91.02$ and Top5$=99.67$).
Because the baseline accuracy is greater, the previous results on ResNet (Table~\ref{table:cv-models} and Figure~\ref{fig:cv2-accuracy}) suggest that the baseline Spectral Norms should be smaller on average than the fine-tuned ones.
The opposite is observed.
Figure~\ref{fig:resnet204D5L} presents the Spectral Norm (here denoted $\log\lambda_{max}$) and PL exponent ($\alpha$) for each individual layer weight matrix $\mathbf{W}$.
On the other hand, the $\alpha$ values (in Figure~\ref{fig:resnet204Dalpha}) do not differ systematically between the baseline and fine-tuned models.
Also, $\bar{\alpha}$, the average unweighted baseline $\alpha$,
from Eqn.~(\ref{eqn:alpha_bar}),
is smaller for the original model than for the fine-tuned model
(as predicted by HT-SR Theory, the basis of $\hat{\alpha}$).
In spite of this, Figure~\ref{fig:resnet204Dalpha} also depicts two very large $\alpha\gg 6$ values for the baseline, but not for the fine-tuned, model.
This suggests the baseline model has at least two over-parameterized/under-trained layers, and that the distillation method does, in fact, improve the fine-tuned model by compressing these layers.
Pretrained models in the \texttt{distiller} package have passed some quality metric, but they are much less well trodden than any of the VGG, ResNet, or DenseNet series.
The obvious interpretation is that, while norms make good regularizers for a single model, there is no reason a priori to expect them to correlate well with test accuracies across different models, and they may not differentiate well-trained versus poorly-trained models.
We do expect, however, the PL $\alpha$ to do so, because it effectively measures the amount of information correlation in the model~\cite{MM18_TR, MM19_HTSR_ICML, MM20_SDM}.
This suggests that the $\alpha$ values will improve, i.e., decrease, over time, as distillation techniques continue to improve.
The reason for the anomalous behavior shown in
Figure~\ref{fig:resnet204D5L}
is that the \texttt{distiller} Group Regularization technique
causes the norms of the $\mathbf{W}$ pre-activation maps for two Conv2D layers to increase spuriously.
This is difficult to diagnose by analyzing training/test curves, but it is easy to diagnose with our~approach.
\section{Discussion}
\paragraph{Comparison of VGG, ResNet, and DenseNet Architectures.}
Going beyond the goal of predicting trends in the quality of state-of-the-art neural networks without access to training or testing data, observations such as
the layer-wise observations we described in Figure~\ref{fig:3models-alpha-layers} can be understood in terms of architectural differences between VGG, ResNet, and DenseNet.
VGG resembles the traditional convolutional architectures, such as LeNet5, and consists of several [Conv2D-Maxpool-ReLu] blocks, followed by 3 large Fully Connected (FC) layers.
ResNet greatly improved on VGG by replacing the large FC layers, shrinking the Conv2D blocks, and introducing residual connections.
This optimized approach allows for greater accuracy with far fewer parameters, and ResNet models of up to 1000 layers have been trained~\cite{resnet1000}.
The efficiency and effectiveness of ResNet seems to be reflected in the smaller and more stable $\alpha\sim 2.0$, across nearly all layers, indicating that the inner layers are very well correlated and more strongly optimized.
This contrasts with the DenseNet models, which contain many connections between every layer.
These results (large $\alpha$, meaning that even a PL model is probably a poor fit) suggest that DenseNet has too many connections, diluting high quality interactions across layers, and leaving many layers very poorly~optimized.
Fine-scale measurements such as these enable us to form hypotheses as to the inner workings of DNN models, opening the door to an improved understanding of why DNNs work, as well as how to design better DNN models.
Correlation Flow and Scale Collapse are two such~examples.
\paragraph{Related work.}
Statistical mechanics has long had influence on DNN theory and practice~\cite{EB01_BOOK, MM17_TR, BKPx20}.
Our best-performing PL-based metrics are based on statistical mechanics via HT-SR Theory~\cite{MM17_TR, MM18_TR, MM19_HTSR_ICML, MM19_KDD, MM20_SDM}.
The way in which we (and HT-SR Theory) use statistical mechanics theory is quite different than the way it is more commonly formulated~\cite{EB01_BOOK, BKPx20}.
Going beyond idealized models, we use statistical mechanics in a broader sense, drawing upon techniques from quantitative finance, random matrix theory, and the statistical mechanics of heavy tailed and strongly correlated systems~\cite{BouchaudPotters03, SornetteBook, BP11, bun2017}.
There is also a large body of work in ML on using norm-based metrics to bound generalization error~\cite{NTS15, BFT17_TR, LMBx18_TR}.
This theoretical work aims to prove generalization bounds, and this applied work then uses these norms to construct regularizers to improve training.
Proving generalization bounds and developing new regularizers is very different than our focus of validating pretrained models.
Our work also has intriguing similarities and differences with work on understanding DNNs with the information bottleneck principle~\cite{TZ15,ST17_TR}, which posits that DNNs can be quantified by the mutual information between their layers and the input and output variables.
Most importantly, our approach does not require access to any data, while information measures used in the information bottleneck approach do require this.
Nevertheless, several results from HT-SR Theory, on which our metrics are based, have parallels in the information bottleneck approach.
Perhaps most notably, the quick transition from a \textsc{Random-like} phase to \textsc{Bulk+Spikes} phase, followed by slow transition to a \textsc{Heavy-Tailed} phase, as noted previously~\cite{MM18_TR}, is reminiscent of the dynamics on the Information Plane~\cite{ST17_TR}.
Finally,
our work, starting in 2018 with the \texttt{WeightWatcher} tool~\cite{weightwatcher_package}, is the first to perform a detailed analysis of the weight matrices of DNNs~\cite{MM18_TR, MM19_HTSR_ICML, MM20_SDM}.
Subsequent to the initial version of this paper, we became aware of two other works~\cite{EJRUY20_TR,UKGBT20_TR}, posted publicly in 2020 within weeks of it.
Both of these papers validate our basic result that one can gain substantial insight into model quality by examining weight matrices without access to any training or testing data.
However, both consider smaller models drawn from a much narrower range of applications than we consider.
Previous results in HT-SR Theory suggest that insights from these smaller models may not extend to the state-of-the-art CV and NLP models we consider.
\paragraph{Conclusions.}
We have developed and evaluated methods to predict trends in the quality of state-of-the-art neural networks---without access to training or testing data.
Our main methodology involved weight matrix meta-analysis, using the publicly-available \texttt{WeightWatcher} tool~\cite{weightwatcher_package}, and informed by the recently-developed HT-SR Theory~\cite{MM18_TR, MM19_HTSR_ICML, MM20_SDM}.
Prior to our work, it was not even obvious that norm-based metrics would perform well to predict trends in quality across models (as they are usually used within a given model or parameterized model class, e.g., to bound generalization error or to construct regularizers).
Our results are the first to demonstrate that they can be used for this important practical problem.
Our results also demonstrate that PL-based metrics perform better than norm-based metrics.
This should not be surprising---at least to those familiar with the statistical mechanics of heavy tailed and strongly correlated systems~\cite{BouchaudPotters03, SornetteBook, BP11, bun2017}---since our use of PL exponents is designed to capture the idea that well-trained models capture information correlations over many size scales in the data.
Again, though, our results are the first to demonstrate this.
Our approach can also be used to provide fine-scale insight (rationalizing the flow of correlations or the collapse of size scale) throughout a network.
Both Correlation Flow and Scale Collapse are important for improved diagnostics on pretrained models as well as for improved training methodologies.
\paragraph{Looking forward.}
More generally, our results suggest what a practical theory of DNNs should look like.
To see this, let's distinguish between two types of theories:
non-empirical or analogical theories, in which one creates, often from general principles, a very simple toy model that can be analyzed rigorously, and one then claims that the model is relevant to the system of interest; and
semi-empirical theories, in which there exists a rigorous asymptotic theory, which comes with parameters, for the system of interest, and one then adjusts or fits those parameters to the finite non-asymptotic data, to make predictions about practical problems.
A drawback of the former approach is that it typically makes very strong assumptions, and the strength of those assumptions can limit the practical applicability of the theory.
Nearly all of the work on DNN theory focuses on the former type of theory.
Our approach focuses on the latter type of theory.
Our results, which are based on using sophisticated statistical mechanics theory and solving important practical DNN problems, suggest that the latter approach should be of interest more generally for those interested in developing a practical DNN~theory.
\section{Introduction}
\label{sxn:intro}
A common problem in machine learning (ML)
is to evaluate the quality of a given model.
A popular way to accomplish this
is to train a model and then evaluate its training/testing error.
There are many problems with this approach.
The training/testing curves give very limited insight into the overall properties of the model;
they do not take into account the (often large human and CPU/GPU) time for hyperparameter fiddling;
they typically do not correlate with other properties of interest such as robustness or fairness or interpretability;
and so on.
A related problem, in particular in industrial-scale artificial intelligence (AI), arises when the model user is not the model developer.
Then, one may not have access to the training data or the testing data.
Instead, one may simply be given a model that has already been trained---a pretrained model---and need to use it as-is, or to fine-tune and/or compress it and then use it.
Na\"{\i}vely---but in our experience commonly, among ML practitioners and ML theorists---if one does not have access to training or testing data, then one can say absolutely nothing about the quality of a ML model.
This may be true in worst-case theory, but models are used in practice, and there is a need for a practical theory to guide that practice.
Moreover, if ML is to become an industrial process, then that process will become compartmentalized in order to scale: some groups will gather data, other groups will develop models, and other groups will use those models.
Users of models cannot be expected to know the precise details of how models were built, the specifics of data that were used to train the model, what was the loss function or hyperparameter values, how precisely the model was regularized,~etc.
Moreover, for many large scale, practical applications, there is no obvious way to define an ideal test metric.
For example, models that generate fake text or conversational chatbots may use a proxy, like perplexity, as a test metric.
In the end, however, they really require human evaluation.
Alternatively, models that cluster user profiles, which are widely used in areas such as marketing and advertising, are unsupervised and have no obvious labels for comparison and/or evaluation.
In these and other areas, ML objectives can be poor proxies for downstream goals.
Most importantly, in industry, one faces unique practical problems such as determining whether one has enough data for a given model.
Indeed, high quality, labeled data can be very expensive to acquire, and this cost can make or break a project.
Methods that are developed and evaluated on any well-defined publicly-available corpus of data, no matter how large or diverse or interesting, are clearly not going to be well-suited to address problems such as this.
It is of great practical interest to have metrics to evaluate the quality of a trained model---in the absence of training/testing data and without any detailed knowledge of the training/testing process.
There is a need for a practical theory for pretrained models which can predict how, when, and why such models can be expected to perform well or~poorly.
In the absence of training and testing data, obvious quantities to examine are the weight matrices of pretrained models, e.g.,
properties such as norms of weight matrices and/or parameters of Power Law (PL) fits of the eigenvalues of weight matrices.
Norm-based metrics have been used in traditional statistical learning theory to bound capacity and construct regularizers; and PL fits are based on statistical mechanics approaches to deep neural networks (DNNs).
While we use traditional norm-based and PL-based metrics, our goals are not the traditional goals.
Unlike more common ML approaches, we do not seek a bound on the generalization (e.g., by evaluating training/test errors), we do not seek a new regularizer, and we do not aim to evaluate a single model (e.g., as with hyperparameter optimization).
Instead, we want to examine different models across common architecture series, and we want to compare models between different architectures themselves.
In both cases, one can ask whether it is possible to predict trends in the quality of pretrained DNN models without access to training or testing data.
To answer this question, we provide a detailed empirical analysis, evaluating quality metrics for pretrained DNN models, and we do so at scale.
Our approach may be viewed as a statistical meta-analysis of previously published work, where we consider a large suite of hundreds of publicly-available models, mostly from computer vision (CV) and natural language processing (NLP).
By now, there are many such state-of-the-art models that are publicly-available, e.g.,
hundreds of pretrained models in CV ($\ge 500$) and NLP ($\approx 100$).%
\footnote{When we began this work in 2018, there were fewer than tens of such models; now in 2020, there are hundreds of such models; and we expect that in a year or two there will be an order of magnitude or more of such models.}
For all these models, we have no access to training data or testing data, and we have no specific knowledge of the training/testing protocols.
Here is a summary of our main results.
First, norm-based metrics do a reasonably good job at predicting quality trends in well-trained CV/NLP models.
Second, norm-based metrics may give spurious results when applied to poorly-trained models (e.g., models trained without enough data, etc.).
For example, they may exhibit what we call Scale Collapse for these models.
Third, PL-based metrics can do much better at predicting quality trends in pretrained CV/NLP models.
In particular,
a weighted PL exponent (weighted by the log of the spectral norm of the corresponding layer) is
quantitatively better at discriminating among a series of well-trained versus very-well-trained models
within a given architecture series; and
the (unweighted) average PL exponent is
qualitatively better at discriminating well-trained versus poorly-trained models.
Fourth, PL-based metrics can also be used to characterize fine-scale model properties, including what we call layer-wise Correlation Flow, in well-trained and poorly-trained models; and they can be used to evaluate model enhancements (e.g., distillation, fine-tuning, etc.).
Our work provides a theoretically-principled empirical evaluation---by far the largest, most detailed, and most comprehensive to date---and the theory we apply was developed previously~\cite{MM18_TR, MM19_HTSR_ICML, MM20_SDM}.
Performing such a meta-analysis of previously-published work is common in certain areas, but it is quite rare in ML, where the emphasis is on developing better training protocols.
\section{Methods}
\label{sxn:methods}
To be fully reproducible, we only examine publicly-available, pretrained models.
All of our computations were performed with the \texttt{WeightWatcher} tool (version 0.2.7)~\cite{weightwatcher_package}, and we provide all Jupyter and Google Colab notebooks used in an accompanying github repository~\cite{kdd20_sub_repo}, which includes more details and more results.
\paragraph{Additional Details on Layer Weight Matrices.}
Recall that we can express the objective/optimization function for a typical DNN with $L$ layers and with $N\times M$ weight matrices $\mathbf{W}_{l}$ and bias vectors $\mathbf{b}_{l}$ as Equation~(\ref{eqn:dnn_energy}).
We expect that most well-trained, production-quality models will employ one or more forms of regularization, such as Batch Normalization (BN), Dropout, etc., and many will also contain additional structure such as Skip Connections, etc.
Here, we will ignore these details, and will focus only on the pretrained layer weight matrices $\mathbf{W}_{l}$.
Typically, this model would be trained on some labeled data $\{d_{i},y_{i}\}\in\mathcal{D}$, using Backprop, by minimizing the loss $\mathcal{L}$.
For simplicity, we do not indicate the structural details of the layers (e.g., Dense or not, Convolutions or not, Residual/Skip Connections, etc.).
Each layer is defined by one or more layer 2D weight matrices $\mathbf{W}_{l}$, and/or the 2D feature maps $\mathbf{W}_{l,i}$ extracted from 2D Convolutional (Conv2D) layers.
A typical modern DNN may have anywhere between 5 and 5000 2D layer~matrices.
For each Linear Layer, we get a single $(N\times M)$ (real-valued) 2D weight matrix, denoted $\mathbf{W}_{l}$, for layer $l$.
This includes Dense or Fully-Connected (FC) layers, as well as 1D Convolutional (Conv1D) layers, Attention matrices, etc.
We ignore the bias terms $\mathbf{b}_{l}$ in this analysis.
Let the aspect ratio be $Q=\frac{N}{M}$, with $Q\ge 1$.
For the Conv2D layers, we have a 4-index Tensor, of the form $(N\times M \times c\times d)$, consisting
of $c\times d$ 2D feature maps of shape $(N\times M)$.
We extract $n_{l}=c\times d$ 2D weight matrices $\mathbf{W}_{l,i}$, one for each feature map $i=[1,\dots,n_{l}]$ for layer $l$.
\paragraph{SVD of Convolutional 2D Layers.}
There is some ambiguity in performing spectral analysis on Conv2D layers.
Each layer is a 4-index tensor of dimension $(w,h,in,out)$, with a $(w\times h)$ filter (or kernel) and $(in, out)$
channels. When $w=h=k$, this yields $k\times k$ tensor slices, or pre-Activation Maps, $\mathbf{W}_{i,L}$, each of dimension $(in\times out)$.
We identify 3 different approaches for running SVD on a Conv2D layer:
\begin{enumerate}
\item run SVD on each pre-Activation Map $\mathbf{W}_{i,L}$, yielding $(k\times k)$ sets of $M$ singular values;
\item stack the maps into a single matrix of, say, dimension $((k\times k\times out)\times in)$, and run SVD to get $in$ singular values;
\item compute the 2D Fourier Transform (FFT) for each of the $(in, out)$ pairs, and run SVD on the Fourier coefficients~\cite{CNNSVD}, leading to $\sim(k\times in\times out)$ non-zero singular values.
\end{enumerate}
Each method has tradeoffs.
Method (3) is mathematically sound, but computationally expensive. Method (2) is ambiguous.
For our analysis, because we need thousands of runs, we select method (1), which is the fastest (and is easiest to reproduce).
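As a concrete sketch of method (1), the following numpy fragment extracts each pre-Activation Map from a 4-index tensor and computes its singular values; the $(w,h,in,out)$ layout and the helper name are illustrative (PyTorch, for instance, stores Conv2D weights as $(out,in,w,h)$):

```python
import numpy as np

def conv2d_singular_values(tensor):
    """Method (1): run SVD on each (k x k) pre-Activation Map of a
    Conv2D tensor of shape (w, h, in, out), yielding w*h sets of
    min(in, out) singular values."""
    w, h, n_in, n_out = tensor.shape
    all_svals = []
    for i in range(w):
        for j in range(h):
            W_ij = tensor[i, j, :, :]        # one (in x out) feature map
            svals = np.linalg.svd(W_ij, compute_uv=False)
            all_svals.append(svals)
    return all_svals

# toy example: a 3x3 kernel with 8 input and 16 output channels
rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3, 8, 16))
svs = conv2d_singular_values(T)
assert len(svs) == 9 and len(svs[0]) == 8
```

For a typical $3\times 3$ kernel this yields $9$ small SVDs of $\min(in,out)$ singular values per layer, which is why method (1) is the cheapest of the three.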
\paragraph{Normalization of Empirical Matrices.}
Normalization is an important, if underappreciated, practical issue.
Importantly, the normalization of weight matrices does \emph{not} affect the PL fits because $\alpha$ is scale-invariant.
Norm-based metrics, however, do depend strongly on the scale of the weight matrix---that is the point.
To apply RMT, we usually define $\mathbf{X}$ with a $1/N$ normalization, assuming variance of $\sigma^{2}=1.0$.
Pretrained DNNs are typically initialized with random weight matrices $\mathbf{W}_{0}$, with $\sigma^{2}\sim 1/\sqrt{N}$, or some variant, e.g., the Glorot/Xavier normalization~\cite{GloBen10}, or a $\sqrt{2/Nk^2}$ normalization for Convolutional 2D Layers. With this implicit scale, we do \emph{not} ``renormalize'' the empirical weight matrices, i.e., we use them as-is.
The only exception is that \emph{we do rescale} the Conv2D pre-activation maps $\mathbf{W}_{i,L}$ by $k/\sqrt{2}$ so that they are on the same scale as the Linear / Fully Connected (FC) layers.
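The scale-invariance claim above can be checked directly: rescaling $\mathbf{W}$ by a constant $c$ multiplies every eigenvalue of $\mathbf{X}$ by $c^{2}$, which shifts norm-based metrics but leaves eigenvalue ratios, and hence the log-log slope that the PL exponent measures, unchanged. A minimal numpy check (the matrix sizes and the value of $c$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(500, 200))
c = 4.0                                   # arbitrary rescaling, e.g. k/sqrt(2)

lam   = np.linalg.eigvalsh(W.T @ W)       # eigenvalues of X = W^T W
lam_c = np.linalg.eigvalsh((c * W).T @ (c * W))

# Norm-based metrics shift with the scale ...
assert np.isclose(lam_c.max(), c**2 * lam.max())
# ... but log-eigenvalues only translate by a constant, so the fitted
# PL exponent alpha (a log-log slope) is unchanged.
assert np.allclose(np.log(lam_c) - np.log(lam), 2 * np.log(c))
```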
\paragraph{Special consideration for NLP models.}
NLP models, and other models with large initial embeddings, require special care because the embedding layers frequently lack the implicit $1/\sqrt{N}$ normalization present in other layers.
For example, in GPT, for most layers, the maximum eigenvalue $\lambda_{max}\sim\mathcal{O}(10-100)$, but in the first embedding layer, the maximum eigenvalue is of order $N$ (the number of words in the embedding), or $\lambda_{max}\sim\mathcal{O}(10^{5})$.
For GPT and GPT2, we treat all layers as-is (although one may want to normalize the first 2 layers $\mathbf{X}$ by $1/N$, or to treat them as outliers).
\subsection{Comparison of NLP Models}
\label{sxn:nlp}
Within the past few years, nearly 100 open source, pretrained NLP DNNs based on the revolutionary Transformer architecture have emerged.
These include variants of BERT, Transformer-XML, GPT, etc.
The Transformer architectures consist of blocks of so-called Attention layers, containing two large, Feed Forward (Linear) weight matrices~\cite{Attn2017}.
In contrast to the smaller pre-Activation maps arising in Conv2D layers, Attention matrices are significantly larger.
In general, they have larger PL exponents $\alpha$.
Based on HT-SR Theory
(in particular, the interpretation of values of $\alpha \sim 2$ as modeling systems with good correlations over many size scales~\cite{BouchaudPotters03, SornetteBook}),
this suggests that these models fail to capture successfully many of the information correlations in the data (relative to their size) and thus are substantially under-trained.
More generally, compared to CV models,
modern NLP models have larger weight matrices and display different spectral properties.
While norm-based metrics perform reasonably well on well-trained NLP models, they often behave anomalously on poorly-trained models.
For such models, weight matrices may display rank collapse, decreased Frobenius mass, or unusually small Spectral norms.
This may be misinterpreted as ``smaller is better.''
Instead, it should probably be interpreted as arising from a mechanism similar to the way distillation can ``damage'' otherwise good models.
In contrast to norm-based metrics, PL-based metrics, including the Log $\alpha$-Norm metric and the Weighted Alpha metric, display more consistent behavior, even on less well-trained models.
To help identify when architectures need repair and when more and/or better data are needed,
one can use these metrics,
as well as the decomposition of the Weighted Alpha metric ($\alpha\log\lambda_{max}$) into its PL component ($\alpha$) and its norm component ($\log\lambda_{max}$), for each layer.
Many NLP models, such as early variants of GPT and BERT, have weight matrices with unusually large PL exponents (e.g., $\alpha\gg 6$).
This indicates these matrices may be under-correlated (i.e., over-parameterized, relative to the amount of data).
In this regime, the truncated PL fit itself is less reliable, because the Maximum Likelihood estimator it uses degrades for such large exponents; the specific $\alpha$ values returned are therefore only approximate, but the distinction between large and small $\alpha$ remains robust.
If the ESD is visually examined, one can usually describe these $\mathbf{W}$ as in the \textsc{Bulk-Decay} or \textsc{Bulk-plus-Spikes} phase from HT-SR Theory~\cite{MM18_TR,MM19_HTSR_ICML}.
Previous work~\cite{MM18_TR,MM19_HTSR_ICML} has conjectured that very well-trained DNNs would not have many outlier layers with $\alpha\gg 6$.
Consistent with this, more recent improved versions of GPT (shown below) and BERT (not shown) confirm~this.
\paragraph{OpenAI GPT Models.}
The OpenAI GPT and GPT2 series of models provide the opportunity to analyze two effects: increasing the sizes of both the data set and the architectures simultaneously; and training the same model with low-quality data versus high-quality data.
These models have the ability to generate fake text that appears to the human to be real, and they have generated media attention because of the potential for their misuse.
For this reason, the original GPT model released by OpenAI was trained on a deficient data set, rendering the model interesting but not fully functional.
Later, OpenAI released a much improved model, GPT2-small, which has the same architecture and number of layers as GPT, but which has been trained on a larger and better data set, making it remarkably good at generating (near) human-quality fake text.
Subsequent models in the GPT2 series were larger and trained on more data.
By comparing GPT2-small to GPT2-medium to GPT2-large to GPT2-xl, we can examine the effect of increasing data set and model size simultaneously, as well as analyze well-trained versus very-well-trained models.
By comparing the poorly-trained GPT to the well-trained GPT2-small, we can identify empirical indicators for when a model has been poorly-trained and thus may perform poorly when deployed.
The GPT models we analyze are deployed with the popular HuggingFace PyTorch library~\cite{huggingface}.
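The per-model averages reported in Table~\ref{table:nlp} are simple means of per-layer quantities. The following numpy sketch illustrates the averaging for the two norm-based columns over hypothetical layer matrices; the real computation runs the \texttt{WeightWatcher} tool over the HuggingFace checkpoints, and the base-10 logarithm here is an assumption:

```python
import numpy as np

def avg_log_norms(layer_matrices):
    """Average log Frobenius norm and log Spectral norm over a model's
    2D layer weight matrices (first two metric columns of Table 1)."""
    log_frob, log_spec = [], []
    for W in layer_matrices:
        lam = np.linalg.eigvalsh(W.T @ W)           # eigenvalues of X = W^T W
        log_frob.append(0.5 * np.log10(lam.sum()))  # log ||W||_F,   ||W||_F^2   = sum(lambda_i)
        log_spec.append(0.5 * np.log10(lam.max()))  # log ||W||_inf, ||W||_inf^2 = lambda_max
    return np.mean(log_frob), np.mean(log_spec)

# hypothetical stand-in for a model's extracted layer matrices
rng = np.random.default_rng(2)
layers = [rng.normal(size=(256, 64)) for _ in range(4)]
mean_frob, mean_spec = avg_log_norms(layers)
assert mean_spec < mean_frob      # lambda_max < sum(lambda_i), layer by layer
```

As in the table, embedding layers would be excluded from \texttt{layer\_matrices} because they are normalized differently.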
\begin{table}[t]
\small
\begin{center}
\begin{tabular}{|p{1in}|c|c|c|c|c|}
\hline
Series & \# & $\langle\log\Vert\mathbf{W}\Vert_{F}\rangle$ & $\langle\log\Vert\mathbf{W}\Vert_{\infty}\rangle$ & $\hat{\alpha}$ & $\langle\log\Vert\mathbf{X}\Vert^{\alpha}_{\alpha}\rangle$ \\
\hline
GPT & 49 & 1.64 & 1.72 & 7.01 & 7.28 \\
GPT2-small & 49 & 2.04 & 2.54& 9.62 & 9.87 \\
GPT2-medium & 98 & 2.08 & 2.58& 9.74 & 10.01 \\
GPT2-large & 146 & 1.85 & 1.99& 7.67 & 7.94 \\
GPT2-xl & 194 & 1.86 & 1.92 & 7.17 & 7.51 \\
\hline
\end{tabular}
\end{center}
\caption{Averages of the Log Norm and Weighted Alpha metrics for pretrained OpenAI GPT and GPT2 models.
Column \# refers to number of layers treated.
Averages do not include the first embedding layer(s) because they are not (implicitly) normalized.
GPT has 12 layers, with 4 Multi-head Attention Blocks, giving $48$ layer Weight Matrices, $\mathbf{W}$.
Each Block has 2 components, the Self Attention (attn) and the Projection (proj) matrices.
Self-attention matrices are larger, of dimension ($2304\times 768$) or ($3072\times 768$).
The projection layer concatenates the self-attention results into a vector (of dimension $768$).
This gives $50$ large matrices.
Because GPT and GPT2 are trained on different data sets, the initial Embedding matrices differ in shape.
GPT has an initial Token and Positional Embedding layers, of dimension $(40478\times 768)$ and $(512\times 768)$, respectively, whereas GPT2 has input Embeddings of shape $(50257\times 768)$ and $(1024\times 768)$, respectively.
The OpenAI GPT2 (English) models are: GPT2-small, GPT2-medium, GPT2-large, and GPT2-xl, having $12$, $24$, $36$, and $48$ layers, respectively, with increasingly larger weight~matrices.
}
\label{table:nlp}
\end{table}
\paragraph{Average Quality Metrics for GPT and GPT2.}
We examine the performance of the four quality metrics (Log Frobenius norm, Log Spectral norm, Weighted Alpha, and Log $\alpha$-Norm) for the OpenAI GPT and GPT2 pretrained models.
See Table \ref{table:nlp} for a summary of results.
Comparing trends across GPT2-medium, GPT2-large, and GPT2-xl,
we observe that (with one minor exception involving the Log Frobenius norm metric) all four metrics decrease as one goes from medium to large to xl.
This indicates that the larger models indeed look better than the smaller models, as expected.
GPT2-small violates this general trend, but only very slightly.
This could be due to under-optimization of the GPT2-small model, or to the fact that it is the smallest of the GPT2 series, since the metrics we present are most relevant for models at scale.
Aside from this minor discrepancy, overall for these well-trained models, all these metrics now behave as expected, i.e., there is no Scale Collapse and norms are decreasing with increasing~accuracy.
Comparing trends between GPT and GPT2-small reveals a different story.
Observe that all four metrics increase when going from GPT to GPT2-small, i.e., they are larger for the higher-quality model (higher quality since GPT2-small was trained to better data) and smaller for the lower-quality model, when the number of layers is held fixed.
This is unexpected.
Here, too, we can perform model diagnostics, by separating $\hat{\alpha}$ into its two components, $\alpha$ and $\lambda_{max}$, and examining the distributions of each.
In doing so, we see additional examples of Scale Collapse and additional evidence for Correlation Flow.
\paragraph{Layer Analysis: Scale Collapse in GPT and GPT2.}
We next examine the Spectral norm in GPT versus GPT2-small.
In Figure~\ref{fig:GPT-snorm-hist}, the poorly-trained GPT model has a smaller mean/median Spectral norm as well as, spuriously, many much smaller Spectral norms, compared to the well-trained GPT2-small.
This violates the conventional wisdom that smaller Spectral norms are better.
Because there are so many anomalously small Spectral norms, the GPT model appears to be exhibiting a kind of Scale Collapse, like that observed (in Figure~\ref{fig:resnet204D5L}) for the distilled CV models.
This demonstrates that, while the Spectral (or Frobenius) norm may correlate well with predicted test error, at least among reasonably well-trained models, it is not a good indicator of the overall model quality in general.
Na\"ively using it as an empirical quality metric may give spurious results when applied to poorly-trained or otherwise deficient~models.
\begin{figure}[hbt!] %
\centering
\subfigure[Log Spectral Norm ($\log\Vert\mathbf{W}\Vert_{\infty}$)]{
\includegraphics[width=5.5cm]{img/GPT-snorm-hist.png}
\label{fig:GPT-snorm-hist}
}
\qquad
\subfigure[PL exponent ($\alpha$)]{
\includegraphics[width=5.5cm]{img/GPT-alpha-hist.png}
\label{fig:GPT-alpha-hist}
}
\caption{Histogram of PL exponents
%
and Log Spectral Norms
%
for weight matrices from the OpenAI GPT and GPT2-small pretrained models.}
\label{fig:GPT-hist}
\end{figure}
\begin{figure}[hbt!] %
\centering
\subfigure[Log Spectral Norm ($\log\Vert\mathbf{W}\Vert_{\infty}$)]{
\includegraphics[width=5.5cm]{img/GPT-snorm-depth.png}
\label{fig:gpt-snorm-layer}
}
\quad
\subfigure[PL exponent ($\alpha$)]{
\includegraphics[width=5.5cm]{img/GPT-alpha-depth.png}
\label{fig:gpt-alpha-layer}
}
\caption{%
Log Spectral Norms
%
(in (a))
and
PL exponents
%
(in (b))
for weight matrices from the OpenAI GPT and GPT2-small pretrained models.
(Note that the quantities shown on each Y axis are different.)
In the text, this is interpreted in terms of
Scale Collapse
and
Correlation~Flow.
}
\label{fig:gpt-alpha-layers}
\end{figure}
Figure~\ref{fig:gpt-snorm-layer} shows the Spectral norm as a function of depth (layer id).
This illustrates two phenomena.
First, the large value of Spectral norm (in Figure~\ref{fig:GPT-snorm-hist}) corresponds to the first embedding layer(s).
These layers have a different effective normalization, and therefore a different scale.
See the Supplementary Information
for details.
We do not include them in our computed average metrics in Table~\ref{table:nlp}.
Second, for GPT, there seem to be two types of layers with very different Spectral norms (an effect which is seen, but to a much weaker extent, for GPT2-small).
Recall that attention models have two types of layers, one small and one large; the Spectral norm (and other norms as well) displays unusually small values for some of these layers in GPT.
This Scale Collapse for the poorly-trained GPT is similar to what we observed for the distilled ResNet20 model in Figure~\ref{fig:resnet204Dalpha}.
Because of the anomalous Scale Collapse that is frequently observed in poorly-trained models, these results suggest that scale-dependent norm metrics should not be directly applied to distinguish well-trained versus poorly-trained models.
\paragraph{Layer Analysis: Correlation Flow in GPT and GPT2.}
We next examine the distribution of $\alpha$ values in GPT versus GPT2-small.
Figure~\ref{fig:GPT-alpha-hist} shows the histogram (empirical density), for all layers, of $\alpha$ for GPT and GPT2-small.
The older deficient GPT has numerous unusually large $\alpha$ exponents---meaning they are not well-described by a PL fit.
Indeed, we expect that a poorly-trained model will lack good (i.e., small $\alpha$) PL behavior in many/most layers.
On the other hand, the newer improved GPT2-small model has, on average, smaller $\alpha$ values than the older GPT, with all $\alpha\le6$ and with smaller mean/median $\alpha$.
It also has far fewer unusually-large outlying $\alpha$ values than GPT.
From this (and other results not shown), we see that $\bar{\alpha}$ from Eqn.~(\ref{eqn:alpha_bar}) provides a good quality metric for comparing the poorly-trained GPT versus the well-trained GPT2-small.
This should be contrasted with the behavior displayed by scale-dependent metrics such as the Frobenius norm (not shown) and the Spectral~norm.
This also reveals why $\hat{\alpha}$ performs unusually in Table~\ref{table:nlp}.
The PL exponent $\alpha$ behaves as expected, and thus the scale-invariant $\bar{\alpha}$ metric lets us identify potentially poorly-trained models.
It is the Scale Collapse that causes problems for $\hat{\alpha}$ (recall that the scale enters into $\hat{\alpha}$ via the weights $\log\lambda_{max}$).
Figure~\ref{fig:gpt-alpha-layer} plots $\alpha$ versus the depth (layer id) for each model.
The deficient GPT model displays two trends in $\alpha$, one stable with $\alpha\sim 4$, and one increasing with layer id, with $\alpha$ reaching as high as $12$.
In contrast, the well-trained GPT2-small model shows consistent and stable patterns, again with one stable $\alpha\sim 3.5$ (and below the GPT trend), and the other only slightly trending up, with $\alpha\le 6$.
These results show that the behavior of $\alpha$ across layers differs significantly between GPT and GPT2-small, with the better GPT2-small looking more like the better ResNet-1K from Figure~\ref{fig:resnet-alpha-layer}.
These results also suggest that smaller, more stable values of $\alpha$ across depth are beneficial, i.e., that the Correlation Flow is also a useful concept for NLP~models.
\paragraph{GPT2: medium, large, xl.}
We now look across series of increasingly improving GPT2 models (well-trained versus very-well-trained models), by examining both the PL exponent $\alpha$ as well as the Log Norm metrics.
Figure \ref{fig:gpt2-histograms} shows the histograms over the layer weight matrices for fitted PL exponent $\alpha$ and the Log Alpha Norm metric.
In general, and as expected, as we move from GPT2-medium to GPT2-xl, histograms for both $\alpha$ exponents and the Log Norm metrics downshift from larger to smaller values.
From Figure~\ref{fig:gpt2-alpha-hist}, we see that
$\bar{\alpha}$, the average $\alpha$ value, decreases overall with increasing model size ($3.82$ for GPT2-medium, $3.97$ for GPT2-large, and $3.81$ for GPT2-xl), although the differences are less noticeable between the differing well-trained versus very-well-trained GPT2 models than between the poorly-trained versus well-trained GPT and GPT2-small models.
Also, from Figure~\ref{fig:gpt2-pnorm-hist}, we see that,
unlike GPT, the layer Log Alpha Norms behave more as expected for GPT2 layers, with the larger models consistently having smaller norms ($9.96$ for GPT2-medium, $7.98$ for GPT2-large, and $7.49$ for GPT2-xl).
Similarly, the Log Spectral Norm also decreases on average with the larger models ($2.58$ for GPT2-medium, $1.99$ for GPT2-large, and $1.92$ for GPT2-xl).
As expected, the norm metrics can indeed distinguish among well-trained versus very-well-trained models.
While the means and peaks of the $\alpha$ distributions are getting smaller, towards $2.0$, as expected, Figure~\ref{fig:gpt2-alpha-hist} also shows that the tails of the $\alpha$ distributions shift right, with larger GPT2 models having more unusually large $\alpha$ values.
This is unexpected.
It suggests that these larger GPT2 models are still under-optimized/over-parameterized (relative to the data on which they were trained) and that they have capacity to support datasets even larger than the recent XL $1.5B$ release~\cite{gpt2-xl}.
This does not contradict recent theoretical work on the benefits of over-parameterization~\cite{BHMM19}, e.g., since in practice these extremely large models are not fully optimized.
Subsequent refinements to these models, and other models such as BERT, indicate that this is likely the~case.
\begin{figure}[htb]
\centering
\subfigure[PL exponent ($\alpha$)]{
\includegraphics[width=5.5cm]{img/GPT2_fnl_alpha_hist.png}
\label{fig:gpt2-alpha-hist}
}
\quad
\subfigure[Log Alpha Norm]{
\includegraphics[width=5.5cm]{img/GPT2_fnl_log_alpha_norm_hist.png}
\label{fig:gpt2-pnorm-hist}
}
\caption{Histogram of PL exponents
%
and Log Alpha Norm
%
for weight matrices from models of different sizes in the GPT2 architecture series. (Plots omit the first 2 (embedding) layers, because they are normalized differently giving anomalously large values.)
}
\label{fig:gpt2-histograms}
\end{figure}
\subsection{Overall Approach}
\label{sxn:approach}
Consider the objective/optimization function (parameterized by $\mathbf{W}_{l}$s and $\mathbf{b}_{l}$s) for a DNN with $L$ layers, and weight matrices $\mathbf{W}_{l}$ and bias vectors $\mathbf{b}_{l}$, as
the minimization of a general loss function $\mathcal{L}$ over the training data instances and labels, $\{\mathbf{x}_{i},y_{i}\}\in\mathcal{D}$.
For a typical supervised classification problem, the goal of training is to construct (or learn) $\mathbf{W}_{l}$ and $\mathbf{b}_{l}$ that capture correlations in the data, in the sense of solving
\begin{equation}
\underset{\mathbf{W}_{l},\mathbf{b}_{l}}{\text{argmin}}\;\sum_{i=1}^{N} \mathcal{L}(E_{DNN}(\mathbf{x}_{i}),y_{i}) ,
\end{equation}
where the loss function $\mathcal{L}(\cdot,\cdot)$ can take on a myriad of forms~\cite{JC17_TR}, and where the energy (or optimization) landscape function
\begin{equation}
E_{DNN} = f(\mathbf{x}_{i} ; \mathbf{W}_{1},\ldots,\mathbf{W}_{L},\mathbf{b}_{1},\ldots,\mathbf{b}_{L})
\label{eqn:dnn_energy}
\end{equation}
depends parametrically on the weights and biases.
For a trained model, the form of the function $E_{DNN}$ does not explicitly depend on the data (but it does explicitly depend on the weights and biases).
The function $E_{DNN}$ maps data instance vectors ($\mathbf{x}_i$ values) to predictions ($y_{i}$ labels), and thus the output of this function does depend on the data.
Therefore, one can analyze the form of $E_{DNN}$ in the absence of any training or test~data.
Test accuracies have been reported online for publicly-available pretrained pyTorch models~\cite{osmr}.
These models have been trained and evaluated on labeled data $\{\mathbf{x}_{i},y_{i}\}\in\mathcal{D}$, using standard techniques.
We do not have access to this data, and we have not trained any of the models ourselves.
Our methodological approach is thus similar to a statistical meta-analysis, common in biomedical research, but uncommon in ML.
Computations were performed with the publicly-available \texttt{WeightWatcher} tool (version 0.2.7)~\cite{weightwatcher_package}.
To be fully reproducible, we only examine publicly-available, pretrained models, and we provide all Jupyter and Google Colab notebooks used in an accompanying github repository~\cite{kdd20_sub_repo}.
See the Supplementary Information
for details.
\paragraph{Metrics for DNN Weight Matrices.}
Our approach involves analyzing individual DNN weight matrices, for (depending on the architecture) fully-connected and/or convolutional layers.
Each DNN layer contains one or more layer 2D $N_{l}\times M_{l}$ weight matrices, $\mathbf{W}_{l}$, or pre-activation maps, $\mathbf{W}_{i,l}$, e.g., extracted from 2D Convolutional layers, where $N \ge M$.
See the Supplementary Information
for details.
(We may drop the $i$ and/or $i,l$ subscripts below.)
The best performing quality metrics depend on the norms and/or spectral properties of each weight matrix,
$\mathbf{W}$, and/or, equivalently, its empirical correlation matrix, $\mathbf{X}=\mathbf{W}^{T}\mathbf{W}$.
To evaluate the quality of state-of-the-art DNNs,
we consider the following metrics:
\begin{eqnarray}
& & \text{Frobenius Norm: $\Vert\mathbf{W}\Vert^{2}_{F}=\Vert\mathbf{X}\Vert_{F}=\sum\nolimits_{i=1}^{M} \lambda_{i}$ } \\
& & \text{Spectral Norm: $\Vert\mathbf{W}\Vert_{\infty}^{2}=\Vert\mathbf{X}\Vert_{\infty}=\lambda_{max}$ } \\
& & \text{Weighted Alpha: $\hat{\alpha}=\alpha\log\lambda_{max}$ } \\
& & \text{$\alpha$-Norm (or $\alpha$-Shatten Norm): $\Vert\mathbf{W}\Vert^{2\alpha}_{2\alpha}=\Vert\mathbf{X}\Vert^{\alpha}_{\alpha}=\sum\nolimits_{i=1}^{M}\lambda_{i}^{\alpha}$. }
\end{eqnarray}
To perform diagnostics on potentially-problematic DNNs,
we will decompose $\hat{\alpha}$ into its two components, $\alpha$ and $\lambda_{max}$.
Here, $\lambda_{i}$ is the $i^{th}$ eigenvalue of the $\mathbf{X}$, $\lambda_{max}$ is the maximum eigenvalue, and $\alpha$ is the fitted PL exponent.
These eigenvalues are squares of the singular values $\sigma_{i}$ of $\mathbf{W}$, $\lambda_{i}=\sigma^{2}_{i}$.
All four metrics can be computed easily from DNN weight matrices.
The first two metrics are well-known in ML.
The last two metrics deserve special mention, as they depend on an empirical parameter $\alpha$ that is the PL exponent that arises in the recently-developed Heavy Tailed Self Regularization (HT-SR) Theory~\cite{MM18_TR, MM19_HTSR_ICML, MM20_SDM}.
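A compact numpy rendering of the four metrics is given below. The tail selection and the continuous maximum-likelihood estimator are simplified stand-ins for the truncated PL fit performed by \texttt{WeightWatcher}; the \texttt{tail\_frac} cutoff and the base-10 logarithm are assumptions:

```python
import numpy as np

def layer_metrics(W, tail_frac=0.5):
    """Compute the four quality metrics from one weight matrix W.
    The PL exponent uses a crude continuous MLE on the upper tail of
    the ESD: alpha = 1 + n / sum(ln(lambda_i / lambda_min))."""
    lam = np.sort(np.linalg.eigvalsh(W.T @ W))   # ESD of X = W^T W, ascending
    lam_max = lam[-1]

    tail = lam[int(len(lam) * (1.0 - tail_frac)):]
    alpha = 1.0 + len(tail) / np.log(tail / tail[0]).sum()

    return {
        "log_frobenius_norm": np.log10(lam.sum()),        # log ||W||_F^2
        "log_spectral_norm":  np.log10(lam_max),          # log ||W||_inf^2 = log lambda_max
        "alpha":              alpha,                      # fitted PL exponent
        "weighted_alpha":     alpha * np.log10(lam_max),  # alpha-hat
        "log_alpha_norm":     np.log10((lam ** alpha).sum()),
    }

rng = np.random.default_rng(3)
m = layer_metrics(rng.normal(size=(300, 100)))
assert m["log_spectral_norm"] < m["log_frobenius_norm"]
assert m["alpha"] > 1.0
```

Decomposing $\hat{\alpha}$ into its two components is then just a matter of reading \texttt{alpha} and \texttt{log\_spectral\_norm} separately.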
\paragraph{Overview of Heavy-Tailed Self-Regularization.}
In the HT-SR Theory, one analyzes the eigenvalue spectrum, i.e., the Empirical Spectral Density (ESD), of the associated correlation matrices~\cite{MM18_TR,MM19_HTSR_ICML,MM20_SDM}.
From this, one characterizes the amount and form of correlation, and therefore implicit self-regularization, present in the DNN's weight matrices.
For each layer weight matrix $\mathbf{W}$, of size $N \times M$, construct the associated $M\times M$ (uncentered) correlation matrix $\mathbf{X}$.
Dropping the $l$ and $i,l$ indices, one~has
$$
\mathbf{X} = \frac{1}{N}\mathbf{W}^{T}\mathbf{W}.
$$
If we compute the eigenvalue spectrum of $\mathbf{X}$, i.e., $\lambda_i$ such that
$ %
\mathbf{X}\mathbf{v}_{i}=\lambda_{i}\mathbf{v}_{i} ,
$ %
then the ESD of eigenvalues, $\rho(\lambda)$, is just a histogram of the eigenvalues, formally written as
$\rho(\lambda)=\sum\nolimits_{i=1}^{M}\delta(\lambda-\lambda_{i}) .$
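The construction of the ESD can be sketched in a few lines of NumPy (ours; the normalization by $N$ follows the definition above):

```python
# Minimal sketch of forming the ESD: given W (N x M, N >= M), build the
# uncentered correlation matrix X = W^T W / N and histogram its eigenvalues.
import numpy as np

def esd(W, bins=100):
    W = np.asarray(W, dtype=float)
    N = W.shape[0]
    X = W.T @ W / N                      # M x M uncentered correlation matrix
    lam = np.linalg.eigvalsh(X)          # real, non-negative eigenvalues
    density, edges = np.histogram(lam, bins=bins, density=True)
    return lam, density, edges
```

For a random Gaussian $\mathbf{W}$ this histogram follows the familiar random-matrix bulk; a heavy tail signals learned correlations.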
Using HT-SR Theory, one characterizes the correlations in a weight matrix by examining its ESD, $\rho(\lambda)$.
It can be well-fit to a truncated PL distribution, given~as
\begin{equation}
\rho(\lambda)\sim\lambda^{-\alpha} ,
\label{eqn:eigenval_pl}
\end{equation}
which is (at least) valid within a bounded range of eigenvalues $\lambda\in[\lambda^{min},\lambda^{max}]$.
The original work on HT-SR Theory considered a small number of NNs, including AlexNet and InceptionV3.
It showed that for nearly every $\mathbf{W}$, the (bulk and tail) of the ESDs can be fit to a truncated PL, and that PL exponents $\alpha$ nearly all lie within the range $\alpha\in(1.5,5)$~\cite{MM18_TR,MM19_HTSR_ICML,MM20_SDM}.
As for the mechanism responsible for these properties, statistical physics offers several possibilities~\cite{SornetteBook,nishimori01}, e.g., self organized criticality~\cite{SOC87,SOCat25yrs} or multiplicative noise in the stochastic optimization algorithms used to train these models~\cite{HodMah20A_TR,SorCon97}.
Alternatively, related techniques have been used to analyze correlations and information propagation in actual spiking neurons~\cite{SYYRP11,YKYP14}.
Our meta-analysis does not require knowledge of mechanisms; and it is not even clear that one mechanism is responsible for every case.
Crucially, HT-SR Theory predicts that smaller values of $\alpha$ should correspond to models with better correlation over multiple size scales and thus to better models.
The notion of ``size scale'' is well-defined in physical systems, to which this style of analysis is usually applied, but it is less well-defined in CV and NLP applications.
Informally, it would correspond to pixel groups that are at a greater distance in some metric, or between sentence parts that are at a greater distance in text.
Relatedly, previous work observed that smaller exponents $\alpha$ correspond to more implicit self-regularization and better generalization, and that we expect a linear correlation between $\hat{\alpha}$ and model quality~\cite{MM18_TR,MM19_HTSR_ICML,MM20_SDM}.
\paragraph{DNN Empirical Quality Metrics.}
For norm-based metrics, we use the average of the log norm, to the appropriate power.
Informally, this amounts to assuming that the layer weight matrices are statistically independent, in which case we can estimate the model complexity $\mathcal{C}$, or test accuracy, with a standard Product Norm (which resembles a data-dependent VC complexity),
\begin{equation}
\mathcal{C}\sim\Vert\mathbf{W}_{1}\Vert\times\Vert\mathbf{W}_{2}\Vert \times \cdots \times \Vert\mathbf{W}_{L}\Vert ,
\end{equation}
where $\Vert\cdot\Vert$ is a matrix norm.
The log complexity,
\begin{equation}
\label{eqn:eqn:sum_log_norm}
\log\mathcal{C} \sim \log\Vert\mathbf{W}_{1}\Vert+\log\Vert\mathbf{W}_{2}\Vert + \cdots + \log\Vert\mathbf{W}_{L}\Vert = \sum\nolimits_l \log\Vert\mathbf{W}_{l}\Vert ,
\end{equation}
takes the form of an average Log Norm.
For the Frobenius Norm metric and Spectral Norm metric, we can use Eqn.~(\ref{eqn:eqn:sum_log_norm}) directly (since, when taking $\log\Vert\mathbf{W}_{l}\Vert_{F}^{2}$, the $2$ comes down and out of the sum, and thus ignoring it only changes the metric by a constant factor).
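A minimal sketch (ours) of the resulting average Log Norm metric, here using the squared Frobenius norm per layer; the base-10 logarithm is our choice and only changes the metric by a constant factor:

```python
# Sketch: average log squared-Frobenius-norm over a list of layer matrices,
# i.e. the "average Log Norm" form of the log product-norm complexity.
import numpy as np

def log_norm_metric(weight_matrices):
    logs = [np.log10(np.sum(np.linalg.svd(W, compute_uv=False)**2))
            for W in weight_matrices]
    return sum(logs) / len(logs)
```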
The Weighted Alpha metric is an average of $\alpha_l$ over all layers $l \in \{1,\ldots,L\}$, weighted by the size, or scale, of each matrix,
\begin{equation}
\hat{\alpha} = \dfrac{1}{L}\sum_l \alpha_l\log\lambda_{max,l}\approx\langle\log\Vert\mathbf{X}\Vert_{\alpha}^{\alpha}\rangle ,
\end{equation}
where $L$ is the total number of layer weight matrices.
The Weighted Alpha metric was introduced previously~\cite{MM20_SDM}, where it was shown to correlate well with trends in reported test accuracies of pretrained DNNs, albeit on a much smaller and more limited set of models than we consider here.
Based on this, in this paper, we introduce and evaluate the $\alpha$-Shatten Norm metric,
\begin{equation}
\label{eqn:sum_log_alpha_norm_alpha}
\sum\nolimits_l \log \Vert\mathbf{X}_l\Vert_{\alpha_l}^{\alpha_l}
=
\sum\nolimits_l \alpha_l \log \Vert\mathbf{X}_l\Vert_{\alpha_l} .
\end{equation}
For the $\alpha$-Shatten Norm metric, $\alpha_l$ varies from layer to layer, and so in Eqn.~(\ref{eqn:sum_log_alpha_norm_alpha}) it cannot be taken out of the sum.
For small $\alpha$, the Weighted Alpha metric approximates the Log $\alpha$-Shatten norm, as can be shown with a statistical mechanics and random matrix theory derivation;
and the Weighted Alpha and $\alpha$-Shatten norm metrics often behave like an improved, weighted average Log Spectral Norm.
Finally, although it does less well for predicting trends in state-of-the-art model series, e.g., as depth changes, the average value of $\alpha$, i.e.,
\begin{equation}
\bar{\alpha} = \dfrac{1}{L}\sum_l \alpha_l = \langle\alpha\rangle ,
\label{eqn:alpha_bar}
\end{equation}
can be used to perform model diagnostics, to identify problems that cannot be detected by examining training/test accuracies, and to discriminate poorly-trained models from well-trained~models.
One determines $\alpha$ for a given layer by fitting the ESD of that layer's weight matrix to a truncated PL, using the commonly accepted Maximum Likelihood method~\cite{CSN09_powerlaw,ABP14}.
This method works very well for exponents between $\alpha\in(2,4)$; and it is adequate, although imprecise, for smaller and especially larger $\alpha$~\cite{newman2005_zipf}.
Operationally, $\alpha$ is determined by using the \texttt{WeightWatcher} tool~\cite{weightwatcher_package} to fit the histogram of eigenvalues, $\rho(\lambda)$, to a truncated PL,
\begin{equation}
\rho(\lambda)\sim\lambda^{-\alpha},\;\;\lambda\in[\lambda_{min},\lambda_{max}] ,
\end{equation}
where $\lambda_{max}$ is the largest eigenvalue of $\mathbf{X}=\mathbf{W}^{T}\mathbf{W}$, and
where $\lambda_{min}$ is selected automatically to yield the best (in the sense of minimizing the K-S distance) PL fit.
Each of these quantities is defined for a given layer $\mathbf{W}$ matrix.
See Figure~\ref{fig:ww} for an illustration.
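The fitting procedure can be sketched as follows (ours, a simplified Clauset--Shalizi--Newman style fit; unlike the actual truncated-PL fit in \texttt{WeightWatcher}, it ignores the upper cutoff $\lambda_{max}$):

```python
# Sketch: for each candidate lambda_min, compute the MLE exponent and the
# K-S distance of the tail against the fitted PL; keep the lambda_min that
# minimizes the K-S distance.
import numpy as np

def fit_powerlaw(lam):
    """Return (alpha, lambda_min, ks) for the eigenvalue sample lam."""
    lam = np.sort(np.asarray(lam, dtype=float))
    best = (np.inf, None, None)          # (ks, alpha, lambda_min)
    for lmin in lam[:-1]:
        tail = lam[lam >= lmin]
        n = len(tail)
        if n < 10:                       # too few points for a stable fit
            break
        alpha = 1.0 + n / np.sum(np.log(tail / lmin))   # MLE exponent
        cdf_model = 1.0 - (tail / lmin) ** (1.0 - alpha)
        cdf_emp = np.arange(n) / n
        ks = np.max(np.abs(cdf_emp - cdf_model))
        if ks < best[0]:
            best = (ks, alpha, lmin)
    return best[1], best[2], best[0]
```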
\begin{figure}[t]
\centering
%
\includegraphics[width=15.0cm]{img/WeightWatcher_v3}
\caption{Schematic of analyzing DNN layer weight matrices $\mathbf{W}$.
Given an individual layer weight matrix $\mathbf{W}$, from either a fully-connected layer or a convolutional layer, perform a Singular Value Decomposition (SVD) to obtain $\mathbf{W} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^{T}$, and examine the histogram of eigenvalues of $\mathbf{W}^{T}\mathbf{W}$.
Norm-based metrics and PL-based metrics (that depend on fitting the histogram of eigenvalues to a truncated PL) can be used to compare models.
For example, one can analyze one layer of a pre-trained model, compare multiple layers of a pre-trained model, make comparisons across model architectures, monitor neural network properties during training, etc.
}
\label{fig:ww}
\end{figure}
To avoid confusion, let us clarify the relationship between $\alpha$ and $\hat{\alpha}$.
We fit the ESD of the correlation matrix $\mathbf{X}$ to a truncated PL, parameterized by 2 values: the PL exponent $\alpha$, and the maximum eigenvalue $\lambda_{max}$.
The PL exponent $\alpha$ measures the amount of correlation in a DNN layer weight matrix $\mathbf{W}$.
It is valid for $\lambda\le\lambda_{max}$, and it is scale-invariant, i.e., it does not depend on the normalization of $\mathbf{W}$ or $\mathbf{X}$.
The $\lambda_{max}$ is a measure of the size, or scale, of $\mathbf{W}$.
Multiplying each $\alpha$ by the corresponding $\log\lambda_{max}$ weighs ``bigger'' layers more, and averaging this product leads to a balanced, Weighted Alpha metric $\hat{\alpha}$ for the entire~DNN.
We will see that for well-trained CV and NLP models, $\hat{\alpha}$ performs quite well and as expected, but for CV and NLP models that are potentially-problematic or less well-trained, metrics that depend on the scale of the problem can perform anomalously.
In these cases, separating $\hat{\alpha}$ into its two components, $\alpha$ and $\lambda_{max}$, and examining the distributions of each, can be helpful.
\section{Results}
After describing our overall approach, we study in detail three well-known CV architecture series (the VGG, ResNet, and DenseNet series of models).
Then, we look in detail at several variations of a popular NLP architecture series (the OpenAI GPT and GPT2 series of models), and we present results from a broader analysis of hundreds of pretrained DNN~models.
\input{NatCom_overall_approach}
\input{NatCom_cv_models}
\input{NatCom_nlp_models}
\input{NatCom_all_models}
\input{NatCom_discussion}
\input{NatCom_methods}
\noindent
\paragraph{Acknowledgements.}
MWM would like to acknowledge ARO, DARPA, NSF, and ONR as well as the UC Berkeley BDD project and a gift from Intel for providing partial support of this work.
Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be inferred.
We would also like to thank Amir Khosrowshahi and colleagues at Intel for helpful discussion regarding the Group Regularization distillation technique.
\noindent
\paragraph{Data availability.}
Data analyzed during the study are all publicly-available; and data generated during the study are available along with the code to generate them in our public repository.
\noindent
\paragraph{Code availability.}
Code sufficient to generate the results of the study is available in our public repository (\url{https://github.com/CalculatedContent/ww-trends-2020}).
\bibliographystyle{unsrt}
Evolution of a nonrelativistic quantum system from a given initial state to
the final state is governed by the (time-dependent) Schr\"{o}dinger
equation. Unfortunately, its explicit solutions are available only for the
simplest Hamiltonians and, in general, one has to rely on a variety of
approximation, asymptotic and numerical methods. Luckily among the
integrable cases are the so-called quadratic Hamiltonians that attracted
substantial attention over the years in view of their great importance to
many advanced quantum problems. Examples can be found in quantum and
physical optics \cite{Delgadoetal98}, \cite{Klauder:Sudarshan},
\cite{PadillaMaster}, \cite{Reithmaieretal97}, physics of lasers and masers
\cite{Sargent:Scully:Lamb74}, \cite{Tarasov83}, \cite{Scully:Zubairy97},
\cite{Walls94}, molecular spectroscopy \cite{Dokt:Mal:Man77}, quantum chemistry,
quantization of mechanical systems \cite{Degas:Ruijsenaars01},
\cite{Faddeyev69}, \cite{FeynmanPhD}, \cite{Feynman}, \cite{Fey:Hib},
\cite{Kochan07}, \cite{Kochan10} and Hamiltonian cosmology
\cite{Bertoni:Finelli:Venturi98}, \cite{Finelli:Gruppuso:Venturi99},
\cite{Finelli:Vacca:Venturi98}, \cite{Hawkins:Lidsey02}, \cite{IacobF},
\cite{PadillaMaster}, \cite{Rosu:Espinoza99}, \cite{Rosu:Espinoza:Reyes99},
\cite{Ryan72}. They include coherent states \cite{Malkin:Man'ko79},
\cite{Malk:Man:Trif69}, \cite{Malk:Man:Trif70}, \cite{Klauder:Sudarshan} and
Berry's phase \cite{Berry85}, \cite{Berry:Hannay88},
\cite{Cervero:Lejarreta89}, \cite{Hannay85}, \cite{Leach90}, \cite{Morales88},
asymptotic and numerical methods \cite{Goyaletal93}, \cite{Kamenshchiketal06},
\cite{Kruskal62}, \cite{Milne30}, \cite{Mun:Ru-Paz:Wolf09}, charged
particle traps \cite{Major:Gheorghe:Werth} and motion in uniform magnetic
fields \cite{Cor-Sot:Lop:Sua:Sus}, \cite{Corant:Snyder58},
\cite{Dodonov:Man'koFIAN87}, \cite{La:Lif}, \cite{Lewis67}, \cite{Lewis68},
\cite{Lewis:Riesen69}, \cite{Malk:Man:Trif70}, polyatomic molecules in varying
external fields, crystals through which an electron is passing and exciting
the oscillator modes and other interactions of the modes with external
fields \cite{Fey:Hib}. Quadratic Hamiltonians have particular applications
in quantum electrodynamics because the electromagnetic field can be
represented as a set of forced harmonic oscillators \cite{Bo:Shi},
\cite{Fey:Hib}, \cite{Dodonov:Man'koFIAN87}, \cite{Gottf:T-MY},
\cite{Ivan:Mal:Man74}, and \cite{Merz}. Nonlinear oscillators play a central
role in the novel theory of Bose--Einstein condensation
\cite{Dal:Giorg:Pitaevski:Str99} based on the nonlinear Schr\"{o}dinger (or
Gross--Pitaevskii) equation \cite{Kagan:Surkov:Shlyap96},
\cite{Kagan:Surkov:Shlyap97}, \cite{Kivsh:Alex:Tur01}, \cite{Per-G:Tor:Mont}.\medskip
The one-dimensional Schr\"{o}dinger equation with variable quadratic
Hamiltonians of the form
\begin{equation}
i\frac{\partial \psi }{\partial t}=-a\left( t\right) \frac{\partial ^{2}\psi
}{\partial x^{2}}+b\left( t\right) x^{2}\psi -i\left( c\left( t\right) x
\frac{\partial \psi }{\partial x}+d\left( t\right) \psi \right) ,
\label{in1}
\end{equation}
where $a\left( t\right) ,$ $b\left( t\right) ,$ $c\left( t\right) ,$ and
$d\left( t\right) $ are real-valued functions of time $t$ only, can be
integrated in the following manner (see, for example,
\cite{Cor-Sot:Lop:Sua:Sus}, \cite{Cor-Sot:Sua:Sus}, \cite{Cor-Sot:Sus},
\cite{Dod:Mal:Man75}, \cite{Lan:Sus}, \cite{Lop:Sus}, \cite{Me:Co:Su},
\cite{SuazoF}, \cite{Sua:Sus}, \cite{Suaz:Sus}, \cite{Sua:Sus:Vega},
\cite{Wolf81}, and \cite{Yeon:Lee:Um:George:Pandey93} for a general approach
and some elementary solutions). The Green functions, or Feynman's
propagators, are given by \cite{Cor-Sot:Lop:Sua:Sus}, \cite{Suaz:Sus}
\begin{equation}
\psi =G\left( x,y,t\right) =\frac{1}{\sqrt{2\pi i\mu \left( t\right) }}\
e^{i\left( \alpha \left( t\right) x^{2}+\beta \left( t\right) xy+\gamma
\left( t\right) y^{2}\right) }, \label{in2}
\end{equation}
where
\begin{eqnarray}
&&\alpha \left( t\right) =\frac{1}{4a\left( t\right) }\frac{\mu ^{\prime
}\left( t\right) }{\mu \left( t\right) }-\frac{d\left( t\right) }{2a\left(
t\right) }, \label{in3} \\
&&\beta \left( t\right) =-\frac{h\left( t\right) }{\mu \left( t\right) }
,\qquad h\left( t\right) =\exp \left( -\int_{0}^{t}\left( c\left( \tau
\right) -2d\left( \tau \right) \right) \ d\tau \right) , \label{in4} \\
&&\gamma \left( t\right) =\frac{a\left( t\right) h^{2}\left( t\right) }{\mu
\left( t\right) \mu ^{\prime }\left( t\right) }+\frac{d\left( 0\right) }{
2a\left( 0\right) }-4\int_{0}^{t}\frac{a\left( \tau \right) \sigma \left(
\tau \right) h^{2}\left( \tau \right) }{\left( \mu ^{\prime }\left( \tau
\right) \right) ^{2}}\ d\tau \label{in5}
\end{eqnarray}
and the function $\mu \left( t\right) $ satisfies the so-called
characteristic equation
\begin{equation}
\mu ^{\prime \prime }-\tau \left( t\right) \mu ^{\prime }+4\sigma \left(
t\right) \mu =0 \label{in6}
\end{equation}
with
\begin{equation}
\tau \left( t\right) =\frac{a^{\prime }}{a}-2c+4d,\qquad \sigma \left(
t\right) =ab-cd+d^{2}+\frac{d}{2}\left( \frac{a^{\prime }}{a}-\frac{
d^{\prime }}{d}\right) \label{in7}
\end{equation}
subject to the initial data
\begin{equation}
\mu \left( 0\right) =0,\qquad \mu ^{\prime }\left( 0\right) =2a\left(
0\right) \neq 0. \label{in8}
\end{equation}
(More details can be found in Refs.~\cite{Cor-Sot:Lop:Sua:Sus},
\cite{Suaz:Sus} and a Hamiltonian structure is considered in
Refs.~\cite{Berry85}, \cite{Cor-Sot:Sus}.) Then, by the superposition
principle, the solution of the Cauchy initial value problem can be presented
in an integral form
\begin{equation}
\psi \left( x,t\right) =\int_{-\infty }^{\infty }G\left( x,y,t\right) \
\varphi \left( y\right) \ dy,\quad \lim_{t\rightarrow 0^{+}}\psi \left(
x,t\right) =\varphi \left( x\right) \label{CauchyInVProb}
\end{equation}
for a suitable initial function $\varphi $ on $\mathbb{R}$ (a rigorous
proof is given in Ref.~\cite{Suaz:Sus} and uniqueness is analyzed in this
paper).\medskip
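As a quick numerical consistency check (ours, not part of the original text), take the simple harmonic oscillator $a=b=1/2,$ $c=d=0$: then (\ref{in7}) gives $\tau =0,$ $4\sigma =1,$ so the characteristic equation (\ref{in6}) with data (\ref{in8}) reduces to $\mu ^{\prime \prime }+\mu =0,$ $\mu \left( 0\right) =0,$ $\mu ^{\prime }\left( 0\right) =1,$ i.e. $\mu \left( t\right) =\sin t$; a plain RK4 integration confirms this:

```python
# Numerical sanity check (ours): integrate the characteristic equation
# mu'' - tau(t) mu' + 4 sigma(t) mu = 0,  mu(0) = 0,  mu'(0) = 2 a(0),
# with RK4, for the simple harmonic oscillator tau = 0, sigma = 1/4,
# a(0) = 1/2, and compare with the closed form mu(t) = sin(t).
import math

def solve_characteristic(tau, sigma, a0, t_end, n=2000):
    h = t_end / n
    t, mu, dmu = 0.0, 0.0, 2.0 * a0
    def f(t, y, v):                      # (mu, mu')' = (v, tau v - 4 sigma y)
        return v, tau(t) * v - 4.0 * sigma(t) * y
    for _ in range(n):
        k1y, k1v = f(t, mu, dmu)
        k2y, k2v = f(t + h/2, mu + h/2*k1y, dmu + h/2*k1v)
        k3y, k3v = f(t + h/2, mu + h/2*k2y, dmu + h/2*k2v)
        k4y, k4v = f(t + h, mu + h*k3y, dmu + h*k3v)
        mu += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        dmu += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return mu

mu1 = solve_characteristic(lambda t: 0.0, lambda t: 0.25, 0.5, 1.0)
```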
We discuss integrals of motion for several particular models of the damped
and generalized quantum oscillators. The simple harmonic oscillator is of
interest in many quantum problems \cite{Fey:Hib}, \cite{La:Lif},
\cite{Merz}, and \cite{Schiff}. The forced harmonic oscillator was
originally considered by Richard Feynman in his path integrals approach to
the nonrelativistic quantum mechanics \cite{FeynmanPhD}, \cite{Feynman},
\cite{Feynman49a}, \cite{Feynman49b}, and \cite{Fey:Hib}; see also
\cite{Lop:Sus}. Its special and limiting cases were discussed in
Refs.~\cite{Beauregard}, \cite{Gottf:T-MY}, \cite{Holstein},
\cite{Maslov:Fedoriuk}, \cite{Merz}, \cite{Thomber:Taylor} for the simple
harmonic oscillator and in Refs.~\cite{Arrighini:Durante},
\cite{Brown:Zhang}, \cite{Holstein97}, \cite{Nardone}, \cite{Robinett} for
the particle in a constant external field; see also references therein. The
damped oscillations have been studied to a great extent in classical
mechanics~\cite{Bateman31}, \cite{BatemanPDE} and \cite{Lan:Lif}. Their
quantum analogs are introduced and analyzed from different viewpoints by
many authors; see, for example, \cite{Caldirola41},
\cite{Chand:Senth:Laksh07}, \cite{ChruAnnPhys06}, \cite{Chru:JurkAnnPhys06},
\cite{Chru:Jurk}, \cite{Cor-Sot:Sua:Sus}, \cite{Denman66}, \cite{Dekker81},
\cite{Dito:Turr06}, \cite{Dod:Man79}, \cite{Dod:Miz:Dod},
\cite{LeachAmJPhys78}, \cite{LeachSIAM78}, \cite{Kanai48}, \cite{Mont03},
\cite{Nieto:Truax}, \cite{Svin75}, \cite{Svin76}, \cite{Tarasov01},
\cite{Um:Yeon:George}, and references therein. The quantum parametric
oscillator with variable frequency is also largely studied in view of its
physical importance; see, for example, \cite{Cher:Man08},
\cite{Dodonov:Man'koFIAN87}, \cite{HusimiI53}, \cite{HusimiII53},
\cite{Lan:Sus}, \cite{Malkin:Man'ko79}, \cite{Malk:Man:Trif70},
\cite{Perelomov:Popov69}, \cite{Per:Zel}, \cite{Popov:PerelomovI69},
\cite{Popov:PerelomovII69}, \cite{Schuch08}, and \cite{Solimenoetal69}; a
detailed bibliography is given in \cite{Camizetall71}.\medskip
In the present paper we revisit a familiar topic of the quantum integrals of
motion for the time-dependent Schr\"{o}dinger equation
\begin{equation}
i\frac{\partial \psi }{\partial t}=H\left( t\right) \psi \label{in9}
\end{equation}
with variable quadratic Hamiltonians of the form
\begin{equation}
H=a\left( t\right) p^{2}+b\left( t\right) x^{2}+d\left( t\right) \left(
px+xp\right) , \label{in10}
\end{equation}
where $p=-i\partial /\partial x,$ $\hslash =1$ and $a\left( t\right) ,$
$b\left( t\right) ,$ $c\left( t\right) =2d\left( t\right) $ are some
real-valued functions of time only (see, for example, \cite{Dod:Mal:Man75},
\cite{Leach90}, \cite{Lewis:Riesen69}, \cite{Malk:Man:Trif70},
\cite{Malk:Man:Trif73}, \cite{Wolf81}, \cite{Yeon:Lee:Um:George:Pandey93}
and references therein). A related energy operator $E$ is defined in a
traditional way as a quadratic in $p$ and $x$ operator that has constant
expectation values \cite{Dodonov:Man'koFIAN87}:
\begin{equation}
\frac{d}{dt}\left\langle E\right\rangle =\frac{d}{dt}\int_{-\infty }^{\infty
}\psi ^{\ast }E\psi \ dx=0. \label{in11}
\end{equation}
It is well-known that such quadratic invariants are not unique. Although an
elegant general solution is known, say, for the parametric oscillator, it
involves an integration of nonlinear Ermakov's equation
\cite{Lewis:Riesen69}. Here the simplest energy operators are constructed
for several integrable models of the damped and modified quantum
oscillators. Then an extension of the familiar Lewis--Riesenfeld quadratic
invariant is given to the most general case of the variable non-self-adjoint
quadratic Hamiltonian (see also \cite{Leach90}, \cite{Wolf81},
\cite{Yeon:Lee:Um:George:Pandey93}; we do not use canonical transformations
and deal only with real-valued solutions of the corresponding generalized
Ermakov system), which seems to be missing in the available literature and
may be considered as the main result of this paper. (An attempt to collect
relevant references is made\footnote{A complete bibliography on classical
and quantum generalized harmonic oscillators, their invariants,
group-theoretical methods and applications is very extensive. Only the case
of the damped oscillators in \cite{Dekker81} includes about 600
references!}.) Group-theoretical aspects will be discussed elsewhere; we
only provide the factorization of the general quadratic invariant (see also
\cite{Suslov10}).\medskip
In general the average $\left\langle E\right\rangle $ is not positive. A
complete dynamics of the expectation values of some energy-related positive
operators is found instead for each model, which is a somewhat interesting
result on its own. In addition to other works \cite{Berry85},
\cite{Dodonov:Man'koFIAN87}, \cite{Dod:Mal:Man75}, \cite{Hannay85},
\cite{Lewis:Riesen69}, \cite{Malkin:Man'ko79}, \cite{Malk:Man:Trif73},
\cite{Wolf81}, \cite{Yeon:Lee:Um:George:Pandey93} these advances allow us to
discuss uniqueness of the corresponding Cauchy initial value problem for the
special models and for the general quadratic Hamiltonian under consideration
as a modest contribution to this well-developed area of quantum mechanics
and partial differential equations. Further relations of the quadratic
invariants with the solution of the initial value problem are discussed in
the forthcoming paper \cite{Suslov10}.\medskip
The paper is organized as follows. In Section~2 we review several exactly
solvable models of the damped and generalized oscillators in quantum
mechanics. Some of these \textquotedblleft exotic\textquotedblright\
oscillators with variable quadratic Hamiltonians appear to be missing,
and/or are just recently introduced, in the available literature. The
corresponding Green functions are found in terms of elementary functions.
The dynamical invariants and quadratic energy-related operators are
discussed in Sections~3 and 4. The last section is concerned with an
application to the Cauchy initial value problems. The classical equations of
motion for the expectation values of the position operator for the quantum
oscillators under consideration are derived in Appendix~A. The Heisenberg
uncertainty relation and linear dynamic invariants are revisited,
respectively, in Appendices~B and C. Solutions of a required differential
equation are given in Appendix~D to make our presentation as
self-contained as possible.
\section{Some Integrable Quadratic Hamiltonians}
Quantum systems with the Hamiltonians (\ref{in10}) are called the
generalized harmonic oscillators \cite{Berry85}, \cite{Dod:Mal:Man75},
\cite{Hannay85}, \cite{Leach90}, \cite{Wolf81},
\cite{Yeon:Lee:Um:George:Pandey93}. In this paper we concentrate, among
others, on the following variable Hamiltonians: the Caldirola-Kanai
Hamiltonian of the quantum damped oscillator \cite{Caldirola41},
\cite{Dekker81}, \cite{Kanai48}, \cite{Um:Yeon:George} and some of its
natural modifications, a modified oscillator introduced by Meiler,
Cordero-Soto and Suslov \cite{Me:Co:Su}, \cite{Cor-Sot:Sus}, the quantum
damped oscillator of Chru\'{s}ci\'{n}ski and Jurkowski \cite{Chru:Jurk} in
the coordinate and momentum representations and a quantum-modified
parametric oscillator which is believed to be new. The Green functions are
derived in a unified way.
\subsection{The Caldirola-Kanai Hamiltonian}
A model of the quantum damped oscillator with a variable Hamiltonian of the
form
\begin{equation}
H=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) \label{CKham}
\end{equation}
is called the Caldirola-Kanai model \cite{Bateman31}, \cite{Caldirola41},
\cite{Dekker81}, \cite{Kanai48}, \cite{Um:Yeon:George}. Nowadays it is a
standard way of adding friction to the quantum harmonic oscillator. The
Green function is given by
\begin{equation}
G\left( x,y,t\right) =\sqrt{\frac{\omega e^{\lambda t}}{2\pi i\omega
_{0}\sin \omega t}}\ e^{i\left( \alpha \left( t\right) x^{2}+\beta \left(
t\right) xy+\gamma \left( t\right) y^{2}\right) },\quad \omega =\sqrt{\omega
_{0}^{2}-\lambda ^{2}}>0, \label{sm1}
\end{equation}
where
\begin{eqnarray}
\alpha \left( t\right) &=&\frac{\omega \cos \omega t-\lambda \sin \omega t}{
2\omega _{0}\sin \omega t}e^{2\lambda t}, \label{sm2} \\
\beta \left( t\right) &=&-\frac{\omega }{\omega _{0}\sin \omega t}e^{\lambda
t}, \label{sm3} \\
\gamma \left( t\right) &=&\frac{\omega \cos \omega t+\lambda \sin \omega t}{
2\omega _{0}\sin \omega t}. \label{sm4}
\end{eqnarray}
This popular model has been studied in detail by many authors from different
viewpoints; see, for example, \cite{Antonsen}, \cite{Britt50},
\cite{Cari:Luc:Ra08}, \cite{Cari:Luc:Ra09}, \cite{Caval98}, \cite{Cheng84},
\cite{Cheng85}, \cite{Dod:Man79}, \cite{Karavayev}, \cite{Kh:Am06},
\cite{Kim:Sant:Khan02}, \cite{Kochan07}, \cite{Kochan10},
\cite{LeachAmJPhys78}, \cite{Nieto:Truax}, \cite{Oh:Lee:George89},
\cite{Ped:Gue03}, \cite{Safonov}, \cite{Svin75}, \cite{Svin76},
\cite{Tikoch78}, \cite{Yeon:Um:George} and references therein; a detailed
bibliography can be found in \cite{Dekker81}, \cite{Um:Yeon:George}.
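A numerical spot check (ours, at one arbitrary parameter point) that these coefficients follow from the general formulas: with $a=\left( \omega _{0}/2\right) e^{-2\lambda t},$ $b=\left( \omega _{0}/2\right) e^{2\lambda t},$ $c=d=0,$ the characteristic equation (\ref{in6}) becomes $\mu ^{\prime \prime }+2\lambda \mu ^{\prime }+\omega _{0}^{2}\mu =0$ with solution $\mu =\left( \omega _{0}/\omega \right) e^{-\lambda t}\sin \omega t,$ and (\ref{in3}), (\ref{in4}) (here $h\equiv 1$) reproduce (\ref{sm2}) and (\ref{sm3}):

```python
# Spot check (ours): alpha from Eqn. (in3) with d = 0, i.e. mu'/(4 a mu),
# and beta = -h/mu from Eqn. (in4) with h = 1, against (sm2) and (sm3).
import math

w0, lam, t = 1.3, 0.4, 0.7
w = math.sqrt(w0**2 - lam**2)
a = 0.5 * w0 * math.exp(-2.0 * lam * t)
mu = (w0 / w) * math.exp(-lam * t) * math.sin(w * t)
dmu = (w0 / w) * math.exp(-lam * t) * (w * math.cos(w * t)
                                       - lam * math.sin(w * t))

alpha_in3 = dmu / (4.0 * a * mu)                 # Eqn. (in3), d = 0
beta_in4 = -1.0 / mu                             # Eqn. (in4), h = 1

alpha_sm2 = ((w * math.cos(w * t) - lam * math.sin(w * t))
             / (2.0 * w0 * math.sin(w * t)) * math.exp(2.0 * lam * t))
beta_sm3 = -w / (w0 * math.sin(w * t)) * math.exp(lam * t)
```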
\subsection{A Modified Caldirola-Kanai Hamiltonian}
In this paper, we would like to consider another version of the quantum
damped oscillator with variable Hamiltonian of the form
\begin{equation}
H=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) -\lambda \left( px+xp\right) . \label{modCKham}
\end{equation}
The Green function in (\ref{sm1}) now has
\begin{eqnarray}
\alpha \left( t\right) &=&\frac{\omega \cos \omega t+\lambda \sin \omega t}{
2\omega _{0}\sin \omega t}e^{2\lambda t}, \label{sm5} \\
\beta \left( t\right) &=&-\frac{\omega }{\omega _{0}\sin \omega t}e^{\lambda
t}, \label{sm6} \\
\gamma \left( t\right) &=&\frac{\omega \cos \omega t-\lambda \sin \omega t}{
2\omega _{0}\sin \omega t}. \label{sm7}
\end{eqnarray}
This can be derived directly from equations (\ref{in2})--(\ref{in8})
following Refs.~\cite{Cor-Sot:Lop:Sua:Sus} and
\cite{Cor-Sot:Sua:Sus}.\medskip
The Ehrenfest theorem for both Caldirola-Kanai models has the same form
\begin{equation}
\frac{d^{2}}{dt^{2}}\left\langle x\right\rangle +2\lambda \frac{d}{dt}
\left\langle x\right\rangle +\omega _{0}^{2}\left\langle x\right\rangle =0,
\label{cldamposc}
\end{equation}
which coincides with the classical equation of motion for a damped
oscillator \cite{BatemanPDE}, \cite{Lan:Lif}. Details are provided in
Appendix~A.
\subsection{The United Model}
The following non-self-adjoint Hamiltonian
\begin{equation}
H=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) -\mu xp \label{UMHam}
\end{equation}
coincides with the original Caldirola-Kanai model when $\mu =0$ and the
Hamiltonian is self-adjoint. Another special case $\lambda =0$ corresponds
to the quantum damped oscillator discussed in \cite{Cor-Sot:Sua:Sus} as an
example of a simple quantum system with the non-self-adjoint Hamiltonian.
(This is an alternative way to introduce dissipation of energy to the
quantum harmonic oscillator.) Combining both cases we refer to (\ref{UMHam})
as the united Hamiltonian.\medskip
The Green function is given by
\begin{equation}
G\left( x,y,t\right) =\sqrt{\frac{\omega e^{\left( \lambda -\mu \right) t}}{
2\pi i\omega _{0}\sin \omega t}}\ e^{i\left( \alpha \left( t\right)
x^{2}+\beta \left( t\right) xy+\gamma \left( t\right) y^{2}\right) },
\label{UMGreen}
\end{equation}
where
\begin{eqnarray}
\alpha \left( t\right) &=&\frac{\omega \cos \omega t+\left( \mu -\lambda
\right) \sin \omega t}{2\omega _{0}\sin \omega t}e^{2\lambda t},
\label{UMGreenA} \\
\beta \left( t\right) &=&-\frac{\omega }{\omega _{0}\sin \omega t}e^{\lambda
t}, \label{UMGreenB} \\
\gamma \left( t\right) &=&\frac{\omega \cos \omega t+\left( \lambda -\mu
\right) \sin \omega t}{2\omega _{0}\sin \omega t} \label{UMGreenC}
\end{eqnarray}
with $\omega =\sqrt{\omega _{0}^{2}-\left( \lambda -\mu \right) ^{2}}>0$.\medskip
In this case the Ehrenfest theorem takes the form
\begin{equation}
\frac{d^{2}}{dt^{2}}\left\langle x\right\rangle +\ 2\left( \lambda +\mu
\right) \frac{d}{dt}\left\langle x\right\rangle +\left( \omega
_{0}^{2}+4\lambda \mu \right) \left\langle x\right\rangle =0.
\label{UMEhren}
\end{equation}
It is derived in Appendix~A and the Heisenberg uncertainty relation is
discussed in Appendix~B.
\subsection{A Modified Oscillator}
The one-dimensional Hamiltonian of a modified oscillator introduced by
Meiler, Cordero-Soto and Suslov \cite{Me:Co:Su}, \cite{Cor-Sot:Sus} has the
form
\begin{eqnarray}
H &=&\left( \cos t\ p+\sin t\ x\right) ^{2} \label{mod1} \\
&=&\cos ^{2}t\ p^{2}+\sin ^{2}t\ x^{2}+\sin t\cos t\ \left( px+xp\right)
\notag \\
&=&\frac{1}{2}\left( p^{2}+x^{2}\right) +\frac{1}{2}\cos 2t\ \left(
p^{2}-x^{2}\right) +\frac{1}{2}\sin 2t\ \left( px+xp\right) . \notag
\end{eqnarray}
(A physical interpretation of this Hamiltonian from the viewpoint of quantum
dynamical invariants will be discussed in Section~4.) The Green function is
given in terms of trigonometric and hyperbolic functions as follows
\begin{eqnarray}
G\left( x,y,t\right) &=&\frac{1}{\sqrt{2\pi i\left( \cos t\sinh t+\sin
t\cosh t\right) }} \label{modGreen} \\
&&\times \exp \left( \frac{\left( x^{2}-y^{2}\right) \sin t\sinh
t+2xy-\left( x^{2}+y^{2}\right) \cos t\cosh t}{2i\left( \cos t\sinh t+\sin
t\cosh t\right) }\right) . \notag
\end{eqnarray}
More details can be found in \cite{Me:Co:Su}, \cite{Cor-Sot:Sus}. The
corresponding Ehrenfest theorem, namely
\begin{equation}
\frac{d^{2}}{dt^{2}}\left\langle x\right\rangle +\ 2\tan t\frac{d}{dt}
\left\langle x\right\rangle -2\left\langle x\right\rangle =0,
\label{modEhrenfest}
\end{equation}
is derived in Appendix~A.
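As a consistency check (ours), the classical Hamilton equations for (\ref{mod1}), $x^{\prime }=2\cos t\left( \cos t\ p+\sin t\ x\right) ,$ $p^{\prime }=-2\sin t\left( \cos t\ p+\sin t\ x\right) ,$ and the Ehrenfest equation (\ref{modEhrenfest}) should produce the same trajectory from the same initial data:

```python
# Consistency check (ours): integrate Hamilton's equations for (mod1) and,
# in parallel, x'' + 2 tan(t) x' - 2x = 0, from x(0) = 1, x'(0) = 0
# (i.e. p(0) = 0), and compare x at t = 0.5.
import math

def rk4(f, y, t, h, steps):
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def hamilton(t, y):                      # y = (x, p)
    x, p = y
    u = math.cos(t)*p + math.sin(t)*x
    return [2.0*math.cos(t)*u, -2.0*math.sin(t)*u]

def ehrenfest(t, y):                     # y = (x, x')
    x, v = y
    return [v, -2.0*math.tan(t)*v + 2.0*x]

xa, _ = rk4(hamilton, [1.0, 0.0], 0.0, 1e-3, 500)    # t = 0.5
xb, _ = rk4(ehrenfest, [1.0, 0.0], 0.0, 1e-3, 500)
```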
\subsection{The Modified Damped Oscillator}
The time-dependent Schr\"{o}dinger equation
\begin{equation}
i\hslash \frac{\partial \psi }{\partial t}=H\left( t\right) \psi
\label{SCHEQ}
\end{equation}
with the variable quadratic Hamiltonian of the form
\begin{equation}
H=\frac{p^{2}}{2m\cosh ^{2}\left( \lambda t\right) }+\frac{m\omega _{0}^{2}}{
2}\cosh ^{2}\left( \lambda t\right) \ x^{2},\quad p=\frac{\hslash }{i}\frac{
\partial }{\partial x} \label{CJHam}
\end{equation}
has been recently considered by Chru\'{s}ci\'{n}ski and Jurkowski
\cite{Chru:Jurk} as a model of the quantum damped oscillator; see also
\cite{Most98}.\medskip
In this case the characteristic equation (\ref{in6}) takes the form
\begin{equation}
\mu ^{\prime \prime }+2\lambda \tanh \left( \lambda t\right) \mu ^{\prime
}+\omega _{0}^{2}\mu =0. \label{CJcheq}
\end{equation}
The particular solution is given by
\begin{equation}
\mu \left( t\right) =\frac{\hslash }{m\omega }\frac{\sin \left( \omega
t\right) }{\cosh \left( \lambda t\right) },\qquad \omega =\sqrt{\omega
_{0}^{2}-\lambda ^{2}}>0 \label{CJmu}
\end{equation}
and the corresponding propagator can be presented as follows
\begin{equation}
G\left( x,y,t\right) =\sqrt{\frac{m\omega \cosh \left( \lambda t\right) }{
2\pi i\hslash \sin \left( \omega t\right) }}\ e^{i\left( \alpha \left(
t\right) x^{2}+\beta \left( t\right) xy+\gamma \left( t\right) y^{2}\right)
}, \label{CJGreen}
\end{equation}
where
\begin{equation}
\alpha \left( t\right) =\frac{m\cosh \left( \lambda t\right) }{2\hslash \sin
\left( \omega t\right) }\left( \omega \cos \left( \omega t\right) \cosh
\left( \lambda t\right) -\lambda \sin \left( \omega t\right) \sinh \left(
\lambda t\right) \right) , \label{CJa}
\end{equation
\begin{equation}
\beta \left( t\right) =-\frac{m\omega \cosh \left( \lambda t\right) }{\hslash \sin \left( \omega t\right) }, \label{CJb}
\end{equation}
\begin{equation}
\gamma \left( t\right) =\frac{m\omega \cos \left( \omega t\right) }{2\hslash
\sin \left( \omega t\right) }. \label{CJc}
\end{equation}
(We somewhat simplify the original propagator found in \cite{Chru:Jurk}; see
also \cite{Kochan}.) This Green function can be independently derived from
our equations (\ref{in3})--(\ref{in5}) with the help of the following
elementary antiderivative
\begin{eqnarray}
&&\left( \frac{\lambda \cos \left( \omega t+\delta \right) \sinh \left(
\lambda t\right) +\omega \sin \left( \omega t+\delta \right) \cosh \left(
\lambda t\right) }{\omega \cos \left( \omega t+\delta \right) \cosh \left(
\lambda t\right) -\lambda \sin \left( \omega t+\delta \right) \sinh \left(
\lambda t\right) }\right) ^{\prime } \label{CJanti} \\
&&\quad =\frac{\omega \omega _{0}^{2}\cosh ^{2}\left( \lambda t\right) }{
\left( \omega \cos \left( \omega t+\delta \right) \cosh \left( \lambda
t\right) -\lambda \sin \left( \omega t+\delta \right) \sinh \left( \lambda
t\right) \right) ^{2}}. \notag
\end{eqnarray}
Further details are left to the reader.\medskip
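In particular, the solution (\ref{CJmu}) may be verified directly: with $\mu =\left( \hslash /m\omega \right) \sin \left( \omega t\right) /\cosh \left( \lambda t\right) ,$ a differentiation gives
\begin{equation*}
\mu ^{\prime \prime }+2\lambda \tanh \left( \lambda t\right) \mu ^{\prime }=-\left( \omega ^{2}+\lambda ^{2}\left( \cosh ^{-2}\left( \lambda t\right) +\tanh ^{2}\left( \lambda t\right) \right) \right) \mu =-\omega _{0}^{2}\mu ,
\end{equation*}
in view of the elementary identity $\cosh ^{-2}\left( \lambda t\right) +\tanh ^{2}\left( \lambda t\right) =1$ and $\omega ^{2}+\lambda ^{2}=\omega _{0}^{2}.$\medskip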
Special cases are as follows: when $\lambda =0,$ one recovers the standard
propagator for the linear harmonic oscillator \cite{Fey:Hib}, and $\omega
_{0}=0$ gives a pure damping case \cite{Kochan}
\begin{equation}
G\left( x,y,t\right) =\sqrt{\frac{m\lambda }{2\pi i\hslash \tanh \left( \lambda t\right) }}\exp \left( \frac{im\lambda \left( x-y\right) ^{2}}{2\hslash \tanh \left( \lambda t\right) }\right) . \label{CJpure}
\end{equation}
In the limit $\lambda \rightarrow 0$ formula (\ref{CJpure}) reproduces the
propagator for a free particle \cite{Fey:Hib}.\medskip
The Ehrenfest theorem for the quantum damped oscillator of Chru\'{s}ci\'{n}ski and Jurkowski coincides with our characteristic equation (\ref{CJcheq});
see Appendix~A for more details.\medskip
It is worth adding that in the momentum representation, when $p\leftrightarrow x,$ a rescaled Hamiltonian (\ref{CJHam}) ($\hslash =m\omega _{0}=1$) takes the form
\begin{equation}
H=\frac{\omega _{0}}{2}\left( \cosh ^{2}\left( \lambda t\right) \ p^{2}+\frac{x^{2}}{\cosh ^{2}\left( \lambda t\right) }\right) . \label{CJhamP}
\end{equation}
The corresponding characteristic equation
\begin{equation}
\mu ^{\prime \prime }-2\lambda \tanh \left( \lambda t\right) \mu ^{\prime
}+\omega _{0}^{2}\mu =0 \label{CJcharP}
\end{equation}
has a required elementary solution
\begin{equation}
\mu =\frac{1}{\omega _{0}}\left( \lambda \cos \left( \omega t\right) \sinh
\left( \lambda t\right) +\omega \sin \left( \omega t\right) \cosh \left(
\lambda t\right) \right) \label{CJmuP}
\end{equation}
with $\mu ^{\prime }\left( 0\right) =2a\left( 0\right) =\omega _{0}$ and
\begin{equation}
\mu \rightarrow \frac{1}{2\omega _{0}}e^{\lambda t}\left( \lambda \cos
\left( \omega t\right) +\omega \sin \left( \omega t\right) \right)
\label{CJmuasy}
\end{equation}
as $t\rightarrow \infty .$ The Green function is given by formula (\ref{in2}) with the following coefficients
\begin{eqnarray}
\alpha \left( t\right) &=&\frac{\omega _{0}\cos \left( \omega t\right) }{2\cosh \left( \lambda t\right) \left( \lambda \cos \left( \omega t\right)
\sinh \left( \lambda t\right) +\omega \sin \left( \omega t\right) \cosh
\left( \lambda t\right) \right) }, \label{CJalphaP} \\
\beta \left( t\right) &=&-\frac{\omega _{0}}{\lambda \cos \left( \omega
t\right) \sinh \left( \lambda t\right) +\omega \sin \left( \omega t\right)
\cosh \left( \lambda t\right) }, \label{CJbetaP} \\
\gamma \left( t\right) &=&\frac{\omega _{0}\left( \omega \cos \left( \omega
t\right) \cosh \left( \lambda t\right) -\lambda \sin \left( \omega t\right)
\sinh \left( \lambda t\right) \right) }{2\omega \left( \lambda \cos \left(
\omega t\right) \sinh \left( \lambda t\right) +\omega \sin \left( \omega
t\right) \cosh \left( \lambda t\right) \right) }. \label{CJgammaP}
\end{eqnarray}
The details are left to the reader.
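The verification of (\ref{CJmuP}) is immediate: differentiating $\omega _{0}\mu =\lambda \cos \left( \omega t\right) \sinh \left( \lambda t\right) +\omega \sin \left( \omega t\right) \cosh \left( \lambda t\right) $ gives $\omega _{0}\mu ^{\prime }=\left( \omega ^{2}+\lambda ^{2}\right) \cos \left( \omega t\right) \cosh \left( \lambda t\right) ,$ and then
\begin{equation*}
\mu ^{\prime \prime }+\omega _{0}^{2}\mu =2\lambda \omega _{0}\cos \left( \omega t\right) \sinh \left( \lambda t\right) =2\lambda \tanh \left( \lambda t\right) \mu ^{\prime },
\end{equation*}
which is the characteristic equation (\ref{CJcharP}).\medskip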
\subsection{A Modified Parametric Oscillator}
In a similar fashion we consider the following Hamiltonian
\begin{eqnarray}
&&H=\frac{\omega }{2}\left( \tanh ^{2}\left( \lambda t+\delta \right) \
p^{2}+\coth ^{2}\left( \lambda t+\delta \right) \ x^{2}\right)
\label{MPOHam} \\
&&\qquad +\frac{\lambda }{\sinh \left( 2\lambda t+2\delta \right) }\left(
px+xp\right) \qquad \left( \delta \neq 0\right) , \notag
\end{eqnarray}
which seems to be missing in the available literature. The corresponding
characteristic equation
\begin{equation}
\mu ^{\prime \prime }-\frac{4\lambda }{\sinh \left( 2\lambda t+2\delta
\right) }\mu ^{\prime }+\left( \omega ^{2}+\frac{2\lambda ^{2}}{\sinh
^{2}\left( \lambda t+\delta \right) }\right) \mu =0 \label{MPOchar}
\end{equation}
has an elementary solution of the form
\begin{equation}
\mu =\sin \left( \omega t\right) \frac{\tanh \left( \lambda t+\delta \right)
}{\coth \delta }. \label{MPOmu}
\end{equation}
In the limit $t\rightarrow \infty ,$ $\mu \rightarrow \sin \left( \omega
t\right) \tanh \delta .$\medskip
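One can also check the initial data directly:
\begin{equation*}
\mu \left( 0\right) =0,\qquad \mu ^{\prime }\left( 0\right) =\tanh \delta \left( \omega \cos \left( \omega t\right) \tanh \left( \lambda t+\delta \right) +\frac{\lambda \sin \left( \omega t\right) }{\cosh ^{2}\left( \lambda t+\delta \right) }\right) \bigg\vert _{t=0}=\omega \tanh ^{2}\delta =2a\left( 0\right) ,
\end{equation*}
where $a\left( 0\right) =\left( \omega /2\right) \tanh ^{2}\delta $ is the coefficient of $p^{2}$ in (\ref{MPOHam}) at $t=0,$ in agreement with the initial conditions used for (\ref{CJmuP}).\medskip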
The Green function can be found as follows
\begin{equation}
G\left( x,y,t\right) =\sqrt{\frac{\coth \delta }{2\pi i\sin \left( \omega
t\right) \tanh \left( \lambda t+\delta \right) }}\ e^{i\left( \alpha \left(
t\right) x^{2}+\beta \left( t\right) xy+\gamma \left( t\right) y^{2}\right)
}, \label{MPOGreen}
\end{equation}
where
\begin{equation}
\alpha \left( t\right) =\frac{1}{2}\cot \left( \omega t\right) \coth
^{2}\left( \lambda t+\delta \right) , \label{MPOalpha}
\end{equation}
\begin{equation}
\beta \left( t\right) =-\frac{\coth \delta }{\sin \left( \omega t\right) }\coth \left( \lambda t+\delta \right) , \label{MPObeta}
\end{equation}
\begin{equation}
\gamma \left( t\right) =\frac{1}{2}\cot \left( \omega t\right) \coth
^{2}\delta . \label{MPOgamma}
\end{equation}
The Ehrenfest theorem coincides with the characteristic equation (\ref{MPOchar}). One should interchange $a\leftrightarrow b$ and $d\rightarrow -d$
in the momentum representation \cite{Cor-Sot:Sus}. The corresponding
solutions can be found with the help of the substitution $\delta \rightarrow
\delta +i\pi /2.$ The trigonometric cases, when $\lambda \rightarrow
i\lambda ,$ $\delta \rightarrow i\delta $ and $\omega \rightarrow -\omega ,$
are left to the reader.
\subsection{Parametric Oscillators}
In conclusion, a somewhat related quantum parametric oscillator
\begin{equation}
H=\frac{1}{2}\left( p^{2}+\left( \omega ^{2}+\frac{2\lambda ^{2}}{\cosh
^{2}\left( \lambda t\right) }\right) x^{2}\right) , \label{ParamHam}
\end{equation}
when
\begin{equation}
\mu ^{\prime \prime }+\left( \omega ^{2}+\frac{2\lambda ^{2}}{\cosh
^{2}\left( \lambda t\right) }\right) \mu =0 \label{ParamChar}
\end{equation}
and
\begin{equation}
\mu =\frac{\lambda \cos \left( \omega t\right) \sinh \left( \lambda t\right)
+\omega \sin \left( \omega t\right) \cosh \left( \lambda t\right) }{\left(
\omega ^{2}+\lambda ^{2}\right) \cosh \left( \lambda t\right) },
\label{ParamMu}
\end{equation}
has the Green function (\ref{in2}) with the following coefficients
\begin{eqnarray}
\alpha \left( t\right) &=&\frac{\left( \omega ^{2}+\lambda ^{2}\cosh
^{-2}\left( \lambda t\right) \right) \cos \left( \omega t\right) -\lambda
\omega \tanh \left( \lambda t\right) \sin \left( \omega t\right) }{2\left(
\omega \sin \left( \omega t\right) +\lambda \tanh \left( \lambda t\right)
\cos \left( \omega t\right) \right) }, \label{ParamAlpha} \\
\beta \left( t\right) &=&-\frac{\omega ^{2}+\lambda ^{2}}{\omega \sin \left(
\omega t\right) +\lambda \tanh \left( \lambda t\right) \cos \left( \omega
t\right) }, \label{ParamBeta} \\
\gamma \left( t\right) &=&\frac{\left( \omega ^{2}+\lambda ^{2}\right)
\left( \omega \cos \left( \omega t\right) -\lambda \tanh \left( \lambda
t\right) \sin \left( \omega t\right) \right) }{2\omega \left( \omega \sin
\left( \omega t\right) +\lambda \tanh \left( \lambda t\right) \cos \left(
\omega t\right) \right) }. \label{ParamGamma}
\end{eqnarray}
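The solutions (\ref{CJmuP}) and (\ref{ParamMu}) are related by the classical Liouville substitution: if $\nu $ satisfies (\ref{CJcharP}), then $\mu =\nu /\cosh \left( \lambda t\right) $ obeys an equation without the first-derivative term, namely
\begin{equation*}
\mu ^{\prime \prime }+\left( \omega _{0}^{2}-\lambda ^{2}\tanh ^{2}\left( \lambda t\right) +\lambda ^{2}\cosh ^{-2}\left( \lambda t\right) \right) \mu =0,
\end{equation*}
and $\omega _{0}^{2}-\lambda ^{2}\tanh ^{2}\left( \lambda t\right) +\lambda ^{2}\cosh ^{-2}\left( \lambda t\right) =\omega ^{2}+2\lambda ^{2}\cosh ^{-2}\left( \lambda t\right) ,$ which reproduces equation (\ref{ParamChar}).\medskip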
The Green function for the parametric oscillator in general
\begin{equation}
H=\frac{1}{2}\left( p^{2}+\omega ^{2}\left( t\right) x^{2}\right)
\label{ParametHam}
\end{equation}
can be found, for example, in Ref.~\cite{Lan:Sus}. (The time-dependent
quantum oscillator was thoroughly examined by Husimi \cite{HusimiI53}, \cite{HusimiII53}, and later many authors treated different aspects of the
problem; see \cite{Dodonov:Man'koFIAN87}, \cite{Malkin:Man'ko79}, \cite{Malk:Man:Trif70}, \cite{Malk:Man:Trif73}, \cite{Perelomov:Popov69}, \cite{Per:Zel}, \cite{Popov:PerelomovI69}, \cite{Popov:PerelomovII69}, \cite{Schuch08} and \cite{Solimenoetal69}; a detailed bibliography is given in
Ref.~\cite{Camizetall71}.)
\section{Expectation Values of Quadratic Operators}
We start from a convenient differentiation formula.
\begin{lemma}
Let
\begin{equation}
H=a\left( t\right) p^{2}+b\left( t\right) x^{2}+d\left( t\right) \left(
px+xp\right) , \label{genham}
\end{equation}
\begin{equation}
O=A\left( t\right) p^{2}+B\left( t\right) x^{2}+C\left( t\right) \left(
px+xp\right) \label{genop}
\end{equation}
and
\begin{equation}
\left\langle O\right\rangle =\left\langle \psi ,O\psi \right\rangle
=\int_{-\infty }^{\infty }\psi ^{\ast }O\psi \ dx,\qquad i\frac{\partial
\psi }{\partial t}=H\psi \label{expect}
\end{equation}
(we use the star for complex conjugate). Then
\begin{eqnarray}
\frac{d}{dt}\left\langle O\right\rangle &=&\left( \frac{dA}{dt}+4\left(
aC-dA\right) \right) \left\langle p^{2}\right\rangle \label{diffexp} \\
&&+\left( \frac{dB}{dt}+4\left( dB-bC\right) \right) \left\langle
x^{2}\right\rangle \notag \\
&&+\left( \frac{dC}{dt}+2\left( aB-bA\right) \right) \left\langle
px+xp\right\rangle . \notag
\end{eqnarray}
\end{lemma}
\begin{proof}
The time derivative of the expectation value can be written as \cite{La:Lif}, \cite{Merz}, \cite{Schiff}
\begin{equation}
\frac{d}{dt}\left\langle O\right\rangle =\left\langle \frac{\partial O}{\partial t}\right\rangle +\frac{1}{i}\left\langle \left[ O,H\right]
\right\rangle , \label{diffops}
\end{equation}
where $\left[ O,H\right] =OH-HO$ (we freely interchange differentiation and
integration throughout the paper; it can be justified for certain classes of
solutions \cite{Lieb:Loss}, \cite{Oh89}, \cite{Per-G:Tor:Mont}, \cite{Velo96}). One should make use of the standard commutator properties, including the
familiar identities
\begin{eqnarray}
&&\left[ x^{2},p^{2}\right] =2i\left( px+xp\right) ,\qquad \left[ x,p^{2}\right] =2ip,\qquad \left[ x^{2},p\right] =2ix, \label{commuts} \\
&&\left[ px+xp,p^{2}\right] =4ip^{2},\qquad \quad \left[ x^{2},px+xp\right]
=4ix^{2}, \notag
\end{eqnarray}
in order to complete the proof.
\end{proof}
Quantum systems with the self-adjoint Hamiltonians (\ref{genham}) are called
the generalized harmonic oscillators \cite{Berry85}, \cite{Dod:Mal:Man75},
\cite{Hannay85}, \cite{Leach90}, \cite{Wolf81}, \cite{Yeon:Lee:Um:George:Pandey93}. At the same time one has to deal with
non-self-adjoint Hamiltonians in the theory of dissipative quantum systems
(see, for example, \cite{Cor-Sot:Sua:Sus}, \cite{Dekker81}, \cite{Kochan10},
\cite{Tarasov01}, \cite{Um:Yeon:George} and references therein) or when
using separation of variables in an accelerating frame of reference for a
charged particle moving in a uniform time-dependent magnetic field \cite{Cor-Sot:Lop:Sua:Sus}. An extension to the case of non-self-adjoint
Hamiltonians is as follows.
\begin{lemma}
If
\begin{equation}
H=a\left( t\right) p^{2}+b\left( t\right) x^{2}+c\left( t\right) px+d\left(
t\right) xp, \label{GenHam}
\end{equation}
\begin{equation}
O=A\left( t\right) p^{2}+B\left( t\right) x^{2}+C\left( t\right) px+D\left(
t\right) xp, \label{GenOp}
\end{equation}
then
\begin{eqnarray}
\frac{d}{dt}\left\langle O\right\rangle &=&\left( \frac{dA}{dt}+2a\left(
C+D\right) -\left( 3c+d\right) A\right) \left\langle p^{2}\right\rangle
\label{GenSys} \\
&&+\left( \frac{dB}{dt}-2b\left( C+D\right) +\left( c+3d\right) B\right)
\left\langle x^{2}\right\rangle \notag \\
&&+\left( \frac{dC}{dt}+2\left( aB-bA\right) -\left( c-d\right) C\right)
\left\langle px\right\rangle \notag \\
&&+\left( \frac{dD}{dt}+2\left( aB-bA\right) -\left( c-d\right) D\right)
\left\langle xp\right\rangle . \notag
\end{eqnarray}
\end{lemma}
\begin{proof}
One should use
\begin{equation}
\frac{d}{dt}\left\langle O\right\rangle =\left\langle \frac{\partial O}{\partial t}\right\rangle +\frac{1}{i}\left\langle OH-H^{\dagger
}O\right\rangle , \label{DiffOper}
\end{equation}
where $H^{\dagger }$ is the Hermitian adjoint of the Hamiltonian operator $H.$ Our formula is a simple extension of the well-known expression \cite{La:Lif}, \cite{Merz}, \cite{Schiff} to the case of a non-self-adjoint
Hamiltonian \cite{Cor-Sot:Sua:Sus}. Standard commutator evaluations complete
the proof.\medskip
\end{proof}
Polynomial operators of the higher orders in $x$ and $p$ can be
differentiated in a similar fashion. An analog of the product rule is given
in \cite{Suslov10}. The details are left to the reader.
\section{Energy Operators and Quadratic Invariants}
In the case of the time-independent Hamiltonian, one gets
\begin{equation}
\frac{d}{dt}\left\langle H\right\rangle =0 \label{constant}
\end{equation}
by (\ref{diffops}). The law of conservation of energy states that
\begin{equation}
E=\left\langle H\right\rangle =constant. \label{constantenergy}
\end{equation}
In general one has to construct quantum integrals of motion, or dynamical
invariants, that are different from the variable Hamiltonian (see, for
example, \cite{Lewis:Riesen69}, \cite{Wolf81}, \cite{Yeon:Lee:Um:George:Pandey93}; the linear case is dealt with in \cite{Dod:Man79}, \cite{Dodonov:Man'koFIAN87}, \cite{Malkin:Man'ko79}, \cite{Malk:Man:Trif73}
and Appendix~C).
\subsection{Energy Operators}
A familiar definition is in order (see, for example, \cite{Dodonov:Man'koFIAN87}, \cite{Malkin:Man'ko79}).
\begin{definition}
We call the quadratic operator \textrm{(\ref{genop})} an energy operator $E,$
or a quadratic (dynamical) invariant, if
\begin{equation}
\frac{d}{dt}\left\langle E\right\rangle =0 \label{defenergy}
\end{equation}
for the corresponding variable Hamiltonian \textrm{(\ref{genham})}.
\end{definition}
By Lemma~1 the coefficients of an energy operator
\begin{equation}
E=A\left( t\right) p^{2}+B\left( t\right) x^{2}+C\left( t\right) \left(
px+xp\right) , \label{energyop}
\end{equation}
must satisfy the system of ordinary differential equations
\begin{eqnarray}
\frac{dA}{dt}+4\left( a\left( t\right) C-d\left( t\right) A\right) &=&0,
\label{energysysA} \\
\frac{dB}{dt}+4\left( d\left( t\right) B-b\left( t\right) C\right) &=&0,
\label{energysysB} \\
\frac{dC}{dt}+2\left( a\left( t\right) B-b\left( t\right) A\right) &=&0.
\label{energysysC}
\end{eqnarray}
In general a unique solution of this system with respect to arbitrary
initial conditions $A_{0}=A\left( 0\right) ,$ $B_{0}=B\left( 0\right) ,$ $C_{0}=C\left( 0\right) $ \cite{HilleODE} determines a three-parameter family
of the quadratic invariants (\ref{energyop}). Special cases, when solutions
can be found explicitly, are of the most practical importance.\medskip
In this section we find the simplest energy operators for all quadratic
models under consideration as follows
\begin{equation}
E=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) +\frac{\lambda }{2}\left( px+xp\right) , \label{EO1}
\end{equation}
\begin{equation}
E=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) -\frac{\lambda }{2}\left( px+xp\right) , \label{EO2}
\end{equation}
\begin{equation}
E=\frac{1}{2}\cos 2t\ \left( p^{2}-x^{2}\right) +\frac{1}{2}\sin 2t\ \left(
px+xp\right) , \label{EO3}
\end{equation}
\begin{equation}
E=\tanh ^{2}\left( \lambda t+\delta \right) \ p^{2}+\coth ^{2}\left( \lambda
t+\delta \right) \ x^{2} \label{EQ4}
\end{equation}
for the Caldirola-Kanai Hamiltonian (\ref{CKham}) \cite{Svin75}, the
modified Caldirola-Kanai Hamiltonian (\ref{modCKham}), the modified
oscillator of Meiler, Cordero-Soto and Suslov (\ref{mod1}) and for the
modified parametric oscillator (\ref{MPOHam}), respectively. Their
coefficients solve the corresponding systems (\ref{energysysA})--(\ref{energysysC}) for special initial data.\medskip
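For instance, for (\ref{EO1}) with the coefficients $a=\left( \omega _{0}/2\right) e^{-2\lambda t},$ $b=\left( \omega _{0}/2\right) e^{2\lambda t},$ $d=0$ (the united model (\ref{UMHam}) with $\mu =0$) and $A=\left( \omega _{0}/2\right) e^{-2\lambda t},$ $B=\left( \omega _{0}/2\right) e^{2\lambda t},$ $C=\lambda /2,$ one readily verifies
\begin{equation*}
\frac{dA}{dt}+4aC=-\lambda \omega _{0}e^{-2\lambda t}+\lambda \omega _{0}e^{-2\lambda t}=0,\qquad \frac{dB}{dt}-4bC=0,\qquad aB-bA=0,
\end{equation*}
which is the system (\ref{energysysA})--(\ref{energysysC}) in this case.\medskip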
An energy operator for the united model (\ref{UMHam}) is given by
\begin{equation}
E=\frac{\omega _{0}}{2}e^{\mu t}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda
t}\ x^{2}\right) +\frac{1}{2}\left( \lambda -\mu \right) e^{\mu t}\left(
px+xp\right) . \label{EOUM}
\end{equation}
One should use Lemma~2; verification is left to the reader. Finally an
energy operator for the quantum damped oscillator of Chru\'{s}ci\'{n}ski and
Jurkowski with a rescaled Hamiltonian (\ref{CJhamP}) is given by expression (\ref{CJEnergy}). A general case of the variable quadratic Hamiltonian is
discussed in Theorem~1.
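For example, for (\ref{EOUM}), with $a=\left( \omega _{0}/2\right) e^{-2\lambda t},$ $b=\left( \omega _{0}/2\right) e^{2\lambda t},$ $c=0,$ $d=-\mu $ for the united model, one has $A=\left( \omega _{0}/2\right) e^{\left( \mu -2\lambda \right) t},$ $B=\left( \omega _{0}/2\right) e^{\left( \mu +2\lambda \right) t},$ $C=D=\left( \left( \lambda -\mu \right) /2\right) e^{\mu t},$ and the first equation of Lemma~2 reads
\begin{equation*}
\frac{dA}{dt}+2a\left( C+D\right) -\left( 3c+d\right) A=\left( \left( \mu -2\lambda \right) +2\left( \lambda -\mu \right) +\mu \right) A=0;
\end{equation*}
the remaining equations vanish in the same manner, since $aB=bA=\left( \omega _{0}^{2}/4\right) e^{\mu t}.$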
\subsection{The Lewis--Riesenfeld Invariant}
The classical Hamiltonian of the generalized harmonic oscillator can be
transformed into the Hamiltonian of a parametric oscillator \cite{Berry85},
\cite{Hannay85}, \cite{PadillaMaster}, \cite{Yeon:Lee:Um:George:Pandey93}.
All quadratic invariants of the quantum parametric oscillator (\ref{ParametHam}) can be found as follows \cite{Lewis67}, \cite{Lewis68}, \cite{Lewis68a}, \cite{Lewis:Riesen69}. The corresponding system
\begin{eqnarray}
&&A^{\prime }+2C=0, \label{ParamOscA} \\
&&B^{\prime }-2\omega ^{2}\left( t\right) C=0, \label{ParamOscB} \\
&&C^{\prime }+B-\omega ^{2}\left( t\right) A=0, \label{ParamOscC}
\end{eqnarray}
is integrated by the substitution $A=\kappa ^{2}.$ Then $C=-\kappa \kappa
^{\prime },$ $B=\kappa \kappa ^{\prime \prime }+\left( \kappa ^{\prime
}\right) ^{2}+\omega ^{2}\left( t\right) \kappa ^{2}$ and equation (\ref{ParamOscB}) becomes
\begin{eqnarray*}
\left( \kappa \kappa ^{\prime \prime }+\left( \kappa ^{\prime }\right)
^{2}+\omega ^{2}\left( t\right) \kappa ^{2}\right) ^{\prime }+2\omega
^{2}\left( t\right) \kappa \kappa ^{\prime } &=&0, \\
\kappa \left( \kappa ^{\prime \prime }+\omega ^{2}\left( t\right) \kappa
\right) ^{\prime }+3\kappa ^{\prime }\left( \kappa ^{\prime \prime }+\omega
^{2}\left( t\right) \kappa \right) &=&0
\end{eqnarray*}
or with an integrating factor
\begin{equation}
\frac{d}{dt}\left( \kappa ^{3}\left( \kappa ^{\prime \prime }+\omega
^{2}\left( t\right) \kappa \right) \right) =0 \label{ODEint}
\end{equation}
(see \cite{Lewis:Riesen69} and \cite{Leach:Andriopo08}). Thus
\begin{equation}
\kappa ^{\prime \prime }+\omega ^{2}\left( t\right) \kappa =\frac{c_{0}}{\kappa ^{3}}\qquad \left( c_{0}=0,1\right) \label{NonlinearODE}
\end{equation}
and a general solution of the system (\ref{ParamOscA})--(\ref{ParamOscC}) is
given by
\begin{equation}
A=\kappa ^{2},\quad B=\left( \kappa ^{\prime }\right) ^{2}+\frac{c_{0}}{\kappa ^{2}},\quad C=-\kappa \kappa ^{\prime } \label{ParamSysSol}
\end{equation}
in terms of solutions of the nonlinear equation (\ref{NonlinearODE}), which
is called Ermakov's equation when $c_{0}=1$ \cite{Ermakov} (see also \cite{Leach:Andrio08}, \cite{Lewis68a}, \cite{Pinney50} and \cite{Schuch08}).
Thus the quadratic integrals of motion can be presented in the form \cite{Lewis:Riesen69}
\begin{equation}
E=\left( \kappa p-\kappa ^{\prime }x\right) ^{2}+\frac{c_{0}}{\kappa ^{2}}x^{2} \label{ParamQuadInv}
\end{equation}
for any given solution of the Ermakov equation (\ref{NonlinearODE}). This
quantum invariant is an analog of the Ermakov--Lewis integral of motion for
the classical parametric oscillator \cite{Ermakov}, \cite{Lewis67}, \cite{Lewis68}, \cite{Lewis68a}, \cite{Symon70}.\medskip
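Expanding the square in (\ref{ParamQuadInv}) with symmetrized cross terms gives
\begin{equation*}
E=\kappa ^{2}\ p^{2}-\kappa \kappa ^{\prime }\left( px+xp\right) +\left( \left( \kappa ^{\prime }\right) ^{2}+\frac{c_{0}}{\kappa ^{2}}\right) \ x^{2},
\end{equation*}
so that the coefficients of $p^{2},$ $x^{2}$ and $px+xp$ are exactly those in (\ref{ParamSysSol}).\medskip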
In general if two linearly independent solutions of the classical parametric
oscillator equation are available
\begin{equation}
u^{\prime \prime }+\omega ^{2}\left( t\right) u=0,\qquad v^{\prime \prime
}+\omega ^{2}\left( t\right) v=0, \label{ParamOscillatorClSol}
\end{equation}
then solutions of the nonlinear Ermakov equation
\begin{equation}
\kappa ^{\prime \prime }+\omega ^{2}\left( t\right) \kappa =\frac{1}{\kappa
^{3}} \label{ErmakovEquationCl}
\end{equation}
are given by
\begin{equation}
\kappa =\left( Au^{2}+2Buv+Cv^{2}\right) ^{1/2}
\label{ErmakovEqPinneySolution}
\end{equation}
(so-called Pinney's solution \cite{Pinney50}, \cite{Eliezer:Gray76}, \cite{Leach:Andrio08}, \cite{Lewis68a}, \cite{PadillaMaster}), where the
constants $A,$ $B$ and $C$ are related according to $AC-B^{2}=1/W^{2}$ with $W$ being the constant Wronskian of the two linearly independent
solutions.\medskip
For example, in the case of the simple harmonic oscillator with $\omega
\left( t\right) =1,$ there are two elementary solutions
\begin{equation}
\kappa =1\quad \left( c_{0}=1\right) ,\qquad \kappa =\cos t\quad \left(
c_{0}=0\right) \label{ElemSol}
\end{equation}
and the energy operators are given by
\begin{eqnarray}
H &=&\frac{1}{2}\left( p^{2}+x^{2}\right) , \label{HarmHam} \\
E &=&\left( \cos t\ p+\sin t\ x\right) ^{2}. \label{MC-SShamenergy}
\end{eqnarray}
It provides a somewhat better understanding of the nature of the Hamiltonian
discussed by Meiler, Cordero-Soto and Suslov \cite{Me:Co:Su} --- this
operator plays the role of the simplest time-dependent quadratic integral of
motion for the linear harmonic oscillator.\medskip
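Indeed, in the notation of (\ref{ParamSysSol}), the choice $\kappa =\cos t$ with $c_{0}=0$ gives
\begin{equation*}
A=\cos ^{2}t,\qquad B=\left( \kappa ^{\prime }\right) ^{2}=\sin ^{2}t,\qquad C=-\kappa \kappa ^{\prime }=\sin t\cos t,
\end{equation*}
in agreement with the expansion of the square in (\ref{MC-SShamenergy}).\medskip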
In a similar fashion the dynamical invariants of the parametric oscillator (\ref{ParamHam}) are given by the expression (\ref{ParamQuadInv}) with $c_{0}\neq 0.$ In the Pinney solution (\ref{ErmakovEqPinneySolution}) one can
choose
\begin{eqnarray}
u &=&\frac{\omega \cos \left( \omega t\right) \cosh \left( \lambda t\right)
-\lambda \sin \left( \omega t\right) \sinh \left( \lambda t\right) }{\cosh
\left( \lambda t\right) }, \label{ParamClSolU} \\
v &=&\frac{\omega \sin \left( \omega t\right) \cosh \left( \lambda t\right)
+\lambda \cos \left( \omega t\right) \sinh \left( \lambda t\right) }{\cosh
\left( \lambda t\right) } \label{ParamClSolV}
\end{eqnarray}
as two linearly independent solutions of the classical equation of motion (\ref{ParamChar}) with $W\left( u,v\right) =\omega \left( \omega ^{2}+\lambda ^{2}\right) .$ If $A=C$ and $B=0,$ then
\begin{equation}
\kappa =\left( \omega ^{2}+\lambda ^{2}\tanh ^{2}\left( \lambda t\right)
\right) ^{1/2} \label{ParamEnergySol}
\end{equation}
is a particular solution of the corresponding Ermakov equation
\begin{equation}
\kappa ^{\prime \prime }+\left( \omega ^{2}+\frac{2\lambda ^{2}}{\cosh
^{2}\left( \lambda t\right) }\right) \kappa =\frac{\omega ^{2}\left( \lambda
^{2}+\omega ^{2}\right) ^{2}}{\kappa ^{3}}. \label{ParamErmakovEquation}
\end{equation}
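The verification of (\ref{ParamEnergySol}) reduces to an elementary identity: for $u$ and $v$ given by (\ref{ParamClSolU})--(\ref{ParamClSolV}) the cross terms cancel and
\begin{equation*}
u^{2}+v^{2}=\frac{\omega ^{2}\cosh ^{2}\left( \lambda t\right) +\lambda ^{2}\sinh ^{2}\left( \lambda t\right) }{\cosh ^{2}\left( \lambda t\right) }=\omega ^{2}+\lambda ^{2}\tanh ^{2}\left( \lambda t\right) ,
\end{equation*}
so that the Pinney solution (\ref{ErmakovEqPinneySolution}) with $A=C=1$ and $B=0$ reproduces (\ref{ParamEnergySol}).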
The simplest positive energy integral for our parametric oscillator (\ref{ParamHam}) is given by
\begin{eqnarray}
E &=&\left( \omega ^{2}+\lambda ^{2}\tanh ^{2}\left( \lambda t\right)
\right) \ p^{2}-\lambda ^{3}\frac{\sinh \left( \lambda t\right) }{\cosh
^{3}\left( \lambda t\right) }\ \left( px+xp\right) \label{ParamEnergy} \\
&&+\frac{\lambda ^{6}\sinh ^{2}\left( \lambda t\right) +\omega ^{2}\left(
\lambda ^{2}+\omega ^{2}\right) ^{2}\cosh ^{6}\left( \lambda t\right) }{\cosh ^{6}\left( \lambda t\right) \left( \omega ^{2}+\lambda ^{2}\tanh
^{2}\left( \lambda t\right) \right) }\ x^{2}. \notag
\end{eqnarray}
Another possibility is to take a general solution of (\ref{ParamChar}) with $c_{0}=0.$
\subsection{An Extension to General Quadratic Hamiltonians}
We consider the following generalization of the Lewis--Riesenfeld invariant (\ref{ParamQuadInv}) (see also \cite{Leach90}, \cite{Yeon:Lee:Um:George:Pandey93}).
\begin{theorem}
The dynamical invariants for the general quadratic Hamiltonian (\ref{GenHam}) are given by
\begin{equation}
E=\frac{1}{\mu _{1}}\left( \kappa \ p-\frac{1}{2a}\frac{d\kappa }{dt}\
x\right) ^{2}+\frac{C_{0}}{\mu _{2}\kappa ^{2}}\ x^{2}, \label{GenQuadInv}
\end{equation}
where $C_{0}$ is a constant,
\begin{equation}
\mu _{1}=\exp \left( -\int_{0}^{t}\left( 3c+d\right) \ ds\right) ,\quad \mu
_{2}=\exp \left( \int_{0}^{t}\left( c+3d\right) \ ds\right) ,
\label{IntFacts}
\end{equation}
and $\kappa $ satisfies the auxiliary nonlinear equation
\begin{equation}
k\frac{d}{dt}\left( k\frac{d\kappa }{dt}\right) +4abk^{2}\kappa =\frac{C_{0}}{\kappa ^{3}}, \label{AuxEq}
\end{equation}
where
\begin{equation}
k=\frac{1}{2a}\exp \left( 2\int_{0}^{t}\left( c+d\right) \ ds\right) .
\label{Key}
\end{equation}
(For the self-adjoint Hamiltonians $c=d.$)
\end{theorem}
The case $a=1/2,$ $b=\omega ^{2}\left( t\right) /2$ and $c=d=0$
corresponds to the original invariant (\ref{ParamQuadInv}).
\begin{proof}
By Lemma~2 in order to find quadratic invariants of the form
\begin{equation}
E=Ap^{2}+Bx^{2}+Cpx+Dxp \label{Energy}
\end{equation}
we have to solve the following system of ordinary differential equations
\begin{eqnarray}
\frac{dA}{dt}+2a\left( C+D\right) -\left( 3c+d\right) A &=&0, \label{EquatA}
\\
\frac{dB}{dt}-2b\left( C+D\right) +\left( c+3d\right) B &=&0, \label{EquatB}
\\
\frac{dC}{dt}+2\left( aB-bA\right) -\left( c-d\right) C &=&0, \label{EquatC}
\\
\frac{dD}{dt}+2\left( aB-bA\right) -\left( c-d\right) D &=&0, \label{EquatD}
\end{eqnarray}
say, for arbitrary analytic coefficients $a\left( t\right) ,$ $b\left(
t\right) ,$ $c\left( t\right) $ and $d\left( t\right) .$ The substitution $C=C_{1}+D_{1},$ $D=C_{1}-D_{1}$ allows one to transform the last two
equations
\begin{eqnarray}
&&\frac{dC_{1}}{dt}+2\left( aB-bA\right) -\left( c-d\right) C_{1}=0,
\label{EquatC1} \\
&&\frac{dD_{1}}{dt}=\left( c-d\right) D_{1},\quad D_{1}=\text{constant}\ \exp \left( \int_{0}^{t}\left( c-d\right) \ ds\right) . \label{EquatD1}
\end{eqnarray}
Then
\begin{equation*}
Cpx+Dxp=C_{1}\left( px+xp\right) +D_{1}\left( px-xp\right)
\end{equation*}
and, in view of the canonical commutation relation, the coefficient $D_{1}$
can be eliminated from the consideration as belonging to the linear
invariants (see Appendix~C).
Introducing integrating factors into (\ref{EquatA}), (\ref{EquatB}) and (\ref{EquatC1}), we get
\begin{eqnarray}
&&\frac{d}{dt}\left( \mu _{1}A\right) +4a\mu _{1}C_{1}=0,\qquad \frac{\mu
_{1}^{\prime }}{\mu _{1}}=-3c-d, \\
&&\frac{d}{dt}\left( \mu _{2}B\right) -4b\mu _{2}C_{1}=0,\qquad \frac{\mu
_{2}^{\prime }}{\mu _{2}}=c+3d, \\
&&\frac{d}{dt}\left( \mu _{3}C_{1}\right) +2\mu _{3}\left( aB-bA\right)
=0,\qquad \frac{\mu _{3}^{\prime }}{\mu _{3}}=-c+d
\end{eqnarray}
with $\mu _{3}^{2}=\mu _{1}\mu _{2}.$ After the substitution
\begin{equation}
\widetilde{A}=\mu _{1}A,\qquad \widetilde{B}=\mu _{2}B,\qquad \widetilde{C}=\mu _{3}C_{1}, \label{SubTilde}
\end{equation}
the system takes the form
\begin{eqnarray}
&&\frac{d\widetilde{A}}{dt}+4a\sqrt{\frac{\mu _{1}}{\mu _{2}}}\ \widetilde{C}=0, \\
&&\frac{d\widetilde{B}}{dt}-4b\sqrt{\frac{\mu _{2}}{\mu _{1}}}\ \widetilde{C}=0, \\
&&\frac{d\widetilde{C}}{dt}+2\left( a\sqrt{\frac{\mu _{1}}{\mu _{2}}}\
\widetilde{B}-b\sqrt{\frac{\mu _{2}}{\mu _{1}}}\ \widetilde{A}\right) =0.
\end{eqnarray}
Introducing a \textquotedblleft proper time\textquotedblright
\begin{equation}
\tau =\int_{0}^{t}2a\sqrt{\frac{\mu _{1}}{\mu _{2}}}\ ds, \label{PropTime}
\end{equation}
we finally obtain
\begin{eqnarray}
&&\frac{d\widetilde{A}}{d\tau }+2\widetilde{C}=0, \label{LRA} \\
&&\frac{d\widetilde{B}}{d\tau }-2\omega ^{2}\left( \tau \right) \widetilde{C}=0, \label{LRB} \\
&&\frac{d\widetilde{C}}{d\tau }+\widetilde{B}-\omega ^{2}\left( \tau \right)
\widetilde{A}=0,\quad \omega ^{2}\left( \tau \right) =\frac{b\mu _{2}}{a\mu
_{1}}, \label{LRC}
\end{eqnarray}
which is identical to the original Lewis--Riesenfeld system (\ref{ParamOscA})--(\ref{ParamOscC}) (positivity of $\omega ^{2}$ is not required). The
solution is given by
\begin{equation}
\widetilde{A}=\kappa ^{2},\quad \widetilde{B}=\left( \frac{d\kappa }{d\tau }\right) ^{2}+\frac{C_{0}}{\kappa ^{2}},\quad \widetilde{C}=-\kappa \frac{d\kappa }{d\tau }, \label{LRSysSol}
\end{equation}
where $\kappa $ satisfies the Ermakov equation
\begin{equation}
\frac{d^{2}\kappa }{d\tau ^{2}}+\omega ^{2}\left( \tau \right) \kappa =\frac{C_{0}}{\kappa ^{3}},\quad \omega ^{2}\left( \tau \right) =\frac{b\mu _{2}}{a\mu _{1}}, \label{ErmakovEquation}
\end{equation}
with respect to the new time (\ref{PropTime}). In view of
\begin{equation}
\frac{d}{d\tau }=k\frac{d}{dt},\qquad k=\frac{1}{2a}\exp \left(
2\int_{0}^{t}\left( c+d\right) \ ds\right) , \label{DiffTau}
\end{equation}
the Ermakov equation (\ref{ErmakovEquation}) is transformed into our
auxiliary equation (\ref{AuxEq}). A back substitution results in the
dynamical invariant (\ref{GenQuadInv}) when the square is completed.
\end{proof}
\begin{lemma}
The dynamical invariant (\ref{GenQuadInv}) can be represented in a more
symmetric form
\begin{eqnarray}
E &=&\left( \left( \mu \ p-\frac{1}{2a}\left( \frac{d\mu }{dt}-\left(
c+d\right) \mu \right) \ x\right) ^{2}+\frac{C_{0}}{\mu ^{2}}\ x^{2}\right)
\label{InvSymmForm} \\
&&\times \exp \left( \int_{0}^{t}\left( c-d\right) \ ds\right) , \notag
\end{eqnarray}
where $C_{0}$ is a constant and $\mu $ is a solution of the following
auxiliary equation
\begin{equation}
\mu ^{\prime \prime }-\frac{a^{\prime }}{a}\mu ^{\prime }+\left( 4ab+\left(
\frac{a^{\prime }}{a}-c-d\right) \left( c+d\right) -c^{\prime }-d^{\prime
}\right) \mu =C_{0}\frac{\left( 2a\right) ^{2}}{\mu ^{3}}.
\label{AuxEquation}
\end{equation}
\end{lemma}
\begin{proof}
Use the substitution
\begin{equation}
\kappa =\mu \exp \left( -\int_{0}^{t}\left( c+d\right) \ ds\right)
\label{MuSubstitution}
\end{equation}
in (\ref{GenQuadInv}) and (\ref{AuxEq}). A somewhat different proof is given
in \cite{Suslov10}.
\end{proof}
The corresponding classical invariant is discussed, for example, in Refs.
\cite{Symon70} and \cite{Yeon:Lee:Um:George:Pandey93}. (Compare also our
expression (\ref{InvSymmForm}) with the one given in the last paper for the
self-adjoint case; we give a detailed proof for the non-self-adjoint
Hamiltonians and emphasize connection with the Ermakov equation.)\medskip
It is worth noting, in conclusion, that if $\mu _{1}$ and $\mu _{2}$ are
two linearly independent solutions of the linear equation
\begin{equation}
\mu ^{\prime \prime }-\frac{a^{\prime }}{a}\mu ^{\prime }+\left( 4ab+\left(
\frac{a^{\prime }}{a}-c-d\right) \left( c+d\right) -c^{\prime }-d^{\prime
}\right) \mu =0, \label{LinEquation}
\end{equation}
the general solution of the nonlinear auxiliary equation (\ref{AuxEquation})
is given by
\begin{equation}
\mu =\left( A\mu _{1}^{2}+2B\mu _{1}\mu _{2}+C\mu _{2}^{2}\right) ^{1/2},
\label{SolNonLinEquation}
\end{equation}
where the constants $A,$ $B$ and $C$ are related according to
\begin{equation}
AC-B^{2}=C_{0}\frac{\left( 2a\right) ^{2}}{W^{2}\left( \mu _{1},\mu
_{2}\right) } \label{NonLinWronskian}
\end{equation}
with $W\left( \mu _{1},\mu _{2}\right) =\mu _{1}\mu _{2}^{\prime }-\mu
_{1}^{\prime }\mu _{2}=constant\ \left( 2a\right) $ being the Wronskian of
the two linearly independent solutions. This is a simple extension of
Pinney's solution (\ref{ErmakovEqPinneySolution}); our equations (\ref{AuxEquation}) and (\ref{LinEquation}) form the generalized Ermakov system
\cite{Eliezer:Gray76}, \cite{PadillaMaster}. Further generalization of the
superposition formula (\ref{SolNonLinEquation})--(\ref{NonLinWronskian}) is
discussed in Ref.~\cite{Suslov10}. (If $C_{0}\neq 0,$ the substitution $\mu
\rightarrow $ $C_{0}^{1/4}\mu $ reduces equation (\ref{AuxEquation}) to a
similar form with $C_{0}=1.)$ The special case of the time-dependent damped
harmonic oscillator, when $a=e^{-F\left( t\right) }/2,$ $b=\omega ^{2}\left(
t\right) e^{F\left( t\right) }/2,$ $F\left( t\right) =\int_{0}^{t}f\left(
s\right) \ ds$ and $c=d=0,$ is discussed in \cite{LeachAmJPhys78}, \cite{LeachSIAM78}.
\subsection{An Example}
The simplest energy operators have already been discussed in section~4.1 for
all models of quantum oscillators under consideration. In order to
demonstrate how the general approach works we discuss the united Hamiltonian
(\ref{UMHam}), when $a=\left( \omega _{0}/2\right) e^{-2\lambda t},$ $b=\left( \omega _{0}/2\right) e^{2\lambda t}$ and $c=0,$ $d=-\mu .$ A direct
calculation shows that the function
\begin{equation}
\kappa =\sqrt{\frac{\omega _{0}}{2}}e^{-\lambda t} \label{UMkappa}
\end{equation}
satisfies the following equation
\begin{equation}
\kappa ^{\prime \prime }+2\lambda \kappa ^{\prime }+\omega _{0}^{2}\kappa
=\left( \frac{\omega _{0}\omega }{2}\right) ^{2}\frac{e^{-4\lambda t}}
\kappa ^{3}},\quad \omega ^{2}=\omega _{0}^{2}-\left( \lambda -\mu \right)
^{2}>0, \label{UMAuxEq}
\end{equation}
which corresponds to the nonlinear auxiliary equation (\ref{AuxEquation})
with $C_{0}=\omega ^{2}/4.$ The quadratic invariant (\ref{InvSymmForm})
simplifies to the previously found expression (\ref{EOUM}). Solution (\ref
{SolNonLinEquation}) can be used for the most general case. Details are left
to the reader.
\subsection{Factorization of the Dynamical Invariant}
Following Ref.~\cite{Cor-Sot:Sua:Sus} the energy operator (\ref{InvSymmForm})
can be presented in the standard harmonic oscillator form:
\begin{equation}
E=\frac{\omega \left( t\right) }{2}\left( \widehat{a}\left( t\right)
\widehat{a}^{\dagger }\left( t\right) +\widehat{a}^{\dagger }\left( t\right)
\widehat{a}\left( t\right) \right) , \label{EnOperFactor}
\end{equation}
where
\begin{equation}
\omega \left( t\right) =\omega _{0}\exp \left( \int_{0}^{t}\left( c-d\right)
\ ds\right) ,\qquad \omega _{0}=2\sqrt{C_{0}}>0, \label{omega(t)}
\end{equation}
\begin{eqnarray}
\widehat{a}\left( t\right) &=&\left( \frac{\sqrt{\omega _{0}}}{2\mu }-i\frac{
\mu ^{\prime }-\left( c+d\right) \mu }{2a\sqrt{\omega _{0}}}\right) x+\frac{
\mu }{\sqrt{\omega _{0}}}\frac{\partial }{\partial x}, \label{a(t)} \\
\widehat{a}^{\dagger }\left( t\right) &=&\left( \frac{\sqrt{\omega _{0}}}{
2\mu }+i\frac{\mu ^{\prime }-\left( c+d\right) \mu }{2a\sqrt{\omega _{0}}}
\right) x-\frac{\mu }{\sqrt{\omega _{0}}}\frac{\partial }{\partial x},
\label{across(t)}
\end{eqnarray}
and $\mu $ is a solution of the nonlinear auxiliary equation (\ref
{AuxEquation}). Here the time-dependent annihilation $\widehat{a}\left(
t\right) $ and creation $\widehat{a}^{\dagger }\left( t\right) $ operators
satisfy the usual commutation relation
\begin{equation}
\widehat{a}\left( t\right) \widehat{a}^{\dagger }\left( t\right) -\widehat{a}
^{\dagger }\left( t\right) \widehat{a}\left( t\right) =1.
\label{commutatora(t)across(t)}
\end{equation}
The oscillator-type spectrum and the corresponding time-dependent
eigenfunctions of the dynamical invariant $E$ can now be obtained in a
standard way by using the Heisenberg--Weyl algebra of the raising and
lowering operators (a \textquotedblleft second
quantization\textquotedblright\ \cite{Lewis:Riesen69}, the Fock states).
Explicit solution of the Cauchy initial value problem in terms of the
quadratic invariant eigenfunction expansion is found in Ref.~\cite{Suslov10}.
In addition the $n$-dimensional oscillator wave functions form a basis of
the irreducible unitary representation of the Lie algebra of the noncompact
group $SU\left( 1,1\right) $ corresponding to the discrete positive series
$\mathcal{D}_{+}^{j}$ (see \cite{Me:Co:Su}, \cite{Ni:Su:Uv} and \cite
{Smir:Shit}).\smallskip\ Our operators (\ref{a(t)})--(\ref{across(t)}) allow
one to extend these group-theoretical properties to the general dynamical
invariant (\ref{EnOperFactor}). We shall further elaborate on these
connections elsewhere.
\section{Application to the Cauchy Initial Value Problems}
Explicit solution of the initial value problem in terms of eigenfunctions of
the general quadratic invariant is given in Ref.~\cite{Suslov10}. Here we
formulate the following uniqueness result.
\begin{lemma}
Suppose that the expectation value
\begin{equation}
\left\langle H_{0}\right\rangle =\left\langle \psi ,H_{0}\psi \right\rangle
\geq 0 \label{exppos}
\end{equation}
for a positive quadratic operator
\begin{equation}
H_{0}=f\left( t\right) \left( \alpha \left( t\right) p+\beta \left( t\right)
x\right) ^{2}+g\left( t\right) x^{2}\qquad \left( f\left( t\right) \geq 0,\
g\left( t\right) >0\right) \label{posop}
\end{equation}
($\alpha \left( t\right) $ and $\beta \left( t\right) $ are real-valued
functions) vanishes for all $t\in \lbrack 0,T)$:
\begin{equation}
\left\langle H_{0}\right\rangle =\left\langle H_{0}\right\rangle \left(
t\right) =\left\langle H_{0}\right\rangle \left( 0\right) =0, \label{indata}
\end{equation}
when $\psi \left( x,0\right) =0$ almost everywhere. Then the corresponding
Cauchy initial value problem
\begin{equation}
i\frac{\partial \psi }{\partial t}=H\psi ,\qquad \psi \left( x,0\right)
=\varphi \left( x\right) \label{Cauchyivp}
\end{equation}
may have only one solution, when $x\psi \left( x,t\right) \in L^{2}\left(
\mathbb{R}
\right) $ (if $H_{0}=g\left( t\right) I,$ where $I=id$ is the identity
operator, $\psi \in L^{2}\left(
\mathbb{R}
\right) $).
\end{lemma}
Here it is not assumed that $H_{0}$ is the quantum integral of motion when
$\frac{d}{dt}\left\langle H_{0}\right\rangle \equiv 0.$
\begin{proof}
If there are two solutions
\begin{equation*}
i\frac{\partial \psi _{1}}{\partial t}=H\psi _{1},\qquad i\frac{\partial
\psi _{2}}{\partial t}=H\psi _{2}
\end{equation*}
with the same initial condition $\psi _{1}\left( x,0\right) =\psi _{2}\left(
x,0\right) =\varphi \left( x\right) ,$ then by the superposition principle
the function $\psi =\psi _{1}-\psi _{2}$ is also a solution with respect to
the zero initial data $\psi \left( x,0\right) =\varphi \left( x\right)
-\varphi \left( x\right) =0.$ By the hypothesis of the lemma
\begin{equation*}
\left\langle \psi ,H_{0}\psi \right\rangle =f\left( t\right) \left\langle
\left( \alpha p+\beta x\right) \psi ,\left( \alpha p+\beta x\right) \psi
\right\rangle +g\left( t\right) \left\langle x\psi ,x\psi \right\rangle =0
\end{equation*}
for all $t\in \lbrack 0,T).$ Therefore $x\psi \left( x,t\right) =x\left(
\psi _{1}\left( x,t\right) -\psi _{2}\left( x,t\right) \right) =0$ and $\psi
_{1}\left( x,t\right) =\psi _{2}\left( x,t\right) $ almost everywhere for
all $t>0$ by the axiom of the inner product in $L^{2}\left(
\mathbb{R}
\right) .$
\end{proof}
In order to apply this lemma to the variable Hamiltonians one has to
identify the corresponding positive operators $H_{0}$ and establish the
required uniqueness properties of their dynamics with respect to the zero initial
data. In addition to the simplest available dynamical invariant (\ref{b1}),
it is worth exploring other (quadratic) possibilities. The authors believe
that it is interesting and may be important on its own. For example, our
approach gives an opportunity to determine a complete time-evolution of the
standard deviations (\ref{bdp})--(\ref{bdx}) for each of the generalized
harmonic oscillators under consideration. The details will be discussed
elsewhere.
\subsection{The Caldirola-Kanai Hamiltonian}
The required operators are given by
\begin{equation}
H=H_{0}=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) , \label{CKH1}
\end{equation}
\begin{equation}
L=\frac{\partial H}{\partial t}=\lambda \omega _{0}\left( -e^{-2\lambda t}\
p^{2}+e^{2\lambda t}\ x^{2}\right) , \label{CKH2}
\end{equation}
\begin{equation}
E=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) +\frac{\lambda }{2}\left( px+xp\right) ,\quad \frac{d}{dt}
\left\langle E\right\rangle =0. \label{CKH3}
\end{equation}
By (\ref{diffops})
\begin{equation}
\frac{d}{dt}\left\langle H\right\rangle =\left\langle \frac{\partial H}
{\partial t}\right\rangle =\left\langle L\right\rangle . \label{CKH4}
\end{equation}
Applying formula (\ref{diffexp}) one gets
\begin{eqnarray}
\frac{d}{dt}\left\langle L\right\rangle &=&2\lambda ^{2}\omega _{0}\left(
e^{-2\lambda t}\ \left\langle p^{2}\right\rangle +e^{2\lambda t}\
\left\langle x^{2}\right\rangle \right) \label{CKH5} \\
&&+2\lambda \omega _{0}^{2}\left\langle px+xp\right\rangle \notag
\end{eqnarray}
and
\begin{equation}
\frac{d}{dt}\left\langle L\right\rangle +4\omega ^{2}\left\langle
H\right\rangle =4\omega _{0}^{2}\left\langle E\right\rangle _{0}
\label{CKH6}
\end{equation}
with the help of (\ref{CKH1}) and (\ref{CKH3}).\medskip
In view of (\ref{CKH4}) and (\ref{CKH6}) the dynamics of the Hamiltonian
expectation value $\left\langle H\right\rangle $ is governed by the
following second-order differential equation
\begin{equation}
\frac{d^{2}}{dt^{2}}\left\langle H\right\rangle +4\omega ^{2}\left\langle
H\right\rangle =4\omega _{0}^{2}\left\langle E\right\rangle _{0}
\label{CKdiffeq}
\end{equation}
with the unique solution given by
\begin{equation}
\left\langle H\right\rangle =\frac{\omega ^{2}\left\langle H\right\rangle
_{0}-\omega _{0}^{2}\left\langle E\right\rangle _{0}}{\omega ^{2}}\cos
\left( 2\omega t\right) +\frac{1}{2\omega }\left\langle \frac{\partial H}
{\partial t}\right\rangle _{0}\sin \left( 2\omega t\right) +\frac{\omega
_{0}^{2}}{\omega ^{2}}\left\langle E\right\rangle _{0}. \label{CKsol}
\end{equation}
The hypotheses of Lemma~4 are satisfied. Our solution allows one to determine
the complete time evolution of the expectation values of the operators $p^{2},$
$x^{2}$ and $px+xp.$ Further details are left to the reader.
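A direct symbolic check of the closed form (\ref{CKsol}) is straightforward (a sketch; the symbols $H_0$, $E_0$, $L_0$ below stand for the initial expectation values $\left\langle H\right\rangle _{0}$, $\left\langle E\right\rangle _{0}$, $\left\langle \partial H/\partial t\right\rangle _{0}$):

```python
import sympy as sp

# H0, E0, L0 denote the initial expectation values <H>_0, <E>_0, <dH/dt>_0.
t, w, w0, H0, E0, L0 = sp.symbols('t w w0 H0 E0 L0', positive=True)
H = ((w**2*H0 - w0**2*E0)/w**2)*sp.cos(2*w*t) \
    + (L0/(2*w))*sp.sin(2*w*t) + w0**2*E0/w**2
assert sp.simplify(H.diff(t, 2) + 4*w**2*H - 4*w0**2*E0) == 0   # ODE (CKdiffeq)
assert sp.simplify(H.subs(t, 0) - H0) == 0                      # <H>(0) = <H>_0
assert sp.simplify(H.diff(t).subs(t, 0) - L0) == 0              # <H>'(0) = <L>_0
```

The same three-line check applies verbatim to the modified Caldirola-Kanai solution, with the sign of the $\sin$ term reversed.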
\subsection{The Modified Caldirola-Kanai Hamiltonian}
The required operators are
\begin{equation}
H=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) -\lambda \left( px+xp\right) , \label{MCKH1}
\end{equation}
\begin{equation}
L=\frac{\partial H}{\partial t}=\lambda \omega _{0}\left( -e^{-2\lambda t}\
p^{2}+e^{2\lambda t}\ x^{2}\right) =\frac{\partial H_{0}}{\partial t},
\label{MCKH2}
\end{equation}
\begin{equation}
E=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) -\frac{\lambda }{2}\left( px+xp\right) . \label{MCKH3}
\end{equation}
We consider the expectation value $\left\langle H_{0}\right\rangle $ of the
positive operator
\begin{equation}
H_{0}=\frac{\omega _{0}}{2}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\
x^{2}\right) . \label{MCKH4}
\end{equation}
In this case $H=2E-H_{0},$ and
\begin{eqnarray}
\frac{d}{dt}\left\langle H\right\rangle &=&\left\langle \frac{\partial H}
{\partial t}\right\rangle =\left\langle L\right\rangle =-\frac{d}{dt}
\left\langle H_{0}\right\rangle , \label{MCKH5} \\
\frac{d}{dt}\left\langle L\right\rangle &=&4\omega ^{2}\left\langle
H_{0}\right\rangle -4\omega _{0}^{2}\left\langle E\right\rangle _{0},
\label{MCKH6}
\end{eqnarray}
which results in the differential equation (\ref{CKdiffeq}) with the
explicit solution
\begin{equation}
\left\langle H_{0}\right\rangle =\frac{\omega ^{2}\left\langle
H_{0}\right\rangle _{0}-\omega _{0}^{2}\left\langle E\right\rangle _{0}}
{\omega ^{2}}\cos \left( 2\omega t\right) -\frac{1}{2\omega }\left\langle
\frac{\partial H_{0}}{\partial t}\right\rangle _{0}\sin \left( 2\omega
t\right) +\frac{\omega _{0}^{2}}{\omega ^{2}}\left\langle E\right\rangle _{0}
\end{equation}
of the initial value problem. The hypotheses of the lemma are satisfied.
\subsection{The United Model}
The related operators can be conveniently extended as follows
\begin{equation}
H_{0}=\frac{\omega _{0}}{2}e^{\mu t}\left( e^{-2\lambda t}\
p^{2}+e^{2\lambda t}\ x^{2}\right) , \label{UMHamZ}
\end{equation}
\begin{equation}
L=e^{\mu t}\left( -e^{-2\lambda t}\ p^{2}+e^{2\lambda t}\ x^{2}\right) ,
\label{UMHamL}
\end{equation}
\begin{equation}
M=e^{\mu t}\left( px+xp\right) \label{UMHamM}
\end{equation}
and
\begin{eqnarray}
E &=&H_{0}\left( t\right) +\frac{1}{2}\left( \lambda -\mu \right) M\left(
t\right) \label{UMEnergy} \\
&=&\frac{\omega _{0}}{2}e^{\mu t}\left( e^{-2\lambda t}\ p^{2}+e^{2\lambda
t}\ x^{2}\right) +\frac{1}{2}\left( \lambda -\mu \right) e^{\mu t}\left(
px+xp\right) . \notag
\end{eqnarray}
Then by Lemma~
\begin{equation}
\frac{d}{dt}\left\langle M\right\rangle =-2\omega _{0}\left\langle
L\right\rangle , \label{DiffM}
\end{equation}
\begin{equation}
\frac{d}{dt}\left\langle H_{0}\right\rangle =\omega _{0}\left( \lambda -\mu
\right) \left\langle L\right\rangle , \label{DiffHZ}
\end{equation}
\begin{equation}
\frac{d}{dt}\left\langle E\right\rangle =0 \label{DiffE}
\end{equation}
and
\begin{equation}
\frac{d}{dt}\left\langle L\right\rangle =4\frac{\lambda -\mu }{\omega _{0}}
\left\langle H_{0}\right\rangle +2\omega _{0}\left\langle M\right\rangle .
\label{DiffL}
\end{equation}
In terms of the energy operator
\begin{equation}
\frac{d}{dt}\left\langle L\right\rangle +\frac{4\omega ^{2}}{\left( \lambda
-\mu \right) \omega _{0}}\left\langle H_{0}\right\rangle =\frac{4\omega _{0}}
}{\lambda -\mu }\left\langle E\right\rangle \label{DiffLE}
\end{equation}
and as a result
\begin{equation}
\frac{d^{2}}{dt^{2}}\left\langle H_{0}\right\rangle +4\omega
^{2}\left\langle H_{0}\right\rangle =4\omega _{0}^{2}\left\langle
E\right\rangle _{0},\quad \omega =\sqrt{\omega _{0}^{2}-\left( \lambda -\mu
\right) ^{2}}>0 \label{DiffEQUM}
\end{equation}
with the unique solution of the initial value problem given by
\begin{eqnarray}
\left\langle H_{0}\right\rangle &=&\frac{\omega ^{2}\left\langle
H_{0}\right\rangle _{0}-\omega _{0}^{2}\left\langle E\right\rangle _{0}}
{\omega ^{2}}\cos \left( 2\omega t\right) \label{UMSol} \\
&&+\frac{1}{2}\left( \lambda -\mu \right) \frac{\omega _{0}}{\omega }
\left\langle L\right\rangle _{0}\sin \left( 2\omega t\right) +\frac{\omega
_{0}^{2}}{\omega ^{2}}\left\langle E\right\rangle _{0}. \notag
\end{eqnarray}
The hypotheses of Lemma~4 are satisfied.
\subsection{The Modified Oscillator}
The required operators are
\begin{eqnarray}
H &=&\left( \cos t\ p+\sin t\ x\right) ^{2} \label{MC-SS1} \\
&=&\cos ^{2}t\ p^{2}+\sin ^{2}t\ x^{2}+\sin t\cos t\ \left( px+xp\right)
\notag \\
&=&\frac{1}{2}\left( p^{2}+x^{2}\right) +\frac{1}{2}\cos 2t\ \left(
p^{2}-x^{2}\right) +\frac{1}{2}\sin 2t\ \left( px+xp\right) \notag \\
&=&H_{0}+E\left( t\right) , \notag
\end{eqnarray}
where
\begin{equation}
H_{0}=\frac{1}{2}\left( p^{2}+x^{2}\right) , \label{MC-SS2}
\end{equation}
\begin{equation}
E=E\left( t\right) =\frac{1}{2}\cos 2t\ \left( p^{2}-x^{2}\right) +\frac{1}{2}
\sin 2t\ \left( px+xp\right) \label{MC-SS3}
\end{equation}
and
\begin{equation}
L=\frac{\partial H}{\partial t}=\frac{\partial E}{\partial t}=-\sin 2t\
\left( p^{2}-x^{2}\right) +\cos 2t\ \left( px+xp\right) . \label{MC-SS4}
\end{equation}
Here
\begin{equation}
\frac{d}{dt}\left\langle H_{0}\right\rangle =\frac{d}{dt}\left\langle
H\right\rangle =\left\langle \frac{\partial H}{\partial t}\right\rangle
=\left\langle \frac{\partial E}{\partial t}\right\rangle =\left\langle
L\right\rangle \label{MC-SS5}
\end{equation}
and
\begin{equation}
\frac{d}{dt}\left\langle L\right\rangle =4\left\langle H_{0}\right\rangle .
\label{MC-SS6}
\end{equation}
The expectation value $\left\langle H_{0}\right\rangle $ satisfies the
following differential equatio
\begin{equation}
\frac{d^{2}}{dt^{2}}\left\langle H_{0}\right\rangle =4\left\langle
H_{0}\right\rangle \label{MC-SSeq}
\end{equation}
with the explicit solution
\begin{equation}
\left\langle H_{0}\right\rangle =\left\langle H_{0}\right\rangle _{0}\cosh
\left( 2t\right) +\frac{1}{2}\left\langle L\right\rangle _{0}\sinh \left(
2t\right) . \label{MC-SSsol}
\end{equation}
The hypotheses of Lemma~4 are satisfied.
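A quick symbolic check of (\ref{MC-SSsol}) (a sketch; $H_0$ and $L_0$ stand for the initial values $\left\langle H_{0}\right\rangle _{0}$ and $\left\langle L\right\rangle _{0}$), which also highlights the hyperbolic growth here, in contrast with the bounded oscillations of the damped models above:

```python
import sympy as sp

# H stands for the expectation value <H_0>(t) in (MC-SSsol).
t, H0, L0 = sp.symbols('t H0 L0')
H = H0*sp.cosh(2*t) + sp.Rational(1, 2)*L0*sp.sinh(2*t)
assert sp.simplify(H.diff(t, 2) - 4*H) == 0          # eq. (MC-SSeq)
assert sp.simplify(H.subs(t, 0) - H0) == 0           # initial value <H_0>_0
assert sp.simplify(H.diff(t).subs(t, 0) - L0) == 0   # initial slope <L>_0
```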
\subsection{The Modified Damped Oscillator}
Let $\hslash =m\omega _{0}=1$ in the Hamiltonian (\ref{CJHam}):
\begin{equation}
H=\frac{\omega _{0}}{2}\left( \frac{p^{2}}{\cosh ^{2}\left( \lambda t\right)
}+\cosh ^{2}\left( \lambda t\right) \ x^{2}\right) \label{CJham}
\end{equation}
without loss of generality. The corresponding energy operator can be found
as follows
\begin{eqnarray}
&&E=\frac{\omega _{0}}{2\cosh ^{2}\left( \lambda t\right) }p^{2}+\frac{
\omega _{0}^{2}\sinh ^{2}\left( \lambda t\right) +\omega ^{2}}{2\omega _{0}}
x^{2} \label{CJEnergy} \\
&&\qquad +\frac{\lambda }{2}\tanh \left( \lambda t\right) \left(
px+xp\right) ,\qquad \frac{d}{dt}\left\langle E\right\rangle =0, \notag
\end{eqnarray}
in view of (\ref{energysysA})--(\ref{energysysC}) (one should replace
$A\leftrightarrow B,$ $C\rightarrow -C$ in the momentum
representation).\medskip
Introducing the following complementary operators
\begin{eqnarray}
H_{0} &=&\frac{p^{2}}{\cosh ^{2}\left( \lambda t\right) }+\cosh ^{2}\left(
\lambda t\right) \ x^{2}, \label{CJHZ} \\
L &=&\frac{p^{2}}{\cosh ^{2}\left( \lambda t\right) }-\cosh ^{2}\left(
\lambda t\right) \ x^{2}, \label{CJL} \\
M &=&px+xp, \label{CJM}
\end{eqnarray}
we get
\begin{eqnarray}
\frac{d}{dt}\left\langle H_{0}\right\rangle &=&-2\lambda \tanh \left(
\lambda t\right) \left\langle L\right\rangle , \label{CJHsys} \\
\frac{d}{dt}\left\langle L\right\rangle &=&-2\lambda \tanh \left( \lambda
t\right) \left\langle H_{0}\right\rangle -2\omega _{0}\left\langle
M\right\rangle , \label{CJLsys} \\
\frac{d}{dt}\left\langle M\right\rangle &=&2\omega _{0}\left\langle
L\right\rangle . \label{CJMsys}
\end{eqnarray}
Then
\begin{eqnarray}
E &=&\frac{\omega _{0}}{2}\left( 1-\frac{\lambda ^{2}}{2\omega _{0}^{2}\cosh
^{2}\left( \lambda t\right) }\right) H_{0}+\frac{\lambda ^{2}}{4\omega
_{0}\cosh ^{2}\left( \lambda t\right) }\ L \label{CJenergy} \\
&&+\frac{\lambda }{2}\tanh \left( \lambda t\right) M \notag
\end{eqnarray}
and, eliminating $\left\langle M\right\rangle $ and $\left\langle
L\right\rangle $ from the system, one gets
\begin{equation}
\frac{d^{2}}{dt^{2}}\left\langle H_{0}\right\rangle -\frac{4\lambda }{\sinh
\left( 2\lambda t\right) }\frac{d}{dt}\left\langle H_{0}\right\rangle
+2\left( 2\omega ^{2}+\frac{\lambda ^{2}}{\cosh ^{2}\left( \lambda t\right) }
\right) \left\langle H_{0}\right\rangle =8\omega _{0}\left\langle
E\right\rangle _{0}. \label{CJEquation}
\end{equation}
The required initial conditions
\begin{equation}
\left( \frac{d}{dt}\left\langle H_{0}\right\rangle \right) _{0}=0,\qquad
\left( \coth \left( \lambda t\right) \frac{d}{dt}\left\langle
H_{0}\right\rangle \right) _{0}=-2\lambda \left\langle L\right\rangle _{0}
\label{CJConditions}
\end{equation}
follow from (\ref{CJHsys}). The unique explicit solution is given by
\begin{eqnarray}
\left\langle H_{0}\right\rangle &=&-\lambda \frac{\lambda ^{2}\left\langle
E\right\rangle _{0}+\omega _{0}\omega ^{2}\left\langle L\right\rangle _{0}}
{\omega _{0}\omega ^{2}\left( 2\omega ^{2}+\lambda ^{2}\right) }
\label{CJSolution} \\
&&\times \left( 2\omega \tanh \left( \lambda t\right) \sin \left( 2\omega
t\right) +\lambda \left( 1+\tanh ^{2}\left( \lambda t\right) \right) \cos
\left( 2\omega t\right) \right) \notag \\
&&+2\left\langle E\right\rangle _{0}\frac{\omega _{0}}{\omega ^{2}}\left( 1-
\frac{\lambda ^{2}}{2\omega _{0}^{2}\cosh ^{2}\left( \lambda t\right) }
\right) \notag
\end{eqnarray}
(see appendix~D). The hypotheses of Lemma~4 are satisfied.
\subsection{The Modified Parametric Oscillator}
In the case (\ref{MPOHam}), the energy operator (\ref{EQ4}) is a positive
operator
\begin{equation}
\left\langle E\right\rangle =\tanh ^{2}\left( \lambda t+\delta \right) \
\left\langle p^{2}\right\rangle +\coth ^{2}\left( \lambda t+\delta \right) \
\left\langle x^{2}\right\rangle =\left\langle E\right\rangle _{0}>0.
\label{MPOEnergy}
\end{equation}
The related operators are
\begin{eqnarray}
L &=&\tanh ^{2}\left( \lambda t+\delta \right) \ p^{2}-\coth ^{2}\left(
\lambda t+\delta \right) \ x^{2}, \label{MPOL} \\
M &=&px+xp, \label{MPOM} \\
H &=&\frac{\omega }{2}\ E+\frac{\lambda }{\sinh \left( 2\lambda t+2\delta
\right) }\ M \label{MPOH}
\end{eqnarray
wit
\begin{equation}
\frac{d}{dt}\left\langle L\right\rangle =-2\omega \left\langle
M\right\rangle ,\qquad \frac{d}{dt}\left\langle M\right\rangle =2\omega
\left\langle L\right\rangle . \label{MPODiffLM}
\end{equation}
From here
\begin{equation}
\frac{d^{2}}{dt^{2}}\left\langle L\right\rangle +4\omega ^{2}\left\langle
L\right\rangle =0,\qquad \frac{d^{2}}{dt^{2}}\left\langle M\right\rangle
+4\omega ^{2}\left\langle M\right\rangle =0, \label{MPODiffEqLM}
\end{equation}
which determines the time-evolution of the expectation values.
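The elimination can be checked symbolically (a sketch; note that the first-order pair must carry opposite signs, $d\left\langle L\right\rangle /dt=-2\omega \left\langle M\right\rangle$ and $d\left\langle M\right\rangle /dt=+2\omega \left\langle L\right\rangle$, for the oscillatory second-order equation (\ref{MPODiffEqLM}) to follow; equal signs would instead give hyperbolic growth, incompatible with the conserved positive invariant (\ref{MPOEnergy})):

```python
import sympy as sp

t, w = sp.symbols('t omega', positive=True)
L = sp.Function('L')(t)
# From d<L>/dt = -2*omega*<M> we have <M> = -L'/(2*omega); substituting into
# d<M>/dt = +2*omega*<L> yields the second-order equation for <L>.
M = -L.diff(t)/(2*w)
second = M.diff(t) - 2*w*L                 # d<M>/dt - 2*omega*<L> = 0
# Multiplying by -2*omega recovers  <L>'' + 4*omega**2*<L> = 0:
assert sp.simplify(-2*w*second - (L.diff(t, 2) + 4*w**2*L)) == 0
```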
\subsection{Parametric Oscillators}
In general the Lewis--Riesenfeld quadratic invariant (\ref{ParamQuadInv})
for the parametric oscillator (\ref{ParametHam}) is obviously a positive
operator for real-valued solutions of the Ermakov equation (\ref
{NonlinearODE}) that satisfies the conditions of our lemma.
\subsection{General Quadratic Hamiltonian}
In the case of Hamiltonian (\ref{GenHam}) applying formula (\ref{GenSys}) to
the operators, $O=\left\{ p^{2},x^{2},px+xp\right\} ,$ one obtains \cite
{Cor-Sot:Sua:Sus}
\begin{equation}
\frac{d}{dt}\left(
\begin{array}{c}
\left\langle p^{2}\right\rangle \smallskip \\
\left\langle x^{2}\right\rangle \smallskip \\
\left\langle px+xp\right\rangle
\end{array}
\right) =\left(
\begin{array}{ccc}
-3c\left( t\right) -d\left( t\right) & 0 & -2b\left( t\right) \smallskip \\
0 & c\left( t\right) +3d\left( t\right) & 2a\left( t\right) \smallskip \\
4a\left( t\right) & -4b\left( t\right) & -c\left( t\right) +d\left( t\right)
\end{array}
\right) \left(
\begin{array}{c}
\left\langle p^{2}\right\rangle \smallskip \\
\left\langle x^{2}\right\rangle \smallskip \\
\left\langle px+xp\right\rangle
\end{array}
\right) . \label{GSystem}
\end{equation}
This system has a unique solution for suitable coefficients \cite{HilleODE},
which allows one to apply Lemma~4, say, for the positive operator $x^{2}.$
Our Theorem~1 provides another choice of positive operators. On second
thought, a positive integral (\ref{b3}) determines the time evolution of the
squared norm and guarantees uniqueness in $L^{2}\left(
\mathbb{R}
\right) .$ Details are left to the reader.\medskip
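As a numerical illustration (a sketch with arbitrarily chosen parameters, not taken from the paper), one can integrate the system (\ref{GSystem}) for the Caldirola-Kanai coefficients $a=\left( \omega _{0}/2\right) e^{-2\lambda t},$ $b=\left( \omega _{0}/2\right) e^{2\lambda t},$ $c=d=0,$ and compare $\left\langle H\right\rangle =a\left\langle p^{2}\right\rangle +b\left\langle x^{2}\right\rangle$ with the closed form (\ref{CKsol}):

```python
import math

# Illustrative parameters (arbitrary): hbar = m = 1, w0 = 1, lam = 0.2.
w0, lam = 1.0, 0.2
w = math.sqrt(w0**2 - lam**2)                 # omega for the Caldirola-Kanai case

def a(t): return 0.5 * w0 * math.exp(-2*lam*t)   # coefficient of p^2 in H
def b(t): return 0.5 * w0 * math.exp( 2*lam*t)   # coefficient of x^2 in H

def rhs(t, y):                                # y = (<p^2>, <x^2>, <px+xp>)
    p2, x2, m = y                             # eq. (GSystem) with c = d = 0
    return (-2*b(t)*m, 2*a(t)*m, 4*a(t)*p2 - 4*b(t)*x2)

def rk4(y, t0, t1, n=3000):                   # classical 4th-order Runge-Kutta
    h, t = (t1 - t0)/n, t0
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = rhs(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = rhs(t + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
        y = [yi + h/6*(c1 + 2*c2 + 2*c3 + c4)
             for yi, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4)]
        t += h
    return y

y0 = [1.0, 1.0, 0.0]                          # arbitrary initial expectation values
H0v = a(0)*y0[0] + b(0)*y0[1]                 # <H>_0
E0v = H0v + 0.5*lam*y0[2]                     # <E>_0 = <H>_0 + (lam/2)<px+xp>_0
L0v = lam*w0*(-y0[0] + y0[1])                 # <dH/dt>_0

for T in (0.5, 1.5, 3.0):
    p2, x2, m = rk4(list(y0), 0.0, T)
    H_num = a(T)*p2 + b(T)*x2
    H_cf = ((w**2*H0v - w0**2*E0v)/w**2 * math.cos(2*w*T)
            + L0v/(2*w) * math.sin(2*w*T) + w0**2*E0v/w**2)   # eq. (CKsol)
    assert abs(H_num - H_cf) < 1e-6
```

The agreement confirms that the moment system and the closed-form expectation value encode the same dynamics.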
\noindent \textbf{Acknowledgments.\/} We thank Professor Carlos Castillo-Ch
\'{a}vez, Professor Victor V. Dodonov, Professor Vladimir~I. Man'ko and
Professor Kurt Bernardo Wolf for support, valuable discussions and
encouragement. The authors are indebted to Professor George A.~Hagedorn for
kindly bringing the papers \cite{Hag:Loss:Slaw} and \cite{Haged98} to
our attention. We thank David Murillo for help. The authors are grateful to
Professor Peter~G.~L.~Leach for careful reading of the manuscript --- his
numerous suggestions have helped to improve the presentation. One of the
authors (RCS) is supported by the following National Science Foundation
programs: Louis Stokes Alliances for Minority Participation (LSAMP): NSF
Cooperative Agreement No. HRD-0602425 (WAESO LSAMP Phase IV); Alliances for
Graduate Education and the Professoriate (AGEP): NSF Cooperative Agreement
No. HRD-0450137 (MGE@MSA AGEP Phase II).
\section{Introduction}
\begin{figure}
\includegraphics[width=\linewidth]{figures/morph}
\caption{Example of annotation disagreement in UD between two languages on translations of one phrase, reproduced from \citet{malaviya2018neural}. The final word in each, \form{\emph{refrescante}}, is not inflected for gender: It has the same surface form whether masculine or feminine. Only in Portuguese is it annotated as masculine, reflecting grammatical concord with the noun it modifies.}
\label{fig:disagreement}
\end{figure}
The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies \citep[UD;][]{nivre2017universal} and Universal Morphology \cite[UniMorph;][]{sylakglassman2015,kirov2018unimorph} projects.
Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked.
The schemata capture largely similar information, so one may want to unify the two resources, leveraging both UD's token-level treebanks and UniMorph's type-level lookup tables of paradigms.
Unfortunately, neither resource perfectly realizes its schema.
On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in \autoref{fig:disagreement}.
A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a \emph{schema}, but to translate a \emph{resource}.
Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and increase harmony between the datasets themselves (rather than the schemata).
We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy that simply maps corresponding schematic features to each other.
Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.
This tool enables a synergistic use of UniMorph and Universal Dependencies and helps tease out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation.
The contributions of this work are:
\begin{itemize}
\item We detail a deterministic mapping from UD morphological annotations to UniMorph. Language-specific edits of the tags in 31 languages increase harmony between converted UD and existing UniMorph data (\autoref{sec:conversion}).
\item We provide an implementation of this mapping and post-editing, which replaces the UD features in a CoNLL-U file with UniMorph features.\footnote{Available at \url{https://www.github.com/unimorph/ud-compatibility}.}
\item We demonstrate that downstream tagging accuracy on UD treebanks is similar, whichever annotation schema is used~(\autoref{sec:results}).
\item We provide a partial inventory of missing attributes or annotation inconsistencies in both UD and UniMorph, a guidepost for strengthening and harmonizing each resource.
\end{itemize}
\section{Background: Morphological Inflection}
Morphological \term{inflection} is the act of altering the base form of a word (the \term{lemma}, represented in \lemma{fixed-width type}) to encode morphosyntactic features. As an example from English, \lemma{prove} takes on the \term{form} \form{proved} to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes. The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.
A classic result in psycholinguistics \citep{berko1958child}
shows that inflectional morphology is a fully productive process. Indeed,
it cannot be that humans simply have the equivalent of a lookup table, where they store the inflected forms for retrieval as the syntactic context requires. Instead, there needs to be a mental process that can generate properly inflected
words on demand. \citet{berko1958child} showed this insightfully through
the \say{wug}-test, an experiment where she forced
participants to correctly inflect out-of-vocabulary lemmata,
such as the novel noun \lemma{wug}.
Certain features of a word do not vary depending on its context: In German or Spanish where nouns are gendered, the word for \lemma{onion} will always be grammatically feminine. Thus, to prepare for later discussion, we divide the morphological features of a word into two categories: the modifiable \term{inflectional features} and the fixed \term{lexical features}.
A \term{part of speech (POS)} is a coarse syntactic category (like \say{verb}) that begets a word's particular menu of lexical and inflectional features. In English, verbs express no gender, and adjectives do not reflect person or number. The part of speech dictates a set of inflectional \term{slots} to be filled by the surface forms. Completing these slots for a given lemma and part of speech gives a \term{paradigm}: a mapping from slots to surface forms. Regular English verbs have five slots in their paradigm \citep{long1957paradigms}, which we illustrate for the verb \lemma{prove}, using simple labels for the forms in \autoref{tab:ptb}.
\begin{table}
\centering
\begin{tabular}{l l l}
\toprule
Simple label & Form & PTB tag \\
\midrule
Present, 3rd singular & \form{proves} & VBZ \\
Present, other & \form{prove} & VBP \\
Past & \form{proved} & VBD \\
Past participle & \form{proven} & VBN \\
Present participle & \form{proving} & VBG \\
\bottomrule
\end{tabular}
\caption{Inflected forms of the English verb \lemma{prove}, along with their Penn Treebank tags}
\label{tab:ptb}
\end{table}
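A paradigm can be pictured as a literal mapping from slots to surface forms; an illustrative sketch for \lemma{prove}, keyed here by Penn Treebank tag:

```python
# The paradigm of the English verb "prove": a mapping from slots (here keyed
# by Penn Treebank tag, as in the table above) to surface forms.
prove = {
    "VBZ": "proves",    # present, 3rd singular
    "VBP": "prove",     # present, other
    "VBD": "proved",    # past
    "VBN": "proven",    # past participle
    "VBG": "proving",   # present participle
}
assert prove["VBD"] == "proved"   # completing a slot yields a surface form
```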
A morphosyntactic \term{schema} prescribes how language can be annotated---giving stricter categories than our simple labels for \lemma{prove}---and can vary in the level of detail provided. Part of speech tags are an example of a very coarse schema, ignoring details of person, gender, and number. A slightly finer-grained schema for English is the Penn Treebank tagset \cite{marcus1993building}, which includes signals for English morphology. For instance, its \tag{VBZ} tag pertains to the specially inflected 3rd-person singular, present-tense verb form (e.g.\ \form{proves} in \autoref{tab:ptb}).
If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm, it is called a \term{morphosyntactic description (MSD)}.\footnote{Other sources will call this a morphological tag or bundle. We avoid the former because of the analogy to POS tagging; a morphological tag is not atomic.} These descriptions require varying amounts of detail: While the English verbal paradigm is small enough to fit on a page, the verbal paradigm of the Northeast Caucasian language Archi can have over~\num{1500000} slots~\citep{kibrik1998archi}.%
\section{Two Schemata, Two Philosophies}
Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as \tag{\textbf{Person}:~1}. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they are identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word.
\subsection{Universal Dependencies}
\begin{table*}[t]
\centering
\begin{tabular}{l l}
\toprule
Schema & Annotation \\
\midrule
UD & \tag{VERB\qquad{}Mood=Ind\textbar{}Number=Sing\textbar{}Person=3\textbar{}Tense=Imp\textbar{}VerbForm=Fin} \\
UniMorph & \tag{V;IND;PST;1;SG;IPFV} \\
& \tag{V;IND;PST;3;SG;IPFV} \\
\bottomrule
\end{tabular}
\caption{Attested annotations for the Spanish verb form \say{\emph{mandaba}} \say{I/he/she/it commanded}. Note that UD separates the part of speech from the remainder of the morphosyntactic description. In each schema, order of the values is irrelevant.}
\label{tab:annotations}
\end{table*}
The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes.
In order to ensure consistent annotation, attributes are included in the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.
The UD schema seeks to balance language-specific and cross-lingual concerns. It annotates for both inflectional features such as case and lexical features such as gender. Additionally, the UD schema annotates for features that can be interpreted as derivational in some languages. For example, the Czech UD guidance uses a \tag{Coll} value for the \tag{\textbf{Number}} feature to denote mass nouns (for example, \say{\emph{lidstvo}} \say{humankind} from the root \say{\emph{lid}} \say{people}).\footnote{Note that \tag{\textbf{Number}: Coll} does not actually figure in the Czech corpus.}
UD represents a confederation of datasets \citep[see, e.g.,][]{dirix2017universal} annotated with dependency relationships (which are not the focus of this work) and morphosyntactic descriptions. Each dataset is an annotated treebank, making it a resource of \term{token-level} annotations. The schema is guided by these treebanks, with feature names chosen for relevance to native speakers. (In \autoref{sec:unimorph}, we will contrast this with UniMorph's treatment of morphosyntactic categories.) The UD datasets have been used in the CoNLL shared tasks \citep[2018 to appear]{zeman2017conll}.
\subsection{UniMorph} \label{sec:unimorph}
In the Universal Morphological Feature Schema \citep[UniMorph schema,][]{sylak2016composition}, there are at least 212 values, spread across 23 attributes. It identifies some attributes that UD excludes, like information structure and deixis, as well as providing more values for certain attributes, like 23 different noun classes endemic to Bantu languages. As it is a schema for marking morphology, its part of speech attribute does not have POS values for punctuation, symbols, or miscellany (\tag{Punct}, \tag{Sym}, and~\tag{X} in Universal Dependencies).
Like the UD schema, the decomposition of a word into its lemma and MSD is directly comparable across languages. Its features are informed by a distinction between \term{universal categories}, which are widespread and psychologically \say{real} to speakers; and \term{comparative concepts}, only used by linguistic typologists to compare languages \citep{haspelmath2010comparative}. Additionally, it strives for identity of meaning across languages, not simply similarity of terminology. As a prime example, it does not regularly label a dative case for nouns, for reasons explained in depth by \citet{haspelmath2010comparative}.\footnote{\say{The Russian Dative, the Korean Dative, and the Turkish Dative are similar enough to be called by the same name, but there are numerous differences between them and they cannot be simply equated with each other. Clearly, their nature is not captured satisfactorily by saying that they are instantiations of a crosslinguistic category \say{dative}.} \citep{haspelmath2010comparative}}
The UniMorph resources for a language contain complete paradigms extracted from Wiktionary \citep{kirov2016very, kirov2018unimorph}. Word \term{types} are annotated to form a database, mapping a lemma--tag pair to a surface form. The schema is explained in detail in \citet{sylak2016composition}. It has been used in the SIGMORPHON shared task \citep{cotterell2016sigmorphon} and the CoNLL--SIGMORPHON shared tasks \citep{cotterell2017conll, cotterell-EtAl:2018}. Several components of the UniMorph schema have been adopted by UD.%
\footnote{\url{http://universaldependencies.org/v2/features.html\#comparison-with-unimorph}}
\subsection{Similarities in the annotation}
While the two schemata annotate different features, their annotations often look largely similar. Consider the attested annotation of the Spanish word \form{\emph{mandaba}} \say{(I/he/she/it) commanded}. \autoref{tab:annotations} shows that these annotations share many attributes.
Some conversions are straightforward: \tag{VERB} to \tag{V}, \tag{Mood=Ind} to \tag{IND}, \tag{Number=Sing} to \tag{SG}, and \tag{Person=3} to \tag{3}.%
\footnote{The curious reader may wonder why there are two rows of UniMorph annotation for \say{\emph{mandaba}}, each with a different recorded person. The word displays \textbf{syncretism}, meaning that a single form realizes multiple MSDs. UniMorph chooses to mark these separately for the sake of its decomposable representation. As this ambiguity is systematic and pervasive in the language, one can imagine a unified paradigm slot \tag{V;IND;PST;\{1/3\};SG;IPFV} \citep{baerman2005syntax}.}
One might also suggest mapping \tag{Tense=Imp} to \tag{IPFV}, though this crosses semantic categories: \tag{IPFV} represents the imperfective \emph{aspect}, whereas \tag{Tense=Imp} comes from \term{imperfect}, the English name often given to Spanish's \emph{pasado continuo} form. The imperfect is a verb form which combines both past tense and imperfective aspect. UniMorph chooses to split this into the atoms \tag{PST} and \tag{IPFV}, while UD unifies them according to the familiar name of the tense.
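The one-to-one conversions above amount to a lookup table over attribute--value pairs. The sketch below illustrates the idea; the table shown is a small hypothetical subset, not the actual mapping used in this work.

```python
# Illustrative subset of a UD -> UniMorph value lookup (hypothetical entries;
# the full mapping described in this paper is far larger).
UD_TO_UNIMORPH = {
    ("POS", "VERB"): "V",
    ("Mood", "Ind"): "IND",
    ("Number", "Sing"): "SG",
    ("Person", "3"): "3",
}

def convert_feats(pos, feats):
    """Map a UD POS tag and FEATS dict to a set of UniMorph atoms,
    skipping values with no entry in the lookup table."""
    atoms = {UD_TO_UNIMORPH[("POS", pos)]}
    for attr, val in feats.items():
        tag = UD_TO_UNIMORPH.get((attr, val))
        if tag is not None:
            atoms.add(tag)
    return atoms

feats = {"Mood": "Ind", "Number": "Sing", "Person": "3", "Tense": "Imp"}
print(sorted(convert_feats("VERB", feats)))  # ['3', 'IND', 'SG', 'V']
```

Note that \tag{Tense=Imp} is left unconverted by the pure lookup, exactly the kind of gap the post-edits of \autoref{sec:conversion} address.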
\section{UD treebanks and UniMorph tables} \label{sec:resources}
Prima facie, the alignment task may seem trivial. But we've yet to consider the humans in the loop. This conversion is a hard problem precisely because we're not operating on idealized schemata: we're actually annotating human decisions---and human mistakes. If both schemata were perfectly applied, their overlapping attributes could be mapped to each other simply, in a cross-lingual and totally general way. Unfortunately, the resources are imperfect realizations of their schemata. The cross-lingual, cross-resource, and within-resource problems that we'll note mean that we need a tailor-made solution for each language.
Showcasing their schemata, the Universal Dependencies and UniMorph projects each present large, annotated datasets. UD's v2.1 release \citep{nivre2017universal} has 102 treebanks in 60 languages. The large resource, constructed by independent parties, evinces problems in the goal of a universal inventory of annotations. Annotators may choose to omit certain values (like the coerced gender of \emph{refrescante} in \autoref{fig:disagreement}), and they may disagree on how a linguistic concept is encoded. (See, e.g., \citeauthor{haspelmath2010comparative}'s (\citeyear{haspelmath2010comparative}) description of the dative case.) Additionally, many of the treebanks \say{were created by fully- or semi-automatic conversion from treebanks with less comprehensive annotation schemata than UD} \citep{malaviya2018neural}. For instance, the Spanish word \say{\emph{vas}} \say{you go} is incorrectly labeled \tag{\textbf{Gender:} Fem\textbar{}\textbf{Number:} Pl} because it ends in a character sequence which is common among feminine plural nouns. (Nevertheless, the part of speech field for \say{\emph{vas}} is correct.)
\begin{figure*}
\centering
\begin{tabular}{l l l l l l l}
\toprule
tegarg & \textbf{latme}-ye & bad-i & be & ba:q-e & man & \textbf{zad}. \\
Hail & damage-\tag{EZ} & bad-\tag{INDEF PAR} & to & garden-\tag{EZ} & \tag{1.s} & beat-\tag{PST}. \\
\midrule
\multicolumn{7}{c}{\say{The hail caused bad damage to my garden.} \emph{or} \say{The hail damaged my garden badly.}} \\
\bottomrule
\end{tabular}
\caption{Transliterated Persian with a gloss and translation from \citet{karimi2011separability}, annotated in a Persian-specific schema. The light verb construction \form{\emph{latme zadan}} (\say{to damage}) has been spread across the sentence. Multiword constructions like this are a challenge for word-level tagging schemata.}
\label{fig:light_verb_construction}
\end{figure*}
UniMorph's development is more centralized and pipelined.%
\footnote{This centralization explains why UniMorph tables exist for only 49 languages, or 50 when counting the Norwegian Nynorsk and Bokm\aa{}l writing forms separately.}
Inflectional paradigms are scraped from Wiktionary, annotators map positions in the scraped data to MSDs, and the mapping is automatically applied to all of the scraped paradigms. Because annotators handle languages they are familiar with (or related ones), realization of the schema is also done on a language-by-language basis. Further, the scraping process does not capture lexical aspects that are not inflected, like noun gender in many languages. The schema permits inclusion of these details; their absence is an artifact of the data collection process. Finally, UniMorph records only exist for nouns, verbs, and adjectives, though the schema is broader than these categories.
For these reasons, we treat the corpora as imperfect realizations of the schemata. Moreover, we contend that ambiguity in the schemata leaves the door open to such imperfections. With no strict guidance, it's natural that annotators would take different paths. Nevertheless, modulo annotator disagreement, we assume that within a particular corpus, one word form will always be consistently annotated.
Three categories of annotation difficulty are missing values, language-specific attributes, and multiword expressions.
\paragraph{Missing values} In both schemata, irrelevant attributes are omitted for words to which they do not pertain. For instance, an English verb is not labeled \tag{\textbf{Gender}=NULL}; the \tag{\textbf{Gender}} attribute is simply excluded from the annotation, making the human-readable representations compact. Unfortunately, in both resources, even relevant attributes are intentionally omitted. A verb's positiveness, activeness, or finiteness can be taken as implicit, and it will be omitted arbitrarily on a language-by-language basis. For instance, in our example in \autoref{tab:annotations} only UD tags Spanish finite verbs: \tag{VerbForm=Fin}. UniMorph is not alone in making such elisions: we note that \emph{neither} resource marks verb forms as active---an action entirely permitted by the schemata.
This is one source of discrepancy, both between the projects and across languages within a project, but it is straightforward to harmonize.
\paragraph{Language-specific attributes}
\phantomsection{} \label{sec:lgspec}
UD records a set of features that are kept language-specific, including \tag{\textbf{Position}} in Romanian, \tag{\textbf{Dialect}} in Russian, and \tag{\textbf{NumValue}} in Czech and Arabic.\footnote{The complete list is at \url{http://universaldependencies.org/v2/features.html\#inventory-of-features-that-will-stay-language-specific}} UniMorph has (potentially infinite) language-specific features \tag{LgSpec1}, \tag{LgSpec2}, \ldots, which are sparsely used but opaque when encountered. For instance, \tag{LgSpec1} in Spanish distinguishes between the two (semantically identical) forms of the imperfect subjunctive: the \form{-se} and \form{-ra} forms (e.g.\ \say{\emph{estuviese}} and \say{\emph{estuviera}} from \say{\emph{estar}} \say{to be}). UD does not annotate the forms differently. If a language has multiple language-specific attributes, their order is not prescribed by the UniMorph schema, and separate notes that explain the use of such tags must accompany datasets.
\paragraph{Multiword expressions} A final difficulty is how to represent multiword constructions. Both UD and UniMorph are word-level annotations, espousing what has alternately been called the \term{lexical integrity principle}~\citep{chomsky1970remarks, bresnan1995lexical} or \term{word-based morphology}~\citep{aronoff1976word, aronoff2007beginning, spencer1991morphological}. Unfortunately, not all morphological manifestations occur at the level of individual words. The Farsi (Persian) \term{light verb construction} illustrates the deficiency \citep[see][]{karimi2011separability}. Farsi expresses many actions by pairing a light verb (one with little meaning) with a noun that gives a concrete meaning. The example in \autoref{fig:light_verb_construction} uses the light verb construction \say{\emph{latme zadan}} (\say{to damage}). The parts of the verb construction are separated in the sentence, seeming to require a morphosyntactic parse. When attempting to annotate these constructs, neither schema provides guidance. In languages where these occur, language-specific decisions are made. It should be noted that multiword expressions are a general challenge to natural language processing, not specifically morphology \citep{sag2002multiword}.
\section{A Deterministic Conversion} \label{sec:conversion}
In our work, the goal is not simply to translate one schema into the other, but to translate one \emph{resource} (the imperfect manifestation of the schema) to match the other. The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward.
Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attribute-value pairs which comprise the MSDs. The former is untenable: the table of all UD feature combinations (including null features, excluding language-specific attributes) would have \num{2.445e17} entries. Of course, most combinations won't exist, but this gives a sense of the table's scale. Also, it doesn't leverage the factorial nature of the annotations: constructing the table would require a massive duplication of effort. On the other hand, attribute-value lookup lacks the flexibility to show how a pair of values interacts. Neither approach would handle language- and annotator-specific tendencies in the corpora.
Our approach to converting UD MSDs to UniMorph MSDs begins with the attribute-value lookup, then amends it on a language-specific basis. Alterations informed by the MSD and the word form, like insertion, substitution, and deletion, increase the number of agreeing annotations. They are critical for work that examines the MSD monolithically instead of feature-by-feature \citep[e.g.][]{belinkov2017neural, cotterell2017cross}: Without exact matches, converting the individual tags becomes hollow.
Beginning our process, we relied on documentation of the two schemata to create our initial, language-agnostic mapping of individual values. This mapping has \(140\) pairs in it. Because the mapping was derived purely from the schemata, it is a useful approximation of how well the schemata match up. We note, however, that the mapping does not handle idiosyncrasies like the many uses of \say{dative} or features which are represented in UniMorph by argument templates: possession and ergative--absolutive argument marking. The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD.
As shown in \autoref{sec:results}, the initial proposal is often frustratingly deficient. Thus we introduce the post-edits. To concoct these, we looked into UniMorph corpora for these languages, compared these to the conversion outputs, and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora. When a form and its lemma existed in both corpora, we could directly inspect how the annotations differed. Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values (lemma, form, etc.) to a UniMorph MSD, though we do not store the table explicitly.
Some conversion rules we've created must be applied before or after others. This ordering introduces sequential dependencies, which keep the rule set concise. Our post-editing procedure operates on the initial MSD hypothesis as follows:
\begin{enumerate}
\item First, we collect all arguments relating to a possessor or an ergative--absolutive language's argument agreement, because UniMorph represents both categories as a single templatic value.
\item We discard any values that UniMorph doesn't annotate for a particular part of speech, like gender and number in French verb participles, or German noun genders.
\item We make MSD additions when they are unambiguously implied by the resources, like \tag{PFV} to accompany \tag{PST} in Spanish \say{pasado simple}, but \tag{PST} to accompany \tag{IPFV} in Spanish \say{pasado continuo}.
\item We also incorporate fixes using information outside of the MSD like the \tag{LgSpec1} tag for Spanish's \form{-ra} forms, as described in \autoref{sec:lgspec}, and other language-specific corrections, like mapping the various dative cases to the cross-lingually comparable case annotations used in UniMorph.
\end{enumerate}
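The steps above can be sketched in code. The Spanish rules shown are a hypothetical fragment under our reading of steps 3 and 4, not the actual rule set used in this work.

```python
# Illustrative Spanish post-edits (steps 3 and 4 above); a hypothetical
# fragment, not the real rule set. Ordering matters: the PFV rule must fire
# before IPFV adds PST, or every imperfect would be marked perfective too.

def postedit_spanish(atoms, lgspec1=None):
    atoms = set(atoms)
    # Step 3: additions unambiguously implied by the resources.
    if "PST" in atoms and "IPFV" not in atoms:
        atoms.add("PFV")              # pasado simple is perfective
    if "IPFV" in atoms:
        atoms.add("PST")              # pasado continuo is a past tense
    # Step 4: information outside the MSD, e.g. LgSpec1 for the -ra forms.
    if lgspec1 is not None:
        atoms.add(lgspec1)
    return atoms

print(sorted(postedit_spanish({"V", "IND", "PST"})))  # ['IND', 'PFV', 'PST', 'V']
```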
\paragraph{What we left out} We did, however, reject certain changes that would increase harmony between the resources. Usually, this decision was made when the UniMorph syntax or tagset was not obeyed, such as in the case of made-up tags for Basque arguments (instead of the template mentioned above) or the use of idiosyncratic colons (:) instead of semicolons (;) as separators in Farsi. Other instances were linguistically motivated. UD acknowledges Italian imperatives, but UniMorph does not have any in its table. We could have relabeled most of these with subjunctive tags, but to ill effect. A third reason to be conservative in our rules was cases of under-specification: If a participle is not marked as past or present in UD, but both exist in UniMorph, we could unilaterally assign all to the majority category and increase recall. This would pollute the data with fallacious features, so we leave these cases under-specified. In other words, we do not add new values that cannot be unequivocally inferred from the existing data.
\paragraph{Output} The Universal Dependencies data are presented in the CoNLL-U format.\footnote{\url{http://universaldependencies.org/format.html}} Each sentence is represented in tabular form to organize annotations like lemmas, parts of speech, and dependencies of each word token. The MSDs are held in a column called \texttt{FEATS}. Our MSD conversion tool produces a CoNLL-U file whose \texttt{FEATS} column now contains a UniMorph-style MSD. For more straightforward interface with UniMorph, the feature bundle includes the part of speech tag. As the \texttt{POS} column of the CONLL-U file is preserved, this can easily be stripped from the \texttt{FEATS} column, depending on use case.
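This output stage can be sketched as follows, with a toy placeholder standing in for the real converter. In the CoNLL-U token line, \texttt{UPOS} is column 4 (index 3) and \texttt{FEATS} is column 6 (index 5); comment and blank lines pass through unchanged.

```python
def convert(pos, feats):
    """Toy placeholder for the real UD-to-UniMorph converter: joins the POS
    tag with the raw UD feature values, separated by semicolons."""
    vals = [kv.split("=")[1] for kv in feats.split("|")] if feats != "_" else []
    return ";".join([pos] + vals)

def rewrite_conllu(lines):
    """Replace the FEATS column (index 5) of each token line with a
    UniMorph-style MSD that includes the part of speech."""
    out = []
    for line in lines:
        if not line or line.startswith("#"):
            out.append(line)          # comments and blanks pass through
            continue
        cols = line.split("\t")
        cols[5] = convert(cols[3], cols[5])   # UPOS is column index 3
        out.append("\t".join(cols))
    return out

row = "1\tmandaba\tmandar\tVERB\t_\tMood=Ind|Number=Sing\t0\troot\t_\t_"
print(rewrite_conllu([row])[0].split("\t")[5])  # VERB;Ind;Sing
```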
\paragraph{Why not a learned mapping?} One can imagine learning the UniMorph MSD corresponding to a UD dataset's MSD by a set-to-set translation model like IBM Model~1 \citep{brown1993mathematics}. Unfortunately, statistical (and especially neural) machine translation generalizes in unreliable ways. Our goal is a straightforward, easily manipulable and extensible conversion that prioritizes correctness over coverage.
\section{Experiments}
We evaluate our tool on two tasks:
\begin{description}
\item[Intrinsic assessment:] Once we convert UD MSDs to UniMorph MSDs, how many of the converted MSDs are attested in UniMorph's paradigm tables?
\item[Extrinsic assessment:] Whether performance on a downstream task is comparable when using pre- and post-conversion MSDs.
\end{description}
To be clear, our scope is limited to the schema conversion. Future work will explore NLP tasks that exploit both the created token-level UniMorph data and the existing type-level UniMorph data.
\paragraph{Data} We draw our input data from the UD v2.1 treebanks \citep{nivre2017universal}. When multiple treebanks exist for a language, we select the one with a basic name, e.g.\ \say{Spanish} instead of \say{Spanish-AnCora}. We leave the construction of additional converters to future work, and we invite the community to participate in designing the mappings for all UD treebanks. UniMorph modifies its language packs individually instead of offering versioned releases. Our UniMorph lookup tables are the latest versions at the time of writing.\footnote{As of 19 June 2018, the latest modification to a UniMorph language resource was to Finnish on 3 August 2017.} There are 31 languages which possess both a UD and a UniMorph corpus.
\subsection{Intrinsic evaluation} We transform all UD data to the UniMorph schema. We compare the simple lookup-based transformation to the one with linguistically informed post-edits on all languages with both UD and UniMorph data. We then evaluate the recall of MSDs without partial credit.
\paragraph{Calculating recall} Because the UniMorph tables only possess annotations for verbs, nouns, adjectives, or some combination, we can only examine performance for these parts of speech. We consider two words to be a match if their form and lemma are present in both resources. Syncretism allows a single surface form to realize multiple MSDs (Spanish \form{\emph{mandaba}} can be first- or third-person), so we define success as the computed MSD matching \emph{any} of the word's UniMorph MSDs. This gives rise to an equation for recall: of the word--lemma pairs found in both resources, how many of their UniMorph-converted MSDs are present in the UniMorph tables?
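The recall defined above can be computed directly; the data below is illustrative.

```python
# Recall as defined above: among form--lemma pairs present in both resources,
# a conversion counts as correct if it matches ANY UniMorph MSD for the pair
# (allowing for syncretism).

def recall(converted, unimorph):
    """converted: {(form, lemma): msd}; unimorph: {(form, lemma): {msd, ...}}."""
    shared = [key for key in converted if key in unimorph]
    hits = sum(1 for key in shared if converted[key] in unimorph[key])
    return hits / len(shared) if shared else 0.0

uni = {("mandaba", "mandar"): {"V;IND;PST;1;SG;IPFV", "V;IND;PST;3;SG;IPFV"}}
conv = {("mandaba", "mandar"): "V;IND;PST;3;SG;IPFV"}
print(recall(conv, uni))  # 1.0
```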
\paragraph{Why no held-out test set?} Our problem here is not a learning problem, so the question is ill-posed. There is no \emph{training} set, and the two resources for a given language make up a test set. The quality of our model---the conversion tool---comes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora.
\subsection{Extrinsic evaluation} If the UniMorph-converted treebanks perform differently on downstream tasks, then they convey different information. This signals a failure of the conversion process. As a downstream task, we choose morphological tagging, a critical step to leveraging morphological information on new text.
We evaluate taggers trained on the transformed UD data, choosing eight languages randomly from the intersection of UD and UniMorph resources. We report the macro-averaged F1 score of attribute-value pairs on a held-out test set, with official train/validation/test splits provided in the UD treebanks. As a reference point, we also report tagging accuracy on those languages' untransformed data.
We use the state-of-the-art morphological tagger of \citet{malaviya2018neural}. It is a factored conditional random field with potentials for each attribute, attribute pair, and attribute transition. The potentials are computed by neural networks, predicting the values of each attribute jointly but not monolithically. Inference with the potentials is performed approximately by loopy belief propagation. We use the authors' hyperparameters.
We note a minor implementation detail for the sake of reproducibility. The tagger exploits explicit guidance about the attribute each value pertains to. The UniMorph schema's values are globally unique, but their attributes are not explicit. For example, the UniMorph \tag{Masc} denotes a masculine gender. We amend the code of \citeauthor{malaviya2018neural} to incorporate attribute identifiers for each UniMorph value.
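The amendment amounts to a value-to-attribute lookup. The table below is a hypothetical fragment (the real schema has over 200 values across 23 attributes); the \tag{LgSpec} fallback is our illustrative choice.

```python
# Hypothetical fragment of a UniMorph value-to-attribute table.
VALUE_TO_ATTRIBUTE = {
    "MASC": "Gender", "FEM": "Gender",
    "SG": "Number", "PL": "Number",
    "PST": "Tense", "IND": "Mood",
}

def attribute_of(value):
    """Return the attribute a UniMorph value belongs to.
    Unknown values fall back to a language-specific bucket (our choice)."""
    return VALUE_TO_ATTRIBUTE.get(value.upper(), "LgSpec")

print(attribute_of("Masc"))  # Gender
```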
\section{Results} \label{sec:results}
\begin{table}
\centering
\begin{tabular}{l S[table-format=5.2] S[table-format=5.2] }
\toprule
Language & {CSV} & {Post-editing} \\
\midrule
Ar & 0.00 & {-} \\
Bg & 34.61 & 87.88 \\
Ca & 23.23 & 99.78 \\
Cs & 0.48 & 81.71 \\
Da & 1.55 & 4.70 \\
De & 17.20 & 60.81 \\
En & 42.17 & 90.10 \\
Es & 17.20 & 97.86 \\
Eu & 0.00 & 0.00 \\
Fa & 0.00 & {-} \\
Fi & 59.19 & 92.81 \\
Fr & 18.61 & 99.20 \\
Ga & 0.41 & 0.41 \\
He & 4.08 & 46.61 \\
Hi & 0.00 & {-} \\
Hu & 15.46 & 24.94 \\
It & 22.32 & 94.89 \\
La & 11.73 & 64.25 \\
Lt & 0.00 & {-} \\
Lv & 0.17 & 90.58 \\
Nb & 2.11 & 38.88 \\
Nl & 12.12 & 12.12 \\
Nn & 2.40 & 40.21 \\
Pl & 7.70 & 88.17 \\
Pt & 20.11 & 99.34 \\
Ro & 0.00 & 25.16 \\
Ru & 0.00 & {-} \\
Sl & 37.57 & 90.27 \\
Sv & 13.20 & 83.44 \\
Tr & 0.00 & 65.14 \\
Uk & 4.06 & 96.45 \\
Ur & 0.00 & 55.72 \\
\bottomrule
\end{tabular}
\caption{Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.}
\label{tab:recall}
\end{table}
\begin{table}
\centering
\begin{tabular}{l S[table-format=5.2] S[table-format=5.2]}
\toprule
Language & {{UD F1}} & {{UniMorph F1}} \\
\midrule
Da & 90.58 & 92.59 \\
Es & 78.31 & 96.44 \\
Fi & 93.78 & 94.98 \\
Lv & 84.20 & 86.94 \\
Pt & 95.57 & 95.77 \\
Ru & 89.89 & 89.95 \\
Bg & 95.54 & 95.79 \\
Sv & 92.39 & 93.83 \\
\bottomrule
\end{tabular}
\caption{Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs}
\label{tab:tagging}
\end{table}
We present the intrinsic task's recall scores in \autoref{tab:recall}. Bear in mind that due to annotation errors in the original corpora (like the \say{\emph{vas}} example from \autoref{sec:resources}), the optimal score is not always \(100\%\). Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms,\footnote{Fewer than \(250\) overlapping form--lemma pairs. The other languages had overlaps in the thousands.} and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.
There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset---which only contains verbs---we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.
For the extrinsic task, performance is reasonably similar whether we use the UniMorph or the UD annotations; see \autoref{tab:tagging}. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We note that in every case, tagging F1 increased---albeit by amounts as small as \(0.16\) points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g.\ \tag{Ipfv} always entails \tag{Pst} in Spanish.) Altogether, these forces seem to have little impact on tagging performance.
\section{Related Work}
The goal of a tagset-to-tagset mapping of morphological annotations is shared by the Interset project \citep{zeman2008reusable}. Interset decodes features in the source corpus to a \emph{tag interlingua}, then encodes that into target corpus features. (The idea of an interlingua is drawn from machine translation, where a prevailing early mindset was to convert to a universal representation, then encode that representation's semantics in the target language. Our approach, by contrast, is a direct flight from the source to the target.) Because UniMorph corpora are noisy, the encoding from the interlingua would have to be rewritten for each target. Further, decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form.
The creators of HamleDT sought to harmonize dependency annotations among treebanks, similar to our goal of harmonizing across resources \citep{zeman2014hamledt}. The treebanks they sought to harmonize used multiple diverse annotation schemes, which the authors unified under a single scheme.
\citet{petrov2011universal} present mappings into a coarse, \say{universal} part of speech for 22 languages. Working with POS tags rather than morphological tags (which have far more dimensions), their space of options to harmonize is much smaller than ours.
Our extrinsic evaluation is most in line with the paradigm of \citet{wisniewski2017systematic} (and similar work therein), who compare syntactic parser performance on UD treebanks annotated with two styles of dependency representation. Our problem differs, though, in that the dependency representations express different relationships, while our two schemata largely overlap. As our conversion is lossy, we do not appraise the learnability of representations as they did.
In addition to using the number of extra rules as a proxy for harmony between resources, one could perform cross-lingual projection of morphological tags \citep{drabek2005induction, kirov2017rich}. Our approach succeeds even without parallel corpora.
\section{Conclusion and Future Work}
We created a tool for annotating Universal Dependencies CoNLL-U files with UniMorph annotations. Our tool is ready to use off-the-shelf today, requires no training, and is deterministic. While under-specification necessitates a lossy and imperfect conversion, ours is interpretable. Patterns of mistakes can be identified and ameliorated.
The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology (UniMorph) schemata. As the Universal Dependencies project provides a set of treebanks with token-level annotation, while the UniMorph project releases type-level annotated tables, the newfound compatibility opens up new experiments. A prime example of exploiting token- and type-level data is \citet{tackstrom2013token}. That work presents a part-of-speech (POS) dictionary built from Wiktionary, where the POS tagger is also constrained to options available in their type-level POS dictionary, improving performance. Our transformation means that datasets are prepared for similar experiments with morphological tagging. It would also be reasonable to incorporate this tool as a subroutine to UDPipe \citep{straka2017tokenizing} and Udapi \citep{popel2017udapi}. We leave open the task of converting in the opposite direction, turning UniMorph MSDs into Universal Dependencies MSDs.
Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation.
\section*{Acknowledgments}
We thank Hajime Senuma and John Sylak-Glassman for early comments in devising the starting language-independent mapping from Universal Dependencies to UniMorph.
\section{Introduction}
The electrical grid is one of the major engineering achievements
of the 20th century. Typically, it involves high voltage transmission
lines connecting large generators and power sub-stations. The
distribution network starts at the sub-station and delivers the
energy to the end user.
The grid was originally designed to distribute
electricity from large generators. It is changing rapidly due to the
emergence of renewable and intermittent sources, energy
storage and electric vehicles \cite{bc13}. If not addressed properly, this
complexity could result in management difficulties and possible
black-outs. An important issue for the network is to predict
the power in the lines and identify critical lines, i.e.,
the ones that are most heavily loaded.
The network should be planned and operated to control the load on
these lines.
The main model used by operators and planners to analyze stationary electrical
networks is the set of so-called load-flow equations \cite{kundur},
connecting incoming
power to voltage and current. These equations are nonlinear.
Typically they are solved using a Newton method \cite{gsc15}. They can
have multiple solutions and the iteration scheme can fail to converge.
In any case, it is difficult
to see how the load-flow solution is affected by the topology of
the network and the nodal distribution of generators and loads.
A global geometrical
point of view, incorporating the topology and the load-generator
distribution would be very useful to address this issue.
In this article, we propose such a geometrical point of view.
We consider the case of a transmission network and
linearize the load-flow equations to obtain a
Laplacian equation, involving
the graph Laplacian operator associated to the network \cite{crs01}.
This matrix can be seen as a discrete version of the continuous
Laplacian; see, for example, the finite-difference approximation
used in numerical analysis \cite{ananum}.
The Laplacian matrix is symmetric and positive semi-definite, so its eigenvectors
can be chosen orthonormal. Using these eigenvectors, we introduce
a spectral solution of the Laplace equation.
This is a Fourier-like picture of the network, where the low-order
eigenvectors correspond to large-scale fluxes on the network.
Conversely, high-order eigenvectors correspond to small-scale
fluxes on the grid.
Our spectral picture naturally shows the
dependence of the line power fluxes on the topology and load-generator
distribution.
Similar ideas can be developed for distribution networks; however,
these usually have a simple tree-like geometry, so that the
network topology plays a less important role. Moreover, large
scale failures occur on transmission networks. We therefore concentrate
on these networks.
Using the solution of the Laplace equation, we can write
explicitly the vector of power fluxes $P_l$ using the
discrete gradient $\nabla$, i.e. the transpose of the incidence matrix
of graph theory \cite{crs01}. The vector $P_l$ can then be written
as a sum of terms proportional to $\nabla \mathbf{v}^i / \omega_i^2$, where
$\omega_i^2$ is an eigenvalue of the Laplacian with eigenvector
$\mathbf{v}^i$. We analyze how these two factors affect $P_l$ by examining
the evolution of $\omega_i^2$ and $\mathbf{v}^i$ with $i$.
We obtain an explicit Parseval relation for $\parallel P_l \parallel_2^2$
which can be used for minimization. This shows that to minimize
the norm of $P_l$ it is crucial to control its components on the
low order eigenvalues, in other words the large scales of the network. The
role of the eigenvector structure is more difficult to understand; we
therefore examine situations where the generator/load vector is
concentrated on a single $\mathbf{v}^i$. This reveals the importance of
localized eigenvectors that contribute strongly to
$\parallel \nabla \mathbf{v}^i \parallel$.
Finally, examining more realistic generator/load distributions
shows that truncating the sum for $P_l$ gives a reasonable estimation
so that the full modal decomposition is not necessary. \\
The article is organized as follows. Section 2 recalls the load-flow equations
and how they can be approximated by a Laplace equation.
We introduce the spectral solution of this equation in section 3.
Section 4 presents the spectrum of some IEEE networks and section 5
illustrates the spectral solution of the reduced load-flow. Conclusions
are presented in section 6.
\section{The load-flow equations }
To introduce these equations, we follow the very clear
derivation of \cite{panciatici}.
At each node, we write the conservation of power:
\begin{equation} \label{power}
{\cal P }= {\cal V } {\cal I }^* ,\end{equation}
where ${\cal P }$ is the vector of powers inserted into or extracted
from the network, each component corresponding to a node. The right
hand side is the power due to the network.
The generalization of Ohm's law gives
\begin{equation} \label{ohm}
{\cal I }= (G + j B) (V +j W) ,\end{equation}
where $G + j B$ is the so-called $Ybus$ admittance matrix \cite{kundur}. We then get
\begin{equation} \label{ohmc}
{\cal I }^* = (G V - B W ) + j( -B V - G W) . \end{equation}
Combining (\ref{power}) and (\ref{ohmc}), we obtain
\begin{equation} \label{powerf}
{\cal P }= V (G V - B W ) + W ( B V + G W) +j [ W(G V - B W ) + V( -B V - G W)] . \end{equation}
Introducing the vectors of active and reactive powers $P$ and $Q$, so that
\begin{equation} \label{act_react}
{\cal P }= P +j Q , \end{equation}
we obtain the final load-flow equations \cite{panciatici}:
\begin{eqnarray} \label{lflow}
V (G V - B W ) + W ( B V + G W) = P, \\
W (G V - B W ) + V( -B V - G W) =Q.
\end{eqnarray}
In index notation, the system reads, for all nodes $k$,
\begin{eqnarray} \label{lflowi}
V_k \sum_i (G_{k i} V_i - B_{k i} W_i ) + W_k \sum_i ( B_{k i} V_i
+ G_{k i} W_i) = P_k, \\
W_k \sum_i (G_{k i} V_i - B_{k i} W_i ) - V_k \sum_i (B_{k i} V_i
+ G_{k i} W_i) = Q_k .
\end{eqnarray}
The sums correspond to matrix-vector multiplications while
the prefactors $V_k, W_k$ correspond to componentwise products. The two
operations do not commute. The system (\ref{lflow}) is
quadratic in $V$ and $W$ and needs to be solved
iteratively, for example with a Newton-Raphson method.
An important fact is that
the matrices $B$ and $G$ are graph Laplacians \cite{crs01}.
Typical approximations can
be done for the transmission network and for the distribution
network. We examine these in the next section taking advantage
of the special property of $B$ and $G$.
\subsection{Simplified model of a transmission network}
For a transmission network, we make the three classical assumptions
(see for example Kundur's book \cite{kundur}):
\begin{itemize}
\item neglect the ohmic part of the $Ybus$ matrix so take $G=0$
\item assume that voltage modulus is constant and close to 1
\item assume that the phase is small
\end{itemize}
Taking $G=0$ leads to the new system
\begin{eqnarray} \label{lflowr}
-V ( B W ) + W ( B V) = P, \\
-W ( B W ) - V( B V) =Q.
\end{eqnarray}
The second and third assumptions imply
\begin{equation} \label{volt_pha}
{\cal V }= V+ jW \equiv v e^{j \theta}\approx 1 + j \theta ,\end{equation}
because the vector $v \approx 1$. Then the vectors $V,W$ are
$$V=1 , ~~W =\theta .$$
The first equation of (\ref{lflowr}) reduces to
\begin{equation} \label{bteta}
-B \theta = P.\end{equation}
This is a singular linear system to be solved for
the vector of phases $\theta$
knowing the vector of active powers $P$.
To identify critical links we compute the power line vector $P_l$
whose components are the powers in each line. It is calculated using
the discrete gradient $\nabla$ (see \cite{cks13} for an example)
\begin{equation} \label{pl}
\nabla \theta = P_l .\end{equation}
Note also the connection between $\nabla$ and the graph
Laplacian $B\equiv \Delta = \nabla^T \nabla$.
The two equations (\ref{bteta}-\ref{pl}) are the main
model that we will consider in the rest of the article. Since the
matrix $B$ is a graph Laplacian, it is singular and the linear
system (\ref{bteta}) needs to be solved with care. In the next section,
we will use the important symmetries of $B$ to solve (\ref{bteta}).
In the following we consider for simplicity that all lines
are the same. Then the discrete gradient
$\nabla$ has entries $\pm 1,0$. The Laplacian elements $\Delta_{ij}$
are such that $\Delta_{ij}=-1$ if node $i$ is connected to node $j$
and $\Delta_{ii} = -\sum_{j\neq i} \Delta_{ij}$, the number of
links (degree) of node $i$ \cite{crs01}.
This simplification is for clarity of exposition. The whole of our
spectral formalism presented below carries through
when the lines are unequal i.e. in the presence of weights.
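The reduced model (\ref{bteta})-(\ref{pl}) is simple enough to be checked numerically. The following minimal sketch, assuming a hypothetical 4-node ring network with one generator and three equal loads, builds the discrete gradient $\nabla$ and the Laplacian $B = \nabla^T \nabla$, solves $-B\theta = P$ in the least squares sense (which handles the singular zero mode) and computes $P_l = \nabla \theta$.

```python
import numpy as np

# Hypothetical 4-node ring network; edges given as (i, j) pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, m = 4, len(edges)

# Discrete gradient (transpose of the incidence matrix): one row per line.
grad = np.zeros((m, n))
for row, (i, j) in enumerate(edges):
    grad[row, i], grad[row, j] = 1.0, -1.0

# Graph Laplacian B = grad^T grad.
B = grad.T @ grad

# Balanced power injections: one generator (node 0), three equal loads.
P = np.array([3.0, -1.0, -1.0, -1.0])

# Solve -B theta = P; B is singular, so least squares picks the
# minimum-norm solution (mean-zero phases).
theta = np.linalg.lstsq(B, -P, rcond=None)[0]

# Power in the lines.
P_l = grad @ theta
```

On this symmetric ring, the injection splits evenly between the two paths around the cycle, so the line powers have magnitudes $1.5, 0.5, 0.5, 1.5$.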
To conclude this section, note that for distribution networks, a similar
simplification of the load-flow equations can be done \cite{kundur}.
For those networks, we can assume
$$ B=0,~~W \approx 0,~~V = 1 + \delta V . $$
This leads to the following equation, very similar to (\ref{bteta})
\begin{equation}\label{reduc_dist}
P = G \delta V .
\end{equation}
In the rest of the article, we will focus on transmission networks.
\section{Spectral solution of the reduced load-flow}
In this section, we use the notation of graph theory and
denote the graph Laplacian matrix $B$ by $\Delta$.
The matrix $\Delta$ is symmetric and positive semi-definite.
Its eigenvalues can be written
$$\omega_1^2=0 \le \omega_2^2 \le \dots \le \omega_n^2 ,$$
where $n$ is the number of nodes of the network. The eigenvectors
$$\mathbf{v}^1,\mathbf{v}^2, \dots \mathbf{v}^n ,$$
can be chosen orthonormal. In the rest of the article, we assume that
the network is connected so that $\omega_1^2=0 < \omega_2^2$
\cite{crs01}.
A standard way of solving equation (\ref{bteta}) is to
use the Moore-Penrose pseudo-inverse with a regularization \cite{numrec}
to eliminate the singularity due to the zero eigenvalue.
This does not give much information on the way the solution
depends on the graph and the power distribution. To gain
insight, it is useful to project $P$ on the eigenvectors
\begin{equation}\label{proj_p}
P= p_1 \mathbf{v}^1 + p_2 \mathbf{v}^2+ \dots + p_n \mathbf{v}^n ,
\end{equation}
and take advantage of their orthogonality.
Assuming that demand and supply are balanced in the electrical generation,
the power vector $P$ satisfies
$$\sum_{k=1}^n P_k =0, $$
where the $P_k$'s are the components of $P$ in the canonical basis.
Using the expansion \eqref{proj_p}, we get
$$\sum_{k=1}^n P_k = \sum_{i=1}^n p_i (\sum_{k=1}^n \mathbf{v}^i_k)
= p_1 \sqrt{n}=0,$$
because $\mathbf{v}^1 = (1/\sqrt{n})(1,\dots,1)^T$ while the other eigenvectors
satisfy $\sum_{k=1}^n \mathbf{v}^i_k=0$ for $i>1$.
Then we get $p_1=0$. One can then calculate $\theta$ as
\begin{equation}\label{teta}
\theta = -{ p_2 \over \omega_2^2} \mathbf{v}^2
-{ p_3 \over \omega_3^2} \mathbf{v}^3 \dots -{ p_n \over \omega_n^2} \mathbf{v}^n .\end{equation}
The power in the lines $P_l$ is then
\begin{equation}\label{plt}
P_l = \nabla \theta = -{ p_2 \over \omega_2^2} \nabla \mathbf{v}^2
-{ p_3 \over \omega_3^2} \nabla \mathbf{v}^3 \dots
-{ p_n \over \omega_n^2} \nabla \mathbf{v}^n .\end{equation}
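The spectral formulas above can be checked against a direct least squares solution of (\ref{bteta}); a minimal sketch, assuming a hypothetical 4-node chain and an arbitrary balanced injection vector:

```python
import numpy as np

# Assumed 4-node chain: discrete gradient, one row per line.
grad = np.array([[1., -1., 0., 0.],
                 [0., 1., -1., 0.],
                 [0., 0., 1., -1.]])
B = grad.T @ grad                      # graph Laplacian

w2, V = np.linalg.eigh(B)              # eigenvalues omega_i^2, orthonormal v^i
P = np.array([2.0, -1.0, 0.0, -1.0])   # balanced injections, sum(P) = 0

p = V.T @ P                            # projections p_i = P . v^i
theta = np.zeros(4)
for i in range(1, 4):                  # skip the zero mode (p_1 = 0)
    theta -= p[i] / w2[i] * V[:, i]

P_l = grad @ theta                     # power in the lines
```

The spectral sum reproduces the minimum-norm solution of $-B\theta = P$, since both have no component on the constant eigenvector.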
Let us now be specific about the distribution of generators and
loads in the network. We introduce the generator and load vectors $G$ and $L$
(the conductance matrix $G$ has been set to zero and does not appear from
here on) and their components
\begin{equation}\label{gl}
G = \sum_{i=1}^n g_i \mathbf{v}^i,~~L = \sum_{i=1}^n l_i \mathbf{v}^i, ~~P \equiv G -L . \end{equation}
The Euclidean norm of $P_l$ has a particularly simple form. To see this
we write
$$ \parallel P_l \parallel ^2_2
= \sum_{i,j=2}^n {(g_i-l_i)(g_j-l_j) \over \omega_i^2 \omega_j^2}
(\nabla \mathbf{v}^i)^T \nabla \mathbf{v}^j.$$
Note that
$$ (\nabla \mathbf{v}^i)^T \nabla \mathbf{v}^j = (\mathbf{v}^i)^T (\nabla^T \nabla) \mathbf{v}^j
= (\mathbf{v}^i)^T \Delta \mathbf{v}^j = \omega_i^2 \delta_{ij} ,$$
where $\delta_{ij}$ is the Kronecker symbol.
We finally get the Parseval-like relation
\begin{equation}\label{pl2}
\parallel P_l \parallel ^2_2
= \sum_{i=2}^n {(g_i-l_i)^2 \over \omega_i^2 } .
\end{equation}
This simple expression shows that the $L_2$ norm of the power depends
only on the eigenvalues and the projections of the input-output powers
on the eigenvectors. In the following we will use this expression
to guide the changes to the generator or load distributions.
Expression (\ref{pl2}) also holds for the weighted Laplacian, so that
(\ref{pl2}) can be used for real electrical networks.
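The Parseval-like relation is easy to verify numerically; a quick sanity check, assuming a hypothetical 4-node network and a random balanced injection:

```python
import numpy as np

# Assumed small test graph.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
grad = np.zeros((len(edges), n))
for row, (i, j) in enumerate(edges):
    grad[row, i], grad[row, j] = 1.0, -1.0
B = grad.T @ grad

rng = np.random.default_rng(0)
P = rng.normal(size=n)
P -= P.mean()                          # balance generation and load

w2, V = np.linalg.eigh(B)
p = V.T @ P                            # projections g_i - l_i of P
theta = np.linalg.lstsq(B, -P, rcond=None)[0]
P_l = grad @ theta

lhs = np.sum(P_l ** 2)                 # || P_l ||_2^2
rhs = np.sum(p[1:] ** 2 / w2[1:])      # sum_{i >= 2} p_i^2 / omega_i^2
```

The two quantities agree to machine precision, as guaranteed by the orthogonality relation $(\nabla \mathbf{v}^i)^T \nabla \mathbf{v}^j = \omega_i^2 \delta_{ij}$.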
\subsection{Theoretical background : nodal domains}
The eigenvectors $\mathbf{v}^i$ give rise to the so-called nodal
domains. We recall the following definitions and theorem
following the presentation of \cite{booklapla}.
\begin{definition}[{\rm Nodal domain }]
\label{ndomain}
A positive (negative) nodal domain of a function $f$ defined on
the vertices of a graph $G(V,E)$ is a maximal connected
induced subgraph of $G$ on vertices $v \in V$ with $f(v) \ge 0$
($f(v) \le 0$).
\end{definition}
For a strong nodal domain, the inequality $\ge$ (resp. $\le$) should be
replaced by $>$ (resp. $<$).
In the electrical grid context, positive nodal domains correspond to
generators while negative nodal domains correspond to loads.
We denote by ${\cal S}(f)$ and ${\cal W}(f)$, respectively, the numbers of
strong and weak nodal domains of an eigenfunction $f$.
We have the following result \cite{gladwell}.
\begin{theorem}[{\rm Discrete nodal domain theorem}]
\label{bound_ndomain} Let $\Delta$ be a generalized Laplacian
of a connected graph with $n$ vertices. Then, any eigenfunction $f_k$
corresponding to the $k$th eigenvalue $\lambda_k$ with multiplicity
$r$ has at most $k$ weak nodal domains and $k+r-1$ strong nodal domains.\\
${\cal W}(f_k) \le k,~~~~~{\cal S}(f_k) \le k + r - 1$.
\end{theorem}
Thus, the nodal domains are small (resp. large) scale for
large (resp. small) $i$. In particular, the eigenvector corresponding to
the first non zero eigenvalue partitions the graph into two subgraphs; see
the following result from Fiedler \cite{fiedler}.
\begin{theorem}
An eigenfunction of the second eigenvalue has exactly two weak nodal domains.
\end{theorem}
The power in the lines $P_l$ involves
the vectors $\nabla \mathbf{v}^i$. These, in turn, depend on the nodal
domains. We will see in the next section how eigenvectors $\mathbf{v}^i$ with
small nodal domains have a large $||\nabla \mathbf{v}^i||$ and therefore
contribute strongly to $||P_l||$.
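Weak nodal domains are straightforward to count numerically: keep the vertices where the eigenfunction is non-negative (resp. non-positive) and count the connected components of the induced subgraphs. A sketch, assuming a hypothetical 6-node chain, whose Fiedler vector must give exactly two weak nodal domains:

```python
import numpy as np

# Assumed 6-node chain.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
n = 6
B = np.zeros((n, n))
for i, j in edges:
    B[i, j] = B[j, i] = -1.0
    B[i, i] += 1.0
    B[j, j] += 1.0

w2, V = np.linalg.eigh(B)

def nodal_domains(f, sign):
    """Number of connected components among nodes with sign * f(v) >= 0."""
    keep = {v for v in range(n) if sign * f[v] >= 0}
    seen, count = set(), 0
    for start in keep:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            # push kept neighbours of v
            stack += [u for (a, b) in edges for u in (a, b)
                      if v in (a, b) and u != v and u in keep]
    return count

fiedler = V[:, 1]                       # eigenvector of the second eigenvalue
domains = nodal_domains(fiedler, +1) + nodal_domains(fiedler, -1)
```

For the chain, the Fiedler vector is positive on one half and negative on the other, so the count is two, in agreement with Fiedler's theorem.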
\subsection{Decay of inverse of eigenvalues}
We have the following inequality \cite{mohar91} for $\omega_2^2$
\begin{equation}\label{mohar}
{4 \over n D } \le \omega_2^2 \le {n \over n-1} , \end{equation}
where $D$ is the diameter of the graph, i.e. the maximum distance
between two vertices.
We denote by $deg(u)$ the degree of vertex $u$, i.e. the number
of edges incident to $u$.
The maximal eigenvalue is such that \cite{mohar91}
\begin{equation}\label{mohar2} \omega_n^2 \le \max \{ deg(u)+deg(v) :~ uv
~\mathrm{edge~of~} G \} . \end{equation}
Typically electrical networks have an average degree $2 \le {\bar d} \le 3 $.
Assuming that the maximal degree is bounded, $\omega_n^2$
remains bounded from above as $n$ increases. Take for example
a square grid: the inequality reads $\omega_n^2 \le 8$, and
$\omega_n^2 \to 8$ as the grid grows, so the inequality is sharp.
On the other hand, the lower bound ${4 \over n D }$ of $\omega_2^2$
decreases as $n$ increases. We then expect the spectrum of the Laplacian
to extend more towards $0$ as the network gets larger.
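Both bounds can be verified numerically; a sketch on an assumed path graph, for which the diameter is $n-1$ and the maximal degree is 2:

```python
import numpy as np

# Assumed path graph P_n.
n = 20
B = np.zeros((n, n))
for i in range(n - 1):
    B[i, i] += 1.0
    B[i + 1, i + 1] += 1.0
    B[i, i + 1] = B[i + 1, i] = -1.0

w2 = np.linalg.eigvalsh(B)               # sorted eigenvalues
D = n - 1                                # diameter of the path
deg = np.diag(B)                         # vertex degrees

lower = 4.0 / (n * D)                    # lower bound on omega_2^2
upper_max = max(deg[i] + deg[i + 1]      # bound on omega_n^2 over edges
                for i in range(n - 1))
```

For the path, `lower` is far below $\omega_2^2 = 4\sin^2(\pi/2n)$, while $\omega_n^2$ approaches the edge-degree bound of 4 from below.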
\subsection{Practical consequences for electrical networks}
The spectral approach that we present gives a geometric
picture of the network and the power vector. It gives
a quick approximation of the solution of the nonlinear load-flow
equations.
Relation (\ref{pl2}) gives explicitly the L$_2$ norm of the
power in the lines. This remarkable result provides a way to optimize
the electrical network.
The relation (\ref{pl2}) implies that taking $g_i=l_i$ makes the power in
all the lines zero. This corresponds to not having any network.
Each node has a generator exactly balancing its load. This is of
course not reasonable. Instead, (\ref{pl2}) indicates that
the dominating terms are the low order terms $i=2,3,4,\dots$.
Then, to minimize the expression we can choose the corresponding amplitudes
$g_i-l_i$ to be small. This naive analysis will be checked carefully
and confirmed below.
If the infinite norm is required, then
we just use formula \eqref{plt}.
The following bounds for the L$_\infty$ norm can be used
\begin{equation}\label{linf2}
{1\over \sqrt{m}} { \parallel P_l \parallel _2 } \le \parallel P_l \parallel _\infty \le \parallel P_l \parallel _2 ,
\end{equation}
where $m$ is the number of lines.
The infinite norm will provide the line carrying the most power, i.e.
the most critical line.
\section{Spectral features of some IEEE networks}
In this section, to estimate the relative influence of $\omega_i^2$
and $\nabla \mathbf{v}^i$, we input the power on a single eigenvector,
$$ P= p_i \mathbf{v}^i, ~~~ 2 \le i\le n , $$
and $p_1=g_1-l_1=0$.
To be able to compare different $i$,
we choose $p_i$ so that the sum of the
positive components is equal to 1, this corresponds to having
an equal generator (or load) power in the network independently of $i$.
It is equivalent to setting $\parallel P \parallel_1=2$.
We examine two IEEE networks, with 30 and 118
nodes respectively and use the parameters given in the files of
the Matpower software \cite{matpower}.
The loads are chosen uniform on the network, i.e. $l_1=g_1$
and $l_i=0,~~i\ge 2$.
\subsection{IEEE Case 30}
The case30 network from IEEE \cite{case30} is shown in
the left panel of Fig. \ref{fig_case30}.
The graph is presented in the right panel of
Fig. \ref{fig_case30}; it has $n=30$ vertices,
$m=41$ edges and an average degree $\bar d = 2m/n \approx 2.7$ .
\begin{figure}[H]
\centerline{ \epsfig{file=figs/case30.ps,height=14cm,width=5cm,angle=90}}
\caption{Left : Electrical representation of the IEEE network case 30.
Right : schematic of the IEEE network case 30, from \cite{case30} using
the Graphviz software \cite{graphviz}.}
\label{fig_case30}
\end{figure}
For each index $i$, we compute the
inverse of the eigenvalue $1/ \omega_i^2$; it decays as a
function of $i$ as shown in the left panel of Fig. \ref{30odnabla}.
The norm of $ \parallel \nabla \mathbf{v}^i \parallel _{\infty}$
increases with $i$ and has some maxima. It is shown in the
right panel of Fig. \ref{30odnabla}.
Note the peak for $i=19$ which corresponds
to the eigenvector $\mathbf{v}^{19}$ such that
$\mathbf{v}^{19}_{29}=+1/\sqrt{2},~~\mathbf{v}^{19}_{30}=-1/\sqrt{2}$ and $\mathbf{v}^{19}_k=0$
for $k$ different from $29, 30$. The strong nodal domains
are very small, $\{29\} \cup \{30\}$.
This very special eigenvector was analyzed in
our previous work \cite{cks13}; we termed it a closed swivel because only
two of its components are non zero. Acting on almost any node of the network
has no effect on that particular eigenmode. The eigenvalue is $\omega^2_{19}=3$.
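The swivel structure can be reproduced on any graph containing two nodes joined to each other and to a single common hub: the normalized difference of the two corresponding canonical basis vectors is then an exact Laplacian eigenvector. A sketch on an assumed 5-node toy graph, where the eigenvalue is 3, matching the value $\omega^2_{19}=3$ found above:

```python
import numpy as np

# Assumed toy graph: a triangle 0-1-2 plus a swivel (nodes 3, 4):
# 3 and 4 are joined to each other and to the hub 0.
edges = [(0, 1), (0, 2), (1, 2), (0, 3), (0, 4), (3, 4)]
n = 5
B = np.zeros((n, n))
for i, j in edges:
    B[i, j] = B[j, i] = -1.0
    B[i, i] += 1.0
    B[j, j] += 1.0

# Swivel eigenvector: +1/sqrt(2) on node 3, -1/sqrt(2) on node 4.
v = np.zeros(n)
v[3], v[4] = 1 / np.sqrt(2), -1 / np.sqrt(2)

# B v = 3 v : the hub contribution cancels, and deg(3) + 1 = 3.
residual = B @ v - 3.0 * v
```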
\vskip -.3cm
\begin{figure}[H]
\centerline{ \epsfig{file=figs/30od.ps,height=14cm,width=5cm,angle=90}}
\caption{
Plot as a function of $i$
of the inverse of the eigenvalue $1/ \omega_i^2$
(left panel) and of $ \parallel \nabla \mathbf{v}^i \parallel _{\infty}$ (right panel) .}
\label{30odnabla}
\end{figure}
The associated line power infinite norm
$\parallel P_l \parallel _{\infty}$, which combines the two quantities
($\parallel P_l \parallel _{\infty} = |p_i| \parallel \nabla \mathbf{v}^i \parallel _{\infty} / \omega_i^2$
when $P=p_i \mathbf{v}^i$),
is shown in Fig. \ref{30npl}.
\begin{figure}[H]
\centerline{\epsfig{file=figs/30npl.eps,height=5cm,width=8cm,angle=0}}
\caption{
Plot of the line power infinite norm $ \parallel P_l \parallel _{\infty}$ when
$P=\mathbf{v}^i$ as a function of $i$.
}
\label{30npl}
\end{figure}
This quantity is maximum for $i=19$, corresponding
exactly to the swivel eigenvector discussed above.
This eigenvector corresponds to the power being focused in
the line between the two nodes of the swivel, giving the
maximum $ \parallel P_l \parallel _{\infty}$.
The other eigenvectors that give peaks in
$ \parallel P_l \parallel _{\infty}$ are
$\mathbf{v}^5$ and $\mathbf{v}^{10}$. Their nodal domains are more complex
than the one of $\mathbf{v}^{19}$ and are shown in Figs. \ref{v5_30} and \ref{v10_30}.
They both show strong gradients between nodal domains which
explain the peaks in $\parallel P_l \parallel$. From Fig. \ref{30odnabla}
we expect that the vector $\mathbf{v}^{15}$ will contribute to $ \parallel P_l \parallel _{\infty}$; however $\omega_{15}^2$ is large so that finally the
contribution of $\mathbf{v}^{15}$ to $ \parallel P_l \parallel _{\infty}$ is small.
\begin{figure}[H]
\centerline{
\epsfig{file=figs/e5_30.eps,height=6cm,width=12cm,angle=0}
}
\caption{
Nodal domains of the eigenvector $\mathbf{v}^5$.
The color scheme for the components $\mathbf{v}^5_i$ is brown if
$\mathbf{v}^5_i < -0.3$, red if $-0.3 < \mathbf{v}^5_i < -0.1$,
pink if $-0.1 < \mathbf{v}^5_i < 0$,
cyan if $0 < \mathbf{v}^5_i < 0.1$, royalblue if $0.1 < \mathbf{v}^5_i < 0.3$
and indigo if $0.3 < \mathbf{v}^5_i $ .
}
\label{v5_30}
\end{figure}
The positive nodal domains are
$A=\{10,21,22,23,24,25,26\}$
$B=\{1,2,3,4,5,7\}$.
The negative nodal domains are
$C=\{6,8,9,11,28,27,29,30\}$
and \\
$D=\{13,12,14,15,16,17,18,19,20\}$.
Note the strong gradients at the interface between the
positive and negative nodal domains. In particular
between the nodes 27 and 25 because
$-0.3 < \mathbf{v}^5_{27} < -0.1$ and
$0.1 < \mathbf{v}^5_{25} < 0.3$.
This gradient is
responsible for the peak observed for $i=5$ in Fig. \ref{30npl}.
Notice also the strong gradient between nodes 15 and 23.
\begin{figure}[H]
\centerline{
\epsfig{file=figs/e10_30.eps,height=6cm,width=12cm,angle=0}
}
\caption{
Nodal domains of the eigenvector $\mathbf{v}^{10}$, the color scheme
is the same as for Fig. \ref{v5_30} .
}
\label{v10_30}
\end{figure}
The negative nodal domains are
$A=\{1,3,4,13,6,8,9,10,19,20,21,22,24,25,27,28\}$.
The positive nodal domains are
$B=\{2,5,7\}$, $C=\{11\}$, $D=\{12,14,15,16,17,18,23\}$,
$E=\{26\}$ and $F=\{29,30\}$.
Notice the strong gradients between nodes 10 and 17, and between nodes 10
and 21. This explains the large amplitude of
$ \parallel \nabla \mathbf{v}^{10} \parallel _{\infty}$.
\begin{figure}[H]
\centerline{
\epsfig{file=figs/nodal.eps,height=3cm,width=12cm,angle=0}
}
\caption{
Schematic nodal domains of the eigenvectors $\mathbf{v}^5$ (left) and
$\mathbf{v}^{10}$ (right).
}
\label{nodal}
\end{figure}
The comparison between $\mathbf{v}^5$ and $\mathbf{v}^{10}$ is instructive. Fig. \ref{nodal}
shows the nodal domains for $\mathbf{v}^5$ (left) and $\mathbf{v}^{10}$ (right). There are
four nodal domains for the former forming a cycle and six for the latter
forming a star. There is no general theory predicting the shape and size
of these domains, only an upper bound on their number depending on the
order of the eigenvalue.
\subsection{IEEE Case 118}
The next example is the larger case118 with $n=118$ nodes, $m=186$
edges and an average degree $\bar d = 2m/n \approx 3.15$.
Note that for this network, five lines have been doubled so that the
Laplacian now has weights.
The evolutions of $1 / \omega_i^2$ and $\parallel \nabla \mathbf{v}^i\parallel_{\infty}$
are shown in
Fig. \ref{118odnabla}. They are very similar
to those of case30. In particular, the inverses of the
eigenvalues decay exponentially, as shown by the lin-log scale of the
left panel of Fig. \ref{118odnabla}.
\begin{figure}[H]
\centerline{ \epsfig{file=figs/118od.ps,height=14cm,width=5cm,angle=90}}
\caption{
Plot as a function of $i$
of the inverse of the eigenvalue $1/ \omega_i^2$
(left panel) and of $ \parallel \nabla \mathbf{v}^i \parallel _{\infty}$ (right panel) .}
\label{118odnabla}
\end{figure}
Notice in the right panel of Fig. \ref{118odnabla} the strong
contributions to $ \parallel \nabla \mathbf{v}^i \parallel _{\infty}$
of the eigenvectors $\mathbf{v}^{26}, \mathbf{v}^{50}, \mathbf{v}^{59}, \mathbf{v}^{60}, \mathbf{v}^{74}, \mathbf{v}^{76}$ and
$\mathbf{v}^{83}$.
An extreme case is the swivel eigenvector \cite{cks13}
$\mathbf{v}^{26}$ such that $\mathbf{v}^{26}_{111}=+1/\sqrt{2},~~\mathbf{v}^{26}_{112}=-1/\sqrt{2}, ~~\mathbf{v}^{26}_i=0$ for $i$
different from $111,~112$. The eigenvalue is $\omega^2_{26}=1$.
The eigenvector $\mathbf{v}^{50}$ is also a swivel.
The other eigenvectors are localized in specific regions of the network.
By this we mean that the eigenvector has a small number of components of
absolute value much larger than the rest. For example
$\mathbf{v}^{59}$ is localized on nodes 84 to 88, $\mathbf{v}^{60}$ and $\mathbf{v}^{83}$ on nodes 100 to 118,
$\mathbf{v}^{74}$ around node 90 and $\mathbf{v}^{76}$ around node 110.
This localization comes as a surprise because
the general theory of nodal domains does not predict it.
The associated line power infinite norm $ \parallel P_l \parallel _{\infty}$
is shown in Fig. \ref{118npl}. Not all the peaks present in the
right panel of Fig. \ref{118odnabla} are present here. This is because
of the increase of the eigenvalues $\omega^2_i$ with $i$. For example
the large peaks $\mathbf{v}^{74},\mathbf{v}^{76}$ are now much smaller in Fig. \ref{118npl}.
The swivel eigenvector $\mathbf{v}^{26}$ gives the largest contribution.
\begin{figure}[H]
\centerline{\epsfig{file=figs/118npl.eps,height=5cm,width=8cm,angle=0}}
\caption{
Plot of the line power infinite norm $ \parallel P_l \parallel _{\infty}$ when
$P=\mathbf{v}^i$ as a function of $i$.
}
\label{118npl}
\end{figure}
To conclude this section, we have seen that
$\nabla \mathbf{v}^i$ is related to the nodal domains. The general
trend, given by a linear interpolation of
$\parallel \nabla \mathbf{v}^i \parallel$, is a slow increase with $i$.
However, there are
peaks that correspond to highly localized eigenvectors.
These highly localized eigenvectors $\mathbf{v}^i$ give a large contribution
to $\nabla \mathbf{v}^i$. Some are due to geometrical configurations of the
network, like swivels. It is not clear where the others arise from.
In the next section, we consider general $P$ distributions. We will
see that localized eigenvectors play an important role in $P_l$ for small $i$.
When $i$ is large, their influence is mitigated by the denominator
$\omega_i^2$.
\section{Spectral solutions of the reduced load-flow}
In this section, we combine the graph information with
the generator / load vector and calculate the
power in the lines $P_l$.
\subsection{A small size network : effect of soft nodes}
Before addressing networks with a relatively large number of nodes
it is useful to consider a very simple example where calculations
can be conducted by hand. This shows the usefulness of the approach.
We consider the simple 6 node network shown in Fig. \ref{61}.
\begin{figure}[H]
\centerline{
\epsfig{file=figs/61.eps,height=3cm,width=8cm,angle=0}
}
\caption{
A 6-node electrical network.}
\label{61}
\end{figure}
The graph Laplacian here is
\begin{equation}\label{lap61} \Delta = \begin{pmatrix}
5 & -1 & -1 & -1 & -1 & -1 \cr
-1 & 2 & -1 & 0 & 0 & 0 \cr
-1 & -1 & 2 & 0 & 0 & 0 \cr
-1 & 0 & 0 & 3 & -1 & -1 \cr
-1 & 0 & 0 & -1 & 2 & 0 \cr
-1 & 0 & 0 & -1 & 0 & 2
\end{pmatrix}, \end{equation}
whose eigenvalues $\omega_i^2,~i=1,\dots,6$ are
\begin{equation}\label{eigval61}
0 , ~~~1,~~~2,~~~3, ~~~4,~~~6 \end{equation}
corresponding to the eigenvectors
\begin{eqnarray}\label{vlap61}
\mathbf{v}^1 = {1 \over {\sqrt{6}}} (1,1,1,1,1,1)^T,
~~~~\mathbf{v}^2={1 \over \sqrt{30}} (0,3,3,-2,-2,-2)^T,\nonumber \\
\mathbf{v}^3={1 \over \sqrt{2}} (0,0,0,0,1,-1)^T,
~~~~\mathbf{v}^4 = {1 \over {\sqrt{2}}} (0,1,-1,0,0,0)^T, \nonumber \\
\mathbf{v}^5={1 \over \sqrt{6}} (0,0,0,2,-1,-1)^T,
~~~~\mathbf{v}^6={1 \over \sqrt{30}} (-5,1,1,1,1,1)^T. \nonumber
\end{eqnarray}
The associated gradients are
\begin{table} [H]
\centering
\begin{tabular}{|l|r|}
\hline
$\nabla \mathbf{v}^2$ & $(-0.55,0,0.55,0.36,0,-0.36,0.36,0)^T$ \\
$\nabla \mathbf{v}^3$ & $(0,0,0,0,-0.71,0.71,0.71,0.71)^T$ \\
$\nabla \mathbf{v}^4$ & $(0.71,-1.41,0.71,0,0,0,0,0)^T$ \\
$\nabla \mathbf{v}^5$ & $(0,0,0,0.82,-1.22,0.41,-0.41,-1.22)^T$ \\
$\nabla \mathbf{v}^6$ & $(-1.09,0,1.09,-1.09,0,1.09,-1.09,0)^T$ \\ \hline
\end{tabular}
\label{tab2a}
\end{table}
The power in the lines is then, from (\ref{plt}),
\begin{equation}\label{pl61}
P_l = - p_2 { \nabla \mathbf{v}^2 \over 1}
- p_3 { \nabla \mathbf{v}^3 \over 2}
- p_4 { \nabla \mathbf{v}^4 \over 3}
- p_5 { \nabla \mathbf{v}^5 \over 4}
- p_6 { \nabla \mathbf{v}^6 \over 6} ,\end{equation}
where $p_i$ is the projection of $P$ on the eigenvector $\mathbf{v}^i$, see
(\ref{proj_p}).
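The hand calculation above is easy to verify numerically; a minimal sketch checking the spectrum of the 6-node Laplacian:

```python
import numpy as np

# The 6-node Laplacian written out above.
B = np.array([[ 5., -1., -1., -1., -1., -1.],
              [-1.,  2., -1.,  0.,  0.,  0.],
              [-1., -1.,  2.,  0.,  0.,  0.],
              [-1.,  0.,  0.,  3., -1., -1.],
              [-1.,  0.,  0., -1.,  2.,  0.],
              [-1.,  0.,  0., -1.,  0.,  2.]])

w2 = np.linalg.eigvalsh(B)   # sorted eigenvalues: 0, 1, 2, 3, 4, 6
```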
Expression (\ref{pl61}) suggests that a large $p_2$ will contribute
significantly more to $P_l$ than a large $p_5$ or $p_6$.
When the eigenvector $\mathbf{v}^i$ has a zero component at node $k$,
$\mathbf{v}^i_k=0$ (a soft node in the language of \cite{cks13}), the
coefficient $p_i$ does not depend on what is at node $k$. This is because
$p_i= P \cdot \mathbf{v}^i$. In particular, a generator at node $k$
will not contribute to $p_i$. This reduces the number of directions
available for minimizing $\parallel P_l \parallel$.
To see these effects in more detail, we first assume that the loads
are equally distributed over the network
and study how placing a single generator on the network affects $P_l$.
To examine the contribution of the different modes $\mathbf{v}^i$
to $P_l$, we introduce the partial sums
\begin{equation}\label{part_pli}
s_k^\infty = \parallel \sum_{i=2}^k (g_i-l_i) {\nabla \mathbf{v}^i \over \omega_i^2} \parallel _\infty,\end{equation}
\begin{equation}\label{part_pl2}
s_k^2 = \sum_{i=2}^k {(g_i-l_i)^2 \over \omega_i^2}.\end{equation}
Note that $s_n^2= \parallel P_l \parallel ^2_2$ and $s_n^\infty= \parallel P_l \parallel _\infty$.
\begin{table} [H]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
position & & & & \\
of generator & 1 & 2 & 4 & 6 \\ \hline
$p_2$ & 0 &-3.29 & 2.19 & 2.19 \\ \hline
$p_3$ & 0 & 0 & 0 &-4.24 \\ \hline
$p_4$ & 0 & 4.24 & 0 & 0 \\ \hline
$p_5$ & 0 & 0 & 5 & -2.44 \\ \hline
$p_6$ & 5.48 &-1.09 & -1.09& -1.09 \\ \hline
$ \parallel P_l \parallel _\infty$ & 1 & 3 & 2 & 2.75 \\ \hline
$ \parallel P_l \parallel _2$ & 2.24 & 4.12 & 3.32 & 3.94 \\ \hline
\end{tabular}
\caption{Power coefficients $p_i$, $ \parallel P_l \parallel _\infty$,
$ \parallel P_l \parallel _2$
for different generator positions. The loads are uniformly distributed.
}
\label{tab3}
\end{table}
Table \ref{tab3} shows the coefficients $p_i$ for a generator
of strength 6 placed at nodes 1, 2, 4 or 6. A generator at node 1
is such that only $p_6$ is non zero. Then we expect that
$ \parallel P_l \parallel $ will be minimal and this is indeed the case.
On the other hand, a generator placed at node 2 gives
a large $p_2$ so that $ \parallel P_l \parallel $ will be larger.
As expected, we see in table \ref{tab3}
a correlation between large values of $p_2$ and $p_3$
and large values of $ \parallel P_l \parallel $.
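The entries of Table \ref{tab3} can be reproduced in a few lines; a sketch for the generator at node 2 (the signs of the $p_i$ depend on the sign conventions chosen for the eigenvectors, so we compare absolute values):

```python
import numpy as np

# The 6-node Laplacian of the example network.
B = np.array([[ 5., -1., -1., -1., -1., -1.],
              [-1.,  2., -1.,  0.,  0.,  0.],
              [-1., -1.,  2.,  0.,  0.,  0.],
              [-1.,  0.,  0.,  3., -1., -1.],
              [-1.,  0.,  0., -1.,  2.,  0.],
              [-1.,  0.,  0., -1.,  0.,  2.]])
w2, V = np.linalg.eigh(B)          # eigenvalues 0, 1, 2, 3, 4, 6

# Generator of strength 6 at node 2 (index 1), uniform unit loads.
P = -np.ones(6)
P[1] += 6.0

p = V.T @ P                        # projections p_i (up to sign)

# || P_l ||_2 via the Parseval relation, skipping the zero mode.
norm2 = np.sqrt(np.sum(p[1:] ** 2 / w2[1:]))
```

This recovers $|p_2| = 18/\sqrt{30} \approx 3.29$, $|p_4| = 6/\sqrt{2} \approx 4.24$ and $\parallel P_l \parallel_2 = \sqrt{17} \approx 4.12$, in agreement with the table.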
In a second set of experiments, we place two generators on the
grid and examine how $P_l$ depends on their position. For this,
we choose the following vector of loads
$$L = (1,2,1,3,0,1)^T . $$
First we assume that the generators are placed at nodes 1 and 2, so that
$G = ( G_1, G_2,0,0,0,0)^T$, where $G_1+G_2= \sum_i L_i$. Then the
power vector is $P = ( g-1, 6-g, -1, -3, 0, -1)^T$ where we replaced
$G_1$ by $g$ to simplify the notation. Projecting
$P$ onto the eigenvectors, we note that, because of the zero components
of $\mathbf{v}^3$ and $\mathbf{v}^5$ at nodes 1 and 2,
there are no $g$ dependent
components on the eigenvectors $\mathbf{v}^3$ and $\mathbf{v}^5$; the minimum
over $g$ is $ \parallel P_l \parallel _2 = 2.7$. When the generators are placed at nodes
1 and 5, the $g$ terms affect the components on $\mathbf{v}^2, \mathbf{v}^3, \mathbf{v}^5$
and $\mathbf{v}^6$. We then expect a higher minimum for $ \parallel P_l \parallel _2$
and this is the case, $ \parallel P_l \parallel _2 = 3.2$. Fig. \ref{pl61a}
shows $ \parallel P_l \parallel _2$ (blue online) and $ \parallel P_l \parallel _\infty$ (red online)
as a function of $g$ for the two configurations.
We see that $ \parallel P_l \parallel _2$ for the 1-2 configuration (left) stays
below that of the 1-5 configuration (right) over most of the range of $g$. On
the other hand, the minimum of $ \parallel P_l \parallel _\infty$ is the same for
both configurations.
\begin{figure}[H]
\centerline{ \epsfig{file=figs/pl61.ps,height=14cm,width=5cm,angle=90}}
\caption{ Plot of $ \parallel P_l \parallel _2$ (blue online) and $ \parallel P_l \parallel _\infty$
(red online) as a function of the strength $g$ of the generator at
node 1, when the second generator is placed at node 2
(left panel) or at node 5 (right panel).}
\label{pl61a}
\end{figure}
The flatness of $ \parallel P_l \parallel _\infty$ for the 1-2 distribution
(left of Fig. \ref{pl61a}) is due to the zero first and
second components of several eigenvectors $\mathbf{v}^i$. On the other hand the 1-5
distribution has fewer zeros so that $ \parallel P_l \parallel _\infty$ depends
more on $g$. Fig. \ref{pl61a} also shows that for both configurations
$1,2$ and $1,5$, we can simultaneously minimize the two norms.
We now place the first generator, of amplitude $g$, at node 2
and the second one at nodes 4, 5 and 6 respectively. Fig. \ref{pl61b}
shows $ \parallel P_l \parallel _2$ (blue online) and $ \parallel P_l \parallel _\infty$
(red online) as a function of $g$.
\begin{figure}[H]
\centerline{ \epsfig{file=figs/pl61b.ps,height=14cm,width=5cm,angle=90}}
\caption{ Plot of $ \parallel P_l \parallel _2$ (blue online) and $ \parallel P_l \parallel _\infty$
(red online) as a function of the strength $g$ of the generator at
node 2, when the second generator is placed at nodes 4,5 and 6.}
\label{pl61b}
\end{figure}
We see that the 2-4 configuration gives a lower minimum than the 2-5 and
2-6 configurations.
This is clear because in this configuration, the $\mathbf{v}^3$ component
of $P$ does not depend on $g$. Here, only the $2,4$ configuration
(left of Fig. \ref{pl61b})
leads to the same minimum for $ \parallel P_l \parallel _2$
and $ \parallel P_l \parallel _\infty$.
\subsection{Convergence of $s_{k}^2$ with $k$: the example of a chain}
For the placement of two generators on a network, it is interesting
to write $\parallel P_l \parallel _2^2$. Assuming the generators are
positioned at nodes $p$ and $m$ , with amplitudes $G_p$ and $G_m$, we have
$$\parallel P_l \parallel _2^2 = \sum_{i=2}^n {(g_i -l_i)^2 \over \omega_i^2} =
\sum_{i=2}^n {(G_p v_p^i + G_m \mathbf{v}^i_m -l_i)^2 \over \omega_i^2}.$$
Expanding the squares and rearranging, we get the final expression
\begin{equation}\label{pl2km}
\begin{split}
\parallel P_l \parallel _2^2 = {} & G_p^2 \sum_i {(v_p^i)^2 \over \omega_i^2}
+ G_m^2 \sum_i {(v_m^i)^2 \over \omega_i^2}
+ \sum_i {l_i^2 \over \omega_i^2} \\
& + 2 G_p G_m \sum_i {v_p^i v_m^i \over \omega_i^2}
- 2 G_p \sum_i {v_p^i l_i \over \omega_i^2}
- 2 G_m \sum_i {v_m^i l_i \over \omega_i^2} .
\end{split}
\end{equation}
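As a quick sanity check, the expansion (\ref{pl2km}) can be verified numerically. The sketch below uses arbitrary placeholder components, not values from any real network:

```python
import numpy as np

# quick numerical check of eq. (pl2km) with arbitrary placeholder
# components (not taken from any real network)
rng = np.random.default_rng(0)
m = 12
omega2 = rng.uniform(0.5, 4.0, m)           # stand-ins for omega_i^2, i = 2..n
vp, vm, l = rng.standard_normal((3, m))     # stand-ins for v_p^i, v_m^i, l_i
Gp, Gm = 1.7, -0.4

# left-hand side: direct modal sum
direct = np.sum((Gp * vp + Gm * vm - l) ** 2 / omega2)

# right-hand side: expanded polynomial in (Gp, Gm)
expanded = (Gp**2 * np.sum(vp**2 / omega2)
            + Gm**2 * np.sum(vm**2 / omega2)
            + np.sum(l**2 / omega2)
            + 2 * Gp * Gm * np.sum(vp * vm / omega2)
            - 2 * Gp * np.sum(vp * l / omega2)
            - 2 * Gm * np.sum(vm * l / omega2))

assert np.isclose(direct, expanded)
```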
The coefficients of this polynomial in $G_p, G_m$ are sums from $i=2$
to $n$. We have observed that these sums converge rapidly as terms are added.
Simple systems on which to test this convergence are chains and
grids (cartesian product of two chains). There, one can compute
explicitly the eigenvectors and eigenvalues so that the
network can be made arbitrarily large. A grid is also
a first approximation of a transmission network.
A chain with $n$ nodes has eigenvalues $\omega^2_{i}$ and
eigenvectors $\mathbf{v}^i$ whose components $\mathbf{v}^{i}_{p}$ are
\begin{equation}\label{gli}
\omega^2_{i} = 4 \sin^2 {\pi (i-1) \over 2 n} , ~~i=1,\dots, n\end{equation}
\begin{equation}\label{vip}
\mathbf{v}^{i}_{p}=
{1 \over N_i }
\cos [{\pi (i-1) \over n}(p -{1\over 2}) ], ~~p=1,\dots, n \end{equation}
where the normalization factor is $N_i = \sqrt{n}$ if $i=1$
and $N_i = \sqrt{n/2}$ otherwise.
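The closed-form spectrum (\ref{gli})-(\ref{vip}) is easy to check numerically against the Laplacian of a small chain. A minimal numpy sketch (the chain length below is arbitrary):

```python
import numpy as np

n = 8
# Laplacian of a chain (path graph) with n nodes: degree 1 at the
# endpoints, degree 2 inside
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

# closed-form eigenvalues, eq. (gli)
i = np.arange(1, n + 1)
omega2 = 4 * np.sin(np.pi * (i - 1) / (2 * n)) ** 2
assert np.allclose(np.sort(omega2), np.linalg.eigvalsh(L), atol=1e-10)

# closed-form eigenvector for i = 3 (i.e. i - 1 = 2), eq. (vip)
p = np.arange(1, n + 1)
v3 = np.cos(2 * np.pi * (p - 0.5) / n) / np.sqrt(n / 2)
assert np.allclose(L @ v3, omega2[2] * v3, atol=1e-10)
```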
Let us consider $ \parallel P_l \parallel^2_2$ for this network. From
(\ref{pl2}) we have
$$ \parallel P_l \parallel^2_2 = \sum_{i=2}^n {p_i^2 \over \omega_i^2}=
{1\over 4} \sum_{i=2}^n {p_i^2 \over \sin^2 {\pi (i-1) \over 2 n}}.$$
The error committed when truncating the sum at $k \le n $ is
$$\delta_{k} \equiv \parallel P_l \parallel^2_2 - s_{k}^2
= {1\over 4} \sum_{i=k}^{n-1} {p_{i+1}^2 \over \sin^2 {\pi i \over 2 n}} .$$
This quantity is positive and the sequence
${1 \over \sin^2 {\pi i \over 2 n}}$ is decreasing so that
$\delta_{k}$ has the following upper bound
$$\delta_{k} \le {1\over 4}~~ \underset {k+1 \le i\le n} {\rm max}~ (p_i^2)~~
\int_{k-1}^{n-1} {dx \over \sin^2 {\pi x \over 2 n}}.$$
Finally we obtain
\begin{equation}\label{bsk2} \delta_{k} \le {n \over 2 \pi}~~
\underset {k+1 \le i\le n} {\rm max}~
( p_i^2) ~~
\left[ \mathrm{cotan}{\pi (k-1) \over 2 n} -\mathrm{cotan}{\pi (n-1) \over 2 n} \right ].\end{equation}
\begin{figure}[H]
\centerline{ \epsfig{file=figs/ch10.ps,height=14cm,width=5cm,angle=90}}
\caption{
Plot of the partial sum $s_{k}^2$ (left) and the error $\delta_{k}$
(right) as a function of $k$ for a chain with $n=100$ nodes.
The upper bound (\ref{bsk2}) is shown in dashed
line (red online). See text for parameters.}
\label{ch10}
\end{figure}
To see how good the estimate (\ref{bsk2}) is, we studied a chain
with $n=100$ nodes. The generator vector is such that
$G(31)=1,~G(5)=3$, $0$ elsewhere
and the load vector verifies $L(4)=2,~ L(62)=1,~L(15)=1$ and $0$ elsewhere.
The left panel of Fig. \ref{ch10} shows the partial sum $s_{k}^2$ as
a function of $k$. It reaches 80 \% of its value for $k\approx n/5$.
The error $\delta_{k}$ (right panel) decreases sharply for $k < n/5$,
afterwards its decrease is much slower. The upper bound (\ref{bsk2})
is shown as a dashed line (red online). The fairly large difference is
due to $p_i^2$: this quantity depends on the eigenvectors and is difficult
to estimate, so the only option is to take the upper bound
$ \underset {k+1 \le i\le n} {\rm max}~ p_i^2 $. We will discuss this
at the end of the section.
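The chain experiment above can be reproduced directly from the closed-form eigenpairs. The sketch below rebuilds the power vector of this example (node indices translated to 0-based) and checks the bound (\ref{bsk2}) at one truncation point:

```python
import numpy as np

n = 100
# power vector of the example: G(31)=1, G(5)=3, L(4)=2, L(62)=1, L(15)=1
P = np.zeros(n)
P[30] += 1.0; P[4] += 3.0
P[3] -= 2.0; P[61] -= 1.0; P[14] -= 1.0

# chain eigenpairs from eqs. (gli)-(vip); rows of V are the v^i
i = np.arange(1, n + 1)
omega2 = 4 * np.sin(np.pi * (i - 1) / (2 * n)) ** 2
nodes = np.arange(1, n + 1)
V = np.cos(np.pi * np.outer(i - 1, nodes - 0.5) / n)
V[0] /= np.sqrt(n)
V[1:] /= np.sqrt(n / 2)

p = V @ P                                 # modal components p_i
terms = p[1:] ** 2 / omega2[1:]           # terms of ||P_l||_2^2, i = 2..n
full = terms.sum()                        # ||P_l||_2^2
delta_k = full - np.cumsum(terms)         # truncation error, k = 2..n

k = 20                                    # check the bound at one k
mx = np.max(p[k:] ** 2)                   # max over i = k+1..n
cot = lambda x: np.cos(x) / np.sin(x)
bound = n / (2 * np.pi) * mx * (cot(np.pi * (k - 1) / (2 * n))
                                - cot(np.pi * (n - 1) / (2 * n)))
assert 0.0 <= delta_k[k - 2] <= bound
```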
Consider now a grid formed by the cartesian product
$C_n \times C_m$ of two chains $C_n$ and $C_m$ with $n$ and $m$
nodes respectively. Its eigenvalues are
$\omega^2_{i,j} = \omega^2_i + \omega^2_j $ where $\omega^2_i$ is an eigenvalue
for $C_n$ while $\omega^2_j$ is an eigenvalue
for $C_m$. The associated eigenvector is $\mathbf{v}^{ij} = \mathbf{v}^i \otimes \mathbf{v}^j$,
the Kronecker product of $\mathbf{v}^i$ and $\mathbf{v}^j$ (more details can be
found in the book \cite{booklapla}).
The eigenvalue $\omega^2_{i,j}$ and the components $\mathbf{v}^{ij}_{pq}$ are
\begin{equation}\label{glij}
\omega^2_{i,j} = 4 \left [
\sin^2 {\pi (i-1) \over 2 n} + \sin^2 {\pi (j-1) \over 2 m} \right ] , \end{equation}
\begin{equation}\label{vijpq}
\mathbf{v}^{ij}_{pq}= \mathbf{v}^i_p \mathbf{v}^j_q =
{1 \over N_i N_j}
\cos [{\pi (i-1) \over n}(p -{1\over 2}) ]
\cos [{\pi (j-1) \over m}(q -{1\over 2}) ], \end{equation}
where $ i,p \in \{1,\dots ,n\},~~~j,q \in \{1,\dots ,m\}$ and where the
normalization factors $N_i,N_j$ follow the rules of the chains.
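Numerically, the grid spectrum can be checked by assembling the Cartesian-product Laplacian as a Kronecker sum. The sketch below verifies (\ref{glij}) and the bound $\omega^2_{i,j} \le 8$ for the $n=7$, $m=15$ grid of Fig. \ref{lgril}:

```python
import numpy as np

def chain_laplacian(n):
    # Laplacian of a chain (path graph) with n nodes
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    return L

n, m = 7, 15
# Laplacian of the Cartesian product C_n x C_m as a Kronecker sum
Lg = (np.kron(chain_laplacian(n), np.eye(m))
      + np.kron(np.eye(n), chain_laplacian(m)))

# closed-form eigenvalues: omega2_{i,j} = omega2_i + omega2_j
wi = 4 * np.sin(np.pi * np.arange(n) / (2 * n)) ** 2
wj = 4 * np.sin(np.pi * np.arange(m) / (2 * m)) ** 2
omega2 = np.add.outer(wi, wj).ravel()

assert np.allclose(np.sort(omega2), np.linalg.eigvalsh(Lg), atol=1e-10)
assert omega2.max() <= 8.0   # bound quoted in the text
```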
\begin{figure}[H]
\centerline{ \epsfig{file=figs/grille.eps,height=6cm,width=12cm,angle=0}}
\caption{ Eigenvalues $\omega^2_{i,j}$ as a function
of $i,j$ for a grid $n=7, m=15$. On the bottom we
show the level sets ranging from $0$ to $8$ and separated by $0.25$.}
\label{lgril}
\end{figure}
The eigenvalues $\omega^2_{i,j}$ are such that
$\omega^2_{i,j} \le 8$. They increase monotonically with $i$ and $j$ as
shown in Fig. \ref{lgril}; there the contour lines are separated by $0.25$.
The expression of $\parallel P_l\parallel_2^2$ is
\begin{equation}\label{pl2grid}
\parallel P_l\parallel_2^2 = \sum_{i=1}^{n} \sum_{j=1}^{m}
{ p_{ij}^2 \over \omega^2_{ij}} , \end{equation}
where $p_{ij}$ is the component of the power on the eigenvector
$\mathbf{v}^{ij}$ and where $p_{11}=0$. The sum is written this way for ease
of notation; the term $i=j=1$ must be omitted because $\omega_{11}=0$.
Let us consider the residual
\begin{equation}\label{res2d}
\delta_{k,l} \equiv
\parallel P_l\parallel_2^2 - \sum_{i=1}^{k} \sum_{j=1}^{l} { p_{ij}^2 \over \omega^2_{ij}} .
\end{equation}
Assume for simplicity $n=m,~~k=l$. We have
$$\delta_{k,k} \le {1\over 4} ~~
\underset {k+1 \le i,j \le n} {\rm max} (p_{ij}^2) ~~
\sum_{i,j=k}^{n} {1 \over \sin^2 {\pi i \over 2 n}+ \sin^2 {\pi j \over 2 n} }
\le {1\over 4} ~~
\underset {k+1 \le i,j \le n} {\rm max} (p_{ij}^2) ~~I_2(k) ,
$$
where $I_2$ is the integral over the strip $S$, see Fig. \ref{anul}
\begin{equation}\label{i2}
I_2(k) = \iint_{S}
{dx dy \over \sin^2 {\pi x \over 2 n}+ \sin^2 {\pi y \over 2 n} }
\end{equation}
\begin{figure}[H]
\centerline{ \epsfig{file=figs/anul.eps,height=6cm,width=12cm,angle=0}}
\caption{ Integration domain for $I_2$ in the $(x,y)$ plane.}
\label{anul}
\end{figure}
The integrand in $I_2$ is positive so $I_2$ can be bounded from above by
the integral on the quarter annulus $A$ bounded by the circles $C_1$
and $C_2$ shown in Fig. \ref{anul}.
We have $$I_2= ({2n \over \pi})^2
\iint_{{\pi(k-1) \over 2n} \le w,z \le {\pi(n-1) \over 2n}}
{dw dz \over \sin^2 w + \sin^2 z } .$$
The function $ \sin^2 (r\cos \theta) + \sin^2 (r\sin \theta)$ is
minimum for $\theta=\pi/4$ so that
$${1 \over \sin^2 (r\cos \theta) + \sin^2 (r\sin \theta)}
\le {1 \over 2 \sin^2 (r/\sqrt{2})} .$$
Then
$$\delta_{k,k} \le ~~
({n \over \pi})^2
\underset {k+1 \le i,j \le n} {\rm max} (p_{ij}^2 )~~
\int_{0}^{\pi/2} d\theta \int_{\pi(k-1) \over 2n}^{\pi(n-1) \over n\sqrt{2}}
{r\,dr \over \sin^2 {r \over \sqrt{2}} },$$
and further calculations yield the final result
\begin{equation}\label{dkk}
\delta_{k,k} \le n^2 \underset {k+1 \le i,j \le n} {\rm max} (p_{ij}^2 )~~
\left[ \mathrm{cotan}{\pi (k-1) \over 2 \sqrt{2} n} -\mathrm{cotan}{\pi (n-1) \over 2 n}
\right ].\end{equation}
The dominant term is the first $\mathrm{cotan}$. It is large for $k$ small
and decays quickly as $k$ increases. For ${k-1 \over 2 \sqrt{2} n}=0.2$,
$\mathrm{cotan}{\pi (k-1) \over 2 \sqrt{2} n} \approx 1.37$.
Again, this upper bound is not sharp because of the crude
bound on $p_{ij}^2$.
To analyze the effects of $p_{ij}^2$, we have to fix the
distribution of generators and loads. Assume as in the
beginning of the section that we only have two generators placed
at nodes $p$ and $m$ and uniform loads. Then, we can use expression
(\ref{pl2km}) for $\parallel P_l \parallel_2^2 $.
For the grid, the indices $p,m$ are associated with four
indices $(p,q), ~~(r,s)$. This means that we place one generator at
position $(p,q)$ and another at $(r,s)$.
Assume these positions are fixed; we introduce the partial sum
\begin{equation}\label{skg2}
s_{k}^2=\sum_{i,j =1}^{k} {\mathbf{v}^{ij}_{pq} \mathbf{v}^{ij}_{rs} \over \omega^2_{i,j}},\end{equation}
with the restriction that we omit the term $i=j=1$.
To examine how $s_{k}^2 \to s_{n}^2$,
we considered a grid of size $n=61, m=61$ and computed $s_{k}^2$
for $(p,q,r,s)=(10,4,20,28), (10,4,10,28),
(4,4,6,6)$ and $(4,4,15,15)$. The results are shown in Fig. \ref{grins}.
\begin{figure}[H]
\centerline{ \epsfig{file=figs/grins.eps,height=6cm,width=12cm,angle=0}}
\caption{ Partial sums $s_{k}^2=\sum_{ij}^{k} {\mathbf{v}^{ij}_{pq} \mathbf{v}^{ij}_{rs} \over \omega^2_{i,j}},$ as function of $k$ for different $(p,q,r,s)$
configurations. }
\label{grins}
\end{figure}
In all cases, except for the close-nodes configuration
$(4,4,6,6)$, the sum converges for $k \approx 10$. For the $(4,4,6,6)$
configuration, the sum has converged by $k \approx 20 \ll n$.
We observe similar fast convergence of the other sums
in expression (\ref{pl2km}).
\subsection{The IEEE 30 network}
There are only six generators in this network,
\begin{equation} \label{gen_case30}
G_1 = 23.54, ~G_2= 60.97, ~G_{13}=37, ~ G_{22}=21.59,~ G_{23}=19.2,~ G_{27}=26.91.
\end{equation}
The loads are distributed uniformly over the network.
\begin{figure}[H]
\centerline{ \epsfig{file=figs/pp30.ps,height=14cm,width=5cm,angle=90}}
\caption{
Plot of $|p_i|$ (left) and ${|p_i| \over \omega_i^2}$ (right) as a function of $i$ for the power vector $P$ of
IEEE case 30.}
\label{pp30}
\end{figure}
The components of the power vector $P$ are shown in Fig. \ref{pp30}.
The right panel shows that, as expected, ${|p_i| \over \omega_i^2}$
decays with $i$.
First, we examine the convergence of $s_k^\infty, ~~s_k^2$ as $k$
increases. The graph is shown in Fig. \ref{pld2_30}.
\begin{figure}[H]
\centerline{\epsfig{file=figs/pld2_30.eps,height=4cm,width=10cm,angle=0}}
\caption{
Plot of the partial sums $s_{k}^\infty$, $s_{k}^2/2$ from
(\ref{part_pli},\ref{part_pl2}) as a function of ${k}$.}
\label{pld2_30}
\end{figure}
Note how $s_{k}^\infty$ and $s_{k}^2$ increase fast up to ${k}\approx 15$
terms; as expected, the small eigenvalues dominate the sum.
Past $k=12$, the $L_\infty$ norm is stable, while the $L_2$ norm
continues to increase but at a much slower rate.
We did not carry out a full optimization of the amplitudes of the
generators since this is outside the scope of the article. Instead,
we varied the amplitudes $G_i$ to examine
how the power in the lines varies. We show two cases in the table below.
\begin{table} [H]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& $G_1$ & $G_2$ & $G_{13}$ & $G_{22}$ & $G_{23}$ & $G_{27}$ & $ \parallel P_l \parallel _2$ & $ \parallel P_l \parallel _\infty$ \\ \hline
original& 23.54& 60.97& 37 & 21.59 & 19.2 & 26.91 & 68.78 & 37.\\
case 2 & {\bf 3.54}& 60.97& 37 & 21.59 &{\bf 29.2}&{\bf 36.91}&{\bf 63.26} &{\bf 21.07}\\
\hline
\end{tabular}
\caption{Two different configurations of generators for IEEE case 30 with their
associated line powers $ \parallel P_l \parallel _2$ and $ \parallel P_l \parallel _\infty$.
The terms that have changed from the
original configuration are written in bold. }
\label{tab2}
\end{table}
We computed the partial sum $s_{k}^\infty$ as a function of ${k}$
for the two configurations of table \ref{tab2} in Fig. \ref{part_pl}.
\begin{figure}[H]
\centerline{\epsfig{file=figs/pld30.eps,height=5cm,width=10cm,angle=0}}
\caption{
Plot of the partial sum (\ref{part_pli}) as a function of $k$
for the two configurations original (0) and case 2,
shown in table \ref{tab2}.}
\label{part_pl}
\end{figure}
Configuration 2 has a much lower value of $s_{k}^\infty$ than
the original configuration. To show the importance of the modal
distribution of power, we plot in Fig. \ref{ppo30} $|p_i|/ \omega_i^2$
as a function of $i$ for the two configurations.
\begin{figure}[H]
\centerline{ \epsfig{file=figs/ppo30.ps,height=14cm,width=5cm,angle=90}}
\caption{
Plot of $|p_i| / \omega_i^2 $ for the original configuration (left) and the
improved configuration (right) shown in table \ref{tab2}.}
\label{ppo30}
\end{figure}
Indeed, we see that configuration 2 has smaller $|p_i|$ for $i < 15$
than the original configuration. This explains the difference in
$ \parallel P_l \parallel _2$ and especially $ \parallel P_l \parallel _\infty$.
This experiment shows that by tuning the amplitude of existing generators
one can decrease significantly the power in the lines.
We will carry out such an optimization in a further study.
\subsection{The IEEE 118 network}
The components of the power vector are shown in Fig. \ref{pp118}.
\begin{figure}[H]
\centerline{ \epsfig{file=figs/pp118.ps,height=14cm,width=5cm,angle=90}}
\caption{
Plot of $|p_i|$ (left) and ${|p_i| \over \omega_i^2}$ (right) as a function of $i$ for the power vector $P$ of
IEEE case 118.}
\label{pp118}
\end{figure}
A peak is observed in $|p_7|$ in both panels. It corresponds to reinforcing the
localized eigenvector $\mathbf{v}^7$. The large components of $p_i$ are
smoothed out in the right panel by the denominator $\omega_i^2$.
We examine the convergence of $s_{k}^\infty, ~~s_{k}^2$ as ${k}$
increases. The graph is shown in Fig. \ref{pld2_118}.
\begin{figure}[H]
\centerline{\epsfig{file=figs/pld2_118.eps,height=5cm,width=10cm,angle=0}}
\caption{
Plot of the partial sums $s_{k}^\infty$ and $s_{k}^2/2$ from
(\ref{part_pli},\ref{part_pl2}) as a function of ${k}$.}
\label{pld2_118}
\end{figure}
As for case 30, both $s_{k}^\infty$ and $s_{k}^2$ stabilize after 10 to 15 terms
and again the small eigenvalues dominate the sum.
\section{Conclusion and discussion}
We have shown that the load-flow equations can be reduced to a
singular linear system involving the graph Laplacian. Using
a basis of eigenvectors of the Laplacian, we introduced
a spectral method to solve the load-flow equations. This provides a geometrical
picture of the power flow on the network, very similar to a Fourier
decomposition.
This spectral method provides an explicit expression of
$ P_l$ as a sum of components $\nabla \mathbf{v}^i / \omega_i^2$,
where $\omega_i^2,~~ \mathbf{v}^i$ are respectively the $i$th eigenvalue
and associated eigenvector of the Laplacian. These two components
play different roles. The eigenvalues $\omega_i^2$ typically increase with $i$
so that the small-$i$ terms will generally control the sum. The term
$\nabla \mathbf{v}^i$ is more difficult to estimate; it
measures the space scale of the contribution on the network
and is loosely related to the nodal domains of $\mathbf{v}^i$.
Also, special eigenvectors $\mathbf{v}^i$ are strongly localized
in a given region of the network and will dominate $P_l$ if $i$ is small.
Soft nodes, where the eigenvector has zero components, also turned
out to be important for optimization.
Using the orthogonality of $\mathbf{v}^i$, we obtained a
Parseval-like expression of $ \parallel P_l \parallel _2$.
Numerical studies show
that the main contribution to
$ \parallel P_l \parallel _2$ and especially
to $ \parallel P_l \parallel _\infty$ tends to come from the small-$i$
eigenvalues and eigenvectors; these correspond to
large nodal domains, i.e., large scales on the network. For example,
only 10 or 20 modes are necessary to get a good estimate for a grid
network of 30 nodes.
For a 118 node network, 15 modes are sufficient to describe the solution
with a 5 \% accuracy. These numerical results are confirmed by
analysis done on a chain and a grid.
This geometric approach could complement the standard
nonlinear load-flow because it gives a global
view of the network and the power vector.
In view of the growing share of intermittent sources, our spectral
approach could therefore allow networks to be optimized and
reconfigured rapidly.
{\bf Acknowledgements}
The authors are funded by Agence Nationale de la Recherche grant "Fractal
grid". The calculations were done at the CRIANN computing center.
\section{Introduction}
Recent innovations to lasso-type algorithms \citep{efronall04, friedman2007pathwise, friedman2010regularization} have largely addressed selection of redundant variables, rejection of informative variables, and poor performance under high multicollinearity in high dimensional ($p>n$) and large scale data (large $p$ and large $n$). However, in alleviating old problems, the innovations have revealed new challenges.
Bootstrap variable selection [e.g., \citet{bach2008bolasso}, \citet{meinshausen2010stability}, \citet{wang2011random}, and \citet{mameli2017estimating}] markedly improves variable selection sparsity and inference accuracy, yet it requires repeating lasso and its variants (often with cross-validation) on hundreds of bootstrap subsamples to average the variable selection results or the inference results. \citet{xu2012asymptotic} and Sections~\ref{subsection:suml1} and \ref{subsection:comp} below illustrate that bootstrap selection methods exponentially increase computation load, limiting applicability in large scale data such as DNA sequencing, image recognition, fMRI and MRI data of the neuroimaging, and natural language processing (where both $p$ and $n$ are often over $10,000$). More seriously, choosing the bootstrap variable selection threshold, which is often set based on field experience or simulations, remains an unsolved issue. \citet{bach2008bolasso} and \citet[Figure~2]{huang2014stat} illustrate that a pre-defined threshold may omit informative variables (low power) and select redundant variables (high false discovery rate) in both high and low dimensions.
One strategy to improve lasso selection sparsity without increasing computation burden is to use a post-selection rule to screen variables selected by lasso. Post-lasso selection rules [e.g., the `safe rule' \citep{ghaoui2010safe} and the `strong rule' \citep{tibshirani2012strong}] are capable of reducing the number of variables to enhance computational efficiency in lasso. However, recent research \citep{wang2014safe, zeng2017efficient} and Section~\ref{section:example} suggest both rules may be prone to rejecting informative variables, selecting redundant variables, or proposing repeated modifications (e.g., rejecting a variable in an early round and adding it back in a later round).
Data-splitting hypothesis tests are another way to screen variables selected by lasso \citep{wasserman2009high, meinshausen2009p,romano2019multiple, diciccio2020exact}. The original data are divided into two: one part for variable selection, the other part for testing. However, to improve test power, data splitting is repeated on each bootstrap subsample, raising similar computational concerns as bootstrapping variable selection \citep{bach2008bolasso}. \citet{diciccio2020exact} also argue that because data splitting reserves some of the data for variable selection, it reduces the degrees of freedom for testing on the remaining data, presenting a challenge to detect weak signals when sample size is limited.
Specifically designed to address the challenges of high dimensional data, the variable screening algorithm \citep{fan2008sure, hall2009using,hall2009usingb, li2012robust, li2012feature} ranks the absolute values of unconditional correlations between each covariate and the response variable, selecting only the top-ranked variables. However, \citet{fan2008sure}, \citet{barut2016conditional}, and Section~\ref{section:example} below show that variable screening also suffers from selection of redundant variables and rejection of informative variables when the dependence structures are complicated.
According to \citet{friedman2001elements, weisberg04}, forward selection was historically dismissed in high-dimensional spaces because the iterative refitting of the residual makes it inefficient and sensitive to sampling randomness, multicollinearity, noise, and outliers. \citet{tibshirani2015general} illustrates through simulation that (i) forward selection may produce similar generalization errors to lasso-type estimators for fitted models and (ii) forward selection is computationally competitive to lasso in different applications (image de-noising, matrix completion, etc.). However, \citet{tibshirani2015general} does not suggest any solution to a range of issues for forward selection or lasso (solved by lars), including instability of variable selection, selection of redundant variables, lack of robustness to the irrepresentable condition and complicated dependence structures, or sensitivity to sampling randomness, multicollinearity, noise, and outliers. Moreover, \citet{tibshirani2015general} demonstrates the computation speedup through comparison without providing any rigorous analysis.
\subsection{Main results}
To address issues above, we propose a new forward selection algorithm, \emph{subsample-ordered least-angle regression (solar)}, and its coordinate-descent generalization, \emph{solar-cd}.
Solar re-constructs lasso paths using the $L_0$ norm and averages the resulting solution paths across subsamples. Path averaging retains the ranking information of the informative variables while averaging out sensitivity to high dimensionality, improving variable selection stability, efficiency, and accuracy. Using the same numerical optimizers as lasso does, solar can be easily generalized to many lasso variants. Under the \citet{zhang09} general framework of forward selection, we prove that: (i) with a high probability, path averaging perfectly separates informative variables from redundant variables on the average $L_0$ path; (ii) solar variable selection is consistent and accurate under the general framework of forward selection; and (iii) the probability that solar omits weak signals is controllable for finite sample size.
Using simulations, examples, and real-world data, we demonstrate the following advantages of solar: (i) solar yields, with less than $1/3$ of the lasso computation load, substantial improvements over lasso in terms of the sparsity (64-84\% reduction in redundant variable selection), stability, and accuracy of variable selection; (ii) compared with the lasso safe/strong rule and variable screening, solar largely avoids selection of redundant variables and rejection of informative variables in the presence of complicated dependence structures and harsh settings of the irrepresentable condition; (iii) the sparsity of solar conserves residual degrees of freedom for data-splitting hypothesis testing, improving the efficiency and accuracy of post-selection inference for weak signals with limited $n$; (iv) replacing lasso with solar in subsampling selection (e.g., the bootstrap lasso or stability selection) produces a multi-layer variable ranking scheme that improves selection sparsity and ranking accuracy with the computation load of only one lasso realization; and (v) given the computation resources, bootstrap solar is substantially faster (98\% lower computation time) than the theoretical maximum speedup for parallelized bootstrap lasso (confirmed by Amdahl's law). The efficiency of bootstrap solar makes cross validation computationally affordable for optimizing the bootstrap selection threshold even in large scale and high dimensional data. We provide a parallel computing package for solar (\texttt{solarpy}) that uses a Python interface and an Intel MKL Fortran/C++ compiler in a supplementary file and dedicated \href{https://github.com/isaac2math/solarpy}{Github page}.
The paper is organized as follows. In Section~\ref{section:algo}, we introduce the solar algorithm, show the theoretical properties of path averaging and solar, explain the coordinate descent generalization of solar, and discuss generalizations of solar to variants of lasso. In Section~\ref{section:adv}, we use examples to demonstrate the advantages of solar over lasso, the safe/strong rules, and variable screening. In Section~\ref{section:comp}, we use simulations to demonstrate the advantages of solar over lasso-type algorithms in terms of variable selection sparsity, accuracy, and computation load. In Section~\ref{section:application}, we use real-world data to show that the improvements from solar are feasible in the presence of complicated dependence structures, while lasso and elastic net [the lasso variant alleged \citep{zou2005regularization, jia2010model} to have the best selection accuracy and sparsity under multicollinearity] completely lose sparsity. The proofs of the properties of solar are in Supplementary Material~A. The \texttt{solarpy} code and raw simulation results are in Supplementary Material~B.
\section{The Solar algorithm \label{section:algo}}
The key to solar lies in the parameterization of the solution path. For any forward selection method, \citet[Theorem~2]{zhang09} shows that the earlier a variable enters the solution path, the more likely it is to be informative. Thus, an accurate and stable ordering of variables in the solution path may help to identify the informative variables. Since we focus on accuracy, the only relevant feature of the regression coefficients in the solution path is whether $\beta_i = 0$ at each stage. Thus, solar parameterizes the lasso path (or, more generally, any forward selection path) using the $L_0$ norm.
\begin{definition}[$L_0$ solution path]
%
Define the \textbf{$L_0$ solution path} on $\left( Y, X \right)$ to be the order that least angle regression includes variables across all stages. For example, if the least angle regression includes $\mathbf{x}_3$ at stage 1, $\mathbf{x}_2$ at stage 2 and $\mathbf{x}_1$ at stage 3, the corresponding $L_0$ path is the ordered set $\left\{ \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1 \right\}$.
%
\label{def:solution_path}
%
\end{definition}
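Definition~\ref{def:solution_path} allows least angle regression or any forward selection algorithm. As an illustration, the sketch below extracts the inclusion order using plain forward selection as a stand-in for least angle regression; the data and coefficients are synthetic:

```python
import numpy as np

def forward_order(X, y):
    """Order of variable inclusion under plain forward selection
    (a stand-in for least angle regression, which the definition
    also allows)."""
    n, p = X.shape
    active, r = [], y.copy()
    for _ in range(min(n, p)):
        scores = np.abs(X.T @ r)        # correlation with the residual
        scores[active] = -np.inf        # already-included variables
        active.append(int(np.argmax(scores)))
        Xa = X[:, active]
        beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
        r = y - Xa @ beta               # refit the residual
    return active

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ np.array([0.0, 4.0, 0.0, 2.0, 0.0]) + 0.1 * rng.standard_normal(200)

path = forward_order(X, y)              # the L0 path of the definition
assert path[:2] == [1, 3]               # strongest signals enter first
```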
\subsection{Solar optimized by least angle regression}
The solar algorithm involves two steps: \emph{parameterizing and averaging $L_0$ paths} and \emph{selecting variables on the average $L_0$ path}.
\subsubsection{$L_0$ path parameterizing and averaging}
The solution path is the foundation of variable (feature) selection in $L_p$-regularized linear modelling. The first step in solar is to improve the robustness of the solution path to high dimensional issues such as multicollinearity, complicated dependence structures, noise, weak signals, etc. In particular, there are two major concerns.
\begin{itemize}
%
\item Computation efficiency: computation load is a central concern in subsampling-based model averaging. Because bootstrap methods (e.g., bootstrap lasso) require hundreds of lasso repetitions to average out variable selection issues in high dimensions, they are computationally expensive with large $n$ and large $p$. Thus, improving selection performance and reducing the number of repetitions would go a long way to reducing computation load.
%
\item Averaging efficiency: the $L_1$ lasso solution path (solved by lars) is essentially a piecewise linear function $\beta = g(\lambda)$, which is easy to average. By contrast, it is not obvious how to average the $L_0$ path because it is an ordered set of rankings. If we average the ranks each $\mathbf{x}_i$ enters the path in large $p$ problems, a weak signal (i.e., an $\mathbf{x}_i$ with a small but non-zero $\left\Vert \beta_i \right\Vert_1$ in the population) may occasionally be ranked at a later stage, returning a large stage value, and exerting outlier influence on the stage value averaging. In other words, to accurately average solution paths using as few subsamples as possible, we need a parameterization method for the $L_0$ path that is more robust to outliers in ranking $\mathbf{x}_i$.
%
\end{itemize}
\noindent
Our solution to these concerns is the $\widehat{q}$ \emph{method}, summarized in Algorithm~\ref{algo:APE-lar} and illustrated in Figure~\ref{fig:q_demo}. For solution path averaging, instead of using the stage value each $\mathbf{x}_i$ enters the solution path, which varies in the range $\left[0, +\infty \right)$, Algorithm~\ref{algo:APE-lar} uses $\widehat{q}^{\,k}_i$, which normalizes the stage value into the range $\left[ 0,1 \right]$. The concentration inequalities for empirical processes show that averaging $\widehat{q}^{\,k}_i$ across subsamples converges much faster and is much more stable than averaging the stage values of the $\mathbf{x}_i$.
\smallskip
\begin{algorithm}[ht]
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\smallskip
\Input{$\left( Y, X \right)$.}
divide the original sample equally into $K$ folds and generate $K$ subsamples $\left\{ \left( Y^k, X^k \right) \right\}^{K}_{k=1}$ by removing one fold in turn from $\left( Y, X \right)$\;
set $\widetilde{p} = \min\left\{ n\left(K-1\right)/K, p \right\}$\;
\For{ k := 1 to K, stepsize = 1 \nllabel{outer_averaging_start} }{
run an unrestricted least angle regression (or any forward selection algorithm) on $\left( Y^k, X^k \right)$ and record the order of variable inclusion at each stage\;
\nllabel{inner_averaging_start}
define $\widehat{q}^k = \mathbf{0} \in \mathbb{R}^p$\;
$\forall i,l \in \mathbf{N}^+$, if $\mathbf{x}_i$ is included at stage $l$ and excluded at $l-1$, set $\widehat{q}^k_i= (\widetilde{p} + 1 - l) / \widetilde{p}$, where $\widehat{q}^k_i$ is the $i$\textsuperscript{th} entry of $\widehat{q}^k$\;
\nllabel{inner_averaging_end}
}
$\widehat{q} := \frac{1}{K} \sum_{k=1}^{K} \widehat{q}^k$\; \nllabel{outer_averaging_end}
\Return $\widehat{q}$
\caption{$\widehat{q}$ method: parameterizing and averaging $L_0$ solution paths} \label{algo:APE-lar}
\end{algorithm}
\begin{figure}[ht]
\centering
\includegraphics[width=0.66\paperwidth]{q_demo_1new.pdf}
\caption{Computation of $\widehat{q}$ on 2 subsamples by least angle regression.}
\label{fig:q_demo}
\end{figure}
After the subsamples are created, lines~\ref{inner_averaging_start}-\ref{inner_averaging_end} of Algorithm~\ref{algo:APE-lar} compute $\widehat{q}^k$, which summarizes the order that least angle regression includes each $\mathbf{x}_i$ across all stages (see Figure~\ref{fig:q_demo}). The unrestricted least angle regression ranks variables by the stage they enter the solution path. As shown in line~\ref{inner_averaging_end} of Algorithm~\ref{algo:APE-lar} and Figure~\ref{fig:q_demo}, variables included at earlier stages have larger $\widehat{q}^k_i$ values: the first variable included is assigned $1$, the last is assigned $1/\widetilde{p}$, while the rejected variables are assigned $0$ (which occurs only when $p > n$). Thus, the $L_0$ solution path is obtained by ranking the $\mathbf{x}_i$ according to their $\widehat{q}^k_i$ values.
\citet[Theorem 2]{zhang09} implies that, on average, variables with the largest $\widehat{q}^k_i$ values are more likely to be informative. The $\widehat{q}^k_i$ may be sensitive in high-dimensional spaces to multicollinearity, sampling randomness, and noise. In these circumstances, a redundant variable may be included at an early stage in some $\left( Y^k, X^k \right)$ subsample. Algorithm~\ref{algo:APE-lar} reduces the impact of sensitivity in the $\widehat{q}^k_i$ by computing $\widehat{q} := \frac{1}{K} \sum_{k=1}^{K} \widehat{q}^k$ and ranking the $\mathbf{x}_i$ according to $\widehat{q}_i$ (the $i$\textsuperscript{th} entry in $\widehat{q}$), to arrive at the average $L_0$ solution path. The average $L_0$ solution path is formally defined as follows.
\begin{definition}[average $L_0$ solution path]
%
Define the \textbf{average $L_0$ solution path} of least angle regression on $\left\{ \left( Y^k, X^k \right) \right\}_{k=1}^{K}$ to be the (decreasing) rank order of the $\mathbf{x}_i$ variables based on their corresponding $\widehat{q}_i$ values. For example, in Figure~\ref{fig:q_demo}, the $\widehat{q}_i$ for $\mathbf{x}_1$, $\mathbf{x}_2$ and $\mathbf{x}_3$ are, respectively, $\widehat{q}_1 = 5/6$, $\widehat{q}_2 = 4/6$ and $\widehat{q}_3 = 3/6$. Thus, the average $L_0$ solution path may be represented as an ordered set $\left\{ \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \right\}$.
%
\label{def:L_0_solution_path}
%
\end{definition}
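The $\widehat{q}$ computation of Algorithm~\ref{algo:APE-lar} reduces to a few lines. The sketch below reproduces the values quoted above ($5/6$, $4/6$, $3/6$) from one choice of inclusion orders consistent with those averages (the orders themselves are illustrative):

```python
import numpy as np

def qhat(orders, p):
    """Average L0-path parameterization of Algorithm 1.
    `orders` is a list of inclusion orders (0-based variable
    indices), one per subsample; rejected variables keep q = 0."""
    q = np.zeros(p)
    for order in orders:
        pt = len(order)                 # p~ = min(n(K-1)/K, p)
        for stage, var in enumerate(order, start=1):
            q[var] += (pt + 1 - stage) / pt
    return q / len(orders)

# one choice of inclusion orders that reproduces the quoted values:
# subsample 1 includes x1, x3, x2; subsample 2 includes x2, x1, x3
q = qhat([[0, 2, 1], [1, 0, 2]], p=3)
assert np.allclose(q, [5/6, 4/6, 3/6])
```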
To justify the $\widehat{q}$ method theoretically, we use the \citet{zhang09} framework to derive the theoretical properties of path averaging (see Supplementary Material~A).
\begin{itemize}
\item Under the \citet{zhang09} conditions, Lemma~\ref{lemma:1} shows that, with a high probability, using $\widehat{q}^{\,k}_i$ ranking for variable selection on $\left( Y^k, X^k \right)$ generates the same theoretical results as the \citet{zhang09} forward selection method.
%
\item Under a similar stopping condition to \citet{zhang09}, Lemma~\ref{lemma:2} shows that, with a high probability, there exists a threshold $c^k$ for the $L_0$ path on $\left( Y^k, X^k \right)$ such that $\widehat{q}^{\,k}_i \geqslant c^k$ for informative $\mathbf{x}_i$ and $\widehat{q}^{\,k}_i < c^k$ for redundant $\mathbf{x}_i$.
%
\item Using Lemma~\ref{lemma:2}, Lemma~\ref{lemma:3} shows that, with a high probability, there exists a threshold $c = \frac{1}{K}\sum_{k=1}^{K} c^k$ for the average $L_0$ path such that $\widehat{q}_i \geqslant c$ for informative $\mathbf{x}_i$ and $\widehat{q}_i < c$ for redundant $\mathbf{x}_i$.
%
\end{itemize}
\subsubsection{Variable selection on the average $L_0$ path}
The solar algorithm is constructed on the average $L_0$ path and summarized in Algorithm~\ref{algo:solar}. We present solar within the generic framework of forward regression; it can easily be adapted to least angle regression and to forward or backward selection algorithms.
\begin{algorithm}[ht]
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\smallskip
Randomly select 20\% of the sample points as the validation set; denote the remaining points as the training set\;
Estimate $\widehat{q}$ using Algorithm~\ref{algo:APE-lar} on the training set and compute $Q(c) = \left\{ \mathbf{x}_j \; \vert \; \widehat{q}_j \geqslant c, \forall j\right\}$ for all $c \in \left\{ 1, 0.98, \ldots, 0.02, 0 \right\}$\;
Run an OLS regression of $Y$ on each $Q(c)$ using the training set and find $c^*$, the value of $c$ that minimizes the validation error\;
Compute the OLS coefficients from regressing $Y$ on $Q(c^*)$ using the whole sample.
\caption{Subsample-ordered least-angle regression (solar) \label{algo:solar}}
\end{algorithm}
In Algorithm~\ref{algo:solar}, variables are included into forward regression according to their rank order in the average $L_0$ solution path, represented by $\left\{ Q(c) \vert c = 1, 0.98, \ldots, 0\right\}$ in Algorithm~\ref{algo:solar}. We use $\widehat{q}$ from Algorithm~\ref{algo:APE-lar} to generate a list of variables $Q \left( c \right) = \left\{ \mathbf{x}_j \; \vert \; \widehat{q}_j \geqslant c, \forall j \leqslant p \right\}$. For any $c_1 > c_2$, $Q\left(c_1\right) \subseteq Q\left(c_2\right)$, implying a sequence of nested sets $\left\{ Q(c) \vert c = 1, 0.98, \ldots, 0\right\}$. Each $c$ denotes a stage of forward regression. For a given value of $c$, $Q(c)$ denotes the set of variables with $\left\Vert \beta_i \right\Vert_0=1$ on average and $Q(c) \setminus Q(c + 0.02)$ is the set of variables with $\left\Vert \beta_i \right\Vert_0$ just turning to $1$ at $c$. Therefore, $\left\{ Q(c) \vert c = 1, 0.98, \ldots, 0\right\}$ is the average $L_0$ solution path of Definition~\ref{def:L_0_solution_path}. Variables that are more likely to be informative have larger $c$ values in $Q(c)$ and will be selected first by the solar algorithm.
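The $Q(c)$ construction and the validation step of Algorithm~\ref{algo:solar} can be sketched as follows. The DGP, the averaged ranking \texttt{q\_bar}, and the omission of an intercept are illustrative assumptions.

```python
import numpy as np

def q_sets(q_bar, grid):
    """Nested candidate sets Q(c) = {j : q_bar_j >= c} along a decreasing c grid."""
    return {c: np.flatnonzero(q_bar >= c) for c in grid}

def val_error(X_tr, y_tr, X_va, y_va, cols):
    """Validation MSE of an OLS fit restricted to the columns in Q(c)."""
    if len(cols) == 0:
        return float(np.mean(y_va ** 2))
    beta, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
    return float(np.mean((y_va - X_va[:, cols] @ beta) ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 2 * X[:, 0] + 3 * X[:, 1] + 0.1 * rng.standard_normal(200)
X_tr, X_va, y_tr, y_va = X[:160], X[160:], y[:160], y[160:]

q_bar = np.array([0.9, 1.0, 0.2, 0.1, 0.0])        # hypothetical averaged ranking
grid = np.round(np.arange(1.0, -0.01, -0.02), 2)   # c = 1, 0.98, ..., 0
sets = q_sets(q_bar, grid)
errors = {c: val_error(X_tr, y_tr, X_va, y_va, cols) for c, cols in sets.items()}
c_star = min(errors, key=errors.get)               # c minimizing validation error
selected = sets[c_star]                            # should contain x0 and x1
```

Because the sets are nested, the grid search is a single sweep down the average $L_0$ path rather than a search over all subsets.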
Using the \citet{zhang09} framework and Lemmas~\ref{lemma:2} and~\ref{lemma:3}, we derive the following theoretical results for variable selection (see Appendix~A).
\begin{itemize}
%
\item Theorem~\ref{thm:1} shows that solar variable selection is $L_0$ consistent under similar sparse eigenvalue and irrepresentable conditions as have been used to prove lasso consistency.
%
\item Under similar assumptions to \citet{zhang09}, Lemmas~\ref{lemma:4} and \ref{lemma:5} show that the number of omitted informative $\mathbf{x}_i$ and the probability of selecting at least one redundant $\mathbf{x}_i$ are restricted by sample size, the sparse eigenvalue condition, and the stopping condition.
%
\end{itemize}
The key difference between solar and the lasso-type estimators, and the source of the advantages of solar, is solution path averaging.
\begin{itemize}
%
\item The difference between solar and lasso is that solar averages the solution path. Lasso and solar both use the solution path for variable selection. Lasso and its variants focus on optimizing the shrinkage parameter $\lambda$ (via cross validation), leaving aside concerns about the reliability of the lasso path in high dimensions. Optimizing $\lambda$ on an unreliable path renders variable selection difficult. By contrast, solar prioritizes averaging the solution path, which not only averages out path unreliability in high dimensions, but also ranks all the informative variables at the start of the average $L_0$ path (as shown in Lemmas~\ref{lemma:2} and~\ref{lemma:3}). Hence, with a high probability, the variable selection algorithm needs only to analyze the variables at the start of the average $L_0$ path, making selection accurate and efficient.
%
\item The difference between solar and lasso-related bootstrap selection (e.g., bolasso) is in how they average the variable selection algorithm. Given the $\lambda$ value (optimal or not), lasso-related bootstrap selection averages the selection \emph{results} across subsamples. Thus, bootstrap selection requires hundreds of repetitions to average out the instability and redundancy of lasso variable selection \citep{bach2008bolasso}. By contrast, solar averages solution \emph{paths}, which solves most of the lasso instability and redundancy issues, returning a more reliable path (the average $L_0$ path). Variable selection along a reliable path substantially reduces the likelihood that solar selects redundant variables or omits informative variables. As a result, solar-related bootstrap selection (e.g., bootstrap solar or solar stability selection) requires only 3--5 repetitions to outperform hundreds of lasso-related bootstrap repetitions (see Section~\ref{subsection:comp} for details).
%
\end{itemize}
\subsection{Solar optimized by coordinate descent}
The solar algorithm can easily be generalized to use coordinate descent. For lasso, least angle regression and coordinate descent generate the same solution path, parameterized by the $\beta_i$ and the shrinkage parameter $\lambda$. Thus, to reprogram solar to use coordinate descent, we simply replace Algorithm~\ref{algo:APE-lar} with Algorithm~\ref{algo:APE-cd}, which records the order of variable selection along the coordinate descent solution path.
\smallskip
\begin{algorithm}[ht]
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\smallskip
\Input{$\left( Y, X \right)$.}
generate $K$ subsamples $\left\{ \left( Y^k, X^k \right) \right\}^{K}_{k=1}$ by randomly removing $1/K$ of the observations in $\left( Y, X \right)$\;
set $\widetilde{p} = \min\left\{ n_{\mathrm{sub}}, p \right\}$ \;
\For{ k := 1 to K, stepsize = 1 \nllabel{outer_averaging_start3} }{
denote $\lambda_s$ as the shrinkage parameter value at which coordinate descent lasso selects $s$ variables, $\forall s \in \left[ 0, \widetilde{p}\right]$\;
run a pathwise coordinate descent for lasso on $\left( Y^k, X^k \right)$, $\forall \lambda \in \left\{\lambda_0, \lambda_1, \ldots, \lambda_{\widetilde{p}}\right\}$\;
record the order of variable inclusion at each $\lambda \in \left\{\lambda_0, \lambda_1, \ldots, \lambda_{\widetilde{p}}\right\}$\;
define $\widehat{q}^k = \mathbf{0} \in \mathbb{R}^p$\;
$\forall i,s \in \mathbf{N}^+$, if $\mathbf{x}_i$ is included at $\lambda = \lambda_s$ and excluded at $\lambda_{s-1}$, set $\widehat{q}^k_i= (\widetilde{p} + 1 - s) / \widetilde{p}$, where $\widehat{q}^k_i$ is the $i$\textsuperscript{th} entry of $\widehat{q}^k$\;
}
$\widehat{q} := \frac{1}{K} \sum_{k=1}^{K} \widehat{q}^k$\; \nllabel{outer_averaging_end3}
\Return $\widehat{q}$
\caption{average $L_0$ solution path estimation via coordinate descent \label{algo:APE-cd}}
\end{algorithm}
\begin{figure}[ht]
%
\centering
%
\includegraphics[width=0.66\paperwidth]{q_demo_3.pdf}
%
\caption{Computation of $\widehat{q}$ on 2 subsamples using coordinate descent.}
%
\label{fig:q_demo_3}
%
\end{figure}
Algorithm~\ref{algo:APE-cd} serves the same purpose as Algorithm~\ref{algo:APE-lar}: to estimate the average $L_0$ path. Algorithm~\ref{algo:APE-cd} uses $\lambda$ to record the order in which each variable enters the path. Consider the example in Figure~\ref{fig:q_demo_3}. To re-parameterize the solution path, we denote $\lambda_s$ to be the $\lambda$ value at which coordinate descent lasso includes $s$ variables, $\forall s\in \left( 0, \min \left\{ n/2, p \right\} \right]$, giving a sequence of $\lambda$ for grid search. In each subsample $\left( Y^k, X^k \right)$, we train a standard pathwise coordinate descent for lasso, allowing $\lambda$ to increase stepwise within the grid $\left\{\lambda_1, \ldots, \lambda_{ \min \left\{ n/2, p \right\} } \right\}$, where $\lambda_1 \geqslant \ldots \geqslant \lambda_{ \min \left\{ n/2, p \right\} }$. In Figure~\ref{fig:q_demo_3}, when $\lambda \leqslant \lambda_3$ at subsample $\left( Y^1, X^1 \right)$, all three variables are selected in the solution path, implying that $\widehat{q}^1_i \geqslant 1/3$ for all variables. When $\lambda$ increases to $\lambda_2$, only $\{\mathbf{x}_3, \mathbf{x}_1\}$ survive the harsher shrinkage, implying that they should be ranked higher than $\mathbf{x}_2$. As a result, $\widehat{q}^1_1, \widehat{q}^1_3 \geqslant 2/3$ and $\widehat{q}^1_2 = 1/3$. When $\lambda$ reaches $\lambda_1$, only $\{\mathbf{x}_1\}$ remains, leaving $\widehat{q}^1_1 = 3/3$ and $\widehat{q}^1_3 = 2/3$. Applying the same method to each subsample produces the same $\widehat{q}$ as Algorithm~\ref{algo:APE-lar}.
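A self-contained sketch of this $\lambda$-grid re-parameterization on one subsample. The toy DGP, the three-point $\lambda$ grid scaled by the data-driven $\lambda_{max}$, and the plain cyclic coordinate-descent solver are our illustrative assumptions, not the exact implementation of Algorithm~\ref{algo:APE-cd}.

```python
import numpy as np

def cd_lasso(X, y, lam, n_sweeps=100):
    """Plain cyclic coordinate descent for min 0.5||y - Xb||^2 + lam*||b||_1 (a sketch)."""
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual excluding x_j
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def q_from_cd(X, y, lam_grid):
    """q^k on one subsample: variables surviving harsher shrinkage
    (entering at a larger lambda) receive larger weights."""
    p = X.shape[1]
    entry = np.full(p, np.inf)                     # grid index at which x_j first enters
    for s, lam in enumerate(sorted(lam_grid, reverse=True)):
        for j in np.flatnonzero(cd_lasso(X, y, lam) != 0):
            entry[j] = min(entry[j], s)
    p_tilde = int(np.isfinite(entry).sum())
    q = np.zeros(p)
    for rank, j in enumerate(np.argsort(entry)[:p_tilde]):
        q[j] = (p_tilde - rank) / p_tilde
    return q

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 3))
y = 8 * X[:, 0] + 2 * X[:, 1] + 0.5 * rng.standard_normal(400)
lam_max = np.abs(X.T @ y).max()                    # smallest lambda rejecting everything
q = q_from_cd(X, y, [0.8 * lam_max, 0.15 * lam_max, 0.01 * lam_max])
```

The strongest signal survives the harshest shrinkage and therefore heads the ranking, mirroring the Figure~\ref{fig:q_demo_3} walk-through.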
\subsection{Comparison and generalization to lasso variants \label{subsection:variant}}
\label{subsec:variant}
Because solar is trained by least angle regression or coordinate descent, it can easily be extended to several lasso variants:
\begin{itemize}
%
\item `Grouped solar' is invoked by forcing specific variables to be simultaneously selected into the solution path;
%
\item `Adaptive solar' is obtained by weighting variable rankings in the average $L_0$ path according to their OLS coefficients;
%
\item `Solar elastic net' or `fused solar' is derived by replacing the coordinate descent loss function in Algorithm~\ref{algo:APE-cd} with the $L_1$-$L_2$ loss
%
\begin{equation}
%
\left\Vert Y -X\beta \right\Vert_2^2 + \lambda^{(1)} \left\Vert \beta \right\Vert_1 + \lambda^{(2)} \left\Vert \beta \right\Vert_2^2
%
\end{equation}
%
or fused loss
%
\begin{equation}
%
\left\Vert Y -X\beta \right\Vert_2^2 + \lambda^{(1)} \left\Vert \beta \right\Vert_1 + \lambda^{(2)} \sum_{j=2}^{p} \left\vert \beta_j - \beta_{j-1} \right\vert.
%
\end{equation}
%
\end{itemize}
Furthermore, many lasso enhancements (e.g., safe/strong rules, post-lasso hypothesis testing) may be applied to solar because they use the same optimization methods. Rather than competing with the lasso enhancements, solar supplements them by improving variable selection performance and computation speed in large-scale applications.
\section{Solar advantages over lasso variants, lasso rules, and variable screening \label{section:adv}}
In this section, we use a series of examples to demonstrate the advantages of the solar algorithm for post-selection hypothesis testing, in the presence of complicated dependence structures, and in terms of its robustness to the \emph{irrepresentable condition} (IRC).
\subsection{Post-selection hypothesis testing}
A major advantage of solar is its amenability to post-selection testing. Because the lasso tests \citep{lockhartall14, taylor2014exact} are based on forward regression, they may be adapted to solar. More interestingly, it is straightforward to adapt the data-splitting tests \citep{wasserman2009high,meinshausen2009p} to solar for weak signal detection. We illustrate this point using Example~1.
\smallskip
\noindent
\textbf{Example 1.} Consider the DGP
\begin{equation}
Y = \mathbf{x}_0 + 2 \mathbf{x}_1 + 3 \mathbf{x}_2 + 4 \mathbf{x}_3 + 5 \mathbf{x}_4 + \sum_{j=5}^{p} 0 \cdot \mathbf{x}_j + e,
\end{equation}
where $\mathbf{x}_i$, $i=0,\dots,p$, are standard Gaussian variables with pairwise correlations of $0.5$, $e$ is a standard Gaussian noise term, and $p/n=100/100$.
Following \citet[Example~4.1]{romano2019multiple} and \citet{diciccio2020exact}, we conduct data-splitting tests by randomly separating the data into two portions of 50 observations. In the first round, one portion is used for solar or lasso selection and the other for testing. In the second round, the roles of the two portions are reversed. As a result, the p-values of any given variable are uncorrelated across the two rounds. Thus, we may apply Theorem~3.2 of \citet{romano2019multiple} and compute the average p-value across the two rounds to conduct a valid t-test for any selected covariate.
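The two-round scheme can be sketched as follows. The DGP and the hard-coded \texttt{selected} set (a stand-in for running solar or lasso on the other half) are illustrative assumptions; the p-values come from standard OLS t-tests without an intercept.

```python
import numpy as np
from scipy import stats

def ols_p_values(X, y):
    """Two-sided t-test p-values for the OLS coefficients (no intercept; a sketch)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return 2 * stats.t.sf(np.abs(beta / se), df=n - k)

rng = np.random.default_rng(1)
n = 100
X = rng.standard_normal((n, 10))
y = X[:, 0] + 2 * X[:, 1] + rng.standard_normal(n)

half = n // 2
selected = [0, 1]   # hypothetical stand-in for the selection step on the other half
# round 1: select on the first half, test on the second; round 2: swap the roles
p1 = ols_p_values(X[half:][:, selected], y[half:])
p2 = ols_p_values(X[:half][:, selected], y[:half])
p_avg = (p1 + p2) / 2   # average p-value across the two rounds
```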
\citet{diciccio2020exact} stresses the importance of retaining residual degrees of freedom to ensure accurate tests. Because solar yields sparser and more accurate variable selection than lasso does (Section~\ref{section:comp}), it conserves residual degrees of freedom, improving the reliability of post-selection p-values. Figure~\ref{fig:p_value_compare} plots the average p-values for the informative variables $\{\mathbf{x}_0,\ldots,\mathbf{x}_4\}$ from post-solar and post-lasso data-splitting tests using 100 repetitions. While the solar and lasso p-values are less than $0.05$ for the stronger signals $\{\mathbf{x}_1,\ldots,\mathbf{x}_4\}$, more than $25\%$ of the lasso p-values exceed $0.05$ for the weakest signal $\mathbf{x}_0$, implying non-trivial false non-rejection of $H_0$. By contrast, the solar p-value boxplot is very compact for $\mathbf{x}_0$, with only $5$ out of $100$ above $0.05$. Hence, solar p-values are more reliable for detecting weak signals with small $n$ and large $p$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\paperwidth]{p_value_compare.pdf}
\caption{Average p-value boxplots for data-splitting t-tests with solar and lasso.}
\label{fig:p_value_compare}
\end{figure}
Moreover, the solar $L_0$ path may also assist with the formulation of $H_0$ for $p>n$. Because conserving residual degrees of freedom is so important, tests on the selection (omission) of redundant (informative) variables trigger decisions on which $\beta_i$ to test. \citet[Theorem~2]{zhang09} shows that the earlier a variable enters the $L_0$ path, the more likely it is to be informative, implying that variables should be tested in rank order. Given that the solar path is more robust than the lasso path to settings of the irrepresentable condition, sampling noise, multicollinearity, and other issues, it provides more reliable guidance on the order in which to test the $\beta_i$. $\blacksquare$
\subsection{Complicated dependence structures\label{section:example}}
Another advantage of solar is that the average $L_0$ solution path is more robust to outliers, multicollinearity, and noise in high-dimensional spaces. Thus, solar is likely to be more reliable than other variable selection methods under complicated dependence structures. We illustrate the point with the following two (Bayesian network) examples.
\begin{figure}[ht]
\centering
%
\includegraphics[width=0.35\paperwidth]{uncond_example.pdf}
%
\caption{Y is unconditionally uncorrelated with an informative $\mathbf{x}_1$.}
%
\label{fig:uncond_example}
\end{figure}
The first example is a common empirical regression problem: \emph{informative variables} that are \emph{unconditionally uncorrelated to} $Y$ in the DGP. In Figure~\ref{fig:uncond_example}, $\mathbf{x}_1$ and $\mathbf{x}_2$ are informative for $Y$, while $\mathbf{x}_1$ and $Y$ are independent. For example, in biostatistics, concussion ($\mathbf{x}_1$) or a brain tumor ($Y$) may cause headaches ($\mathbf{x}_2$), implying that concussion history is informative when attempting to diagnose a brain tumor. In this setting, Example~2a shows that solar is more reliable than post-lasso rules and variable screening.
\smallskip
\noindent
\textbf{Example 2a.} In Figure~\ref{fig:uncond_example}, there are $100$ variables and $\mathbf{x}_2$ is (causally) generated by its parents $\left\{ \mathbf{x}_1, Y \right\}$ as follows,
\begin{equation}
%
\mathbf{x}_2 = \alpha_1 \mathbf{x}_1 + \alpha_2 Y + u,
%
\label{eqn:collider_1}
%
\end{equation}
where $\mathbf{x}_1$ is unconditionally uncorrelated with $Y$, $\mathbf{x}_1$ and $Y$ are both unconditionally and conditionally uncorrelated with the redundant variables $\{\mathbf{x}_3, \ldots, \mathbf{x}_{99}\}$, $\left\{\alpha_1, \alpha_2 \right\}$ are population regression coefficients, and $u$ is a Gaussian noise term. If $Y$ is chosen to be the response variable, the population regression equation is
\begin{equation}
%
Y = -\frac{\alpha_1}{\alpha_2} \mathbf{x}_1 + \frac{1}{\alpha_2} \mathbf{x}_2 - \frac{1}{\alpha_2}u.
%
\label{eqn:collider_2}
%
\end{equation}
Note that $\mathbf{x}_1$ and $\mathbf{x}_2$ are both informative variables for $Y$. However, since $\mathbf{x}_1$ is unconditionally uncorrelated with $Y$ in the population, the post-lasso rules [such as the strong rule \citep{tibshirani2012strong} and the safe rule \citep{ghaoui2010safe}] may be prone to rejecting $\mathbf{x}_1$. For a given value of the shrinkage parameter $\lambda$ in grid search, the safe rule and the base strong rule for lasso reject a selected variable when, respectively, (\ref{eqn:safe_rule}) and (\ref{eqn:strong_rule}) are satisfied:
\begin{eqnarray}
%
\left\vert \mathbf{x}_i^T Y \right\vert < & \lambda - \left\Vert \mathbf{x}_i \right\Vert_2 \left\Vert Y \right\Vert_2 \frac{\lambda_{max} - \lambda} {\lambda_{max}} ; \label{eqn:safe_rule} \\
%
\left\vert \mathbf{x}_i^T Y \right\vert < & 2\lambda - \lambda_{max} , \label{eqn:strong_rule}
%
\label{eqn:post_estmation_rule}
%
\end{eqnarray}
where the $\mathbf{x}_i$ are standardized and $\lambda_{max}$ is the value of the shrinkage parameter that rejects all the variables. Both rules are based on the unconditional covariance between $\mathbf{x}_i$ and $Y$. For a given value of $\lambda$ (typically selected by CV), lasso will likely select $\mathbf{x}_1$ and $\mathbf{x}_2$ along with redundant variables from $\left\{ \mathbf{x}_3, \ldots, \mathbf{x}_{99} \right\}$ [because the DGP does not violate the IRC]. Since $\mathrm{corr} \left( \mathbf{x}_1, Y \right) = \mathrm{corr} \left( \mathbf{x}_3, Y \right) = \cdots = \mathrm{corr} \left( \mathbf{x}_{99}, Y \right) = 0$ in the population, the sample value of $\left\vert \mathbf{x}_1^T Y \right\vert$ will be approximately as small as the $\left\vert \mathbf{x}_i^T Y \right\vert$ of any redundant variable. Put another way, $\mathbf{x}_1$ cannot be distinguished from the redundant variables by the value of $\left\vert \mathbf{x}_i^T Y \right\vert$. To ensure $\mathbf{x}_1$ is not rejected by (\ref{eqn:safe_rule}) or (\ref{eqn:strong_rule}), both $\lambda - \left\Vert \mathbf{x}_1 \right\Vert_2 \left\Vert Y \right\Vert_2 \frac{\lambda_{max} - \lambda} {\lambda_{max}}$ and $2\lambda - \lambda_{max}$ must be smaller than $\left\vert \mathbf{x}_1^T Y \right\vert$. However, this will lead to two problems. First, decreasing the right-hand side of (\ref{eqn:safe_rule}) and (\ref{eqn:strong_rule}) will reduce the value of $\lambda$, implying that lasso will select more redundant variables. Second, since $\left\vert \mathbf{x}_1^T Y \right\vert$ will be approximately as small as the $\left\vert \mathbf{x}_i^T Y \right\vert$ of any redundant variable selected by lasso, not rejecting $\mathbf{x}_1$ (by reducing both right-hand side terms) may result in (\ref{eqn:safe_rule}) and (\ref{eqn:strong_rule}) retaining redundant variables.
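The two rejection bounds are easy to evaluate directly. The toy variables below (one unconditionally correlated with $Y$, one not) and the choice $\lambda = 0.9\lambda_{max}$ are illustrative assumptions; the point is only that both rules screen on the unconditional statistic $\left\vert \mathbf{x}_i^T Y \right\vert$.

```python
import numpy as np

def safe_rule_discards(x, y, lam, lam_max):
    """SAFE-rule bound (eqn safe_rule): discard x_i when the inequality holds."""
    gap = lam - np.linalg.norm(x) * np.linalg.norm(y) * (lam_max - lam) / lam_max
    return abs(x @ y) < gap

def strong_rule_discards(x, y, lam, lam_max):
    """Base strong-rule bound (eqn strong_rule): discard x_i when the inequality holds."""
    return abs(x @ y) < 2 * lam - lam_max

rng = np.random.default_rng(2)
n = 200
y = rng.standard_normal(n)
x_corr = 0.8 * y + 0.6 * rng.standard_normal(n)  # unconditionally correlated with Y
x_unc = rng.standard_normal(n)                   # uncorrelated with Y, like x1 in Example 2a

lam_max = max(abs(x_corr @ y), abs(x_unc @ y))   # lambda rejecting all variables
lam = 0.9 * lam_max
```

At this $\lambda$, both rules discard \texttt{x\_unc} and keep \texttt{x\_corr}, regardless of whether the uncorrelated variable is conditionally informative.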
Variable screening methods \citep{fan2008sure} may also be prone to selecting redundant variables. Screening ranks variables in decreasing order of the absolute values of their unconditional correlations to $Y$, selecting the top $w$ variables (with $w$ chosen by CV, bootstrap, or BIC). Since $\mathrm{corr} \left( \mathbf{x}_2, Y \right) \neq 0$ in the population, screening will rank $\mathbf{x}_2$ highly. However, it may not rank $\mathbf{x}_1$ highly because $\mathrm{corr} \left( \mathbf{x}_1, Y \right) = 0$ in the population. Thus, some redundant variables may be ranked between $\mathbf{x}_2$ and $\mathbf{x}_1$, implying that if both $\mathbf{x}_1$ and $\mathbf{x}_2$ are selected, screening will select redundant variables.
The average $L_0$ solution path will not suffer the same problems. For convenience, assume $-\alpha_1 / \alpha_2 > 0$ and $p/n = 100/200$ or smaller. For least angle regression, as $\left\Vert \beta_2 \right\Vert_1$ increases at stage~1 (i.e., as $\mathbf{x}_2$ is `partialled out' of $Y$), the unconditional correlation between $Y - \beta_2 \mathbf{x}_2$ and $\mathbf{x}_1$ will increase significantly above $0$, while the marginal correlation between $Y - \beta_2 \mathbf{x}_2$ and any redundant variable will remain approximately $0$. Thus, in the $L_0$ solution path and, hence, the average $L_0$ solution path, $\mathbf{x}_1$ will be included immediately after $\mathbf{x}_2$ is included. $\blacksquare$
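This partialling-out effect is easy to reproduce numerically. The values $\alpha_1 = \alpha_2 = 0.5$ are illustrative (with these values the population coefficient on $\mathbf{x}_1$ is negative, so only the magnitude of the correlation matters here).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x1 = rng.standard_normal(n)
y = rng.standard_normal(n)            # x1 and Y unconditionally independent
u = rng.standard_normal(n)
x2 = 0.5 * x1 + 0.5 * y + u           # collider: alpha_1 = alpha_2 = 0.5
x_red = rng.standard_normal(n)        # a redundant variable

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_before = corr(y, x1)                # approximately 0 before stage 1
beta2 = (x2 @ y) / (x2 @ x2)          # stage-1 least-squares fit of Y on x2
r_after = corr(y - beta2 * x2, x1)    # x1 becomes visible after partialling out x2
r_red = corr(y - beta2 * x2, x_red)   # redundant variable stays near 0
```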
\citet{fan2008sure} and \citet{barut2016conditional} propose two solutions for the problems with variable screening in situations like Example~2a. However,
\begin{itemize}
%
\item the first approach \citep[Section~2.2 and~3]{barut2016conditional} assumes the identity of $\mathbf{x}_2$ is known, which is unlikely to be realistic in practical applications. [In Bayesian networks or probabilistic graph modelling, $\mathbf{x}_2$ is known as a \emph{collider}; \citet{barut2016conditional} refer to $\mathbf{x}_2$ as a \emph{hidden signature} variable and denote it by $X_c$];
%
\item the second approach \citep[Section~1 and~2.2]{barut2016conditional} suggests randomly trying out several variables as colliders. The logic is straightforward: trying out a wrong variable (i.e., any variable other than the collider $\mathbf{x}_2$) is harmless because conditioning on that variable will neither make $\mathrm{corr}(Y,\mathbf{x}_1) \neq 0$ nor cause the selection of a redundant variable. Moreover, by repeatedly trying out variables at random, there is a non-zero probability that the correct collider will eventually be uncovered, producing a statistically significant $\mathrm{corr}(Y,\mathbf{x}_1) \neq 0$. However, using multiple trials may be inefficient and computationally expensive, especially with high-dimensional data. To improve high-dimensional efficiency, \citet{barut2016conditional} suggest trying out several variables simultaneously. However, if $\mathrm{corr}(Y, \mathbf{x}_1) \neq 0$ were discovered after trying out, say, $\left\{\mathbf{x}_2,\mbox{other variables}\right\}$, it would still be necessary to decide which of $\left\{\mathbf{x}_2,\mbox{other variables}\right\}$ are redundant, meaning variable selection is not complete.
%
\end{itemize}
\medskip
The second example illustrates another common problem in empirical regression: \emph{redundant variables} that are \emph{unconditionally correlated to} $Y$ in the DGP. In Figure~\ref{fig:cond_example}, the problem occurs because $\mathbf{x}_3$ and $Y$ are determined by common variables. For example, house rent ($Y$) and food expenditure ($\mathbf{x}_3$) are both determined by income ($\mathbf{x}_1$) and saving ($\mathbf{x}_2$), yet $\mathbf{x}_3$ is redundant if $\mathbf{x}_1$ and $\mathbf{x}_2$ are used to predict $Y$. In this setting, Example~2b illustrates that the base strong rule, the safe rule, and variable screening methods may struggle to reject the redundant $\mathbf{x}_3$ even when the IRC is satisfied. By contrast, solar will be less prone to selecting redundant variables.
\begin{figure}[ht]
%
\centering
%
\includegraphics[width=0.35\paperwidth]{example3.pdf}
%
\caption{$Y$ is unconditionally correlated with a redundant $\mathbf{x}_3$.}
%
\label{fig:cond_example}
%
\end{figure}
\smallskip
\noindent
\textbf{Example 2b.} Figure~\ref{fig:cond_example} depicts the following confounding structure,
\begin{equation}
%
\begin{cases}
%
\mathbf{x}_3 = \frac{1}{3} \mathbf{x}_1 + \frac{1}{3} \mathbf{x}_2 + \frac{\sqrt{7}}{3} u, \\
%
Y = \frac{7}{10} \mathbf{x}_1 + \frac{2}{10} \mathbf{x}_2 + \frac{\sqrt{47}}{10} e, \\
%
\end{cases}
%
\label{eqn:example_4}
%
\end{equation}
where $\mathbf{x}_1$ and $\mathbf{x}_2$ cause both $Y$ and $\mathbf{x}_3$, implying that $\mathbf{x}_3$ is unconditionally correlated to $Y$; $\mathbf{x}_1$, $\mathbf{x}_2$, $u$ and $e$ are independent; $\mathbf{x}_3$ is independent from $e$; $Y$ is independent from $u$; and all variables are standardized.
For large $n$, when the sample correlations are close to their population values, the sample marginal correlations to $Y$ are as follows (since all variables are standardized and the noise terms are uncorrelated with the regressors, each correlation equals the covariance of the systematic parts):
\begin{equation}
%
\begin{aligned}
%
\mathrm{corr} \left( \mathbf{x}_1, Y \right) = & \;0.7, \\
%
\mathrm{corr} \left( \mathbf{x}_3, Y \right) = & \;\mathrm{cov} \left( \frac{1}{3} \mathbf{x}_1 + \frac{1}{3} \mathbf{x}_2, \frac{7}{10} \mathbf{x}_1 + \frac{2}{10} \mathbf{x}_2 \right)
%
= 0.3, \\
%
\mathrm{corr} \left( \mathbf{x}_2, Y \right) = & \;0.2. \\
%
\end{aligned}
%
\end{equation}
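These population moments can be verified by representing each variable through its coefficients on the independent standardized components $(\mathbf{x}_1, \mathbf{x}_2, u, e)$; this coefficient bookkeeping is an illustrative device, not part of the DGP.

```python
import numpy as np

# coefficient vectors over the independent standardized basis (x1, x2, u, e)
x1 = np.array([1, 0, 0, 0])
x2 = np.array([0, 1, 0, 0])
x3 = np.array([1/3, 1/3, np.sqrt(7)/3, 0])
y  = np.array([7/10, 2/10, 0, np.sqrt(47)/10])

def corr(a, b):
    # population correlation: inner products give covariances of the components
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

r1, r2, r3 = corr(x1, y), corr(x2, y), corr(x3, y)   # 0.7, 0.2, 0.3

# after controlling for x1, the residual of Y is (2/10)x2 + (sqrt(47)/10)e
y_res = y - (7/10) * x1
pc2 = float(x2 @ y_res)                   # partial covariance of x2 with Y: 0.2
pc3 = float((x3 - (1/3) * x1) @ y_res)    # partial covariance of x3 with Y: 1/15
```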
Because $\mathbf{x}_2$ ranks below $\mathbf{x}_1$ and $\mathbf{x}_3$ in terms of marginal correlations to $Y$, the variable screening method must select all $3$ variables---including the redundant $\mathbf{x}_3$---to avoid omitting $\mathbf{x}_2$. The base strong rule and safe rule may also have difficulty rejecting $\mathbf{x}_3$. Since $\mathrm{corr} \left( \mathbf{x}_3, Y \right)>\mathrm{corr} \left( \mathbf{x}_2, Y \right)$, if lasso selects $\mathbf{x}_3$ and $\mathbf{x}_2$ and the strong (or safe) rule is used to reject $\mathbf{x}_3$, $\mathbf{x}_2$ will also be rejected.
Forward regression, solar, and lasso will not make the same error. Because (\ref{eqn:example_4}) does not violate the IRC, variable-selection consistency of forward regression, least angle regression, and lasso is assured by the theoretical results of \citet{zhang09} and \citet{zhaoyu06}. In forward regression, $\mathbf{x}_1$ will be included at the first stage. After controlling for $\mathbf{x}_1$, the residual of $Y$ is $\frac{2}{10} \mathbf{x}_2 + \frac{\sqrt{47}}{10} e$ and the residual of $\mathbf{x}_3$ is $\frac{1}{3} \mathbf{x}_2 + \frac{\sqrt{7}}{3} u$, so the partial covariances (for large $n$) of $\mathbf{x}_2$ and $\mathbf{x}_3$ with $Y$ are:
\begin{equation}
%
\begin{aligned}
%
\mathrm{cov} \left( \mathbf{x}_2, Y \vert \mathbf{x}_1 \right) = & \;\mathrm{cov} \left( \mathbf{x}_2, \frac{2}{10} \mathbf{x}_2 \right)
%
= 0.2, \\
%
\mathrm{cov} \left( \mathbf{x}_3, Y \vert \mathbf{x}_1 \right) = & \;\mathrm{cov} \left( \frac{1}{3} \mathbf{x}_2 + \frac{\sqrt{7}}{3} u, \frac{2}{10} \mathbf{x}_2 \right)
%
= 0.0667. \\
%
\end{aligned}
%
\end{equation}
Because $\mathrm{cov}(\mathbf{x}_2, Y \vert \mathbf{x}_1)>\mathrm{cov}(\mathbf{x}_3, Y \vert \mathbf{x}_1)$ (and the residual variances of $\mathbf{x}_2$ and $\mathbf{x}_3$ given $\mathbf{x}_1$ are comparable, so the partial correlations share the same ordering), forward regression will include $\mathbf{x}_2$, not $\mathbf{x}_3$, at the second stage. After controlling for both $\mathbf{x}_1$ and $\mathbf{x}_2$, the remaining variation in $Y$ is due to $e$, which $\mathbf{x}_3$ cannot explain. Thus, CV or BIC will terminate forward regression after the second stage and $\mathbf{x}_3$ will not be selected. Similarly, because solar relies on the average $L_0$ path, it will include $\mathbf{x}_1$ and $\mathbf{x}_2$ but not $\mathbf{x}_3$. $\blacksquare$
\bigskip
Essentially, the strong rule, safe rule, and variable screening struggle in Examples~2a and~2b because they rely on unconditional correlations to $Y$, whereas informative variables in regression analysis are defined in terms of conditional correlations. In many scenarios, unconditional and conditional correlations are aligned. However, when they are not, variable selection based on conditional correlations is better placed to select the informative variables.
\citet{fan2008sure} propose redeeming variable screening on $Y$ by first selecting variables with high unconditional correlations to $Y$ and then running a lasso of the residuals on the dropped variables. By contrast, solar completes variable selection in a single pass of conditional correlation ranking, reducing computational costs. Moreover, the \citet{fan2008sure} approach does not solve Example~2b type problems. At the first step, variables with high unconditional correlations to $Y$ will be selected, including the redundant $\mathbf{x}_3$. Selecting redundant variables will be more serious when $Y$ has multiple $\mathbf{x}_3$-like siblings and in complicated dependence structures where multicollinearity results in inaccurate estimates of the coefficients and standard errors in finite samples. In short, solar is likely to be more computationally efficient and better at variable selection in settings with complicated dependence structures.
\subsection{Robustness to the IRC \label{subsection:irc}}
Solar is more robust to different settings of the IRC than the lasso. The IRC is considered to be sufficient and almost necessary for accurate lasso variable selection \citep{zhang09}. Here, we ignore lasso rules and variable screening since, as discussed above, their selection accuracy may be compromised by a reliance on unconditional correlations to $Y$. We define the IRC as in \citet{zhang09}.
\begin{definition}[IRC]
Given $F \subset \left\{ 1, \ldots, p \right\}$, define $X_F$ to be the $n \times \left\vert F \right \vert$ matrix whose columns are the variables indexed by $F$ (the full set of informative variables). Define
%
\begin{align}
%
\mu \left( F \right) = & \max \left\{ \left\Vert \left( \left( X_F \right)^T X_F \right)^{-1} \left( X_F \right)^T \mathbf{x}_j \right\Vert_1 \; \vert \; \forall j \not\in F \right\}. \notag
%
\end{align}
%
Given a constant $1 \geqslant \eta > 0$, the \emph{strong} irrepresentable condition is satisfied if $\mu \left( F \right) \leqslant 1 - \eta$ and the \emph{weak} irrepresentable condition is satisfied if $\mu \left( F \right) < 1$.$\blacksquare$
\end{definition}
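A sketch of the sample analogue of $\mu(F)$. The toy design, in which the single out-of-$F$ column is an exact linear combination of the columns in $F$ with coefficients $0.3$ and $0.4$, is an illustrative assumption that makes the answer exactly $0.7$.

```python
import numpy as np

def mu(X, F):
    """mu(F) from the definition: the largest L1 norm, over out-of-F columns x_j,
    of the least-squares coefficients of x_j regressed on the columns in F."""
    XF = X[:, F]
    G = np.linalg.inv(XF.T @ XF)
    out = [j for j in range(X.shape[1]) if j not in F]
    return max(float(np.abs(G @ (XF.T @ X[:, j])).sum()) for j in out)

rng = np.random.default_rng(4)
n = 500
Z = rng.standard_normal((n, 2))         # the informative variables
x_red = 0.3 * Z[:, 0] + 0.4 * Z[:, 1]   # redundant: an exact linear combination
X = np.column_stack([Z, x_red])
m = mu(X, F=[0, 1])                     # exactly |0.3| + |0.4| = 0.7 here
```

Since $m = 0.7 \leqslant 1 - \eta$ with $\eta = 0.3$, the strong irrepresentable condition holds for this design.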
\smallskip
\noindent
\textbf{Example 3.} Modify the DGP in Example~2b to match the \citet{zhaoyu06} simulations. Thus, $n = 200$, $p = 50$, and $\{\mathbf{x}_0, \ldots, \mathbf{x}_4, \mathbf{x}_6, \ldots, \mathbf{x}_{50}\}$ are generated from a zero-mean, unit-variance multivariate Gaussian distribution, where all the correlation coefficients are $0.5$. The DGP of $Y$ and $\mathbf{x}_5$ is
\begin{equation}
%
\begin{cases}
%
\mathbf{x}_5 = \omega \mathbf{x}_0 + \omega \mathbf{x}_1 + \gamma\cdot \sqrt{1 - 2\omega^2} \\
%
Y = 2 \mathbf{x}_0 + 3\mathbf{x}_1 + 4 \mathbf{x}_2 + 5 \mathbf{x}_3 + 6 \mathbf{x}_4 + e \\
%
\end{cases}
%
\label{eqn:dgp_x5}
%
\end{equation}
where $\omega \in \mathbb{R}$, while $\gamma$ and $e$ are both standard Gaussian noise terms, independent of each other and of all the other variables. Compared with Example~2b, this DGP increases the challenge of accurate selection by increasing the number of redundant variables from 1 to 46, $\{\mathbf{x}_5, \ldots, \mathbf{x}_{50}\}$. This DGP also makes it straightforward to control the IRC through $\omega$, which affects the value of $\mu \left( F \right)$.
\begin{figure}
%
\centering
%
\subfloat[\label{fig:solar_ic_type-II1}$\omega = 1/4,\;\mu\left(F\right)=1/2$, lasso]
{\includegraphics[width=0.25\paperwidth]{acc_plot_top20_ic_25_False_lars-crop.pdf}}
%
\subfloat[\label{fig:solar_ic_type-II2}$\omega = 1/3,\;\mu\left(F\right)=2/3$, lasso]
{\includegraphics[width=0.25\paperwidth]{acc_plot_top20_ic_33_False_lars-crop.pdf}}
%
\subfloat[\label{fig:solar_ic_type-II3}$\omega = 1/2,\;\mu\left(F\right)=1$, lasso]
{\includegraphics[width=0.25\paperwidth]{acc_plot_top20_ic_5_False_lars-crop.pdf}}
\subfloat[\label{fig:solar_ic_type-II7}$\omega = 1/4,\;\mu\left(F\right)=1/2$, solar]
{\includegraphics[width=0.25\paperwidth]{acc_plot_top20_ic_25_False_solar-crop.pdf}}
%
\subfloat[\label{fig:solar_ic_type-II8}$\omega = 1/3,\;\mu\left(F\right)=2/3$, solar]
{\includegraphics[width=0.25\paperwidth]{acc_plot_top20_ic_33_False_solar-crop.pdf}}
%
\subfloat[\label{fig:solar_ic_type-II9}$\omega = 1/2,\;\mu\left(F\right)=1$, solar]
{\includegraphics[width=0.25\paperwidth]{acc_plot_top20_ic_5_False_solar-crop.pdf}}
%
\caption{Probability of including redundant variables (top 15) in simulation~2 ($\mathbf{x}_5$ in orange).}
\label{fig:solar_ic_type-II}
%
\end{figure}
In (\ref{eqn:dgp_x5}), the IRC only affects the redundant $\mathbf{x}_5$. Hence, we focus on the probability of incorrectly selecting $\mathbf{x}_5$ in 200 repetitions. By setting $\omega$ to either $1/4$, $1/3$, or $1/2$, the population value of $\mu \left( F \right)$ changes, respectively, to $1/2$, $2/3$, or $1$, gradually increasing the difficulty of rejecting the redundant $\mathbf{x}_5$.
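The mapping from $\omega$ to $\mu\left(F\right)$ can be verified directly from the population covariances. The helper below is illustrative and assumes $\mu\left(F\right)$ is the usual irrepresentable quantity $\left|C_{21}C_{11}^{-1}\operatorname{sign}(\beta)\right|$ for the redundant $\mathbf{x}_5$; since $\operatorname{Cov}(\mathbf{x}_5,\mathbf{x}_j)=1.5\omega$ for $j\in\{0,1\}$ and $\omega$ otherwise, the quantity reduces to $2\omega$.

```python
import numpy as np

def population_irc(omega):
    """Population irrepresentable quantity for x_5 in the Example 3 DGP."""
    # C11: covariance of the 5 informative variables (equicorrelated, rho = 0.5)
    C11 = np.full((5, 5), 0.5)
    np.fill_diagonal(C11, 1.0)
    # Cov(x_5, x_j): 1.5*omega for j in {0, 1}; omega for j in {2, 3, 4}
    c21 = np.array([1.5 * omega, 1.5 * omega, omega, omega, omega])
    sign_beta = np.ones(5)  # all true coefficients are positive
    return float(abs(c21 @ np.linalg.solve(C11, sign_beta)))
```

Evaluating at $\omega=1/4$, $1/3$, and $1/2$ returns $1/2$, $2/3$, and $1$, matching the values above.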
Figure~\ref{fig:solar_ic_type-II} displays the simulation results. When $\mu \left( F \right) = 1/2$, lasso wrongly includes $\mathbf{x}_5$ with probability $0.25$. By contrast, $\mathbf{x}_5$ is not among the top 15 variables selected by solar, implying a probability less than $0.1$. When $\mu \left( F \right)$ increases to $2/3$, the probability lasso includes $\mathbf{x}_5$ increases to around $0.3$. When $\mu \left( F \right)$ increases to $1$ in the population and strong IRC is violated, the probability lasso includes $\mathbf{x}_5$ rises to almost $0.5$. By contrast, the probability solar includes $\mathbf{x}_5$ is below $0.1$ even when $\mu\left(F\right)=1$. The results illustrate that solar is more robust to different settings of the IRC. $\blacksquare$
\section{Solar advantages over subsample variable selection\label{section:comp}}
In this section, we shift our focus to simulation. We demonstrate that: (i) solar offers significant improvements over lasso-type algorithms in terms of variable selection sparsity and accuracy; (ii)~replacing lasso with solar in bootstrap selection drastically reduces the computation load, measured by runtime. We choose the simulation settings so that, as far as possible, the comparisons are fair, representative, and generalizable. Our overall goal is to enable \emph{ceteris paribus} comparisons between solar and state-of-the-art lasso algorithms.
\subsection{Simulation competitors}
We consider a subset of lasso-type algorithms for comparison to solar. Firstly, some lasso modifications (e.g., fused lasso, grouped lasso) are designed to solve specific empirical problems that are not relevant to our paper. Secondly, it may be difficult to determine how much some variants outperform lasso.\footnote{For example, while \citet{jia2010model} show numerically that elastic net has slightly better variable-selection accuracy than lasso, they also find that ``when the lasso does not select the true model, it is more likely that the elastic net does not select the true model either'' (a point we verify in Section~\ref{section:application}). While simulations in \citet{zou2006adaptive} show that adaptive lasso outperforms lasso when $p/n<1$, it requires first computing the OLS estimates of all $\mathbf{x}_i$ coefficients, which is difficult when $p/n>1$.} Since both solar and lasso may be evaluated via least angle regression and coordinate descent, many other lasso modifications can be directly applied to solar, as discussed in Section~\ref{subsec:variant}. We do not consider information criteria for shrinkage parameter tuning. \citet{scikit-learn} points out that information criteria are over-optimistic and require a proper estimation of the degrees of freedom for the solution. Moreover, information criteria are derived asymptotically and tend to break down when the problem is badly conditioned (e.g., $p > n$).\footnote{See \url{https://scikit-learn.org/stable/modules/linear_model.html\#lasso} for details.}
Solar competes with $10$-fold, cross-validated lasso (denoted `lasso' for short), following the \citet{friedman2001elements} simulations showing that using 10 folds balances the bias-variance trade-off in CV error minimization. We set the number of generated subsamples ($K$) in Algorithm~\ref{algo:APE-lar} to $3$ since $K>3$ has only negligible effects. Because least-angle regression and coordinate descent yield similar selection results for solar and lasso, we combine the lars and coordinate descent results for solar and report only the runtime for lars lasso (see Supplementary Material B).
We also include bootstrap selection algorithms in the comparisons. A bootstrap selection repeats lasso multiple times across bootstrap subsamples to produce a set of averaged (or accumulated) selection results. Given the similarities among lasso bootstrap selection methods, we choose the \citet{bach2008bolasso} bootstrap lasso (\emph{bolasso}) as the competitor to solar. \citet{bach2008bolasso} proposes two bolasso algorithms, bolasso-H and bolasso-S; both are competitors in the simulations. Bolasso-H selects only variables that are selected in all bootstrap subsamples, i.e., the subsample selection frequency threshold is $f=1$. Bolasso-S selects variables that are selected in 90\% of the bootstrap subsamples ($f=0.9$). \citet{bach2008bolasso} finds that bolasso selection and prediction performance improves with the number of subsamples. To ensure a rigorous challenge for solar, we set the number of bootstrap subsamples in bolasso to $256$, the maximum in the \citet{bach2008bolasso} simulations. Moreover, \citet{meinshausen2010stability} points out that bolasso relies on choosing the $\lambda$ value on bootstrap subsamples: if $\lambda$ is unnecessarily large on more than $10\%$ of all bootstrap subsamples, bolasso-H and bolasso-S will omit informative variables. Because the optimal value of $\lambda$ may change substantially in high dimensions, we use $10$-fold cross-validation to tune $\lambda$ in each bootstrap subsample.
We also consider a bootstrap solar selection \emph{(bsolar)}, which executes solar on each bootstrap subsample and computes the selection frequency for each variable across all bootstrap subsamples. To ensure that any performance difference is due to replacing lasso with solar in the bootstrap selection system, bolasso and bsolar use the same subsample selection frequency threshold. Thus, we evaluate 2~versions of bsolar: bsolar-H ($f=1$) and bsolar-S ($f=0.9$). We use the notation bsolar-$m$H and bsolar-$m$S, where $m$ is the number of subsamples used to compute the selection frequency.
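Both families share the same final thresholding step: compute each variable's selection frequency across subsamples and keep those at or above $f$. A minimal sketch (the function name and toy inputs are illustrative):

```python
import numpy as np

def frequency_select(subsample_selections, p, f=0.9):
    """Keep variables whose subsample selection frequency is at least f."""
    counts = np.zeros(p)
    for sel in subsample_selections:
        counts[list(sel)] += 1          # one vote per subsample
    freq = counts / len(subsample_selections)
    return sorted(np.flatnonzero(freq >= f).tolist()), freq
```

Setting $f=1$ gives the `-H' rule and $f=0.9$ the `-S' rule for both bolasso and bsolar.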
\subsection{Simulation settings}
The DGP for the simulations is as follows. The $p$ covariates in $X \in \mathbb{R}^{n \times p}$ are generated from a zero-mean, multivariate Gaussian distribution, with all off-diagonal elements in the covariance matrix equal to~0.5. The first 5 variables in $X$ are informative; the remaining $p-5$ variables are redundant. The response variable $Y \in \mathbb{R}^{n \times 1}$ is:
\begin{equation}
Y = 2 \mathbf{x}_0 + 3 \mathbf{x}_1 + 4 \mathbf{x}_2 + 5 \mathbf{x}_3 + 6 \mathbf{x}_4 + e,
\label{eqn:pop_model}
\end{equation}
where $e\in \mathbb{R}^{n \times 1}$ is a standard Gaussian noise term, independent of each $\mathbf{x}_i$, $i=1,\ldots,p$. All data points are independently and identically distributed. Simulations are repeated 200 times with fixed Python random seeds across simulations.
We vary the data dimensions $p/n$ as follows. In the first block of simulations, $p/n$ approaches $0$ from above, corresponding to the classical $p<n$ setting. In the second block, $p/n$ approaches $1$ from above, corresponding to high dimension settings. In the third block, $p/n=2$ as $\log(p)/n$ slowly approaches $0$, corresponding to ultrahigh dimension settings, i.e., where $(p-n)\rightarrow\infty$.
We compare the performance of solar and lasso in terms of the sparsity and accuracy of variable selection and in terms of runtime. Sparsity is measured by the mean number of selected variables. Discovery accuracy is measured by the mean number of selected \emph{informative} variables. Purge accuracy is measured by the mean number of selected \emph{redundant} variables (equal to sparsity minus discovery accuracy). Runtime is measured by mean CPU time. The raw simulation results are available in the supplementary file.
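Per repetition, the selection measures reduce to simple set arithmetic; a minimal sketch (names are illustrative, with variables $0$--$4$ informative as in (\ref{eqn:pop_model})):

```python
def selection_metrics(selected, informative=frozenset(range(5))):
    """Return (sparsity, discovery accuracy, purge error) for one repetition."""
    selected = set(selected)
    sparsity = len(selected)                  # number of selected variables
    discovery = len(selected & informative)   # informative variables found
    purge_error = sparsity - discovery        # redundant variables kept
    return sparsity, discovery, purge_error
```

Averaging these three numbers over the 200 repetitions yields the entries reported in the tables.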
\subsection{Programming languages, parallelization, and hardware}
To ensure a credible comparison between solar and the lasso competitors, we choose the hardware and software settings to maximize the computation speed of lasso. We show that, even under the ideal computation environment for lasso, solar exhibits a substantial runtime advantage.
To maximize computation speed, we use \texttt{Numpy}, \texttt{Scipy}, and \texttt{Cython}---all well-known for performance and speed---to outsource all numerical and matrix operations to the Intel Math Kernel Library, a highly optimized C++/Fortran library for CPU numerical operations.
To reduce the possibility of CPU and RAM bottlenecks in parallel computing of lasso and bootstrap lasso, we code in Python rather than R. \citet{donoho201750} claims: ``R has the well-known reputation of being less scalable than Python to large problem sizes''. Given the simulations repeat solar, lasso, and bootstrap lasso many times to arrive at representative performance measures, choosing Python over R mitigates the impact of hardware limitations. Computations are executed with an Intel Xeon W-3245 CPU with 3.2GHz base frequency and 64GB RAM, further reducing the possibility of CPU-RAM bottlenecks.
To guarantee the programming quality of the lasso implementation, we source lasso and bootstrap lasso from the scikit-learn library \citep{scikit-learn} of efficient machine-learning tools.\footnote{Detail is available at \url{https://scikit-learn.org/stable/}.} Used widely in research and industry, scikit-learn also uses \texttt{Numpy}, \texttt{Scipy}, and \texttt{Cython} to delegate all numerical and matrix operations to Fortran/C++.
Lastly, to optimize computation and avoid large overheads, we implement multi-core parallelization. Each realization of lasso requires 10 repetitions of lars or coordinate descent to compute the CV error of each $\lambda$ value. Thus, we design a parallel architecture to assign one repetition per CPU core, maximizing the computation speed for lasso.
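The one-repetition-per-worker design can be sketched as follows; \texttt{fold\_error} is a hypothetical stand-in for one lars or coordinate descent evaluation on a held-out fold, and our actual implementation details differ:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_cv_error(lambdas, fold_error, n_folds=10):
    """Evaluate each CV fold on its own worker, then average per lambda."""
    with ThreadPoolExecutor(max_workers=n_folds) as ex:
        per_fold = list(ex.map(
            lambda k: [fold_error(k, lam) for lam in lambdas],
            range(n_folds)))
    # mean validation error across folds for each candidate lambda
    return [sum(col) / n_folds for col in zip(*per_fold)]
```

The $\lambda$ minimizing the averaged error is then used for one final full-sample fit.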
\subsection{Comparison of sparsity, accuracy, and time complexity \label{subsection:suml1}}
Table~\ref{table:sim_1} summarizes average selection performance.\footnote{Detailed histograms are available in the supplementary file.} While all competitors always include the 5 informative variables, solar outperforms lasso in terms of sparsity in every $p/n$ scenario, implying superior ability to limit the selection of redundant variables. Notably, as $p/n\rightarrow1$, lasso sparsity deteriorates while solar sparsity improves, further confirming the advantage of path averaging. While the sparsity of all competitors deteriorates as $\log(p)/n\rightarrow0$, solar maintains a clear advantage over lasso.
\begin{table}[ht]
\centering
\caption{Simulation results for sparsity and accuracy.\label{table:sim_1}}
\resizebox{0.98\textwidth}{!}{%
\renewcommand{\arraystretch}{0.7}
\begin{tabular}{l ... ... ...}
\toprule
& \multicolumn{3}{c}{$p/n\rightarrow0$}
& \multicolumn{3}{c}{$p/n\rightarrow1$}
& \multicolumn{3}{c}{$\log(p)/n\rightarrow0$} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10}
& \multicolumn{1}{c}{$\frac{100}{100}$} & \multicolumn{1}{c}{$\frac{100}{150}$} & \multicolumn{1}{c}{$\frac{100}{200}$}
& \multicolumn{1}{c}{$\frac{150}{100}$} & \multicolumn{1}{c}{$\frac{200}{150}$} & \multicolumn{1}{c}{$\frac{250}{200}$}
& \multicolumn{1}{c}{$\frac{400}{200}$} & \multicolumn{1}{c}{$\frac{800}{400}$} & \multicolumn{1}{c}{$\frac{1200}{600}$} \\
\midrule
\multicolumn{9}{l}{\emph{mean number of selected variables}}\\
\hspace*{5mm}lasso & 20.4 & 19.5 & 19.7 & 23.1 & 24.1 & 27.2 & 30.7 & 36.7 & 37.4 \\
\hspace*{5mm}solar & 10.5 & 9.3 & 9.1 & 10.7 & 9.8 & 8.7 & 11.4 & 16.1 & 18.5 \\
\\ [-8pt]
\hspace*{5mm}bolasso-S & 5.5 & 6.4 & 6.5 & 5.5 & 6.4 & 6.5 & 5.7 & 6.6 & 7.6 \\
\hspace*{5mm}bolasso-H & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
\\ [-8pt]
\hspace*{5mm}bsolar-3S/3H & 5.4 & 5.2 & 5.1 & 5.4 & 5.2 & 5.1 & 5.3 & 5.8 & 6 \\
\hspace*{5mm}bsolar-5S/5H & 5.2 & 5.1 & 5 & 5.2 & 5.1 & 5 & 5.1 & 5.2 & 5.4 \\
\hspace*{5mm}bsolar-10S & 5.2 & 5.1 & 5 & 5.2 & 5.1 & 5 & 5.1 & 5.2 & 5.3 \\
\hspace*{5mm}bsolar-10H & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5.1 \\
\\ [-8pt] \multicolumn{9}{l}{\emph{mean number of selected informative variables}}\\
\hspace*{5mm}lasso & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
\hspace*{5mm}solar & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
\hspace*{5mm}bolasso-S/H & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
\hspace*{5mm}bsolar-3S/3H/5S/5H/10S/10H & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
\bottomrule
\end{tabular}}
%
\end{table}
Table~\ref{table:sim_1} also reveals several advantages of solar over lasso in bootstrap selections.
\begin{itemize}
%
\item In terms of variable selection, bolasso-S stands out with the poorest sparsity while the others perform almost identically.
%
\item Solar and bsolar exhibit a considerable computational advantage. We show in Section~\ref{subsection:comp} that solar imposes less than $1/3$ of the lasso computation load, implying that bsolar-3 has the same computation load as lasso. Because bolasso requires 256 subsample lasso repetitions while bsolar-3 has the same computation load as one lasso realization, bsolar reduces subsample repetitions by 99\% relative to bolasso (assuming a time complexity measure like $O(n^2)$ and $p>n$ for lasso).
%
\item Similar findings apply to the comparison between bsolar and lasso stability selection. Bsolar, bolasso-H, and stability selection ($f>0.9$) return very similar sparsity and accuracy (on average selecting all informative variables and very rarely including a redundant variable). However, lasso stability selection implements $100$ subsample repetitions while bsolar-3 requires only $3$. Even though the bootstrap subsample size in stability selection is $n/2$ (substantially smaller than the bsolar bootstrap sample size of $n$), the time complexity analysis of \citet{meinshausen2010stability} still implies that bsolar-3 reduces the computation load by at least $67$-$82\%$ relative to lasso stability selection. Such a reduction in computation time is crucial in large-scale applications like DNA sequencing, natural language processing, image processing, and MRI neuroimaging, where each observation (image) often contains more than $10^6$ pixels as candidate variables and the total data size can easily exceed 1GB even with limited $n$. The reduction can be even more substantial if the application requires a lasso/solar variant such as ``group'', ``fused'', or ``elastic net'' (discussed in Section~\ref{subsection:variant}).
%
\end{itemize}
\subsection{Explanation of the efficiency discrepancy between bolasso and bsolar}
The efficiency of bsolar is due to its unique multi-layer variable ranking scheme. While bsolar and bolasso both generate bootstrap subsamples, bsolar uses a different bootstrap variable selection procedure. Specifically,
\begin{itemize}
\item solar executes Algorithm~\ref{algo:APE-lar} (or \ref{algo:APE-cd}) on each bootstrap subsample and ranks variables using the average $L_0$ path, which we call the \emph{internal ranking}. The internal ranking identifies the strongest signals on each bootstrap subsample.
\item bsolar collates the internal ranking results to produce an overall ranking, which we call the \emph{external ranking}. The external ranking identifies the strongest signals on the majority of bootstrap subsamples.
\end{itemize}
The multi-layer method has several advantages over the usual one-layer ranking methods, such as bootstrap lasso and lasso stability selection \citep{fan2008sure, hall2009usingb, hall2009using, li2012robust, li2012feature}.
\begin{itemize}
%
\item First, one-layer methods rank variables on the whole sample. By contrast, the internal ranking uses the average $L_0$ path, which, as discussed in Section~2.1, improves robustness to multicollinearity, noise, and sample size.
%
\item Second, as shown in Section~\ref{section:example}, internal ranking avoids issues caused by complicated dependence structures that other (unconditional) ranking methods cannot.
%
\item Most important, multi-layer ranking reduces the number of bootstrap repetitions without compromising accuracy. One-layer methods select variables immediately after ranking. Our method performs a second external ranking that, by detecting persistent signals, is more tolerant of subsample variation: if $\mathbf{x}_i$ is wrongly selected or omitted in the internal ranking, there is still a large probability that the mistake will be corrected in the external ranking. While stability selection and bolasso require, respectively, 100 and 256 repetitions to average out lasso selection issues, bsolar requires only 3-10 bootstrap repetitions to confirm the solar variable ranking.
\end{itemize}
\begin{table}[!htb]
\caption{Subsample variable selection frequencies for bolasso and bsolar-10.}
\label{table:subsample_select_freq}
\begin{minipage}[t]{.55\linewidth}
\small
\subfloat[bolasso]{%
\label{table:subsample_select_freq_1}
\renewcommand{\arraystretch}{0.7}
\begin{tabular}{cl}
\toprule
frequency & variables \\
\midrule
$\geqslant 1.00$ & $\mathbf{x}_4, \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0$ \\
$\geqslant 0.88$ & $\mathbf{x}_4, \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0, \mathbf{x}_{28}$ \\
$\geqslant 0.84$ & $\mathbf{x}_4, \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0, \mathbf{x}_{28}, \mathbf{x}_{71}$\\
$\geqslant 0.76$ & $\mathbf{x}_4, \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0, \mathbf{x}_{28}, \mathbf{x}_{71}, \mathbf{x}_{91}$\\
$\geqslant 0.70$ & $\mathbf{x}_4, \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0, \mathbf{x}_{28}, \mathbf{x}_{71}, \mathbf{x}_{91}, \mathbf{x}_{94}$\\
$\geqslant 0.69$ & $\mathbf{x}_4, \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0, \mathbf{x}_{28}, \mathbf{x}_{71}, \mathbf{x}_{91}, \mathbf{x}_{94}, \mathbf{x}_{70}, \mathbf{x}_{40}$ \\
$\vdots$ & $\vdots$ \\
\bottomrule
\end{tabular}}
\end{minipage}
\begin{minipage}[t]{.5\linewidth}
\small
\subfloat[bsolar-10]{%
\label{table:subsample_select_freq_2}
\renewcommand{\arraystretch}{0.7}
\begin{tabular}{cl}
\toprule
frequency & variables \\
\midrule
$\geqslant 1.00$ & $\mathbf{x}_4, \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0$ \\
$\geqslant 0.10$ & $\mathbf{x}_4, \mathbf{x}_3, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0, \mathbf{x}_{91}, \mathbf{x}_{71}$ \\
$= 0$ & all other variables \\
\bottomrule
\end{tabular}
}
\end{minipage}
\end{table}
Furthermore, as shown in Table~\ref{table:subsample_select_freq}, bsolar produces a shorter and more accurate list of subsample variable selection frequencies. Table~\ref{table:subsample_select_freq_1} breaks down the subsample selection frequency list from 256 subsamples for one bolasso realization with $p/n=100/200$. Due to the length of the list, we report only subsample selection frequencies $\ge0.69$. With only one layer of ranking, bolasso is unable to separate informative from redundant variables even with 256 subsample repetitions. The frequency discrepancy for bolasso between the highest-ranking redundant ($\mathbf{x}_{28}$) and the lowest-ranking informative variable ($\mathbf{x}_0$) is only $0.12$. By contrast, Table~\ref{table:subsample_select_freq_2} shows bsolar-10 returns a much shorter list with a frequency discrepancy between the highest-ranking redundant ($\mathbf{x}_{91}$) and the lowest-ranking informative variable ($\mathbf{x}_0$) of $0.9$. To increase the discrepancy between the lowest ranked informative and highest ranked redundant variables for bolasso, \citet{bach2008bolasso} suggests raising the number of subsample repetitions. However, increasing repetitions will raise the bolasso computation load in high-dimensional spaces, increasing the advantage of bsolar.
\subsection{Computation time comparison \label{subsection:comp}}
The time complexity of an algorithm indicates only how computation time changes as data size (parameterized by $n$, $p$, and $K$) increases. Time complexity analysis omits many other computation parameters (such as hardware specification), so it may substantially underestimate the computation time difference between two algorithms in real-world problems. Hence, in this section we compare computation efficiency in terms of CPU times.
Since the computation load for lars or coordinate descent on a given sample is fixed, we may use the number of lars or coordinate descents to approximate the computation load for solar and lasso. For comparison, we compute solar with $K$ subsamples and lasso with $K$-fold cross-validation. As shown in Algorithm~\ref{algo:APE-lar} and \ref{algo:APE-cd}, solar computes one lars or coordinate descent on each subsample $(X^k, Y^k)$, which implies $K=3$ lars or coordinate descents to compute $\widehat{q}$ and one more pass to compute $c^*$ for variable selection. Lasso requires computing $K=10$ lars or coordinate descents to optimize the tuning parameter and, given the optimal tuning parameter, one more pass on the full sample to select variables. Thus, the solar computation load is less than $1/3$ that of lasso.
Given the computation loads for lasso and solar, we can work out the differences between bolasso and bsolar using the number of subsample repetitions (SR). Bolasso repeats lasso $256$ times, while bsolar-3 repeats solar only 3 times to obtain similar sparsity; since solar imposes less than $1/3$ of the lasso computation load, bsolar-3 has approximately the same computation load as a single lasso realization.
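The pass counting above can be made explicit. The helper below is illustrative; the counts follow the accounting in this subsection, and the bolasso total matches the $2{,}816$ repetitions behind Table~\ref{table:sim_load}:

```python
def descent_passes(method, K_solar=3, K_cv=10, n_boot=256):
    """Lars / coordinate-descent passes per realization, as counted in the text."""
    solar = K_solar + 1   # K subsample passes plus one pass to select via c*
    lasso = K_cv + 1      # K CV folds plus one full-sample pass
    return {"solar": solar, "lasso": lasso, "bolasso": n_boot * lasso}[method]
```

One solar realization therefore costs 4 passes against 11 for lasso, while bolasso accumulates $256 \times 11 = 2{,}816$.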
\begin{table}[ht]
\centering
\caption{Simulation results for parallel computation time (mean runtime in seconds).\label{table:sim_load}}
\smallskip
\resizebox{0.98\textwidth}{!}{%
\renewcommand{\arraystretch}{0.7}
\begin{tabular}{l ... ... ...}
\toprule
& \multicolumn{3}{c}{$p/n\rightarrow0$}
& \multicolumn{3}{c}{$p/n\rightarrow1$}
& \multicolumn{3}{c}{$\log(p)/n\rightarrow0$} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10}
& \multicolumn{1}{c}{$\frac{100}{100}$} & \multicolumn{1}{c}{$\frac{100}{150}$} & \multicolumn{1}{c}{$\frac{100}{200}$} & \multicolumn{1}{c}{$\frac{150}{100}$} & \multicolumn{1}{c}{$\frac{200}{150}$} & \multicolumn{1}{c}{$\frac{250}{200}$} & \multicolumn{1}{c}{$\frac{400}{200}$} & \multicolumn{1}{c}{$\frac{800}{400}$} & \multicolumn{1}{c}{$\frac{1200}{600}$} \\
\midrule
bsolar-3 & 0.05 & 0.07 & 0.08 & 0.06 & 0.08 & 0.12 & 0.32 & 0.51 & 1.04 \\
bolasso (lars, 256 SR) & 9.52 & 12.49 & 10.61 & 10.01 & 13.92 & 19.72 & 23.10 & 184.59 & 502.56 \\
bolasso (cd, 256 SR) & 13.49 & 60.51 & 60.35 & 13.92 & 16.85 & 20.17 & 27.73 & 100.58 & 308.12 \\
\bottomrule
\end{tabular}}
%
\end{table}
Table~\ref{table:sim_load} shows the average runtimes for the simulations. Generally speaking, bsolar-3 has a much shorter runtime than bolasso. When $n \times p$ is small (the first 5 columns), the bsolar-3 runtime is roughly $0.5$-$1\%$ of the bolasso runtime (assuming bolasso is solved by lars), consistent with the time complexity estimation. However, as $n$ and $p$ increase rapidly, parallel computation of bolasso becomes substantially more difficult to coordinate. This is primarily because our CPU must simultaneously generate 10-16 (the number of CPU cores) data matrices $X\in \mathbb{R}^{1200 \times 600}$, each of which must then be bootstrapped into 256 sub-matrices $X_{sub}\in \mathbb{R}^{1080 \times 600}$ for each CPU core to read again in parallel. This volume of data is more than our CPU can read from RAM in a single pass. As a result, the runtime differences become even more pronounced as $p$ and $n$ increase. The 256 subsample repetitions (a total of 2,816 lars or coordinate descent repetitions) render the bolasso selection algorithms computationally infeasible even with moderate $p$ and $n$. By contrast, bsolar-3 requires only 9 realizations of lars or coordinate descent. Owing to its lighter computational load and CPU usage, bsolar-3 parallel computing is much easier to coordinate, and the computation time difference would be even more substantial if the number of CPU cores were below $8$.
\subsubsection{Comparison with previous lasso computation research}
We demonstrate the solar computation advantages in bootstrap selection in two steps. First, we show that, for given computation resources, our bolasso package almost attains the theoretical maximum speedup for lasso parallelization. Second, we show that bsolar is substantially faster than our bolasso package. Thus, for given computation resources, the speed of bsolar substantially exceeds the theoretical maximum speed of bolasso.
Given the same convergence criteria (tolerance for optimization and number of iterations), number of folds for CV ($K=10$), and number of $\lambda$s in the grid search (100), the time complexity of lasso is mostly determined by $n$, $p$, and pairwise correlations among the covariates ($corr$). For the purposes of comparison, we consider a Gaussian regression with $p/n=1000/100$ and $corr=0.5$.
\begin{itemize}
%
\item With a 2.8GHz frequency, 2-core Intel Xeon CPU,
\citet[Table 1]{friedman2010regularization} method reports an average runtime of 0.07 seconds for one pathwise coordinate descent realization (with covariance pre-computed for updating). The \citet{friedman2010regularization} package is coded in R with all numerical computations executed in Fortran/C++.
%
\item Using an Intel Xeon W-3245 CPU with 3.2GHz frequency and 16 cores, the average runtime for the coordinate descent bolasso package is 41.92 seconds (with covariance pre-computed automatically), accounting for 256 realizations of 10-fold, cross-validated lasso (namely 2,816 pathwise coordinate descent realizations). Thus, the average runtime is 0.014 seconds per pathwise coordinate descent.
%
\end{itemize}
\noindent
Thus, with a similar CPU frequency and 14 additional cores, our lasso implementation produces an average speedup of $0.07/0.014=5.0$ times over \citet{friedman2010regularization} for each pathwise coordinate descent repetition.
Our code and the \citet{friedman2010regularization} code use the same design: 10 (parallelizable) pathwise coordinate descent repetitions to optimize $\lambda$, followed by a final (non-parallelizable) step to compute $\beta$. Roughly 11\% of the total computation (I/O, interpretation of code into C++/Fortran, data generation, matrix manipulation, and the final step to compute $\beta$) is not parallelizable. Given $n$ and $p$, the maximum speedup according to Amdahl's law is:
\begin{equation}
%
\frac{1}{\rho + (1-\rho)/s} = \frac{1}{0.11 + (1-0.11)/(16/2)} \approx 4.5,
%
\end{equation}
where $\rho$ is the proportion of computation that is not parallelizable and $s$ is the computation speedup for the parallelizable proportion (i.e., the core number multiple). Given that our CPU base frequency is also higher than that of \citet{friedman2010regularization} ($3.2$GHz over $2.8$GHz), we adjust the maximum speedup by the frequency multiple ($3.2/2.8$), resulting in a final maximum speedup of $4.5 \times 3.2/2.8 \approx 5.2$, or 4\% faster than our speedup of 5.0. Hence, given the core number and CPU frequency, our coordinate descent bolasso package achieves approximately 96\% of the maximum possible speedup.
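The speedup arithmetic can be checked directly (helper names are illustrative; the values quoted above are rounded to one decimal):

```python
def amdahl_speedup(rho, s):
    """Maximum speedup with serial fraction rho and parallel-part speedup s."""
    return 1.0 / (rho + (1.0 - rho) / s)

def frequency_adjusted_max(rho=0.11, cores_ratio=16 / 2, ghz_ratio=3.2 / 2.8):
    # adjust the Amdahl bound by the base-frequency multiple
    return amdahl_speedup(rho, cores_ratio) * ghz_ratio
```

With $\rho=0.11$ and $s=8$ the bound is about $4.52$, and the frequency-adjusted bound is about $5.17$, against our observed speedup of $5.0$.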
\begin{figure}[ht]
%
\centering
%
\includegraphics[width=0.8\linewidth]{runtime.pdf}
%
\caption{Average runtime (per pathwise coordinate descent) comparison for different $X$ matrix sizes.}
%
\label{fig:runtime}
%
\end{figure}
Figure~\ref{fig:runtime} plots average runtime against the size ($n \times p$) of the $X$ matrix. As matrix size increases, the optimized bolasso package runtime rises exponentially while the bsolar runtime increases only linearly. Thus, as illustrated in Figure~\ref{fig:runtime}, bsolar easily outmatches the theoretical maximum speedup for parallelizing lasso-type estimators as $n$ and $p$ increase, confirming the bsolar-3 advantage for high-dimensional data.
\subsubsection{Implication of solar computation advantages}
The efficiency of bsolar computation solves the issue of choosing a bootstrap variable selection threshold. \citet{bach2008bolasso} and \citet{meinshausen2010stability} claim that choosing a predefined value for the selection threshold ($f=1$, $0.9$, or $f\in\left[0.6,0.9\right]$) will return similarly sparse results. However, \citet{bach2008bolasso} and \citet{huang2014stat} show that predefined values may cause problems for variable selection. With $p/n=50/500$, $\mathrm{sd}(e)=3$, and true signal strength around $2$, \citet{huang2014stat} finds a 50\% false discovery rate with bootstrap selection methods, suggesting that the threshold still requires data-driven tuning. Moreover, the large number of bootstrap repetitions makes traditional parameter tuning methods (such as cross-validation) computationally unaffordable for stability selection or bolasso. By contrast, bsolar remains computationally efficient even with large $p$ and $n$: its efficiency makes it possible to tune $f$ by cross-validation with a runtime of less than $6$ seconds for $p/n=1200/600$.
\section{Real-world data: Sydney house price prediction\label{section:application}}
To demonstrate that the improvements from solar are empirically feasible, we apply solar to real-world data. The real-world data reflect both the $p/n\rightarrow0$ scenarios as well as the challenging IRC settings, complicated dependence structures, and grouping effects typical of data in the social sciences.
The database is assembled from multiple sources. The primary source comprises real estate market transaction data for 11,974 Sydney, Australia, houses sold in 2010, including price and house attribute information (GIS coordinates, property address, bedrooms, bathrooms, car spaces, etc.). Each property is GIS-matched with: 2011 census data by Statistical Area Level 1 (the smallest census area in Australia, comprising at most 200 people or 60 households); 2010 and 2011 crime data by suburb; 2010 geo-spatial information on topology, climate, pollution, and aircraft noise; Google Maps data; 2009 primary and secondary school data; and 2010 Sydney traffic and public transport data (bus routes, train stations, and ferry wharfs). We predict house price with a linear model.
Using an ensemble of Bayes network learning algorithms for data cleaning, we reject variables with both very low conditional and unconditional correlations to house price. The remaining variables are listed in the first column of Table~\ref{table:house_variable}.\footnote{Due to the 200GB size of the database, we include only the data for these variables in the supplementary file.} The 57 variables fall into 5 broad categories: house attributes, distance to key locations (public transport, shopping, etc.), neighbourhood socioeconomic data, localized administrative and crime data, and local school quality. Pairwise correlations among all 57 covariates indicate, not surprisingly, severe multicollinearity and grouping effects, implying a harsh IRC setting.\footnote{Correlations and IRC are also reported in supplementary files.} Thus, heuristically increasing the value of the tuning parameter in lasso-type estimators (e.g., using the one-sd or the `elbow' rule) is unlikely to be useful since it may trigger further grouping effects and the random dropping of variables.
Table~\ref{table:house_variable} shows the selection comparison across the elastic net, lasso, and solar. With all variables in linear form, both lasso and elastic net lose sparsity, likely due to the complicated dependence structures and severe multicollinearity in the data, consistent with \citet{jia2010model}. By contrast, solar returns a much sparser model, with only $9$ variables selected from $57$. Very similar results are found with the variables in log form, suggesting that solar possesses superior selection sparsity and robustness to a change in functional form. More importantly, solar variable selection outperforms the lasso-type estimators in terms of the balance between sparsity and prediction power. While pruning 25-48 variables from the elastic net and lasso selections, the post-selection regression $\mathrm{R}^2$ for solar falls by just 3-5\%.
\section{Conclusion}
In this paper we propose the solar (subsample-ordered least-angle regression) algorithm for high-dimensional data. Solar constructs solution paths using the $L_0$ norm and averages the resulting solution paths across subsamples, reducing sensitivity to high dimensionality while improving variable selection stability, efficiency, and accuracy. We prove that $L_0$ path averaging separates informative from redundant variables, that solar variable selection is consistent, and that the probability that solar omits weak signals is controllable for finite sample size.
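The core subsample-and-average idea can be illustrated with a deliberately simplified, stdlib-only sketch. A greedy forward-selection ordering stands in here for the actual least-angle solution path, and all function names, the subsample fraction, and the number of subsamples are hypothetical illustration choices, not part of the solar specification:

```python
import random

def forward_order(X, y):
    """Greedy forward selection: repeatedly pick the variable with the
    largest squared correlation with the current residual (a crude
    stand-in for a least-angle-style solution path)."""
    p = len(X[0])
    resid = list(y)
    remaining, order = set(range(p)), []
    while remaining:
        def score(j):
            xj = [row[j] for row in X]
            num = sum(a * b for a, b in zip(xj, resid))
            den = sum(a * a for a in xj) or 1.0
            return num * num / den
        j = max(remaining, key=score)
        # crude one-variable least-squares update of the residual
        xj = [row[j] for row in X]
        beta = sum(a * b for a, b in zip(xj, resid)) / (sum(a * a for a in xj) or 1.0)
        resid = [r - beta * a for r, a in zip(resid, xj)]
        order.append(j)
        remaining.discard(j)
    return order

def solar_like_ranking(X, y, n_sub=20, frac=0.8, seed=0):
    """Average each variable's entry position over random subsamples:
    a lower average rank means the variable enters the path early and
    consistently, i.e. is a stable, informative candidate."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    totals = [0.0] * p
    for _ in range(n_sub):
        idx = rng.sample(range(n), int(frac * n))
        order = forward_order([X[i] for i in idx], [y[i] for i in idx])
        for rank, j in enumerate(order):
            totals[j] += rank
    return [t / n_sub for t in totals]
```

Variables with a low average entry rank across subsamples are the stable, informative candidates; the averaging is what damps the sensitivity of any single solution path to high dimensionality.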
Through simulations, examples, and real-world data, we demonstrate that, without any increase in computation load, solar yields substantial improvements over lasso in terms of the sparsity, stability, and accuracy of variable selection. We also find that solar largely avoids selection of redundant variables and rejection of informative variables in the presence of complicated dependence structures and harsh settings of the irrepresentable condition while conserving residual degrees of freedom for hypothesis testing. Relative to bootstrap lasso, bootstrapping solar improves selection sparsity and ranking accuracy and, for given computation resources, is substantially faster.
Detection of weak signals is a potential weakness of solar, although relative to lasso the difference is very slight. Nonetheless, we are working on an extension to solar, the double-bootstrap solar (DBsolar), which, if early results are any indication, promises to enable solar to detect weak signals accurately.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
\label{sec:introduction}
The CERN LHC has provided sufficient data to probe a large variety of theories beyond the standard model (SM).
Among these, theories based on supersymmetry (SUSY)~\cite{Wess,Golfand,Volkov,Chamseddine,Kane,Fayet,Barbieri,Hall,Ramond}, which predict the existence of a spectrum of supersymmetric partners to the SM particles, are strongly motivated. Scenarios with nondegenerate supersymmetric particle spectra, with cross sections as low as ${\approx}1$~fb, have been explored in many final states; however, as yet no evidence for SUSY has been found.
The focus of many current searches is so-called natural SUSY~\cite{Barbieri:2009ev,Papucci:2011wy}, in which the Higgs boson mass can be stabilized without excessive fine-tuning. In natural SUSY scenarios, the Higgsino mass parameter $\mu$ is required to be of the order of 100\GeV, and the lightest top squark $\PSQt_1$, the gluino $\PSg$, and the lightest bottom squark $\PSQb_1$ are constrained to have masses around the TeV scale, while the masses of the other superpartners are unconstrained and can be much heavier and beyond the LHC reach. The possibility that the top squark could be light has motivated several searches by the ATLAS and CMS collaborations~\cite{Aad:2013ija,Aad:2014qaa,Aad:2014bva,Aad:2014kva,Aad:2014kra,Aad:2014mha,Aad:2014lra,Chatrchyan:2013xna,Chatrchyan:2013mya,Khachatryan:2014doa,Khachatryan:2015vra,Khachatryan:2015pot} for this sparticle. In general, the sensitivity of these searches diminishes for direct top squark production when the mass of the top squark approaches that of the lightest supersymmetric particle (LSP), which is assumed to be the lightest neutralino $\PSGczDo$. For searches that specifically target the decay $\PSQt_1 \to \cPqt \PSGczDo$, the sensitivity is reduced when the mass difference $\Delta m$ between the top squark and the LSP is comparable to the top quark mass $m_\cPqt$.
Here, we focus on two types of scenarios: the so-called compressed spectrum in which $\Delta m$ is very small, of the order of a few GeV to tens of GeV (\eg~\cite{Martin:2007gf, Martin:2007hn, Carena:2008mj}),
and scenarios where $\Delta m \approx m_\cPqt$. In the compressed case, the top squark decays to the LSP and soft decay products, which are difficult to detect. When $\Delta m \approx m_\cPqt$, the signature of top squark production is very similar to that of $\cPqt\cPaqt$ production, which has a much higher cross section.
Therefore, to be sensitive to such processes, we cannot rely solely on the top squark decay products.
Possibilities to discriminate the signal are tagging the top squark events based on a jet from initial-state radiation (ISR) using the monojet signature~\cite{Khachatryan:2015wza,Aad:2014nra}, or searching for top squark events in cascade decays of heavier particles, such as the heavy top squark decays $\PSQt_2 \to \PSQt_1 + \PH/\cPZ$~\cite{Khachatryan:2014doa}, or from gluino decays.
In this paper, we search for the challenging top squark final states described above in gluino decays.
Specifically, we consider gluino-pair production where each gluino decays to a top squark and a top quark. We consider the scenarios in which the gluino has a mass of around 1\TeV and the lighter top squark has a mass of a few hundred\GeV. Because of the significant mass gap between the gluino and the top squark, the top quark from the gluino decay will receive a large boost. The top squark decays to $\cPqc \PSGczDo$ for a small $\Delta m$, or to $\cPqt \PSGczDo$ for $\Delta m \approx m_\cPqt$, as
in the targeted searches for $\PSQt_1 \to \cPqt \PSGczDo$ mentioned above. The analysis described in
this paper is especially sensitive to the decay $\PSQt_1 \to \cPqc \PSGczDo$. Consequently,
this analysis provides new information about the viability of natural SUSY.
The gluino-pair production processes described above, with $\PSQt_1 \to \cPqc \PSGczDo$ or $\PSQt_1 \to \cPqt \PSGczDo$, can be described using simplified model spectra~\cite{Alves:2011wf,Alves:2011sq,Alwall:2008ag,Alwall:2008va,ArkaniHamed:2007fw,Chatrchyan:2013sza}. Specifically, the models T1ttcc and T1t1t, shown in Fig.~\ref{fig:diagrams}, are used in the design of the analysis and in the interpretation of the results.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Figure_001-a}
\includegraphics[width=0.45\textwidth]{Figure_001-b}
\caption{Diagrams for the T1ttcc (left panel) and T1t1t (right panel) simplified model spectra. Here, an asterisk ($^*$) denotes an antiparticle of a supersymmetric partner.
\label{fig:diagrams}}
\end{figure*}
In light of the discussion above, it is expected that boosted top quarks are a promising signature of new physics involving a massive gluino decaying to a relatively light top squark.
Boosted objects with high transverse momentum, $\pt$, are characterized by merged decay products separated by $ \Delta R \approx 2 m / \pt $, where $m$ denotes the mass of the decaying particle.
For the top quark decay products to be merged within the typical jet size of $\Delta R = 0.5$, a top quark momentum of ${\approx}700$\GeV is required, a value difficult to reach with proton-proton collisions at 8\TeV.
Therefore, in order to increase the signal efficiency by entering the boosted regime,
we focus on $\PW$ bosons from top quark decays, which require a more accessible $\pt$ of around $300$\GeV.
The targeted final state therefore contains boosted $\PW$ bosons and jets originating from $\cPqb$ quarks ($\cPqb$ jets) from top quark decays, light quark jets from unmerged hadronic $\PW$ boson decay products or charm quarks, and missing energy from the neutralinos.
Hadronically decaying boosted $\PW$ boson candidates are identified using the pruned jet mass~\cite{Ellis:2009su,Ellis:2009me,Chatrchyan:2013vbb} and a jet substructure observable
called N-subjettiness~\cite{Thaler:2010tr}.
The razor kinematic variables $\MR$ and $R^2$~\cite{rogan} are used to discriminate the processes with new heavy particles from SM processes in final states with jets and missing transverse energy. To increase the sensitivity to new physics, we perform the analysis by partitioning the ($\MR$,$R^2$) plane into multiple bins.
This paper is organized as follows. The razor variables are introduced in Section~\ref{sec:razor}. Section~\ref{sec:cms} gives a brief overview of the CMS detector, while Section~\ref{sec:triggerdatasets} covers the triggers, data sets, and Monte Carlo (MC) simulated samples used in this analysis.
Details of the object definitions and event selection are given in Sections~\ref{sec:eventreco} and \ref{sec:selection}, respectively.
Section~\ref{sec:Wtag_SF} describes the data/simulation scale factors that are needed to correct the modeling of the boosted $\PW$ boson tagger.
The statistical analysis is explained in Section~\ref{sec:likelihood}, and Section~\ref{sec:systematics} covers the systematic uncertainties. Finally, our results and their interpretation are presented in Section~\ref{sec:interpretation}, followed by a summary in Section~\ref{sec:summary}.
\section{Razor variables \label{sec:razor}}
The razor variables $\MR$ and $R^2$ \cite{rogan} are useful for
describing a signal arising from the pair production of heavy particles, each of which
decays to a massless visible particle and a massive invisible particle.
In the two-dimensional razor plane, a signal with heavy particles is expected to appear as a peak on top of smoothly falling SM backgrounds, which can be empirically described using exponential functions.
For this reason, the razor variables are robust discriminators for SUSY signals in which supersymmetric particles are pair produced and decay to SM particles and the LSP. For the simple case in which the final state
comprises two visible particles, \eg jets, the razor variables are defined using the momenta $\vec{p}^{\,\mathrm{j}_1}$ and $\vec{p}^{\,\mathrm{j}_2}$ of the two jets as
\begin{eqnarray}
\label{eq:MRstar}
\MR & \equiv &
\sqrt{
(|\vec{p}^{\,\mathrm{j}_1}|+|\vec{p}^{\,\mathrm{j}_2}|)^2 -
(p^{\mathrm{j}_1}_z+p^{\mathrm{j}_2}_z)^2}
\; ,\\
M_\mathrm{T}^\mathrm{R} & \equiv & \sqrt{ \frac{\ETm(\pt^{\mathrm{j}_1}+\pt^{\mathrm{j}_2}) -
\ptvecmiss {\cdot}
(\ptvec^{\,\mathrm{j}_1}+\ptvec^{\,\mathrm{j}_2})}{2}} \; ,
\label{eq:MRT}
\end{eqnarray}
where $p^{\mathrm{j}_{1,2}}_z$ are the $z$ components of the $\mathrm{j}_{1,2}$ momenta, \ptvecmiss is the missing transverse momentum, computed as the negative vector sum of the transverse momenta of all observed particles in the event, and \ETm is its magnitude (see Section~\ref{sec:eventreco} for a more precise definition).
Given $\MR$ and the transverse quantity $M^\mathrm{R}_\mathrm{T}$, the dimensionless razor ratio $R$ is defined as
\begin{eqnarray}
R \equiv \frac{M_\mathrm{T}^\mathrm{R}}{\MR}
\; .
\label{eq:R2}
\end{eqnarray}
If the heavy mother particle is denoted by $G$ and the heavy invisible daughter particle is denoted by $\chi$, the peak of the $\MR$ distribution and end point of the $M_\mathrm{T}^\mathrm{R}$ distribution are both estimates of the quantity $(m_\mathrm{G}^2 - m_{\chi}^2)/m_\mathrm{G}$.
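As a concrete illustration, the definitions in Eqs.~(\ref{eq:MRstar})--(\ref{eq:R2}) can be evaluated directly from the two megajet momenta and \ptvecmiss. The following sketch uses our own naming and conventions, not the analysis code:

```python
import math

def razor_variables(j1, j2, met):
    """Compute the razor variables M_R, M_T^R, and R^2.

    j1, j2 : (px, py, pz) momenta of the two megajets
    met    : (mex, mey) missing transverse momentum vector
    """
    p1 = math.sqrt(sum(c * c for c in j1))
    p2 = math.sqrt(sum(c * c for c in j2))
    # M_R: longitudinally boost-invariant mass scale
    mr = math.sqrt((p1 + p2) ** 2 - (j1[2] + j2[2]) ** 2)
    pt1 = math.hypot(j1[0], j1[1])
    pt2 = math.hypot(j2[0], j2[1])
    etmiss = math.hypot(met[0], met[1])
    # M_T^R: transverse counterpart built from ETmiss and the megajet pT's
    mtr = math.sqrt((etmiss * (pt1 + pt2)
                     - (met[0] * (j1[0] + j2[0]) + met[1] * (j1[1] + j2[1]))) / 2.0)
    r2 = (mtr / mr) ** 2
    return mr, mtr, r2
```

For two back-to-back megajets with no \ptvecmiss, $M_\mathrm{T}^\mathrm{R}$ and $R^2$ vanish, as expected for a well-measured QCD dijet topology.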
When the decay chains are complicated, producing multiple particles in the final state,
the razor variables can still be meaningfully calculated by reducing the final state to a two-``megajet'' structure.
The megajet algorithm aims to cluster visible particles coming from the decays of the same heavy supersymmetric particle. The razor variables $\MR$ and $R^2$ are computed using the four-momenta of the two megajets, where the megajet four-momentum is the sum of the four-momenta of the particles comprising the megajet. Studies show that, of all the possible clusterings, the one that minimizes the sum of the squared invariant masses of the megajets maximizes the efficiency with which particles are matched to their heavy supersymmetric particle ancestor~\cite{razorPRL}.
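A minimal brute-force version of this megajet assignment can be sketched as follows. The exponential scan over partitions is acceptable for the jet multiplicities involved; naming and conventions are ours, not the analysis code:

```python
from itertools import product

def invariant_mass2(p):
    """Squared invariant mass of a four-momentum (E, px, py, pz)."""
    e, px, py, pz = p
    return e * e - px * px - py * py - pz * pz

def make_megajets(jets):
    """Split jets into two megajets minimizing the sum of squared
    megajet invariant masses (brute force over all 2^(n-1) partitions).

    jets : list of (E, px, py, pz) four-momenta, len(jets) >= 2
    """
    best = None
    n = len(jets)
    # fix jet 0 in the first megajet to avoid double-counting partitions
    for assign in product((0, 1), repeat=n - 1):
        groups = ([jets[0]], [])
        for j, g in zip(jets[1:], assign):
            groups[g].append(j)
        if not groups[1]:
            continue  # both megajets must be non-empty
        sums = [tuple(map(sum, zip(*g))) for g in groups]
        cost = sum(invariant_mass2(s) for s in sums)
        if best is None or cost < best[0]:
            best = (cost, sums[0], sums[1])
    return best[1], best[2]
```

Minimizing the summed squared masses tends to group collinear decay products together, which is why this clustering best matches particles to their heavy ancestor.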
Figure~\ref{fig:MRR2baseline} shows the simulated distributions of the overall SM background and a T1ttcc signal with $m_{\PSg} = 1\TeV$, $m_{\PSQt} = 325\GeV$, and $m_{\PSGczDo} = 300\GeV$ in the ($\MR$,$R^2$) plane. The binning is chosen in accordance with the exponentially falling behavior of the razor variables, to optimize the statistical precision in each bin. The numerical values for the bin boundaries, which are used throughout the analysis, are given in Table~\ref{tab:results_prediction}.
The SM background, which mainly arises from multijet production, is dominant at low values of $R^2$, while the SUSY-like signal peaks higher in the ($\MR$,$R^2$) plane ($\MR$ peaks at around 900\GeV, which is the expected value).
\begin{figure*}[tpb]
\centering
\includegraphics[width=0.49\textwidth]{Figure_002-a}
\includegraphics[width=0.49\textwidth]{Figure_002-b}
\caption{Distributions in the ($\MR$,$R^2$) space of the overall SM backgrounds and a T1ttcc signal with $m_{\PSg} = 1\TeV$, $m_{\PSQt} = 325\GeV$, and $m_{\PSGczDo} = 300\GeV$, both obtained from simulation. A very loose selection is used: a good primary vertex and at least three jets, one of which is required to have $\pt > 200$\GeV.
\label{fig:MRR2baseline}}
\end{figure*}
In order to be sensitive to low-\ETm scenarios (small $\Delta m$), we use a lower $R^2$ threshold than that used in previous razor analyses~\cite{razor2010,razorPRL,Chatrchyan:2014goa,Khachatryan:2015pwa}.
To exploit the boosted phase space in which the expected signal significance is greater than in the nonboosted phase space, we work at large $(m_\mathrm{G}^2 - m_{\chi}^2)/m_\mathrm{G}$ and thus at high $\MR$, allowing us to raise the $\MR$ threshold.
This has the added virtue of keeping the SM backgrounds at a manageable level.
\section{The CMS detector \label{sec:cms}}
A detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found elsewhere~\cite{Chatrchyan:2008zzk}. A characteristic feature of the CMS detector is its superconducting solenoid magnet, of 6\unit{m} internal diameter, which provides a field of 3.8\unit{T}. Within the field volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter. Muon detectors based on gas-ionization chambers are embedded in a steel flux-return yoke located outside the solenoid.
Events are collected by a two-level trigger system, in which a first level composed of custom hardware processors is followed by a software-based high-level trigger.
The tracking system covers the pseudorapidity region $\abs{\eta} < 2.5$, the muon detector $\abs{\eta} < 2.4$, and the calorimeters $\abs{\eta} < 3.0$. Additionally, the forward region at $3 < \abs{\eta} < 5$ is covered by steel and quartz fiber forward calorimeters. The near hermeticity of the detector permits an accurate measurement of the momentum balance in the transverse plane.
\section{Trigger and event samples \label{sec:triggerdatasets}}
This analysis is based on a sample of proton-proton collision data at $\sqrt{s}=8\TeV$ collected by the
CMS experiment in 2012 and corresponding to an integrated luminosity of 19.7\fbinv.
Events are selected using two triggers,
requiring either the highest jet \pt or the scalar sum $\HT$ of jet transverse momenta to be above given thresholds. The jet \pt threshold was 320\GeV (and 400\GeV for a brief data-taking period corresponding to 1.8\fbinv), while the $\HT$ threshold was 650\GeV.
The two trigger algorithms were based on a fast implementation of the particle-flow (PF) reconstruction method~\cite{PF2,PF}, which is described in Section~\ref{sec:eventreco}.
To measure the efficiency of these triggers, samples with unbiased jet \pt and $\HT$ distributions are obtained using an independent set of triggers that require at least one electron or muon.
Figure~\ref{fig:trigger_efficiency} shows, on the left-hand side, the efficiency of
the requirement that events satisfy at least one of the two trigger conditions as well as the baseline selection described in Section~\ref{sec:selection}, in the ($\HT$, leading jet \pt) plane. The trigger is fully efficient for events with $\HT > 800\GeV$. In order to account for the lower efficiency of the regions with $\HT < 800\GeV$, the measured trigger efficiency over the ($\HT$, leading jet \pt) plane is applied as an event-by-event weight to the simulated samples. The right-hand side of Fig.~\ref{fig:trigger_efficiency} shows the trigger efficiency across the ($\MR,R^2$) plane for the total simulated background.
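The event-by-event weighting amounts to a lookup of the measured efficiency in ($\HT$, leading jet \pt) bins. A schematic sketch follows; the bin edges and efficiency values below are purely illustrative placeholders, not the measured CMS efficiencies:

```python
import bisect

def make_efficiency_weighter(ht_edges, pt_edges, eff):
    """Return a function mapping (HT, leading-jet pT) to the measured
    trigger efficiency, used as a per-event weight for simulation.

    ht_edges, pt_edges : ascending bin lower edges
    eff : 2D list, eff[i][j] for HT bin i and pT bin j
    """
    def weight(ht, pt):
        i = min(bisect.bisect_right(ht_edges, ht) - 1, len(eff) - 1)
        j = min(bisect.bisect_right(pt_edges, pt) - 1, len(eff[0]) - 1)
        if i < 0 or j < 0:
            return 0.0  # below the first bin edge: treat as untriggered
        return eff[i][j]
    return weight

# illustrative numbers only (not the measured CMS efficiencies)
w = make_efficiency_weighter(
    ht_edges=[400, 650, 800],
    pt_edges=[200, 320],
    eff=[[0.30, 0.70],
         [0.80, 0.95],
         [1.00, 1.00]])
```

Simulated events in the fully efficient region ($\HT > 800\GeV$) then receive weight 1, while events at lower $\HT$ are downweighted by the measured efficiency.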
\begin{figure*}[htpb]
\centering
\includegraphics[width=0.49\textwidth]{Figure_003-a}
\includegraphics[width=0.49\textwidth]{Figure_003-b}
\caption{(Left panel) The trigger efficiency, obtained from data, as a function of $\HT$ and leading jet $\pt$ after the baseline selection discussed in Section~\ref{sec:selection}. (Right panel) The trigger efficiency as a function of $\MR$ and $R^2$ after the same baseline selection, obtained by applying the trigger efficiency as a function of $\HT$ and leading jet $\pt$ to the simulated background.
\label{fig:trigger_efficiency}}
\end{figure*}
Simulated event samples are used to investigate the characteristics of the background and signal processes. Multijet, $\cPqt\cPaqt$, $\PW({\to}\,\ell\PGn)+$jets, $\cPZ/\cPgg^*({\to}\,\ell\bar{\ell})+$jets, and $\cPZ({\to}\,\PGn\PAGn)+$jets events are generated using \MADGRAPH 5.1.3.30~\cite{Alwall:2011uj,MadGraph} with CTEQ6L1~\cite{Pumplin:2002vw} parton distribution functions (PDFs),
while $\PW\PW$, $\PW\cPZ$, and $\cPZ\cPZ$ events are generated using {\PYTHIA}6.424~\cite{Sjostrand:2006za} with CTEQ6L1 PDFs. In what follows, $\PW$ and $\cPZ$ bosons will be collectively referred to as $V$.
Single top quark events are generated using \POWHEG 1.0~\cite{powheg,powheg2} and CT10 PDFs~\cite{Lai:2010vv}.
The cross sections for these SM processes are given in Table~\ref{tab:cutflow}.
The inclusive background processes are scaled to the highest-order cross section calculation available, whereas leading-order cross sections are used for $\PW({\to}\,\ell\PGn)+$jets, $\cPZ/\cPgg^*({\to}\,\ell\bar{\ell})+$jets, and $\cPZ({\to}\,\PGn\PAGn)+$jets, which are produced with varying generator-level \HT requirements.
The simplified model signals are produced using \MADGRAPH 5.1.5.4 using CTEQ6L1 PDFs. The signal cross sections are computed at next-to-leading order with next-to-leading-log corrections using \PROSPINO and \textsc{nll-fast}~\cite{Kramer:2012bx,NLONLL1,NLONLL2,NLONLL3,NLONLL4,NLONLL5}.
The parton-level events are showered and hadronized using {\PYTHIA}6.426 with tune Z2*~\cite{Chatrchyan:2013gfi}, which is derived from the Z1 tune~\cite{Field:2010bc}. The latter uses the CTEQ5L PDFs~\cite{Lai:1999wy}, whereas Z2* adopts CTEQ6L.
For the background events, the response of the CMS detector is simulated in detail using a program (\textsc{FullSim}) based on \GEANTfour~\cite{G4}. A parametrized fast detector simulation program (\textsc{FastSim}) is used to simulate the detector response
for the signal events~\cite{fastsim}.
\section{Event reconstruction \label{sec:eventreco}}
We select events that have at least one interaction vertex associated with at least four charged-particle tracks. The vertex position is required to lie within 24\unit{cm} of the center of the CMS detector along the beam direction and within 2\unit{cm} from the center in the plane transverse to the beam. Because of the high instantaneous luminosity of the LHC, hard scattering events are typically accompanied by overlapping events from multiple proton-proton interactions (pileup),
and therefore contain multiple vertices.
We identify the primary vertex, \ie, the vertex of the hard scatter, as the one with the highest value of the
$\sum \pt^2$ of the associated tracks. Detector- and beam-related filters are used to discard events with anomalous noise that mimic events with high energy and a large imbalance in transverse momentum~\cite{Chatrchyan:2011tn, MET8TeV}.
CMS reconstructs events using the PF algorithm, in which
candidate particles (PF candidates) are formed by combining information from the inner tracker, the calorimeters, and the muon system. Each PF candidate is assigned to one of five object categories: muons, electrons, photons, charged hadrons, and neutral hadrons. Contamination from pileup
events is reduced by discarding charged PF candidates that are incompatible with having
originated from the primary vertex~\cite{CMS-PAS-JME-14-001}. The average pileup energy associated with neutral hadrons is computed event by event and subtracted from the jet energy and from the energy used when computing lepton isolation, \ie, a measure of the activity around the lepton. The energy subtracted is the average pileup energy per unit area (in $\Delta\eta \times \Delta\phi$) times the jet or isolation cone area~\cite{Fastjet1, Fastjet2}.
Jets are clustered with \textsc{FastJet 3.0.1}~\cite{Cacciari:2011ma} using the anti-\kt algorithm~\cite{antikt} with distance parameter $\Delta R=0.5$. These jets are referred to as AK5 jets. Corrections are applied as a function of jet $\pt$ and $\eta$ to account for the residual effects of a nonuniform detector response. The jet energies are corrected so that, on average, they match those of simulated particle-level jets~\cite{Chatrchyan:2011ds}. After correction, jets are required to have $\pt > 30\GeV$ and $\abs{\eta} < 2.4$. We use the combined secondary vertex algorithm~\cite{btag7TeV,btag8TeV} to identify jets arising from $\cPqb$ quarks. The medium tagging criterion, which yields a misidentification rate for light quark and gluon jets of ${\approx}1\%$ and a typical efficiency of ${\approx}70\%$, is used to select $\cPqb$ jets. The loose tagging criterion, with a misidentification rate of ${\approx}10\%$ and an efficiency of ${\approx}85\%$, is used to reject events containing $\cPqb$ jets.
To identify boosted $\PW$ bosons, we follow a procedure similar to that outlined in Ref.~\cite{EXO-12-024}.
Jets are clustered with \textsc{FastJet} using the Cambridge-Aachen algorithm~\cite{Dokshitzer:1997in} and a distance parameter of 0.8, yielding CA8 jets. Jet energy corrections for these jets are derived from the anti-\kt jets with distance parameter $\Delta R=0.7$. Simulations show that the corrections are valid for CA8 jets and have an additional uncertainty
$\leq 2$\%.
The jet mass is calculated from the constituents of the jet after jet pruning, which removes the softest constituents of the jet. During jet pruning, the jet constituents are reclustered, and at each step the softer and larger-angle ``protojet'' of the two protojets to be merged is removed should it fail certain criteria~\cite{Ellis:2009su,Ellis:2009me}.
A CMS study has shown that jet pruning reduces pileup effects and provides good discrimination between boosted $\PW$ jets and quark/gluon ($\PQq$/$\Pg$) jets~\cite{Chatrchyan:2013vbb}.
We define mass-tagged jets ($mW$) as CA8 jets with $\pt > 200\GeV$ and pruned jet mass within the window $70 < m_\text{jet} < 100\GeV$ around the $\PW$ boson mass.
In addition to the jet mass, we also consider the N-subjettiness~\cite{Thaler:2010tr} variables, which are obtained by first finding $N$ candidate axes for subjets in a given CA8 jet, and then computing the quantity
\begin{equation}
\tau_N = \frac{1}{R_0}\,
\frac{\displaystyle \sum_k p_{\mathrm{T},k}\, \min \left(\Delta R_{1,k}, \Delta R_{2,k}, \ldots, \Delta R_{N,k}\right)}
{\displaystyle \sum_{k} p_{\mathrm{T},k}},
\end{equation}
where $R_0$ is the original jet distance parameter and $k$ runs over all constituent particles.
The subjet axes are obtained with \textsc{FastJet} via exclusive $k_\textrm{T}$ clustering,
followed by a one-pass optimization to minimize the N-subjettiness value.
The quantity $\tau_N$ is small if the original jet is consistent with having $N$ or fewer subjets.
Therefore, to discriminate boosted $\PW$ bosons, which have two subjets, from $\PQq$/$\Pg$ jets characterized by a single subjet, we require that a $\PW$ boson mass-tagged jet satisfy $\tau_2 / \tau_1 < 0.5$ for it to be classified as a $\PW$ boson tagged jet (labeled $\PW$ in the following).
The $\PW$ boson tagging efficiency depends on the CA8 jet \pt and is 50--55\% according to simulation. The corresponding misidentification rate is 3--5\%.
We also define $\PW$ boson antitagged jets ($aW$) as $\PW$ boson mass-tagged jets that satisfy the complement of the $\tau_2 / \tau_1$ criterion, and use these jets to define control regions for data-driven background modeling.
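The tagging logic described above (mass tag first, then the $\tau_2/\tau_1$ split into tagged and antitagged categories) can be summarized in a few lines. This is a sketch with our own naming, not the analysis code:

```python
def classify_w_candidate(pt, pruned_mass, tau2, tau1):
    """Classify a CA8 jet as W-tagged ('W'), W-antitagged ('aW'),
    or neither, following the mass-window and tau2/tau1 selections."""
    # mass tag: boosted jet with pruned mass near the W boson mass
    mass_tagged = pt > 200.0 and 70.0 < pruned_mass < 100.0
    if not mass_tagged:
        return None
    # N-subjettiness ratio: small tau2/tau1 indicates a two-subjet structure
    if tau1 > 0 and tau2 / tau1 < 0.5:
        return "W"   # tagged: consistent with a hadronic W decay
    return "aW"      # antitagged: used to define control regions
```

The two outcomes of the $\tau_2/\tau_1$ split are mutually exclusive by construction, which is what makes the antitagged sample a clean control region for misidentified $\PQq$/$\Pg$ jets.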
To calculate \ptvecmiss, which is used in the calculation of the razor variable $R^2$ defined in Eqs.~(\ref{eq:MRT}) and (\ref{eq:R2}), the vector sum over the transverse momenta is taken of all the PF candidates in an event.
Loosely identified and isolated electrons~\cite{Khachatryan:2015hwa} (and muons~\cite{Chatrchyan:2012xi}) with $\pt > 5\GeV$ and $\abs{\eta} < 2.5$ ($2.4$) are used both to suppress backgrounds in the signal region and in the definition of the control regions.
Tightly identified isolated leptons, electrons (muons) with $\pt > 10\GeV$ and $\abs{\eta} < 2.5$ ($2.4$), define a control region enriched in $\cPZ{\to}\,\ell \bar{\ell}$ events, from which we estimate the systematic uncertainty in the predicted number of $\cPZ {\to}\, \PGn \PAGn$ events in the signal region.
Electron candidates that lie in the less well-instrumented transition region between the barrel and end cap calorimeters, $1.44 < \abs{\eta} < 1.57$, are rejected.
We suppress backgrounds from events likely to contain $\PGt$ leptons, or other leptons that fail the loose selection, by discarding events containing isolated tracks with $\pt > 10\GeV$ and a distance between the track and the primary vertex along the beam direction of $|d_z| < 0.05\unit{cm}$.
Known differences between the properties of data and MC simulated data are corrected by weighting simulated events with data/simulation scale factors for
the jet energy scale, $\cPqb$ tag, $\PW$ mass-tag, $\PW$ tag, and $\PW$ antitag efficiency. The $\PW$ tagging-related scale factors are described in Section~\ref{sec:Wtag_SF}.
In addition, event-by-event weights are used to correct the simulated data so that
their pileup, trigger, top quark $\pt$, and ISR characteristics match those of the data.
\section{Analysis strategy and event selection
\label{sec:selection}}
We search for deviations from the SM in the (high-$\MR$, high-$R^2$) region
using events with at least one boosted $\PW$ boson, at least one $\cPqb$-tagged jet, and no isolated leptons or tracks. SM backgrounds in the signal region $S$ are estimated using observations in control regions and scale factors, calculated from MC simulation, that relate the number of events in one region to that in another.
Three control regions, $Q$, $T$, and $W$, select high-purity samples of multijet, $\cPqt\cPaqt$, and $\PW({\to}\,\ell\PGn)+$jets events, respectively. Details of the background estimation method are given in Section~\ref{sec:likelihood}.
Events must satisfy the following baseline selection:
\begin{enumerate}
\item have at least one good primary vertex (see Section~\ref{sec:eventreco});
\item pass all detector- and beam-related filters (see Section~\ref{sec:eventreco});
\item have at least three selected AK5 jets of which at least one has $\pt > 200\GeV$, thereby
defining the boosted phase space; and
\item satisfy $\MR > 800\GeV$ and $R^2 > 0.08$, where the megajets are constructed from the selected AK5 jets.
\end{enumerate}
The details of the event selection in addition to the baseline selection are given in Table~\ref{tab:selection}.
The signal and control regions are defined using different requirements on the
multiplicities of leptons, $\cPqb$-tagged jets, and $\PW$-tagged jets, and on kinematic variables that discriminate between different processes.
The multijet-enriched control sample $Q$ is used for estimating the multijet background in the $S$ and $T$ regions. To characterize $Q$, we use the fact that \ETm in multijet events is largely due to jet mismeasurements rather than the escape of particles that interact weakly with the detector; consequently, \ptvecmiss will often be aligned with one of the jets. Therefore, a good discriminant between multijet events and events with genuine \ETm is
\begin{equation}
\Delta\phi_\text{min} = \min_i{\Delta\phi(\ptvecmiss, {\vec p}_{\mathrm{T}\, i} ) },
\end{equation}
that is, the minimum of the angles between \ptvecmiss and the transverse momentum of each jet,
where $i$ runs over the three leading AK5 jets. Since detector mismeasurements mostly lead to an underestimate of the jet energy and momentum, $\Delta\phi_\text{min}$ provides reliable discrimination against the spurious $\ETm$ of multijet events.
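A sketch of the $\Delta\phi_\text{min}$ computation, with the azimuthal difference folded into $[0, \pi]$ (illustrative code, our naming):

```python
import math

def delta_phi_min(met_phi, jet_phis):
    """Minimum azimuthal separation between the missing-pT vector and
    the leading jets (here, the first three entries of jet_phis)."""
    def dphi(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)  # fold into [0, pi]
    return min(dphi(met_phi, phi) for phi in jet_phis[:3])
```

A small value indicates that \ptvecmiss points along a jet, the signature of a mismeasured multijet event rather than genuine invisible particles.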
\begin{table*}[htbp]
\centering
\topcaption{Summary of the selections used, in addition to the baseline selection, to define the signal region ($S$), the three control regions ($Q$, $T$, $W$), and the two regions ($S'$, $Q'$) used for the cross-checks described later in the text.
\label{tab:selection}}
\begin{scotch}{lcccccc}
Selection & $S$ & $S'$ & $Q$ & $Q'$& $T$ & $W$ \\
\hline
Number of $\cPqb$-tagged jets & ${\geq} 1$ & ${\geq} 1$ & 0 & 0 & ${\geq} 1$ & 0 \\
Number of mass-tagged $\PW$s & ${\geq} 1$ & ${\geq} 1$ & ${\geq} 1$ & ${\geq} 1$ & ${\geq} 1$ & ${\geq} 1$ \\
Number of tagged $\PW$s & ${\geq} 1$ & ${\geq} 1$ & \NA & \NA & ${\geq} 1$ & \NA \\
Number of antitagged $\PW$s & \NA & \NA & ${\geq} 1$ & ${\geq} 1$ & \NA & \NA \\
Number of loose leptons & 0 & 0 & 0 & 0 & 1 & 1 \\
Number of isolated tracks & 0 & 0 & 0 & 0 & \NA & \NA \\
$m_\mathrm{T}$ (\GeVns) & \NA & \NA & \NA & \NA & ${<} 100$ & 30--100\\
$\Delta\phi_\text{min}$ & ${>} 0.5$ & ${<} 0.5$ & ${<} 0.3$ & ${>} 0.5$ & ${>} 0.5$ & ${>} 0.5$\\
\end{scotch}
\end{table*}
The $T$ and $W$ control regions are used to characterize the $\cPqt\cPaqt$ and $\PW+$jets backgrounds, respectively, in the $S$ region.
The contamination in the $S$ region from fully hadronic decays of $\cPqt\cPaqt$ pairs is negligible because they
do not produce sufficient genuine $\ETm$ to satisfy our event selection.
The $\cPqt\cPaqt$ contamination thus consists of semileptonic decays of $\cPqt\cPaqt$ pairs in which one $\PW$ boson is boosted and the other decays to a charged lepton that is not identified.
Therefore, the $T$ region is required to have a lepton from the decay of a $\PW$ boson, at least
one $\cPqb$-tagged jet, and a $\PW$-tagged jet.
Similarly, the $\PW+$jets contribution in the $S$ region comes from leptonic $\PW$ boson decays in which the charged lepton is not identified and a jet is misidentified as a $\PW$ jet. Therefore, we require the $W$ region to contain events with a lepton from the $\PW$ boson decay and a mass-tagged boosted $\PW$ jet, \ie, a quark- or gluon-initiated jet misidentified as a boosted $\PW$ boson. The N-subjettiness criterion is not imposed, in order to maintain high event yields in these control regions and therefore higher statistical precision.
In the $T$ and $W$ regions, we suppress potential signals using the transverse mass,
\begin{equation}
m_\mathrm{T} = \sqrt{2\pt^\ell\ETm ( 1 - \cos\Delta\phi )},
\end{equation}
where $\Delta\phi$ is the difference in azimuthal angle between the lepton \ptvec and \ptvecmiss, and $\pt^\ell$ is the magnitude of the lepton \ptvec.
The $m_\mathrm{T}$ distribution exhibits a kinematic edge at the mass of the $\PW$ boson for $\cPqt\cPaqt$ and $\PW({\to}\ell\PGn)+$jets processes. However, such an edge is not present for signal events because of the extra contribution to \ETm from neutralinos, which escape direct detection. Therefore, potential signals are suppressed in the $T$ and $W$ regions by requiring $m_\mathrm{T} < 100$\GeV. For the $W$ region, we additionally require $m_\mathrm{T} > 30$\GeV in order to reduce residual contamination from multijet events, which are expected to have small \ETm and therefore small $m_\mathrm{T}$. Table~\ref{tab:selection} lists two additional control regions, $S'$ and $Q'$, which are used in the
cross-checks described later in this section.
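The $m_\mathrm{T}$ selection uses only the lepton \ptvec and \ptvecmiss; a sketch with our own naming:

```python
import math

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    """Transverse mass of the lepton + missing-pT system, used to
    suppress potential signal in the T and W control regions."""
    dphi = abs(lep_phi - met_phi) % (2 * math.pi)
    dphi = min(dphi, 2 * math.pi - dphi)
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(dphi)))
```

For a lepton and \ptvecmiss that are back to back with equal magnitude $p$, $m_\mathrm{T} = 2p$; for SM $\PW$ boson decays the distribution cuts off near the $\PW$ boson mass, whereas the extra neutralino contribution to \ETm pushes signal events above the $m_\mathrm{T} < 100$\GeV requirement.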
Figure~\ref{fig:DataMC} shows the simulated distributions in the signal region for the $\MR$ and $R^2$ variables, where the smoothly falling nature of the backgrounds, as well as their relative contributions, can be observed.
The $m_\mathrm{T}$ distribution in the $T$ and $W$ regions prior to the $m_\mathrm{T}$ and $\Delta\phi_\text{min}$
selection is shown in Fig.~\ref{fig:mT}, while Fig.~\ref{fig:deltaphi} shows the $\Delta\phi_\text{min}$ distribution in the $Q$ region, for both data and simulated backgrounds.
Overall, there is reasonable agreement between the
observed and simulated yields. The discrepancies are accommodated by the systematic uncertainties we assign to the simulated yields.
\begin{figure*}[htb]
\includegraphics[width=0.49\textwidth]{Figure_004-a}
\includegraphics[width=0.49\textwidth]{Figure_004-b} \\
\caption{Simulated $\MR$ (left panel) and $R^2$ (right panel) distributions in the signal region, $S$.
Stacked on top of the background distributions is
the predicted signal contribution from an example T1ttcc model, with parameters $m_{\PSg} =1\TeV$, $m_{\PSQt} =325\GeV$, and $m_{\PSGczDo} =300\GeV$. The bin entries are normalized proportionally to the bin width.
\label{fig:DataMC}}
\end{figure*}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.49\textwidth]{Figure_005-a}
\includegraphics[width=0.49\textwidth]{Figure_005-b}
\caption{Distributions of $m_\mathrm{T}$ for data and simulated backgrounds, in the $T$ (left panel) and $W$ (right panel) control regions, without applying any selection on $m_\mathrm{T}$ and $\Delta\phi_\text{min}$. The contribution from an example signal corresponding to the T1ttcc model with $m_{\PSg} =1\TeV$, $m_{\PSQt} =325\GeV$, and $m_{\PSGczDo} =300\GeV$, is stacked on top of the background processes. Only statistical uncertainties are shown.
\label{fig:mT}}
\end{figure*}
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figure_006}
\caption{Distributions of $\Delta\phi_\text{min}$ for data and simulated backgrounds in the $Q$ region without applying a selection on $\Delta\phi_\text{min}$. Only statistical uncertainties are shown. Signal contamination in this control region is negligible.
\label{fig:deltaphi}}
\end{figure}
In Table~\ref{tab:cutflow}, we show the expected number of events obtained from simulation for the different background processes and for the example T1ttcc model with $m_{\PSg} = 1\TeV$, $m_{\PSQt} = 325\GeV$, and $m_{\PSGczDo} = 300\GeV$.
The observed event counts after different levels of selection, beyond the trigger requirement, are also reported.
The background composition in percent after the baseline, $S$, $Q$, $T$, and $W$ region selections is reported in Table~\ref{tab:BG_comp_percent}.
The signal region is dominated by $\cPqt\cPaqt$ events, with additional contributions from $\PW({\to}\,\ell\PGn)+$jets and multijet processes. Each control region, $Q$, $T$, and $W$, has
high purity for the background process it targets: 90\% multijet, 83\% $\cPqt\cPaqt$ plus single top quark, and 85\% $\PW({\to}\,\ell\PGn)+$jets, respectively. The discrepancies between the observations and the simulation are due to uncertainties in the MC modeling, especially for the multijet processes.
\begin{table*}[p]
\centering
\topcaption{Event yields in simulated event samples and in data as event selection requirements are applied.
The simulated event counts are normalized to an integrated luminosity of $19.7\fbinv$. ``Other'' refers to the sum of the small
background components $\cPZ/\cPgg^*{\to}\,\ell\bar{\ell}+$jets, triboson, and $\cPqt\cPaqt V$.
The signal is the T1ttcc model with $m_{\PSg}=1000\GeV$, $m_{\PSQt_1}=325\GeV$, $m_{\PSGczDo}=300\GeV$.
The row corresponding to ``$n_\mathrm{PV} > 0$'' gives the event counts after applying the noise filters, pileup reweighting, top \pt reweighting for $\cPqt\cPaqt$, ISR reweighting for the signal, and the requirement of at least one primary vertex. The column listing the total number of background events also includes some processes that only contribute at the early stages of the event selection.
The cross sections used for each sample are listed in the second line of the header. Several of the simulated background samples were produced with generator-level selections applied, which are not fully covered by the first selection levels listed in this table.
}
\renewcommand{\arraystretch}{1.1}
\cmsTable{
\begin{scotch}{ l | c c c c c c c | c | c | c }
\multirow{2}{*}{Selection} & Multijet & $\cPqt\cPaqt$ & $\PW({\to}\ell\PGn)$ & Diboson & Single top & $\cPZ({\to}\PGn\PAGn)$ & Other & Total & Signal & \multirow{2}{*}{Data} \\
& $10.4{\times}10^7$ pb & 245.8 pb & 111.5 pb & 95.4 pb & 114.9 pb & 588.3 pb & 25.2 pb & background & 0.02435 pb & \\ \hline \hline
No selection & $\phantom{0}2.1{\times}10^{11}$ & $\phantom{0}4.9{\times}10^6$ & $\phantom{0}2.2{\times}10^6$ & $\phantom{0}1.9{\times}10^6$ & $\phantom{0}2.3{\times}10^6$ & $\phantom{0}1.2{\times}10^7$ & $\phantom{0}4.9{\times}10^5$ & $\phantom{0}2.1{\times}10^{11}$ & 499 & \\
$n_\mathrm{PV} > 0$ & $1.05{\times}10^{11}$ & $4.42{\times}10^6$ & $2.02{\times}10^6$ & $1.08{\times}10^6$ & $1.72{\times}10^6$ & $2.87{\times}10^6$ & $\phantom{0}4.0{\times}10^5$ & $1.05{\times}10^{11}$ & 479 & \\
$n_\mathrm{j} \geq 3$ & $2.04{\times}10^{10}$ & $4.08{\times}10^6$ & $1.51{\times}10^6$ & $5.19{\times}10^5$ & $1.10{\times}10^6$ & $6.24{\times}10^5$ & $3.37{\times}10^5$ & $2.05{\times}10^{10}$ & 472 & \\
$\pt(\rm j_1) > 200\GeV$ & $1.82{\times}10^{8\phantom{0}}$ & $2.88{\times}10^5$ & $4.36{\times}10^5$ & $1.86{\times}10^4$ & $6.08{\times}10^4$ & $5.89{\times}10^{4}$ & $7.23{\times}10^4$ & $1.82{\times}10^{8\phantom{0}}$ & 403 & \\
$\MR \,{>}\, 800, R^2 \,{>}\, 0.08$ & $3.47{\times}10^{4\phantom{0}}$ & $5.83{\times}10^3$ & $1.17{\times}10^4$ & 309 & 900 & $3.25{\times}10^3$ & 645 & $57\,557$ & 224 & \\
Trigger & $3.15{\times}10^{4\phantom{0}}$ & $5.12{\times}10^3$ & $9.38{\times}10^3$ & 249 & 786 & $2.32{\times}10^3$ & 569 & $50\,164$ & 216 & $67\,037$ \\
\hline \hline
No leptons & $3.09{\times}10^{4\phantom{0}}$ & $1.87{\times}10^3$ & $3.75{\times}10^3$ & 96.3 & 311 & $2.30{\times}10^3$ & 216 & $39\,666$ & 142 & $56\,220$ \\
\hline
$n_\cPqb \geq 1$ & $9.37{\times}10^{3\phantom{0}}$ & $1.51{\times}10^3$ & 590 & 25.2 & 226 & 302 & 79.8 & $12\,187$ & 119 & $18\,164$ \\
$n_\PW \geq 1$ & 841 & 332 & 56.4 & 8.52 & 56.7 & 22.1 & 16.9 & $1\,350$ & 28 & $1\,817$ \\
$S$ & 14.8 & 90.4 & 23.1 & 3.7 & 11.7 & 12.7 & 4.17 & 160 & 23.4 & 187 \\
\hline
$n_\cPqb = 0$ & $1.25{\times}10^{4\phantom{0}}$ & 98.3 & $1.70{\times}10^3$ & 35.6 & 25.9 & $1.25{\times}10^3$ & 54.3 & $15\,691$ & 5.65 & $20\,667$ \\
$n_\mathrm{aW} \geq 1$ & 1519 & 18.7 & 204 & 8.36 & 7.40 & 158 & 6.98 & $1\,923$ & 0.667 & $2\,712$ \\
$Q$ & 1447 & 10.6 & 93.1 & 3.88 & 3.94 & 38.9 & 4.48 & $1\,603$ & 0.07 & $2\,240$ \\
\hline
\hline
1 lepton & 585.9 & $2.74{\times}10^3$ & $5.52{\times}10^3$ & 132 & 421 & 22.1 & 272 & $9\,699$ & 65.0 & $10\,008$ \\
\hline
$n_\cPqb \geq 1$ & 236.7 & $2.17{\times}10^3$ & 625 & 29.9 & 301 & 4.14 & 102 & $3\,470$ & 54 & $3\,930$ \\
$n_\PW \geq 1$ & 24.3 & 496 & 61.6 & 10.0 & 50.9 & 0.56 & 21.9 & 666 & 12.3 & 770 \\
$T$ & 0 & 112 & 20.2 & 2.0 & 13.3 & 0 & 4.1 & 151 & 1.2 & 153 \\
\hline
$n_\cPqb = 0$ & 150.5 & 153 & $2.86{\times}10^3$ & 52.8 & 41.3 & 11.5 & 68.8 & $3\,329$ & 2.54 & $3\,165$ \\
$n_{mW} \geq 1$ & 30.8 & 79.1 & 605 & 33.1 & 13.8 & 2.4 & 20.3 & 786 & 1.19 & 581 \\
$W$ & 0 & 15.5 & 127 & 3.6 & 1.6 & 0.64 & 1.4 & 150 & 0.06 & 116 \\
\end{scotch}
}
\label{tab:cutflow}
\end{table*}
\begin{table*}[tpb]
\centering
\topcaption{Background composition according to simulation after the baseline, $S$, $Q$, $T$, $W$, $Q'$ and $S'$ region selections. ``Other'' refers to the sum of the small
background components $\cPZ/\cPgg^*{\to}\,\ell\bar{\ell}$, triboson, and $\cPqt\cPaqt V$.
\label{tab:BG_comp_percent}}
\newcolumntype{.}{D{.}{.}{-1}}
\begin{scotch}{ l . . . . . . . }
\multirow{2}{*}{Selection} & \multicolumn{1}{c}{Multijet} & \multicolumn{1}{c}{ $\cPqt\cPaqt$} & \multicolumn{1}{c}{$\PW({\to}\,\ell\PGn)$} & \multicolumn{1}{c}{Diboson} & \multicolumn{1}{c}{Single top} & \multicolumn{1}{c}{$\cPZ({\to}\,\PGn\PAGn)$} & \multicolumn{1}{c}{Other} \\
& \multicolumn{1}{c}{(\%)} & \multicolumn{1}{c}{(\%)} & \multicolumn{1}{c}{(\%)} & \multicolumn{1}{c}{(\%)} & \multicolumn{1}{c}{(\%)} & \multicolumn{1}{c}{(\%)} & \multicolumn{1}{c}{(\%)} \\
\hline
Baseline & 62.8 & 10.2 & 18.7 & 0.5 & 1.6 & 4.6 & 1.6 \\
$S$ & 9.2 & 56.3 & 14.4 & 2.3 & 7.3 & 7.9 & 2.6 \\
$Q$ & 90.2 & 0.7 & 5.8 & 0.2 & 0.2 & 2.4 & 0.3 \\
$T$ & 0.0 & 73.9 & 13.3 & 1.3 & 8.8 & 0.0 & 2.7 \\
$W$ & 0.0 & 10.3 & 84.8 & 2.4 & 1.1 & 0.4 & 1.0 \\
$Q'$ & 12.3 & 2.8 & 36.8 & 1.7 & 1.0 & 45.0 & 0.4 \\
$S'$ & 69.5 & 20.3 & 2.8 & 0.4 & 3.8 & 0.8 & 2.4 \\
\end{scotch}
\end{table*}
We do not explicitly estimate the background in the signal region. Rather,
from the observations in the control regions,
we create a prior distribution (described in Section~\ref{sec:likelihood})
for the four background components of the signal region that incorporates all
statistical and systematic uncertainties. However,
in order to verify that the control regions in data provide adequate models for
backgrounds in the signal region and that the translations between different regions behave as expected, we perform two cross-checks, taking into account statistical uncertainties only.
In the first cross-check, we predict the background in a signal-like control region, and compare
these predictions with the observations in that region.
This control region, denoted by $S^\prime$, is defined by inverting the $\Delta\phi_\text{min}$ requirement while preserving the rest of the signal selection.
The estimated number of events in the $S^\prime$ region for the multijet, $\PW(\to\ell\PGn)+$jets, and top quark processes is computed as follows:
\begin{equation}
\widehat{N}_\text{multijet}^{S^\prime} = \left( N_\text{obs}^{Q} - N_\text{other, MC}^{Q} \right) /
\left( \frac{N_\text{multijet}^{Q}}{N_\text{multijet}^{S^\prime}} \right)_\mathrm{MC},
\label{eq:E1}
\end{equation}
\begin{equation}
\widehat{N}_{\PW(\to\ell\PGn)}^{S^\prime} = \left( N_\text{obs}^{W} - N_\text{other, MC}^{W} \right) /
\left( \frac{N_{\PW(\to\ell\PGn)}^{W}}{N_{\PW(\to\ell\PGn)}^{S^\prime}} \right)_{\mathrm{MC}},
\label{eq:E2}
\end{equation}
\begin{equation}
\ifthenelse{\boolean{cms@external}}
{
\begin{split}
\widehat{N}_\mathrm{TTJ+T}^{S^\prime} & = \\
\left( N_\text{obs}^{T} \right. & \left. - \widehat{N}_\text{multijet}^{T} - N_\text{other, MC}^{T} \right) /
\left( \frac{N_\mathrm{TTJ+T}^{T}} {N_\mathrm{TTJ+T}^{S^\prime}}\right)_{\mathrm{MC}},
\end{split}
}
{
\widehat{N}_\mathrm{TTJ+T}^{S^\prime} = \left( N_\text{obs}^{T} - \widehat{N}_\text{multijet}^{T} - N_\text{other, MC}^{T} \right) /
\left( \frac{N_\mathrm{TTJ+T}^{T}} {N_\mathrm{TTJ+T}^{S^\prime}}\right)_{\mathrm{MC}},
}
\label{eq:E3}
\end{equation}
while the estimated number of multijet events in the control region $T$ is given by
\begin{equation}
\widehat{N}_\text{multijet}^{T} = \left( N_\text{obs}^{Q} - N_\text{other, MC}^{Q} \right) /
\left( \frac{N_\text{multijet}^{Q}}{N_\text{multijet}^{T}} \right)_{\mathrm{MC}}.
\label{eq:E4}
\end{equation}
In Eqs.~(\ref{eq:E1})--(\ref{eq:E4}), the superscripts denote one of the control regions, while the subscripts ``other'', $\PW(\to\ell\PGn)$,
$\mathrm{TTJ+T}$, and multijet denote the sum of the small backgrounds, $\PW(\to\ell\PGn)$+jets, $\cPqt\cPaqt$ plus
single top quark, and multijet, respectively, while ``obs'' labels observed counts.
These equations are used only in this cross-check. However, they incorporate the same relations between signal and control regions as will be used in the likelihood procedure described in Section~\ref{sec:likelihood}.
As can be seen from Table~\ref{tab:BG_comp_percent}, the nominal choice of the parameters associated with systematic uncertainties leads to $N_\text{multijet, MC}^{T} = 0$.
The total estimated background in $S^\prime$ is
\begin{equation}
\widehat{N}^{S^\prime} = \sum_i \widehat{N}^{S^\prime}_i ,
\end{equation}
where $i$ runs over all background processes. For smaller backgrounds, $\widehat{N}^{S^\prime}_i$ is determined by simulation.
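The translation equations (\ref{eq:E1})--(\ref{eq:E4}) all amount to subtracting the small simulated backgrounds from an observed control-region count and dividing by a simulation-derived transfer ratio. A minimal sketch, with hypothetical yields and transfer ratios (the numbers below are illustrative, not the measured values):

```python
def translate(n_obs, n_other_mc, ratio_mc):
    """Estimate a process in a target region from a control region:
    (observed - small simulated backgrounds) / (MC control-to-target ratio)."""
    return (n_obs - n_other_mc) / ratio_mc

# Hypothetical observed counts and simulation-derived transfer ratios:
N_obs_Q, N_other_Q = 2240.0, 156.0   # Q region
N_obs_W, N_other_W = 116.0, 7.0      # W region
N_obs_T, N_other_T = 153.0, 6.0      # T region
r_mj_Q_Sp, r_mj_Q_T = 4.0, 100.0     # multijet Q/S' and Q/T ratios
r_w_W_Sp, r_top_T_Sp = 5.0, 2.0      # W+jets W/S' and top T/S' ratios

mj_Sp = translate(N_obs_Q, N_other_Q, r_mj_Q_Sp)      # analogue of Eq. (5)
w_Sp = translate(N_obs_W, N_other_W, r_w_W_Sp)        # analogue of Eq. (6)
mj_T = translate(N_obs_Q, N_other_Q, r_mj_Q_T)        # analogue of Eq. (8)
top_Sp = (N_obs_T - mj_T - N_other_T) / r_top_T_Sp    # analogue of Eq. (7)
total_Sp = mj_Sp + w_Sp + top_Sp                      # total S' estimate
```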
Backgrounds are estimated bin by bin in the $(\MR,R^2)$ space, where the bin boundaries are numerically defined in Table~\ref{tab:results_prediction}.
However, the estimated scale factors are global, as the statistical precision is insufficient to yield reliable bin-by-bin estimates. The expected global scale factors, which we denote by $\kappa$, are defined in Section~\ref{sec:likelihood}, which also describes how they are calculated.
Figure~\ref{fig:Shape_syst_1D_project_sideband} shows the projection on the $\MR$ and $R^2$ axes of the predicted and observed distributions in the $S'$ region. The prediction agrees with observation within ${\approx}20\%$. This cross-check of the background modeling shows that it is feasible to estimate a multicomponent background in a signal-like region using the control regions we
have defined.
\begin{figure*}[tpb]
\includegraphics[width=0.45\textwidth]{Figure_007-a}
\includegraphics[width=0.45\textwidth]{Figure_007-b}
\caption{One-dimensional projection of $\MR$ (left panel) and $R^2$ (right panel) for the cross-check predicting the $\Delta\phi_\text{min}$ sideband region $S'$.
The estimates for the three different background processes are stacked on top of each other.
The uncertainties shown are statistical only. The horizontal error bars indicate the bin width. \label{fig:Shape_syst_1D_project_sideband}}
\end{figure*}
In the second cross-check, we use the $Q$ region to estimate the background in a signal-like $Q$ region, denoted by $Q^\prime$, for which $\Delta\phi_\text{min} > 0.5$, from the relationship
\begin{equation}
\widehat{N}^{Q^\prime} = N_\text{obs}^Q \frac{N_\mathrm{MC}^{Q^\prime}}{N_\mathrm{MC}^Q}.
\end{equation}
Here, $N_\mathrm{MC}$ includes all contributing background processes, and $N_\text{obs}^Q$ is the
observed count in the $Q$ region.
This test assesses how reliable the simulated distribution of $\Delta\phi_\text{min}$ is, as well as its extrapolation from the $Q$ region to the $S$ region.
As observed from Table~\ref{tab:BG_comp_percent}, the multijet process is only a small contribution in the $Q'$ region. Therefore, this cross-check assesses how well the reduction of the multijet process, via the $\Delta\phi_\text{min}>0.5$ requirement, is modeled.
The comparison between prediction and observation can be made from data shown in Fig.~\ref{fig:Shape_syst_1D_project_QCD}.
The level of discrepancy between the prediction
and the observation in this cross-check is incorporated as a systematic uncertainty of 42\% in the
global scale factor for the multijet component, as described in Section~\ref{sec:likelihood}.
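This second cross-check can be sketched with hypothetical yields (illustrative only, not the measured values): the $Q'$ prediction is a simple MC-derived rescaling of the observed $Q$ count, and the relative discrepancy with respect to the $Q'$ observation is what feeds the multijet scale factor uncertainty.

```python
def qprime_prediction(n_obs_q, n_mc_qprime, n_mc_q):
    """N_hat^Q' = N_obs^Q * (N_MC^Q' / N_MC^Q)."""
    return n_obs_q * n_mc_qprime / n_mc_q

# Hypothetical counts (illustrative):
pred = qprime_prediction(2240.0, 71.0, 1603.0)
obs_qprime = 140.0
# Relative discrepancy, of the kind assigned as a systematic uncertainty:
rel_discrepancy = abs(obs_qprime - pred) / pred
```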
\begin{figure*}[tpb]
\includegraphics[width=0.45\textwidth]{Figure_008-a}
\includegraphics[width=0.45\textwidth]{Figure_008-b}
\caption{One-dimensional projection of $\MR$ (left panel) and $R^2$ (right panel) for the cross-check predicting the background in region $Q'$ defined by $\Delta\phi_\text{min} > 0.5$. The uncertainties shown are statistical only. The horizontal error bars indicate the bin width. \label{fig:Shape_syst_1D_project_QCD}}
\end{figure*}
\section{The \texorpdfstring{\PW}{W} boson tagging scale factors \label{sec:Wtag_SF}}
The $\PW$ boson tagger used in this analysis is the same as that defined and used in previous CMS analyses~\cite{EXO-12-024,Khachatryan:2014vla}. Since the $\PW$ boson tagging efficiency does not depend significantly on the event topology, we use the same scale factor~\cite{EXO-12-024}
\begin{equation}
\textrm{SF}_{\textrm{Wtag}} = 0.86 \pm 0.07 ,
\end{equation}
as used in these previous analyses,
to correct for differences between \textsc{FullSim} and data in the $\PW$ boson tagging efficiency. The scale factor is applied to processes with genuine hadronically decaying $\PW$ bosons (mainly $\cPqt\cPaqt$ and signal) in the $S$ and $T$ regions.
On the other hand, the data/\textsc{FullSim} scale factors for the misidentification (mistag) efficiency for mass-tagged, antitagged, and tagged $\PW$ bosons are derived specifically for this analysis. The mistag efficiency is defined as the probability to tag, with one of the $\PW$ taggers, a jet not originating from the hadronic decay of a $\PW$ boson.
Scale factors are necessary to correct the mistag efficiencies for $\PW$ boson mass tagging and antitagging in the MC simulation of the $Q$ and $W$ control regions, respectively, whereas the mistag efficiency scale factor for $\PW$ boson tagging is used to correct simulated events with misidentified $\PW$ bosons, \eg multijet or $\PW({\to}\,\ell\PGn)$+jets events, in the $S$ and $T$ regions.
All three mistag efficiency scale factors are derived using the same multijet-enriched control region, defined in the same way as region $Q$ but without any of the selections related to the razor variables and $\PW$ tagging.
To obtain the mistag efficiencies $\epsilon_\mathrm{f}$ for $\PW$ boson tagging, mass tagging and antitagging, we use the leading CA8 jet in each event and measure the fraction of these jets passing the given tagger.
After obtaining $\epsilon_\mathrm{f}$ in both data and \textsc{FullSim}, we compute the scale factor,
\begin{equation}
\textrm{SF}(\pt) = \frac{\epsilon_\mathrm{f}^{\textrm{data}}(\pt)}{\epsilon_\mathrm{f}^{\textsc{FullSim}}(\pt)}.
\end{equation}
The scale factors for the $\PW$ boson tagging, mass tagging, and antitagging mistag efficiencies range from 1.0 to 1.2, from 1.1 to 1.4, and from 1.2 to 1.5, respectively, depending on the CA8 jet \pt.
The uncertainties in the scale factors include the statistical uncertainty as well as the trigger efficiency and jet energy scale uncertainties, and range from 2 to 7\%, depending on the CA8 jet \pt.
Because the signal processes are simulated with \textsc{FastSim}, the resulting tagging efficiencies must be corrected for modeling differences between the programs \textsc{FastSim} and \textsc{FullSim}.
To compute the $\PW$ boson tagging efficiency \textsc{FullSim}/\textsc{FastSim} scale factor we use a sample of $\cPqt\cPaqt$ events simulated with \textsc{FullSim} and \textsc{FastSim}.
We first determine the $\PW$ boson tagging efficiency for both samples, considering only events with exactly one hadronically decaying $\PW$ boson at the generator level for which the closest reconstructed CA8 jet lies within $\Delta R = 0.8$ of the $\PW$ boson.
Since we wish to select boosted $\PW$ bosons, and not boosted top quarks, we require that there be no (generator-level) $\cPqb$ quark from the top quark decay within the cone of the closest CA8 jet.
The $\PW$ boson tagging efficiency as a function of \pt for a given sample is then obtained by dividing the \pt distribution of the closest CA8 jets that also satisfy the tagging condition ($70 < m_\text{jet} < 100\GeV$ and $\tau_{2}/\tau_{1} < 0.5$) by the \pt distribution of all of the closest CA8 jets.
To determine the \textsc{FullSim}/\textsc{FastSim} scale factor for the $\PW$ boson tagging efficiency, we divide the efficiencies $\epsilon$ obtained from the \textsc{FullSim} and \textsc{FastSim} samples, $\textrm{SF}_{\textrm{Full/Fast}}(\pt) = \epsilon^{\textsc{FullSim}}(\pt)/\epsilon^{\textsc{FastSim}}(\pt)$.
This scale factor is applied to all signal samples and varies between 0.89--0.95, depending on the \pt of the given CA8 jet, with an uncertainty of less than 3\%.
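Both the data/\textsc{FullSim} mistag scale factors and the \textsc{FullSim}/\textsc{FastSim} tagging scale factor are ratios of efficiencies measured in the corresponding samples. A minimal sketch with hypothetical pass/total counts in one CA8 jet \pt bin (the counts are illustrative):

```python
def efficiency(n_pass, n_total):
    """Fraction of jets passing a given tagger."""
    return n_pass / n_total

def scale_factor(eff_num, eff_den):
    """Ratio of efficiencies, e.g. data/FullSim or FullSim/FastSim."""
    return eff_num / eff_den

# Hypothetical mistag counts for leading CA8 jets in one pt bin:
eff_data = efficiency(60, 1000)      # fraction tagged in data
eff_fullsim = efficiency(50, 1000)   # fraction tagged in FullSim
sf_mistag = scale_factor(eff_data, eff_fullsim)

# Hypothetical W tagging efficiencies for matched generator-level W bosons:
eff_full = efficiency(500, 1000)
eff_fast = efficiency(550, 1000)
sf_full_fast = scale_factor(eff_full, eff_fast)
```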
\section{Statistical analysis \label{sec:likelihood}}
The statistical analysis of the observations in the signal region is based on a likelihood function, $L(\sigma)$, given by
\begin{equation}
\ifthenelse{\boolean{cms@external}}
{
\begin{split}
L(\sigma) = \ \ \ & \int \rd\mathcal{L} \int \rd\vec{\theta}_1 \cdots \int \rd\vec{\theta}_M \\
& \left[ \prod_{i=1}^M p(N^S_i | \sigma, \mathcal{L}, \vec{\theta}_i) \right] \,
\pi(\vec{\theta}_1,\cdots,\vec{\theta}_M) \, \pi(\mathcal{L}) ,
\end{split}
}
{
L(\sigma) = \int \rd\mathcal{L} \int \rd\vec{\theta}_1 \cdots \int \rd\vec{\theta}_M
\left[ \prod_{i=1}^M p(N^S_i | \sigma, \mathcal{L}, \vec{\theta}_i) \right] \,
\pi(\vec{\theta}_1,\cdots,\vec{\theta}_M) \, \pi(\mathcal{L}) ,
}
\label{eq:marginal}
\end{equation}
where $\sigma$ is the total signal cross section, $M = 25$ is the number of bins in the $(\MR,R^2)$ plane,
$N^S_i$ is the observed count in bin $i$ of the signal region,
and the bin-by-bin parameters $\epsilon$, $b^S_\text{multijet}$, $b^S_\mathrm{TTJ}$,
$b^S_{\PW(\to\ell\PGn)}$, and $b^S_\text{other}$ are denoted collectively by
$\vec{\theta}$.
The parameter $\epsilon$ represents the $M$
signal efficiencies (including acceptance) for a given signal model, while
the bin-by-bin background parameters for a given background process in the $S$ region are denoted by $b^{S}_\text{process}$.
The function $\pi(\mathcal{L})$ is the integrated luminosity prior and $\pi(\vec{\theta}_1,\cdots,\vec{\theta}_M)$ is an evidence-based prior
constructed from observations in the control regions and the four global scale factors $\kappa^{A/B}_\text{process} = \sum_i b^A_{\text{process}, \mathrm{MC}, i} / \sum_i b^B_{\text{process}, \mathrm{MC}, i}$, where the sum is over all bins of the simulated data; $A$ and $B$ denote any of the $S$, $Q$, $T$, or $W$ regions.
The association of the global scale factors with the control regions is shown in
Fig.~\ref{fig:BoostWorkflow}, which also shows which control regions
provide constraints on the background parameters, $b^S_\text{process}$. Although we use the same global scale factors in each
bin, shape uncertainties in the simulated distributions are accounted
for by allowing the uncertainty in the scale factors to be bin dependent. The
25 signal bins in the $(\MR,R^2)$ plane are divided into three sets for which different uncertainties are applied: the four bins nearest the origin
(set 1), the five surrounding bins (set 2), and the remaining bins
(set 3).
The likelihood per bin is taken to be
$p(N^S | \sigma, \mathcal{L}, \vec{\theta}) = \textrm{Poisson}(N^S, \epsilon \sigma \mathcal{L} + b^S_\text{multijet} + b^S_\mathrm{TTJ} + b^S_{\PW(\to\ell\PGn)} + b^{S}_\text{other})$.
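The structure of Eq.~(\ref{eq:marginal}), a product of per-bin Poisson terms averaged over prior samples, can be sketched as a toy with a handful of bins and externally supplied prior draws (the names and numbers are illustrative, not the analysis implementation):

```python
import math

def poisson_pmf(n, mu):
    """Poisson probability of observing n counts with mean mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

def marginal_likelihood(sigma, n_obs, prior_samples):
    """Average the multibin Poisson likelihood over sampled
    (lumi, efficiency, background) prior draws."""
    total = 0.0
    for lumi, eff, bkg in prior_samples:
        like = 1.0
        for i, n in enumerate(n_obs):
            mu = eff[i] * sigma * lumi + bkg[i]
            like *= poisson_pmf(n, mu)
        total += like
    return total / len(prior_samples)

# Toy: two bins, one prior draw; at sigma = 0 only the backgrounds contribute.
samples = [(19.7, [0.01, 0.02], [1.0, 1.0])]
L0 = marginal_likelihood(0.0, [1, 0], samples)
```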
\begin{figure}[p]
\centering
\ifthenelse{\boolean{cms@external}}{
\includegraphics[width=0.48\textwidth]{Figure_009}
}{
\includegraphics[width=0.75\textwidth]{Figure_009}
}
\caption{Graphical representation of the analysis method.
The circles represent the signal ($S$) and control ($Q,T,W$) regions, with their definition summarized in the associated boxes.
Listed inside each circle are the likelihood parameters relevant to that region: the bin-by-bin background parameters $b^\text{region}_\text{process}$ for the given region and background process, as well as the global scale factors $\kappa^{A/B}_\text{process} = \sum_i b^A_{\text{process}, \mathrm{MC}, i} / \sum_i b^B_{\text{process}, \mathrm{MC}, i}$, where the sum is over all bins of the simulated data.
A connection between two regions indicates that one or more parameters are shared.
The total expected background in each $(\MR,R^2)$ bin is the sum of the terms shown for each region.
Furthermore, associated with each bin of each region is an observed count, $N^\text{region}$, a simulated count, $N^\text{region}_{\text{process}, \mathrm{MC}}$, and
a count $N^\text{region}_{\text{other}, \mathrm{MC}}$ equal to the sum of the smaller backgrounds,
$\cPZ/\cPgg^*{\to}\,\ell\bar{\ell}+$jets, diboson, triboson, and $\cPqt\cPaqt V$,
with an associated parameter in the likelihood $b^\text{region}_\text{other}$.
\label{fig:BoostWorkflow}}
\end{figure}
The integral in Eq.~(\ref{eq:marginal}) is approximated using MC integration by sampling
the priors $\pi(\mathcal{L})$ and $\pi(\vec{\theta}_1,\cdots,\vec{\theta}_M)$ and averaging the
multibin likelihood with respect to the sampled points $\{(\mathcal{L}, \vec{\theta}_1,\cdots,\vec{\theta}_M)\}$.
The priors for the expected integrated luminosity $\mathcal{L}$, signal efficiencies $\epsilon$, and
simulated background counts $b^\text{region}_{\text{process}, \mathrm{MC}}$ are modeled with
gamma function densities,
\begin{align}
\mathrm{Ga}(x, \gamma, \beta) &= \beta^{-1}(x/\beta)^{\gamma-1} \exp(-x / \beta) / \Gamma(\gamma),
\label{eq:gamma}
\end{align}
in which the mode is set to $c$
and the variance to $\delta c^2$,
where
$c \pm \delta c$ denotes either the measured integrated luminosity
or,
for a given bin of a
given region and process, the simulated signal efficiency,
or the simulated background count. From $c \pm \delta c$, we
calculate
the gamma density parameters,
\begin{align}
\gamma &= \Bigl[(k + 2) + \sqrt{(k+2)^2 - 4}\Bigr]/2,\\
\beta &= \Bigl[\sqrt{c^2 + 4\delta c^2} - c\Bigr]/2,
\end{align}
where $k = (c / \delta c)^2$.
For empty bins, we set $\gamma = 1$ and the bin value is
constrained to zero by
setting the $\beta$ parameter to $10^{-4}$.
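The mapping from a central value and uncertainty $c \pm \delta c$ to the gamma density parameters can be checked numerically: with $\gamma$ and $\beta$ as above, the density has mode $\beta(\gamma-1) = c$ and variance $\gamma\beta^2 = \delta c^2$. A sketch:

```python
import math

def gamma_params(c, dc):
    """Parameters of Ga(x; gamma, beta) with mode c and variance dc^2."""
    k = (c / dc) ** 2
    gamma = ((k + 2.0) + math.sqrt((k + 2.0) ** 2 - 4.0)) / 2.0
    beta = (math.sqrt(c ** 2 + 4.0 * dc ** 2) - c) / 2.0
    return gamma, beta

c, dc = 10.0, 2.0                # e.g. a simulated count of 10 +/- 2 events
g, b = gamma_params(c, dc)
mode = b * (g - 1.0)             # mode of the gamma density
variance = g * b ** 2            # variance of the gamma density
```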
For the signal efficiencies and backgrounds,
the prior is modeled
hierarchically,
\begin{equation}
\ifthenelse{\boolean{cms@external}}
{
\begin{split}
\pi(\vec{\theta}_1,\cdots,\vec{\theta}_M) =
\int \rd\vec{c}_1 & \cdots \int \rd\vec{c}_M \int \rd\vec{\phi} \\ \left[ \prod_{i=1}^M \pi(\vec{\theta}_i | \vec{c}_i ) \right ] & \,
\pi(\vec{c}_1,\cdots,\vec{c}_M |
\vec{\phi} ) \pi(\vec{\phi}),
\end{split}
}
{
\pi(\vec{\theta}_1,\cdots,\vec{\theta}_M) =
\int \rd\vec{c}_1 \cdots \int \rd\vec{c}_M \int \rd\vec{\phi} \, \left[ \prod_{i=1}^M \pi(\vec{\theta}_i | \vec{c}_i ) \right ] \,
\pi(\vec{c}_1,\cdots,\vec{c}_M |
\vec{\phi} ) \pi(\vec{\phi}),
}
\label{eq:prior}
\end{equation}
where $\vec{\phi}$ represents parameters that characterize the independent
sources
of systematic uncertainty, described in Section~\ref{sec:systematics}.
The integral in Eq.~(\ref{eq:prior}) is
evaluated
as follows: $\vec{\phi}$
values are sampled from
$\pi(\vec{\phi})$ following the procedure described in Section~\ref{sec:systematics},
then $\vec{c}_{i}$ values from $\pi(\vec{c}_1,\cdots,\vec{c}_M | \vec{\phi})$, then $\vec{\theta}_i$ values from
$\pi(\vec{\theta}_i | \vec{c}_i)$. The sampling from $\pi(\vec{\phi})$ and $\pi(\vec{\theta}_i|\vec{c}_i)$
is straightforward because the functional forms are known. However,
the sampling of $\vec{c}_i$ requires running the analysis multiple times,
yielding an ensemble of
histograms in the $(\MR, R^2)$ plane, which
is the output of the procedure described in
Section~\ref{sec:systematics}. Thereafter, the sampling, which yields
the points
$\{(\mathcal{L}, \vec{\theta}_1,\cdots,\vec{\theta}_M)\}$, proceeds as follows:
\begin{enumerate}
\item sample the integrated luminosity parameter;
\item sample the efficiency parameters, $\epsilon$, for every bin and
every signal model;
\item sample the background parameters $b^\text{region}_\text{process,
MC}$ for
every bin and every background;
\item scale $b^Q_{\text{multijet, MC}}$ by a random number sampled
from a gamma density of unit
mode and standard deviation 0.36 in order to induce
the 42\% uncertainty in the multijet global scale factor $\kappa^{Q/S}_\text{multijet}$
that accounts for deficiencies in
the modeling of multijet production, as derived from the second cross-check mentioned in Section~\ref{sec:selection};
\item compute the $\kappa$ parameters from the appropriate
background sums, for example, $\kappa^{Q/S}_\text{multijet} = \sum_i b^Q_{\text{multijet}, \mathrm{MC}, i} /
\sum_i b^S_{\text{multijet}, \mathrm{MC}, i}$;
\item scale each $\kappa$ value by a
random number sampled from a gamma density with unit mode and
standard deviation of either 0.5 or 1.0 for the bins in set 2
or set 3, respectively, to account for the larger uncertainties in the
tails of the simulated distributions; and
\item sample the
background parameters $b^S_\text{multijet}$, $b^S_\mathrm{TTJ}$,
and
$b^S_{\PW(\to\ell\PGn)}$,
from the
Poisson
models of the control regions; for example, for region $Q$,
$\textrm{Poisson}(N^Q , \kappa^{Q / S} b^S_\text{multijet} + b^Q_\text{ other})$ is mapped to a posterior density in $b^S_\text{multijet}$ using a
flat prior in $b^S_\text{multijet}$, and $b^S_\text{multijet}$ is sampled from the posterior density.
\end{enumerate}
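Step 7 above maps a Poisson model with a flat prior to a posterior density in the background parameter. A minimal numerical sketch, using a simple grid rather than whatever sampler the analysis actually employs (the grid bounds and yields are illustrative): the posterior in $b^S_\text{multijet}$ is proportional to $\textrm{Poisson}(N^Q, \kappa b + b^Q_\text{other})$, which peaks at $b = (N^Q - b^Q_\text{other})/\kappa$.

```python
import math

def poisson_pmf(n, mu):
    """Poisson probability of observing n counts with mean mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

def posterior_on_grid(n_q, kappa, b_other, b_max, npts=2000):
    """Flat-prior posterior in b, proportional to Poisson(n_q; kappa*b + b_other)."""
    grid = [b_max * (i + 0.5) / npts for i in range(npts)]
    weights = [poisson_pmf(n_q, kappa * b + b_other) for b in grid]
    return grid, weights

def sample_from_grid(grid, weights, u):
    """Inverse-CDF draw from the gridded posterior, given u uniform in [0, 1)."""
    target = u * sum(weights)
    acc = 0.0
    for b, w in zip(grid, weights):
        acc += w
        if acc >= target:
            return b
    return grid[-1]

# Hypothetical yields: N^Q = 50 observed, kappa = 10, b_other = 5,
# so the posterior peaks near b = (50 - 5) / 10 = 4.5.
grid, weights = posterior_on_grid(50, 10.0, 5.0, b_max=10.0)
b_mode = grid[max(range(len(weights)), key=weights.__getitem__)]
b_draw = sample_from_grid(grid, weights, 0.5)
```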
If no statistically significant signal is observed, we determine limits on the total signal cross section using the CLs criterion~\cite{Junk:1999kv,Read:2002hq,LHCCLs} and
the test statistic $t_\sigma = 2 \ln [ L(\hat{\sigma}) / L(\sigma)]$ when
$0 \leq\hat{\sigma} \leq \sigma$, and $t_\sigma = 0$ when $\hat{\sigma} > \sigma$. Large
values of $t_\sigma$ indicate incompatibility between the
best fit hypothesis $\sigma^\prime = \hat{\sigma}$ and the hypothesis
$\sigma^\prime = \sigma$ being tested. Given the $p$ values
$p_0 = \mathrm{Pr}(t_\sigma > t_{\sigma, \text{obs}} | \sigma^\prime = 0)$ and
$p_\sigma = \mathrm{Pr}(t_\sigma > t_{\sigma, \text{obs}} | \sigma^\prime=\sigma)$, obtained
by simulation, a 95\% CLs upper
limit on the cross section is obtained by solving
$\mathrm{CLs}(\sigma) = p_\sigma / p_0 = 0.05$. The
quantity $t_{\sigma, \text{obs}}$ denotes the
observed values of the test statistic, one for each hypothesis $\sigma^\prime=\sigma$.
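The CLs construction can be sketched with toy ensembles of the test statistic (the toy values below are illustrative, not from the analysis): both $p$ values are tail fractions above the observed test statistic, and the upper limit is found by scanning $\sigma$ until their ratio reaches 0.05.

```python
def p_value(t_obs, toys):
    """Fraction of toy test-statistic values exceeding the observed one."""
    return sum(t > t_obs for t in toys) / len(toys)

def cls(t_obs, toys_bkg_only, toys_sig_plus_bkg):
    """CLs = p_sigma / p_0; small values disfavor the tested cross section."""
    return p_value(t_obs, toys_sig_plus_bkg) / p_value(t_obs, toys_bkg_only)

# Toy ensembles: under the tested signal hypothesis t_sigma tends to be smaller.
toys_bkg = [1.0 * i for i in range(10)]   # 0, 1, ..., 9
toys_sig = [0.3 * i for i in range(10)]   # 0, 0.3, ..., 2.7
cls_value = cls(2.0, toys_bkg, toys_sig)  # p_sigma = 0.3, p_0 = 0.7
```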
\section{Systematic uncertainties \label{sec:systematics}}
The input to the statistical analysis is an ensemble of histograms in the $(\MR, R^2)$ plane that incorporate systematic uncertainties in the simulated signal and background samples. The independent systematic effects, described below, are sampled
simultaneously. For each sampled systematic effect,
a Gaussian variate with zero mean and unit variance is used in the calculation
of the random shift due to the systematic effect
for all the signal and background models.
Likewise, the same randomly sampled PDFs are used for all signal and background models. In this way, the statistical dependencies among all bins of the signal and background models are correctly, and automatically, modeled. The sampling of the systematic effects is repeated several hundred times.
In all cases, except for those associated with PDFs, the systematic uncertainties are in the scale factors (SF)
applied to the simulated samples to correct them for modeling deficiencies. We consider the systematic uncertainties in the following quantities:
\begin{itemize}
\item {\bf Jet energy scale:} The uncertainties are dependent on jet \pt and $\eta$~\cite{Chatrchyan:2011ds}.
\item {\bf Parton distribution functions:} We use 100 randomly sampled sets of PDFs from NNPDF23\_lo\_as\_0130\_qed~\cite{nnpdf}, MSTW2008lo68cl~\cite{Martin:2009iq}, and CT10~\cite{Lai:2010vv}. The samples for the latter two are generated using the program
\textsc{hessian2replicas}, recently
released with {LHAPDF6}~\cite{LHAPDF6}. Given a sampled set $i$, for PDF set $K$ and the
PDF set $O$ with which the events were simulated, events are reweighted using the scale factors,
$\mathrm{SF}_{K, i} = w_{K, i} / w_{O}$,
where the weights $w$ are products of the event-by-event PDFs for the colliding partons.
\item {\bf Trigger efficiency: } We take the uncertainty in each bin, as a function of $\HT$ and leading jet $\pt$, to be the maximum of the statistical uncertainty in the efficiency after the baseline selection and the difference between the efficiencies before and after the baseline selection.
\item {\bf $\cPqb$ tagging scale factors:} The $\cPqb$ tagging performance differs between data and simulation, and differs between \textsc{FullSim} and \textsc{FastSim}, which is used to model signal processes. The simulated events are therefore corrected by applying jet flavor-, \pt-, and $\eta$-dependent data/\textsc{FullSim} and \textsc{FullSim}/\textsc{FastSim} scale factors on the $\cPqb$ tagging or mistagging efficiency. The uncertainties in these scale factors are also jet flavor, \pt, and $\eta$ dependent, and are of the order of a few percent~\cite{btag8TeV}.
\item {\bf $\PW$ tagging scale factors:} The $\PW$ boson tag efficiency, and the mistag efficiency for $\PW$ boson tagging, $\PW$ boson mass tagging, and $\PW$ boson antitagging differ between data and simulation, as well as between \textsc{FullSim} and \textsc{FastSim}. Data/\textsc{FullSim} and \textsc{FullSim}/\textsc{FastSim} scale factors, whose uncertainties are functions of jet $\pt$, are applied to the simulated samples.
\item {\bf Lepton identification:} For electrons, we use \pt- and $\eta$-dependent scale factors for the identification efficiency. The uncertainties
are also \pt and $\eta$ dependent~\cite{Khachatryan:2015hwa}. The scale factor for the muon identification efficiency equals one and the corresponding uncertainties are negligible~\cite{Chatrchyan:2012xi}.
\item {\bf Initial-state radiation:} Deficiencies in the modeling of ISR
are corrected by reweighting~\cite{Chatrchyan:2013xna} the signal samples using an event weight
that depends on the \pt of the recoiling system. The associated systematic uncertainty is equal to the difference $1 - w_\mathrm{ISR}$, where $w_\mathrm{ISR}$ is the ISR event weight.
\item {\bf Top quark transverse momentum:} Differential top quark pair production cross section analyses have shown that the shape of the \pt spectrum of top quarks in data is softer than predicted~\cite{toppt}. To account for this, we reweight events based on the \pt of the generator level $\cPqt$ and $\cPaqt$ quarks in the $\cPqt\cPaqt$ simulation.
The uncertainty associated with this reweighting is taken to be equal to the full amount of the reweighting.
\item {\bf Pileup: } Simulated events are reweighted so that their vertex multiplicity distribution matches that observed in data. The minimum-bias cross section is varied by ${\pm} 5\%$, thereby changing the shape of the vertex multiplicity distribution and therefore the weights.
\item {\bf Multijet spectrum:} The cross-checks described in Section~\ref{sec:selection} showed that there is a 42\% uncertainty in the multijet scale factor $\kappa$ between the $S$ and $Q$ regions. This uncertainty is incorporated by increasing the uncertainty in the $\kappa$
parameter, as described in Section~\ref{sec:likelihood}.
\item {\bf $\cPZ ({\to}\, \PGn \PAGn)+$jets prediction:} About 8\% of the background in the signal region is composed of $\cPZ({\to}\,\PGn\PAGn)$+jets events. Since we require the presence of at least one $\cPqb$-tagged jet, and given the known deficiency in modeling $\cPZ$ production in association with heavy flavor quarks~\cite{Chatrchyan:2014dha}, we include an extra systematic uncertainty in the $\cPZ({\to}\,\PGn\PAGn)$+jets contribution. This uncertainty is estimated using a data control region enriched in $\cPZ({\to}\,\ell \bar{\ell})+$jets, required to have exactly two tight leptons with the same flavor ($\Pe$ or $\PGm$) and opposite charge, $60 < m_{\ell\bar{\ell}} < 120\GeV$, at least one $\cPqb$-tagged jet, and at least one $\PW$ mass-tagged jet. We estimate the uncertainty by first computing bin-by-bin data/simulation ratios in this control region. Then, we take the uncertainty in the ratio in each bin as the standard deviation of a Gaussian density, normalized to the number of events in that bin. Finally, the Gaussian
densities from all bins are superposed, and the uncertainty is taken to be the magnitude of the 68\% band around a ratio of unity.
\end{itemize}
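For concreteness, the PDF reweighting described above can be sketched as follows. This is an illustrative stand-in rather than the analysis code, and all per-event weights shown are hypothetical:

```python
# Illustrative sketch of the PDF reweighting described above, not the
# analysis code itself. Each event carries a weight from the PDF set O
# used in production and a weight from sampled member i of set K; the
# per-event scale factor is their ratio. All numbers are hypothetical.

def pdf_scale_factors(w_K_i, w_O):
    """Per-event scale factors SF_{K,i} = w_{K,i} / w_O."""
    return [wk / wo for wk, wo in zip(w_K_i, w_O)]

def reweighted_yield(event_weights, scale_factors):
    """Total event yield after applying the PDF scale factors."""
    return sum(w * sf for w, sf in zip(event_weights, scale_factors))

w_O = [0.8, 1.0, 1.2]    # per-event weights from production set O
w_K_i = [0.9, 1.1, 1.1]  # per-event weights from member i of set K
sf = pdf_scale_factors(w_K_i, w_O)
total = reweighted_yield([1.0, 1.0, 1.0], sf)
```

Repeating this for every sampled member yields a distribution of reweighted yields, from which an uncertainty can be extracted.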
As noted above, all systematic effects are varied simultaneously across $(\MR, R^2)$ bins. However,
to assess the effect of each systematic uncertainty individually, each one is varied by one standard deviation up and down.
The effect on the background count and signal efficiency in the signal region is shown in Table~\ref{tab:bgsigsys}.
The signal values are obtained from averaging over all mass points in the T1ttcc model ($\Delta m = 25\GeV$) plane.
The PDF systematic uncertainties are obtained by running over 100 different members from the three PDF sets and fitting a Gaussian function to the efficiency distribution.
The last line in the table corresponds to the full sampling of the systematic uncertainties. To obtain this value, we again fit a Gaussian function to the efficiency distribution obtained from the full systematic sampling including 500 variations.
Although the effects of some of these systematic uncertainties on the backgrounds are large, they do not influence our results greatly because only the ratios of simulated background counts enter the statistical analysis, not the absolute values. Therefore, most of the systematic effects cancel.
The statistical precision on the number of events in the control regions is the leading uncertainty in the background prediction for the search bins at large $\MR$ or $R^2$.
The dominant systematic uncertainty in the signal efficiency arises from the PDFs.
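The Gaussian fits to the efficiency distributions can be sketched as follows. For a Gaussian density the maximum-likelihood fit parameters coincide with the sample mean and (population) standard deviation, so this illustrative stand-in computes those directly; the efficiency values are hypothetical:

```python
# Illustrative stand-in for the Gaussian fits described above. For a
# Gaussian density, the maximum-likelihood fit parameters coincide with
# the sample mean and (population) standard deviation, so we compute
# those directly. The efficiency values below are hypothetical.
import math

def gaussian_fit(efficiencies):
    """Return (mu, sigma) of a Gaussian fitted to the distribution."""
    n = len(efficiencies)
    mu = sum(efficiencies) / n
    sigma = math.sqrt(sum((e - mu) ** 2 for e in efficiencies) / n)
    return mu, sigma

def relative_uncertainty(efficiencies):
    """Symmetric relative uncertainty, in percent."""
    mu, sigma = gaussian_fit(efficiencies)
    return 100.0 * sigma / mu

# Signal efficiencies obtained under four hypothetical variations:
effs = [0.050, 0.052, 0.048, 0.050]
mu, sigma = gaussian_fit(effs)
```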
{
\begin{table*}[tpb]
\centering
\topcaption{Summary of $\pm 1$ standard deviation systematic uncertainties for the average signal efficiency over all mass assumptions in the T1ttcc model ($\Delta m=25\GeV$), and for the total background count in the signal region, unless indicated otherwise, as determined from simulation. \label{tab:bgsigsys}}
\newcolumntype{x}{D{x}{\,}{-1}}
\begin{scotch}{l x x}
Systematic effect & \multicolumn{1}{c}{Signal (\%)} & \multicolumn{1}{c}{Background (\%)} \\
\hline
Jet energy scale & +2.2 x {-2.1} & +10.9 x {-5.2}\\
Trigger & +1.1 x {-3.3} & +3.4 x {-5.7}\\
$\cPqb$ tagging \textsc{FullSim} & +2.1 x {-2.3}& +3.9 x {-4.0}\\
$\cPqb$ tagging \textsc{FastSim} & +1.2 x {-1.3} & \multicolumn{1}{c}{\NA} \\
$\PW$ tag efficiency \textsc{FullSim} & +9.0 x {-8.9} & +4.6 x {-4.6}\\
$\PW$ tag efficiency \textsc{FastSim} & +2.2 x {-2.2}&\multicolumn{1}{c}{\NA}\\
$\PW$ tag mistag efficiency \textsc{FullSim} &\multicolumn{1}{c}{\NA}& +1.4 x {-1.4} \\
$\PW$ antitag mistag efficiency \textsc{FullSim} ($Q$ region only) &\multicolumn{1}{c}{\NA}& +2.6 x {-2.6} \\
$\PW$ mass-tag mistag efficiency \textsc{FullSim} ($W$ region only) &\multicolumn{1}{c}{\NA}& +2.3 x {-2.3} \\
Electron identification ($T$ and $W$ region only) &\multicolumn{1}{c}{\NA}& +0.2 x {-0.2} \\
Pileup & +0.5 x {-0.5} & +1.0 x {-1.1}\\
ISR & +6.6 x {-6.6} &\multicolumn{1}{c}{\NA}\\
Top quark $\pt$ &\multicolumn{1}{c}{\NA}& +20.5 x {-14.4} \\
$\cPZ(\to\PGn\PAGn)+$ heavy flavor &\multicolumn{1}{c}{\NA}& +4.0 x {-4.0} \\
PDF & \multicolumn{1}{c}{$20.7$} & \multicolumn{1}{c}{$10.7$} \\
\hline
All & \multicolumn{1}{c}{$24.4$} & \multicolumn{1}{c}{$22.1$} \\
\end{scotch}
\end{table*}
}
\section{Results and interpretation \label{sec:interpretation}}
Our background predictions for each bin in the
$(\MR,R^2)$ plane are presented in
Fig.~\ref{fig:results_prediction} and in
Table~\ref{tab:results_prediction}, which also lists the observed event yield in each bin.
The background predictions are presented as the mean and standard deviation as
determined from the background prior $\pi(\theta)$
described in Section~\ref{sec:likelihood}.
The observed event yields are found to be in agreement
with the predicted backgrounds from SM processes. Consequently, no evidence of a signal is observed.
\begin{figure*}[p]
\centering
\includegraphics[width=0.49\textwidth]{Figure_010-a}
\includegraphics[width=0.49\textwidth]{Figure_010-b}
\includegraphics[width=0.49\textwidth]{Figure_010-c}
\includegraphics[width=0.49\textwidth]{Figure_010-d}
\includegraphics[width=0.49\textwidth]{Figure_010-e}
\caption{Background predictions and observations. The results are shown in bins of $\MR$ for each $R^2$ bin.
The hatched band represents the total uncertainty in the background prediction.
Overlaid are two signal distributions corresponding to the T1ttcc model with $m_{\PSg} =1\TeV$, $m_{\PSQt} =325\GeV$, and $m_{\PSGczDo} =300\GeV$, and the T1t1t model with $m_{\PSg} =800\GeV$, $m_{\PSQt} =275\GeV$, and $m_{\PSGczDo} =100\GeV$.
\label{fig:results_prediction}}
\end{figure*}
\begin{table*}[p]
\centering
\topcaption{Event yields for the predicted backgrounds and for the data in each of the signal bins in $R^2$ and $\MR$.
The uncertainties in the predictions are the combined statistical and systematic uncertainties obtained using the sampling procedure described in the text.
\label{tab:results_prediction}}
\cmsTable{\small
\newcolumntype{y}{D{,}{\,\pm\,}{5,5}}
\newcolumntype{z}{D{,}{,\,}{7,7}}
\begin{scotch}{ c z | y y y y | y | c }
\multicolumn{1}{c}{$R^2$} & \multicolumn{1}{c |}{$\MR$ (\GeVns)} & \multicolumn{1}{c}{$\cPqt\cPaqt$} & \multicolumn{1}{c}{Multijet} & \multicolumn{1}{c}{$\PW({\to}\,\ell \PGn)$} & \multicolumn{1}{c|}{Other} & \multicolumn{1}{c|}{Total} & Observed\\
\hline \hline
\multirow{5}{*}{[0.08, 0.12[} & [800, 1000[ & 47.1 , 8.6 & 21.1 , 32.0 & 6.1 , 1.9 & 6.0 , 2.3 & 80.2 , 33.4 & 75 \\
& [1000, 1200[ & 15.2 , 4.1 & 4.7 , 9.9 & 1.9 , 0.9 & 2.2 , 0.9 & 24.0 , 10.6 & 24 \\
& [1200, 1600[ & 7.3 , 4.8 & 1.4 , 0.9 & 1.3 , 1.0 & 1.4 , 0.7 & 11.4 , 5.1 & 10 \\
& [1600, 2000[ & 0.8 , 1.2 & 0.2 , 0.2 & 0.4 , 0.5 & 0.1 , 0.0 & 1.5 , 1.3 & 0 \\
& [2000, 4000] & 0.8 , 1.1 & 0.0 , 0.1 & 0.4 , 0.6 & 0.1 , 0.1 & 1.4 , 1.3 & 0 \\
\hline
\multirow{5}{*}{[0.12, 0.16[} & [800, 1000[ & 15.5 , 4.2 & 2.5 , 1.2 & 1.1 , 0.8 & 2.8 , 1.2 & 21.9 , 4.8 & 34 \\
& [1000, 1200[ & 3.4 , 1.8 & 0.5 , 0.3 & 1.3 , 0.6 & 1.2 , 0.7 & 6.4 , 2.0 & 8 \\
& [1200, 1600[ & 2.8 , 2.3 & 0.2 , 0.1 & 0.6 , 0.5 & 0.6 , 0.4 & 4.1 , 2.3 & 3 \\
& [1600, 2000[ & 0.8 , 1.2 & 0.0 , 0.1 & 0.2 , 0.3 & 0.1 , 0.0 & 1.1 , 1.2 & 0 \\
& [2000, 4000] & 0.8 , 1.1 & 0.0 , 0.0 & 0.2 , 0.4 & 0.0 , 0.0 & 1.0 , 1.1 & 0 \\
\hline
\multirow{5}{*}{[0.16, 0.24[} & [800, 1000[ & 9.1 , 5.8 & 0.7 , 0.4 & 1.8 , 1.4 & 2.4 , 1.1 & 14.0 , 6.0 & 16 \\
& [1000, 1200[ & 2.5 , 2.4 & 0.2 , 0.1 & 0.5 , 0.5 & 1.5 , 0.8 & 4.7 , 2.5 & 4 \\
& [1200, 1600[ & 0.9 , 1.0 & 0.1 , 0.1 & 1.3 , 0.9 & 0.2 , 0.2 & 2.5 , 1.4 & 2 \\
& [1600, 2000[ & 0.9 , 1.6 & 0.0 , 0.0 & 0.2 , 0.3 & 0.0 , 0.0 & 1.1 , 1.7 & 1 \\
& [2000, 4000] & 0.9 , 1.3 & 0.0 , 0.0 & 0.2 , 0.3 & 0.0 , 0.0 & 1.1 , 1.3 & 0 \\
\hline
\multirow{5}{*}{[0.24, 0.5[} & [800, 1000[ & 7.4 , 7.0 & 0.1 , 0.1 & 0.9 , 1.2 & 2.1 , 1.0 & 10.4 , 7.2 & 8 \\
& [1000, 1200[ & 1.3 , 1.4 & 0.0 , 0.0 & 0.9 , 1.0 & 0.6 , 0.3 & 2.7 , 1.6 & 0 \\
& [1200, 1600[ & 0.8 , 1.4 & 0.0 , 0.0 & 0.4 , 0.6 & 0.2 , 0.2 & 1.5 , 1.5 & 1 \\
& [1600, 2000[ & 0.8 , 1.1 & 0.0 , 0.0 & 0.2 , 0.2 & 0.1 , 0.0 & 1.0 , 1.1 & 0 \\
& [2000, 4000] & 0.8 , 1.2 & 0.0 , 0.0 & 0.2 , 0.3 & 0.0 , 0.0 & 1.1 , 1.2 & 0 \\
\hline
\multirow{5}{*}{[0.5, 1]} & [800, 1000[ & 2.0 , 1.9 & 0.0 , 0.0 & 0.4 , 0.6 & 0.5 , 0.3 & 2.9 , 2.0 & 0 \\
& [1000, 1200[ & 0.9 , 1.3 & 0.0 , 0.0 & 0.2 , 0.4 & 0.1 , 0.1 & 1.2 , 1.4 & 1 \\
& [1200, 1600[ & 0.9 , 1.2 & 0.0 , 0.0 & 0.2 , 0.3 & 0.1 , 0.1 & 1.2 , 1.3 & 0 \\
& [1600, 2000[ & 0.8 , 1.1 & 0.0 , 0.0 & 0.2 , 0.5 & 0.0 , 0.0 & 1.0 , 1.2 & 0 \\
& [2000, 4000] & 0.8 , 1.0 & 0.0 , 0.0 & 0.2 , 0.3 & 0.0 , 0.0 & 1.0 , 1.0 & 0 \\
\end{scotch}
}
\end{table*}
We interpret our results in terms of the simplified model spectra T1ttcc and T1t1t, whose diagrams are shown in Fig.~\ref{fig:diagrams}.
These models each have three mass parameters: the gluino, top squark, and LSP masses. The
mass of the gluino is varied between 600 and 1300\GeV and that of the LSP between 1 and 500\GeV, while the mass difference between the top squark and the LSP, $\Delta m$, is fixed at 10, 25, or 80\GeV for the T1ttcc model, and at 175\GeV for the T1t1t model. In both models the gluino is assumed to decay 100\% of the time into a top squark and a top quark.
To illustrate the expected signal sensitivity, we show in Fig.~\ref{fig:eff_T1ttcc} the signal efficiencies as a function of the gluino and neutralino masses, for the T1ttcc model, to which this analysis is particularly sensitive, and for the T1t1t model.
Efficiencies of up to 6\% in the most boosted regimes are reached.
For the T1ttcc model a drop in efficiency is observed for the region of model parameter space with the lowest neutralino mass ($m_{\PSGczDo} = 1\GeV$), which can be explained by Lorentz boosts. For LSP masses higher than the mass of the charm quark, the LSP will assume most of the momentum. For the bins with the lowest LSP mass, however, the LSP and the charm quark have about equal mass, so that after the boost they will share the momentum about equally. This results in a softer \ETm spectrum and therefore a lower $R^2$ value, which reduces the efficiency substantially.
\begin{figure*}[p]
\centering
\includegraphics[width=0.49\textwidth]{Figure_011-a}
\includegraphics[width=0.49\textwidth]{Figure_011-b}
\includegraphics[width=0.49\textwidth]{Figure_011-c}
\includegraphics[width=0.49\textwidth]{Figure_011-d}
\caption{Signal efficiency for the T1ttcc and T1t1t simplified model spectra, as a function of the gluino and neutralino masses. Three mass splittings between top squark and LSP are considered for the T1ttcc model: 10, 25, and 80\GeV, shown in the top left, top right, and bottom left panels, respectively. The efficiency for the T1t1t model with a mass splitting of 175\GeV is shown in the bottom right panel.
\label{fig:eff_T1ttcc}}
\end{figure*}
Figure~\ref{fig:limits} shows the observed 95\% confidence level (\CL) upper limit on the signal cross section as a function of the gluino
and neutralino masses, obtained using the CLs method described briefly in
Section~\ref{sec:likelihood}, for the T1t1t model and for the T1ttcc model with $\Delta m=10, \,25, \,\textrm{and } 80$\GeV.
The figure also shows contours corresponding to the observed and expected lower limits, including their uncertainties, on the gluino and neutralino masses.
This analysis has made significant inroads into the parameter space of the T1ttcc model.
Gluinos with mass up to about 1.1\TeV have been excluded for neutralinos with a mass less than about 400\GeV when the top squark decays to a charm quark and a neutralino and $\Delta m < 80\GeV$. This also means that top squarks with masses up to about 400\GeV have been excluded for small mass differences with the LSP, given the existence of a gluino with a mass less than about 1.1\TeV.
Similarly, for the T1t1t model, top squarks with a mass of up to about 300\GeV have been excluded for the scenarios with $\Delta m = 175\GeV$ and gluino mass less than 700\GeV.
The observed limit for this model is lower than the expected limit because of the small excess in the low $\MR$ bins for $0.12 \leq R^2 < 0.16$, which are among the most sensitive bins for the T1t1t model.
\begin{figure*}[p]
\centering
\includegraphics[width=0.49\textwidth]{Figure_012-a}
\includegraphics[width=0.49\textwidth]{Figure_012-b}
\includegraphics[width=0.49\textwidth]{Figure_012-c}
\includegraphics[width=0.49\textwidth]{Figure_012-d}
\caption{Observed upper limit (CLs method, 95\% \CL) on the signal cross section as a function of the gluino
and neutralino masses for the T1ttcc model with $\Delta m=10,$ 25, and 80\GeV (top left, top right, bottom left panels) and for the T1t1t model with $\Delta m = 175\GeV$ (bottom right panel).
Also shown are the contours corresponding to the observed and expected lower limits, including their uncertainties, on the gluino and neutralino masses.
\label{fig:limits}}
\end{figure*}
\section{Summary \label{sec:summary}}
We have presented a search for new physics in hadronic final states with at least one boosted $\PW$ boson and a $\cPqb$-tagged jet using data binned at high values of the razor kinematic variables, $\MR$ and $R^2$. The analysis uses 19.7\fbinv of 8\TeV proton-proton collision data collected by the CMS experiment.
The SM backgrounds are estimated using control regions in data. Scale factors, derived from simulations, connect these control regions to the signal region.
The observations are found to be consistent with the SM expectation, as shown in Fig.~\ref{fig:results_prediction} and
Table~\ref{tab:results_prediction}. The results, which are encapsulated in a binned likelihood, are interpreted in terms of supersymmetric models describing pair production of heavy gluinos decaying to boosted top quarks. Limits are set on the gluino and neutralino masses using the CLs criterion on the gluino-neutralino mass plane, as shown in Fig.~\ref{fig:limits}.
Assuming that the gluino always decays into a top squark and a top quark, this analysis excludes gluino masses up to 1.1\TeV for top squarks with a mass of up to about 450\GeV that decay exclusively to a charm quark and a neutralino. In this scenario, the mass difference considered between the top squark and the neutralino is less than 80\GeV.
This analysis also excludes gluino masses of up to 700\GeV when the top squark decays solely to a top quark and a neutralino, and the mass difference between the top squark and the neutralino is around the top quark mass.
\section*{Acknowledgements}
\hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Innovation Office, Hungary; the 
Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding Agencies (CINVESTAV, CONACYT, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation.
Individuals have received support from the Marie-Curie program and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund; the OPUS program of the National Science Center (Poland); the Compagnia di San Paolo (Torino); MIUR project 20108T4XTM (Italy); the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University (Thailand); the Chulalongkorn Academic into its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845.
\clearpage
\section{Introduction}
Online communities provide us with the means to study what people are interested in and talking about.
This includes political engagement~\cite{agarwal2019tweeting}, sports discussions~\cite{yu2015world} and general news~\cite{kwak2010twitter}.
However, these communities do not exist in isolation: the same users may visit multiple platforms, and information can propagate from one community to another.
For example, we regularly see this ecosystem effect when sharing memes~\cite{zannettou2018origins} and news media~\cite{zannettou2017web}.
Studying these kinds of connections can help us to learn more about how information moves across the web, and also can give us more insight into the way people are using various platforms.
In this study, we focus on the platform \emph{Urban Dictionary} (UD),\footnote{\url{https://www.urbandictionary.com/}} which is an online, crowdsourced dictionary for English slang and colloquial language.
Urban Dictionary is known to be both complex and noisy, but also potentially invaluable for its vantage on emerging slang terminology~\cite{nguyen2018emo}.
It serves as a mirror of parts of today's society, reflecting current trends and providing a perspective on the zeitgeist. For example, surges in definitions around U.S. Presidents George W. Bush (in office 2001-2009), Barack Obama (2009-2017) and Donald Trump (2017-Present) show how real-world events impact use of language online (Figure \ref{fig:presidents}).
We posit that this connection to the zeitgeist may provide powerful insight into ongoing discussions, as well as offering a tool to better interpret online discourse.
However, to date, we lack the tools or computational studies that can measure the connection between UD and the kinds of conversations happening elsewhere on the web, e.g., Twitter. We are particularly interested in understanding how terminology may spread between platforms, and how UD influences the wider web-sphere.
To overcome this deficiency, we present the first study to explore the relationship between UD and the use of terminology on a major social media platform, Twitter. We select Twitter due to its huge scale and ease of access to data. In this work, we specifically seek to answer the following research questions:
\begin{enumerate}
\item Is activity on Urban Dictionary significantly correlated with discussions taking place on Twitter?
\item If yes, for which terms does activity on these two platforms exhibit either a positive or negative temporal correlation? What are the characteristics of these terms?
\item Is it more likely that new definitions are added to Urban Dictionary for a term if it is currently \textit{trending} on Twitter?
\end{enumerate}
To answer these questions, we collect minute-level data files containing tweets from a 1\% sample of all of Twitter between January 2012 and the end of September 2019, as well as a snapshot of the entirety of Urban Dictionary in October 2019. We use cross correlation analysis to explore the connections between activity on the two platforms, and we find that in some cases, UD activity \emph{does} reflect trends on Twitter, albeit with varying degrees of correlation and temporal lag. We categorize UD terms\footnote{Throughout the paper, we generically describe items that are defined in UD as ``terms'', while acknowledging that some of the headwords are actually multi-word expressions.} based on their association with Twitter, and find that positively correlated terms are more associated with political figures, memes, and historic events, while negatively correlated terms are more negative in sentiment, nonprofessional, and often have explicit themes. We also explore the relationship between \textit{trending} terms on Twitter and UD, finding that this tends to be strong in time periods connected to the creation of new definitions on UD.
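The cross correlation analysis above can be made concrete with the following sketch, which computes the Pearson correlation between two monthly series over a range of temporal lags. The series, lag range, and function names are our own illustration, and both series are assumed to be non-constant over the window:

```python
# Minimal sketch of the lagged cross correlation between a term's
# monthly Twitter mention counts and its monthly UD activity series.
# The series, lag range, and function names are illustrative only, and
# both series are assumed to be non-constant over the window.
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def lagged_correlations(twitter, ud, max_lag):
    """Correlation of twitter[t] with ud[t + lag] for each lag."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = twitter[: len(twitter) - lag], ud[lag:]
        else:
            a, b = twitter[-lag:], ud[: len(ud) + lag]
        out[lag] = pearson(a, b)
    return out

# A toy UD series that trails the Twitter series by one month, so the
# correlation peaks at lag = +1:
tw = [1, 3, 7, 4, 2, 1]
ud = [0, 1, 3, 7, 4, 2]
corrs = lagged_correlations(tw, ud, max_lag=2)
```

A peak at a positive lag indicates that UD activity follows Twitter activity, and a peak at a negative lag indicates the reverse.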
\textit{We warn the reader that this paper contains offensive terms due to the nature of the data. It is necessary not to censor this content, so to offer a comprehensive description of material on Urban Dictionary.}
\section{Related Work}
\pb{Multi-Platform Analyses.}
There has been a recent surge in interest surrounding multi-platform influence.
This includes understanding how news and links spread across websites~\cite{zannettou2017web}; how image content is copied between social media~\cite{zannettou2018origins}; and even how communities coordinate to impact other platforms~\cite{mariconti2019you}.
These studies have shown that web and social platforms sit within a wider ecosystem with (poorly understood) influence over each other. We contribute to this understanding by inspecting how two particular platforms influence each other: UD and Twitter.
\pb{Evolution of Language \& UD.}
People have been studying the evolution of languages for hundreds of years~\cite{hamilton2016cultural}. This includes changes in word meanings~\cite{mitra2014s}, as well as how words are used~\cite{maity2016wassup,maity2016out}. Social media, however, has provided the first opportunity to get real-world insight into day-to-day changes in language \cite{shoemark-etal-2019-room}.
We posit that UD better allows us to understand this evolution ``on the ground''.
A small set of recent studies have examined UD. Smith~\cite{smith2011urban} performed a qualitative analysis of how UD has affected and influenced both access to and formulation of the lexis, focusing on the word ``meep'' and exploring how UD might free language from prescriptive language ideologies. Wilson \emph{et al.}\xspace~\cite{lrec2020} used UD as a training corpus for neural-network based word embeddings, finding that these embeddings were competitive with other popular pre-trained word embedding models across a range of tasks, including sentiment analysis and sarcasm detection. Closest to our work is that of Nguyen \emph{et al.}\xspace~\cite{nguyen2018emo}, who performed a \emph{quantitative} study of terminology indexed on UD, offering a statistical analysis of UD's content and showing, for example, a high presence of opinion-focused entries.
Our work differs in that we specifically look at how UD may influence other platforms. Furthermore, we focus on understanding ``activity log'' data, which was not inspected in these prior studies.
\section{Methodology \& Data}
\label{sec:methodology}
We start by outlining our data collection methodology, as well as how we control for missing data.
\subsection{Urban Dictionary}
Urban Dictionary is an online, crowd-sourced dictionary for (mostly)\footnote{Terms from other languages like ``hombre'' are defined, but definitions and examples describe code-switched usage of these terms within English speaking contexts.} English-language terms containing definitions that are not typically captured by traditional dictionaries. In the best cases, users provide meaningful definitions for new and emerging language, while in reality, many entries are a mix of honest definitions (``Stan: a crazy or obsessed fan''), jokes (``Shoes: houses for your feet''), personal messages (``Sam: a really kind and caring person''), and inappropriate or offensive language \cite{nguyen_ud}.
Each entry, uploaded by a single user, contains a term, its definition, examples, and tags (Figure \ref{fig:example}). Further, those who view the entry have the opportunity to contribute additional definitions and/or to cast a vote in the form of a ``thumbs-up'' or a ``thumbs-down''. These votes are recorded and used to rank the possible definitions for a given term when it is looked up in Urban Dictionary. Entries in Urban Dictionary can be for a singular word, a phrase (e.g., ``spill the tea'', Figure~\ref{fig:example}), or an abbreviation (e.g., ``brb'' and ``FYI'').
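As an illustration, a single scraped entry might be represented in memory as follows; the field names and values are our own invention, not an official UD schema:

```python
# Hypothetical in-memory representation of a single scraped UD entry;
# the field names and values are our own illustration, not an official
# UD schema.
entry = {
    "term": "spill the tea",
    "definition": "To share gossip.",          # placeholder text
    "examples": ["Come on, spill the tea!"],   # placeholder text
    "tags": ["gossip"],
    "author": "someUser",                      # placeholder user
    "date": "2017-08-01",
    "upvotes": 1542,
    "downvotes": 87,
}

def net_votes(e):
    """One simple score for comparing definitions of the same term."""
    return e["upvotes"] - e["downvotes"]
```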
\begin{figure*}[tb]
\includegraphics[width=0.9\textwidth]{img/exampleactivity.png}
\centering
\caption{Example entry on Urban Dictionary, including the head word (1), definition (2), usage examples (3), tags (4), user and date (5), upvote and downvote counts (6), and activity graph (7). Words and phrases in color that are also bold and underlined indicate links to other entries on Urban Dictionary.}
\label{fig:example}
\end{figure*}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{img/definitions_per_word.png}
\caption{Number of definitions per term defined on Urban Dictionary (log scale) \cite{lrec2020}.}
\label{fig:defperterm}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{img/votes.png}
\caption{Counts of upvotes and downvotes per entry on Urban Dictionary, with histograms (log scale) \cite{lrec2020}.}
\label{fig:votes}
\end{figure}
For every entry in Urban Dictionary, we crawl and store all of the aforementioned information, resulting in a total of approximately 2 million unique defined terms with an average of 1.8 definitions per term. The full histogram of the number of definitions per term is presented in Figure \ref{fig:defperterm}. This data collection includes an up-to-date version of Urban Dictionary as of October 16, 2019. In order to get a high-level understanding of the data, we also
plot the upvotes and downvotes assigned to the full set of definitions in Figure \ref{fig:votes}. We note similar skewness in these figures as was reported in an earlier analysis of Urban Dictionary data \cite{nguyen_ud}.
We also scrape all ``activity'' statistics, which reflect user interest in these terms measured on a month-to-month basis.
This is shown on the right hand side of Figure \ref{fig:example} (\#7) and represents the number of page clicks a definition has received.
We collect this information for all terms from January 2012 onward, since this is the earliest month for which this data is available across the site. As opposed to the temporal signal provided by the UD definitions, these activity statistics provide a more continuous gauge of overall interest in terms over time from a consumer perspective. The logs are normalized, preventing us from knowing the absolute scale of accesses; instead, we can only observe the trend. Note that these activity logs cover only 21.8\% of all terms, as less popular terms are not accompanied by an activity log.
\subsection{Twitter}\label{sec:Twitter}
We gather historical Twitter data from \url{archive.org},\footnote{\url{https://archive.org/download/archiveteam-json-twitterstream}} covering the same period as the Urban Dictionary activity statistics (i.e., starting in January 2012).
This covers multiple terabytes of Twitter data, gathered using the 1\% “sprinkle” sample of the Twitter streaming API. Since UD is an English-language resource, we apply the pre-trained \texttt{fasttext} language classifier \cite{joulin2016fasttext} to all of the tweets, and only search for UD terms within the tweets identified as being written in English. This is particularly important as UD contains a handful of terms, intended to be English slang or acronyms, that share surface forms with tokens in other languages (e.g., the Indonesian word ``nih'' will be confused with the UD term defined as an acronym for ``Not Invented Here'' or ``National Institute of Health''), leading to false positives. Further, we exclude words that are less than three characters long (the letters of the English alphabet have their own definitions on Urban Dictionary) or those that are included in a stopword list\footnote{English stopword list retrieved from \url{https://www.academia.edu/7221849/}}, leaving us with a set of 1,560,780 words and phrases to search for in each tweet.
\subsection{Searching Twitter for UD terms}\label{sec:udsearch}
We check for all UD terms in each tweet using the Aho-Corasick algorithm \cite{aho1975efficient},\footnote{We use the implementation provided in the \hyperlink{https://pyahocorasick.readthedocs.io/en/latest/}{\texttt{pyahocorasick}} Python package.} which provides the locations of all substrings that match those in the input list to search for. We consider a term to be matched only if the characters before and after the substring match are both non-alphanumeric and if the string is not preceded with an $@$, indicating that the string is part of a handle (i.e., a username). We cannot first apply tokenization to the tweets, because some UD terms contain multiple tokens (e.g., ``falling in love'') or special characters like punctuation (``thebomb.com''), and so tokenization and most other text pre-processing steps would only make it more difficult to detect these terms. Therefore, we operate directly on the raw text of the tweets. The resulting total counts are then aggregated at the day-level, and the daily totals are then averaged across each month so that the length of a given month does not disproportionately affect its total count.
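The match-filtering criteria above can be illustrated with a small sketch. A production pass would use the Aho-Corasick automaton from \texttt{pyahocorasick} as described; here a naive linear scan (the \texttt{find\_ud\_terms} helper is hypothetical, and case-insensitive matching is our assumption) stands in for it, keeping only the boundary and handle checks from the paper:

```python
def find_ud_terms(text, terms):
    """Return UD terms matched in raw tweet text: characters adjacent to
    the match must be non-alphanumeric, and the match must not follow an
    '@' (which would indicate a Twitter handle). A linear scan stands in
    for the Aho-Corasick automaton used in the paper."""
    matches = []
    lowered = text.lower()  # case-insensitive matching (our assumption)
    for term in terms:
        start = 0
        while True:
            i = lowered.find(term, start)
            if i == -1:
                break
            before = lowered[i - 1] if i > 0 else " "
            after_idx = i + len(term)
            after = lowered[after_idx] if after_idx < len(lowered) else " "
            if not before.isalnum() and not after.isalnum() and before != "@":
                matches.append(term)
            start = i + 1
    return matches
```

Note that multi-token terms and terms containing punctuation are handled naturally, since no tokenization is applied.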
\subsection{Missing Data}
While our dataset represents a majority of the time period being studied, some segments of the Twitter data are missing for all terms.
We assume this was due to issues within the \url{archive.org} data collection.
To correct this, we check for any missing data at the minute-level and record the total number of minutes for which we have data each month. We define $O_m(M)$ as the \textit{observed} minute count for the month $M$ in a particular year. We then compute a correction for each month as:
$$
C(M) = \frac{E_m(M)}{O_m(M)}
$$
where $E_m(M)$ is the expected or actual number of minutes within month $M$. We estimate the number of minutes within a month as $60 \times 24 \times n_{days}$ where $n_{days}$ is the number of days during that month and year, taking leap years into account. We then take the total activity count $a(M)$ for each term found in month $M$ and multiply it by $C(M)$, rounding to the nearest whole integer, labeling this quantity, the corrected count for this month and year, as $\hat{a}(M)$. The average correction score across all months was 1.06, indicating that only a small number of total minutes were typically missing for a given month. In some instances, however, data is missing at the day level. For months missing more than 14 days of data,\footnote{These months were January 2014, January-March 2015, and May 2018.} we impute the counts of each term for that month by inserting the average of the (corrected) counts from the previous and following months.
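As a sketch, the per-month correction $C(M) = E_m(M)/O_m(M)$ can be computed as follows (the \texttt{corrected\_count} helper name is ours; leap years are handled by the standard library):

```python
import calendar

def corrected_count(raw_count, observed_minutes, year, month):
    """Scale a term's monthly count by the fraction of minutes actually
    captured in the archive for that month: C(M) = E_m(M) / O_m(M)."""
    n_days = calendar.monthrange(year, month)[1]  # leap years accounted for
    expected_minutes = 60 * 24 * n_days           # E_m(M)
    correction = expected_minutes / observed_minutes
    return round(raw_count * correction)
```

For a fully observed January (44,640 minutes) the count is unchanged; a month with half its minutes missing doubles the raw count.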
\section{Cross-Platform Dynamics}
We next proceed to explore key trends both within UD, as well as Twitter. We start by explaining how we selected key terms shared between both datasets, before proceeding to explore how these two platforms influence each other.
\subsection{Term Selection}\label{sec:term_selection}
Rather than examine every term in Urban Dictionary, we focus our study on the subset of terms that provide us with enough data to explore interesting trends across our two platforms of interest. We consider all terms that:
\begin{enumerate}
\item have been defined on UD;
\item appear in our Twitter data sample at least 10,000 times over the course of nearly eight years of data;
\item have recorded activity logs on UD that share at least 12 complete months of overlap with the available Twitter data.
\end{enumerate}
After applying these filtering steps, we are left with 31,803 terms, which appear in Twitter a total of 5,969,621,745 ($\approx$ 6 billion) times. The distribution of the total number of times that each of these terms appears in Twitter is presented in Figure \ref{fig:twitter_histogram}. Most of the terms appear between 10,000 (our minimum threshold value) and 1 million times, with a few appearing tens of millions of times through the time period we examine. Some of the most common UD terms on Twitter include ``lol'' (31 million occurrences), ``love'' (29 million), ``twitter'' (17 million), ``retweet'' (16 million), and ``god'' (16 million). Interestingly, ``love'' and ``god'' are also two of the words that have previously been identified as having the largest number of distinct definitions on UD \cite{nguyen2018emo}. We spend the rest of the section exploring how these two time series datasets influence each other.
\begin{figure}
\centering
\customxy{img/twitter_histogram.png}{UD term bin}{occurrences in Twitter sample}
\caption{Histogram of total occurrences of selected (see section \ref{sec:term_selection}) UD terms in entire Twitter sample (log scale).}
\label{fig:twitter_histogram}
\end{figure}
\subsection{Who influences whom? Twitter or UD?}
\label{sec:cc}
We start by exploring how the use of terms with UD and Twitter correlate over time.
Our goal is to understand if terms are introduced on Twitter and then spread to UD, or vice versa.
Specifically, measuring the cross-correlation between the two time series allows us to capture the relationships between the two sequences, as well as providing a measure of the time offset at which the two sequences are most highly correlated. Since the Twitter and UD data have differing units of measurement, we first normalize each month in a time series $S$ according to:
$$
n(M,S) = \frac{\hat{a}(M) - \mu_S}{\sigma_S}
$$
where $\mu_S$ and $\sigma_S$ are the mean and standard deviation of the series $S$, respectively, and $\hat{a}(M)$ is the corrected activity value as computed in section \ref{sec:Twitter}, or the raw activity value in the case of UD. Then, define the series of all normalized values $n(M,S)$ for a given word as $S_w$, and let $U_w$ and $T_w$ represent the time series activity of term $w$ for UD and Twitter data, respectively.
We can then measure the zero-normalized cross correlation as:
$$
R_w(k,U,T) = \sum_{M\in X(U_w,T_w)} (n(M+k,U_w) \times \overline{ n(M,T_w) } )
$$
where $X(U_w,T_w)$ represents the longest overlapping period of time for which $U_w$ and $T_w$ are defined and $k$ represents a number of months. Call the time lag resulting in the most extreme positive or negative correlation $t_w=\operatorname{argmax}_k |R_w(k,U,T)|$ for $k\in[k_{min}~..~k_{max}]$.
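A minimal sketch of this computation follows; the function names are ours, the z-normalization uses population statistics, and we average the products over the overlap length (a detail the formula above leaves implicit), assuming non-constant series:

```python
def znorm(series):
    """Z-normalize a monthly series by its own mean and population std."""
    mu = sum(series) / len(series)
    sd = (sum((x - mu) ** 2 for x in series) / len(series)) ** 0.5
    return [(x - mu) / sd for x in series]

def best_lag(ud, tw, k_min=-3, k_max=3):
    """Return (t_w, R) where t_w is the lag in [k_min, k_max] giving the
    most extreme correlation between UD activity shifted by k months and
    Twitter activity, computed over the months where both are defined."""
    nu, nt = znorm(ud), znorm(tw)
    best_k, best_r = 0, 0.0
    for k in range(k_min, k_max + 1):
        pairs = [(nu[m + k], nt[m]) for m in range(len(nt))
                 if 0 <= m + k < len(nu)]
        r = sum(a * b for a, b in pairs) / len(pairs)
        if abs(r) > abs(best_r):
            best_k, best_r = k, r
    return best_k, best_r
```

A UD spike one month after a Twitter spike, for example, yields a positive lag, matching the sign convention discussed below.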
In order to split the terms based on those with a positive, negative, or no correlation, we identify the terms for which the difference between $R_w(t,U,T)$ and 0 is statistically significant with a value of $\alpha=0.01$,
correcting for multiple hypothesis testing using the Benjamini–Hochberg procedure \cite{benjamini1995controlling}\footnote{We use the implementation provided in the \hyperlink{https://www.statsmodels.org/dev/generated/statsmodels.stats.multitest.multipletests.html}{\texttt{statsmodels}} Python package.} to control the false discovery rate. When we find that we have sufficient evidence to reject our null hypothesis, $H_0: R_w(t,U,T) = 0$, we report that a term exhibits either a positive (if $R_w(t,U,T)>0$) or negative (otherwise) correlation between UD and Twitter activity with the defined $\alpha$ value.
\begin{figure}
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{img/overall_density_colored.png}
\caption{All terms.}
\label{sub:before}
\end{subfigure}%
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{img/overall_density_colored_significant.png}
\caption{Only significantly correlated terms.}
\label{sub:after}
\end{subfigure}%
\caption{Stacked histograms of most extreme temporal correlations between UD and Twitter activity trends considering possible time lags of $t=[-3~..~3]$ months before and after removal of cases where $H_0$ could not be rejected.}
\label{fig:correlation_density}
\end{figure}
\begin{table}[]
\begin{tabular}{rcc|rcc}
\multicolumn{3}{c}{\textbf{Positive correlation}} & \multicolumn{3}{c}{\textbf{Negative correlation}} \\ \hline
term & corr & t & term & corr. & t \\ \hline
alex from target & 1.000 & 0 & goth & -0.778 & 1 \\
number neighbor & 1.000 & 0 & naruto & -0.721 & 1 \\
harlem shake & 0.997 & 0 & mole & -0.720 & 3 \\
omarosa & 0.993 & 0 & troll & -0.717 & 1 \\
pokemon go & 0.991 & 0 & squirt & -0.699 & 2 \\
balsa & 0.990 & 3 & as*hat & -0.698 & 0 \\
united airlines & 0.989 & 0 & f*ck me & -0.691 & 3 \\
alternative facts & 0.989 & 0 & pornography & -0.685 & 2 \\
franken & 0.978 & 0 & f*cked & -0.676 & 3 \\
scaramucci & 0.978 & -1 & hai & -0.676 & 3 \\
ebola & 0.977 & -1 & p*ssy & -0.676 & 2 \\
lochte & 0.977 & 0 & fisting & -0.675 & -3 \\
hurricane irma & 0.975 & 0 & balls deep & -0.675 & 1 \\
kokobop & 0.974 & 0 & fanboy & -0.674 & 3 \\
paris agreement & 0.973 & -1 & squirting & -0.670 & 2 \\\hline
\end{tabular}
\caption{Examples of terms with strong positive and negative correlations between Twitter and UD activity trends, along with the value of $t$ (in $[-3~..~3]$) for which this correlation was measured.}
\label{tab:correlations}
\end{table}
We next proceed to discuss our results. The distribution of $R_w(t,U,T)$ for all values of $w$ is presented in Figure \ref{sub:before}, and the distribution for which the values are statistically significant in their difference from 0 is presented in Figure \ref{sub:after}.
For context, the final value of $t_w$ tells us that the highest correlation for term $w$ occurs when $U_w$ is shifted by an offset of $t_w$ months. So, when $t_w$ is negative, we can say that the Twitter activity seems to lag \emph{behind} the UD activity, and when $t_w$ is positive, the \emph{opposite} is true. When $t_w=0$, the two time series seem to be most highly correlated with one another with no lag.
\begin{table}[]
\begin{tabular}{rc|rc}
\multicolumn{2}{c}{\textbf{Positive correlation}} & \multicolumn{2}{c}{\textbf{Negative correlation}} \\ \hline
tag & PMI & tag & PMI \\ \hline
\#rap & 0.843 & \#f*ckboy & 0.587 \\
\#politics & 0.771 & \#sensitive & 0.579 \\
\#b*tches & 0.664 & \#big d*ck & 0.527 \\
\#meme & 0.639 & \#pathetic & 0.523 \\
\#omg & 0.563 & \#cheater & 0.477 \\
\#internet & 0.559 & \#personality & 0.469 \\
\#ghetto & 0.521 & \#creative & 0.452 \\
\#school & 0.515 & \#bestfriend & 0.445 \\
\#poser & 0.485 & \#america & 0.436 \\
\#wtf & 0.467 & \#pleasure & 0.430 \\\hline
\end{tabular}
\caption{Examples of UD tags applied to terms with significant positive and negative correlations between Twitter and UD activity trends.}
\label{tab:pmi}
\end{table}
Figures \ref{sub:before} and \ref{sub:after} show that there are, indeed, noticeable correlations between the use of terminology on Twitter and its definition in UD.
It is marginally more typical for terms to emerge on Twitter before UD, rather than vice versa.
Overall, we identify 4,917 terms for which Urban Dictionary and Twitter activity is correlated.
To provide context, Figure~\ref{fig:corr_example} provides prominent examples of three terms that have positive, negative, and no correlation. We see noticeable differences, with viral terms like ``Pok\'{e}mon Go'' being highly correlated. Further examples of these terms are presented in Table \ref{tab:correlations}.
For instance, we see that for certain well known and longstanding terms (e.g., ``goth'' and ``f*cked''), Twitter lags behind UD, but for other more emergent terms and memes (e.g., ``alex from target'', ``pokemon go'' and ``harlem shake'') Twitter is ahead of UD. This suggests that terminology usage requires a critical mass before warranting inclusion on UD. We also see cases where sudden events (e.g., ``hurricane irma'') rapidly emerge on Twitter, before later being added to UD.
Briefly, we also examine the number of likes and dislikes given to definitions of these words on UD, finding no major differences from the overall distribution (originally presented in Figure \ref{fig:votes}).
\begin{figure*}
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.9\linewidth]{img/pokemon_go.png}
\caption{Pok\'{e}mon Go: positive correlation ($0.99$)}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.9\linewidth]{img/guacamole.png}
\caption{Guacamole: no sig. correlation ($-0.11$)}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.9\linewidth]{img/naruto.png}
\caption{Naruto: negative correlation ($-0.72$)}
\end{subfigure}
\caption{Month-level plots of three example terms with correlated and uncorrelated activity on Urban Dictionary and Twitter over time. Pok\'{e}mon Go is an augmented reality mobile phone game that was initially released on July 6, 2016, exhibiting highly focused attention. The cross-platform interest in the term ``guacamole'' shows no consistent patterns over time. For the Japanese manga Naruto, increases in Twitter discussion in early 2017 align with the end of a ten year television series, while activity on Urban Dictionary begins to drop off around the same time.}
\label{fig:corr_example}
\end{figure*}
\subsection{What themes are defined and discussed?}
We next inspect which themes are covered within these terms. To achieve this, we use hashtags associated with each term as a proxy (each definition can be accompanied by tags).
First, we take the set of tags given by UD users to each of the terms and compute the point-wise mutual information (PMI) between the occurrence of the tag and one of three categories \cite{manning2008introduction}. Specifically, we categorise terms based on whether or not their usage is correlated on Twitter with UD (as defined in Section \ref{sec:cc}). For simplicity, we group each term into: positive correlation, negative correlation, or no correlation. PMI is computed as
$$
PMI(x,y) = \operatorname{log}\frac{\operatorname{p}(x,y)}{\operatorname{p}(x)\operatorname{p}(y)}
$$
where, in our case, $x$ is a variable representing the event that a tag is attached to a term and $y$ represents the event that a term belongs to the set of either positively correlated or negatively correlated time series. The joint probability $\operatorname{p}(x,y)$ represents the likelihood that a specific tag has been assigned to a term that also belongs to a category: positive, negative, or no correlation, and we can compute a PMI score for each tag for each set.
Note that we consider the full set of tags, including those assigned to the ``not correlated'' group, when computing the observed probabilities of tags or categories occurring, though we are only interested in computing the final PMI scores for the positively correlated and negatively correlated categories.
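A sketch of this PMI computation over (tag, category) assignment pairs, with hypothetical names throughout, where probabilities are estimated from observed relative frequencies across all three categories:

```python
from collections import Counter
from math import log

def pmi_scores(tag_assignments):
    """tag_assignments: list of (tag, category) pairs, one per occurrence
    of a tag attached to a term in a correlation category. Returns
    PMI(tag, category) = log[ p(tag, category) / (p(tag) p(category)) ]
    for every observed pair."""
    n = len(tag_assignments)
    joint = Counter(tag_assignments)           # counts of (tag, category)
    tags = Counter(t for t, _ in tag_assignments)
    cats = Counter(c for _, c in tag_assignments)
    return {(t, c): log((joint[(t, c)] / n) /
                        ((tags[t] / n) * (cats[c] / n)))
            for t, c in joint}
```

High-PMI tags for a category are those disproportionately attached to terms in that category relative to their overall frequency.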
\begin{table}[]
\begin{tabular}{r|l|l|l|l}
& $t < 0$ & $t = 0$ & $t > 0$ & \textbf{all} \\ \hline
positive correlation & 75.8\% & 63.9\% & 72.5\% & \textbf{70.0\%} \\
no significant correlation & 79.01\% & 79.4\% & 81.6\% & \textbf{80.2\%} \\
negative correlation & 94.6\% & 94.2\% & 89.7\% & \textbf{93.1\% }\\ \hline
\textbf{all} & \textbf{82.1\%} &\textbf{ 77.4\%} & \textbf{80.9\%} & \textbf{79.8\%} \\
\end{tabular}
\caption{Percentage of terms with definitions in Wiktionary.}
\label{tab:wiktionary}
\end{table}
The tags with the highest PMI scores for the positive and negative correlation groups are presented in Table \ref{tab:pmi}.
We are particularly curious to understand if these terms with significant correlations are nonstandard English words, multi-word expressions, or proper nouns. To explore this, we compute the percentage of each group that has been defined in the English section of the online resource Wiktionary.\footnote{\url{https://en.wiktionary.org/}}
Table \ref{tab:wiktionary} shows the proportion of terms that are defined in Wiktionary for each cross-section of data based on level of correlation and value of $t$. Interestingly, we observe that the greatest fraction of terms that are \textit{undefined} in Wiktionary come from the ``positive correlation'' group (note the lower overall fraction of terms with definitions for this group, the first row in Table \ref{tab:wiktionary}) indicating that words from this group are less likely to be standard English words.
\subsection{Are UD entries more likely for trending terms?} \label{sec:trends}
We conjecture that certain terms may experience rapid surges in popularity, and that these surges may correlate with new entries being added for terms on UD. Thus, we next explore if certain terms start to ``trend'' at points within our measurement period, both within Twitter and UD, and how likely it is that new entries are added to UD for terms that are currently trending. Previously proposed trending detection algorithms typically act in real time, relying only on data preceding the point of the trending period in order to detect trends as early as possible \cite{xie2016topicsketch}. Trend detection approaches may also involve the use of machine learning models that are trained to recognize examples of items that were known to go on to be considered trending \cite{chen2013latent}. However, these approaches depend on knowledge of ``ground truth'' for which terms eventually moved into a \textit{trending} period, meaning that a potentially unknown definition of \textit{trending} is being learned. Other approaches aim for personalization by incorporating user-level features such as the types of topics that a person is typically interested in \cite{fiaidhi2013developing}, which we do not make use of as we are searching for general periods of upward trending. Additionally, we do not consider burst detection methods \cite{kleinberg2003bursty} which can accurately identify abnormal spikes in usage, since we also wish to discover trends that experience a rapid initial increase in usage followed by long plateaus of high usage, e.g., for terms that were first introduced at some point in time yet remained popular after the initial increase in usage.
As we are able to analyze the entire period of interest \emph{post-hoc}, and we would like to apply criteria for trending detection that are general to both UD and Twitter, we opt for the following approach. Inspired by previous work in the earth science domain \cite{sharma2016trend}, we fit a piece-wise function across the entire time series. This allows us to quickly check for sections of rapid increases by analyzing the slope of this function at a given point in time.
\begin{figure}
\centering
\customxy{img/vibing_spikes.png}{year}{occurrences in Twitter sample}
\caption{Twitter activity plot for the term ``Vibing'' along with detected \textit{trending} periods (shaded regions) using the proposed approach.}
\label{fig:trending}
\end{figure}
To fit the piece-wise function, we first split the time series at all identified change points using the pruned exact linear time (PELT) change point detection algorithm \cite{killick2012optimal}\footnote{We use the implementation provided in the \hyperlink{http://dev.ipol.im/~truong/ruptures-docs/build/html/index.html}{\texttt{ruptures}} Python library.}.
PELT is a dynamic programming approach used to efficiently find the best segmentation of a time series by minimizing a cost function defined in terms of the likelihood of the data in each segment. After running PELT on each time series, we then fit an ordinary least squares regression line to each segment of the data that lies between two change points, and inspect the slope of the line. If the \textit{slope} is greater than the threshold $\tau_m$, we mark this period of time as ``trending'' for this term. In our analyses, we set $\tau_m=\frac{\operatorname{max}(S)}{4}$ where $S$ represents all points in a time series. Figure \ref{fig:trending} shows an example of the results of this trending detection approach on our Twitter data sample.
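Since the change points themselves come from PELT (via the \texttt{ruptures} library), the sketch below assumes they are already available and shows only the per-segment slope test; the helper names are ours:

```python
def ols_slope(ys):
    """Ordinary least-squares slope of ys against indices 0..len(ys)-1."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(range(n), ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def trending_segments(series, change_points):
    """Mark segments between change points as trending when their OLS
    slope exceeds tau_m = max(series) / 4, as in the paper. change_points
    are interior boundary indices, e.g. from PELT."""
    tau = max(series) / 4
    bounds = [0] + list(change_points) + [len(series)]
    out = []
    for lo, hi in zip(bounds, bounds[1:]):
        if hi - lo >= 2 and ols_slope(series[lo:hi]) > tau:
            out.append((lo, hi))
    return out
```

A segment rising from 10 to 30 over three months, in a series peaking at 30, clears the threshold $\tau_m = 7.5$ and is flagged as trending.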
\begin{table}[]
\begin{tabular}{p{2cm}cc}
& \textbf{Twitter} & \textbf{UD} \\ \hline
$\operatorname{p}(d|u)$ & \textbf{0.105} & \textbf{0.113} \\
$\operatorname{p}(d|\neg u)$ & 0.077 & 0.104 \\ \hline
$\operatorname{p}(u|d)$ & \textbf{0.142} & \textbf{0.172} \\
$\operatorname{p}(u|\neg d)$ & 0.111 & 0.162 \\ \hline
\end{tabular}
\caption{Observed probabilities associated with the creation of new definitions on UD and trending periods on Twitter (column 1) and UD (column 2). Bold font denotes a statistically significant difference from the quantity directly below in the table using a two sample t-test and $\alpha=.001$. $d=1$ indicates that a term is defined in a given month, and $u=1$ indicates that the same term is \textit{trending} on either Twitter or UD during that month.}
\label{tab:probs}
\end{table}
We compute all trending periods for both the UD and Twitter time series for all terms, and compare these time periods to the dates during which new definitions are added to UD for a given term. Let the symbol $d$ represent a binary random variable that is \texttt{true} in the event that a new definition for a term is added during a given month, and \texttt{false} otherwise. Then, let $u$ be \texttt{true} if the same term is \textit{trending} during the same month. We estimate the conditional probabilities associated with various values of $d$ and $u$ in Table \ref{tab:probs}. We find that the probability of observing a new definition for a given term is statistically significantly more likely in a given month if activity centered around that term is \textit{trending} on either UD or Twitter. Further, when a term has received a new definition in a given month, it is also more likely that this term would be marked as \textit{trending} according to our trend detection algorithm for either the UD activity or the Twitter activity time series.
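These conditional probabilities can be estimated from aligned binary month-level indicators; a sketch, with hypothetical names:

```python
def conditional_probs(d, u):
    """d, u: aligned per-month binary lists (1 = new definition added,
    1 = term trending). Estimate the four conditionals of Table-style
    comparisons: p(d|u), p(d|~u), p(u|d), p(u|~d)."""
    pairs = list(zip(d, u))

    def cond(target_idx, given_idx, given_val):
        sel = [p[target_idx] for p in pairs if p[given_idx] == given_val]
        return sum(sel) / len(sel)

    return {
        "p(d|u)": cond(0, 1, 1),
        "p(d|~u)": cond(0, 1, 0),
        "p(u|d)": cond(1, 0, 1),
        "p(u|~d)": cond(1, 0, 0),
    }
```

In practice these estimates would be pooled over all terms and months before testing the differences for significance.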
\section{Discussion}
Having completed our analyses, we return to our initial research questions and attempt to answer each given the evidence that we have gathered.
\pb{(1) Is any activity on Urban Dictionary significantly correlated with discussions taking place on Twitter?} In section \ref{sec:cc}, we computed the cross-correlations between the monthly Twitter and UD activity time series and found that, for a subset of terms of interest, there was a significant correlation between activity on the two platforms. While we are unable to make conclusions about the majority of terms that appear on UD and Twitter, we are able to identify those terms for which there exists either a positive or negative correlation. Overall, we find that there are more terms with a significant positive correlation, and that these correlations occur with a time lag of 0, suggesting that the activity that is happening on UD and Twitter is generally synchronized for these terms. These results confirm that UD itself does in fact reflect trends occurring elsewhere around the internet, and based on qualitative analyses, events taking place in the offline world.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/ebola.png}
\caption{Twitter and UD activity for the term ``ebola''.}
\label{fig:ebola}
\end{figure}
\pb{(2) If yes, for which terms does activity on these two platforms exhibit either a positive or negative temporal correlation? What are the characteristics of these terms?}
As we did find a link between the activity on the two platforms, we explored some of these terms and their attributes later in Section \ref{sec:cc}. We notice several major trends for the terms exhibiting positive correlations between Twitter and UD activity measurements. First, we see a theme of internet memes, exemplified by the terms ``alex from target'', ``harlem shake'', and ``number neighbor'', as well as tags such as \#meme and \#internet.
Second, there are a myriad of terms related to political figures and large-scale events, such as ``omarosa'', ``scaramucci'', ``ebola'', ``hurricane irma'' and ``paris agreement'', as well as the tag \#politics. Since many of these terms are related to extremely specific events that took place in a single month or even a single day, the online activity observed on both platforms is often very acutely focused around the time of that event. For example, see the time series plot of the term ``ebola'' in Figure \ref{fig:ebola}. There is a single major spike in both time series in late 2014, roughly when the first case of the Ebola virus was confirmed in the United States during the 2014-2016 epidemic \cite{kaner2016understanding}. For the negatively correlated terms, we instead see a range of slang and risqué language. While further investigation is needed to fully understand why these terms exhibit a strong negative correlation between UD and Twitter activity, one possibility is that we may be tapping into a larger trend taking place on these platforms in which language that was once considered taboo and was relegated only to websites like UD is now more well known and commonplace, appearing more on Twitter, making it less novel on UD. Either way, it is clear that these two platforms \emph{do} influence each other (either tacitly or directly).
\pb{(3) Is it more likely that new definitions are added to Urban Dictionary for a term if it is currently \textit{trending} on Twitter?}
In Section \ref{sec:trends} we define \textit{trending} for a given term on either platform. Given this definition and the data we have about the creation of new definitions for terms of UD, we calculate the likelihood of terms appearing both inside and outside of \textit{trending} intervals, finding it (statistically) significantly more likely to witness new definitions during \textit{trending} periods when considering both UD and Twitter time series. Additionally, we find that terms are more likely to be \textit{trending} during months for which new definitions have been added to UD. While capturing the causal relationships at play, if they exist, is left as future work, these results solidify the relationship that exists between the observed user behaviors on these two platforms centered around specific types of content.
\section{Conclusion}
We have presented the first analysis of the temporal relationships between online activity on the understudied platform Urban Dictionary and the broad conversations happening on Twitter. We explored the relationships between periods of time when terms were \textit{trending} and corresponding activity on Urban Dictionary, such as the creation of new definitions, finding that new definitions are more likely to occur during these periods. Through a series of cross-correlation analyses, we identified cases in which Urban Dictionary activity most closely reflects the content being discussed on Twitter. By inspecting and characterizing the types of terms that have a stronger connection to discussions on Twitter, we found that Urban Dictionary activity that is positively correlated with Twitter mentions is centered around terms related to memes, popular public figures, and offline events. While this work represents an initial venture into the study of the links between these two platforms, we hope that it provides a foundation for future work exploring the web and its many components as a larger socio-technical system, searching for interactions between various online communities and their behaviors rather than studying each one in isolation.
\begin{acks}
This work was supported by The Alan Turing Institute under the EPSRC grants EP/N510129/1, and EP/S033564/1. We also acknowledge support via EP/T001569/1.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
Interstellar dust is manifest at nearly all wavelengths of astronomical interest, scattering, absorbing, and emitting radiation from X-ray to radio wavelengths. Embedded in this diversity of phenomena are clues to the nature of interstellar grains---their size, shape, composition, and optical properties.
A combination of astronomical observations, laboratory studies, and theoretical calculations has informed a picture of interstellar dust that consists of, at minimum, amorphous silicate and carbonaceous materials \citep[see][for a review]{Draine_2003}. However, many questions remain as to the details of these components, e.g., their optical properties, porosity, purity, size distributions, shapes, and alignment, including whether the silicate and carbonaceous materials exist as distinct components or whether they are typically found in the same interstellar grains.
The astronomical data which constrain models of interstellar dust are extensive and ever increasing in detail. Determinations of solid phase abundances define the elemental makeup and mass of interstellar dust grains per H atom. Interstellar extinction has been measured from the far-UV (FUV) through the mid-infrared (MIR), including a number of spectral features suggesting specific materials. Emission from dust grains heated by the ambient interstellar radiation field has been observed from the near-infrared (NIR) through the microwave. Additionally, anomalous microwave emission (AME), thought to arise from rapidly rotating ultrasmall grains, is seen at radio frequencies while extended red emission (ERE), attributed to fluorescence, is observed in the optical. Polarization has been detected in both extinction and emission, including in some spectral features, placing additional constraints on the shapes, compositions, and alignment properties of interstellar grains.
With high-sensitivity far-infrared (FIR) imaging and polarimetry, the {\it Planck} satellite measured the properties of submillimeter polarized dust emission in unprecedented detail \citep{Planck_Int_XIX}. The very high submillimeter polarization fractions and the observed characteristic ratios between polarized FIR emission and polarized extinction at optical wavelengths have posed serious challenges to pre-{\it Planck} dust models \citep{Planck_Int_XIX, Planck_2018_XII}. It is imperative that these new findings guide the development of the next generation of models.
When presenting a new dust model, it has become customary to detail the set of observations that constrain it \citep[e.g.,][]{Mathis+Rumpl+Nordsieck_1977,Draine+Lee_1984,Zubko+Dwek+Arendt_2004,Draine+Fraisse_2009,Compiegne+etal_2011,Siebenmorgen+Voshchinnikov+Bagnulo_2014,Guillet+etal_2018}. Given the now vast array of observations that can be employed in calibrating and testing models, and given also the heterogeneity of the observations in terms of wavelengths covered and region observed, synthesizing a coherent set of model constraints can be as challenging as construction of the model itself. It is therefore the goal of this work to summarize the current state of observations constraining the properties of dust in the diffuse interstellar medium (ISM) and to establish a set of benchmark constraints against which models of interstellar dust can be tested.
This paper is organized as follows: we first derive the solid phase abundances of the primary elemental constituents of dust in Section~\ref{sec:abundances}; then, we combine various observational data to derive the wavelength dependence of dust extinction (Section~\ref{sec:extinction}), polarized extinction (Section~\ref{sec:extpol}), emission (Section~\ref{sec:irem}), and polarized emission (Section~\ref{sec:ir_pol}) for a typical diffuse, high-latitude sightline. Finally, we present a summary of these constraints in Section~\ref{sec:summary}.
\section{Abundances}
\label{sec:abundances}
\begin{deluxetable}{ccll}
\tablewidth{0pc}
\tablecaption{Interstellar Abundances of Selected Elements\label{table:abundances}}
\tablehead{\colhead{Element} & \colhead{$A$ [ppm]}
& \colhead{Method} & \colhead{Reference(s)}}
\startdata
C & $324\pm38$ & Solar + Solar Twins & 2, 6, 11\\
& $331\pm38$ & Solar + GCE Model & 2, 3, 6 \\
& $358\pm82$ & Young F \& G Stars & 1 \\
& $245\pm62$ & Young F \& G Stars & 5, 7\\
& $190\pm77$ & B Stars & 1 \\
& $214\pm20$ & B Stars & 8 \\
O & $682\pm79$ & Solar + Solar Twins & 2, 6, 11 \\
& $575\pm66$ & Solar + GCE Model & 2, 3, 6 \\
& $445\pm156$ & Young F \& G Stars & 1 \\
& $589\pm176$ & Young F \& G Stars & 4, 7 \\
& $350\pm133$ & B Stars & 1 \\
& $575\pm66$ & B Stars & 8 \\
Mg & $52.9\pm4.9$ & Solar + Solar Twins & 2, 9, 11 \\
& $45.7\pm4.2$ & Solar + GCE Model & 2, 3, 9 \\
& $42.7\pm17.2$ & Young F \& G Stars & 1 \\
& $44\pm21$ & Young F \& G Stars & 4, 7 \\
& $23\pm7.0$ & B Stars & 1 \\
& $36.3\pm4.2$ & B Stars & 8 \\
Al & $3.5\pm0.3$ & Solar + Solar Twins & 2, 9, 11 \\
& $3.5\pm1.8$ & Young F \& G Stars & 4, 7 \\
Si & $44.6\pm3.1$ & Solar + Solar Twins & 2, 9, 11 \\
& $41.7\pm2.9$ & Solar + GCE Model & 2, 3, 9 \\
& $39.9\pm13.1$ & Young F \& G Stars & 1 \\
& $41\pm22$ & Young F \& G Stars & 4, 7 \\
& $18.8\pm8.9$ & B Stars & 1 \\
& $31.6\pm3.6$ & B Stars & 8 \\
S & $17.2\pm1.2$ & Solar + Solar Twins & 2, 9, 11 \\
& $17.4\pm1.2$ & Solar + GCE Model & 2, 3, 10 \\
& $19.5\pm4.5$ & Young F \& G Stars & 4, 7 \\
Ca & $3.2\pm0.2$ & Solar + Solar Twins & 2, 9, 11 \\
& $3.0\pm2.7$ & Young F \& G Stars & 4, 7 \\
Fe & $43.7\pm4.0$ & Solar + GCE Model & 2, 3, 10 \\
& $27.9\pm7.7$ & Young F \& G Stars & 1 \\
& $40.7\pm23.5$ & Young F \& G Stars & 4, 7 \\
& $28.5\pm18.0$ & B Stars & 1 \\
& $33.1\pm2.3$ & B Stars & 8 \\
Ni & $2.1\pm0.2$ & Solar + Solar Twins & 2, 10, 11 \\
& $1.8\pm0.3$ & Young F \& G Stars & 4, 7
\enddata
\tablenotetext{}{$^1$\citet{Sofia+Meyer_2001},
$^2$\citet{Turcotte+Wimmer-Schweingruber_2002},
$^3$\citet{Chiappini+Romano+Matteucci_2003},
$^4$\citet{Bensby+etal_2005}, $^5$\citet{Bensby+Feltzing+2006},
$^6$\citet{Asplund+etal_2009}, $^7$\citet{Lodders+etal_2009},
$^8$\citet{Nieva+Przybilla_2012}, $^9$\citet{Scott+etal_2015a},
$^{10}$\citet{Scott+etal_2015b}, $^{11}$\citet{Bedell+etal_2018}}
\tablecomments{Abundances of selected elements derived from solar
abundances (Refs. 6, 9, 10) corrected for diffusion (Ref. 2) and
chemical enrichment (Refs. 3 and 11), from young F and
G stars (Refs. 4, 5, and 7), and from B stars (Refs. 1 and 8).}
\end{deluxetable}
\begin{deluxetable}{cccc}
\tablewidth{0pc}
\tablecaption{Adopted Gas and Solid Phase Abundances of Selected
Elements\label{table:solid_abundance}}
\tablehead{\colhead{X} & \colhead{(X/H)$_{\rm ISM}$}
& \colhead{(X/H)$_{\rm gas}$} & \colhead{(X/H)$_{\rm dust}$} \\
& [ppm] & [ppm] & [ppm] }
\startdata
C & $324$ & $198$ & $126\pm56$ \\
O & $682$ & $434$ & $249\pm94$ \\
Mg & $52.9$ & $7.1$ & $45.8\pm4.9$ \\
Al & $3.5$ & $0.1$ & $3.4\pm0.3$ \\
Si & $44.6$ & $6.6$ & $38.0\pm3.1$ \\
S & $17.2$ & $9.6$ & $7.6\pm2.0$ \\
Ca & $3.2$ & $0.1$ & $3.2\pm0.2$ \\
Fe & $43.7$ & $0.88$ & $42.8\pm4.0$ \\
Ni & $2.1$ & $0.04$ & $2.0\pm0.2$
\enddata
\tablecomments{ISM abundances based on Solar abundances
\citep{Asplund+etal_2009, Scott+etal_2015a, Scott+etal_2015b} corrected for diffusion \citep{Turcotte+Wimmer-Schweingruber_2002} and with GCE \citep{Bedell+etal_2018}. Gas phase abundances taken from \citet{Jenkins_2009} assuming moderate depletion (i.e., $F_* = 0.5$).}
\end{deluxetable}
The heavy elements that make up the bulk of the mass of grains are produced in stars which return material to the ISM via winds or ejecta. Some of the atoms remain in the gas while a fraction get locked in grains. Comparison of stellar and gas phase abundances of metals is thus an important observational constraint on grain models.
The elements C, O, Mg, Si, and Fe are depleted in the gas phase and compose most of the interstellar dust mass. In addition, Al, S, Ca, and Ni are also depleted and constitute a minor but non-negligible fraction of the dust mass. A dust model should account for the observed depletions of each of these elements. Other elements (e.g., Ti) are also present in the grains, but collectively account for $<1\%$ of the grain mass, and will not be discussed here.
While gas phase abundances are determined directly from absorption line spectroscopy, inferring the solid phase abundances from these measurements requires determination of the {\it total} abundance of
each element in the ISM. This is often done starting from the well-constrained Solar abundances and applying a correction for Galactic chemical enrichment (GCE) during the $\sim$4.6\,Gyr since the formation of the Sun.
Detailed 3D hydrodynamical modeling of the Solar atmosphere has yielded photospheric abundances of $\log\epsilon_{\rm C} = 8.43\pm0.05$, $\log\epsilon_{\rm O} = 8.69\pm0.05$, $\log\epsilon_{\rm Mg} = 7.59\pm0.04$, $\log\epsilon_{\rm Al} = 6.43\pm0.04$, $\log\epsilon_{\rm Si} = 7.51\pm0.03$, $\log\epsilon_{\rm S} = 7.12\pm0.03$, $\log\epsilon_{\rm Ca} = 6.32\pm0.03$, $\log\epsilon_{\rm Fe} = 7.47\pm0.04$, and $\log\epsilon_{\rm Ni} = 6.20\pm0.04$ \citep{Asplund+etal_2009, Scott+etal_2015a, Scott+etal_2015b}, where
\begin{equation}
\log \epsilon_{\rm X} \equiv 12 + \log_{10}\left({\rm X}/{\rm H}\right)
\end{equation}
and $\left({\rm X}/{\rm H}\right)$ is the number of atoms of element X per H atom. To convert these present-day photospheric abundances to protosolar abundances, we apply a diffusion correction of +0.03\,dex \citep{Turcotte+Wimmer-Schweingruber_2002}. We adopt these values as our reference protosolar abundances.
The protosolar values are presumed to reflect the abundances in the ISM at the time of the Sun's formation 4.6\,Gyr ago. Present-day interstellar metal abundances are likely enhanced relative to these protosolar values. The chemical evolution model of \citet[][Model 7]{Chiappini+Romano+Matteucci_2003} predicts the C, O, Mg, Si, S, and Fe abundances to be enriched by 0.06, 0.04, 0.04, 0.08, 0.09, and 0.14 dex, respectively, relative to the protosolar values. \citet{Bedell+etal_2018} estimated the chemical enrichment as a function of time by determining the elemental abundances in Solar twins of various ages. If we assume $\Delta$[Fe/H] = 0.14 \citep{Chiappini+Romano+Matteucci_2003}, their results imply present-day enrichments of 0.05, 0.11, 0.10, 0.08, 0.11, 0.09, 0.16, and 0.09 dex for C, O, Mg, Al, Si, S, Ca, and Ni, respectively, where in the case of C, we have taken the weighted mean of the determinations based on \ion{C}{i} and CH. These results are summarized in Table~\ref{table:abundances}. We apply the latter values to our reference protosolar abundances to define our reference ISM abundances, listed in the second column of Table~\ref{table:solid_abundance}.
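The dex bookkeeping above can be sketched in a few lines (an illustrative calculation, not part of the original analysis; the photospheric values and enrichments are those quoted in the text, and the results agree with the adopted ISM abundances to within the $\sim$1--2\% rounding of the dex corrections):

```python
def log_eps_to_ppm(log_eps):
    """Convert log(epsilon) = 12 + log10(X/H) to parts per million."""
    return 10.0 ** (log_eps - 12.0) * 1.0e6

def ism_abundance_ppm(log_eps_photo, gce_dex, diffusion_dex=0.03):
    """Present-day ISM abundance: photospheric value plus the +0.03 dex
    diffusion correction and the GCE enrichment."""
    return log_eps_to_ppm(log_eps_photo + diffusion_dex + gce_dex)

# Photospheric log(eps) (Asplund et al. 2009; Scott et al. 2015a,b) and
# adopted enrichments (Bedell et al. 2018; Fe anchored to Chiappini et al.).
photospheric = {"C": 8.43, "O": 8.69, "Mg": 7.59, "Si": 7.51, "Fe": 7.47}
enrichment   = {"C": 0.05, "O": 0.11, "Mg": 0.10, "Si": 0.11, "Fe": 0.14}

ism_ppm = {el: ism_abundance_ppm(photospheric[el], enrichment[el])
           for el in photospheric}   # e.g., C -> ~324 ppm, Fe -> ~43.7 ppm
```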
Interstellar abundances can also be inferred from observations of young stars. Studies of young ($<$ 1 Gyr) F and G stars \citep{Bensby+etal_2005, Bensby+Feltzing+2006} have yielded fairly concordant numbers for O, Mg, Al, Si, S, Ca, Fe, and Ni \citep[see][for review]{Lodders+etal_2009}. The C abundance, however, appears somewhat lower than would be predicted from the solar abundances. On the other hand, \citet{Sofia+Meyer_2001} report C, O, Mg, Si, and Fe abundances obtained from young ($\leq 2$\,Gyr) F and G stars that are in good agreement with the protosolar abundances plus enrichment, including the C abundance.
Photospheric abundances have also been determined for B stars with mostly consistent results, as summarized in Table~\ref{table:abundances}. However, the Si abundances determined from B stars are somewhat lower, with reported values of $18.8\pm8.9$\,ppm \citep{Sofia+Meyer_2001} and $31.6\pm3.6$\,ppm \citep{Nieva+Przybilla_2012}. Likewise, the Fe abundances are lower than those based on solar abundances by $\sim$10\,ppm.
Different determinations of the interstellar metal abundances are not yet fully concordant, and the uncertainties quoted by any study using a specific class of objects may under-represent the underlying systematic uncertainties particular to that method. For the purposes of this work, we adopt abundances based on solar abundances plus enrichment as representative.
Once the baseline interstellar abundances have been determined, absorption line spectroscopy can be employed to determine the quantity of each element missing from the gas phase due to incorporation into grains. Compiling data over a large number of sightlines and gas species, \citet{Jenkins_2009} defined a parameter $F_*$ that quantifies the level of depletion of all metals along that sightline. $F_* = 0.5$, roughly the median depletion in the \citet{Jenkins_2009} sample, corresponded to sightlines with mean $n_{\rm H} \simeq 0.3$\,cm$^{-3}$, appropriate for diffuse \ion{H}{i}. Therefore, we adopt the gas phase abundances for $F_* = 0.5$ as representative for the diffuse sightlines of interest in this work.
In Table~\ref{table:solid_abundance}, we list the gas phase abundances of C, O, Mg, Si, S, Fe, and Ni corresponding to $F_* = 0.5$, computed from the depletion relations derived for each element by \citet{Jenkins_2009}. For Al and Ca, we assume the level of depletion is the same as for Fe.
With the ISM and gas phase abundances constrained, we take the difference to determine the solid phase abundances, which we list in Table~\ref{table:solid_abundance}. We estimate the error bars by adding in quadrature those from Table~\ref{table:abundances} and the errors on the gas phase abundances inferred from \citet{Jenkins_2009}. Models of interstellar dust should account for the solid phase abundances presented here to within the observational and modeling uncertainties.
\section{Extinction}
\label{sec:extinction}
\subsection{Introduction}
Interstellar dust attenuates light through both scattering and absorption. ``Extinction'' refers to the sum of these processes, and the wavelength dependence of interstellar extinction forms a key constraint on the properties of interstellar grains. Because interstellar dust preferentially extinguishes shorter wavelengths in the optical, the effects of extinction are often referred to as ``reddening.''
Extinction is typically measured in one of two ways. In the ``pair method,'' the spectrum of a reddened star is compared to an intrinsic spectrum derived from a set of standard unreddened stars \citep[e.g.,][]{Trumpler_1930, Bless+Savage_1972}. Alternatively, the stellar spectrum and the interstellar extinction can be modeled simultaneously with the aid of theoretical stellar spectra \citep[e.g.,][]{Fitzpatrick+Massa_2005,Schultheis+etal_2014,Fitzpatrick+etal_2019}.
However, neither method readily yields the total extinction $A_\lambda$, where
\begin{equation}
\label{eq:a_lambda}
A_\lambda \equiv 2.5 \log_{10} \frac{F^{\rm int}_\lambda}{F^{\rm obs}_\lambda}
~~~,
\end{equation}
$F_\lambda^{\rm int}=L_\lambda/4\pi D^2$ is the intrinsic (i.e., unreddened) flux, and $F^{\rm obs}_\lambda$ is the observed flux. If, however, the wavelength dependence of the luminosity $L_\lambda$ is presumed known, then the differential extinction between two wavelengths is independent of distance $D$. Most empirical extinction curves are thus expressed as the ``selective'' extinction relative to a reference bandpass or wavelength $\lambda_0$ and written as
\begin{equation}
E(\lambda-\lambda_0) \equiv A_\lambda - A_{\lambda_0}
~~~.
\end{equation}
To remove the dependence on the dust column, this is then often normalized by the selective extinction between two reference bandpasses or wavelengths, classically the Johnson $B$ and $V$ bands, e.g., $E(\lambda-V)/E(B-V)$. The quantity
\begin{equation}
R_V \equiv \frac{A_V}{E(B-V)}
\end{equation}
is commonly used to parameterize the shape of the extinction curve.
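These definitions translate directly into code (a trivial but occasionally error-prone conversion; the value of $A_B$ below is hypothetical, chosen so that the sightline has the canonical $R_V = 3.1$):

```python
def selective_extinction(a_lam, a_ref):
    """E(lambda - lambda_0) = A_lambda - A_lambda_0, in magnitudes."""
    return a_lam - a_ref

def ratio_total_to_selective(a_v, a_b):
    """R_V = A_V / E(B - V)."""
    return a_v / selective_extinction(a_b, a_v)

# Hypothetical sightline: A_V = 1.0 mag and A_B = 1.3226 mag give R_V ~ 3.1.
```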
As noted by many authors \citep[e.g.,][]{Blanco_1957,MaizApellaniz+etal_2014,Fitzpatrick+etal_2019}, the use of bandpasses rather than monochromatic wavelengths to normalize extinction curves becomes problematic at high precision because the measured extinction in a finite bandpass depends not just on the interstellar extinction law but also on the intrinsic spectrum of the object. We therefore focus where possible in this work on spectroscopic or spectrophotometric determinations of the interstellar extinction law.
Because we are principally interested in connecting observations to the properties of interstellar grains, we express our synthesized representative extinction law in terms of optical depth $\tau_\lambda$ rather than $A_\lambda$, which are related by
\begin{equation}
\tau_\lambda \equiv \ln\left(\frac{F_\lambda^{\rm int}}{F_\lambda^{\rm obs}}\right) = \frac{A_\lambda}{2.5\log_{10} e} = \frac{A_\lambda}{1.0857}
~~~.
\end{equation}
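For reference, this magnitude-to-optical-depth conversion is a one-line utility implied by the equation above:

```python
import math

MAG_PER_TAU = 2.5 * math.log10(math.e)   # 2.5 log10(e) = 1.0857...

def tau_from_mag(a_mag):
    """Optical depth corresponding to A_lambda magnitudes of extinction."""
    return a_mag / MAG_PER_TAU

def mag_from_tau(tau):
    """Extinction in magnitudes corresponding to optical depth tau."""
    return tau * MAG_PER_TAU
```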
\subsection{X-Ray Extinction}
\label{subsec:ext_xray}
Although measurement of absolute extinction is usually not possible at X-ray energies, the differential extinction associated with X-ray absorption features can be determined spectroscopically. Such spectroscopic measurements have been made across the O K edge at 530\,eV \citep[e.g.,][]{Takei+etal_2002}, Fe L edge at $\sim700$--$750$\,eV \citep[e.g.,][]{Paerels+etal_2001,Lee+etal_2009}, Mg K edge at 1.3\,keV \citep[e.g.,][]{Rogantini+etal_2020}, and Si K edge at 1.84\,keV \citep{Schulz+etal_2016, Zeegers+etal_2017, Rogantini+etal_2020}. {\it Chandra} and {\it XMM-Newton} both have sufficient spectral resolution to distinguish gas-phase absorption from extinction contributed by dust.
X-ray spectra have been interpreted as showing that interstellar silicates are Mg-rich \citep{Costantini+etal_2012, Rogantini+etal_2019}, and \citet{Westphal+etal_2019} conclude that most of the Fe is in metallic form. While the absorption profile of the $10\,\mu$m silicate feature has been interpreted as giving a $2\%$ upper limit on the crystalline fraction (see Section~\ref{subsec:ext_sil_features}), X-ray observations of the Mg and Si K edges have been interpreted as showing that $11-15\%$ of the silicate material is crystalline \citep{Rogantini+etal_2019,Rogantini+etal_2020}.
Efforts to identify the specific minerals hosting the solid-phase C, O, Mg, Si, and Fe remain inconclusive because of not-quite-sufficient spectral resolution, limited signal-to-noise, and limited laboratory data. Scattering contributes significantly to the extinction \citep{Draine_2003b}, and therefore model comparisons depend not only on the composition of the dust, but also on the size and shape of the grains \citep{Hoffman+Draine_2016}. Future measurements of X-ray extinction and X-ray scattering (see Section~\ref{subsubsec:xray_sca}) offer the prospect of mineralogical identification. The key will be to interpret the observations using dust models together with all available observational constraints.
\subsection{UV Extinction}
\label{subsec:ext_uv}
Spectroscopy from the {\it International Ultraviolet Explorer} (IUE) has been one of the primary datasets for characterizing the interstellar extinction law in the UV since the 1980s \citep[e.g.,][]{Witt+etal_1984,Fitzpatrick+Massa_1986,Fitzpatrick+Massa_1988,Cardelli+Clayton+Mathis_1989,Valencic+Clayton+Gordon_2004}. Other notable measurements of UV extinction have been made by the {\it Copernicus} satellite \citep[e.g.][]{Cardelli+Clayton+Mathis_1989}, the {\it Orbiting and Retrievable Far and Extreme Ultraviolet Spectrometer} \citep[ORFEUS,][]{Sasseen+etal_2002}, the {\it Hubble Space Telescope} \citep[{\it HST}; e.g.,][]{Clayton+etal_2003}, and the {\it Far Ultraviolet Spectroscopic Explorer} \citep[{\it FUSE}; e.g.,][]{Gordon+etal_2009}. Extinction in the UV is characterized by a steep rise to short wavelengths, a prominent broad spectral feature at 2175\,\AA\ (see Section~\ref{subsubsec:2175}), and a notable lack of other substructure \citep{Clayton+etal_2003,Gordon+etal_2009}.
Spectroscopic characterization of interstellar extinction from UV to optical wavelengths was recently undertaken by \citet{Fitzpatrick+etal_2019}, who used {\it HST} Space Telescope Imaging Spectrograph (STIS) spectroscopy extending from 290--1027\,nm to complement {\it IUE} UV data. Additionally, JHK photometry from the Two-Micron All Sky Survey (2MASS) was used to extend the analysis into the near-infrared. On the basis of these data toward a curated sample of 72 O and B stars, they derived a mean extinction law having $A(5500\,\text{\AA})/E(4400\,\text{\AA}-5500\,\text{\AA}) = 3.02$, corresponding approximately to $R_V = 3.1$. Because the extinction was characterized spectroscopically, the resulting extinction curve is monochromatic and is normalized using the extinction at 4400 and 5500\,\AA\ rather than in the Johnson $B$ and $V$ bands. We illustrate this curve in Figures~\ref{fig:ext_uv} and \ref{fig:ext_op}.
On the basis of UV, optical, and NIR data toward a sample of 45 stars studied in the UV by \citet{Fitzpatrick+Massa_1988}, \citet{Cardelli+Clayton+Mathis_1988} presented an analytic parameterization for the extinction between 3.3 and 8\,$\mu$m$^{-1}$ as a function of $R_V$. This law was extended to the range 0.3 to 10\,$\mu$m$^{-1}$ by \citet{Cardelli+Clayton+Mathis_1989}. Combining {\it IUE} spectroscopy and 2MASS data along 417 lines of sight, \citet{Valencic+Clayton+Gordon_2004} further refined this parameterization in the 3.3 to 8.0 $\mu$m$^{-1}$ range\footnote{Note corrected numbers in \citet{Valencic_erratum}.}. We note, however, that the extinction law in this range was not formulated to join smoothly with the adjacent sections of the extinction law parameterized by \citet{Cardelli+Clayton+Mathis_1989}. Finally, \citet{Gordon+etal_2009} used the functional form\footnote{Note corrected numbers in \citet{Gordon2009_erratum}.} presented in \citet{Cardelli+Clayton+Mathis_1989} to fit 75 extinction curves measured with {\it FUSE} data from 3.3 to 11\,$\mu$m$^{-1}$.
We include the extinction laws of \citet{Cardelli+Clayton+Mathis_1989}, \citet{Valencic+Clayton+Gordon_2004}, and \citet{Gordon+etal_2009} in Figure~\ref{fig:ext_uv}. These extinction laws were derived in terms of $E(\lambda-V)/E(B-V)$ rather than monochromatic equivalents. Applying the correction factors to account for the finite bandpasses suggested by \citet{Fitzpatrick+etal_2019} (their Equation~4) results in curves that deviate more substantially from unity at 4400\,\AA\ and zero at 5500\,\AA\ than applying no correction. It is also the case that the $R_V = 3.1$ curve using the \citet{Cardelli+Clayton+Mathis_1989} parameterization does not precisely have $A_V/E(B-V) = 3.1$ \citep[see discussion in][]{MaizApellaniz_2013}. Given these issues, we simply assume that $E(B-V)$ corresponds exactly to $E(4400\,\text{\AA}\ -\ 5500\,\text{\AA})$ to convert the $R_V = 3.1$ curves to monochromatic reddenings.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{extcrv_uv.pdf}
\caption{We present various determinations of the UV extinction curve of the diffuse Galactic ISM. The extinction law of \citet{Fitzpatrick+etal_2019} (orange solid) was derived in terms of monochromatic reddenings. However, those of \citet{Cardelli+Clayton+Mathis_1989} (purple dotted), \citet{Sasseen+etal_2002} (magenta dot-dashed), \citet{Valencic+Clayton+Gordon_2004} (blue dashed), and \citet{Gordon+etal_2009} (green solid) were derived with respect to finite bandpasses, i.e., $E(\lambda-V)/E(B-V)$. As discussed in the text, we present these curves by simply assuming perfect correspondence with $E\left(\lambda-5500\,\text{\AA}\right)/E\left(4400\,\text{\AA}-5500\,\text{\AA}\right)$.} \label{fig:ext_uv}
\end{figure*}
\citet{Sasseen+etal_2002} made a determination of the mean FUV (910--1200\,\AA) extinction law using observations of eleven pairs of B stars with the {\it ORFEUS} spectrometer. This curve is also plotted in Figure~\ref{fig:ext_uv} where, as with several of the other curves presented, we do not apply any corrections to translate from the reported $E(\lambda-V)/E(B-V)$ to monochromatic reddenings. While the shape of this curve is in general agreement with that of \citet{Cardelli+Clayton+Mathis_1989} and \citet{Gordon+etal_2009}, there is significantly less FUV extinction per $E(B-V)$.
As Figure~\ref{fig:ext_uv} illustrates, there is general agreement among extinction curves in the UV. The \citet{Fitzpatrick+etal_2019} and \citet{Gordon+etal_2009} curves correspond closely between 3.3 and 6\,$\mu$m$^{-1}$, while that of \citet{Valencic+Clayton+Gordon_2004} agrees better with \citet{Fitzpatrick+etal_2019} between 5 and 8\,$\mu$m$^{-1}$. For our representative extinction curve, we therefore employ the \citet{Fitzpatrick+etal_2019} curve from 5500\,\AA\ to 8\,$\mu$m$^{-1}$, and then match onto the curve of \citet{Gordon+etal_2009} to extend to 11\,$\mu$m$^{-1}$. This is accomplished by using the \citet{Cardelli+Clayton+Mathis_1989} curve between 8 and 10\,$\mu$m$^{-1}$.
\subsection{Optical Extinction}
\label{subsec:ext_op}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{extcrv_op.pdf}
\caption{The extinction laws of \citet{MaizApellaniz+etal_2014} (green dot-dashed), \citet{Schlafly+etal_2016} (blue dotted), and \citet{Fitzpatrick+etal_2019} (orange solid) are compared at optical wavelengths. The normalization corresponds roughly to $E(V-R)$ which we adopt since the \citet{Schlafly+etal_2016} curve does not extend to wavelengths shorter than 500\,nm.} \label{fig:ext_op}
\end{figure}
While the extinction curve of the diffuse ISM has been well determined from UV to optical wavelengths over decades of observations, it is only recently that spectrophotometric observations have enabled detailed characterization at optical wavelengths. In this section, we compare determinations of the mean Galactic extinction curve from 500\,nm to 1\,$\mu$m.
In addition to \citet{Fitzpatrick+etal_2019}, a recent determination of the optical extinction law using spectroscopy is that of \citet{MaizApellaniz+etal_2014}, who used the Fibre Large Array Multi-Element Spectrograph (FLAMES) on the Very Large Telescope to determine the extinction toward 83 O and B stars in 30~Doradus. These spectroscopic data extended from 3960--5071\,\AA\ and were supplemented with both 2MASS JHK and {\it HST} Wide Field Camera 3 photometry (UBVI and H$\alpha$) to test and revise the \citet{Cardelli+Clayton+Mathis_1989} extinction law from the optical to the near-infrared. Outside this wavelength range, \citet{MaizApellaniz+etal_2014} tied the extinction law to that of \citet{Cardelli+Clayton+Mathis_1989}. The resulting curve is presented in Figure~\ref{fig:ext_op} alongside that of \citet{Fitzpatrick+etal_2019}. While there is general consistency between the two extinction laws, there are also significant departures. As 30~Doradus is located in the Large Magellanic Cloud (LMC), the extinction has a contribution from the LMC dust which may differ systematically from that of the Galaxy. We therefore seek comparisons with other observations.
\citet{Schlafly+etal_2016} determined the extinction toward 37,000 APOGEE stars in ten photometric bands from $g$ (503.2\,nm) to WISE 2 (4.48\,$\mu$m). This wavelength coverage does not extend far enough blueward to apply the normalization used in Figure~\ref{fig:ext_uv}, and indeed \citet{Schlafly+etal_2016} note that different methods of extrapolating their extinction law to the $B$ band yield $R_V$s that differ by a few tenths. Thus, Figure~\ref{fig:ext_op} presents a different comparison using $E(5500\,\text{\AA}-6410\,\text{\AA})$ as the normalization factor, roughly equivalent to $E(V-R)$. Because of the explicit treatment of the bandpasses, the \citet{Schlafly+etal_2016} extinction curve is defined with respect to monochromatic wavelengths.
From 500 to $\sim800$\,nm, the \citet{MaizApellaniz+etal_2014} and \citet{Schlafly+etal_2016} curves are in close agreement. We note that the \citet{MaizApellaniz+etal_2014} extinction law defaults to that of \citet{Cardelli+Clayton+Mathis_1989} at wavelengths longer than $\sim800$\,nm. Indeed, \citet{Schlafly+etal_2016} note that the \citet{Cardelli+Clayton+Mathis_1989} parameterization provides a poor fit to the infrared data for the full range of $R_V$ studied while the \citet{MaizApellaniz+etal_2014} law is an excellent fit in the optical.
\citet{Wang+Chen_2019} employed {\it Gaia} parallaxes for a sample of more than 61,000 red clump stars in APOGEE to overcome the distance/attenuation degeneracy and derive a mean interstellar extinction law in 21 photometric bands. When expressed as color excess ratios $E(\lambda-\lambda_1)/E(\lambda_2-\lambda_1)$, their mean curve agrees with that of \citet{Schlafly+etal_2016} to within a few percent over the full 0.5--4.5\,$\mu$m wavelength range.
Given these corroborating studies, we adopt the extinction law of \citet{Schlafly+etal_2016} from 550\,nm to the IR. However, converting from $E(\lambda-5500\,\text{\AA})/E(4400\,\text{\AA}-5500\,\text{\AA})$ to a quantity like $A_\lambda/A(5500\,\text{\AA})$ requires a measurement of the absolute extinction at some wavelength. This is because a single reddening law is consistent with a family of extinction laws that differ by an additive offset common to all wavelengths over which the reddening has been measured. The classic $R_V = 3.1$ is relatively well-determined from the fact that the infrared extinction is much smaller than the optical and UV extinction, and so measurement of reddening relative to a NIR band, e.g., $E(V-L)/E(B-V)$, constrains any component common to all bands sufficiently well, i.e., $E(V-L)/E(B-V) \approx R_V$. As determinations of the extinction curve are made at increasingly long wavelengths, the sensitivity to the size of this common component increases. We explore this issue in more detail in the following section.
\subsection{NIR Extinction}
\label{subsec:ext_nir}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{extcrv_nir.pdf}
\caption{We compare the NIR/MIR broadband extinction curves of \citet{Indebetouw+etal_2005} along a diffuse sightline in the Galactic plane ($\ell = 42^\circ$) and \citet{Wang+Chen_2019} toward a sample of more than 61,000 red clump stars in the APOGEE survey. For both of these curves, we report extinction relative to the $K_s$ band. We compare these determinations to the \citet{Schlafly+etal_2016} reddening law assuming two values of $A_H/A_K$: 1.55 from \citet{Indebetouw+etal_2005} and 1.75, a representative average of recent measurements (see Section~\ref{subsec:ext_nir}).} \label{fig:ext_nir}
\end{figure}
The NIR extinction law from $\sim$1--5\,$\mu$m is often approximated as a power law $A_\lambda \propto \lambda^{-\alpha}$. A foundational analysis of NIR extinction was made by \citet{Rieke+Lebofsky_1985}, who measured extinction toward $o$\,Sco \citep[$A_V = 2.7$,][]{Whittet_1988b}, Cyg\,OB2-12 \citep[$A_V \simeq 10.2$,][]{Humphreys_1978,TorresDodgen+etal_1991}, and several heavily reddened sources toward the Galactic Center ($A_V$ between 23 and 35). The widely-used extinction law of \citet{Cardelli+Clayton+Mathis_1989} relies on the extinction curve determined by \citet{Rieke+Lebofsky_1985} at wavelengths longer than $J$ band ($\lambda_J \simeq 1.23\,\mu$m), employing $\alpha = 1.61$ for $0.91 < \lambda/\mu{\rm m} < 3.3$.
Many other early determinations of $\alpha$ likewise found values in the range $\sim$ 1.6--1.85 (see the reviews of \citet{Draine_1989} and \citet{Mathis_1990}). However, an analysis by \citet{Stead+Hoare_2009} demonstrated that the value of $\alpha$ derived from fits to extinction in the $JHK$ photometric bands depends sensitively on how the bandpasses are treated, particularly for highly reddened sources. Accounting explicitly for these bandpass effects in sources of different intrinsic spectra and levels of reddening, and using photometry from both the United Kingdom Infrared Deep Sky Survey (UKIDSS) and 2MASS, they recommend a mean value of $\alpha = 2.15\pm0.05$, significantly larger than most earlier determinations. Recently, a similar study using 2MASS photometry found $\alpha = 2.27$ with an uncertainty of $\sim1$\% \citep{MaizApellaniz2020}.
While the power law approximation is both simple and effective, \citet{Fitzpatrick+Massa_2009} demonstrated that extinction between the $I$ ($\lambda_I \simeq 0.798\,\mu$m) and $K_s$ ($\lambda_{K_s} \simeq 2.16\,\mu$m) bands is better represented by a modified power law in which $\alpha$ increases between 0.75 and 2.2\,$\mu$m. They proposed instead a function of the form
\begin{equation}
\frac{A_\lambda}{E\left(B-V\right)} \propto \frac{1}{1 + \left(\lambda/\lambda_0\right)^\gamma}
~~~,
\end{equation}
with $\lambda_0 = 0.507$\,$\mu$m. The fit values of $\gamma$ varied considerably from sightline to sightline, ranging from $\sim1.8$--2.8, and the constant of proportionality was found to depend on $R_V$. \citet{Schlafly+etal_2016} found excellent agreement in the NIR between this parameterization with $\gamma \simeq 2.5$ and their mean extinction law. While this functional form captures flattening of the extinction law at the shortest wavelengths in this range, other studies have noted an apparent flattening of the NIR extinction law at the longest wavelengths as well, particularly in comparing the slope of the extinction curve between $J$ and $H$ ($\lambda_H \simeq 1.63\,\mu$m) to the slope between $H$ and $K_s$ \citep[e.g.,][]{Fritz+etal_2011,Hosek+etal_2018,NoguerasLara+etal_2020}. Such behavior is not unexpected given indications of a relatively flat MIR extinction curve (see Section~\ref{subsec:mir_ext}).
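The steepening behavior of this functional form can be verified directly (a sketch assuming the representative values $\lambda_0 = 0.507\,\mu$m and $\gamma \simeq 2.5$; analytically, the local logarithmic slope is $\alpha(\lambda) = -d\ln A_\lambda/d\ln\lambda = \gamma x/(1+x)$ with $x = (\lambda/\lambda_0)^\gamma$, which rises toward $\gamma$ at long wavelengths):

```python
LAM0_UM, GAMMA = 0.507, 2.5   # representative values from the text

def A_modified_power_law(lam_um):
    """Fitzpatrick & Massa (2009) form, arbitrary normalization:
    A_lambda proportional to 1 / (1 + (lambda/lambda0)^gamma)."""
    return 1.0 / (1.0 + (lam_um / LAM0_UM) ** GAMMA)

def local_slope(lam_um):
    """Effective power-law index alpha(lambda) = -d ln A / d ln lambda,
    which for this form equals gamma * x / (1 + x), x = (lambda/lambda0)^gamma."""
    x = (lam_um / LAM0_UM) ** GAMMA
    return GAMMA * x / (1.0 + x)

# The index steepens with wavelength: alpha(1.2 um) ~ 2.24, alpha(2.2 um) ~ 2.44.
```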
The assumption of a power law can have a dramatic effect on the conversion from reddening to extinction. If $A_\lambda \propto \lambda^{-\alpha}$, then
\begin{equation}
\label{eq:ejhk_alpha}
\frac{E\left(J-H\right)}{E\left(H-K\right)} = \frac{\left(\frac{\lambda_H}{\lambda_J}\right)^\alpha - 1}{1 - \left(\frac{\lambda_H}{\lambda_K}\right)^\alpha}
~~~.
\end{equation}
With a sample of 37,000 stars, \citet{Schlafly+etal_2016} made precise determinations of interstellar reddening in these bands. Inserting their measured reddenings into Equation~\ref{eq:ejhk_alpha} yields $\alpha = 2.30$. As discussed in Section~\ref{subsec:ext_op}, however, a single reddening law is consistent with a family of extinction laws that differ by an additive constant. One method of placing a limit on this constant is to require the extinction in the longest wavelength band to be positive. Another more constraining method is to find the additive constant such that the ratio of the extinction in two bands agrees with a measured value. We find that $\alpha \simeq 2.30$ between $J$ and $K$ can be achieved by employing the \citet{Schlafly+etal_2016} reddening law and imposing $A_H/A_K = 1.87$. However, this same reddening law is consistent with a (wavelength-dependent) logarithmic slope of $\sim1.7$ in the NIR when instead requiring $A_H/A_K = 1.55$ \citep[as determined by][]{Indebetouw+etal_2005}.
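The zero-point degeneracy can be demonstrated with a toy calculation (a sketch, not the original analysis; it uses the approximate band wavelengths quoted in this section, so the self-consistent ratio comes out to $A_H/A_K \approx 1.9$ rather than the 1.87 obtained with the actual bandpasses, but the shallower $\alpha \sim 1.7$ solution for $A_H/A_K = 1.55$ is reproduced):

```python
import math

LAM = {"J": 1.23, "H": 1.63, "K": 2.16}   # approximate band wavelengths (um)

def reddening_ratio(alpha):
    """E(J-H)/E(H-K) for a pure power law A_lambda ~ lambda^-alpha."""
    r_hj = (LAM["H"] / LAM["J"]) ** alpha
    r_hk = (LAM["H"] / LAM["K"]) ** alpha
    return (r_hj - 1.0) / (1.0 - r_hk)

def effective_alpha_jk(alpha_true, ah_over_ak):
    """Hold the reddenings E(J-K) and E(H-K) fixed at the values implied by a
    power law of index alpha_true, re-anchor the additive zero point by
    imposing a ratio A_H/A_K, and return the resulting effective J-to-K slope."""
    A = {band: lam ** -alpha_true for band, lam in LAM.items()}
    e_jk, e_hk = A["J"] - A["K"], A["H"] - A["K"]
    a_k = e_hk / (ah_over_ak - 1.0)   # zero point fixed by A_H/A_K
    return math.log((e_jk + a_k) / a_k) / math.log(LAM["K"] / LAM["J"])

# The same J/H/K reddenings are consistent with alpha = 2.3 when A_H/A_K ~ 1.9
# but with a much shallower alpha ~ 1.7 when A_H/A_K = 1.55 is imposed.
```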
It is therefore unclear whether the large values of $\alpha$ found by \citet{Stead+Hoare_2009} and \citet{MaizApellaniz2020} are indeed more physical due to the more careful treatment of the bandpasses or whether they are biased toward higher values of $\alpha$ by forcing extinction in the $JHK$ bands to conform precisely to a power law. An independent constraint on the absolute extinction is needed to break this degeneracy.
The default curve put forward by \citet{Schlafly+etal_2016} employs $A_H/A_K = 1.55$ as determined by \citet{Indebetouw+etal_2005}. In that study, the absolute extinction was constrained along a diffuse sightline in the Galactic plane with $\ell = 42^\circ$ by measuring the extinction toward K giants, which are well-localized in color space, under the assumption that extinction per unit distance is constant in the Galactic plane. \citet{Wang+Chen_2019} used {\it Gaia} parallaxes to measure the reddening as a function of distance modulus toward a sample of more than 60,000 red clump stars. They found $A_H/A_{K_s} = 1.75$, noting agreement with \citet{Chen+etal_2018} who used 55 classical Cepheids to measure distance to the Galactic Center and derived $A_H/A_{K_s} = 1.717$. Photometry of red clump stars toward the Galactic Center has yielded relatively concordant values of $\sim1.69\pm0.03$ \citep{Nishiyama+etal_2006,Nagatomo+etal_2019}, $1.76\pm0.10$ \citep{Schodel+etal_2010}, and $1.84\pm0.03$ \citep{NoguerasLara+etal_2020}.
The steep NIR extinction laws implied by large values of $A_H/A_K$ are difficult to reconcile with the relatively flat extinction between 4--8\,$\mu$m and with comparisons between visual extinction and extinction in the 9.7\,$\mu$m feature, as we discuss in the next section. The NIR extinction is sensitive to the relative abundance of the largest interstellar grains, and so sightlines passing through molecular gas, where grains grow to larger sizes through coagulation, may have systematically different properties. It is unclear whether this effect is responsible for the discrepancy between the observations of \citet{Indebetouw+etal_2005} on a relatively diffuse sightline and those toward the Galactic Center.
Ultimately, on the basis of the observed properties of the MIR extinction, we adopt as our representative NIR extinction curve the reddening law of \citet{Schlafly+etal_2016} with $A_H/A_K = 1.55$ to convert to extinction. We present the resulting extinction law in Figure~\ref{fig:ext_nir}, where we compare it to the same reddening law derived assuming $A_H/A_K = 1.75$ instead. Further studies of NIR extinction along diffuse sightlines are needed to clarify the steepness of the interstellar extinction curve and its variations with environment.
\subsection{MIR Extinction}
\label{subsec:mir_ext}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{extcrv_mir.pdf}
\caption{Various determinations of the MIR extinction law are presented, normalized to 2.2\,$\mu$m or the $K_s$ band as appropriate. Two versions of the \citet{Schlafly+etal_2016} curve are shown, one assuming $A_H/A_K = 1.55$ (dotted) and the other 1.75 (dashed). The extinction toward Cyg\,OB2-12 based on ISO SWS and {\it Spitzer} IRS data is shown in black \citep{Hensley+Draine_2020} and is the basis of our synthesized extinction curve for $\lambda > 2.2\,\mu$m. Note that it matches onto the \citet{Schlafly+etal_2016} curve with $A_H/A_K = 1.55$ by design. We also present a few representative broadband determinations using combinations of {\it Spitzer} IRAC, {\it WISE}, and {\it AKARI} photometry, including \citet{Indebetouw+etal_2005} (red error bars), \citet{Xue+etal_2016} ($A_{K_s} < 0.5$ sample, purple circles), and \citet{Wang+Chen_2019} (blue error bars).} \label{fig:ext_mir}
\end{figure*}
The MIR extinction is dominated by continuum extinction between 3--8\,$\mu$m and by the 9.7 and 18\,$\mu$m silicate features longward of 8\,$\mu$m. We focus here on the former, deferring discussion of the silicate features to Section~\ref{subsec:ext_sil_features}. Carbonaceous MIR extinction features are discussed in Section~\ref{subsubsec:carbon_features_mir}.
Some early determinations of the MIR extinction suggested a continuation of the NIR power law with a sharp minimum at 7\,$\mu$m \citep[e.g.,][]{Rieke+Lebofsky_1985,Bertoldi+etal_1999, Rosenthal+Bertoldi+Drapatz_2000, Hennebelle+etal_2001}. However, a growing body of work suggests that the MIR extinction is relatively flat between $\sim 4$ and 8\,$\mu$m across a diversity of sightlines and values of $R_V$.
Sightlines toward the Galactic Center have been well-measured in extinction and were the first to suggest, via observation of hydrogen recombination lines, a flattening of the extinction law in the MIR \citep{Lutz+etal_1996, Lutz_1999}. Subsequent broadband and spectroscopic observations toward the
Galactic Center \citep{Nishiyama+etal_2006, Nishiyama+etal_2008, Nishiyama+etal_2009, Fritz+etal_2011} and the Galactic plane \citep{Jiang+etal_2003, Jiang+etal_2006, Gao+Jiang+Li_2009} have proven consistent with a relatively flat extinction law. Likewise, \citet{Flaherty+etal_2007} found good agreement with the \citet{Lutz+etal_1996} extinction curve when measuring the extinction toward nearby star-forming regions where the extinction was dominated by molecular gas. Observing in the dark cloud Barnard~59 ($A_K \sim 7$, $A_V \sim 59$), \citet{RomanZuniga+etal_2007} measured a 1.25--7.76\,$\mu$m extinction law consistent with that of \citet{Lutz+etal_1996}.
We seek the properties of dust in the diffuse ISM, which may be systematically different from these more heavily extinguished sightlines. However, the relatively flat extinction law between $\sim$3 and 8\,$\mu$m appears fairly universal. Combining {\it Spitzer} and 2MASS observations on an ``unremarkable'' region in the Galactic plane centered on $\ell = 42^\circ$, $b = 0.5^\circ$, \citet{Indebetouw+etal_2005} derived an extinction curve in agreement with \citet{Lutz+etal_1996}. \citet{Zasowski+etal_2009} derived an average extinction curve over 150$^\circ$ in the Galactic midplane also using {\it Spitzer} and 2MASS photometry, finding excellent agreement with \citet{Indebetouw+etal_2005}. Further, they note consistency between their result and extinction curves in low extinction regions in molecular clouds measured by \citet{Chapman+etal_2009}. \citet{Wang+etal_2013} measured the IR extinction law in regions of the Coalsack nebula that sampled a range of environments from diffuse to dark, finding a relatively universal shape of the MIR extinction across environments. \citet{Xue+etal_2016} derived a relatively flat MIR extinction curve toward a sample of G and K giants in the {\it Spitzer} IRAC bands, in agreement with recent studies and sharply discrepant with a deep minimum in the extinction curve at $\sim7\,\mu$m.
The {\it Spitzer} Infrared Spectrograph (IRS) enables spectroscopic determination of the extinction law from $\sim5$--$37\,\mu$m. Employing IRS spectra toward a sample of five O and B stars, \citet{Shao+etal_2018} derived a relatively flat extinction curve between 5 and 7.5\,$\mu$m. Also using IRS data, \citet{Hensley+Draine_2020} determined a nearly identical extinction curve toward Cyg\,OB2-12 in the 5--8\,$\mu$m range.
On the basis of these data, we conclude that a relatively flat extinction curve between $\sim4$--$8\,\mu$m is universal and typical even of the diffuse ISM having $R_V \approx 3.1$, not just sightlines with large values of $R_V$. We summarize a selection of these observations in Figure~\ref{fig:ext_mir}. It must be cautioned, however, that the conversion from reddenings to extinction in many of these studies was accomplished by assuming a power law form in the NIR, and thus uncertainty still remains in both the precise shape and amount of 4--8\,$\mu$m extinction relative to the NIR.
To create a composite extinction law, we join the \citet{Schlafly+etal_2016} curve (with $A_H/A_K = 1.55$) described in Section~\ref{subsec:ext_nir} to the extinction measured toward Cyg\,OB2-12 by \citet{Hensley+Draine_2020}. The latter study presented a synthesized extinction curve by joining the measured 6--37\,$\mu$m extinction inferred from {\it Spitzer} IRS measurements to the \citet{Schlafly+etal_2016} extinction law likewise assuming $A_H/A_K = 1.55$. As illustrated in Figure~\ref{fig:ext_mir}, this provides a good representation of other studies of extinction in the 4--8\,$\mu$m range.
As discussed in Section~\ref{subsec:ext_nir}, $A_H/A_K = 1.55$ is low relative to several recent determinations, which favor a value of $\simeq1.75$. On the other hand, the \citet{Schlafly+etal_2016} extinction law having $A_H/A_K = 1.75$ shows no evidence for flattening even out to 4.5\,$\mu$m and implies lower 4--8\,$\mu$m extinction relative to $K$ band than inferred from a number of studies (see Figure~\ref{fig:ext_mir}). As we discuss in the following section, our adopted extinction curve has a value of $A_V/\Delta\tau_{9.7} = 20.0$, at the upper end of the observed range \citep[$\sim18.5\pm2$,][]{Draine_2003}. Joining a representative MIR extinction profile to an NIR extinction law with a higher value of $A_H/A_K$ would result in a larger $A_V/\Delta\tau_{9.7}$, exacerbating this tension. More work is needed to fully reconcile the existing observations of NIR and MIR extinction, and we thus present our synthesized curve as only our current best estimate of the true interstellar extinction.
Finally, we note that \citet{Schlafly+etal_2016} determined the interstellar extinction only in broad photometric bands and thus their resulting extinction curve does not contain spectral features. In contrast, \citet{Hensley+Draine_2020} used spectroscopic {\it ISO}-SWS data to determine the profile of the 3.4\,$\mu$m spectroscopic feature toward Cyg\,OB2-12, which can be seen in Figure~\ref{fig:ext_mir}. We discuss this and other spectroscopic features in greater detail in the following sections.
\subsection{Silicate Features}
\label{subsec:ext_sil_features}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{extcrv_silprof.pdf}
\caption{We compare three determinations of the profile of the 9.7\,$\mu$m silicate feature on different sightlines: toward the Galactic Center \citep{Kemper+Vriend+Tielens_2004}, a sample of five O and B stars \citep{Shao+etal_2018}, and Cyg\,OB2-12 \citep{Hensley+Draine_2020}. The agreement between these profiles suggests a universality of the silicate feature throughout the ISM. Residual differences between the profiles may be attributable to different treatments of the underlying continuum extinction.} \label{fig:ext_silprof}
\end{figure}
In addition to smooth continuum extinction provided by the ensemble of interstellar dust grains, there are well-studied extinction features attributable to specific grain species. Prominent among these are features at 9.7 and 18\,$\mu$m that have been identified with silicate material, the former arising from the Si-O stretching mode and the latter from the O-Si-O bending mode.
The 9.7\,$\mu$m feature was discovered as a circumstellar emission feature \citep{Gillett+Low+Stein_1968, Woolf+Ney_1969}. \citet{Woolf+Ney_1969} demonstrated that the feature was consistent with the expected behavior of silicate material, a claim strengthened by the discovery of a second feature at 18\,$\mu$m \citep{Forrest+McCarthy+Houck_1979}. Subsequent observations have revealed that these features are not only found in circumstellar emission, but are also ubiquitous in absorption in the diffuse ISM \citep[see, e.g.,][]{vanBreemen+etal_2011}.
The sightline to the Galactic Center has enabled detailed study of both the 9.7\,$\mu$m \citep{Roche+Aitken_1985, Smith+Aitken+Roche_1990, Kemper+Vriend+Tielens_2004} and 18\,$\mu$m features \citep{McCarthy+etal_1980} by virtue of its substantial dust column. \citet{Roche+Aitken_1985} found that the $V$ band extinction relative to the optical depth $\Delta\tau_{9.7}$ of the silicate feature at 9.7\,$\mu$m has a value of $A_V/\Delta\tau_{9.7} = 9\pm1$. \citet{Kemper+Vriend+Tielens_2004} employed {\it ISO} observations toward two carbon-rich Wolf-Rayet stars located toward the Galactic Center to derive the profile of the 9.7\,$\mu$m silicate feature $\Delta\tau_\lambda/\Delta\tau_{9.7\,\mu{\rm m}}$, which we plot in Figure~\ref{fig:ext_silprof}.
With heavy visual extinction \citep[$A_V \simeq 10.2$\,mag,][]{Humphreys_1978,TorresDodgen+etal_1991} and yet a lack of ice features, the sightline toward the blue hypergiant Cyg\,OB2-12 is ideal for studying extinction arising from the diffuse atomic ISM \citep{Whittet_2015}. The 9.7\,$\mu$m silicate feature on this sightline was first observed by \citet{Rieke_1974}, and subsequent observations have produced detailed determinations of both the 9.7 and 18\,$\mu$m silicate features \citep{Whittet+etal_1997,Fogerty+etal_2016,Hensley+Draine_2020}. In Figure~\ref{fig:ext_silprof}, we compare the Cyg\,OB2-12 feature profile determined by \citet{Hensley+Draine_2020} to that of the Galactic Center \citep{Kemper+Vriend+Tielens_2004} and a sample of O and B stars \citep{Shao+etal_2018}.
The agreement between these profiles corroborates other studies noting a relatively universal silicate feature profile in the diffuse ISM \citep[e.g.,][]{Chiar+Tielens_2006,vanBreemen+etal_2011}. Interstellar dust models should therefore be compatible with this profile, which has FWHM $\simeq 2.2\,\mu$m. As noted by \citet{Chiar+Tielens_2006}, this average feature profile is narrower than the profile seen in emission toward the Trapezium region \citep[FWHM $\simeq 3.45\,\mu$m,][]{Forrest+Gillett+Stein_1975}, which was used to calibrate some models \citep[e.g.,][]{Draine+Lee_1984}.
Dust models should also be able to reproduce the observed strength of the feature. The extinction curve we synthesize in this work has $A_{5500\,\text{\AA}}/\Delta\tau_{9.7} = 20.0$. Comparing a variety of measurements toward Wolf-Rayet stars and toward Cyg\,OB2-12, \citet{Draine_2003} suggested a mean value $A_V/\Delta\tau_{9.7} = 18.5\pm2$, consistent with our composite curve.
Determination of the 18\,$\mu$m feature profile is made difficult by uncertainty in the underlying continuum extinction \citep[see discussion in][]{vanBreemen+etal_2011,Hensley+Draine_2020}. $\Delta\tau_{18}/\Delta\tau_{9.7}$ is typically found to be of order 0.5 \citep{Chiar+Tielens_2006,vanBreemen+etal_2011,Hensley+Draine_2020}. In performing model fits to the emission from Cyg\,OB2-12 and its stellar wind, \citet{Hensley+Draine_2020} required that the extinction longward of 18\,$\mu$m extrapolate to values estimated from the FIR emission with a functional form approximating the dust opacity law also inferred from FIR emission. Thus, while the 18\,$\mu$m feature itself is difficult to isolate from the total extinction, the long wavelength behavior of the extinction curve synthesized here is both physically and empirically motivated and serves as a reasonable best estimate.
Just as the presence of the 9.7 and 18\,$\mu$m silicate features constrains grain models, the {\it absence} of certain features likewise informs our understanding of the composition of interstellar dust. The 11.2\,$\mu$m feature arising from silicon carbide (SiC) is not observed to low detection limits, which appears to constrain the amount of Si in SiC dust to less than about 5\% \citep{Whittet+Duley+Martin_1990}. However, the SiC absorption profile is highly shape dependent, and irregularly shaped SiC grains could be abundant despite the non-detection at 11.2\,$\mu$m. If the observed ``shoulder'' of the 9.7\,$\mu$m feature is attributed to irregular SiC grains, as much as 9--12\% of the interstellar Si could be in the form of SiC \citep{Whittet+Duley+Martin_1990}.
Little substructure has been detected in the 9.7\,$\mu$m silicate feature, indicating that the feature arises predominantly from amorphous rather than crystalline silicates. Toward Cyg~OB2-12, \citet{Bowey+Adamson+Whittet_1998} found minimal evidence for fine structure between 8.2 and 11.7\,$\mu$m except a possible weak feature at 10.4\,$\mu$m that may be attributable to crystalline serpentine. Measuring silicate absorption toward two protostars and finding a lack of fine structure, \citet{Demyk+etal_1999} determined that at most 1--2\% of the mass of the silicates giving rise to the feature in star-forming clouds could be crystalline, whereas \citet{Kemper+Vriend+Tielens_2005} estimated that at most 2.2\% of the silicate mass in the diffuse ISM could be crystalline. On the basis of detections of the 11.1\,$\mu$m feature from crystalline forsterite in many interstellar environments, \citet{DoDuy+etal_2020} concluded that $\sim1.5$\% of the silicate mass in the diffuse ISM is crystalline, which is consistent with previously derived upper limits. To the extent that the weak, broad 11.1\,$\mu$m feature is present in the extinction toward Cyg\,OB2-12, it is implicitly included in the representative extinction curve we derive in this work.
\subsection{Carbonaceous Features}
\label{subsec:ext_carbon_features}
The presence of extinction features arising from carbon bonds is well-attested in the diffuse ISM. We review here the extinction ``bump'' at 2175\,\AA, the infrared extinction features, and the diffuse interstellar bands (DIBs).
\subsubsection{\texorpdfstring{The 2175\,{\rm \AA}\ Feature}{The 2175A Feature}}
\label{subsubsec:2175}
As evidenced in Figure~\ref{fig:ext_uv}, a striking feature of the interstellar extinction curve is the ``bump'' at 2175\,\AA. This feature was first discovered by \citet{Stecher_1965} and quickly identified with extinction from small graphite particles \citep{Stecher+Donn_1965}, although this identification is not universally accepted. As the backbone of a PAH is in many ways analogous to a graphite sheet, the 2175\,\AA\ feature may be attributable to PAHs \citep{Donn_1968,Draine_1989b,Joblin+Leger+Martin_1992,Draine_2003}.
Regardless of the carrier of the feature, a number of observational facts appear clear. First, the feature appears ubiquitous in the ISM, found over a wide range of $E(B-V)$ \citep{Bless+Savage_1972,Savage_1975}. Second, the feature is quite strong and therefore its carrier must be composed of one of the more abundant elements in the ISM---C, O, Mg, Si, or Fe \citep{Draine_1989b}. Third, the central wavelength of the feature is nearly invariant across many sightlines, though the width can vary dramatically (FWHM between 360 and 600\,\AA) across environments \citep{Fitzpatrick+Massa_1986}. Finally, this feature is weaker, and in some cases absent, in sightlines toward the LMC \citep{Fitzpatrick_1985, Clayton+Martin_1985, Fitzpatrick_1986,Misselt+Clayton+Gordon_1999} and SMC \citep{RoccaVolmerange+etal_1981, Prevot_1984,Thompson+etal_1988, Gordon+etal_2003}.
The consistency of the central wavelength across environments suggests that the feature is relatively insensitive to the grain size distribution, while its weakness in the SMC and LMC lends credence to the idea that it is associated with a specific carrier which may be underabundant in those environments.
While graphite-like sheets, such as those found in PAHs, provide perhaps the most attractive explanation for the feature at present, it is not without difficulties. In particular, \citet{Draine+Malhotra_1993} demonstrated that graphite has difficulty explaining the observed width of the feature by variations in the size and shape of the grains while simultaneously preserving the constant central wavelength. Alternative hypotheses, such as transitions in OH$^{-}$ ions in amorphous silicates \citep{Steel+Duley_1987}, onion-like carbonaceous composite materials \citep{Wada+etal_1999}, and hydrogenated amorphous carbon \citep{Mennella+etal_1998, Duley+Hu_2012}, provide ways to account for the feature without invoking graphite, though most of these models still attribute the feature to carbonaceous bonds. As of yet, no hypothesis offers a clear explanation for the simultaneous near-invariance of the central wavelength and substantial variation in the feature's width.
\subsubsection{Infrared Features}
\label{subsubsec:carbon_features_mir}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{extcrv_ch.pdf}
\caption{We compare determinations of the hydrocarbon feature profiles toward the Galactic Center based on ISO-{\it SWS} spectroscopy \citep{Chiar+etal_2000,Chiar+etal_2013} and toward Cyg\,OB2-12 which employed both ISO-{\it SWS} and {\it Spitzer} IRS spectroscopy \citep{Hensley+Draine_2020}. Both sets of profiles have been normalized to the maximum optical depth in the 3.4\,$\mu$m feature. Although these sightlines probe very different interstellar environments, the agreement is excellent aside from the 7.25 and 7.7\,$\mu$m features, where the determinations are most uncertain.} \label{fig:ext_ch}
\end{figure*}
An interstellar absorption feature at 3.4\,$\mu$m was first discovered by \citet{Soifer+Russell+Merrill+1976} toward the Galactic Center source IRS7, though it was not until the non-detection of emission features at 6.2 and 7.7\,$\mu$m along the same line of sight that its interstellar origin was appreciated \citep{Willner+etal_1979}. \citet{Wickramasinghe+Allen_1980} detected a pronounced 3.4\,$\mu$m feature toward IRS7 as well as toward the M star OH\,01--477, which they attributed to the CH stretch band. Detection of this feature toward Cyg\,OB2-12 suggests that it is a generic feature of extinction from the diffuse ISM \citep{Adamson+Whittet+Duley_1990,Whittet+etal_1997}.
Subsequent observations of the 3.4\,$\mu$m feature revealed a complex profile, including a number of ``subpeaks'' at 3.39, 3.42, and 3.49\,$\mu$m \citep{Duley+Williams_1983, Butchart+etal_1986, Sandford+etal_1991}. \citet{Sandford+etal_1991} demonstrated consistency between these features and C-H stretching in $=$CH$_2$ (methylene) and -CH$_3$ (methyl) groups in aliphatic hydrocarbons. These results were supported by the more extensive observations of \citet{Pendleton+etal_1994}, who determined that the diffuse ISM has a characteristic CH$_2$ to CH$_3$ abundance ratio of about 2.0--2.5. Detailed comparison of the 3.4\,$\mu$m feature to laboratory measurements of a range of materials yielded a close match with hydrocarbons with both aliphatic and aromatic characteristics \citep{Pendleton+Allamandola_2002}.
A key prediction of the aliphatic hydrocarbon origin of the 3.4\,$\mu$m feature is the presence of a 6.85\,$\mu$m CH deformation mode. \citet{Tielens+etal_1996} identified this feature in an IR spectrum of the Galactic Center, confirming this hypothesis. Additionally, they identified features at 5.5 and 5.8\,$\mu$m with C=O (carbonyl) stretching and a feature at 5.5\,$\mu$m with metal carbonyls such as Fe$\left({\rm CO}\right)_4$. Subsequently, \citet{Chiar+etal_2000} detected a 7.25\,$\mu$m feature ascribed to a methylene deformation mode toward the Galactic Center. The 6.85\,$\mu$m feature has been observed toward Cyg\,OB2-12 with the same strength relative to the 3.4\,$\mu$m feature as seen toward the Galactic Center \citep{Hensley+Draine_2020}. Thus, the 6.85\,$\mu$m feature also appears generic to extinction from the diffuse ISM. On the other hand, the 7.25\,$\mu$m feature was {\it not} detected toward Cyg\,OB2-12, although a weak feature could not be completely ruled out. The hydrocarbon feature profiles toward the Galactic Center and Cyg\,OB2-12 are compared in Figure~\ref{fig:ext_ch}.
The 3.47\,$\mu$m subfeature of the 3.4\,$\mu$m complex has been attributed to bonds between H and $sp^3$ bonded (diamond-like) C \citep{Allamandola+etal_1992}. This feature appears to be present in the spectrum of the Galactic Center \citep{Chiar+etal_2013} and absorption in the vicinity of this feature is even stronger toward Cyg\,OB2-12 \citep{Hensley+Draine_2020}, as illustrated in Figure~\ref{fig:ext_ch}. While this suggests diamond-like C may be ubiquitous in both the dense and diffuse ISM, it is in conflict with the finding of \citet{Brooke+etal_1996} that the strength of the 3.47\,$\mu$m feature is better correlated with the 3.1\,$\mu$m H$_2$O ice feature (absent toward Cyg\,OB2-12) than with the 9.7\,$\mu$m silicate feature. Observations of these features on more sightlines are needed to clarify the evolution of hydrocarbons in the ISM.
The attribution of the strong IR emission features to PAHs (see Section~\ref{sec:pah_emission}) implies the presence of aromatic features in the interstellar extinction curve as well as the observed aliphatic features. Observing eight IR sources, including two Galactic Center sources and Cyg\,OB2-12, with {\it ISO}-SWS spectroscopy, \citet{Schutte+etal_1998} detected a 6.2\,$\mu$m absorption feature associated with aromatic hydrocarbons, which has a well-known corresponding emission feature. Subsequently, both the 3.3 and 6.2\,$\mu$m aromatic features were detected in absorption toward the Quintuplet Cluster \citep{Chiar+etal_2000,Chiar+etal_2013}, and \citet{Hensley+Draine_2020} reported detections of the 3.3, 6.2, and 7.7\,$\mu$m aromatic features in absorption toward Cyg\,OB2-12. While there is a feature in the extinction curve toward the Galactic Center in the vicinity of 7.7\,$\mu$m, \citet{Chiar+etal_2000} attributed it to the 7.68\,$\mu$m feature from methane ice. Because of the detection on the iceless sightline toward Cyg\,OB2-12, we include it in Figure~\ref{fig:ext_ch}, but note that there are substantial observational uncertainties on the depth and width of the feature in both the Galactic Center and Cyg\,OB2-12 determinations. The strength of the 7.7\,$\mu$m feature detected toward Cyg\,OB2-12 is, however, consistent with predictions of models for interstellar PAHs \citep{Draine+Li_2007}.
While the aromatic 3.3\,$\mu$m feature is substantially weaker than the aliphatic 3.4\,$\mu$m feature in absorption, it dominates in emission. It is also noteworthy that the 3.3\,$\mu$m feature width is substantially broader in absorption \citep[$\Delta\lambda^{-1} \simeq 90\,$cm$^{-1}$,][]{Chiar+etal_2013,Hensley+Draine_2020} than seen in emission \citep[$\Delta\lambda^{-1} \simeq 30\,$cm$^{-1}$,][]{Tokunaga+etal_1991,Joblin+etal_1996}.
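For intuition, these wavenumber widths can be converted to wavelength widths via $\Delta\lambda \simeq \lambda^2\,\Delta\tilde{\nu}$; the minimal sketch below fixes the central wavelength at 3.3\,$\mu$m:

```python
# Convert a FWHM quoted in wavenumber (cm^-1) to wavelength (um) for
# the 3.3 um aromatic C-H feature: delta_lambda ~ lambda^2 * delta_nu,
# valid for a narrow feature.
lam_cm = 3.3e-4  # 3.3 um expressed in cm

def fwhm_um(dnu_per_cm):
    """Wavelength FWHM (um) corresponding to a wavenumber FWHM (cm^-1)."""
    return lam_cm ** 2 * dnu_per_cm * 1e4  # factor 1e4 converts cm -> um

print(round(fwhm_um(90), 3))  # absorption width, ~0.098 um
print(round(fwhm_um(30), 3))  # emission width,   ~0.033 um
```

The absorption feature is thus roughly 0.1\,$\mu$m wide, about three times broader than the emission feature.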
As with the silicate features, carbonaceous features {\it not} observed in the diffuse ISM also constrain dust composition. Polycrystalline graphite is expected to have a lattice resonance in the vicinity of 11.53\,$\mu$m \citep{Draine_1984,Draine_2016}. Such a feature was not observed towards Cyg\,OB2-12 \citep{Hensley+Draine_2020}, though the weakness of the feature allowed only an upper limit of $<$160\,ppm of C in graphite to be set. More stringent upper limits will require more sensitive data and possibly a sightline without contaminating H recombination lines.
Laboratory data suggest the presence of NIR features at 1.05 and 1.23\,$\mu$m associated with ionized PAHs having 40--50 C atoms \citep{Mattioda+etal_2005a,Mattioda+etal_2005b}. These wavelengths may be too short for even ultrasmall grains to produce strong emission features, but if present they should be observable in extinction \citep{Mattioda+etal_2005a}. However, we are unaware of any existing observational constraints on the presence or absence of these features.
\subsubsection{The Diffuse Interstellar Bands}
The diffuse interstellar bands are a set of numerous, relatively broad (hence ``diffuse'') interstellar absorption features that likely arise from molecular transitions. The first two DIBs $\lambda5780$ and $\lambda5795$ were noted as unidentified stellar absorption features \citep{Heger_1922a, Heger_1922b}, but their interstellar nature was not confirmed until \citet{Merrill_1936} found that the lines remained at fixed wavelength in a spectroscopic binary while the stellar lines exhibited the expected time-dependent oscillation. Subsequently, over five hundred DIBs have been cataloged, the vast majority of which have not been matched to a specific molecular carrier \citep{Herbig_1995, Hobbs+etal_2009, Fan+etal_2019}.
The first definitive identification of a DIB carrier did not occur until 2015 when laboratory measurements demonstrated that C$_{60}^+$ can reproduce the absorption features at 9632 and 9577\,\AA\ \citep{Campbell+etal_2015}. Subsequent detection of the predicted 9428\,\AA\ band has confirmed C$_{60}^+$ as the carrier \citep{Cordiner+etal_2019}. Based on the observed DIB strength, it is estimated that C$_{60}^+$ accounts for only $\sim0.1$\% of the interstellar carbon abundance \citep{Berne+etal_2017}.
The correlation between DIB strength and total reddening is non-linear \citep{Snow+Cohen_1974} and varies among DIBs, suggesting that the various DIB carriers preferentially reside in different interstellar environments, e.g., atomic versus molecular gas \citep{Lan+etal_2015}. It is in principle possible to construct a representative spectrum for DIBs in diffuse \ion{H}{i} gas assuming the empirical relations between DIB equivalent widths and $N_\ion{H}{i}$ derived by \citet{Lan+etal_2015} for the set of 20 DIBs between 4430 and 6614\,\AA\ considered in their study, but we do not pursue such an undertaking in this work.
\subsection{Other Features}
Although we have discussed a number of extinction features associated with specific materials found in diffuse interstellar gas, this inventory is incomplete, particularly as we push to weaker features. Indeed, \citet{Massa+etal_2020} recently presented evidence of ``Intermediate Scale Structure,'' i.e., extinction features a few hundred to 1000\,\AA\ wide, in the spectrophotometric extinction curves of \citet{Fitzpatrick+etal_2019}. They identified two features at 4370 and 4870\,\AA\ which both showed correlation with the strength of the 2175\,\AA\ feature, and one feature at 6300\,\AA\ which did not. Further, they argue that the reported ``Very Broad Structure'' \citep{Whiteoak_1966} is actually a minimum between the 4870 and 6300\,\AA\ features. These features affect the optical extinction at the $\lesssim10\%$ level, and we include them in our representative extinction curve only insofar as they are inherent in the mean extinction curves of \citet{Schlafly+etal_2016} and \citet{Fitzpatrick+etal_2019}, which we employ over this wavelength range.
\subsection{\texorpdfstring{$N_{\rm H}/E(B-V)$}{NH/E(B-V)}}
\label{subsec:nh_ebv}
It is expected that the amount of extinction on a given sightline scales linearly with the dust column density and, to the extent that dust and gas are well mixed, with the gas column density. This scaling is borne out observationally and is typically summarized by the quantity $N_{\rm H}/E(B-V)$, which appears roughly constant for the diffuse ISM. Using Ly$\alpha$ absorption measurements made by the {\it Copernicus} satellite for 75 stars within 3400\,pc, \citet{Bohlin+Savage+Drake_1978} derived a value of $N_{\rm H}/E(B-V) = 5.8\times10^{21}$\,H\,cm$^{-2}$\,mag$^{-1}$. They noted that very few of their sightlines differ from this relation by more than a factor of 1.5. Ly$\alpha$ absorption studies with IUE by \citet{Shull+vanSteenberg_1985} and \citet{Diplas+Savage_1994} derived similar $N_\ion{H}{i}/E(B-V)$ values of 5.2 and 4.9$\times10^{21}$\,H\,cm$^{-2}$\,mag$^{-1}$, respectively. Finally, \citet{Rachford+etal_2009} obtained $N_{\rm H}/E(B-V) = \left(5.94\pm0.37\right)\times10^{21}$\,H\,cm$^{-2}$\,mag$^{-1}$ with data from {\it FUSE} for translucent clouds ($A_V \gtrsim 0.5$) where both \ion{H}{i} and H$_2$ were measured directly.
Measuring the \ion{H}{i} column density toward globular clusters using the 21\,cm line, \citet{Knapp+Kerr+1974} and \citet{Mirabel+Gergely_1979} found $N_\ion{H}{i}/E(B-V)$ of 5.1 and 4.6$\times10^{21}$\,H\,cm$^{-2}$\,mag$^{-1}$, respectively. These values are also consistent with data from a similar study using RR Lyrae \citep{Sturch_1969}, all of which corroborate the values from \ion{H}{i} absorption studies.
However, employing 21\,cm data from the Leiden-Argentina-Bonn (LAB) \ion{H}{i} Survey \citep{Kalberla+etal_2005} and the Galactic Arecibo L-band Feed Array (GALFA) H{\sc i} Survey \citep{Peek+etal_2011} in conjunction with the reddening map of \citet{Schlegel+Finkbeiner+Davis_1998}, \citet{Liszt_2014a} determined $N_\ion{H}{i}/E(B-V) = 8.3\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ for $|b| \gtrsim 20^\circ$ and $E(B-V) \lesssim 0.1$\,mag. This is a factor of 1.4 higher than that found by \citet{Bohlin+Savage+Drake_1978}. \citet{Liszt_2014a} noted that some previous determinations using \ion{H}{i} emission are in good agreement with this higher value, particularly for $E(B-V) < 0.1$. For instance, \citet{Heiles_1976} found $E(B-V) = (-0.041 \pm 0.012)\,{\rm mag} + N_\ion{H}{i}/\left[(4.85 \pm 0.36)\times10^{21}\,{\rm cm}^{-2}\,{\rm mag}^{-1}\right]$, consistent with the higher value of \citet{Liszt_2014a} when $E(B-V) < 0.1$ due to the negative intercept. Likewise, \citet{Mirabel+Gergely_1979} required a negative intercept to fit their data, suggesting a change in behavior at low reddening.
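A short calculation makes the role of the negative intercept explicit; the reddening values of 0.05 and 0.5\,mag used below are illustrative choices:

```python
# Effective N_HI/E(B-V) implied by the Heiles (1976) linear fit,
# E(B-V) = -0.041 mag + N_HI / (4.85e21 cm^-2 mag^-1), evaluated
# at two illustrative reddenings.
SLOPE = 4.85e21     # cm^-2 mag^-1
INTERCEPT = -0.041  # mag

def effective_ratio(ebv):
    """N_HI / E(B-V) in cm^-2 mag^-1 at a given reddening (mag)."""
    n_hi = (ebv - INTERCEPT) * SLOPE  # invert the linear fit for N_HI
    return n_hi / ebv

print(f"{effective_ratio(0.05):.2e}")  # ~8.8e21, near the Liszt (2014) value
print(f"{effective_ratio(0.5):.2e}")   # ~5.2e21, near the Bohlin et al. value
```

At low reddening the effective ratio approaches the \citet{Liszt_2014a} value, while at higher reddening it falls toward the \citet{Bohlin+Savage+Drake_1978} value.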
In a subsequent analysis, \citet{Lenz+Hensley+Dore_2017} correlated $N_\ion{H}{i}$ measurements from the HI4PI Survey \citep{HI4pi_2016} and maps of interstellar reddening as determined by \citet{Schlegel+Finkbeiner+Davis_1998} over the diffuse, high-latitude sky. They found a characteristic $N_\ion{H}{i}/E(B-V) = 8.8\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ on these sightlines, with a systematic uncertainty of about 10\%. Comparing 21\,cm observations to stellar extinction along 34 sightlines with little molecular gas, \citet{Nguyen+etal_2018} found a compatible $N_{\rm H}/E(B-V) = \left(9.4\pm1.6\right)\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ (95\% confidence interval) and that this relation persists to $N_{\rm H}$ as high as $3\times10^{21}$\,cm$^{-2}$. Using X-ray absorption to infer $N_{\rm H}$, \citet{Zhu+etal_2017} found a mean value of $N_{\rm H}/A_V = \left(2.08\pm0.02\right)\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ toward a sample of supernova remnants, planetary nebulae, and X-ray binaries across the Galaxy. For $R_V = 3.1$, this corresponds to $N_{\rm H}/E(B-V) = 6.45\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$, intermediate between the \citet{Bohlin+Savage+Drake_1978} and \citet{Lenz+Hensley+Dore_2017} values.
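Explicitly, since $R_V \equiv A_V/E(B-V)$, the conversion of the \citet{Zhu+etal_2017} value reads

```latex
\begin{equation*}
\frac{N_{\rm H}}{E(B-V)} = \frac{N_{\rm H}}{A_V}\,R_V
\simeq 2.08\times10^{21}\,{\rm cm}^{-2}\,{\rm mag}^{-1}\times 3.1
\simeq 6.45\times10^{21}\,{\rm cm}^{-2}\,{\rm mag}^{-1}.
\end{equation*}
```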
The striking differences among these determinations of $N_{\rm H}/E(B-V)$ are consistent with systematic variations of the dust-to-gas ratio in the Galaxy, with more dust per H atom in the Galactic plane and less at high Galactic latitudes. As we focus here on high-latitude sightlines, where the dust emission per H atom is best determined (see Section~\ref{sec:irem}), we adopt the value $N_{\rm H}/E(B-V) = 8.8\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ of \citet{Lenz+Hensley+Dore_2017} as our benchmark.
\subsection{Scattering}
Extinction is the sum of two processes---absorption and scattering. The scattering properties of dust can be constrained by studying the surface brightness profile of scattered light around point sources and the spectrum of the diffuse Galactic light. However, both of these constraints involve simultaneous modeling of both the dust optical properties and the scattering geometry and are therefore difficult to incorporate self-consistently into the present analysis. We provide a brief overview below, but do not incorporate these observations into our final set of model constraints.
\subsubsection{X-ray Scattering}
\label{subsubsec:xray_sca}
Interstellar grains scatter X-rays through small angles \citep{Overbeck_1965,Hayakawa_1970,Martin_1970}, which can be observed as a ``scattering halo'' in X-ray images of point sources with intervening interstellar dust \citep{Catura_1983,Mauche+Gorenstein_1986}. The scattering is sensitive to both dust composition and size distribution, providing additional observational constraints that a grain model should satisfy.
The angular extent of the scattering halo also depends on the location of the dust between us and the source. For Galactic sources (e.g., low-mass X-ray binaries), this introduces uncertainty when comparing models to observations.
The best-studied case is GX~13+1 \citep{Smith_2008}. \citet{Valencic+Smith_2015} surveyed 35 X-ray scattering halos, and concluded that most could be satisfactorily fit by one or more dust models with size distributions having few grains larger than $\sim0.4\,\mu$m. Extragalactic sources with intervening Galactic dust, the exact distance to which would be unimportant, would be optimal for testing dust models, but high signal-to-noise imaging of X-ray halos around reddened AGN is lacking.
The scattering cross section for the dust grains is expected to show spectral structure near X-ray absorption edges \citep{Draine_2003b}. If this could be observed, it would provide a means to detect or constrain variations of grain composition with size. \citet{Costantini+etal_2005} reported spectral structure in the scattering halo around Cyg~X-2. Features appear to be present near the O K, Fe L, Mg K, and Si K absorption edges, although the interpretation remains unclear. Future X-ray telescopes may enable more sensitive spectroscopy of scattering halos.
A population of aligned, aspherical grains can produce observable asymmetries in an X-ray scattering halo \citep{Draine+AllafAkbari_2006}. \citet{Seward+Smith_2013} employed {\it Chandra} observations of Cyg~X-2 to search for these asymmetries, but found the X-ray halo to be uniform in surface brightness to at least the 2\% level. A detection of halo asymmetry has yet to be reported.
Because X-ray scattering is sensitive to grain structure on small scales, X-ray halos can also provide constraints on grain porosity. Analyzing the {\it Chandra} observations of the Galactic binary GX~13+1 of \citet{Smith+Edgar+Shafer_2002}, \citet{Heng+Draine_2009} found that the small-angle scattering from grains with porosity greater than 0.55 overpredicts the observed surface brightness in the core of the scattering halo. As the degree of compactness of interstellar grains remains a major unresolved question, ancillary data and analysis are needed to test the conclusions of \citet{Heng+Draine_2009}.
\subsubsection{Diffuse Galactic Light}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{scattering.pdf}
\caption{Constraints on the dust albedo $\omega$ and phase function asymmetry $g \equiv \langle \cos\theta \rangle$ inferred from measurements of the diffuse Galactic light.} \label{fig:scattering}
\end{figure}
Even in a dark patch of sky far from point sources, there is still light from emission in the ISM and from starlight that has been scattered off of dust grains. This ``diffuse Galactic light'' (DGL) was first detected in the photoelectric measurements of \citet{Elvey+Roach_1937}, who derived a surface brightness of 5.6\,mag per square degree at $\lambda \approx 4500$\,\AA. These results were corroborated by the photometric observations of \citet{Henyey+Greenstein_1941}, who concluded that dust grains must have a relatively large albedo $\omega$ ($0.3 < \omega < 0.8$) and be relatively forward scattering, having anisotropy parameter $g \equiv \langle \cos\theta \rangle$ (where $\theta$ is the scattering angle) greater than 0.65. Particles in the Rayleigh limit (i.e., small compared to the wavelength) have $g \approx 0$, i.e., isotropic scattering, indicating that the scattering in the ISM is dominated by larger grains (radius $a \gtrsim 0.1\,\mu$m).
The conversion of measurements of the intensity of scattered light into constraints on the scattering properties of interstellar dust is challenging, as it requires assumptions about the spatial distributions of both the sources and the scatterers. Nevertheless, observations of the DGL from the optical to the UV have been used to constrain the wavelength dependence of both $\omega$ and $g$.
Employing 1500--4200\,\AA\ photometric observations from the Orbiting Astronomical Observatory (OAO-2) in 71 fields at varying Galactic longitude, \citet{Lillie+Witt_1976} found good agreement with earlier ground-based measurements of the DGL. They constrained $\omega$ and $g$ through a radiative transfer analysis on an axisymmetric plane-parallel galaxy in which both dust and stars decrease exponentially with height above the disk, finding $0.3 < \omega < 0.7$ with indications of a minimum near 2200\,\AA, coincident with the extinction bump (see Section~\ref{subsubsec:2175}). Except in this minimum where $g$ attained values as high as 0.9, they found $0.6 < g < 0.7$.
The UV spectrometers aboard the two {\it Voyager} spacecraft were used to study dust scattering in the Coalsack Nebula by \citet{Murthy+Henry+Holberg_1994}. They employed a simple scattering model assuming fixed $g$ and single scattering only to infer the wavelength dependence of the dust albedo. Fixing $\omega = 0.5$ at 1400\,\AA, they computed the {\it relative} albedo at other wavelengths, finding little wavelength dependence aside from a modest increase toward shorter wavelengths. A follow-up analysis by \citet{Shalima+Murthy_2004} using a more sophisticated Monte Carlo model for the dust scattering determined the FUV dust albedo to be $0.4\pm0.2$.
The Far Ultraviolet Space Telescope (FAUST) measured the diffuse UV continuum between 140 and 180\,nm. Employing the 156\,nm flux measurements from this experiment and a radiative transfer model that accounted for non-isotropic radiation fields and multiple scatterings, \citet{Witt+Friedmann+Sasseen_1997} derived a FUV dust albedo of $0.45\pm0.05$ and $g = 0.68\pm0.10$. The rocket-borne Narrowband Ultraviolet Imaging Experiment for Wide-Field Surveys (NUVIEWS) measured the diffuse UV background at 1740\,\AA. Using a 3D Monte Carlo scattering model based on that described in \citet{Witt+Friedmann+Sasseen_1997}, \citet{Schiminovich+etal_2001} constrained the dust albedo to be $\omega = 0.45\pm0.05$ and $g = 0.77\pm0.1$.
By correlating the spectra of SDSS sky fibers (i.e., spectra of the ``blank'' sky taken for calibration purposes) against the 100\,$\mu$m dust emission measured by IRAS, \citet{Brandt+Draine_2012} measured the spectrum of the DGL between 3900 and 9200\,\AA. Modeling the DGL scattering geometry with a plane-parallel exponential galaxy, they compared the observed spectrum to predictions from dust models. Their formalism could in principle be used to place constraints directly on $\omega$ and $g$, but we do not pursue such an analysis here.
We summarize these constraints on the dust albedo and asymmetry parameter in Figure~\ref{fig:scattering}. Given the modeling uncertainties inherent in translating the DGL intensity to the scattering properties of interstellar dust, we do not at this time incorporate these data into our set of constraints. These limitations notwithstanding, it is clear that interstellar dust must have a UV/optical albedo of order 0.5 and be relatively forward scattering ($g > 0.5$).
\subsection{Spatial Variation of the Extinction Curve}
\label{subsec:ext_variations}
It is well established that there is not a single universal extinction curve that describes all regions of the ISM, but rather a variety of extinction curves typically parameterized by $R_V$ \citep{Johnson+Borgman_1963,Cardelli+Clayton+Mathis_1989}. For instance, measurements of extinction toward the Galactic Bulge have indicated $R_V \approx 2.5$ \citep{Udalski_2003,Nataf+etal_2013}. \citet{Schlafly+etal_2016} found large scale gradients in $R_V$, with a follow-up study indicating a possible dependence on Galactocentric radius such that the outer Galaxy has systematically higher $R_V$ than the inner Galaxy \citep{Schlafly+etal_2017}. The magnitude of the variations in $R_V$, however, was relatively small \citep[$\sigma_{R_V} = 0.18$,][]{Schlafly+etal_2016}. Extinction in dark clouds differs systematically from the diffuse ISM due to the growth of grains by coagulation and the formation of ice mantles. We do not attempt to summarize the observed range of variations in this work, instead restricting our focus to the extinction curve of the local diffuse ISM having an average $R_V \approx 3.1$ \citep{Morgan+etal_1953,Schultz+Wiemer_1975, Sneden+etal_1978, Koornneef_1983, Rieke+Lebofsky_1985,Fitzpatrick+etal_2019}.
\section{Polarized Extinction}
\label{sec:extpol}
Following the discovery that starlight is polarized \citep{Hiltner_1949a, Hiltner_1949b, Hiltner_1949c, Hall_1949, Hall+Miksell_1949, Hall+Mikesell_1950}, it was quickly realized that the origin of this polarization was due to selective extinction by aligned dust grains rather than inherent polarization of the stars themselves. \citet{Davis+Greenstein_1951} proposed a physical model of grain alignment whereby aspherical dust grains preferentially aligned with the local magnetic field. Our understanding of the alignment processes of dust grains has since undergone significant evolution \citep[see][for a review]{Andersson+Lazarian+Vaillancourt_2015}, though it remains clear that observations of polarized extinction constrain the size, shape, composition, and alignment properties of interstellar dust.
In this section we summarize observations of the polarized extinction, focusing upon its wavelength dependence, spectral features, and amplitude per unit reddening.
\subsection{Wavelength Dependence}
\label{sec:extpol_wav}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{extpol_opuv.pdf}
\caption{We plot the wavelength dependence of the polarized
extinction, normalizing to the peak polarization. We employ the Serkowski Law in the UV and optical with $\lambda_{\rm max} = 0.55\,\mu$m, and we match this smoothly onto a power law in the IR such that $p_\lambda \propto \lambda^{-1.6}$. The solid line corresponds to a Serkowski Law parameter $K = 0.87$, while the shaded region illustrates the effects of varying $K$ between 0.82 and 0.92, corresponding to the UV- and IR-optimized forms of the Wilking Law described by \citet{Whittet_2003}.} \label{fig:ext_pol}
\end{figure}
Initial observations of the polarized extinction from UV to NIR wavelengths \citep[e.g.,][]{Behr_1959,Gehrels_1960,Coyne+etal_1974,Gehrels_1974,Serkowski+Mathewson+Ford_1975} established a characteristic wavelength dependence of the polarized extinction that is often parametrized by the ``Serkowski Law'' \citep{Serkowski_1973}:
\begin{equation}
\label{eq:serkowski}
p_\lambda/p_{\rm max} \simeq \exp\left[-K\ln^2\left(\lambda_{\rm max}/\lambda\right)\right]
~~~,
\end{equation}
where $p_\lambda$ is the polarization fraction of the two linear polarization modes and $p_{\rm max}$ is the maximum value of $p_\lambda$ occurring at wavelength $\lambda_{\rm max}$. \citet{Serkowski_1971} prescribed the values $K = 1.15$ and $\lambda_{\rm max} = 0.55$\,$\mu$m.
Subsequent observations of polarized extinction revealed that the polarization peak becomes narrower (i.e., $K$ increases) as $\lambda_{\rm max}$ increases \citep{Wilking+etal_1980,Wilking+etal_1982}. This relation, known as the ``Wilking Law,'' is parametrized by the linear relationship
\begin{equation}
K \simeq c_1 \lambda_{\rm max} + c_2
~~~,
\end{equation}
where $c_1$ and $c_2$ are constants to be fit. Analyzing the polarized extinction from the $U$ to $K$ band, \citet{Whittet+etal_1992} derived values of $c_1 = 1.66\,\mu$m$^{-1}$ and $c_2 = 0.01$. Employing UV polarimetry from the Wisconsin Ultraviolet Photo-Polarimeter Experiment (WUPPE), \citet{Martin+Clayton+Wolff_1999} fit values of $c_1 = 2.56\,\mu$m$^{-1}$ and $c_2 = -0.59$. As the former determination is a better fit to the observations from the optical to the IR, and the latter a better fit from the UV to the optical, \citet{Whittet_2003} recommended a ``compromise fit'' employing the mean of the two determinations, i.e., $c_1 = 2.11\,\mu$m$^{-1}$ and $c_2 = -0.29$, yielding $K = 0.87$ for $\lambda_{\rm max} = 0.55\,\mu$m. At this $\lambda_{\rm max}$, all three parameterizations produce similar polarized extinction laws, as shown in Figure~\ref{fig:ext_pol}.
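The spread in $K$ implied by these fits at $\lambda_{\rm max} = 0.55\,\mu$m can be checked directly; a minimal sketch using only the coefficients above:

```python
# Evaluate K = c1 * lambda_max + c2 at lambda_max = 0.55 micron
# for the three Wilking-Law parameterizations discussed in the text.

lam_max = 0.55  # micron

fits = {
    "Whittet et al. (1992), optical-IR": (1.66, 0.01),
    "Martin et al. (1999), UV-optical":  (2.56, -0.59),
    "Whittet (2003), compromise":        (2.11, -0.29),
}
for name, (c1, c2) in fits.items():
    print(f"{name:34s} K = {c1 * lam_max + c2:.3f}")
```

The three fits give $K \approx 0.92$, 0.82, and 0.87, respectively, i.e., the compromise fit sits between the UV- and IR-optimized values.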
Constraints on the polarized extinction law in the UV come almost entirely from WUPPE and the Faint Object Spectrograph on {\it Hubble}, and so while the Serkowski Law appears to describe interstellar polarization down to $\lambda \simeq 1300$\,\AA\ \citep{Somerville+etal_1994}, extrapolations to wavelengths shorter than were accessible by these instruments are uncertain. We therefore adopt 1300\,\AA\ as the shortest wavelength for our polarized extinction curve.
Although the Serkowski Law (Equation~\ref{eq:serkowski}) describes well the polarized extinction in the UV and optical, it underestimates the observed polarization in the infrared, particularly between $\sim2$ and $5\,\mu$m \citep{Nagata_1990, Jones+Gehrz_1990}. Compiling determinations of the IR polarized extinction along the lines of sight to a number of molecular clouds observed by \citet{Hough+etal_1989}, \citet{Martin+Whittet_1990} determined that the IR polarized extinction could be fit with a power law $p_\lambda \propto \lambda^{-\beta}$ with indices ranging from $\beta = 1.5$ to 2.0. With polarimetry extending from optical wavelengths to 5\,$\mu$m, \citet{Martin+etal_1992} found the $\sim$1--4\,$\mu$m extinction was well-fit by a power law with index $\beta = 1.6$. Between 4 and 5\,$\mu$m, however, the power law systematically underpredicted the observed polarization.
The behavior of the IR polarized extinction is relatively robust to variations that exist at optical and UV wavelengths as demonstrated by \citet{Clayton+Mathis_1988}.
For our representative polarized extinction curve, we adopt the ``compromise fit'' of \citet{Whittet_2003} with $K = 0.87$ and $\lambda_{\rm max} = 0.55\,\mu$m from 0.12\,$\mu$m to $\lambda = \lambda_{\rm max}\exp\left(\beta/2K\right) = 1.38\,\mu$m, beyond which we join the curve smoothly onto a power law with $\beta = 1.6$ extending to 4\,$\mu$m.
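The matching wavelength follows from requiring continuity of both the polarization and its logarithmic slope: the Serkowski Law has $d\ln p_\lambda/d\ln\lambda = -2K\ln(\lambda/\lambda_{\rm max})$, which equals the power-law slope $-\beta$ at $\lambda = \lambda_{\rm max}\exp(\beta/2K)$. A sketch verifying this numerically:

```python
import math

# Verify that the Serkowski Law joins a p ~ lambda^(-beta) power law with
# matching logarithmic slope at lambda_join = lambda_max * exp(beta / 2K).

K, lam_max, beta = 0.87, 0.55, 1.6   # compromise-fit parameters; wavelengths in microns

lam_join = lam_max * math.exp(beta / (2.0 * K))

def serkowski(lam):
    """Serkowski Law, normalized to p_max = 1."""
    return math.exp(-K * math.log(lam_max / lam) ** 2)

def log_slope(f, lam, eps=1.0e-6):
    """Numerical d ln f / d ln lambda."""
    return (math.log(f(lam * (1 + eps))) - math.log(f(lam * (1 - eps)))) / (2 * eps)

print(f"join wavelength = {lam_join:.3f} micron")                         # ~1.38
print(f"Serkowski slope at join = {log_slope(serkowski, lam_join):.3f}")  # ~ -1.6
```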
\subsection{Silicate Features}
\label{subsec:sil_features}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{extpol_sil.pdf}
\caption{A composite polarized extinction profile of the 9.7\,$\mu$m silicate feature derived by \citet{Wright+etal_2002} from observations toward two Wolf-Rayet stars (WR~48a and WR~112). The extinction on these sightlines appears dominated by the diffuse ISM.} \label{fig:con_silpol}
\end{figure}
If the features in the interstellar extinction curve arise from aspherical, aligned grains, then these features should also produce polarized extinction. The polarization, or lack thereof, of interstellar extinction features therefore constrains the shape and alignment properties of dust of a specific composition.
The 9.7\,$\mu$m feature was first detected in polarization on the sightline toward the Becklin-Neugebauer (BN) Object in the Orion Molecular Cloud \citep{Dyck+etal_1973, Dyck+Beichman_1974}, with a detection made toward the Galactic Center soon after \citep{Dyck+Capps+Beichman_1974}. Subsequent observations of the BN Object have probed the frequency-dependence of the polarization, including determination of the polarization profile of the 18\,$\mu$m feature \citep{Dyck+Lonsdale_1981, Aitken+etal_1985, Aitken+Smith+Roche_1989}.
Although the BN Object is well-studied, its molecular environment does not likely typify the diffuse ISM. \citet{Smith+etal_2000} presented an atlas of spectropolarimetry for 55 sources between 8 and 13\,$\mu$m, and, for six of these, additional spectropolarimetric observations between 16 and 22\,$\mu$m. Drawing on these data, \citet{Wright+etal_2002} constructed a typical polarization profile of the 9.7\,$\mu$m silicate feature based on observations of the Wolf-Rayet stars WR~48a and WR~112. These sightlines were selected because the polarization appears dominated by interstellar absorption. However, both sightlines have H$_2$O ice features at 3.1 and 6.0\,$\mu$m \citep{Marchenko+Moffat_2017} and so may differ in detail from purely diffuse sightlines. We present this composite polarization profile in Figure~\ref{fig:con_silpol}.
We are unaware of any published sightline typifying the diffuse ISM along which both 10\,$\mu$m and optical polarimetry have been obtained. Thus, we are unable to normalize the \citet{Wright+etal_2002} polarization profile relative to our polarized extinction curve discussed in Section~\ref{sec:extpol_wav}.
\subsection{Carbonaceous Features}
\label{sec:carbon_extpol}
Unlike the silicate features, the extinction features associated with carbonaceous grains have, with few exceptions, {\it not} been detected in polarization.
The 3.4\,$\mu$m feature is the strongest of the infrared extinction features associated with carbonaceous grains (see Section~\ref{subsubsec:carbon_features_mir}), and as such it is a natural observational target for assessing whether carbonaceous grains give rise to polarized extinction. Low-resolution spectropolarimetric observations of five Galactic Center sources by \citet{Nagata+Kobayashi+Sato_1994} yielded no discernible polarization feature near 3.4\,$\mu$m, nor did high-resolution spectropolarimetric observations of GC-IRS7 by \citet{Adamson+etal_1999}. A subsequent search for the 3.4\,$\mu$m feature in polarization toward the young stellar object IRAS 18511+0146 likewise provided only upper limits \citep{Ishii+etal_2002}.
However, the 9.7\,$\mu$m silicate feature had not been measured along any of these sightlines, leading to ambiguity as to whether the lack of polarization was due to the carbonaceous grains themselves or the magnetic field geometry along the line of sight. This ambiguity was settled by \citet{Chiar+etal_2006} who performed spectropolarimetric observations along two lines of sight in the Quintuplet Cluster which had existing polarimetric measurements of the silicate feature. Finding no evidence of polarization in the 3.4\,$\mu$m feature, they concluded that the carbonaceous grains responsible for the feature are much less efficient polarizers than the silicate grains. Subsequent spectropolarimetric observations of the Seyfert\,2 galaxy NGC\,1068 yielded no detectable feature at 3.4\,$\mu$m \citep{Mason+etal_2007}, supporting the conclusions of \citet{Chiar+etal_2006} in a markedly different interstellar environment and further challenging dust models invoking grains with silicate cores with carbonaceous mantles
\citep[see discussion in][]{Li+Liang+Li_2014}. On the basis of the non-detections reported by \citet{Chiar+etal_2006}, it appears that $\Delta p_{3.4}/\Delta p_{9.7} < 0.03$.
The 2175\,\AA\ feature is a second natural candidate to examine for dichroic extinction arising from carbonaceous grains. Initial WUPPE results suggested excess polarization between 2000 and 3000\,\AA\ on several sightlines, with more detailed modeling suggesting that the excesses toward HD\,197770 and HD\,147933-4 ($\rho$ Oph A and B) did in fact arise from the 2175\,\AA\ feature \citep{Clayton+etal_1992,Wolff+etal_1997}. However, if the 2175\,\AA\ feature had the same strength relative to the continuum polarized extinction along all lines of sight, then other detections should have been made, e.g., toward HD\,161056. The sightlines toward HD\,197770 and HD\,147933-4 do not betray any unusual behavior in other respects (e.g., the wavelength dependence of the polarization, the extinction curve, etc.), leading \citet{Wolff+etal_1997} to conclude that there are sightline-to-sightline variations in the polarizing efficiency of the grains responsible for the 2175\,\AA\ feature.
It is difficult to draw definitive conclusions on the basis of two detections (and $\sim30$ non-detections), emphasizing the need for UV polarimetry of additional sightlines. Particularly now that synergy is possible with observations of FIR polarized emission, this effort promises to enhance our understanding of both grain composition and alignment.
\subsection{Maximum \texorpdfstring{$p_V/E(B-V)$}{pV/E(B-V)}}
\label{subsec:pv_ebv}
Interstellar dust grains rotate rapidly with angular momentum preferentially parallel to the local magnetic field. The short axis of each grain tends to align with the angular momentum, and hence is preferentially parallel to the magnetic field. When the line of sight is parallel to the magnetic field, grain rotation eliminates any net polarization. In contrast, the polarization is greatest when the magnetic field is in the plane of the sky. Dust models should reproduce the intrinsic polarizing efficiency of dust grains, and so we focus here on the case of maximal polarization. For dust extinction, this has typically been quantified as the maximum $V$-band polarization per unit reddening, i.e., $\left[p_V/E(B-V)\right]_{\rm max}$.
\citet{Serkowski+Mathewson+Ford_1975} used a sample of 364 stars of various spectral types to derive $\left[p_V/E(B-V)\right]_{\rm max} = 9\%$\,mag$^{-1}$. While individual stars and regions were occasionally found to have $p_V/E(B-V)$ exceeding this upper envelope \citep[e.g.,][]{Whittet+etal_1994, Skalidis+etal_2018}, it was ambiguous whether dust on these sightlines was atypical or whether the upper envelope had been underestimated. With full-sky polarimetric measurements of dust emission, the {\it Planck} satellite facilitated a detailed comparison between polarized emission in the FIR and polarized extinction in the optical, finding a remarkably linear relation between the submillimeter polarization fraction $p_S$ and $p_V/E(B-V)$ \citep[][see Section~\ref{sec:pol_opt_ir}]{Planck_Int_XXI,Planck_2018_XII}. Given this relationship, the observed $p_S \gtrsim 20\%$ in some regions implies $p_V/E(B-V) \simeq 13\%$\,mag$^{-1}$, leading \citet{Planck_2018_XII} to conclude the classic envelope of 9\%\,mag$^{-1}$ should be revised.
\citet{Panopoulou+etal_2019} employed $R$-band RoboPol observations of 22 stars in a region with $p_S \gtrsim 20\%$ to find that, indeed, the starlight was polarized in excess of $p_V/E(B-V) = 9\%$\,mag$^{-1}$, perhaps even exceeding 13\%\,mag$^{-1}$. Further, UBVRI polarimetry of six of the 22 stars indicated a typical Serkowski Law in this region, suggesting that the dust on these sightlines is not atypical.
Given these recent observational results, we require that dust models reproduce $p_V/E(B-V) = 13\%$\,mag$^{-1}$, and we normalize our polarization profile to this value.
\section{Emission}
\label{sec:irem}
In this section we review observations of emission from interstellar dust from the infrared to microwave, focusing in particular on the emission per unit H column density characteristic of typical diffuse sightlines.
\subsection{IR Emission}
\label{subsec:irem}
\begin{deluxetable}{ccc}
\tablewidth{0pc}
\tablecaption{Infrared Dust Emission Per H\label{table:ir_sed}}
\tablehead{$\nu$ & $\lambda I_\lambda/N_{\rm H}$
& $\left(\lambda P_\lambda/N_{\rm H}\right)_{\rm max}$ \\
$\left[{\rm GHz}\right]$ & $\left[{\rm erg}\,{\rm s}^{-1}\,{\rm
sr}^{-1}\,{\rm H}^{-1}\right]$ & $\left[{\rm erg}\,{\rm s}^{-1}\,{\rm
sr}^{-1}\,{\rm H}^{-1}\right]$}
\startdata
3000 & $\left(2.05\pm0.29\right)\times10^{-25}$ & \\
2140 & $\left(2.51\pm0.30\right)\times10^{-25}$ & \\
1250 & $\left(1.05\pm0.12\right)\times10^{-25}$ & \\
857 & $\left(3.49\pm0.36\right)\times10^{-26}$ & \\
545 & $\left(6.78\pm0.73\right)\times10^{-27}$ & \\
353 & $\left(1.281\pm0.015\right)\times10^{-27}$ &
$\left(2.514\pm0.030\right)\times10^{-28}$ \\
217 & $\left(1.698\pm0.016\right)\times10^{-28}$ &
$\left(3.407\pm0.054\right)\times10^{-29}$ \\
143 & $\left(2.798\pm0.038\right)\times10^{-29}$ &
$\left(5.68\pm0.10\right)\times10^{-30}$ \\
100 & $\left(6.18\pm0.13\right)\times10^{-30}$ &
$\left(1.174\pm0.031\right)\times10^{-30}$ \\
94 & $\left(4.66\pm0.22\right)\times10^{-30}$ & \\
70.4 & $\left(1.544\pm0.050\right)\times10^{-30}$ &
$\left(2.42\pm0.19\right)\times10^{-31}$ \\
61 & $\left(9.25\pm0.62\right)\times10^{-31}$ & \\
44.1 & $\left(5.08\pm0.27\right)\times10^{-31}$ &
$\left(4.06\pm0.74\right)\times10^{-32}$ \\
41 & $\left(4.66\pm0.24\right)\times10^{-31}$ & \\
33 & $\left(4.41\pm0.17\right)\times10^{-31}$ & \\
28.4 & $\left(4.25\pm0.15\right)\times10^{-31}$ & \\
23 & $\left(4.05\pm0.13\right)\times10^{-31}$ &
\enddata
\tablecomments{Adopted dust SED per H and maximum polarized SED per H for the high latitude diffuse ISM. These SEDs are based on those presented in \citet{Planck_Int_XVII}, \citet{Planck_Int_XXII}, and \citet{Planck_2018_XI} and have been color corrected (see Section~\ref{subsec:irem}).}
\end{deluxetable}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{irem}
\caption{We plot two determinations of the \ion{H}{i}-correlated dust emission from near-IR wavelengths through the microwave. In red is the SED of \citet{Dwek+etal_1997} derived from DIRBE data, and in blue the SED of \citet{Planck_Int_XVII} which employs DIRBE, WMAP, and {\it Planck} data. The horizontal errors on the DIRBE data indicate the bandpasses. We plot in gray a range of dust SEDs per $A_V$ from \citet{Planck_Int_XXIX} which we have renormalized to the hydrogen column (see text), and in green a dust SED based on correlations with the 353\,GHz emission \citep{Planck_Int_XXII}. The anomalous microwave emission (AME) is evidenced by the flattening of the SED at wavelengths $\lambda \gtrsim 6\,$mm.} \label{fig:ir_obs}
\end{figure*}
In radiation fields typical of the diffuse ISM, the bulk of the dust grains are heated to $\sim$20\,K and therefore emit thermally in the far-infrared. These wavelengths are largely inaccessible from the ground, necessitating balloon- and space-based observations.
The DIRBE and FIRAS instruments aboard the Cosmic Background Explorer (COBE) constrained the spectrum of the diffuse ISM from 3.5 to 1000\,$\mu$m. In addition to confirming the presence of PAH emission near 3.5 and 4.9\,$\mu$m, \citet{Dwek+etal_1997} derived the \ion{H}{i}-correlated SED of dust in the diffuse ISM. We plot this SED in Figure~\ref{fig:ir_obs}. We note that these data were color corrected assuming a source spectrum with constant $\lambda I_\lambda$ across the band.
Prior to the release of the {\it Planck} dust maps, several studies synthesized the existing data from COBE and WMAP to produce self-consistent dust SEDs. \citet{Paradis+etal_2011} extracted an area of the sky with $|b| > 6^\circ$ and a FIRAS 240\,$\mu$m intensity greater than 18\,MJy\,sr$^{-1}$, corresponding to a sky fraction of 13.7\%. \citet{Compiegne+etal_2011}, also seeking a composite dust SED in which the emission in each band was determined over the same region of the sky, combined DIRBE, FIRAS, and WMAP observations at high Galactic latitudes ($|b| > 15^\circ$) and low \ion{H}{i} column densities ($N_\ion{H}{i} < 5.5\times10^{20}\,{\rm cm}^{-2}$). The differences between these SEDs and that of \citet{Dwek+etal_1997} are minor at their overlapping wavelengths.
The {\it Planck} satellite made sensitive measurements of the FIR-submillimeter dust emission over the full sky. Combining the {\it Planck} data with WMAP and DIRBE and correlating with \ion{H}{i} emission measured by the Parkes 21\,cm survey, \citet{Planck_Int_XVII} constructed a mean SED of the diffuse ISM ($N_\ion{H}{i} \sim 3\times10^{20}$\,cm$^{-2}$) from infrared to microwave wavelengths, which we plot in Figure~\ref{fig:ir_obs}. Following \citet{Planck_Int_XXII}, we apply corrections of $+1.9$\%, $-2.2$\%, and $-3.5$\% to the 353, 545, and 857\,GHz bands, respectively, due to updates in the {\it Planck} bandpass determinations subsequent to the work of \citet{Planck_Int_XVII}, and an additional 1.5\% upward correction to the 353\,GHz band following \citet{Planck_2018_XI}. We color correct these data using the tables in \citet{Planck_Int_XVII} to express the SED in terms of monochromatic intensities at the reference frequencies, thus facilitating direct comparison to models.
Recently, \citet{Planck_Int_LVII} correlated the {\it Planck} 545\,GHz dust amplitude maps from the NPIPE data processing pipeline with HI4PI maps \citep{HI4pi_2016} filtered to retain only \ion{H}{i} velocities between $\pm90$\,km\,s$^{-1}$ \citep{Lenz+Hensley+Dore_2017}. They found $\lambda I_\lambda/N_{\rm H} = 7.74\times10^{-27}$ erg\,s$^{-1}$\,sr$^{-1}$\,H$^{-1}$ at 545\,GHz, slightly higher than but consistent with the value from \citet{Planck_Int_XVII} quoted in Table~\ref{table:ir_sed}.
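The two 545\,GHz emissivities can be compared directly, and converted to a specific intensity per H column; a sketch, where the only input beyond the quoted values is the conversion $1\,{\rm MJy} = 10^{-17}\,{\rm erg}\,{\rm s}^{-1}\,{\rm cm}^{-2}\,{\rm Hz}^{-1}$:

```python
# Compare the NPIPE-based 545 GHz emissivity (Planck Int. LVII) with the
# H I-correlated value of Planck Int. XVII, and convert nu*I_nu/N_H into
# a specific intensity per H column.

NU = 545.0e9    # Hz
MJY = 1.0e-17   # erg s^-1 cm^-2 Hz^-1 per MJy

npipe = 7.74e-27    # erg s^-1 sr^-1 H^-1 (Planck Int. LVII)
hi_corr = 6.78e-27  # erg s^-1 sr^-1 H^-1 (Planck Int. XVII, adopted SED table)

print(f"ratio NPIPE / H I-correlated = {npipe / hi_corr:.2f}")  # ~1.14

i_nu_per_nh = npipe / NU / MJY  # MJy sr^-1 per (H cm^-2)
print(f"I_nu/N_H at 545 GHz = {i_nu_per_nh:.2e} MJy sr^-1 cm^2")
```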
The use of \ion{H}{i} correlation to separate the Galactic dust emission from other components becomes increasingly unreliable at low frequencies where these other components, such as free-free and synchrotron, can have non-zero correlation with \ion{H}{i}. \citet{Planck_Int_XXII} derived a microwave dust SED by correlating emission in the lower frequency {\it Planck} bands with the 353\,GHz emission. We plot this SED in Figure~\ref{fig:ir_obs}. While this SED and the \ion{H}{i}-based SED of \citet{Planck_Int_XVII} agree very well from 353 to 94\,GHz, they diverge at lower frequencies.
There is evidence that the shape of the dust SED is not uniform across the sky and indeed varies systematically with the strength of the radiation field that heats the dust. \citet{Planck_Int_XXIX} explored this relationship by fitting the dust model of \citet{Draine+Li_2007} to full-sky maps of infrared dust emission. They then normalized these SEDs to the observed optical extinction based on SDSS observations of more than 250,000 quasars.
At 353\,GHz, the median SED has an intensity per $A_V$ of 0.92\,MJy\,sr$^{-1}$\,mag$^{-1}$, while \citet{Planck_Int_XVII} measured a 353\,GHz intensity per hydrogen of $3.9\times10^{-22}$\,MJy\,sr$^{-1}$\,cm$^2$\,H$^{-1}$. Taking these at face value implies $A_V/N_{\rm H} = 4.2\times10^{-22}$\,mag\,cm$^2$. In contrast, from our adopted $N_{\rm H}/E(B-V) = 8.8\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ (see Section~\ref{subsec:nh_ebv}) and $R_V = 3.1$ (see Section~\ref{subsec:ext_op}), we compute $A_V/N_{\rm H} = 3.5\times10^{-22}$\,mag\,cm$^2$. \citet{Green+etal_2018} found that the \citet{Planck_2013_XI} reddening map calibrated on SDSS quasars overpredicted stellar reddenings by a factor of $\sim1.25$ at intermediate latitudes, suggesting these discrepancies are rooted in the reddening calibration. We therefore correct the SEDs per $A_V$ of \citet{Planck_Int_XXIX} upward by 25\% when comparing them to other determinations.
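The arithmetic behind this comparison can be laid out explicitly; a sketch using only the values quoted above (variable names are illustrative):

```python
# Lay out the A_V/N_H comparison: SED-per-A_V calibration (Planck Int. XXIX)
# versus our adopted N_H/E(B-V) and R_V.

i353_per_av = 0.92     # MJy sr^-1 mag^-1    (Planck Int. XXIX, median SED)
i353_per_nh = 3.9e-22  # MJy sr^-1 cm^2 H^-1 (Planck Int. XVII)
av_nh_sed = i353_per_nh / i353_per_av   # mag cm^2

R_V = 3.1
nh_per_ebv = 8.8e21    # cm^-2 mag^-1 (Lenz et al. 2017)
av_nh_adopted = R_V / nh_per_ebv        # mag cm^2

print(f"A_V/N_H from SED calibration: {av_nh_sed:.2e} mag cm^2")      # ~4.2e-22
print(f"A_V/N_H adopted:              {av_nh_adopted:.2e} mag cm^2")  # ~3.5e-22
print(f"ratio: {av_nh_sed / av_nh_adopted:.2f}")  # ~1.20, cf. the factor ~1.25
```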
In Figure~\ref{fig:ir_obs}, we plot the range of dust SEDs over different values of the radiation field strength from \citet{Planck_Int_XXIX}. While there are expected systematic variations in the individual SEDs, the range is consistent with the other determinations within the uncertainties. The systematic variations of the dust SED with the radiation field may encode information about the evolution of dust properties in different environments \citep{Fanciullo+etal_2015}.
We adopt as a dust model constraint the dust SED of \citet{Planck_Int_XVII} based on \ion{H}{i} correlation from the 100, 140, and 240\,$\mu$m DIRBE bands, which overlap with the SED of \citet{Dwek+etal_1997}, down to the 353\,GHz {\it Planck} band. Given the known issues with \ion{H}{i} correlation at low frequencies, we adopt the SED of \citet{Planck_Int_XXII} from the {\it Planck} 217\,GHz band to the WMAP 23\,GHz band, normalizing to the measured 353\,GHz intensity per H atom derived by \citet{Planck_Int_XVII}. At the lowest frequencies, the dust emission is dominated by the anomalous microwave emission (AME), which we discuss in Section~\ref{subsec:ame}. Our adopted dust SED is presented in Table~\ref{table:ir_sed}, where we have color corrected all data to facilitate direct comparisons to models.
\subsection{Infrared Emission Features}
\label{sec:pah_emission}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{irem_pah}
\caption{In violet we plot the {\it Spitzer} IRS spectrum of the translucent cloud DCld~300.2-16.9 (B) as determined by \citet{Ingalls+etal_2011}, where we have noted the locations of rotational H$_2$ lines. In black we plot the combined {\it Spitzer} and {\it Akari} spectrum of the star-forming SBb galaxy NGC\,5992 \citep{Brown+etal_2014}, which has been corrected for starlight emission by subtraction of a 5000\,K blackbody. We also indicate the strong emission lines present in the spectrum. In red, we plot the \ion{H}{i}-correlated dust emission as seen by DIRBE \citep{Dwek+etal_1997}, which we use to normalize the PAH emission spectra.} \label{fig:pah_spectrum}
\end{figure}
The mid-IR emission from dust is characterized by prominent emission features at 3.3, 6.2, 7.7, 8.6, 11.3, 12.0, 12.7, and 13.55\,$\mu$m (see Figures~\ref{fig:ir_obs} and~\ref{fig:pah_spectrum}). First observed in the 1970s \citep[e.g.,][]{Gillett+Forrest+Merrill_1973, Merrill+Soifer+Russell_1975}, these features were subsequently identified as vibrational modes of PAHs \citep{Leger+Puget_1984, Allamandola+Tielens+Barker_1985}. As grains must be heated to quite high temperatures in order to excite these modes ($T \gtrsim 250$\,K), the carriers must be small enough to be heated through the absorption of a single photon. This process can bring small grains to temperatures in excess of 1000\,K.
The width and ubiquity of these emission features make it implausible that they are due to a single species of PAH. Rather, they represent the aggregate emission from a diverse population of PAH-like molecules. The 3.3\,$\mu$m feature, also observed in extinction (see Section~\ref{subsubsec:carbon_features_mir}), has been identified with the aromatic C-H stretching mode; C-C stretching modes account for the 6.2 and 7.7\,$\mu$m features, which have also been observed in extinction (see Section~\ref{subsubsec:carbon_features_mir}); the C-H in-plane bending mode gives rise to the 8.6\,$\mu$m feature, while the C-H out-of-plane bending mode produces the 11.3, 12.0, 12.7, and 13.5\,$\mu$m features depending on whether one, two, three, or four hydrogen atoms are adjacent to the bond, respectively. A detailed summary of the features and their corresponding modes can be found in \citet{Allamandola+Tielens+Barker_1989} and \citet{Tielens_2008}.
The strength of these features suggests that a substantial amount of interstellar carbon must reside in the grains giving rise to this emission. The dust model of \citet{Draine+Li_2007} required 4.7\% of the total dust mass to reside in PAHs with fewer than $10^3$ carbon atoms, which accounted for $\sim10$\% of the total interstellar carbon abundance.
In addition to aromatic features associated with PAHs, the aliphatic 3.4\,$\mu$m feature has also been observed in emission \citep[e.g.,][]{Geballe+etal_1985, Sloan+etal_1997}, though it is typically much weaker than the 3.3\,$\mu$m aromatic feature. Comparing the strengths of these two features and assuming the 3.4\,$\mu$m feature arises solely from aliphatic carbon, \citet{Li+Draine_2012} concluded that no more than about 10\% of the carbon in grains giving rise to these emission features can be in an aliphatic bond. However, it should be noted that anharmonicity in the aromatic 3.3\,$\mu$m C-H stretching mode may also contribute to the emission at 3.4\,$\mu$m \citep{Barker+Allamandola+Tielens_1987, Li+Draine_2012}, further reducing the abundance of the aliphatic component.
Using {\it Spitzer} IRS, \citet{Ingalls+etal_2011} made spectroscopic measurements between 5.2 and 38\,$\mu$m of several regions in the translucent cloud DCld\,300.2-16.9. In addition to detecting IR H$_2$ transitions, these measurements provide a reasonable proxy for the PAH emission in the diffuse ISM. We plot the spectrum of their sightline ``B'' in Figure~\ref{fig:pah_spectrum}, where we have noted the observed H$_2$ lines.
Combining spectroscopy from {\it Spitzer} and {\it Akari}, along with ancillary data from the UV to the IR, \citet{Brown+etal_2014} presented an atlas of 129 galaxy SEDs spanning a range of galaxy types. We focus on their 2.5--34\,$\mu$m spectrum of NGC\,5992, a star-forming SBb galaxy. To remove the continuum emission from starlight in this spectrum, we subtract a 5000\,K blackbody component. We also note the presence of some emission lines in the spectrum arising from \ion{H}{ii} regions: [\ion{Ne}{ii}] at 12.81\,$\mu$m and [\ion{S}{iii}] at 18.71 and 33.48\,$\mu$m.
In Figure~\ref{fig:pah_spectrum}, we compare the MIR spectra of DCld\,300.2-16.9 (B) and NGC\,5992, finding excellent agreement between $\simeq 5$--12\,$\mu$m. If a column density of $3.9\times10^{21}\,$cm$^{-2}$ is assumed, the bandpass-integrated SED agrees well with the \ion{H}{i}-correlated DIRBE SED of the diffuse ISM as determined by \citet{Dwek+etal_1997} (see Section~\ref{subsec:irem}). $^{12}$CO observations of this cloud suggest $N({\rm H}_2) \sim 2\times10^{21}$\,cm$^{-2}$ \citep{Ingalls+etal_2011}, and so this column density appears reasonable.
As the {\it Akari} data constrain the PAH emission in NGC\,5992 at short wavelengths, we adopt this SED as our benchmark between 3 and 12\,$\mu$m. Given the uncertainty of the starlight subtraction, we do not employ the data at wavelengths less than 3\,$\mu$m. The spectra of NGC\,5992 and DCld\,300.2-16.9 diverge beyond 12\,$\mu$m, likely due to the more intense starlight heating, and consequently higher temperature grains, in NGC\,5992. The spectrum of DCld\,300.2-16.9 is more likely to typify the diffuse ISM and is in good agreement with the shape of the DIRBE SED, and thus we adopt it as our benchmark from 12--38\,$\mu$m. However, we excise portions of the spectrum in the vicinity of the S(0), S(1), and S(2) H$_2$ rotational transitions at 28.2, 17.0, and 12.3\,$\mu$m, respectively.
In addition to the hydrocarbon features discussed above, weak mid-infrared emission features from the C-D stretching modes of deuterated aromatic and aliphatic hydrocarbons are expected near 4.5\,$\mu$m, given that in the diffuse ISM D is often substantially depleted from the gas phase \citep{Linsky+etal_2006}. Detections of such emission features have been reported \citep{Peeters+etal_2004,Doney+etal_2016}, but interpretation remains uncertain.
\subsection{Anomalous Microwave Emission}
\label{subsec:ame}
The anomalous microwave emission (AME) was discovered as a dust-correlated emission component in the microwave, both in {\it COBE} maps at 31.5, 53, and 90\,GHz \citep{Kogut+etal_1996,deOliveiraCosta+etal_1997} and in observations of the North Celestial Pole made with the Owens Valley Radio Observatory 5.5\,m telescope at 14.5 and 32\,GHz \citep{Leitch+etal_1997}. While these studies suggested free-free emission as a possible explanation, \citet{Draine+Lazarian_1998a} argued against this interpretation on energetic grounds and suggested instead that electric dipole emission from spinning ultra-small grains was the responsible mechanism. For a recent review of AME, see \citet{Dickinson+etal_2018}.
The Perseus Molecular Cloud is perhaps the best-studied AME source and the excellent frequency coverage near the AME peak helps constrain the underlying SED. It exhibits a pronounced emission peak near 30\,GHz with a sharp decline to both higher and lower frequencies \citep[see][for a compilation of low-frequency observations of Perseus]{GenovaSantos+etal_2015}.
The AME of the diffuse ISM appears systematically different from what has been observed in specific clouds. For instance, the AME SED derived from all-sky WMAP and {\it Planck} maps does {\it not} exhibit a low-frequency turnover but rather has a spectrum that appears to rise through the lowest frequency band \citep[WMAP 23\,GHz;][]{MivilleDeschenes+etal_2008, Planck_2015_X}. However, C-BASS observations in the North Celestial Pole region indicate no presence of diffuse AME at 5\,GHz \citep{Dickinson+etal_2019}. More data between 5 and 23\,GHz are needed to constrain the AME SED of the diffuse ISM, in particular its peak frequency.
The SED of dust-correlated emission derived by \citet{Planck_Int_XXII} and presented in Table~\ref{table:ir_sed} includes an AME component at microwave frequencies, as can be seen in Figure~\ref{fig:ir_obs}. However, the 353\,GHz emission is not perfectly correlated with AME in general \citep[e.g.][]{Planck_Int_XV, Hensley+Draine+Meisner_2016, Planck_2015_XXV, Dickinson+etal_2019}, and so a correlation analysis may underestimate the amount of AME relative to the submillimeter dust emission. Additionally, the other low-frequency foregrounds like free-free and synchrotron emission are also dust-correlated
\citep{Choi+Page_2015, Krachmalnicoff+etal_2018}, which may bias the shape of the derived AME SED.
Parametric component separation with the \texttt{Commander} code has yielded full-sky maps of AME \citep{Planck_2015_X} and mitigates some of the concerns with a correlation-based approach. Employing these maps over the full sky, \citet{Planck_2015_XXV} found the ratio of specific intensities $I_\nu$ of the 22.8\,GHz AME to the 100\,$\mu$m and 545\,GHz dust emission to be $\left(3.5\pm0.3\right)\times10^{-4}$ and $\left(1.0\pm0.1\right)\times10^{-3}$, respectively. When instead restricting to $|b| > 10^\circ$, consistent results are obtained to within the uncertainties. This agrees reasonably well with the \citet{Planck_Int_XXII} SED, which has corresponding ratios of $2.6\times10^{-4}$ and $1.1\times10^{-3}$, respectively.
Given this agreement, we take the SED of \citet{Planck_Int_XXII} as representative even at AME-dominated wavelengths. However, we note that the AME varies both in intrinsic strength and peak frequency from region to region \citep{Planck_Int_XV, Planck_2015_XXV}, so comparisons between dust models and an average SED should be made with care.
\subsection{Luminescence}
In addition to scattering optical light, dust grains also luminesce---emit optical photons following absorption of a higher energy photon. This can be the result of fluorescence---radiative deexcitation of the excited electronic level produced by absorption. Alternatively, internal conversion may lead to excitation of a different electronically-excited state that then deexcites radiatively, a process termed ``Poincar\'e fluorescence'' \citep{Leger+etal_1988}.
Luminescence at extreme red wavelengths (6000--8000\,\AA, corresponding to $1.5\lesssim h\nu\lesssim 2.1$\,eV) has been observed in a number of reflection nebulae, including the well-studied objects NGC\,2023 and NGC\,7023 \citep{Witt+Boroson_1990}. Because the emission is spatially extended, it is referred to as ``extended red emission'' \citep[ERE;][]{Witt+Schild+Kraiman_1984}. ERE is also seen in some planetary nebulae \citep{Furton+Witt_1990, Furton+Witt_1992}, and in some unusual systems such as the Red Rectangle \citep{Cohen+etal_1975,Schmidt+Cohen+Margon_1980}, where it was first discovered. The dust in reflection nebulae is presumed to be interstellar dust that happens to be illuminated by a nearby star, and so we expect ERE to be a property of the general interstellar dust population.
ERE is present in carbon-rich planetary nebulae, but has not been observed in oxygen-rich planetary nebulae. This strongly suggests that carbonaceous material is responsible for the ERE \citep{Witt+Vijh_2004}. In reflection nebulae, ERE is seen only when the exciting star has $T_{\rm eff}>10,000$\,K, hot enough to provide ample far-UV radiation \citep{Darbon+Perrin+Sivan_1999}. From the spatial distribution in IC59 and IC63, \citet{Lai+Witt+Crawford_2017} argue that ERE is excited by $11<h\nu<13.6$\,eV far-UV photons. Observed ERE intensities in reflection nebulae indicate overall photon conversion efficiencies (ERE photons emitted per UV photon absorbed) of $\lesssim 1\%$ \citep{Smith+Witt_2002}.
A number of authors have reported detection of the ERE from dust in Galactic cirrus clouds in the general ISM \citep{Guhathakurta+Tyson_1989,Szomoru+Guhathakurta_1998, Gordon+Witt+Friedmann_1998,Witt+etal_2008}. \citet{Gordon+Witt+Friedmann_1998} estimated the ERE emissivity to be
\begin{equation}
\frac{\rm ERE~photons}{\rm s~H~atom} = 5.65\times10^{-14}
\end{equation}
with a required quantum yield $10\pm3\%$ if the ERE is excited by absorbed photons in the $2.25$--$13.6$\,eV range. While certain materials do indeed have high quantum efficiencies \citep[e.g., multilayer structures of SiO$_{0.9}$/SiO$_2$ luminesce at $\sim0.9\,\mu$m with a quantum yield $\sim45\%$;][]{Valenta+etal_2019}, an overall yield of 10\% would strongly constrain candidate grain materials.
Furthermore, if the ERE is actually primarily excited by 11--13.6\,eV photons, as concluded by \citet{Lai+Witt+Crawford_2017}, then the ERE intensities reported by \citet{Gordon+Witt+Friedmann_1998} would require an overall quantum yield approaching 100\%. This would require that (1) the ERE must originate from a major grain component, one accounting for a substantial fraction of the far-UV absorption, and (2) this component must have a quantum efficiency of order 100\% for emitting an ERE photon following a FUV absorption. We are not aware of any candidate grain materials that could meet this requirement while remaining consistent with elemental abundance constraints and the observed extinction properties of interstellar dust.
On the other hand, measurement of the 4000--9000\,\AA\ spectrum of the diffuse Galactic light using SDSS blank sky spectra found that the shape of the diffuse light spectrum was consistent with the scattered light expected for standard grain models \citep{Brandt+Draine_2012}. \citet{Brandt+Draine_2012} estimated that no more than $\sim10\%$ of the dust-correlated diffuse light at $\sim6500\,$\AA\ could be ERE. This upper limit is inconsistent with the claimed detections toward individual cirrus clouds \citep{Guhathakurta+Tyson_1989,Szomoru+Guhathakurta_1998, Gordon+Witt+Friedmann_1998}. Additional observations will be needed to resolve this conflict.
We will assume that dust in both reflection nebulae and the general ISM produces ERE when illuminated by 11\,eV $\lesssim h\nu<13.6$\,eV photons, with an overall photon conversion efficiency $\sim1\%$ as seen in bright reflection nebulae. This conversion efficiency could either be the result of a low conversion efficiency for a major dust component or high conversion efficiency emission from a minor dust component (e.g., elements of the PAH population).
In addition to the ERE, there is evidence for luminescence in the blue, peaking near $\sim3750\,$\AA, in the Red Rectangle \citep{Vijh+Witt+Gordon_2004} and in four reflection nebulae \citep{Vijh+Witt+Gordon_2005}. \citet{Vijh+Witt+Gordon_2004} suggested that the emission is fluorescence in small, neutral PAHs, containing 3--4 rings, such as anthracene (C$_{14}$H$_{10}$) and pyrene (C$_{16}$H$_{10}$). It is not clear what abundance would be required to account for the blue luminescence.
\section{Polarized Emission}
\label{sec:ir_pol}
In this section we review observations of polarized infrared emission from interstellar dust and its connection to the observed polarized extinction.
\subsection{Infrared Emission}
\label{subsec:pir}
Just as aligned, aspherical grains polarize the starlight they absorb, the infrared emission from this same population of grains will be polarized.
The balloon-borne Archeops experiment \citep{Benoit_2002} provided a first look at polarized dust emission from the diffuse ISM in the Galactic plane. The 353\,GHz observations indicated polarization fractions of 4--5\%, with values exceeding 10\% in some clouds \citep{Benoit+etal_2004}, suggesting substantial alignment of the grains providing the submillimeter emission.
WMAP produced full-sky polarized intensity maps from 23--93\,GHz. Utilizing the final 9-year WMAP data, \citet{Bennett+etal_2013} found that the polarized dust emission $P_\nu$ in the WMAP bands is well-fit by a power law $P_{\nu} \propto \nu^{2+\beta}$ with $\beta = 1.44$.
With polarimetric observations extending from 30 to 353\,GHz, the {\it Planck} satellite provided unprecedented constraints on the frequency dependence of the polarized emission. \citet{Planck_Int_XXII} found that the full-sky average of the polarized intensity of the dust emission from 100 to 353\,GHz is consistent with a modified blackbody having power law opacity $\kappa_\nu \propto \nu^\beta$ with $\beta = 1.59\pm0.02$ in contrast with $\beta = 1.51\pm0.01$ for total intensity over the same frequency range. This would imply a decrease in the polarization fraction between 353 and 70.4\,GHz with a significance greater than 3$\sigma$.
Subsequently, \citet{Planck_2018_XI} employed the \texttt{SMICA} component separation algorithm to derive a global polarized dust SED. Despite making no assumptions on the parametric form of the dust SED, they found excellent agreement with a modified blackbody having $T_d = 19.6$\,K and $\beta = 1.53\pm0.02$. Following updates to the {\it Planck} photometric calibration, they revised the $\beta$ for total intensity to 1.48. With these changes, the $\beta$ determined for intensity and polarization are the same within $2\sigma$. In Figure~\ref{fig:ir_pol}, we plot the polarized dust SED of \citet{Planck_2018_XI} and adopt it as a model constraint. We discuss the normalization of this SED to the hydrogen column in Section~\ref{sec:pol_opt_ir}.
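To make the fitted form concrete, the modified blackbody $I_\nu \propto \nu^\beta B_\nu(T_d)$ with the quoted $\beta = 1.53$ and $T_d = 19.6$\,K can be evaluated numerically. The sketch below is our own illustration (the normalization is arbitrary and the choice of the 353 and 143\,GHz bands for comparison is ours):

```python
import math

def modified_blackbody(nu_ghz, beta=1.53, T_d=19.6):
    """Modified blackbody I_nu ~ nu^beta B_nu(T_d), arbitrary normalization."""
    h = 6.62607015e-34   # Planck constant [J s]
    k = 1.380649e-23     # Boltzmann constant [J/K]
    c = 2.99792458e8     # speed of light [m/s]
    nu = nu_ghz * 1e9    # [Hz]
    B_nu = (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T_d))
    return nu**beta * B_nu

# Predicted ratio of polarized dust intensities between two Planck bands:
ratio = modified_blackbody(353.0) / modified_blackbody(143.0)
print(ratio)  # ~18.3
```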
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ipol}
\caption{The submillimeter polarized dust SED as determined from the \texttt{SMICA} component separation algorithm on {\it Planck} polarization data \citep{Planck_2018_XI}. The normalization of this SED to the hydrogen column is discussed in Section~\ref{sec:pol_opt_ir}.} \label{fig:ir_pol}
\end{figure}
\begin{deluxetable}{cc}
\tablewidth{0pc}
\tablecaption{Dust Polarization Fraction\label{table:pfrac}}
\tablehead{\colhead{$\lambda$} & \colhead{\hspace{2cm}$p\left(\nu\right)/p_{353}$}\hspace{2cm}\\
$\left[\mu{\rm m}\right]$ & }
\startdata
250 & $1.00\pm0.09$ \\
350 & $1.06\pm0.11$ \\
500 & $0.89\pm0.09$ \\
850 & 1. \\
1382 & $1.02\pm0.03$ \\
2100 & $1.03\pm0.03$ \\
3000 & $0.98\pm0.04$ \\
4260 & $0.80\pm0.07$ \\
6800 & $0.13\pm0.03$
\enddata
\tablecomments{The dust polarization fraction relative to 850\,$\mu$m (353\,GHz) as determined from BLASTPol observations in the Vela Molecular Ridge \citep{Ashton+etal_2018} and the {\it Planck} total and polarized dust SEDs \citep{Planck_Int_XVII, Planck_Int_XXII, Planck_2018_XI} compiled in Table~\ref{table:ir_sed}.}
\end{deluxetable}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pfrac}
\caption{The polarization fraction of the dust emission relative to the 850\,$\mu$m (353\,GHz) polarization fraction as determined by BLASTPol and {\it Planck} observations. The BLASTPol data are from observations in the Vela Molecular Ridge \citep{Ashton+etal_2018} while the {\it Planck} data are based on the total and polarized dust SEDs \citep{Planck_Int_XVII,Planck_Int_XXII, Planck_2018_XI} compiled in Table~\ref{table:ir_sed}. Little wavelength dependence is observed except at the longest wavelengths where AME becomes a significant fraction of the total dust emission.} \label{fig:pfrac}
\end{figure}
In addition to these large-scale observations of the diffuse ISM, polarimetric observations of dense clouds have also shed light on the FIR polarization properties of interstellar dust. In star-forming molecular clouds, the degree of polarization has been observed to fall from 60 to 350\,$\mu$m, then rise from 350 to 450\,$\mu$m \citep{Vaillancourt_2002, Vaillancourt+etal_2008}. This behavior can potentially be explained by correlated variations in the dust temperature and alignment efficiency in different regions along the line of sight, as might be expected in star-forming dense clouds.
However, BLASTPol observations of the Vela~C molecular cloud region and the Carina Nebula have revealed very little ($\lesssim 10\%$) evolution in the dust polarization fraction between 250 and 850\,$\mu$m \citep{Gandilo+etal_2016, Ashton+etal_2018, Shariff+etal_2019}. In particular, \citet{Ashton+etal_2018} studied translucent sightlines in the Vela Molecular Ridge, which are more likely to resemble the diffuse ISM than are the observations at higher column densities. We present their determination of the dust polarization fraction, normalized to unity at 353\,GHz, in Table~\ref{table:pfrac} alongside the polarization fractions implied by the polarized dust SED of \citet{Planck_2018_XI} presented in Table~\ref{table:ir_sed}.
Taken together, the {\it Planck} and BLASTPol results suggest a roughly constant dust polarization fraction between 250\,$\mu$m and 3\,mm, as shown in Figure~\ref{fig:pfrac}.
Polarization in the mid-infrared dust emission features is generally not expected due to the small sizes of the grains able to emit at these wavelengths. However, a detection of polarization in the 11.3\,$\mu$m PAH feature has been reported in the nebula associated with the Herbig~Be star MWC~1080 \citep{Zhang+etal_2017}. If the polarization is indeed resulting from aligned PAHs, this may have implications for the theory of alignment of ultrasmall grains and thus predictions of AME polarization \citep{Draine+Hensley_2016,Hoang+Lazarian_2018}. However, it is not clear that either the dust properties or physical conditions in this region are likely to typify the diffuse ISM, and so we do not employ this result as a dust model constraint.
\subsection{Connection to Optical Polarization}
\label{sec:pol_opt_ir}
Because the same grains are believed to provide both polarized extinction in the optical and polarized emission in the infrared, it is expected that these quantities should be tightly related. Indeed, the polarization fraction of the 353\,GHz submillimeter emission $p_S$ divided by the $V$-band polarization per optical depth $p_V/\tau_V$ has a characteristic value between 4 and 5 over a range of column densities \citep[$N_{\rm H} \lesssim 5\times10^{21}\,$cm$^{-2}$, ][]{Planck_Int_XXI, Planck_2018_XII}. We adopt the best-fit value of 4.31 over diffuse sightlines \citep{Planck_2018_XII} as representative of dust in the diffuse ISM.
These relations between the polarized extinction and polarized emission from interstellar dust allow us to normalize the polarized dust SED derived by \citet{Planck_2018_XI} (see Figure~\ref{fig:ir_pol}) to the hydrogen column. First, $p_S/\left(p_V/\tau_V\right) = 4.31$, $\left[p_V/E(B-V)\right]_{\rm max} = 0.13$, and $R_V = 3.1$ together imply a maximum 353\,GHz polarization fraction of 19.6\%, agreeing well with the observed maximum of $p_S = 22^{+3.5}_{-1.4}$\% \citep{Planck_2018_XII}. Applying this polarization fraction to the adopted 353\,GHz dust emission per H (see Table~\ref{table:ir_sed}) yields a maximum 353\,GHz polarized dust emission per H of $2.51\times10^{-28}$\,erg\,s$^{-1}$\,sr$^{-1}$\,H$^{-1}$. The polarized dust SED of \citet{Planck_2018_XI}, which is normalized to unity at 353\,GHz, can then be used to compute the maximum polarized dust emission per H at lower frequencies, as presented in Table~\ref{table:ir_sed}. We have color corrected all values, including corrections both at the observed frequency and at 353\,GHz, to obtain monochromatic spectral energy densities which can be compared directly to models.
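The first step of this chain is simple arithmetic and can be verified directly; a minimal sketch using only the adopted values above (and $\tau_V = A_V/1.086$):

```python
# Maximum 353 GHz polarization fraction implied by the adopted optical values.
pS_over_pVtauV = 4.31    # p_S / (p_V / tau_V)  (Planck 2018 XII)
pV_per_EBV_max = 0.13    # [p_V / E(B-V)]_max  [mag^-1]
R_V = 3.1

# tau_V = A_V / 1.086 = R_V E(B-V) / 1.086, so
# (p_V / tau_V)_max = [p_V / E(B-V)]_max * 1.086 / R_V
pV_over_tauV_max = pV_per_EBV_max * 1.086 / R_V
p353_max = pS_over_pVtauV * pV_over_tauV_max
print(p353_max)  # ~0.196, i.e., the quoted 19.6%
```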
\citet{Planck_Int_XXI} also introduced the ratio of the 353\,GHz polarized intensity to the degree of $V$-band polarization, i.e., $P_{353}/p_V$. They found a characteristic value of $5.4\pm0.5$\,MJy\,sr$^{-1}$ on translucent sightlines. \citet{Planck_2018_XII} extended this analysis to diffuse lines of sight, finding a characteristic ratio of $5.42\pm0.05$\,MJy\,sr$^{-1}$ with a systematic decrease to roughly 5\,MJy\,sr$^{-1}$ at the lowest ($\lesssim1\times10^{20}$\,cm$^{-2}$) column densities observed. This ratio is not independent of values we have already adopted:
\begin{equation}
\frac{P_{353}}{p_V} = \frac{1.086}{R_V}\,\frac{N_{\rm H}}{E(B-V)}\,\frac{I_{353}}{N_{\rm H}}\,\frac{p_S}{p_V/\tau_V} = 4.8\,{\rm MJy}\,{\rm sr}^{-1}
~~~,
\end{equation}
which is consistent with observations at low column densities. We note, however, that the highly polarized region studied by \citet{Panopoulou+etal_2019} has $P_{353}/p_V = 4.1\pm0.1$\,MJy\,sr$^{-1}$, significantly lower than these values. Further study of this ratio and its variability across the sky is needed to understand this apparent discrepancy.
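The equation above can also be inverted to see what 353\,GHz intensity per H the quoted 4.8\,MJy\,sr$^{-1}$ corresponds to; a minimal sketch (the implied value sits slightly below the raw $3.9\times10^{-22}$ figure of \citet{Planck_Int_XVII}, presumably reflecting the color corrections applied to the adopted SED):

```python
# Invert P_353/p_V = (1.086/R_V) (N_H/E(B-V)) (I_353/N_H) [p_S/(p_V/tau_V)]
# for the implied 353 GHz intensity per H atom.
R_V = 3.1
NH_per_EBV = 8.8e21       # cm^-2 mag^-1
pS_over_pVtauV = 4.31
P353_over_pV = 4.8        # MJy sr^-1

prefactor = (1.086 / R_V) * NH_per_EBV * pS_over_pVtauV   # cm^-2
I353_per_NH = P353_over_pV / prefactor                    # MJy sr^-1 cm^2 H^-1
print(I353_per_NH)  # ~3.6e-22
```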
Performing a similar comparison between optical and submillimeter polarization in the Vela~C molecular cloud, \citet{Santos+etal_2017} related the 500\,$\mu$m polarization fraction $p_{500}$, $I$ band polarized extinction $p_I$, and $V$ band total extinction, finding a characteristic $p_{500}/\left(p_I/\tau_V\right) = 2.4\pm0.8$. For a typical Serkowski Law (see Section~\ref{sec:extpol_wav}), $p_V$ and $p_I$ differ by only about 10\%. If the FIR polarization fraction is relatively flat between 500 and 850\,$\mu$m (see Figure~\ref{fig:pfrac}), then $p_{500}/\left(p_I/\tau_V\right)$ should be approximately 10\% larger than $p_S/\left(p_V/\tau_V\right)$, which has characteristic value 4.31 \citep{Planck_2018_XII}. This apparent discrepancy may be due to the very different environments sampled by these observations, but given the importance of this ratio in constraining models, further investigation is warranted.
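The roughly 10\% difference between $p_V$ and $p_I$ quoted above follows directly from the Serkowski Law; a minimal sketch, assuming the parameters $K = 0.87$ and $\lambda_{\rm max} = 0.55\,\mu$m adopted elsewhere in this review and taking an effective $I$-band wavelength of $0.80\,\mu$m (our assumption):

```python
import math

def serkowski(lam_um, p_max=1.0, K=0.87, lam_max=0.55):
    """Serkowski Law: p(lam) = p_max * exp(-K * ln^2(lam_max / lam))."""
    return p_max * math.exp(-K * math.log(lam_max / lam_um) ** 2)

p_V = serkowski(0.55)   # V band sits at the peak by construction
p_I = serkowski(0.80)   # assumed I-band effective wavelength [um]
print(p_I / p_V)        # ~0.89, i.e., p_I about 10% below p_V
```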
\subsection{AME Polarization}
\label{subsec:ame_pol}
If the AME arises from aligned, aspherical grains, then it too will be polarized. However, searches for polarized AME have thus far yielded only upper limits at the $\sim1$\% level \citep{Dickinson+Peel+Vidal_2011,Macellari+etal_2011,GenovaSantos+etal_2015, Planck_2015_XXV,GenovaSantos+etal_2017}. See \citet{Dickinson+etal_2018} for a recent review.
To the extent that the smallest interstellar grains produce AME through rotational electric dipole radiation, the amount of AME polarization depends on how well these grains are able to align with the local magnetic field. The lack of polarization in the UV extinction curve (see Section~\ref{sec:extpol}) despite strong total extinction in the UV (see Section~\ref{subsec:ext_uv}) suggests that these grains are poorly aligned. However, \citet{Hoang+Lazarian+Martin_2013} demonstrated that if aligned PAHs were responsible for the claimed detections of polarization in the 2175\,\AA\ feature and also produced the AME, then the AME should be polarized at the $\lesssim 1\%$ level, near current upper limits. On the other hand, \citet{Draine+Hensley_2016} argued that quantization of the vibrational energy levels in ultrasmall grains leads to exponential suppression of their alignment, resulting in negligible AME polarization.
\section{Summary and Discussion}
\label{sec:summary}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{extcrv_summary}
\includegraphics[width=\textwidth]{irem_summary}
\caption{In the top panel, we plot our adopted constraints on the total (black) and polarized (red) extinction from dust in the diffuse ISM. In the bottom panel, we plot our adopted constraints on the total (black) and polarized (red) emission from interstellar dust. Note that for both polarized extinction and emission, we show the maximum level of polarization, corresponding to the interstellar magnetic field lying in the plane of the sky. We have made use of the values in Table~\ref{table:constant_summary} where necessary to normalize the observational data to the hydrogen column. The data underlying the FIR emission constraints, including uncertainties, are presented in Table~\ref{table:ir_sed}. A summary of the adopted constraints is given in Section~\ref{sec:summary}. These curves will be made available in tabular form upon publication. The extrapolation of the extinction curve to FIR wavelengths can be found in \citet{Hensley+Draine_2020}.} \label{fig:constraints}
\end{figure*}
\begin{deluxetable*}{ccc}
\tablewidth{0pc}
\tablecaption{Adopted Values of Select Quantities for the
Diffuse ISM\label{table:constant_summary}}
\tablehead{\multicolumn{3}{c}{Reference Quantities}}
\startdata
Quantity & Value &
Reference \\ \hline
$A(5500\,\text{\AA})/E(4400\,\text{\AA}-5500\,\text{\AA})$ & 3.02 & \citet{Fitzpatrick+etal_2019} \\
$A_H/A_{K_s}$ & 1.55 & \citet{Indebetouw+etal_2005} \\
$N_H/E(B-V)$ & $8.8\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ &
\citet{Lenz+Hensley+Dore_2017} \\
$\left[p_V/E(B-V)\right]_{\rm max}$ & 0.13\,mag$^{-1}$ &
\citet{Planck_2018_XII} \\
$p_{353}/\left(p_V/\tau_V\right)$ & 4.31 & \citet{Planck_2018_XII} \\
\hline \hline
\multicolumn{3}{c}{Derived Quantities} \\ \hline
Quantity & Value & Reference\\ \hline
$R_V$ & 3.1 & \citet{Fitzpatrick+etal_2019} \\
$A_V/N_{\rm H}$ & $3.5\times10^{-22}$\,mag\,cm$^2$ & \\
$p_{353}^{\rm max}$ & 19.6\% & \\
$P_{353}/p_V$ & 4.8\,MJy\,sr$^{-1}$ &
\enddata
\end{deluxetable*}
Based on the foregoing discussion, we argue that the following data represent the current state of observations that constrain models of interstellar dust, and so a successful model of interstellar dust in the diffuse ISM should be measured against its consistency with these data. We also present a table of constants (Table~\ref{table:constant_summary}) based on observational data that enables the translation of these observables into constraints on the material properties of dust.
\begin{itemize}
\item {\bf Abundances}: Our adopted ISM abundances, given in Table~\ref{table:solid_abundance}, are based on Solar reference abundances \citep{Asplund+etal_2009,Scott+etal_2015b,Scott+etal_2015a} corrected for diffusion \citep{Turcotte+Wimmer-Schweingruber_2002} and with chemical enrichment \citep{Chiappini+Romano+Matteucci_2003, Bedell+etal_2018}. Solid phase abundances are then derived based on the gas phase abundances determined by \citet{Jenkins_2009} for sightlines with moderate depletion ($F_* = 0.5$).
\item {\bf Extinction}: We synthesize the extinction curves of \citet{Gordon+etal_2009} and \citet{Cardelli+Clayton+Mathis_1989} in the FUV, which we join to that of \citet{Fitzpatrick+etal_2019} in the UV through the optical. From 0.55 to 2.2\,$\mu$m, we employ the extinction curve of \citet{Schlafly+etal_2016} assuming $A_H/A_{K_s} = 1.55$ \citep{Indebetouw+etal_2005}. From 2.2 to 37\,$\mu$m, we adopt the MIR extinction curve derived by \citet{Hensley+Draine_2020} on the sightline toward Cyg\,OB2-12. Finally, we normalize this composite curve to the hydrogen column via $N_{\rm H}/E(B-V) = 8.8\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ \citep{Lenz+Hensley+Dore_2017}.
\item {\bf Polarized Extinction}: Between 0.12 and 4\,$\mu$m, we join a Serkowski Law with parameters $K = 0.87$ and $\lambda_{\rm max} = 0.55$\,$\mu$m \citep{Whittet_2003} smoothly to a power law with index $\beta = 1.6$ in the IR \citep{Martin+etal_1992}. We normalize this curve to a maximum starlight polarization of $p_V/E(B-V) = 0.13$\,mag$^{-1}$ \citep{Planck_2018_XII, Panopoulou+etal_2019}.
\item {\bf Emission}: In the MIR, we adopt the {\it Akari} and {\it Spitzer} spectrum of the star-forming galaxy NGC\,5992 \citep{Brown+etal_2014} between 3 and 12\,$\mu$m and the {\it Spitzer} IRS observations of the translucent cloud DCld 300.2-16.9 (B) \citep{Ingalls+etal_2011} between 12 and 38\,$\mu$m. The composite spectrum is scaled to the hydrogen column to match observations of diffuse Galactic emission in the DIRBE bands \citep{Dwek+etal_1997}. In the FIR, we adopt the \ion{H}{i}-correlated dust emission measured in the DIRBE and {\it Planck} bands with $\nu \ge 353$\,GHz \citep{Planck_Int_XVII}, and the 353\,GHz-correlated emission measured in the lower frequency {\it Planck} and WMAP bands by \citet{Planck_Int_XXII}.
\item {\bf Polarized Emission}: We adopt the frequency dependence of the polarized infrared emission determined by \citet{Planck_2018_XI} scaled to match the relation between polarized extinction and emission derived by \citet{Planck_2018_XII}.
\end{itemize}
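As a quick numerical illustration of the normalizations quoted above, the hydrogen-column and maximum-polarization relations can be applied directly to a sightline reddening; the $E(B-V)$ value used below is a hypothetical example, not one adopted in this work:

```python
# Illustrative conversions using the normalizations quoted above:
# N_H / E(B-V) = 8.8e21 cm^-2 mag^-1 (Lenz, Hensley & Dore 2017) and
# p_V / E(B-V) = 0.13 mag^-1 (maximum starlight polarization).
def column_density(ebv_mag):
    """Hydrogen column density N_H [cm^-2] for a reddening E(B-V) [mag]."""
    return 8.8e21 * ebv_mag

def max_polarization(ebv_mag):
    """Maximum fractional starlight polarization p_V for E(B-V) [mag]."""
    return 0.13 * ebv_mag

# Example: a diffuse sightline with E(B-V) = 0.05 mag (hypothetical value)
ebv = 0.05
print(f"N_H = {column_density(ebv):.2e} cm^-2")   # 4.40e+20 cm^-2
print(f"p_V = {max_polarization(ebv):.4f}")       # 0.0065
```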
These constraints are summarized visually in Figure~\ref{fig:constraints}, which illustrates the impressive breadth of our current knowledge, spanning a large dynamic range in wavelength, magnitudes of extinction, and intensity, and highlights the most pressing needs for augmenting the state of the art. We close by highlighting a few such future directions of key importance for dust modeling.
The spectroscopic features in extinction, emission, and polarization are the ``fingerprints'' of the specific materials that constitute interstellar grains, enabling determination of their chemical makeup. The Near InfraRed spectrograph \citep[NIRSpec, 0.6--5\,$\mu$m;][]{NIRSpec} and Mid-Infrared Instrument \citep[MIRI, 5--28\,$\mu$m;][]{MIRI} aboard the {\it James Webb Space Telescope} ({\it JWST}) will characterize the NIR and MIR spectroscopic dust features in unprecedented detail. Observing the full sky between 0.75 and 5\,$\mu$m with a resolving power of up to 130, SPHEREx \citep{SPHEREx} will enable mapping of the strength of dust absorption and emission features and thus probe their variation with location in the Galaxy. The high spectral resolution of the XRISM \citep{XRISM} and Athena \citep{Athena} X-ray observatories promises to reveal the mineralogical composition of interstellar grains in ways complementary to what can be gleaned from the infrared features.
As the 3.4\,$\mu$m complex has been observed on very few sightlines that might typify the diffuse ISM, a number of questions can be addressed by more sensitive observations. Is it indeed generic of the diffuse ISM that the 3.3\,$\mu$m aromatic feature is substantially broader in absorption than emission? To what extent does diamond-like carbon contribute emission and absorption in the 3.47\,$\mu$m feature? How does the 3.4\,$\mu$m profile change systematically with interstellar environment?
The 6.2 and 7.7\,$\mu$m aromatic features have been observed in absorption, but on few sightlines. Detailed characterization of these features, particularly comparison of the emission and absorption profiles, will clarify which grains are the carriers of aromatic material in the ISM. The aromatic features at still longer wavelengths have not been observed in absorption, making them a compelling target for {\it JWST} and an important constraint on PAH models.
While the aliphatic 6.85\,$\mu$m feature appears generic to the diffuse ISM on the basis of its detection in absorption toward Cyg\,OB2--12, the ubiquity of the 7.25\,$\mu$m methylene feature is less clear. Characterization of these aliphatic absorption features and their strengths relative to the aromatic features is a relatively unexplored window into the hydrocarbon chemistry of the ISM which {\it JWST} will enable. Likewise, the deuterated counterparts of both the aliphatic and aromatic features, inaccessible from the ground, will be accessible to {\it JWST} and SPHEREx in emission and absorption.
The sensitivity of MIRI will enable searches for as-yet undetected spectroscopic features and will characterize in greater detail those already observed. The silicate features can be probed for trace amounts of crystallinity, and the detection of crystalline forsterite can be verified on many more sightlines. Dedicated searches can be undertaken for the 11.2\,$\mu$m SiC feature and the 11.53\,$\mu$m feature from polycrystalline graphite, perhaps finally confirming or ruling out graphite as a major constituent of interstellar dust.
In the NIR, NIRSpec can characterize the many DIBs found longward of 600\,nm and perform sensitive searches for new ones. Likewise, the presence or absence of predicted features at 1.05 and 1.26\,$\mu$m from ionized PAHs can be strongly constrained.
While we anticipate advances in infrared spectroscopy, it is unfortunate that this is not the case for infrared spectropolarimetry. Polarimetry is a powerful complementary constraint on the properties of interstellar dust, particularly given the dichotomy observed in polarization between carbonaceous and silicate features. Additionally, the profiles of the spectroscopic features in extinction and polarization generically differ because each depends differently on the optical constants, and so measurement of both strongly constrains grain material properties. Additional spectropolarimetric measurements of the 9.7 and 18\,$\mu$m silicate features and the 3.4\,$\mu$m carbonaceous feature are desperately needed. In addition, the continuum polarization between 4--8\,$\mu$m is poorly determined. Unfortunately, we are unaware of any operational or planned facilities capable of spectropolarimetry or even broadband polarimetry between 3 and 8\,$\mu$m. However, new polarimetric measurements of the 9.7\,$\mu$m silicate feature are possible with CanariCam \citep{CanariCam}.
Stellar optical polarimetry, on the other hand, will be pushed to high latitude, diffuse sightlines in the 2020s with the PASIPHAE survey \citep{PASIPHAE}. With a many-fold expansion of stellar polarization catalogues, new insights will be gained in the variations in the polarized extinction curve throughout the Galaxy, including its connection to polarized infrared emission.
Because of the role of dust polarization in mapping magnetic fields and as a contaminant for Cosmic Microwave Background (CMB) polarization science, the prospects are better for studies of polarized emission. Of critical importance from the perspective of dust modeling is extending coverage of the polarized dust SED to higher frequencies on sightlines that might typify the diffuse ISM. Measuring the wavelength-dependence of polarization near the peak of the dust SED will allow the contributions from different dust populations to be more efficiently disentangled. At even shorter wavelengths, we expect emission to be dominated by smaller, unaligned grains. While such measurements are already possible on dense sightlines using instruments like HAWC+ aboard SOFIA \citep{HAWC+}, the greater sensitivity afforded by upcoming facilities like CCAT-prime \citep[$260 \leq \nu/{\rm GHz} \leq 860$;][]{CCAT-prime} is required to access diffuse sightlines. However, we are unaware of upcoming facilities that can perform polarimetry on the Wien side of the dust SED along diffuse sightlines.
Particularly given the uncertainties in the level of polarization in the AME and the abundance of material able to emit microwave magnetic dipole radiation, extension of the determination of the polarized dust SED to lower frequencies is also of great interest. Such measurements will be made by upcoming CMB experiments such as the Simons Observatory \citep{SimonsObservatory}, LiteBIRD \citep{LiteBIRD}, and CMB-S4 \citep{CMB-S4}, all of which have the sensitivity to characterize dust emission on the diffuse, high latitude sightlines of greatest interest to this work.
These directions are but a few avenues to be explored with the wealth of upcoming data and are not intended to be exhaustive. As we emphasize in this work, dust modeling should be informed by the full range of optical phenomena associated with interstellar grains, and by combining the insights gleaned from a variety of observations across the electromagnetic spectrum, we can paint the clearest picture possible of the nature of interstellar grains.
\acknowledgments
{We are grateful for the many stimulating conversations that informed this work over its long completion. We thank in particular Megan Bedell, Tuhin Ghosh, Vincent Guillet, Jim Ingalls, Ed Jenkins, Eddie Schlafly, and Chris Wright for sharing their expertise. This research was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This work was supported in part by NSF grants AST-1408723 and AST-1908123. BH acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE-0646086 during the earliest stages of this work.}
\software{Astropy \citep{Astropy,Astropy_2}, Matplotlib \citep{Matplotlib}, NumPy \citep{NumPy}, SciPy \citep{SciPy}}
\section{Introduction}
\label{intro}
The study of the behaviour of hadronic matter in the density-temperature, $(\rho, T)$, diagram allows a deeper understanding of matter under extreme conditions. In this context, the high density, low temperature limit can be addressed for a fermion system using the Landau Fermi Liquid Theory (FLT) \cite{bookFL}. From a theoretical point of view the properties of this type of normal quantum systems can be studied by calculating the interaction matrix element of quasiparticle (qp) excitations close to the Fermi surface. The inclusion of an additional component in the problem, a magnetic field $B$, allows further tests of the properties of magnetized Fermi Liquids. The role of magnetic fields in bulk properties and the equation of state has been partially analyzed in the past for nuclear matter \cite{latt} \cite{chak} and quark matter \cite{quark1} \cite{quark2}.
Due to the tiny value of the neutron magnetic moment $\mu_n=-1.9130427(5)\mu_N$ ($\mu_N=3.152 451 2326(45)$ $\times 10^{-18}$ MeV $G^{-1}$)~\cite{pdb} and in order to provide a sizable magnetization, huge magnetic fields are needed.
The only scenarios where there is indication of such intense fields are, first, the background magnetic fields estimated to arise in heavy-ion collisions like those at RHIC \cite{rhic} and, second, a subgroup of pulsars called magnetars. For these astrophysical objects surface magnetic field strengths are of the order $B \approx 10^{15}$ G \cite{thom, lazzati}. Recent numerical simulations \cite{sim} of the formation of proto-neutron stars show that the field configuration plays a significant role in the dynamics of the core if the initial magnetic field is large enough. In particular, neutrino transport is an important ingredient in the rapid cooling of the newly formed neutron-rich object \cite{cooling}. However, some of these simulations lack accurate and consistent neutrino transport, missing the impact of magnetic fields on the microphysics input that affects the dynamics of the collapsing dense objects.
In most of the existing calculations of nuclear matter (either symmetric, pure neutron or beta-equilibrated) the effect due to the presence of strong magnetic fields and the consistently induced spin polarization are discarded in a first approximation. Either relativistic \cite{prakash, chakra} or effective approaches \cite{vida1} have been used to obtain some insight into the equation of state (EOS) or some structure properties \cite{latt} in presence of magnetic fields. These include a possible transition to a ferromagnetic state, although simulations using realistic potentials seem to prevent it \cite{ferro}. In general, a non-vanishing magnetization in a low temperature nuclear plasma \cite{angprc} produces a resolution of some degenerated observables as obtained in the context of the FLT \cite{ang2, ang3}.
\section{Formalism}
\label{sec2}
In this work we are interested in the response of a spin-polarized pure neutron plasma to a weak neutrino probe. It can be seen \cite{prakash} that for the density range $\rho \le 4\rho_0$, where quark deconfinement is not expected to take place, and for magnetic fields up to $B \approx 10^{18}$ G, allowed in principle by the scalar virial theorem, the neutral system is mostly neutrons. The maximum magnetic field strength we will consider is $B^* \approx 2 \times 10^4$ (as measured in units of the electron critical field, $B^*=B/B^c_e$ with $B^c_e=4.4\,\times \,10^{13}$ G), for which the neutron fraction is $Y_n > 0.98$ \cite{prakash}. Thus the neutral plasma is mostly neutrons, but leptons and additional baryons are also present in tiny fractions that should be considered for full application in an astrophysical scenario where $\beta$-equilibrium holds.
We are interested in exploring the effect of a strong magnetic field and the spin polarization of a pure neutron plasma through the structure functions, which provide information on density and spin density in-medium correlations. The homogeneous system under study is subject to an internal magnetic field, ${\bf B}=B {\bf k}$, and populated by species with particle density $\rho_{\sigma}$, where $\sigma=\pm 1$ is the spin $z$-projection. $\Delta=\frac{\rho_+ - \rho_-}{\rho}$ is the spin excess and $\rho=\rho_+ + \rho_-$ is the total particle density. For given thermodynamical conditions $\Delta$ is obtained by minimizing the Helmholtz free energy per particle, $f(\rho, T, B,\Delta )=\epsilon-\mu_n \Delta \rho B$, where $\epsilon$ is the energy per particle. Note that parallel (antiparallel) aligned magnetic moments (spins) are energetically favoured.
We have considered an effective approach to describe the nuclear interaction using zero-range Skyrme forces~\cite{vautherin} with two of the most widely used parametrizations given by the Lyon group SLy4 and SLy7~\cite{chabanat1,chabanat2} and finite range Gogny with D1P \cite{d1p} and D1S \cite{d1s} forces. All of them provide good values for binding of nuclei and also for neutron matter EOS.
In the context of the FLT the properties of non-magnetized systems at low temperature have been evaluated \cite{plbbackman} by calculating the qp matrix element around the Fermi surface, where the only dependence is on the fermionic densities and the qp scattering angle, $\theta$. In the usual formalism, for the non-magnetized case the qp matrix element is written as a multipolar expansion in Legendre polynomials,
\begin{equation}
V_{ph}=\sum_{l=0}^{\infty} \big [ f_l + g_l \, \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 \big ] P_l (\cos\theta) ,
\label{elemento}
\end{equation}
$f_l$ and $g_l$ are the so-called Landau parameters of multipolarity $l$. In the more general case where the two possible spin orientations $(\sigma ,\sigma')$ are taken into account, the polarized qp matrix elements \cite{notes} \cite{ang2} are a crucial ingredient to compute the response functions to a weakly interacting neutrino probe that excites a collective mode $(\omega, q)$ under the presence of a magnetic field $B$. The Lindhard function of the system, $\chi^{(\sigma ,\sigma')}(\omega, q)$, satisfies the Bethe-Salpeter equation and can be written in the dipolar ($l \le 1$) case in the random phase approximation (RPA) as a coupled system,
\begin{eqnarray}
\chi^{(\sigma ,\sigma')} &=& \chi_0^{(\sigma)} \delta(\sigma, \sigma') +
\chi_0^{(\sigma)} \sum_{\sigma''=+,-} f_0^{(\sigma \sigma'')}
\chi^{(\sigma'' \sigma')}\nonumber \\
&& + \gamma_1^{(\sigma)} \sum_{\sigma''=+,-} f_1^{(\sigma, \sigma'')}
\Gamma^{(\sigma'', \sigma')},
\label{chil1}
\end{eqnarray}
\begin{eqnarray}
\Gamma^{(\sigma ,\sigma')} &=& \gamma_1^{(\sigma)} \delta(\sigma, \sigma')
+ \gamma_1^{(\sigma)} \sum_{\sigma''=+,-} f_0^{(\sigma \sigma'')}
\chi^{(\sigma'' \sigma')}\nonumber \\
&& + \gamma_2^{(\sigma)} \sum_{\sigma''=+,-} f_1^{(\sigma, \sigma'')}
\Gamma^{(\sigma'' \sigma')},
\label{chig2}
\end{eqnarray}
with the auxiliary definitions $\Gamma^{(\sigma, \sigma')}=\int \frac{d^3 k}{(2 \pi)^3} \cos (\theta)\, G^{(\sigma,\sigma')}$ and $\gamma_n^{(\sigma)}=\int \frac{d^3 k}{(2 \pi)^3} \cos^n (\theta)\, G_{0}^{(\sigma)}$. Notice that the qp propagators $G_{0}^{(\sigma)}$ have been given in \cite{annals} and that the coefficients $\gamma_i^{(\sigma)}$ can be written \cite{notes} in the Landau limit as $\gamma_1^{(\sigma)}= \nu^{(\sigma)} \chi^{(\sigma)}_0$ and $\gamma_2^{(\sigma)}=\big(\nu^{(\sigma)}\big)^2 \chi^{(\sigma)}_0-\frac{k_{F,\sigma} m^{*}_{\sigma}}{6 \pi^2}$, where $\nu^{(\sigma)}=\frac{m^{*}_{\sigma} \omega}{k_{F,\sigma} q }$.
The qp effective mass in a magnetized system depends on the polarized dipolar coefficients \cite{bookFL},
\begin{equation}
m^*_{\sigma}/m=1+\frac{1}{3} N_{0 \sigma} \big [ f_1^{(\sigma,\sigma)}+(\frac{k^2_{F,-\sigma}} {k^2_{F,\sigma}})f_1^{(\sigma,-\sigma)} \big ]
\label{mef}
\end{equation}
where $N_{0 \sigma}=\frac{m^{*}_{\sigma} k_{F,\sigma}}{2 \pi^2}$ is the quasiparticle level density at each polarized Fermi surface with momentum $k_{F,\sigma}$.
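Because $N_{0 \sigma}$ itself contains $m^{*}_{\sigma}$, Eq. (\ref{mef}) is a self-consistency condition for the effective mass. A minimal numerical sketch, assuming the dipolar parameters $f_1^{(\sigma,\sigma')}$ are given constants (in practice they depend on the interaction and on the polarized densities), solves it by fixed-point iteration in units with $\hbar=1$:

```python
import math

def effective_mass_ratio(m, kF_up, kF_dn, f1_uu, f1_ud, tol=1e-10):
    """Self-consistent solution of Eq. (4): m*_sigma/m depends on m*_sigma
    through N_{0 sigma} = m*_sigma k_{F,sigma} / (2 pi^2).  The f1 values
    are illustrative constants; hbar = 1 units are assumed."""
    ratio = {+1: 1.0, -1: 1.0}                      # start from the bare mass
    kF = {+1: kF_up, -1: kF_dn}
    f1 = {(+1, +1): f1_uu, (-1, -1): f1_uu,
          (+1, -1): f1_ud, (-1, +1): f1_ud}
    for _ in range(200):                            # fixed-point iteration
        new = {}
        for s in (+1, -1):
            N0 = ratio[s] * m * kF[s] / (2 * math.pi ** 2)
            new[s] = 1.0 + (N0 / 3.0) * (f1[(s, s)]
                         + (kF[-s] ** 2 / kF[s] ** 2) * f1[(s, -s)])
        if all(abs(new[s] - ratio[s]) < tol for s in (+1, -1)):
            return new
        ratio = new
    return ratio
```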
The generalized parameters $f_l^{(\sigma,\sigma')}$ are obtained by differentiating the Helmholtz free energy with respect to the polarized density components, $f_{{\bf k},\sigma,{\bf k}',\sigma'}=\frac{\partial^2 F}{\partial n_{{\bf k},\sigma}\partial n_{{\bf k}',\sigma'}}$ \cite{ang2}, setting momenta on the polarized Fermi surfaces and expanding the resulting expression as a series in Legendre polynomials of multipolarity $l$. These generalized parameters fulfill the following relations, recovering the usual FLT parameters in the limit $\Delta \rightarrow 0$ \cite{ang2},
\begin{equation}
f_l=\frac{f_l^{(\sigma,\sigma)}+f_l^{(\sigma,-\sigma)}}{2},
\label{f0sum}
\end{equation}
\begin{equation}
g_l=\frac{f_l^{(\sigma,\sigma)}-f_l^{(\sigma,-\sigma)}}{2}.
\label{g0sum}
\end{equation}
With the generalized parameters and using the expressions in Eq. (\ref{chil1}), the corresponding Lindhard function for the isovector ($S=0$) response of the plasma can be written as,
\begin{equation}
\chi^{(S=0)} = \chi^{(++)} + \chi^{(--)} + \chi^{(+-)} + \chi^{(-+)} ,
\end{equation}
and for the vector-axial ($S=1$) response as,
\begin{equation}
\chi^{(S=1)} = \chi^{(++)} + \chi^{(--)} -\chi^{(+-)} - \chi^{(-+)} .
\end{equation}
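For fixed external spin $\sigma'$, Eqs. (\ref{chil1})--(\ref{chig2}) form a linear system in the four unknowns $\chi^{(\pm,\sigma')}$, $\Gamma^{(\pm,\sigma')}$, which can be solved directly and then combined into the $S=0$ and $S=1$ channels. The sketch below does this with illustrative placeholder inputs: the coefficients $\chi_0^{(\sigma)}$, $\gamma_i^{(\sigma)}$ and the Landau parameters are supplied by the user and are not taken from any specific interaction.

```python
import numpy as np

def rpa_responses(chi0, g1, g2, f0, f1):
    """Solve the coupled dipolar RPA system, Eqs. (2)-(3), for the four
    chi^{(sigma,sigma')} and combine them into the S=0 and S=1 responses.
    chi0, g1, g2: dicts over sigma = +1/-1 with chi_0^{(sigma)},
    gamma_1^{(sigma)}, gamma_2^{(sigma)} (complex values allowed).
    f0, f1: dicts over (sigma, sigma') with the generalized Landau parameters."""
    spins = (+1, -1)
    chi = {}
    for sp in spins:          # sigma' fixed: 4 coupled unknowns
        # unknown vector u = (chi^{+sp}, chi^{-sp}, Gamma^{+sp}, Gamma^{-sp})
        A = np.zeros((4, 4), dtype=complex)
        b = np.zeros(4, dtype=complex)
        for i, s in enumerate(spins):
            A[i, i] += 1.0                      # chi equation, Eq. (2)
            A[2 + i, 2 + i] += 1.0              # Gamma equation, Eq. (3)
            for j, s2 in enumerate(spins):
                A[i, j]         -= chi0[s] * f0[(s, s2)]
                A[i, 2 + j]     -= g1[s] * f1[(s, s2)]
                A[2 + i, j]     -= g1[s] * f0[(s, s2)]
                A[2 + i, 2 + j] -= g2[s] * f1[(s, s2)]
            b[i] = chi0[s] if s == sp else 0.0
            b[2 + i] = g1[s] if s == sp else 0.0
        u = np.linalg.solve(A, b)
        chi[(+1, sp)], chi[(-1, sp)] = u[0], u[1]
    chi_S0 = chi[(1, 1)] + chi[(-1, -1)] + chi[(1, -1)] + chi[(-1, 1)]
    chi_S1 = chi[(1, 1)] + chi[(-1, -1)] - chi[(1, -1)] - chi[(-1, 1)]
    return chi_S0, chi_S1
```

As a sanity check, for equal spin populations, $f_1=0$ and a spin-independent $f_0=f$, the density channel reduces to the textbook RPA result $\chi^{S=0}=2\chi_0/(1-2f\chi_0)$ while the spin channel stays unrenormalized.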
These expressions for the Lindhard function in the RPA \cite{rpa} include in-medium correlations at zero temperature. From them, one can obtain the structure functions given by,
\begin{equation}
S^{S=0,1}(\omega,q)=\frac{-1}{\pi} \mathrm{Im} \,\chi^{S=0,1}(\omega,q).
\end{equation}
The structure function allows us to calculate the non-relativistic differential cross section for neutrinos scattering off matter via neutral currents \cite{peth},
\begin{equation}
\frac{1}{V} \frac{d \sigma} {d \Omega d \omega} = \frac{G^2_F}{8 \pi^3} E'^2
[ C_V^2 (1+ \cos \theta) S^{0}(\omega,q)+ C_A^2 (3-\cos \theta) S^{1}(\omega,q)]
\label{cs}
\end{equation}
where $E(E')$ is the incoming (outgoing) neutrino energy and $\vec{k}$ $(\vec{k'})$ is the neutrino incoming (outgoing) three-momentum. The transferred energy is $\omega=E-E'$ and the transferred three-momentum is $\vec{q}=\vec{k} -\vec{k'}$. The neutral current vector and axial vector charges are $C_V=1/2$ and $C_A=-g_a/2$ where $g_a=1.260$ \cite{pdb}. $G_F/(\hbar c)^3=1.166\,39(1) \times 10^{-5}$ GeV$^{-2}$ is the Fermi coupling constant. Once the response has been evaluated, it is straightforward to evaluate the neutrino mean free path in the medium, $\lambda^{-1}=\int \frac{1}{V} \frac{d \sigma} {d \Omega d\omega} d \Omega d \omega$.
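As a rough numerical sketch of this last step, the inverse mean free path can be obtained by integrating Eq. (\ref{cs}) over the outgoing solid angle and the transferred energy, using massless-neutrino kinematics ($\hbar=c=1$, $q^2=E^2+E'^2-2EE'\cos\theta$). The structure functions are supplied as callables; the constant-$S$ inputs below are toy placeholders used only to exercise the quadrature:

```python
import math

# G_F in MeV^-2 (1.16639e-5 GeV^-2) and the neutral-current charges of Eq. (9).
GF, CV, CA = 1.16639e-11, 0.5, -1.260 / 2

def inverse_mfp(E, S0, S1, n_w=200, n_c=200):
    """Inverse neutrino mean free path lambda^{-1} for incoming energy E (MeV),
    by midpoint quadrature over omega in (0, E) and cos(theta) in (-1, 1)."""
    total = 0.0
    dw = E / n_w
    dc = 2.0 / n_c
    for i in range(n_w):
        w = (i + 0.5) * dw                  # transferred energy
        Ep = E - w                          # outgoing neutrino energy
        for j in range(n_c):
            c = -1.0 + (j + 0.5) * dc       # cos(theta)
            q = math.sqrt(E ** 2 + Ep ** 2 - 2 * E * Ep * c)
            integrand = (GF ** 2 / (8 * math.pi ** 3)) * Ep ** 2 * (
                CV ** 2 * (1 + c) * S0(w, q) + CA ** 2 * (3 - c) * S1(w, q))
            total += integrand * 2 * math.pi * dw * dc   # dOmega = 2 pi dcos
    return total

# toy structure functions, just to exercise the quadrature
lam_inv = inverse_mfp(15.0, S0=lambda w, q: 1.0, S1=lambda w, q: 1.0)
```

For constant structure functions the integral is analytic, $\lambda^{-1}=G_F^2 (C_V^2+3C_A^2) E^3/(6\pi^2)$, which provides a direct check of the quadrature.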
\section{Results}
\label{results}
In this section we include the effect of in-medium correlations in the magnetized neutron system as obtained in the Hartree-Fock approximation in the presence of a strong magnetic field. In Fig.~\ref{Fig1} the ratio of the effective neutron mass to the free value at saturation density, $\rho_0$, is shown as a function of the logarithm of the magnetic field strength for the Skyrme SLy7 (a) and Gogny D1P (b) parametrizations. For each model upper (lower) curves correspond to spin up (down) polarized particles. With the Skyrme description the intense field affects both the absolute and the relative (up versus down polarized) values of the effective nucleon mass more than with Gogny. The impact of density effects on the mean free path can be seen in Fig.~\ref{Fig2}. We plot the relative change of the neutrino mean free path in the RPA dipolar approximation for a fixed value of the magnetic field strength, $B=5 \times 10^{17}$ G, with respect to the field-free case, $R_B=\frac{\lambda_{B=5 \times 10^{17}G} - \lambda_{B=0}}{\lambda_{B=0}}$, as a function of the density (in units of the nuclear saturation density, $\rho_0$). We consider the Skyrme SLy7 (solid line), SLy4 (long dashed line), Gogny D1P (short dashed line) and D1S (dotted line) parametrizations and set as a typical value of the incoming neutrino energy $E_{\nu}=15$ MeV. While the Gogny forces show almost unchanged or very mildly reduced mean free paths at densities in the range $[1-2]\rho_0$, the Skyrme forces show a dramatic increase at high density with respect to the field-free case. Note that all standard Skyrme forces predict the onset of a ferromagnetic transition in the range $[1-4]\rho_0$; for our selection of interactions it is near $3.3\rho_0$. However, this feature is not present in the Gogny forces, which prevent a ferromagnetic transition.
At densities $[1-2]\rho_0$, effects due to the energetic contribution of the magnetic perturbation, introduced by the neutron magnetic moment, are small at the selected field ($B=5 \times 10^{17}$ G) compared with changes in other single particle properties like the effective masses. For even lower densities (i.e. $0.5\rho_0$) it can be seen (see Fig. 7 in \cite{angprc}) that the $\mu_n \Delta \rho B$ term produces a relevant contribution to the magnetization. For application in astrophysical scenarios and at low densities one should consider that the effect of a non-zero proton fraction determines the appearance of pasta phases \cite{pasta}, where electromagnetic and nuclear interactions are frustrated and clustering of matter arises. As density grows, at fixed values of $B$, the spin polarization decreases, forming a {\it plateau} at intermediate densities before the possible appearance of a phase transition in the system.
\begin{figure}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[angle=-90,scale=.6]{mass-sly7.eps}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[angle=-90,scale=.6]{mass-d1p.eps}
\caption{Effective neutron mass at saturation density $\rho_0$ as a function of the logarithm of the magnetic field strength for the Skyrme SLy7 (a) and Gogny D1P (b) parametrizations. For each model upper (lower) curves correspond to spin up (down) polarized particles.}
\label{Fig1}
\end{minipage}
\end{figure}
\begin{figure}[hbtp]
\begin{center}
\includegraphics [angle=-90,scale=.75] {fig1.eps}
\caption{Relative change ratio of neutrino mean free paths for $B=5 \times 10^{17}$ G with respect to the field-free case as a function of density calculated with Skyrme (SLy4 and SLy7) and Gogny (D1P and D1S) forces for a neutrino energy $E_{\nu}=15$ MeV.}
\label{Fig2}
\end{center}
\end{figure}
In Fig.~\ref{Fig3} we plot the relative change of the neutrino mean free path in the RPA dipolar approximation, computed for a generic value of the magnetic field strength with respect to the field-free case, as a function of the logarithm (base 10) of the magnetic field strength, $R_{\rho}=\frac{\lambda_B-\lambda_{B=0}}{\lambda_{B=0}}$. We set a density $\rho=3\rho_0$ and use the SLy7 (solid line), SLy4 (long dashed line), D1P (short dashed line) and D1S (dotted line) parametrizations. For fields below $B \approx 10^{17}$ G there is almost no change in the ratio, but for larger strengths there is a decrease (increase) as computed with the Gogny (Skyrme) forces. For this high density case the change can be $\approx 10\%$ as computed with the SLy7 parametrization, while the Gogny D1P predicts a relative change $\lesssim 1\%$. Note that the main contribution to the mean free paths comes from the fact that, as shown in Fig.~\ref{Fig1}, the Skyrme parametrization predicts a larger change in the absolute and relative values of the two effective masses of the spin polarized components. The Landau parameters and the energetic contribution of the magnetic perturbation \cite{ang2} make a minor contribution to the structure functions, which in turn determine the mean free paths. It is worth mentioning that the Lindhard function, $\chi^{(S)}$, has a rich structure in ($\omega, q$) that has been studied in \cite{ang3}; however, the smallness of the magnetic perturbation is washed out in the response of the system by the influence of the magnetization \cite{angprc} and of density effects on the neutron effective mass. As we can see from Fig.~\ref{Fig3}, this result shows not only quantitative but also qualitative differences in the neutrino transparency of magnetized neutron matter.
\begin{figure}[hbtp]
\begin{center}
\includegraphics [angle=-90,scale=.75] {fig2.eps}
\caption{Ratio of change of neutrino mean free paths as a function of the logarithm of magnetic field strength with Skyrme (SLy4 and SLy7) and Gogny (D1P and D1S) parametrizations at $\rho=3\rho_0$ and a neutrino energy $E_{\nu}=15$ MeV.}
\label{Fig3}
\end{center}
\end{figure}
\section{Conclusions}
\label{conc}
We have investigated for the first time, in the context of the Landau Theory of normal Fermi Liquids, the effect of a strong magnetic field on the variation of the neutrino mean free path in a partially magnetized pure neutron system within the framework of the non-relativistic Hartree-Fock approximation, comparing Skyrme and Gogny forces. We find that for fields up to the maximum strength studied in this work, $B=10^{18}$ G, Skyrme forces show at high density an enhancement of the neutrino transparency of the system, while Gogny forces predict a small decrease. These results can be explained by the fact that for the density and $B$ field range considered in this work the variation of the Landau parameters is a minor contribution compared to the effective mass and the magnetization.
\vspace{2ex}
\noindent{\bf Acknowledgments}\\
We acknowledge discussions with J. Navarro and A. Polls. This work has been partially funded by the Spanish Ministry of Science and Education under projects FIS2006-05319, FIS2009-07238 and Junta de Castilla y Leon Excellence program GR234. We also acknowledge support by CompStar, a research networking programme of the European Science Foundation.
\section{Introduction}
\label{intro}
\IEEEPARstart{T}he ever increasing demand for higher data rates in wireline communication links imposes the use of sophisticated digital equalization techniques, usually implemented at the receiver side \cite{adc_mot3}. These require high-speed front-end ADCs for proper analog to digital signal conversion.
It is well-known that a major non-ideal issue of wireline links is the frequency dependent channel loss. Such behaviour causes an increase in both the inter-symbol interference (ISI) and the peak-to-average power ratio (PAPR) of the signal at the channel output as the baud rate increases.
To avoid excessive signal distortion due to clipping, the ADC is required to provide a large dynamic range, which leads to a high ENOB \cite{IEEEADC} requirement to achieve the desired system performance. The demand for a large dynamic range translates to higher circuit design complexity and higher power consumption, which are among the key issues in high-speed applications.
Fischer proposed Dynamics Limited Precoding (DLP) technique \cite{thp_ext} that allows receiver PAPR control.
DLP is an extension of the well-known Tomlinson-Harashima Precoder (THP) \cite{thp1}, which offers a trade-off between transmitter and receiver PAPR. One extreme point of DLP is the original THP, with minimal PAPR at the channel input and maximal PAPR at the channel output. The other extreme point is essentially channel inversion at the transmitter, which provides minimal PAPR at the channel output at the expense of maximal PAPR at the channel input. Since the channel input (transmitter output) is voltage limited and prone to quantization due to the Digital to Analog Converter (DAC), DLP shifts the problem to the transmitter side without providing any overall gain. On the contrary, the quantization noise at the transmitter is an additional noise source.
A common equalizer that can be used for ISI (and PAPR) reduction at the ADC input is a Continuous Time Linear Equalizer (CTLE) \cite{ctler}. CTLE has several disadvantages. First, CTLE introduces a large impedance discontinuity at the channel and equalizer interface. Impedance matching networks, which often employ inductors, can be used to prevent the discontinuity. However, the large inductors make this approach less suitable for on-chip integration.
In addition, CTLE must be optimized for each channel, and both devising an adaptation algorithm and practically modifying the components at high frequencies are formidable challenges.
Inspired by the mathematical similarity between the problem at hand and the problem of PAPR reduction at the transmitter due to the pulse shaping filter effect, we sought to derive a parallel technique.
A recent shaping technique for PAPR reduction at the transmitter was presented in \cite{papr2}.
To avoid peak excursions at the pulse shaping filter output, symbol transitions which result in high peak values are removed from the trellis graph, so that a PAPR gain is achieved compared to un-shaped transmission. However, both the implementation and the theoretical analysis require a prior calculation of the shaped distribution, which is stored in a table. The table size depends exponentially on the pulse shaping filter span. Hence, it cannot be used for practical long channels due to the enormous size of the required memory.
In this paper, we propose an online shaping scheme for PAPR reduction at the output of wireline channels that enables reducing the ADC ENOB requirement compared to un-shaped transmission. By theoretical analysis we derive an upper bound on the shaping gain, and we show that the proposed scheme approaches it.
The shaping scheme is attractive especially in high-speed links due to high PAPR at the ADC input on the one hand, but limited receiver power consumption and complexity on the other hand.
Whereas \cite{papr2} requires high memory and a prior calculation of the shaped distribution, the proposed shaping scheme is designed to eliminate these demands by employing online calculation of the distribution (both at transmitter and receiver). It can therefore be used for practical long channels.
Since the suggested shaping scheme uses transmitter precoding over a standard PAM constellation and does not use any filter, unlike transmitter equalization and DLP it increases neither the transmitter PAPR nor the number of signal levels at the transmitter. Therefore, it has no effect on the required transmitter hardware (e.g., DAC and power amplifier).
For data rates 200 Gbps and 400 Gbps, a shaped 8-PAM transmission achieves ADC ENOB gains of 1.43 bit and 1.78 bit, respectively, compared to uniform 4-PAM transmission with Turbo-Equalizer (TE) \cite{TE} at the receiver.
The rest of the paper is organized as follows: Section \ref{sm} describes the system model. In Section \ref{Implementation} we present the online shaping process, both for the Tx and Rx parts. In Section \ref{UB} we present a theoretical analysis and derive the achievable gains using shaping. Section \ref{Results} presents simulation results of the shaped system, compared to a uniform transmission. In Section \ref{conclusion} conclusion remarks are given.
\section{System Model}
\label{sm}
A typical communication link may be adequately described by the model shown in Fig. \ref{block_diagram}.
Let $\boldsymbol{X}=(-Q+1,-Q+3,...,Q-3,Q-1)$ be a one-dimensional $Q$-PAM constellation with cardinality $|\boldsymbol{X}|=Q$,
\begin{figure}[ht]
\centering
\includegraphics[width=3in]{sm_nodac}
\caption{System model.}
\label{block_diagram}
\end{figure}
and let ${x}^{N}\triangleq (x_0,...,x_{N-1})$ be a frame of $N$ symbols where $x_n \in \boldsymbol{X}$ $\forall n$.
The frame ${x}^{N}$ is transmitted at a symbol rate of $f_s$ symbols/sec through a noisy channel with an impulse response ${h(t)}$ and sampled by the ADC every $1/f_s$ seconds. The resulting sampled signal at the ADC output is given by
\begin{equation}
\label{discrete_waveform}
y_n =\sum_{i=0}^{L-1}h_i x_{n-i} + z_n+\eta_n=r_n+z_n+\eta_n
\end{equation}
where $z_n$ and $\eta_n$ are two independent sources of one-dimensional white Gaussian noise with variance $N_{0}/2$ and $N_{A}/2$, respectively, $L$ is the channel span in symbol-period units, and $r_n=\small\sum_{i=0}^{L-1}h_i x_{n-i}$ is the sampled signal at the $n$-th time step. The noise $z_n$ is receiver thermal noise and $\eta_n$ is additional noise caused by the ADC as a result of the quantization process and distortion, approximated as AWGN. This approximation is justified since the quantization noise in practical ADCs is rarely uniform; it is influenced by inaccuracies, non-linearities, clock jitter, and thermal noise inside the ADC, which can overall be approximated as a white Gaussian noise source \cite{real_ADC3}.
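As a concrete illustration of the sampled model (\ref{discrete_waveform}), the short Python sketch below generates the ADC output for an arbitrary impulse response. The toy 2-tap channel and noise variances are illustrative placeholders, not values from this paper.

```python
import numpy as np

def adc_output(x, h, N0=0.0, NA=0.0, rng=None):
    """Sampled ADC output y_n = sum_i h_i x_{n-i} + z_n + eta_n.

    z_n ~ N(0, N0/2) models receiver thermal noise and
    eta_n ~ N(0, NA/2) models ADC quantization/distortion noise.
    """
    rng = rng or np.random.default_rng(0)
    r = np.convolve(x, h)[: len(x)]          # ISI part r_n (causal)
    z = rng.normal(0.0, np.sqrt(N0 / 2), len(x)) if N0 > 0 else 0.0
    eta = rng.normal(0.0, np.sqrt(NA / 2), len(x)) if NA > 0 else 0.0
    return r + z + eta

# noiseless sanity check with a toy 2-tap channel
y = adc_output(np.array([1.0, -1.0, 1.0]), np.array([1.0, 0.5]))
```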
The instantaneous power of the received signal is $p_n = |r_n|^2$ and the Signal to Noise Ratio (SNR) is defined by (\ref{SNR}) where $P_r=E\{p_n\}$ is the average power of the signal, and $E\{\cdot\}$ denotes the statistical averaging.
\begin{equation}
\label{SNR}
\small SNR\triangleq \frac{2P_r}{N_{A}+N_0}
\end{equation}
The PAPR at the ADC input is the ratio between the peak power $p_{peak}$ and the average power $P_r$, where $p_{peak}$ is defined as the value of $p_n$ which is exceeded with probability $\epsilon$. In this paper we use $\epsilon=10^{-4}$. In addition, we normalize the constellation points so that the average power of a uniform transmission is 1.
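This PAPR definition can be estimated empirically from samples of the instantaneous power $p_n$; a minimal sketch (the constant-envelope test signal below is our own placeholder):

```python
import numpy as np

def papr_db(r, eps=1e-4):
    """Empirical PAPR in dB: peak power exceeded with probability eps
    divided by the mean power."""
    p = np.abs(r) ** 2
    p_peak = np.quantile(p, 1.0 - eps)
    return 10.0 * np.log10(p_peak / p.mean())

# a constant-envelope signal has 0 dB PAPR
papr_const = papr_db(np.ones(1000))
```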
The Signal to Noise and Distortion Ratio (SNDR) and the ENOB of an ADC device are defined by \cite{IEEEADC}
\begin{equation}
\label{SNR_ADC}
\small SNDR\triangleq \frac{2P_r}{N_{A}}
\end{equation}
\begin{equation}
\label{ENOB}
\small ENOB \triangleq \frac{10\cdot \log_{10}(SNDR\cdot PAPR)-4.76}{6}
\end{equation}
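Definitions (\ref{SNR_ADC}) and (\ref{ENOB}) translate directly into code; a small helper working in dB (the input values below are illustrative, not from the paper):

```python
def enob(sndr_db, papr_db):
    """ENOB from SNDR and PAPR given in dB, following the definition
    ENOB = (10*log10(SNDR*PAPR) - 4.76) / 6."""
    total_db = sndr_db + papr_db          # equals 10*log10(SNDR*PAPR)
    return (total_db - 4.76) / 6.0

e = enob(30.0, 10.0)   # e.g., SNDR 30 dB, PAPR 10 dB
```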
Since shaping for PAPR reduction allows an equivalent increment in the average power of the received signal, the overall shaping gain $G_T$ is the sum of the PAPR and SNDR gains (if denoted in dB), given a constant ratio between the transmitted average power $P_t=E\{|x_n|^2\}$ and the thermal noise $N_0$. This ratio is denoted the Transmitter Signal to Thermal Noise Ratio (TSTNR).
A typical wireline channel, with a causal continuous time impulse response, could be approximated as \cite{channel}
\begin{equation}
\label{channelimpulse}
h(t)=\frac{A}{\sqrt{(t_0+t)^3}e^{\frac{\pi A^2}{(t_0+t)}}}\circledast \frac{2B}{\pi ((t_0+t)^2+B^2)}, t\geq 0
\end{equation}
where $A$ and $B$ are positive constants that determine the relaxation time of the response and $t_0 \geq 0$ is a parameter that determines the first sample of $h(t)$. In this paper, we use $t_0=7.7\cdot10^{-13} sec$, $A=10^{-6} \sqrt{sec}$ and $B=8.8\cdot10^{-12} sec$, which are typical values of a microstrip trace of length 50 cm used for communicating between two chips \cite{channel}. The discrete sampled impulse response is therefore $h_n=h(t_0+n/f_s)$.
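The sampled taps $h_n$ can be obtained by evaluating the continuous-time convolution in (\ref{channelimpulse}) numerically. The rough sketch below discretizes both kernels on a fine grid; the grid resolution and the normalization are our own implementation choices, so the resulting taps are only indicative of the shape of the responses listed below.

```python
import numpy as np

t0, A, B = 7.7e-13, 1e-6, 8.8e-12   # parameters quoted in the text
fs = 112e9                          # Channel-A symbol rate
dt = 5e-14                          # integration grid step (our choice)
t = np.arange(0, 40 / fs, dt)

f1 = A / (np.sqrt((t0 + t) ** 3) * np.exp(np.pi * A ** 2 / (t0 + t)))
f2 = 2 * B / (np.pi * ((t0 + t) ** 2 + B ** 2))
conv = np.convolve(f1, f2)[: len(t)] * dt   # causal continuous convolution

# sample at the symbol instants and normalize the voltage gain
idx = np.round((t0 + np.arange(30) / fs) / dt).astype(int)
h = conv[idx] / conv[idx].sum()
```

The taps should peak near the first symbol periods and then decay, as in the listed responses.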
The symbol rates we use in this paper are $f_s = 112$ Gsymbol/sec and $f_s = 224$ Gsymbol/sec. The resulting voltage gain normalized sampled impulse responses are
\begin{equation}
\label{cha}
\begin{split}
h_A & = \{0.13, 0.19, 0.14, 0.09, 0.07, 0.05, 0.037, 0.031, 0.025,\\
& 0.02, 0.016, 0.014, 0.013, 0.012, 0.011, 0.01, 0.009, 0.008,\\
& 0.0075, 0.0072, 0.0065, 0.0071, 0.0057, 0.0055, 0.0044,\\
& 0.0044, 0.0033, 0.0033, 0.0032, 0.0029\}.
\end{split}
\end{equation}
\begin{equation}
\label{chb}
\begin{split}
h_B & = \{0.069, 0.1, 0.11, 0.098, 0.08, 0.06, 0.05, 0.04, 0.038,\\
& 0.032, 0.028, 0.024, 0.021, 0.019, 0.017, 0.015, 0.014,\\
& 0.013, 0.0118, 0.0108, 0.01, 0.0092, 0.0086, 0.008, 0.0075, \\
& 0.007, 0.0066, 0.0062, 0.0058, 0.0055, 0.0052, 0.00498,\\
& 0.00474, 0.00451, 0.00429, 0.0041, 0.0039, 0.0037,\\
& 0.0036, 0.0034, 0.0037, 0.0034, 0.0029, 0.0028,\\
& 0.0028, 0.0025, 0.0023, 0.0023, 0.002, 0.0017\}.
\end{split}
\end{equation}
The impulse responses (\ref{cha}) and (\ref{chb}) are denoted Channel-A and Channel-B, respectively. Note that Channel-A and Channel-B span over $L = 30$ and $L=50$ symbols, respectively.
\section{Implementation}
\label{Implementation}
A binary information stream $\boldsymbol{u}$ is first encoded by an Error Correcting Code (ECC) into a codeword at rate $R$ bits/symbol.
In every time step $n$, the precoder maps $m=\log_2(Q)$ coded bits ${b}^{m}_n\triangleq (b_{n0}, b_{n1},..b_{n(m-1)})$ to a symbol $x_n$ that satisfies a peak power constraint $p_n\leq\gamma$.
To do so, the precoder firstly calculates the forbidden symbols for transmission at step $n$ (symbols that would yield $p_n>\gamma$) according to the \textit{channel state} ${s}_n$, where ${s}_n$ is defined as the last $L-1$ transmitted symbols $(x_{n-1},x_{n-2},...,x_{n-L+1})$.
The sets of the forbidden and non-forbidden symbols are denoted by $\boldsymbol{F}$ and $\boldsymbol{\overline{F}}$, respectively, where $\boldsymbol{F} \cap \boldsymbol{\overline{F}}=\emptyset$ and $\boldsymbol{F} \cup \boldsymbol{\overline{F}}=\boldsymbol{X}$.
The calculation of $\boldsymbol{F}$ is performed according to (\ref{Fcalc}), and $\boldsymbol{\overline{F}}= \{x \in \boldsymbol{X} \colon x \notin \boldsymbol{F}\} $.
\begin{equation}
\label{Fcalc}
\boldsymbol{F}=\bigg \{x \in \boldsymbol{X} \colon \bigg|h_0x+\sum_{i=1}^{L-1}h_i x_{n-i}\bigg|^2 > \gamma \bigg \}
\end{equation}
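A direct implementation of the forbidden-set test (\ref{Fcalc}) for a given channel state is straightforward; the 2-tap channel and threshold below are toy values chosen only to exercise the rule:

```python
def forbidden_sets(X, h, state, gamma):
    """Split the constellation into forbidden (F) and allowed (Fbar) symbols.

    state = (x_{n-1}, ..., x_{n-L+1}) holds the last L-1 transmitted symbols,
    so the ISI tail is sum_{i>=1} h_i * x_{n-i}.
    """
    tail = sum(h[i] * state[i - 1] for i in range(1, len(h)))
    F = [x for x in X if abs(h[0] * x + tail) ** 2 > gamma]
    Fbar = [x for x in X if x not in F]
    return F, Fbar

F, Fbar = forbidden_sets((-3, -1, 1, 3), [1.0, 0.5], state=(1,), gamma=4.0)
```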
Let us define the indicator vector $\boldsymbol{A}=({A_0},{A_1},...,{A_{Q-1}})$ of the constellation $\boldsymbol{X}$ as
\begin{equation}
A_i=\mathbbm{1}_{\boldsymbol{F}}({x_i})=
\begin{cases}
0, & x_i\in \boldsymbol{F} \\
1, & x_i\in \boldsymbol{\overline{F}}
\end{cases}
, i=0,1,...,Q-1
\label{indicators_set}
\end{equation}
According to a mapping table $\boldsymbol{T}$ and the set $\boldsymbol{A}$ in time step $n$, the bits ${b}^{m}_n$ are uniquely mapped to a symbol $x_n \in \boldsymbol{\overline{F}}$.
Note that the size of $\boldsymbol{T}$ is $2^Q$-by-$Q$ (it does not depend on the channel length $L$).
The precoder operation is summarized by the block diagram illustrated in Fig. \ref{precoder}.
\begin{figure}[ht]
\centering
\includegraphics[width=3in]{precoder3}
\caption{The precoding process.}
\label{precoder}
\end{figure}
The mapping table $\boldsymbol{T}$ is constructed as follows.
If $\boldsymbol{F}=\emptyset$, then $\boldsymbol{\overline{F}}=\boldsymbol{X}$ and the bit labeling of the symbols is the Gray labeling. Otherwise, the bit labels of the symbols in $\boldsymbol{F}$ cannot be used, and each should be assigned to a corresponding symbol from $\boldsymbol{\overline{F}}$. Each label from $\boldsymbol{F}$ is assigned to the symbol from $\boldsymbol{\overline{F}}$ whose label has the minimal Hamming distance to it. In case of several symbols from $\boldsymbol{\overline{F}}$ with the same minimal Hamming distance, the symbol with the lowest Euclidean distance is chosen. The bits that differ among the labels now sharing a symbol are equivalent to erasures (they could be zero or one, since they are unknown to the receiver). The mapping table $\boldsymbol{T}$ of a 4-PAM constellation, $\boldsymbol{X}=(-3,-1,1,3)$, is presented in Table \ref{all_mapping_tables}. The row index is the decimal representation of the binary set $\boldsymbol{A}$ (e.g., if $\boldsymbol{A}=(0,1,1,1)$ and ${b}^{m}_n=(1,1)$, the transmitted symbol is $x_n=\boldsymbol{T}_{7,(11)}=3$).
\begin{table} [H]
\begin{center}
\begin{tabular}{| c | c | c | c | c |}
\hline
&10 & 00 & 01 & 11 \\ \hline
0& - & - & - & - \\ \hline
1& 3 & 3 & 3 & 3 \\ \hline
2& 1 & 1 & 1 & 1 \\ \hline
3& 3 & 1 & 1 & 3 \\ \hline
4& -1 & -1 &-1 &-1 \\ \hline
5& -1 & -1 & 3 & 3 \\ \hline
6& -1 & -1 & 1 & 1 \\ \hline
7& -1 & -1 & 1 & 3 \\ \hline
8& -3& -3 &-3 &-3 \\ \hline
9& -3 & -3 & 3 & 3 \\ \hline
10& -3 & -3 & 1 & 1 \\ \hline
11&-3 & -3 & 1 & 3 \\ \hline
12& -3& -1 & -1 & -3 \\ \hline
13& -3& -1 & -1 & 3 \\ \hline
14&-3 & -1 & 1 & 1 \\ \hline
15& -3 & -1 &1 & 3 \\ \hline
\end{tabular}
\caption{Mapping table $\boldsymbol{T}$ of a 4-PAM constellation.}
\label{all_mapping_tables}
\end{center}
\end{table}
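The labeling rule above can be reproduced in a few lines; the sketch below rebuilds rows of Table \ref{all_mapping_tables} from an indicator vector $\boldsymbol{A}$ (the tie-break order chosen here — minimal Hamming distance first, then minimal Euclidean distance — follows the text, and is checked only against rows where it is unambiguous):

```python
GRAY = {-3: '10', -1: '00', 1: '01', 3: '11'}   # Gray labels of 4-PAM

def mapping_row(A, X=(-3, -1, 1, 3)):
    """Row of the mapping table T for indicator vector A (1 = allowed)."""
    allowed = [x for x, a in zip(X, A) if a == 1]
    if not allowed:
        return None
    hamming = lambda u, v: sum(c != d for c, d in zip(u, v))
    row = {}
    for x, a in zip(X, A):
        if a == 1:
            row[GRAY[x]] = x        # allowed symbols keep their own label
        else:
            # reassign the forbidden label: min Hamming, then min Euclidean
            row[GRAY[x]] = min(
                allowed, key=lambda y: (hamming(GRAY[x], GRAY[y]), abs(y - x)))
    return row

row7 = mapping_row((0, 1, 1, 1))   # row index 7 in the table
row3 = mapping_row((0, 0, 1, 1))   # row index 3 in the table
```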
The signal $x^N$ at the precoder output is a Markov process. The $Q^{L-1}$ distinct states of the Markov process are indexed by $i\in \mathbb{Z}$, $i=0,1,...,Q^{L-1}-1$. Since $\Pr(s_n=j|{s_{n-1}}=i)=\Pr(j|i)$ $\forall n$, the transmission is a stationary time-homogeneous Markov chain, and the transition between channel states is uniquely defined by a symbol $x_{ij}\in \boldsymbol{X}$, i.e., $\Pr(j|i)=\Pr(x_{ij}|i)$ where $x_{ij}$ is the symbol that causes a transition from state $i$ to state $j$.
At the receiver side we use a modified M-BCJR algorithm, which computes online the state probabilities of this Markov process, whose full state space of $Q^{L-1}$ states is prohibitively large.
The M-BCJR algorithm \cite{MBCJR} computes ${\zeta_{ij}}_n\triangleq\Pr(s_{n-1}=i;s_{n}=j;{y}^{N})$ for all $0< n \leq N-1$ and for $M$ states with the highest metrics at step $n-1$.
Next, $m$ Log Likelihood Ratios (LLR), $\Lambda(b_{nl})$, $0\leq l \leq m-1$, are computed for each noisy symbol $y_n$ according to
\setlength{\arraycolsep}{0.0em}
\footnotesize
\begin{equation}
\begin{split}
{\Lambda(b_{nl})} & =\log\bigg(\sum_{(i,j)}\frac{\sum_{x_{ij}:\hat{b}_l=0}\zeta_{ij_n} + \sum_{x_{ij}:\hat{b}_l=X}\zeta_{ij_n}\cdot \Pr(b_{nl}=0) }{\sum_{x_{ij}:\hat{b}_l=1}\zeta_{ij_n} + \sum_{x_{ij}:\hat{b}_l=X}\zeta_{ij_n}\cdot \Pr(b_{nl}=1) }\bigg)\\
& -{\Lambda^e(b_{nl})}
\end{split}
\label{BCJRllr}
\end{equation}
\normalsize
where the bit label of the symbol $x_{ij}$ is denoted by $\hat{b}^m$ and the ambiguous bits in the bit label are denoted by X.
The bit probabilities $\Pr(b_{nl}=0)$ and $\Pr(b_{nl}=1)$ are calculated from $\Lambda^e(b_{nl})$, which is the extrinsic LLR from the code decoder.
The calculation (\ref{BCJRllr}) requires, for each $x_{ij}$, both $\hat{b}^m$ and the trellis branch probability $\Pr(x_{ij}|{s}=i)$.
In uniform transmission, $\Pr(x_{ij}|{s}=i)=1/Q$ $\forall i,j$ and the symbols bit label is the Gray labeling in all states.
However, in the suggested shaping scheme, $\Pr(x_{ij}|{s}=i)$ and $\hat{b}^m$ depend on the state $i$, $\gamma$, and ${h}^{L}$.
Calculation of these metrics is performed according to the process illustrated in Fig. \ref{precoderBCJR}.
\begin{figure}[ht]
\centering
\includegraphics[width=2.5in]{bcjr6}
\caption{Trellis branch probability online calculation process.}
\label{precoderBCJR}
\end{figure}
The LLR values ${\Lambda}^{mN}$ at the BCJR output can be used as an a priori input to an ECC decoder. In each iteration, the decoder produces extrinsic LLR values $({\Lambda^e})^{mN}$ which are used as an a priori input to the BCJR module, which in turn calculates new extrinsic LLRs that are sent back to the code decoder. After a pre-determined number of iterations has been reached, the bit estimations $\boldsymbol{\hat{u}}$ are determined by performing a hard decision on the decoder LLR values $({\Lambda^e})^{mN}$.
Initially, all $({\Lambda^e})^{mN}$ are set to 0.
\section{Theoretical Analysis}
\label{UB}
This section aims to study the achievable theoretical gains of shaping. In Section \ref{rsqnr_ub} we derive a Lower Bound (LB) on the SNDR given the rate and TSTNR. It is well known that in this case the optimal one-dimensional symbol distribution is Gaussian. We optimize the Power Spectral Density (PSD) of the Gaussian distribution such that the achievable rate is maximized. This LB can be used to upper-bound the SNDR gain, by comparing the LB with the SNDR of a flat PSD (i.i.d.\ distribution) at a given rate.
In Section \ref{papr_ub} we estimate the receiver PAPR of a peak-constrained transmission. We then use this estimation, together with the Upper Bound (UB) on the SNDR gain, to derive the theoretical shaping gain.
\subsection{Upper Bound for Infinite Constellation}
\label{rsqnr_ub}
The channel capacity can be tightly approximated by $C=\lim_{N\to \infty}C_N$ \cite{isicapacity} where
\begin{equation}
\label{capacity}
C_N \triangleq \frac{1}{2N}\sum_{i=0}^{N-1} \log\bigg(1+\frac{2q_i|H_i|^2}{N_A+N_0}\bigg),
\end{equation}
${q}^{N}$ are the energy spectral components of a Gaussian input process $g^{N}$, and ${H}^{N}$ is the $N$-point Discrete Fourier Transform (DFT) of the channel impulse response ${h}^{L}$.
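As a numerical sanity check, (\ref{capacity}) can be evaluated directly; the base-2 logarithm (capacity in bits/symbol) and the flat toy inputs below are our own choices:

```python
import numpy as np

def c_n(q, H, N_A, N_0):
    """Approximate capacity C_N in bits/symbol for spectral powers q over H:
    (1/2N) * sum_i log2(1 + 2 q_i |H_i|^2 / (N_A + N_0))."""
    q, H = np.asarray(q, float), np.asarray(H, float)
    return np.mean(np.log2(1.0 + 2.0 * q * np.abs(H) ** 2 / (N_A + N_0))) / 2.0

# flat channel, flat spectrum: every term contributes log2(2) = 1 bit
c = c_n([1.0, 1.0], [1.0, 1.0], N_A=1.0, N_0=1.0)
```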
A UB on the achievable rate given the SNDR and TSTNR is found by optimizing the channel capacity (\ref{capacity}) under the following constraints
\begin{equation}
\label{pr_const}
\begin{aligned}
& \frac{1}{N}\sum_{i=0}^{N-1}q_i|H_i|^2 \leq K \\
& \frac{1}{N}\sum_{i=0}^{N-1}q_i \leq P
\end{aligned}
\end{equation}
where $K, P \in \mathbb{R}_{\geq 0}$ are the constraints on receiver and transmitter average power, respectively.
We determine the maximum of $C_N$ subject to the constraints (\ref{pr_const}) by introducing Lagrange multipliers $(\alpha,\beta)$, $\alpha,\beta \geq 0$, and find the maximum of
\begin{equation}
\label{LAGRA}
\begin{split}
J&=\sum_{i=0}^{N-1}\log\bigg(1+\frac{2q_i|H_i|^2}{N_A+N_0}\bigg)-\alpha\bigg(\sum_{i=0}^{N-1}q_i|H_i|^2-K\bigg)\\
&-\beta\bigg(\sum_{i=0}^{N-1}q_i-P\bigg)
\end{split}
\end{equation}
We get
\begin{equation}
\label{LAGRAd}
\frac{\partial J}{\partial q_i} = \frac{2|H_i|^2}{2q_i|H_i|^2+N_A+N_0}-\alpha|H_i|^2-\beta=0
\end{equation}
Solving (\ref{LAGRAd}) for $q_i$ yields
\begin{equation}
\label{opt_s}
q^{o}_i = \max \Bigg[0, \frac{1}{\alpha|H_i|^2+\beta}-\frac{N_A+N_0}{2|H_i|^2} \Bigg]
\end{equation}
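The solution (\ref{opt_s}) interpolates between channel inversion ($\beta=0$) and the classical water-pouring allocation ($\alpha=0$); a small sketch with illustrative values:

```python
def q_opt(H2, alpha, beta, N_A, N_0):
    """Optimal spectral power allocation from the stationarity condition:
    q_i = max(0, 1/(alpha*|H_i|^2 + beta) - (N_A + N_0)/(2*|H_i|^2))."""
    return [max(0.0, 1.0 / (alpha * h2 + beta) - (N_A + N_0) / (2.0 * h2))
            for h2 in H2]

# alpha = 0 recovers water-pouring: q_i = max(0, 1/beta - N/(2|H_i|^2));
# the weak subcarrier here receives no power
q = q_opt([1.0, 0.25], alpha=0.0, beta=0.5, N_A=0.5, N_0=0.5)
```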
Capacity is achievable if the input sequence ${g}^{N}$ is a Gaussian process with energy spectral components $({q^{o}})^{N}$ and the multipliers $\alpha$ and $\beta$ are chosen such that the constraints (\ref{pr_const}) are satisfied.
Since we are interested in $({q^{o}})^{N}$ that yields the highest capacity for a given SNDR and TSTNR values, we optimized the capacity (\ref{capacity}) with respect to $K$, while TSTNR and SNDR are kept constants i.e.,
\begin{equation}
\label{optK}
C_o=\max_{K}\bigg(C_N\vert_{\small(\small SNDR,\small TSTNR\small)}\bigg)
\end{equation}
The value of $K$ that maximizes (\ref{optK}) is denoted $K_{o}$.
\begin{figure}[ht]
\centering
\subfloat[]{{\includegraphics[width=1.65in]{UB_WATER_POUR_GAUS_IID45dB} }}
\subfloat[]{{\includegraphics[width=1.65in]{UB_WATER_POUR_GAUS_IID40dB} }}
\caption{UB on the achievable rate for Channel-A. (a) TSTNR 45 dB. (b) TSTNR 40 dB.}
\label{UBSTNR45}
\end{figure}
The UB expression (\ref{opt_s}) can be divided into three regions: (a) $N_A \gg N_0$, (b) $N_A \ll N_0$, and (c) $N_A \approx N_0$. In the region $N_A \gg N_0$, $N_0$ has a negligible influence on the total noise power. The optimization process (\ref{optK}) therefore yields a low $K_{o}$ value. The reason is that for a low $K$ value, the constraint on the transmitted average power $P$ is not effective since it is already satisfied. Hence, the optimal solution is to invert the channel, which is obtained from (\ref{opt_s}) by setting $\beta=0$. The Lagrange multiplier $\alpha$ is chosen such that the average power constraint at the receiver is kept.
In the region $N_A \ll N_0$, $N_A$ has a negligible influence on the total noise. Since $N_0$ is constant, the optimization process (\ref{optK}) yields a high $K$ value. However, the receiver power constraint is not effective when $K$ is higher than the average power that would have been obtained at the receiver without any constraint (as it is already met). The optimal solution therefore reduces to the well-known water-pouring solution, which is obtained from (\ref{opt_s}) by setting $\alpha=0$. The Lagrange multiplier $\beta$ is chosen such that the average power constraint at the transmitter is kept.
In the region $N_A \approx N_0$, both noises influence the total noise power; thus, the optimal solution is given by (\ref{opt_s}). As an example, the channel capacity (\ref{optK}), under the constraints (\ref{pr_const}), was calculated over Channel-A for TSTNR 45 dB and 40 dB.
The UB on the rate (or, equivalently, the LB on the SNDR) at these TSTNR values is illustrated in Fig. \ref{UBSTNR45}(a) and Fig. \ref{UBSTNR45}(b), respectively.
\subsection{Estimation of Receiver PAPR}
\label{papr_ub}
In an un-shaped transmission, the one-dimensional distribution of each sample at the channel output approaches, by the Central Limit Theorem (CLT), the Gaussian distribution. In a peak-constrained transmission, this distribution can therefore be approximated by the Truncated Gaussian (TG) distribution on the region $[-\sqrt{\gamma},\sqrt{\gamma}]$. The probability density function of such a distribution is
\begin{equation}
\label{truncated_gauss_pdf}
f(r)=\frac{\exp\Big({-{r^2}/{2\sigma^2}}\Big)}{\sqrt{2\pi\sigma^2}\,erf\big(\small\sqrt{{\gamma} / {2\sigma^2}}\big)}
\end{equation}
\begin{table}[ht]
\centering
\begin{tabular}{| c |c |c |c|c|}
\hline
&\multicolumn{2}{|c|}{Channel-A}&\multicolumn{2}{|c|}{Channel-B} \\
\hline
$\gamma$ & TG Gauss & Simulation & TG Gauss & Simulation
\\ \hline
-16 dB &4.89 dB &4.9 dB &4.95 dB & 5.15 dB \\ \hline
-14 dB &4.96 dB&5.27 dB & 5.06 dB & 5.3 dB \\ \hline
-12 dB & 5.07 dB& 5.3 dB & 5.24 dB & 5.42 dB \\ \hline
-10 dB &5.24 dB &5.56 dB &5.51 dB & 5.63 dB \\ \hline
-8 dB & 5.52 dB & 5.75 dB & 5.95 dB & 6.1 dB \\ \hline
-6 dB & 6 dB & 6.07 dB & 6.65 dB & 6.85 dB \\ \hline
-4 dB &6.65 dB &6.63 dB &7.73 dB & 8.09 dB \\ \hline
-2 dB &7.7 dB &7.58 dB & 9.15 dB & 9.6 dB \\ \hline
0 dB &10.08 dB &10.03 dB & 11.17 dB& 11.4 dB \\ \hline
\end{tabular}
\caption{PAPR of TG Gauss and online shaping scheme.}
\label{papr_comp}
\end{table}
and its PAPR is
\begin{equation}
\label{truncated_gauss_papr}
PAPR_{TG}=\frac{\gamma}{K_{TG}}
\end{equation}
where
\begin{equation}
\label{K_TG}
K_{TG}(\gamma)=\sigma^2-{erf\Big(\small\sqrt{{\gamma}/{2\sigma^2}}\Big)}^{-1}\sqrt{\frac{{2\gamma\sigma^2}}{\pi}}\exp({-\gamma/2\sigma^2})
\end{equation}
is the average power and $\sigma^2$ is the un-shaped (i.i.d) received signal average power i.e., $\sigma^2=\sum_{i=0}^{L-1}{h_i}^2$.
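Equations (\ref{truncated_gauss_papr})--(\ref{K_TG}) are easy to evaluate with the standard error function; for a clipping level $\gamma \gg \sigma^2$ the truncation becomes immaterial, so $K_{TG}\to\sigma^2$ and $PAPR_{TG}\to\gamma/\sigma^2$. The values below are a sanity check in that regime, not one of the channels of the paper.

```python
import math

def k_tg(gamma, sigma2):
    """Average power of a Gaussian truncated to [-sqrt(gamma), sqrt(gamma)]."""
    a = math.sqrt(gamma / (2.0 * sigma2))
    return sigma2 - math.sqrt(2.0 * gamma * sigma2 / math.pi) \
        * math.exp(-gamma / (2.0 * sigma2)) / math.erf(a)

def papr_tg(gamma, sigma2):
    """PAPR of the truncated-Gaussian approximation: gamma / K_TG."""
    return gamma / k_tg(gamma, sigma2)

k = k_tg(100.0, 1.0)      # gamma >> sigma^2: truncation is negligible
```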
A comparison between (\ref{truncated_gauss_papr}) and the PAPR yielded by the online shaping scheme at the outputs of Channel-A and Channel-B is summarized in Table \ref{papr_comp}, for different $\gamma$ values. It can be seen that (\ref{truncated_gauss_papr}) indeed approximates well the practical PAPR achieved by the online shaping scheme.
\subsection{Theoretical Shaping Gain}
\label{theoretica_GT}
The PAPR gain in a specified $\gamma$ is found by comparing (\ref{truncated_gauss_papr}) to receiver PAPR of uniform 4-PAM transmission.
The SNDR gain for a specified $\gamma$, rate, and TSTNR is found by comparing the theoretical SNDR of an un-shaped (i.i.d.) Gaussian input distribution with the theoretical SNDR achieved by constraining the receiver power to $K=K_{TG}({\gamma})$. The relationships between the theoretical shaping gains and $\gamma$ at rate 1.8 bits/symbol over Channel-A are demonstrated in Fig. \ref{THEOR_G_T}(a) and Fig. \ref{THEOR_G_T}(b), for TSTNR of 40 dB and 34 dB, respectively.
It can be seen that the maximal theoretical shaping gains in these cases are 11.65 dB and 8.83 dB, respectively.
The maximal theoretical shaping gains in rate 1.8 bits/symbol over Channel-A, and the corresponding $\gamma$ values, are summarized in Table \ref{theoretica_GT_vs_tstnr} for several TSTNR values.
\begin{figure}[ht]
\centering
\subfloat[]{{\includegraphics[width=1.65in]{theoretical_gain_TSTNR40_v1.eps} }}
\subfloat[]{{\includegraphics[width=1.65in]{theoretical_gain_TSTNR34.eps} }}
\caption{Relationship between theoretical gains and $\gamma$ in rate 1.8 bits/symbol over Channel-A. (a) TSTNR 40 dB. (b) TSTNR 34 dB.}
\label{THEOR_G_T}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{| c |c |c |}
\hline
TSTNR & $\gamma$ & $G_T$
\\ \hline
45 dB & -15 dB &12.34 dB \\ \hline
40 dB &-15 dB &11.25 dB \\ \hline
37 dB & -13 dB& 9.75 dB \\ \hline
34 dB &-12 dB &8.83 dB \\ \hline
31 dB & -10 dB & 7.71 dB \\ \hline
29 dB & -7 dB & 6.5 dB \\ \hline
\end{tabular}
\caption{Theoretical shaping gains in rate 1.8 bits/symbol over Channel-A for several TSTNR values.}
\label{theoretica_GT_vs_tstnr}
\end{table}
\section{Simulation Results}
\label{Results}
The shaping was applied over 4-PAM and 8-PAM constellations with code rates 0.9 and 0.6, respectively. Hence, the data rate in all systems is $R=1.8$ bits/symbol or, equivalently, 200 Gbps for Channel-A and 400 Gbps for Channel-B.
The code used with all schemes is a standard turbo encoder \cite{Turbo}, made up of two elementary encoders with memory size 4 and the same generator polynomial 37-23 (octal number 37 represents the feed-forward connections and 23 the feedback connections). This code is known to be an optimal code with memory size 4 for various turbo-code rates \cite{optTurbo}.
At the receiver, the number of survivors states we used in the M-BCJR module, in all systems, was $M=16$ states per time step.
The turbo decoding ran for a maximum of 12 iterations on a block length of 4096 information bits.
The shaped systems are compared to uniform 4-PAM transmission with TE at the receiver, over the same channel.
\begin{figure}[ht]
\centering
\subfloat[]{{\includegraphics[width=1.65in]{PAPR_RX_V1} }}
\subfloat[]{{\includegraphics[width=1.65in]{PAPR_RX_V1_224Gsymsec} }}
\caption{PAPR distributions at the channel output for TSTNR 40 dB and rate 1.8 bits/symbol. (a) Channel-A. (b) Channel-B.}
\label{PAPRDIST}
\end{figure}
\begin{figure}[ht]
\centering
\subfloat[]{{\includegraphics[width=1.65in]{RSDNRBER_TSTNR_45dB_16st_1} }}
\subfloat[]{{\includegraphics[width=1.65in]{RSDNRBER_TSTNR_45dB_16st_224G} }}
\caption{BER vs. SNDR for TSTNR 40 dB and rate 1.8 bits/symbol. (a) Channel-A. (b) Channel-B.}
\label{BERPQNRB}
\end{figure}
The resulting PAPR distributions at Channel-A output and the BER curves are presented in Fig. \ref{PAPRDIST}(a) and Fig. \ref{BERPQNRB}(a) respectively, for TSTNR 40 dB.
As was shown in Fig. \ref{THEOR_G_T}(a), the maximal gain is achieved at $\gamma=-15$ dB. However, since for a practical implementation the shaping is applied over a $Q$-point constellation rather than an infinite set of points, the optimal BER performance was achieved at $\gamma$ of -14 dB and -3.9 dB for the shaped 8-PAM and shaped 4-PAM systems, respectively.
The SNDR at which BER $10^{-6}$ is reached, the PAPR at the Channel-A output, and the required ENOB are summarized in Table \ref{GAINSSUM}. It can be seen that the shaped 8-PAM and 4-PAM systems achieve overall shaping gains of 8.55 dB and 4.05 dB, respectively, compared to uniform 4-PAM transmission with TE. These gains translate to ENOB gains of 1.43 bit and 0.68 bit, respectively.
Comparing the SNDR gain to the theoretical SNDR gain indicates that the online shaping scheme suffers a loss of 1.98 dB.
\begin{table} [H]
\begin{center}
\begin{tabularx}{0.912\linewidth}{|c | c |c |c|}
\hline
System & PAPR $10^{-4}$ & SNDR $10^{-6}$ & ENOB
\\ \hline
8-PAM uniform + TE & 10.35 dB & 23.85 dB & 4.9 bit \\ \hline
4-PAM uniform + TE & 10.13 dB & 20.02 dB & 4.23 bit \\ \hline
8-PAM shaped & 5.3 dB & 16.3 dB & 2.8 bit \\ \hline
4-PAM shaped & 6.45 dB & 19.65 dB & 3.55 bit \\ \hline
\end{tabularx}
\caption{Rate 1.8 bits/symbol and TSTNR 40 dB over Channel-A, PAPR at channel output, SNDR and ENOB summary.}
\label{GAINSSUM}
\end{center}
\end{table}
In the case of rate 1.8 bits/symbol over Channel-B and TSTNR 40 dB, the maximal gain was obtained when constraining the peak power $\gamma$ to -17 dB. The resulting PAPR distributions and the BER curves are presented in Fig. \ref{PAPRDIST}(b) and Fig. \ref{BERPQNRB}(b), respectively. As before, the metrics of interest are summarized in Table \ref{GAINSSUM1}. It can be seen that the shaped 8-PAM and 4-PAM systems achieve overall shaping gains of 10.65 dB and 5.45 dB, respectively, compared to uniform 4-PAM transmission with TE. These gains translate to ENOB gains of 1.78 bit and 0.91 bit, respectively.
\begin{table} [H]
\begin{center}
\begin{tabularx}{0.912\linewidth}{|c | c |c |c|}
\hline
System & PAPR $10^{-4}$ & SNDR $10^{-6}$ & ENOB
\\ \hline
8-PAM uniform + TE & 11 dB & 28.3 dB & 5.75 bit \\ \hline
4-PAM uniform + TE & 10.95 dB & 24 dB & 5.03 bit \\ \hline
8-PAM shaped & 5.3 dB & 19 dB & 3.25 bit \\ \hline
4-PAM shaped & 6.4 dB & 23.1 dB & 4.12 bit \\ \hline
\end{tabularx}
\caption{Rate 1.8 bits/symbol and TSTNR 40 dB over Channel-B, PAPR at channel output, SNDR and ENOB summary.}
\label{GAINSSUM1}
\end{center}
\end{table}
\section{Conclusion}
\label{conclusion}
A novel online shaping technique for PAPR reduction at the output of high-speed wireline channels has been presented. The technique is effective in reducing the large ADC dynamic-range requirement and thereby the required ENOB, such that an overall gain is achieved compared to uniform transmission with TE at the receiver.
A theoretical analysis, which provides a LB on the SNDR and the theoretical shaping gains, has been derived as well.
At data rates of 200 Gbps and 400 Gbps, overall ENOB gains of up to 1.43 bit and 1.78 bit, respectively, were demonstrated compared to a uniform 4-PAM transmission with TE at the receiver side.
\section{Introduction}
\hspace{4mm} We consider the problem of computing the top $k$ eigenvectors of a symmetric and positive semi-definite matrix $A \in \mathbb{R}^{n \times n}$, which has many applications in numerical linear algebra (low rank approximation), statistics (principal component analysis) and signal processing.
We denote by $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ the eigenvalues of $A$ counted with multiplicity and by $\delta:=\lambda_k-\lambda_{k+1}$ the eigengap for some $k$ between $1$ and $n-1$. We also denote $\Lambda_{\alpha} = \textnormal{diag}(\lambda_1, \ldots ,\lambda_k)$ and $\Lambda_{\beta} = \textnormal{diag}(\lambda_{k+1}, \ldots ,\lambda_n)$.
A set of $k$ leading eigenvectors of $A$ can be found by minimizing the function
\[
f(X) = -\Tr(X^TAX)
\]
over the set of $n \times k$ matrices with orthonormal columns. Indeed, from the Ky-Fan theorem we know that
\begin{equation}\label{eq:min_f_over_X}
\min \{ f(X) \colon X \in \mathbb{R}^{n \times k}, X^T X = I_k \} = -(\lambda_1 + \cdots + \lambda_k)=-\Tr(\Lambda_{\alpha})=:f^*.
\end{equation}
Since $A$ is symmetric, we can define the matrix $V_{\alpha} = \begin{bmatrix} v_1 \hspace{2mm} \cdots \hspace{2mm} v_k \end{bmatrix}$ such that $V_{\alpha}^T V_{\alpha} = I_k$ and with $v_i \in \mathbb{R}^n$ a unit-norm eigenvector corresponding to $\lambda_i$. If the eigengap $\delta$ is strictly positive, then $\myspan(V_{\alpha})$ is unique; otherwise, we can choose any $v_k$ from a subspace with dimension equal to the multiplicity of $\lambda_k$. It is readily seen that $f(V_{\alpha})= -(\lambda_1 + \cdots + \lambda_k)$. In fact, all minimizers of~\eqref{eq:min_f_over_X} are of the form $V_{\alpha} Q$ with $Q$ a $k \times k$ orthogonal matrix. We also define $V_{\beta}=\begin{bmatrix} v_{k+1} \hspace{2mm} \cdots \hspace{2mm} v_n \end{bmatrix}$ that contains the eigenvectors corresponding to the eigenvalues $\lambda_{k+1},\ldots,\lambda_n$. Its columns span the orthogonal complement of $\myspan(V_{\alpha})$ in $\mathbb{R}^n$ and thus $V_{\beta}^T V_{\beta} = I_{n-k}$ and $V_{\alpha}^T V_{\beta} = {0}_{k \times (n-k)}$.
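The Ky-Fan identity~\eqref{eq:min_f_over_X} is easy to verify numerically; a small sketch with a random PSD matrix (our own toy data, using NumPy's eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
B = rng.standard_normal((n, n))
A = B @ B.T                          # symmetric positive semi-definite

def f(X):
    """Block Rayleigh quotient f(X) = -tr(X^T A X)."""
    return -np.trace(X.T @ A @ X)

lam, V = np.linalg.eigh(A)           # eigenvalues in ascending order
V_alpha = V[:, -k:]                  # k leading eigenvectors
f_star = -lam[-k:].sum()             # Ky-Fan minimum value

# any other orthonormal X gives a larger (or equal) value of f
X_rand, _ = np.linalg.qr(rng.standard_normal((n, k)))
```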
Since $\myspan(V_{\alpha}) = \myspan(V_{\alpha}Q)$, it is more natural to consider this problem as a minimization problem on the Grassmann manifold $\Gr(n,k)$, the set of $k$-dimensional subspaces in $\mathbb{R}^n$. Let us therefore redefine the objective function as
\begin{equation}\label{eq:min_f_over_Gr}
f(\mathcal{X}) = -\Tr(X^TAX) \text{ where $\mathcal{X} = \myspan(X)$ for $X \in \mathbb{R}^{n \times k}$ s.t. $X^T X = I_k$}.
\end{equation}
This cost function can be seen as a block version of the standard Rayleigh quotient.
An immediate benefit is that, if $\delta > 0$, the minimizer of~\eqref{eq:min_f_over_Gr} is isolated since it is the subspace $\mathcal{V_{\alpha}} = \myspan(V_\alpha)$.
To minimize $f$ on $\Gr(n,k)$, we shall use the Riemannian steepest descent method (RSD) along geodesics in $\Gr(n,k)$. Quite remarkably, for $\Gr(n,k)$ these geodesics can be implemented efficiently in closed form.
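A single steepest-descent step along a Grassmann geodesic can indeed be written in closed form from the thin SVD of the search direction, following the standard formula $X(t)=XV\cos(\Sigma t)V^T+U\sin(\Sigma t)V^T$ for a tangent $\Delta=U\Sigma V^T$. The sketch below is our own illustration with a fixed, illustrative step size, not the step-size rule analyzed later.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 10, 3
B = rng.standard_normal((n, n))
A = B @ B.T                                   # symmetric PSD test matrix

X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal initial guess

def rsd_step(X, t):
    """One Riemannian steepest-descent step for f(X) = -tr(X^T A X)."""
    G = -2.0 * (np.eye(n) - X @ X.T) @ (A @ X)     # Riemannian gradient
    U, s, Vt = np.linalg.svd(-G, full_matrices=False)  # descent direction -G
    V = Vt.T
    return X @ V @ np.diag(np.cos(s * t)) @ Vt + U @ np.diag(np.sin(s * t)) @ Vt

X_new = rsd_step(X, t=1e-3)
f_old = -np.trace(X.T @ A @ X)
f_new = -np.trace(X_new.T @ A @ X_new)
```

By construction the geodesic stays on the Stiefel representative set, so the new iterate remains orthonormal.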
For analyzing the convergence properties of steepest descent on $\Gr(n,k)$, we extend results of the recent work \cite{alimisis2021distributed}, where it is shown that the Rayleigh quotient on the sphere enjoys favourable geodesic convexity-like properties, namely, \emph{weak-quasi-convexity} and \emph{quadratic growth}. In this work, we show that these convexity-like properties continue to hold in the more general case of the block Rayleigh quotient function $f\colon \Gr(n,k) \rightarrow \mathbb{R}$. These results are of general interest, but also sufficient to prove a local convergence rate for steepest descent for minimizing $f$ when started from an initial point outside the region of local convexity. For the latter, a crucial help is provided by the fact that the Grassmann manifold is positively curved.
In particular, assuming a \emph{strictly positive eigengap} $\delta$ between $\lambda_k$ and $\lambda_{k+1}$, we prove an exponential convergence rate to the subspace spanned by the $k$ leading eigenvectors, similar to the convergence of power method and subspace iteration (Theorem \ref{thm:exponential_conv}). If we do not assume any knowledge regarding the eigengap, then we can still prove a sub-exponential (polynomial) convergence rate of the function values to the global minimum (Theorem \ref{thm:convex_conv}), but we cannot directly study the convergence to a global minimizer. This is in line with previous work but our analysis does not use standard notions of geodesic convexity and allows for an initial guess further from the global minimizer. In Appendix \ref{sec:big_step} we present related convergence results for steepest descent with a more tractable step size but at the expense of needing a slightly better initialization.
\section{Related work}
\hspace{4mm} Over the last few years, different aspects of the convexity of eigenvalue problems have received quite some attention. In \cite{zhang2016riemannian}, the authors prove (in Theorem 4) that the Rayleigh quotient is geodesically gradient dominated in the sphere, that is, it satisfies a spherical version of the Polyak--Łojasiewicz inequality. In \cite{alimisis2021distributed}, it is shown that this result of \cite{zhang2016riemannian} can be strengthened to a geodesic weak-quasi-convexity and quadratic growth property, which imply gradient dominance when combined. Finally, the recent paper \cite{ahn2021riemannian} examines (among other contributions) the convexity structure of the same block version of the symmetric eigenvalue problem on the Grassmann manifold that we introduced above. Unfortunately, the characterization of the geodesic convexity region independently of the eigengap $\delta$ (Corollary 5 in \cite{ahn2021riemannian}) is wrong (see our Appendix~\ref{sec:convexity}).
As we will prove in Theorem~\ref{eq:g-convex_domain}, the geodesic convexity region of $f$ (and the one of the equivalent cost function used in \cite{ahn2021riemannian}) needs to depend on the eigengap, as appears also in \cite[Lemma 7]{pmlr-v119-huang20e} in the case of the sphere ($k=1$).
To the best of our knowledge, the current work is the first that deals with the convergence of the steepest descent algorithm for the multiple eigenvalue-eigenvector problem on the Grassmann manifold. The work \cite{alimisis2021distributed} proves exponential convergence of steepest descent only in the case of $k=1$, that is, for the leading eigenvector. In this paper, we take a reasonable but highly non-trivial step forward by extending this analysis to general $k$, that is, to a block of $k$ leading eigenvectors.
The standard algorithm for computing the leading eigenspace of dimension $k$ is subspace iteration (or the power method when $k=1$).\footnote{Krylov methods are arguably the most popular algorithms but they do not iterate on a subspace directly and are typically started from a single vector. In particular, they cannot easily improve a given approximation of a subspace for large $k>1$.} However, there are reasons to believe that, in certain cases, Riemannian steepest descent (and its accelerated version with non-linear conjugate gradients) should be preferred, especially in noisy settings \cite{alimisis2021distributed} or in electronic structure calculations where the leading eigenspace of many varying matrices $A$ needs to be computed.\footnote{Personal communication by Yousef Saad.} In particular, \cite{alimisis2021distributed} presents strong experimental evidence that steepest descent is more robust to perturbations of the matrix-vector products than subspace iteration close to the optimum. While subspace iteration still behaves better at the start of the iteration, it asymptotically fails to converge to an approximation of the leading subspace that is as good as the one estimated by Riemannian steepest descent. While \cite{alimisis2021distributed} dealt with a noisy situation due to calculations in a distributed setting with limited communication, exactly the same effect can be observed when we inject Gaussian noise into the matrix-vector products. Thus, we expect steepest descent to perform better than subspace iteration close to the optimum in any stochastic regime \cite{hardt2014noisy}.
Regarding worst-case theoretical guarantees, the strongest convergence result for subspace iteration in the presence of a strictly positive eigengap $\delta$ is in terms of the largest principal angle between the iterates and the optimum \cite{golub2013matrix}, that is, the $\ell_\infty$-norm of the vector of principal angles. In contrast, our convergence result for steepest descent for $\delta > 0$ (Theorem \ref{thm:exponential_conv}) is in terms of the $\ell_2$-norm of the same vector of angles, which is in general stronger. When $\delta = 0$, it is known from \cite{o1979estimating,Kuczynski92estimatingthe} that the largest eigenvalue ($k=1$) can still be efficiently estimated. We extend this result for $k>1$ and prove a convergence rate of steepest descent for the function values $f$ (Theorem \ref{thm:convex_conv}), relying only on weak-quasi-convexity (and thus using a different argument from \cite{o1979estimating,Kuczynski92estimatingthe}).
\section{Geometry of the Grassmann manifold and block Rayleigh quotient} \label{sec:background}
We present here a brief introduction into the geometry of the Grassmann manifold. The content is not new and for more details, we refer to \cite{absilOptimizationAlgorithmsMatrix2008, bendokatGrassmannManifoldHandbook2020, edelmanGeometryAlgorithmsOrthogonality1999}.
The $(n,k)$-Grassmann manifold is defined as the set of all $k$-dimensional subspaces of $\mathbb{R}^n$:
\begin{equation*}
\Gr(n,k)=\lbrace \mathcal{X} \subseteq \mathbb{R}^n \colon \mathcal{X} \hspace{1mm} \text{is a subspace and} \hspace{1mm} \dim(\mathcal{X})=k \rbrace.
\end{equation*}
Any element $\mathcal{X}$ of $\Gr(n,k)$ can be represented by a matrix $X \in \mathbb{R}^{n \times k}$ that satisfies $\mathcal{X} = \myspan(X)$. Such a representative is not unique since $Y=XQ$ for some invertible matrix $Q \in \mathbb{R}^{k \times k}$ satisfies $\myspan(Y) = \myspan(X)$. Without loss of generality, \emph{we will therefore always take matrix representatives $X$ of subspaces $\mathcal{X}$ that have orthonormal columns throughout the paper}.
With some care, the non-uniqueness of the representatives is not a problem.\footnote{This can be made very precise by describing $\Gr(n,k)$ as the quotient of the Stiefel manifold with the orthogonal group. The elegant theory of this quotient manifold is worked out in \cite{absilOptimizationAlgorithmsMatrix2008}.} For example, our objective function \eqref{eq:min_f_over_Gr} is invariant to $Q$.
\paragraph{Riemannian structure.} The set $\Gr(n,k)$ admits the structure of a differential manifold with tangent spaces
\begin{equation}\label{eq:def_TXGr}
T_{\mathcal{X}} \Gr(n,k)=\lbrace G \in \mathbb{R}^{n \times k} \colon X^T G=0 \rbrace,
\end{equation}
where $\mathcal{X} = \myspan(X)$. Since $X^T G = 0$ if and only if $X^T (G Q)=0$, for any invertible matrix $Q \in \mathbb{R}^{k \times k}$, this description of the tangent space does not depend on the representative $X$. However, a specific tangent vector $G$ will depend on the chosen $X$. With slight abuse of notation,\footnote{Using the quotient manifold theory, one would use horizontal lifts.} the above definition should therefore be interpreted as: given a fixed $X$, we define tangent vectors $G_1, G_2, \ldots $ of $\Gr(n,k)$ at $\mathcal{X}=\myspan(X)$.
This subtlety is important, for example, when defining an inner product on $T_{\mathcal{X}} \Gr(n,k)$:
\[
\langle G_1, G_2 \rangle_{\mathcal{X}} = \Tr(G^T_1 G_2) \ \text{\ with\ $G_1,G_2 \in T_{\mathcal{X}} \Gr(n,k)$ }.
\]
Here, $G_1$ and $G_2$ are tangent vectors of the same representative $X$. Observe that the inner product is invariant to the choice of orthonormal representative: If $\Bar{G}_1=G_1 Q$ and $\Bar{G}_2 = G_2 Q$ with orthogonal $Q$, then we have
\begin{equation*}
\langle \bar G_1, \bar G_2 \rangle_{\mathcal{X}} = \Tr(\bar G^T_1 \bar G_2) = \Tr(Q^T G_1^T G_2 Q)= \Tr(G_1^T G_2 Q Q^T) = \Tr(G_1^T G_2).
\end{equation*}
It is easy to see that the norm induced by this inner product in any tangent space is the Frobenius norm, which we will denote throughout the paper as $\| \cdot \|:=\| \cdot \|_F$.
\paragraph{Exponential map.} Given the Riemannian structure of $\Gr(n,k)$, we can compute the exponential map at a point $\mathcal{X}$ as \cite[Thm.~3.6]{absilRiemannianGeometryGrassmann2004}
\begin{equation}\label{eq:formula_geo}
\begin{aligned}
\Exp_{\mathcal{X}}: T_{\mathcal{X}} \Gr(n,k) &\rightarrow \Gr(n,k) \\
G &\mapsto \myspan(\, X V \cos(\Sigma) + U \sin(\Sigma) \, ),
\end{aligned}
\end{equation}
where $ U \Sigma V^T$ is the \emph{compact} SVD of $G$ such that $\Sigma$ and $V$ are square matrices.
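For readers who wish to experiment, the exponential map \eqref{eq:formula_geo} is straightforward to implement. The following NumPy sketch is our own illustration, not part of the paper's algorithms; the function name and the random test data are ours. It computes an orthonormal representative of $\Exp_{\mathcal{X}}(G)$ and checks that, for a tangent vector with spectral norm below $\pi/2$, the geodesic distance travelled equals $\|G\|_F$.

```python
import numpy as np

def grassmann_exp(X, G):
    """Exponential map on Gr(n,k): Exp_X(G) = span(X V cos(S) + U sin(S)),
    where G = U S V^T is a compact SVD of a tangent vector (X^T G = 0).
    Returns an orthonormal representative of the resulting subspace."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    # Column j of X @ Vt.T is scaled by cos(s_j), column j of U by sin(s_j).
    return X @ (Vt.T * np.cos(s)) + U * np.sin(s)

rng = np.random.default_rng(0)
n, k = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, k)))
G = rng.standard_normal((n, k))
G -= X @ (X.T @ G)             # project onto the tangent space: X^T G = 0
G /= np.linalg.norm(G, 2)      # spectral norm 1 < pi/2
Y = grassmann_exp(X, G)
# The principal angles between span(X) and span(Y) are the singular values
# of G, so the distance travelled equals the Frobenius norm of G.
theta = np.arccos(np.clip(np.linalg.svd(Y.T @ X, compute_uv=False), -1.0, 1.0))
```
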
The exponential map is invertible in the domain \cite[Prop.~5.1]{bendokatGrassmannManifoldHandbook2020}
\begin{equation}\label{eq:inj_exp}
\left \lbrace G \in T_{\mathcal{X}} \Gr(n,k) \colon \| G \|_2 < \frac{\pi}{2} \right \rbrace,
\end{equation}
where $\| G \|_2$ is the spectral norm of $G$. The inverse of the exponential map restricted to this domain is the logarithmic map, denoted by $\Log$. Given two subspaces $\mathcal{X},\mathcal{Y}\in \Gr(n,k)$, we have
\begin{equation}\label{eq:log formula}
\Log_{\mathcal{X}}(\mathcal{Y}) = U \atan(\widehat\Sigma) \, V^T,
\end{equation}
where $U \widehat \Sigma V^T = (I - X X^T) Y (X^T Y)^{-1}$ is again a compact SVD. This is well-defined if $X^T Y$ is invertible, which is guaranteed if all principal angles between $\mathcal{X}$ and $\mathcal{Y}$ are strictly less than $\pi / 2$ (see below). By taking $G = \Log_{\mathcal{X}}(\mathcal{Y})$, we see that $\Sigma = \atan(\widehat\Sigma)$.
\paragraph{Principal angles.} The Riemannian structure of the Grassmann manifold can be conveniently described by the notion of the principal angles between subspaces. Given two subspaces $\mathcal{X},\mathcal{Y} \in \Gr(n,k)$, the principal angles between them are $0 \leq \theta_1 \leq \cdots \leq \theta_k \leq \pi/2$ obtained from the SVD
\begin{equation}\label{eq:SVD_for_principal_angles}
Y^T X=U_1 \cos \theta \ V_1^T
\end{equation}
where $U_1 \in \mathbb{R}^{k \times k}$ and $V_1 \in \mathbb{R}^{k \times k}$ are orthogonal and $\cos \theta= \diag(\cos \theta_1, \ldots, \cos \theta_k)$ is diagonal.
The Riemannian logarithm can be expressed using principal angles, and the intrinsic distance induced by the Riemannian inner product discussed above is
\begin{equation}\label{eq:distance_with_Log_and_angles}
\dist(\mathcal{X},\mathcal{Y})=\| \Log_{\mathcal{X}} (\mathcal{Y}) \| = \| \Log_{\mathcal{Y}} (\mathcal{X}) \|=\sqrt{\theta_1^2+\cdots+\theta_k^2}=\| \theta \|_2,
\end{equation}
where $\theta=(\theta_1, \ldots ,\theta_k)^T$ is the vector of principal angles.
If $X \in \mathbb{R}^{n \times k}$ is an arbitrary matrix with orthonormal columns, then, generically, these columns will not be exactly orthogonal to the $k$ leading eigenvectors $v_1, \ldots, v_k$ of $A$. Thus, we have with probability one that the principal angles between $\mathcal{X}$ and the space of $k$ leading eigenvectors satisfy $0 \leq \theta_1 \leq \cdots \leq \theta_k < \pi/2$.
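The logarithm \eqref{eq:log formula}, the principal angles, and the distance relation \eqref{eq:distance_with_Log_and_angles} can be cross-checked numerically. The sketch below is our own illustration with hypothetical helper names; it verifies that $\Log_{\mathcal{X}}(\mathcal{Y})$ is tangent at $\mathcal{X}$ and that $\|\Log_{\mathcal{X}}(\mathcal{Y})\| = \|\theta\|_2$.

```python
import numpy as np

def grassmann_log(X, Y):
    """Logarithmic map on Gr(n,k), valid when all principal angles between
    span(X) and span(Y) are < pi/2: Log_X(Y) = U atan(S) V^T, where
    U S V^T is a compact SVD of M = (I - X X^T) Y (X^T Y)^{-1}."""
    M = Y @ np.linalg.inv(X.T @ Y) - X   # equals (I - X X^T) Y (X^T Y)^{-1}
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.arctan(s)[:, None] * Vt)

def principal_angles(X, Y):
    """Principal angles from the SVD of Y^T X, returned in ascending order."""
    c = np.linalg.svd(Y.T @ X, compute_uv=False)
    return np.arccos(np.clip(c, -1.0, 1.0))[::-1]

rng = np.random.default_rng(1)
n, k = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, k)))
Y, _ = np.linalg.qr(rng.standard_normal((n, k)))
G = grassmann_log(X, Y)
theta = principal_angles(X, Y)
```
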
\paragraph{Curvature.}
We can compute the sectional curvatures of $\Gr(n,k)$ exactly, but for our purposes we only need that they are everywhere non-negative \cite{Wong, bendokatGrassmannManifoldHandbook2020}. This means that geodesics on the Grassmann manifold spread more slowly than in Euclidean space. This is a consequence of Toponogov's theorem, which we state here in the form of the following technical lemma that will be important in our convergence analysis.
\begin{lemma}
\label{prop:tangent_space}
Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z} \in \Gr(n,k)$, such that
$$\max\{ \textnormal{dist}(\mathcal{X}, \mathcal{Z}) , \textnormal{dist} (\mathcal{Y}, \mathcal{Z}) \} < \frac{\pi}{2}.
$$ Then
\begin{equation*}
\textnormal{dist}(\mathcal{X} , \mathcal{Y}) \leq \| \Log_{\mathcal{Z}}(\mathcal{X})-\Log_{\mathcal{Z}}(\mathcal{Y}) \|.
\end{equation*}
\end{lemma}
\begin{lemma}[Law of cosines] \label{lem:geo_triangle_nonneg}
Let $\mathcal{X},\mathcal{Y},\mathcal{Z}$ be as in Lemma \ref{prop:tangent_space}. Then
\begin{equation*}
\dist^2(\mathcal{X}, \mathcal{Y}) \leq \dist^2(\mathcal{Z}, \mathcal{X})+ \dist^2(\mathcal{Z}, \mathcal{Y})-2 \langle \Log_{\mathcal{Z}} (\mathcal{X}), \Log_{\mathcal{Z}} (\mathcal{Y}) \rangle.
\end{equation*}
\end{lemma}
\begin{proof}
Apply Lemma \ref{prop:tangent_space} and expand $\| \Log_{\mathcal{Z}}(\mathcal{X})-\Log_{\mathcal{Z}}(\mathcal{Y}) \|^2$.
\end{proof}
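Lemma \ref{prop:tangent_space} can also be probed numerically. The self-contained sketch below is our own illustration (the sampling radius $0.7 < \pi/2$ is an arbitrary choice, and the helper functions restate the exponential and logarithmic maps from above); it samples triples of subspaces and counts violations of the inequality.

```python
import numpy as np

def grassmann_exp(X, G):
    # Exp_X(G) = span(X V cos(S) + U sin(S)) with G = U S V^T a compact SVD
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return X @ (Vt.T * np.cos(s)) + U * np.sin(s)

def grassmann_log(X, Y):
    # Log_X(Y) = U atan(S) V^T with U S V^T an SVD of (I - X X^T) Y (X^T Y)^{-1}
    M = Y @ np.linalg.inv(X.T @ Y) - X
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.arctan(s)[:, None] * Vt)

def gr_dist(X, Y):
    # intrinsic distance = l2 norm of the vector of principal angles
    c = np.linalg.svd(Y.T @ X, compute_uv=False)
    return np.linalg.norm(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(2)
n, k = 8, 2
Z, _ = np.linalg.qr(rng.standard_normal((n, k)))
violations = 0
for _ in range(100):
    # two points within distance 0.7 < pi/2 of Z, reached along geodesics
    GX = rng.standard_normal((n, k)); GX -= Z @ (Z.T @ GX)
    GY = rng.standard_normal((n, k)); GY -= Z @ (Z.T @ GY)
    GX *= 0.7 / np.linalg.norm(GX)
    GY *= 0.7 / np.linalg.norm(GY)
    Xp, Yp = grassmann_exp(Z, GX), grassmann_exp(Z, GY)
    lhs = gr_dist(Xp, Yp)
    rhs = np.linalg.norm(grassmann_log(Z, Xp) - grassmann_log(Z, Yp))
    if lhs > rhs + 1e-8:
        violations += 1
```
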
\paragraph{Block Rayleigh quotient.}
Our objective function for minimization is the block version of the Rayleigh quotient:
\[
f(\mathcal{X}) = -\Tr(X^TAX) \text{ where $\mathcal{X} = \myspan(X) \in \Gr(n,k)$ s.t. $X^T X = I_k$}.
\]
This function has $\mathcal{V}_{\alpha}=\myspan(\begin{bmatrix} v_1 \hspace{2mm} \cdots \hspace{2mm} v_k \end{bmatrix})$ as a global minimizer. This minimizer is unique on $\Gr(n,k)$ if and only if $\delta>0$.
Given any differentiable function $f\colon\Gr(n,k) \rightarrow \mathbb{R}$, we can define its Riemannian gradient as the vector field that satisfies
\begin{equation*}
df(\mathcal{X})(\mathcal{G}) = \langle \grad f(\mathcal{X}),\mathcal{G} \rangle_{\mathcal{X}}.
\end{equation*}
For a given representative $X$ of $\mathcal{X}$, the Riemannian gradient of the block Rayleigh quotient satisfies
\begin{equation*}\label{eq:grad f formula}
\grad f(\mathcal{X}) = -2(I-X X^T) A X.
\end{equation*}
Using the notions of the Riemannian gradient and the Levi-Civita connection, we can also define a Riemannian notion of the Hessian.
For the block Rayleigh quotient $f$, the Riemannian Hessian $\Hess f$ evaluated as bilinear form satisfies
\begin{equation}\label{eq:Hessian_f_inner_product}
\Hess f(\mathcal{X})[{G},{G}] = 2 \langle G, G X^T A X - A G \rangle,
\end{equation}
for $G \in T_{\mathcal{X}} \Gr(n,k)$; see \cite[\S4.4]{edelmanGeometryAlgorithmsOrthogonality1999} or \cite[\S6.4.2]{absilOptimizationAlgorithmsMatrix2008}.
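The gradient formula can be validated against a finite difference of $f$ along the curve $t \mapsto \myspan(X+tG)$, whose derivative at $t = 0$ equals $\langle \grad f(\mathcal{X}), G \rangle$ for tangent $G$. A minimal sketch of this check, with synthetic data and function names of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 10, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric test matrix

def f_span(M):
    """f evaluated on span(M); invariant to the choice of basis."""
    Q, _ = np.linalg.qr(M)
    return -np.trace(Q.T @ A @ Q)

def rayleigh_grad(X):
    """Riemannian gradient: grad f(X) = -2 (I - X X^T) A X."""
    AX = A @ X
    return -2.0 * (AX - X @ (X.T @ AX))

X, _ = np.linalg.qr(rng.standard_normal((n, k)))
G = rng.standard_normal((n, k)); G -= X @ (X.T @ G)  # tangent direction

# Central finite difference of f along t -> span(X + t G)
t = 1e-5
fd = (f_span(X + t * G) - f_span(X - t * G)) / (2 * t)
directional = np.trace(rayleigh_grad(X).T @ G)
```
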
\section{Convexity-like properties of the block Rayleigh quotient} \label{sec:cost_conv}
We now prove the new analytic properties of the block Rayleigh quotient $f(\mathcal{X})=-\Tr(X^T A X)$. These are important in their own right but will also be used later for the convergence of the Riemannian steepest descent method.
\subsection{Smoothness}
A $C^2$ function defined on the Grassmann manifold is called $\gamma$-smooth if the maximum eigenvalue of its Riemannian Hessian is everywhere upper bounded by $\gamma$. By the second-order Taylor expansion of $f$, it is easy to see that such a function then satisfies (see, e.g., \cite[Thm.~7.1.2]{absilOptimizationAlgorithmsMatrix2008})
\begin{equation}\label{eq:quadratic_upper_bound}
f(\mathcal{X}) \leq f(\mathcal{Y})+ \langle \grad f(\mathcal{Y}), \Log_{\mathcal{Y}} (\mathcal{X}) \rangle+\frac{\gamma}{2} \dist^2(\mathcal{X},\mathcal{Y}),
\end{equation}
for any $\mathcal{X}, \mathcal{Y} \in \Gr(n,k)$ with $\textnormal{dist}(\mathcal{X}, \mathcal{Y}) < \frac{\pi}{2}$.
As in the introduction, denote the global minimum of $f$ by $f^*$, which is attained at $\mathcal{V}_\alpha \in \Gr(n,k)$. The previous inequality also leads to
\begin{equation}\label{eq:optim_gap_with_gradient}
f(\mathcal{X})-f^* \geq \frac{1}{2 \gamma} \| \grad f(\mathcal{X}) \|^2,
\end{equation}
for any $\mathcal{X} \in \Gr(n,k)$ with $\textnormal{dist}(\mathcal{X}, \mathcal{V}_{\alpha}) < \frac{\pi}{2}$. We present a proof below although the result is well known.
\begin{proof}[Proof of~\eqref{eq:optim_gap_with_gradient}]
Since $f^*$ is a global minimum of $f$, we have from~\eqref{eq:quadratic_upper_bound} that
\begin{equation*}
f^* \leq f(\mathcal{X}) \leq f(\mathcal{Y})+\langle \textnormal{grad}f(\mathcal{Y}),\Log_{\mathcal{Y}}(\mathcal{X}) \rangle+\frac{\gamma}{2} \| \Log_{\mathcal{Y}}(\mathcal{X}) \|^2,
\end{equation*}
for any $\mathcal{X}, \mathcal{Y} \in \Gr(n,k)$ with $\textnormal{dist}(\mathcal{X}, \mathcal{Y}) < \frac{\pi}{2}$.
We set $\mathcal{X}:=\Exp_{\mathcal{Y}} \left(-\frac{1}{\gamma} \textnormal{grad}f(\mathcal{Y})\right)$. By the mean value theorem applied to $\grad f$, we have $\tfrac{1}{\gamma} \| \grad f(\mathcal{Y}) \| \leq \textnormal{dist}(\mathcal{Y},\mathcal{V}_{\alpha})$. Indeed, consider the geodesic connecting $\mathcal{Y}$ and $\mathcal{V}_{\alpha}$ and apply Lemma 4 from \cite{alimisis2020continuous} to the vector field $A=\textnormal{grad}f$. Taking norms of both sides and using that $\textnormal{grad}f(\mathcal{V}_{\alpha}) = 0$, the norm of the integral is less than or equal to the integral of the norms. Since parallel transport preserves the norm and the covariant derivative of the gradient is the Riemannian Hessian, whose eigenvalues are bounded by $\gamma$, we obtain the claimed inequality. If we now assume that $\textnormal{dist}(\mathcal{Y},\mathcal{V}_{\alpha})<\frac{\pi}{2}$, then $-\tfrac{1}{\gamma} \textnormal{grad}f(\mathcal{Y})$ satisfies the condition of the domain in \eqref{eq:inj_exp} and $\Log$ is well defined: $\Log_{\mathcal{Y}}(\mathcal{X}) = -\frac{1}{\gamma} \textnormal{grad}f(\mathcal{Y})$.
In that case, we also have
\begin{equation}\label{eq:distance_pi2_SD_step}
\textnormal{dist}(\mathcal{X},\mathcal{Y}) = \| \Log_{\mathcal{Y}}(\mathcal{X}) \| < \frac{\pi}{2}
\end{equation}
and the right hand side of the initial inequality becomes
\begin{align*}
f^* \leq f(\mathcal{Y})-\frac{1}{\gamma} \| \textnormal{grad}f(\mathcal{Y})\|^2+ \frac{1}{2\gamma} \| \textnormal{grad}f(\mathcal{Y})\|^2 = f(\mathcal{Y})-\frac{1}{2 \gamma} \| \textnormal{grad}f(\mathcal{Y})\|^2,
\end{align*}
for any $\mathcal{Y} \in \Gr(n,k)$ with $\textnormal{dist}(\mathcal{Y},\mathcal{V}_{\alpha}) < \frac{\pi}{2}$. Rearranging the last inequality and substituting $\mathcal{Y}=\mathcal{X}$, we get the desired result.
\end{proof}
We start our analysis by showing that the largest eigenvalue of the Riemannian Hessian of the block Rayleigh quotient $f$ is indeed upper bounded on the Grassmann manifold. Thus the properties from above hold for the stated $\gamma$.
\begin{tcolorbox}
\begin{Proposition}[Smoothness]\label{prop:smoothness}
The eigenvalues of the Riemannian Hessian of $f$ on $\Gr(n,k)$ are upper bounded by $\gamma := 2 (\lambda_1 - \lambda_n)$.
\end{Proposition}
\end{tcolorbox}
\begin{proof}
Let $G$ be a tangent vector of $\Gr(n,k)$ at $X$. Then the Riemannian Hessian satisfies (see~\eqref{eq:Hessian_f_inner_product})
\[
\tfrac{1}{2} \Hess f(\mathcal{X})[G,G] = \Tr(G^T G X^T A X) - \Tr(A G G^T).
\]
Since $A$ and $X^T A X$ are symmetric and $G G^T$ and $G^T G$ are symmetric positive semi-definite, a standard trace inequality (see, e.g., \cite[Thm.~4.3.53]{hornMatrixAnalysis2012a}) gives
\[
\Hess f(\mathcal{X})[G,G] \leq 2 (\lambda_{\max}(X^T A X)- \lambda_{\min} (A)) \| G \|^2.
\]
Since $X$ has orthonormal columns, $\lambda_{\max}(X^T A X) \leq \lambda_{\max}( A )$; see, e.g., \cite[Cor.~4.3.37]{hornMatrixAnalysis2012a}. The proof is now complete with the definition of $\lambda_1$ and $\lambda_n$.
\end{proof}
The result in Prop.~\ref{prop:smoothness} is tight: Choosing $X = V_\alpha$ and $G = v_n e_1^T$, it is readily verified that the upper bound is attained.
We will also need the following upper bound of the spectral norm of the Riemannian gradient, independently of $\mathcal{X}$.
\begin{tcolorbox}
\begin{lemma}\label{lem:uniform upper bound grad}
For all $\mathcal{X} \in \Gr(n,k)$, the Riemannian gradient of $f$ satisfies
\[
\| \grad f(\mathcal{X}) \|_2 \leq \frac{\gamma}{2}.
\]
\end{lemma}
\end{tcolorbox}
\begin{proof}
Since $X$ has orthonormal columns, we can complete it to the orthogonal matrix $Q = \begin{bmatrix} X & X_\perp \end{bmatrix}$. Hence, $\| \grad f(\mathcal{X}) \|_2 = \| 2 (I - XX^T)AX \|_2 = 2 \| X_\perp^T A X \|_2$. The result now follows from~\cite[Thm.~2]{liInequalitiesSingularValues1999a}, since $A$ is real symmetric, together with the definition $\gamma = 2(\lambda_1 - \lambda_n)$.
\end{proof}
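Both bounds, as well as the tightness of the Hessian bound at $X = V_\alpha$, $G = v_n e_1^T$, are easy to check numerically. The sketch below is our own illustration; the random symmetric test matrix and the sampling scheme are assumptions of the experiment, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 10, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
w, V = np.linalg.eigh(A)                 # eigenvalues in ascending order
gamma = 2 * (w[-1] - w[0])               # 2 (lambda_1 - lambda_n)

def hess_quadform(X, G):
    # Hess f(X)[G,G] = 2 <G, G X^T A X - A G>
    return 2 * np.trace(G.T @ (G @ (X.T @ A @ X) - A @ G))

max_ratio = 0.0      # largest observed Hess f(X)[G,G] / ||G||_F^2
max_gradnorm = 0.0   # largest observed spectral norm of grad f(X)
for _ in range(200):
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))
    G = rng.standard_normal((n, k)); G -= X @ (X.T @ G)
    max_ratio = max(max_ratio, hess_quadform(X, G) / np.sum(G * G))
    grad = -2 * (A @ X - X @ (X.T @ A @ X))
    max_gradnorm = max(max_gradnorm, np.linalg.norm(grad, 2))

# Tightness: X = V_alpha (leading eigenvectors), G = v_n e_1^T (unit norm)
Valpha = V[:, ::-1][:, :k]
Gtight = np.outer(V[:, 0], np.eye(k)[0])
attained = hess_quadform(Valpha, Gtight)
```
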
\subsection{Weak-quasi-convexity and quadratic growth}
We now turn our attention to the convexity properties of the block Rayleigh quotient function. We start by proving a property known in the literature as \emph{quadratic growth}.
\begin{tcolorbox}
\begin{Proposition}[Quadratic growth]\label{prop:quadratic growth}
\label{prop:quadratic_growth}
Let $0 \leq \theta_1 \leq \cdots \leq \theta_k < \pi/2$ be the principal angles between the subspaces $\mathcal{X}$ and $\mathcal{V}_\alpha$. The function $f$ satisfies
$$f(\mathcal{X})-f^* \geq c_Q \, \delta \, \dist^2(\mathcal{X},\mathcal{V_{\alpha}})$$
where $c_Q = 4/\pi^2 > 0.4$.
\end{Proposition}
\end{tcolorbox}
\begin{proof}
The spectral decomposition of $A = V_{\alpha} \Lambda_{\alpha}V_{\alpha}^T + V_{\beta} \Lambda_{\beta} V_{\beta}^T$ implies
\begin{equation}\label{eq:XAX worked out}
X^T AX = X^T V_{\alpha} \Lambda_{\alpha}V_{\alpha}^T X+ X^T V_{\beta} \Lambda_{\beta} V_{\beta}^T X.
\end{equation}
Since $f(\mathcal{X}) = -\Tr(X^TAX)$, we have
\begin{align*}
f(\mathcal{X})-f^* &= \Tr(\Lambda_{\alpha})-\Tr(X^T V_{\alpha} \Lambda_{\alpha}V_{\alpha}^T X)-\Tr(X^T V_{\beta} \Lambda_{\beta} V_{\beta}^T X) \\ &= \Tr(\Lambda_{\alpha})-\Tr( \Lambda_{\alpha}V_{\alpha}^T X X^T V_{\alpha})-\Tr( \Lambda_{\beta} V_{\beta}^T X X^T V_{\beta}) \\ & = \Tr(\Lambda_{\alpha} (I_k- V_{\alpha}^T X X^T V_{\alpha})) - \Tr(\Lambda_{\beta} V_{\beta}^T X X^T V_{\beta}).
\end{align*}
From the definition~\eqref{eq:SVD_for_principal_angles} of the principal angles between $X$ and $V_{\alpha}$, we recall that
\begin{equation}\label{eq:SVD_Va_X}
V_{\alpha}^T X=U_1 \cos \theta \, V_1^T,
\end{equation}
where $\cos \theta = \diag(\cos \theta_1, \ldots, \cos \theta_k)$ is a diagonal matrix and $U_1, V_1$ are orthogonal matrices. Plugging this equality in, we get that the $j$th eigenvalue of the matrix $I_k- V_{\alpha}^T X X^T V_{\alpha}$ is equal to $1-\cos^2\theta_j = \sin^2 \theta_j \geq 0$. Thus, by a standard trace inequality (see, e.g.,~\cite[Thm.~4.3.53]{hornMatrixAnalysis2012a}), the first summand above satisfies
\begin{equation*}
\Tr(\Lambda_{\alpha} (I_k- V_{\alpha}^T X X^T V_{\alpha})) \geq \lambda_{k} \sum_{j=1}^k \sin^2 \theta_j.
\end{equation*}
The matrix $V_{\beta}^T X X^T V_{\beta}$ has the same non-zero eigenvalues with the same multiplicity as the matrix
\begin{equation*}\label{eq:SVD of Gramm of XVbeta}
X^T V_{\beta} V_{\beta}^T X = I_k-V_1 \cos^2 \theta \, V_1^T = V_1 \sin^2 \theta \, V_1^T
\end{equation*}
where we used $V_{\beta} V_{\beta}^T= I_n - V_{\alpha} V_{\alpha}^T$ and the SVD of $V_{\alpha}^T X$.
Thus the $j$th eigenvalue of $V_{\beta}^T X X^T V_{\beta}$ is $\sin^2 \theta_j \geq 0$. By trace inequality again, the second summand therefore satisfies
\begin{equation*}
\Tr(\Lambda_{\beta} V_{\beta}^T X X^T V_{\beta}) \leq \lambda_{k+1} \sum_{j=1}^k \sin^2 \theta_j.
\end{equation*}
Putting both bounds together, we get
\begin{equation*}
f(\mathcal{X})-f^* \geq (\lambda_k - \lambda_{k+1}) \sum_{j=1}^k \sin^2 \theta_j
\geq \delta \sum_{j=1}^k \frac{4}{\pi^2}\theta_j^2
\end{equation*}
and the proof is complete by the definition~\eqref{eq:distance_with_Log_and_angles} of $\dist$.
\end{proof}
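A quick numerical sanity check of the quadratic growth inequality is given below. This sketch is our own illustration; the random symmetric test matrix and sample count are assumptions, and the bound holds with considerable slack for generic subspaces.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 10, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
w, V = np.linalg.eigh(A)
lam = w[::-1]                       # descending: lam[0] = lambda_1
Valpha = V[:, ::-1][:, :k]          # leading k-dimensional eigenspace
delta = lam[k - 1] - lam[k]         # eigengap lambda_k - lambda_{k+1}
fstar = -np.sum(lam[:k])
c_Q = 4 / np.pi**2

def f(X):
    return -np.trace(X.T @ A @ X)

def dist_to_opt(X):
    # distance = l2 norm of the principal angles between span(X) and V_alpha
    c = np.linalg.svd(Valpha.T @ X, compute_uv=False)
    return np.linalg.norm(np.arccos(np.clip(c, -1.0, 1.0)))

# minimum slack of f(X) - f* >= c_Q * delta * dist^2 over random samples
slack = np.inf
for _ in range(200):
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))
    slack = min(slack, f(X) - fstar - c_Q * delta * dist_to_opt(X)**2)
```
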
We say that $f$ is geodesically convex if, for all $\mathcal{X}$ and $\mathcal{Y}$ in a suitable region, it holds that
$$ f(\mathcal{X})-f(\mathcal{Y}) \leq \langle \textnormal{grad}f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{Y}) \rangle.$$
This generalizes the classical convexity of differentiable functions on $\mathbb{R}^n$ to manifolds by taking the logarithmic map instead of the difference $\mathcal{X}-\mathcal{Y}$.
In Appendix~\ref{sec:convexity}, we prove that our objective function $f$ is only geodesically convex in a small neighbourhood of size $\mathcal{O}(\sqrt{\delta})$ around the minimizer $\mathcal{V}_\alpha$. Fortunately, our key result of this section shows that $f$ satisfies a much weaker notion of geodesic convexity, known in the literature as \emph{weak-quasi-convexity}, that does not depend on the eigengap $\delta$.
We first need the following lemma, which is the general CS decomposition specialized to our setting.
\begin{lemma}\label{lemma:CS_square_blocks}
Let $X,Y \in \mathbb{R}^{n \times k}$ be such that $X^T X = Y^T Y = I_k$ with $k < n$.
Choose $X_\perp, Y_\perp \in \mathbb{R}^{n \times (n-k)}$ such that $X_\perp^T X_\perp = Y_\perp^T Y_\perp = I_{n-k}$ and $\myspan(X_\perp) = \myspan(X)^\perp$, $\myspan(Y_\perp) = \myspan(Y)^\perp$.
Then there exist integers $0 \leq r,s \leq k$ such that
\begin{align*}
Y^T X &= U_1 \begin{bmatrix}I_r \\ & C_s \\ & & O_{p \times p} \end{bmatrix} V_1^T, &
Y^T X_\perp &= U_1 \begin{bmatrix}O_{r \times m} \\ & S_s \\ & & I_{p} \end{bmatrix} V_2^T \\
Y_\perp^T X &= U_2 \begin{bmatrix}O_{m \times r} \\ & S_s \\ & & I_{p} \end{bmatrix} V_1^T, & Y_\perp^T X_\perp &= U_2 \begin{bmatrix}-I_{m} \\ & -C_s \\ & & O_{p \times p} \end{bmatrix} V_2^T
\end{align*}
with $p=k-r-s$ and $m = n - 2k +r$, and we have
\begin{itemize}
\item orthogonal matrices $U_1, V_1$ of size $k$ and $U_2, V_2$ of size $n-k$;
\item identity matrices $I_q$ of size $q$;
\item zero matrices $O_{q \times t}$ of size $q \times t$;
\item diagonal matrices $C_s = \diag(\alpha_1, \ldots, \alpha_s)$ and $S_s = \diag(\beta_1, \ldots, \beta_s)$ such that $1 > \alpha_1 \ge \cdots \ge \alpha_s > 0$, $0 < \beta_1 \le \cdots \le \beta_s < 1$ and $C_s^2 + S_s^2 = I_s$.
\end{itemize}
\end{lemma}
\begin{proof}
Since $\begin{bmatrix} X & X_\perp \end{bmatrix}$ and $\begin{bmatrix} Y & Y_\perp \end{bmatrix}$ are orthogonal, the result follows directly from the CS decomposition of the orthogonal matrix $P = \begin{bmatrix} Y & Y_\perp \end{bmatrix}^T \begin{bmatrix} X & X_\perp \end{bmatrix}$; see the Theorem of \S 4 in \cite{paigeGeneralizedSingularValue1981}.
\end{proof}
Observe that the matrix $\diag(I_r,C_s,O_{p \times p})$ in this lemma corresponds to the matrix $\cos(\theta)$ in~\eqref{eq:SVD_for_principal_angles} with $\theta$ the vector of principal angles $0 \leq \theta_1 \leq \cdots \leq \theta_k \leq \pi/2$ between $\myspan(X)$ and $\myspan(Y)$. However, the lemma explicitly splits off the angles that are zero and $\pi/2$ so that it can formulate the related decompositions for $Y^T X_\perp, Y_\perp^T X,$ and $Y_\perp^T X_\perp$ with $C_s$ and $S_s$.
We are now ready to state our weak-quasi-convexity result. In the statement of the proposition below (and throughout this paper), we use the convention that $\frac{0}{\tan 0} = 1$.
\begin{tcolorbox}
\begin{Proposition}[Weak-quasi-convexity]\label{prop:weak-quasi-convexity}
Let $0 \leq \theta_1 \leq \cdots \leq \theta_k < \pi/2$ be the principal angles between the subspaces $\mathcal{X}$ and $\mathcal{V}_\alpha$. Then, $f$ satisfies
$$2 a(\mathcal{X}) \, (f(\mathcal{X})-f^*) \leq \langle \textnormal{grad}f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V_{\alpha})} \rangle$$
with $a(\mathcal{X}) := \theta_k / \tan \theta_k$.
\end{Proposition}
\end{tcolorbox}
\begin{proof}
Take matrices $X$ and $V_\alpha$ with orthonormal columns such that $\mathcal{X} = \myspan(X)$ and $\mathcal{V}_\alpha = \myspan(V_\alpha)$. Since $\theta_k < \pi / 2$, we know that $p=0$ in Lemma~\ref{lemma:CS_square_blocks} and thus $s = k - r$ with $r$ the number of principal angles that are equal to zero. Choosing a matrix $X_\perp$ with orthonormal columns such that $\myspan(X_\perp) = \myspan(X)^\perp$, we therefore get from Lemma~\ref{lemma:CS_square_blocks} that there exist orthogonal matrices $U_1,V_1$ of size $k$ and $V_2$ of size $n-k$ such that
\begin{align}\label{eq:SVD_XVa_XperpVa}
V_\alpha^T X &= U_1 \begin{bmatrix}I_r \\ & C_{k-r} \end{bmatrix} V_1^T, &
V_\alpha^T X_\perp &= U_1 \begin{bmatrix}O_{r \times m} \\ & S_{k-r} \end{bmatrix} V_2^T.
\end{align}
Comparing with~\eqref{eq:SVD_for_principal_angles}, we deduce that $C_{k-r} = \diag(\cos \theta_{r+1} , \ldots, \cos \theta_k)$ and $S_{k-r} = \diag(\sin \theta_{r+1}, \ldots, \sin \theta_k)$ since $C_{k-r}^2 + S_{k-r}^2 = I$.
We recall from~\eqref{eq:log formula} that
\begin{equation}\label{eq:log formula_XVa}
\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) = U \atan(\Sigma) V^T,
\end{equation}
where $U \Sigma V^T=(I_n - X X^T) V_{\alpha} (X^T V_{\alpha})^{-1}=:M$ is a compact SVD (without the requirement that the diagonal of $\Sigma$ is non-increasing). Using $X_\perp$ from above, we can also write $M = X_\perp X_\perp^T V_{\alpha} (X^T V_{\alpha})^{-1}$. Substituting~\eqref{eq:SVD_XVa_XperpVa} and using that $U_1$ and $V_1$ are orthogonal gives
\[
M = X_\perp V_2 \begin{bmatrix}O_{m \times r} \\ & S_{k-r} C_{k-r}^{-1} \end{bmatrix} V_1^T
= X_\perp \tilde V_2 \begin{bmatrix}O_{r \times r} \\ & S_{k-r} C_{k-r}^{-1} \end{bmatrix} V_1^T,
\]
where $\tilde V_2 \in \mathbb{R}^{(n-k) \times k}$ contains the last $k$ columns of $V_2$ in order. Since $\theta_1 = \cdots = \theta_r = 0$, we can therefore formulate the compact SVD of $M$ using the vector $\theta$ of all principal angles as follows:
\[
M = U \Sigma V^T \quad \text{with $U = X_\perp \tilde V_2, \ \Sigma = \tan(\theta), \ V = V_1$}.
\]
Hence from~\eqref{eq:log formula_XVa} we get directly that
\begin{equation}\label{eq:Log_with_V2}
\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) = X_\perp \tilde V_2\, \theta \, V_1^T,
\end{equation}
where $\theta$ denotes the diagonal matrix $\diag(\theta_1, \ldots, \theta_k)$.
We now claim that~\eqref{eq:Log_with_V2} also satisfies
\begin{equation}\label{eq:Log_with_Va}
\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) = X_\perp X_\perp^T V_\alpha U_1 \frac{\theta}{\sin \theta} V_1^T,
\end{equation}
where $\frac{\theta}{\sin \theta}$ is a diagonal matrix for which $\frac{0}{\sin 0} = 1$. Indeed, recalling that $\theta_1 = \cdots = \theta_r = 0$ and using the identities
\[
X_\perp^T V_\alpha = \tilde V_2 \begin{bmatrix}O_{r \times r} \\ & S_{k-r} \end{bmatrix} U_1^T, \quad\frac{\theta}{\sin \theta} = \begin{bmatrix}I_{r} \\ & S_{k-r}^{-1} \end{bmatrix} \begin{bmatrix}I_{r} \\ & T_{k-r} \end{bmatrix}
\]
where $T_{k-r}=\diag(\theta_{r+1}, \ldots, \theta_k)$, we obtain
\begin{align*}
\textrm{rhs of~\eqref{eq:Log_with_Va}} &= X_\perp \tilde V_2 \begin{bmatrix}O_{r \times r} \\ & S_{k-r} \end{bmatrix} \begin{bmatrix}I_{r} \\ & S_{k-r}^{-1} \end{bmatrix} \begin{bmatrix}I_{r} \\ & T_{k-r} \end{bmatrix} \, V_1^T \\ & = X_\perp \tilde V_2 \begin{bmatrix} O_{r \times r} \\ & T_{k-r} \end{bmatrix} \, V_1^T = X_\perp \tilde V_2 \, \theta \, V_1^T = \textrm{rhs of~\eqref{eq:Log_with_V2}}.
\end{align*}
Next, we work out
\[
s := \langle \grad f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V_{\alpha})} \rangle.
\]
Since $\grad f(\mathcal{X})$ and $\Log_{\mathcal{X}}(\mathcal{V}_{\alpha})$, respectively, give tangent vectors for the same representative $X$ of $\mathcal{X}$, the inner product above is the trace of the corresponding matrix representations. Using~\eqref{eq:Log_with_Va} with $I-XX^T = X_\perp X_\perp^T$, we therefore get
\begin{align*}
s &= 2 \Big\langle (I-XX^T)AX, (I-XX^T) V_\alpha U_1 \frac{\theta}{\sin(\theta)} V_1^T \Big\rangle \\
&= 2 \Tr \Big( \frac{\theta}{\sin(\theta)} U_1^T V_\alpha^T (I-XX^T) AX V_1 \Big).
\end{align*}
Since $AV_\alpha = V_\alpha \Lambda_\alpha$, we can simplify
\begin{equation}\label{eq:grad_with_Va}
V_\alpha^T (I-XX^T) AX = \Lambda_\alpha V_\alpha^T X - V_\alpha^T XX^T AX.
\end{equation}
Substituting in the expression above and using that $V_{\alpha}^T X=U_1 \cos \theta \, V_1^T$, we get
\begin{align*}
\tfrac{1}{2} s &= \Tr \Big( \frac{\theta}{\sin(\theta)} U_1^T \Lambda_\alpha U_1 \cos(\theta) \Big) -
\Tr \Big( \frac{\theta}{\sin(\theta)} \cos(\theta) V_1^T X^T AX V_1 \Big) \\
&= \Tr \Big( \frac{\theta}{\tan(\theta)} \Big( U_1^T \Lambda_\alpha U_1 - V_1^T X^T AX V_1 \Big) \Big),
\end{align*}
with the convention $\frac{0}{\tan 0} = 1$.
Denote the symmetric matrix
\begin{equation}\label{eq:def_S}
S := U_1^T \Lambda_\alpha U_1 - V_1^T X^T AX V_1.
\end{equation}
We show below that all diagonal entries $S_{11}, \ldots, S_{kk}$ of $S$ are nonnegative. Hence, by diagonality of the matrix $\tfrac{\theta}{\tan(\theta)}$, we obtain
\begin{align*}
\tfrac{1}{2}s &= \sum_j \frac{\theta_j}{\tan\theta_j} \, S_{jj} \geq \min_j \frac{\theta_j}{\tan\theta_j} \, \Tr(S) = \frac{\theta_k}{\tan\theta_k} \, \Big[ \Tr( \Lambda_{\alpha}) - \Tr( X^T AX ) \Big]
\end{align*}
since $U_1$ and $V_1$ are orthogonal matrices. We recover the desired result after substituting $f(\mathcal{X}) = -\Tr(X^T A X)$ and $f^* = -\Tr(V_\alpha^T A V_\alpha) = -\Tr(\Lambda_\alpha)$.
It remains to show that $S_{jj} \geq 0$ for $j=1,\ldots, k$. Since $\myspan(V_\beta) = \myspan(V_\alpha)^\perp$, Lemma~\ref{lemma:CS_square_blocks} gives us in addition to~\eqref{eq:SVD_XVa_XperpVa} also
\begin{equation}\label{eq:SVD_Vb_X}
V_\beta^T X = U_2 \begin{bmatrix} O_{m \times r} \\ & S_{k-r} \end{bmatrix} V_1^T = \tilde U_2 \sin \theta \, V_1^T,
\end{equation}
where $\tilde U_2 \in \mathbb{R}^{(n-k) \times k}$ contains the last $k$ columns of the orthogonal matrix $U_2$ in order.
A short calculation using~\eqref{eq:XAX worked out} then shows that~\eqref{eq:def_S} satisfies
\[
S= U_1^T \Lambda_\alpha U_1 - \cos\theta \, U_1^T \Lambda_\alpha U_1 \cos\theta - \sin\theta \, \tilde U_2^T \Lambda_\beta\tilde U_2 \sin\theta
\]
with diagonal elements
\[
S_{jj}
= \sin^2\theta_j \, (U_1^T \Lambda_\alpha U_1 -\tilde U_2^T \Lambda_\beta\tilde U_2)_{jj}.
\]
Since $U_1$ and $\tilde U_2$ have orthonormal columns, we obtain
\[
\lambda_{\min} (U_1^T \Lambda_\alpha U_1) \geq \lambda_{\min} (\Lambda_\alpha) = \lambda_k, \quad \lambda_{\max}(\tilde U_2^T \Lambda_\beta \tilde U_2) \leq \lambda_{\max}(\Lambda_\beta) = \lambda_{k+1},
\]
from which we get with Weyl's inequality that
\[
\lambda_{\min} (U_1^T \Lambda_\alpha U_1 - \tilde U_2^T \Lambda_\beta \tilde U_2)
\geq \lambda_{\min} (U_1^T \Lambda_\alpha U_1) - \lambda_{\max}(\tilde U_2^T \Lambda_\beta \tilde U_2) \geq \lambda_k - \lambda_{k+1} \geq 0.
\]
Hence, the matrix
\begin{equation}\label{eq:La_min_Lb_is_PSD}
U_1^T \Lambda_\alpha U_1 - \tilde U_2^T \Lambda_\beta \tilde U_2
\end{equation}
is symmetric and positive semi-definite. Its diagonal entries, and thus also $S_{jj}$, are therefore nonnegative.
\end{proof}
We now arrive at a useful property of $f$ that will later allow us to analyze the convergence of Riemannian steepest descent. It is a \emph{weaker version of strong geodesic convexity} and can be proved easily using quadratic growth and weak-quasi-convexity.
\begin{tcolorbox}
\begin{theorem}[Weak-strong convexity]\label{thm:weak_strong_convex}
Let $0 \leq \theta_1 \leq \cdots \leq \theta_k < \pi/2$ be the principal angles between the subspaces $\mathcal{X}$ and $\mathcal{V}_\alpha$. Then, $f$ satisfies
\begin{equation*}
f(\mathcal{X})-f^* \leq \frac{1}{a(\mathcal{X})} \langle \grad f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \rangle - c_Q \delta \, \dist^2 (\mathcal{X},\mathcal{V}_{\alpha})
\end{equation*}
with $a(\mathcal{X})= \theta_k / \tan \theta_k >0$, $c_Q = 4/\pi^2 > 0.4$, and $\delta = \lambda_k - \lambda_{k+1} \geq 0$.
\end{theorem}
\end{tcolorbox}
\begin{proof}
Combining Propositions~\ref{prop:quadratic_growth} and~\ref{prop:weak-quasi-convexity} leads to
\begin{equation*}
c_Q \delta \, \dist^2(\mathcal{X},\mathcal{V}_{\alpha}) \leq f(\mathcal{X})-f^* \leq \frac{1}{2 a(\mathcal{X})} \langle \grad f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \rangle.
\end{equation*}
In particular, chaining these two bounds gives
\begin{equation*}
c_Q \delta \, \dist^2(\mathcal{X},\mathcal{V}_{\alpha}) \leq \frac{1}{2 a(\mathcal{X})} \langle \grad f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \rangle.
\end{equation*}
Adding this inequality to the weak-quasi-convexity bound $f(\mathcal{X})-f^* \leq \frac{1}{2 a(\mathcal{X})} \langle \grad f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \rangle$ and rearranging gives the desired result.
\end{proof}
\begin{remark}
Theorem \ref{thm:weak_strong_convex} is also valid when the eigengap $\delta = 0$. In that case, $\mathcal{V}_{\alpha}$ is \textit{any subspace spanned by $k$ leading eigenvectors of $A$} and the theorem reduces to Proposition \ref{prop:weak-quasi-convexity}.
\end{remark}
While not needed for our convergence proof, the next result is of independent interest and shows that $f$ is gradient dominated in the Riemannian sense when the eigengap $\delta$ is strictly positive. This property is the \emph{Riemannian version of the Polyak--Łojasiewicz inequality} and generalizes a result by \cite{zhang2016riemannian} for the Rayleigh quotient on the sphere.
\begin{tcolorbox}
\begin{Proposition}[Gradient dominance] \label{prop:PL}
The function $f$ satisfies
\begin{equation*}
\| \textnormal{grad}f(\mathcal{X}) \|^2 \geq 4 \, c_Q \, \delta \, a^2(\mathcal{X}) (f(\mathcal{X})-f^*)
\end{equation*}
for all subspaces $\mathcal{X}$ that have a largest principal angle $<\pi/2$ with $\mathcal{V}_\alpha$.
\end{Proposition}
\end{tcolorbox}
\begin{proof}
We assume that $\delta > 0$ since otherwise the statement is trivially true. By Theorem~\ref{thm:weak_strong_convex}, we have
\begin{equation*}
f(\mathcal{X})-f^* \leq \frac{1}{a(\mathcal{X})} \langle \grad f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \rangle - c_Q \delta \dist^2 (\mathcal{X}, \mathcal{V}_{\alpha}).
\end{equation*}
Since $\langle G_1, G_2 \rangle \leq \frac{\rho}{2}\|G_1\|^2 + \frac{1}{2\rho}\|G_2 \|^2$ for all matrices $G_1, G_2$ and any $\rho > 0$ (Young's inequality), we can write
\begin{equation*}
\langle \grad f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \rangle \leq \frac{\rho}{2} \| \textnormal{grad}f(\mathcal{X}) \|^2 + \frac{1}{2 \rho} \| \Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \|^2.
\end{equation*}
Using that $\dist (\mathcal{X}, \mathcal{V}_{\alpha})= \| \Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \|$ and choosing $\rho=1/(2 c_Q \delta a(\mathcal{X}))$, we get the desired result.
\end{proof}
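For intuition, the gradient-dominance inequality can be verified in closed form in the simplest case $k=1$, where $f$ reduces to the negative Rayleigh quotient on the unit sphere. The following pure-Python sketch (our illustration, not part of the analysis; the matrix $A = \operatorname{diag}(3,2,1)$, for which $\delta = 1$, is an ad-hoc choice) checks the inequality along the curve $x(t) = (\cos t, \sin t, 0)$, where it reduces to Jordan's inequality $\sin t \geq \tfrac{2}{\pi}\, t$.

```python
import math

# Sketch for k = 1: f(x) = -x^T A x on the unit sphere with A = diag(3, 2, 1).
# Along x(t) = (cos t, sin t, 0) we check the gradient-dominance inequality
#   ||grad f(x)||^2 >= 4 c_Q delta a(x)^2 (f(x) - f*)
# with c_Q = 4/pi^2, delta = lambda_1 - lambda_2 = 1, and a(x) = t / tan(t).
c_Q = 4 / math.pi ** 2
delta = 1.0

for i in range(1, 150):
    t = i * (math.pi / 2) / 150            # principal angle in (0, pi/2)
    x = (math.cos(t), math.sin(t), 0.0)
    Ax = (3 * x[0], 2 * x[1], 1 * x[2])
    rq = sum(u * v for u, v in zip(x, Ax))            # Rayleigh quotient x^T A x
    gap = 3.0 - rq                                    # f(x) - f* = lambda_1 - rq
    grad2 = 4 * (sum(v * v for v in Ax) - rq ** 2)    # ||grad f||^2 for grad f = -2(I - xx^T)Ax
    a = t / math.tan(t)
    assert grad2 >= 4 * c_Q * delta * a ** 2 * gap - 1e-12
```

The constant $c_Q = 4/\pi^2$ is tight in this example: the two sides approach each other as $t \to \pi/2$.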
\section{Convergence of Riemannian steepest descent}
We now have everything in place to prove the convergence of the Riemannian steepest descent (RSD) method on the Grassmann manifold for minimizing $f$. Starting from a subspace $\mathcal{X}_0 \in \Gr(n,k)$, we iterate
\begin{equation}\label{eq:GD}
\mathcal{X}_{t+1}=\Exp_{\mathcal{X}_t} (-\eta_t \, \grad f(\mathcal{X}_t) ).
\end{equation}
Here, $\eta_t > 0$ is a step size that may depend on the iteration $t$ and will be carefully chosen depending on the specific case.
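Before analyzing this iteration, it may help to see it in its simplest instance $k=1$, where a subspace is represented by a unit vector $x$ (up to sign), $f(x) = -x^T A x$, $\grad f(x) = -2(I-xx^T)Ax$, and the exponential map is the great-circle formula $\Exp_x(v) = \cos(\|v\|)\,x + \sin(\|v\|)\,v/\|v\|$. The following pure-Python sketch (our illustration; the matrix $A = \operatorname{diag}(3,2,1)$ and the step size $\eta = 0.1$ are ad-hoc choices) runs iteration~\eqref{eq:GD} and checks that the principal angle to the leading eigenvector never increases.

```python
import math

# Riemannian steepest descent for k = 1: the iterate is a unit vector x,
# f(x) = -x^T A x with A = diag(3, 2, 1), grad f(x) = -2 (I - x x^T) A x,
# and Exp_x(v) = cos(||v||) x + sin(||v||) v / ||v||.
A = (3.0, 2.0, 1.0)

def grad_f(x):
    Ax = [A[i] * x[i] for i in range(3)]
    rq = sum(u * v for u, v in zip(x, Ax))          # x^T A x
    return [-2.0 * (Ax[i] - rq * x[i]) for i in range(3)]

def exp_map(x, v):
    nv = math.sqrt(sum(vi * vi for vi in v))
    if nv < 1e-16:
        return list(x)
    return [math.cos(nv) * x[i] + math.sin(nv) * v[i] / nv for i in range(3)]

def angle(x):
    # principal angle between span(x) and the leading eigenspace span(e_1)
    return math.acos(min(1.0, abs(x[0])))

x = [1 / math.sqrt(3)] * 3                          # initial angle < pi/2
eta = 0.1                                           # ad-hoc step size
angles = [angle(x)]
for _ in range(100):
    x = exp_map(x, [-eta * g for g in grad_f(x)])
    angles.append(angle(x))

assert all(b <= a + 1e-9 for a, b in zip(angles, angles[1:]))  # monotone decrease
assert angles[-1] < 1e-6                                       # convergence
```
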
We start with a general result which shows that the distance to the optimal subspace contracts after one step of steepest descent. The step size depends on the smoothness and weak-quasi-convexity constants of $f$ from Propositions~\ref{prop:smoothness} and~\ref{prop:weak-quasi-convexity}. This is crucial since the constant $a(\mathcal{X})$ depends on the largest principal angle between $\mathcal{X}$ and $\mathcal{V}_{\alpha}$, and bounding the evolution of the distances of the iterates to the minimizer will also help us bound this constant.\footnote{The analysis of \cite{pmlr-v119-huang20e} is flawed with respect to this issue, as discussed in detail in \cite{alimisis2021distributed}.} An alternative contraction property with a more tractable step size is presented in Proposition \ref{prop:big_step_distance} of Appendix \ref{sec:big_step}.
\begin{tcolorbox}
\begin{lemma}[Contraction of RSD]\label{lem:GD convergence 1 step}
Let $\mathcal{X}_t$ and $\mathcal{V}_\alpha$ have principal angles $0 \le \theta_1 \leq \cdots \leq \theta_k < \pi/2$.
Then, iteration~\eqref{eq:GD} with $0 \le \eta_t \leq \frac{a(\mathcal{X}_t)}{\gamma}$ satisfies
\begin{equation*}
\dist^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha}) \leq \big(1- 2 c_Q \delta a(\mathcal{X}_t) \, \eta_t \big) \dist^2(\mathcal{X}_t,\mathcal{V}_{\alpha})
\end{equation*}
\end{lemma}
\end{tcolorbox}
Observe that $\gamma = 0$ implies $A = \lambda_1 I$, in which case any subspace $\mathcal{X}$ of dimension $k$ is an eigenspace of $A$ with $\dist(\mathcal{X},\mathcal{V}_\alpha)=0$. We therefore do not explicitly prove this lemma and the forthcoming convergence results for $\gamma=0$, since the statements are then trivially true.
\begin{proof}[Proof of Lemma~\ref{lem:GD convergence 1 step}]
By the assumption on the principal angles, we get that $0< a(\mathcal{X}_t) = \theta_k / \tan \theta_k \leq 1$.
The hypothesis on $\eta_t$ and Lemma~\ref{lem:uniform upper bound grad} then gives
\[
\eta_t \|\grad f(\mathcal{X}_t)\|_2 \leq \frac{a(\mathcal{X}_t)}{\gamma} \|\grad f(\mathcal{X}_t)\|_2 \leq \frac{1}{2} < \frac{\pi}{2}.
\]
By~\eqref{eq:inj_exp}, this guarantees that the geodesic $\tau \mapsto \Exp(- \tau \eta_t \, \grad f(\mathcal{X}_t))$ lies within the injectivity domain at ${\mathcal{X}_t}$ for $\tau \in [0,1]$. Hence, $\Exp$ is bijective along this geodesic and thus $\Log_{\mathcal{X}_t}(\mathcal{X}_{t+1}) = -\eta_t \, \grad f(\mathcal{X}_t)$. We can thus apply Lemma~\ref{prop:tangent_space} to obtain
\begin{align}\label{eq:dist_with_sigma}
\dist^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha}) &\leq \|-\eta_t \grad f(\mathcal{X}_t)-\Log_{\mathcal{X}_t}(\mathcal{V}_{\alpha}) \|^2 \notag \\
& = \eta_t^2 \|\grad f(\mathcal{X}_t)\|^2 + \dist^2(\mathcal{X}_t, \mathcal{V}_{\alpha}) +2 \eta_t \, \sigma
\end{align}
with
\[
\sigma := \langle \grad f(\mathcal{X}_t),\Log_{\mathcal{X}_t}(\mathcal{V}_{\alpha}) \rangle.
\]
Theorem~\ref{thm:weak_strong_convex} and~\eqref{eq:optim_gap_with_gradient} together with Proposition~\ref{prop:smoothness} give
\begin{align*}
\frac{\sigma}{a(\mathcal{X}_t)} &\leq f^*-f(\mathcal{X}_t)-c_Q \delta \dist^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) \\
&\leq -\frac{1}{2 \gamma} \| \grad f(\mathcal{X}_t) \|^2- c_Q \delta \dist^2(\mathcal{X}_t,\mathcal{V}_{\alpha}).
\end{align*}
Multiplying by $2 a(\mathcal{X}_t)\, \eta_t$ and using $\eta_t \leq a(\mathcal{X}_t) / \gamma$, we get
\begin{align*}
2 \eta_t \, \sigma &\leq -\frac{a(\mathcal{X}_t) \, \eta_t }{\gamma} \| \grad f(\mathcal{X}_t) \|^2 - 2 c_Q \delta a(\mathcal{X}_t) \, \eta_t \, \dist^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) \\ & \leq -\eta_t^2 \| \grad f(\mathcal{X}_t) \|^2 - 2 c_Q \delta a(\mathcal{X}_t)\, \eta_t \, \dist^2(\mathcal{X}_t, \mathcal{V}_{\alpha}).
\end{align*}
Substituting into~\eqref{eq:dist_with_sigma}, we obtain the statement of the lemma.
\end{proof}
\begin{remark}
When $\delta=0$, Lemma \ref{lem:GD convergence 1 step} still holds \emph{for any subspace $\mathcal{V}_{\alpha}$ spanned by $k$ leading eigenvectors of $A$}. In that case, the lemma only guarantees that the distance between the iterates of steepest descent and this $\mathcal{V}_{\alpha}$ does not increase.
\end{remark}
\subsection{Linear convergence rate under positive eigengap}
When $\delta>0$, we can extend Lemma \ref{lem:GD convergence 1 step} to a linear convergence rate of distances to the minimizer:
\begin{tcolorbox}
\begin{theorem} \label{thm:exponential_conv}
If $\dist(\mathcal{X}_0,\mathcal{V}_{\alpha}) < \pi / 2$ then the iterates $\mathcal{X}_t$ of Riemannian steepest descent~\eqref{eq:GD} with step size $\eta_t$ such that
\begin{equation*}
0<\eta \leq \eta_t \leq \cos(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})) / \gamma
\end{equation*}
satisfy
\begin{equation*}
\textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) \leq \left(1- c_Q \cos (\dist(\mathcal{X}_0,\mathcal{V}_{\alpha}))\, \delta\, \eta \right) ^ t \dist^2(\mathcal{X}_0,\mathcal{V}_{\alpha}).
\end{equation*}
\end{theorem}
\end{tcolorbox}
\begin{proof}
We first claim that $\dist (\mathcal{X}_{t},\mathcal{V}_{\alpha}) \leq \dist (\mathcal{X}_0,\mathcal{V}_{\alpha})$ for all $t \geq 0$. This would then also imply that $\theta_k(\mathcal{X}_t, \mathcal{V}_\alpha) < \pi/2$ for all $t\geq 0$ since
\[
\theta_k(\mathcal{X}_t, \mathcal{V}_\alpha) \leq \sqrt{\sum_{i=1}^k \theta_i (\mathcal{X}_t, \mathcal{V}_\alpha)^2} = \dist(\mathcal{X}_t, \mathcal{V}_\alpha).
\]
For $t=0$, we have $\theta_k(\mathcal{X}_{0},\mathcal{V}_{\alpha}) < \pi/2$ by hypothesis on $\mathcal{X}_0$ and thus
\begin{equation*}
a(\mathcal{X}_0)=\frac{\theta_k(\mathcal{X}_0,\mathcal{V}_{\alpha})}{\tan(\theta_k(\mathcal{X}_0,\mathcal{V}_{\alpha}))} \geq \cos(\theta_k(\mathcal{X}_0,\mathcal{V}_{\alpha})) \geq \cos(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})).
\end{equation*}
Since by construction $\eta_0 \leq \cos(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})) / \gamma$, this implies that $\eta_0 \leq a(\mathcal{X}_0) / \gamma$, and Lemma \ref{lem:GD convergence 1 step} guarantees that $\dist(\mathcal{X}_{1},\mathcal{V}_{\alpha}) \leq \textnormal{dist}(\mathcal{X}_0,\mathcal{V}_{\alpha})$. In particular, we also have $\theta_k(\mathcal{X}_{1},\mathcal{V}_{\alpha}) < \pi/2$.
Next, assume that
\begin{equation*}
\dist (\mathcal{X}_t,\mathcal{V}_{\alpha}) \leq \dist (\mathcal{X}_{0},\mathcal{V}_{\alpha}),
\end{equation*}
which implies $\theta_k(\mathcal{X}_{t},\mathcal{V}_{\alpha}) < \pi/2$.
Then, by the same argument as above, we have
\begin{equation}\label{eq:aX_with_cos_X0}
a(\mathcal{X}_t)
\geq \cos(\dist(\mathcal{X}_t,\mathcal{V}_{\alpha})) \geq \cos(\dist(\mathcal{X}_{0},\mathcal{V}_{\alpha})).
\end{equation}
By hypothesis on $\eta_t$, we observe
\begin{equation*}
\eta_t \leq \frac{\cos(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha}))}{\gamma} \leq \frac{\cos(\dist(\mathcal{X}_t,\mathcal{V}_{\alpha}))}{\gamma}
\leq \frac{a(\mathcal{X}_t)}{\gamma}.
\end{equation*}
Applying Lemma~\ref{lem:GD convergence 1 step} once again with the induction hypothesis proves the claim:
\begin{equation*}
\dist (\mathcal{X}_{t+1},\mathcal{V}_{\alpha}) \leq \dist (\mathcal{X}_t,\mathcal{V}_{\alpha}) \leq \dist (\mathcal{X}_0,\mathcal{V}_{\alpha}).
\end{equation*}
The main statement of the theorem now follows easily: since $\eta_t \leq a(\mathcal{X}_t) / \gamma$ and $\theta_k(\mathcal{X}_{t}, \mathcal{V}_{\alpha}) < \pi/2$ for all $t\geq 0$, Lemma \ref{lem:GD convergence 1 step} gives
\begin{equation*}
\textnormal{dist}^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha}) \leq \left(1- 2 c_Q a(\mathcal{X}_t) \delta \eta_t \right) \textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) \leq \left(1- c_Q a(\mathcal{X}_t) \delta \eta_t \right) \textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}).
\end{equation*}
Combining with~\eqref{eq:aX_with_cos_X0} and $\eta_t \geq \eta$ shows the desired result by induction.
\end{proof}
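The contraction of Theorem~\ref{thm:exponential_conv} can be observed directly in small experiments. The following sketch (our illustration; it assumes $k=1$, $A = \operatorname{diag}(3,2,1)$, and takes $\gamma = 2(\lambda_1 - \lambda_n) = 4$ as smoothness constant, which is an assumption of this sketch) checks at every step that the squared distance contracts at least by the claimed factor $1 - c_Q \cos(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha}))\,\delta\,\eta$.

```python
import math

# Check the per-step contraction of the linear rate for k = 1 with
# A = diag(3, 2, 1): delta = lambda_1 - lambda_2 = 1, and we assume the
# smoothness constant gamma = 2 * (lambda_1 - lambda_3) = 4 for this sketch.
A = (3.0, 2.0, 1.0)
c_Q = 4 / math.pi ** 2
delta, gamma = 1.0, 4.0

def step(x, eta):
    Ax = [A[i] * x[i] for i in range(3)]
    rq = sum(u * v for u, v in zip(x, Ax))
    v = [2.0 * eta * (Ax[i] - rq * x[i]) for i in range(3)]   # -eta * grad f(x)
    nv = math.sqrt(sum(vi * vi for vi in v))
    if nv < 1e-16:
        return list(x)
    return [math.cos(nv) * x[i] + math.sin(nv) * v[i] / nv for i in range(3)]

t0 = 1.0                                    # initial principal angle < pi/2
x = [math.cos(t0), math.sin(t0) * 0.6, math.sin(t0) * 0.8]
eta = math.cos(t0) / gamma                  # maximal step size of the theorem
q = c_Q * math.cos(t0) * delta * eta        # claimed contraction of dist^2

d = t0
for _ in range(25):
    x = step(x, eta)
    d_new = math.acos(min(1.0, abs(x[0])))
    assert d_new ** 2 <= (1 - q) * d ** 2 + 1e-12
    d = d_new
```

In practice the observed contraction is much stronger than the worst-case factor $1-q$ guaranteed by the theorem.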
If the eigengap $\delta$ is strictly positive, then Theorem \ref{thm:exponential_conv} gives an exponential convergence rate towards the optimum $\mathcal{V}_{\alpha}$. If $\delta=0$, then Theorem \ref{thm:exponential_conv} \emph{does not provide a convergence rate} but rather implies that the intrinsic distances of the iterates to the optimum do not increase.
From Theorem \ref{thm:exponential_conv} we get immediately the following iteration complexity.
\begin{corollary}
Let Riemannian steepest descent be started from a subspace $\mathcal{X}_0$ that satisfies $\dist(\mathcal{X}_0,\mathcal{V}_{\alpha}) < \pi/2$. Then after at most
\begin{equation*}
T \leq 2 \frac{\log(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})) - \log(\varepsilon)}{-\log(1- 0.4 \cos (\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})) \delta \eta)} +1
= \mathcal{O} \left(\frac{\log(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})) - \log(\varepsilon)}{\cos (\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})) \delta \eta} \right)
\end{equation*}
many iterations, $\mathcal{X}_T$ will satisfy $\dist(\mathcal{X}_T,\mathcal{V}_{\alpha}) \leq \varepsilon$. With the maximal step size allowed in Theorem \ref{thm:exponential_conv}, we get
\[
T = \mathcal{O} \left(\frac{\lambda_1 - \lambda_n}{\delta}
\frac{1}{\cos^2(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha}))}
\log\left(\frac{\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})}{\varepsilon}\right) \right).
\]
\end{corollary}
As expected, $T$ is inversely proportional to the eigengap $\delta$ and proportional to the spread of the eigenvalues. In addition, there is an extra factor $1/\cos^2(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha}))$ that depends on the initial distance $\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})$, which is due to the weak-quasi-convexity property of $f$. This is a conservative overestimate, since this quantity improves as the iterates get closer to the optimum.
\begin{remark}
If $\delta>0$, the exponential convergence rate is in terms of the intrinsic distance on the Grassmann manifold, that is, the $\ell_2$ norm of the principal angles. Standard convergence results for subspace iteration are stated for the biggest principal angle, that is, the $\ell_\infty$ norm. This is weaker than the intrinsic distance. For subspace iteration with projection, the convergence result from~\cite[Thm.~5.2]{saadNumericalMethodsLarge2011} shows that all principal angles $\theta_i$ converge to zero and eventually gives convergence of the $\ell_4$ norm of the principal angles. This is also weaker than the intrinsic distance.
\end{remark}
\subsection{Convergence of function values without an eigengap assumption}
When $\delta=0$, Theorem \ref{thm:exponential_conv} still holds, but does not provide a rate of convergence as discussed above. Instead, we can prove the following result:
\begin{tcolorbox}
\begin{theorem} \label{thm:convex_conv}
If the distance $\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})$ of the initial subspace $\mathcal{X}_0$ to the minimizer satisfies $\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})<\pi / 2$ for a subspace $\mathcal{V}_{\alpha}$ that is spanned by any $k$ leading eigenvectors of $A$, then the iterates $\mathcal{X}_t$ of Riemannian steepest descent~\eqref{eq:GD} with fixed step size
\begin{equation*}
\eta \leq \cos(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha})) / \gamma
\end{equation*}
satisfy
\begin{equation*}
f(\mathcal{X}_t) - f^* \leq \frac{2\gamma+\frac{1}{\eta}}{\cos(\dist(\mathcal{X}_0,\mathcal{V}_{\alpha}))t+1} \dist^2(\mathcal{X}_0,\mathcal{V}_{\alpha})=\mathcal{O}\left(\frac{1}{t} \right).
\end{equation*}
\end{theorem}
\end{tcolorbox}
\begin{proof}
Since all hypotheses of Theorem \ref{thm:exponential_conv} are satisfied, we know that
$\textnormal{dist}(\mathcal{X}_{t},\mathcal{V}_{\alpha}) \leq
\textnormal{dist}(\mathcal{X}_0,\mathcal{V}_{\alpha}) < \pi/2$ holds for all $t\geq 0$, and thus $\mathcal{X}_t$ is in the injectivity domain of $\Exp$ at $\mathcal{V}_{\alpha}$. In addition, the proof of that theorem shows in~\eqref{eq:aX_with_cos_X0} that
\begin{equation*}
a(\mathcal{X}_t)
\geq C_0 := \cos(\dist(\mathcal{X}_{0},\mathcal{V}_{\alpha})) > 0,
\end{equation*}
which implies that the function $f$ is weakly-quasi-convex at every $\mathcal{X}_t$ with constant $2 C_0$. Hence
\begin{equation}\label{eq:weak-quasi-conv_with_Delta_t}
2 C_0 \Delta_t \leq \langle \grad f(\mathcal{X}_t) , - \Log_{\mathcal{X}_t} (\mathcal{V}_{\alpha}) \rangle,
\end{equation}
where we defined
\begin{equation*}
\Delta_t := f(\mathcal{X}_t) - f^*.
\end{equation*}
Similar to the proof of Theorem \ref{thm:exponential_conv}, by the hypothesis on the step size $\eta$, Lemma~\ref{lem:GD convergence 1 step} shows that $\mathcal{X}_{t+1}$ is in the injectivity domain of $\Exp$ at $\mathcal{X}_t$.
Hence, by the definition of Riemannian steepest descent, we have
\begin{equation}\label{eq:Log_SD}
\Log_{\mathcal{X}_t} (\mathcal{X}_{t+1})=-\eta \grad f(\mathcal{X}_t).
\end{equation}
In addition, the smoothness property~\eqref{eq:quadratic_upper_bound} of $f$ gives
\begin{equation*}
\Delta_{t+1}-\Delta_t \leq \langle \grad f(\mathcal{X}_t), \Log_{\mathcal{X}_t} (\mathcal{X}_{t+1}) \rangle+\frac{\gamma}{2} \dist^2(\mathcal{X}_t,\mathcal{X}_{t+1}).
\end{equation*}
Substituting~\eqref{eq:Log_SD}, we obtain
\begin{equation}\label{eq:conv_SD_delta_zero_diff_Delta}
\Delta_{t+1}-\Delta_t \leq \left(-\eta +\frac{\gamma}{2} \eta^2 \right) \| \grad f(\mathcal{X}_t)\|^2 \leq 0
\end{equation}
since $\eta \leq C_0/\gamma$ with $0 < C_0:= \cos(\dist(\mathcal{X}_0, \mathcal{V}_{\alpha})) \leq 1$ and $\gamma > 0$.
Since $\Gr(n,k)$ has nonnegative sectional curvature, Lemma~\ref{lem:geo_triangle_nonneg} implies
\begin{equation*}
\dist^2(\mathcal{X}_{t+1}, \mathcal{V}_{\alpha}) \leq \dist^2(\mathcal{X}_t, \mathcal{X}_{t+1})+ \dist^2(\mathcal{X}_t, \mathcal{V}_{\alpha})-2 \langle \Log_{\mathcal{X}_t} (\mathcal{X}_{t+1}), \Log_{\mathcal{X}_t} (\mathcal{V}_{\alpha}) \rangle.
\end{equation*}
Substituting~\eqref{eq:Log_SD} into the above and rearranging terms gives
\begin{equation*}
2 \eta \langle \grad f(\mathcal{X}_t) , - \Log_{\mathcal{X}_t} (\mathcal{V}_{\alpha}) \rangle \leq \dist^2(\mathcal{X}_t, \mathcal{V}_{\alpha})-\dist^2(\mathcal{X}_{t+1}, \mathcal{V}_{\alpha})+ \eta^2 \| \grad f(\mathcal{X}_t) \|^2.
\end{equation*}
Combining with~\eqref{eq:weak-quasi-conv_with_Delta_t}, we get
\begin{equation}\label{eq:bound_Delta_t}
\Delta_t \leq \frac{1}{4 C_0 \eta} ( \dist^2(\mathcal{X}_t, \mathcal{V}_{\alpha})-\dist^2(\mathcal{X}_{t+1}, \mathcal{V}_{\alpha})) + \frac{\eta}{4 C_0} \| \grad f(\mathcal{X}_t) \|^2.
\end{equation}
Now multiplying \eqref{eq:conv_SD_delta_zero_diff_Delta} by $\frac{1}{C_0}$ and summing with~\eqref{eq:bound_Delta_t} gives
\begin{multline}\label{eq:diff_Delta_intermediate}
\frac{1}{C_0} \Delta_{t+1} - \left( \frac{1}{ C_0} - 1 \right) \Delta_t \leq \frac{1}{4 C_0 \eta} ( \dist^2(\mathcal{X}_t, \mathcal{V}_{\alpha})-\dist^2(\mathcal{X}_{t+1}, \mathcal{V}_{\alpha})) \\
+\frac{1}{C_0} \left( -\eta + \frac{\gamma}{2}\eta^ 2 + \frac{\eta}{4} \right)
\| \grad f(\mathcal{X}_t) \|^2.
\end{multline}
By assumption, $\eta \leq C_0/\gamma$ with $0 < C_0:= \cos(\dist(\mathcal{X}_0, \mathcal{V}_{\alpha})) \leq 1$ and $\gamma > 0$, so that $\frac{\gamma}{2}\eta \leq \frac{C_0}{2}$ and hence
\begin{equation*}
\frac{\eta}{C_0} \left( -1 + \frac{\gamma}{2} \eta + \frac{1}{4} \right) \leq \frac{\eta}{C_0} \left(\frac{C_0}{2} -\frac{3}{4} \right) \leq - \frac{\eta}{4 C_0}< 0.
\end{equation*}
The inequality~\eqref{eq:diff_Delta_intermediate} can therefore be simplified to
\begin{equation*}
\frac{1}{C_0} \Delta_{t+1} - \left( \frac{1}{ C_0} - 1 \right) \Delta_t \leq \frac{1}{4 C_0 \eta} ( \dist^2(\mathcal{X}_t, \mathcal{V}_{\alpha})-\dist^2(\mathcal{X}_{t+1}, \mathcal{V}_{\alpha})).
\end{equation*}
Summing from $0$ to $t-1$ gives
\[
\frac{1}{C_0} \Delta_t + \sum_{s=1}^{t-1} \Delta_s - \left( \frac{1}{C_0} -1 \right) \Delta_0 \leq \frac{1}{4 C_0 \eta} \left( \dist^2(\mathcal{X}_0, \mathcal{V}_{\alpha}) - \dist^2(\mathcal{X}_t, \mathcal{V}_{\alpha}) \right).
\]
From the smoothness property~\eqref{eq:quadratic_upper_bound} at the critical point $\mathcal{V}_\alpha$ of $f$, we get
\[
\Delta_0 \leq \frac{\gamma}{2} \dist^2(\mathcal{X}_0, \mathcal{V}_{\alpha}).
\]
Combining these two inequalities then leads to
\begin{align*}
\frac{1}{C_0} \Delta_t + \sum_{s=0}^{t-1} \Delta_s & \leq \frac{1}{C_0} \Delta_0 + \frac{1}{4 C_0 \eta} \dist^2(\mathcal{X}_0, \mathcal{V}_{\alpha}) \\
& = \frac{1}{2C_0} \left(\gamma +\frac{1}{2 \eta}\right) \dist^2(\mathcal{X}_0, \mathcal{V}_{\alpha}).
\end{align*}
Since \eqref{eq:conv_SD_delta_zero_diff_Delta} holds for all $t \geq 0$, it also implies $\Delta_t \leq \Delta_s$ for all $0 \leq s \leq t$. Substituting
\[
t \Delta_t \leq \sum_{s=0}^{t-1} \Delta_s.
\]
into the inequality from above,
\begin{equation*}
\Delta_t \leq \frac{1}{2C_0} \frac{\gamma+\frac{1}{2 \eta}}{\frac{1}{C_0}+t} \dist^2(\mathcal{X}_0, \mathcal{V}_{\alpha}) = \frac{\gamma+\frac{1}{2\eta}}{2(C_0 t+1)} \dist^2(\mathcal{X}_0, \mathcal{V}_{\alpha}),
\end{equation*}
we obtain the desired result.
\end{proof}
\begin{remark}
This type of result is standard for functions that are geodesically convex (see, e.g., \cite{zhangFirstorderMethodsGeodesically2016}). Our objective function does not satisfy this property,
but we still obtain a similar upper bound on the iteration complexity for convergence in function value. We note that this does not imply convergence of the iterates to a specific $k$-dimensional subspace, but only convergence of a subsequence of the iterates.
\end{remark}
\subsection{Sufficiently small step sizes}
The convergence results in Theorems~\ref{thm:exponential_conv} and~\ref{thm:convex_conv} require that the initial subspace $\mathcal{X}_0$ lies within a distance strictly less than $\pi/2$ from a global minimizer $\mathcal{V}_{\alpha}$. While this condition is independent of the eigengap (unlike results that rely on standard convexity, see appendix), it is not fully satisfactory: it is hard to verify in practice, and it appears unnecessarily restrictive in numerical experiments. In fact, this condition is only used to obtain a uniform lower bound on the weak-quasi-convexity constant $a(\mathcal{X}_t) = \theta_k^{(t)} / \tan(\theta_k^{(t)})$, where $\theta_k^{(t)}$ is the largest principal angle between $\mathcal{X}_t$ and $\mathcal{V}_{\alpha}$. Since the Riemannian distance is the $\ell_2$ norm of the principal angles, a contraction in this distance automatically implies $\theta_k^{(t)} < \pi/2$ whenever $\dist(\mathcal{X}_0, \mathcal{V}_\alpha) < \pi/2$. If one could guarantee by some other reasoning that $\theta_k^{(t)}$ does not increase after one step, the condition $\dist(\mathcal{X}_0, \mathcal{V}_\alpha) < \pi/2$ would not be needed.
We now show that for sufficiently small step sizes $\eta_t$, the largest principal angle $\theta_k^{(t)}$ between $\mathcal{X}_t$ and $\mathcal{V}_{\alpha}$ indeed does not increase after each iteration of Riemannian steepest descent, regardless of the initial subspace $\mathcal{X}_0$. While this does not explain the behavior observed in numerical experiments, where large step sizes can be taken, it is a first step toward explaining why the iteration can be initialized at a random subspace $\mathcal{X}_0$.
\begin{tcolorbox}
\begin{Proposition} \label{prop:suff_small}
Riemannian steepest descent started from a subspace $\mathcal{X}_t$ returns a subspace $\mathcal{X}_{t+1}$ such that
\begin{equation*}
\theta_k(\mathcal{X}_{t+1},\mathcal{V}_{\alpha}) \leq \theta_k(\mathcal{X}_t,\mathcal{V}_{\alpha}),
\end{equation*}
for all step sizes $0 \leq \eta \leq \bar \eta$ where $ \bar \eta > 0$ is sufficiently small.
\end{Proposition}
\end{tcolorbox}
For the proof of this proposition, we will need the derivatives of certain singular values. While this is well known for isolated singular values, it is possible to generalize to higher multiplicities as well by relaxing the ordering and sign of singular values~\cite{bunse-gerstnerNumericalComputationAnalytic1991}. For a concrete formula, we use the following result from Lemma A.5 in~\cite{lippertFixingTwoEigenvalues2005}.
\begin{lemma}\label{lem:perturb_sing_value}
Let $\sigma_1 \geq \cdots \geq \sigma_n$ be the singular values of $A \in \mathbb{R}^{n \times n}$ with $u_{1}, \ldots, u_{n}$ and $v_{1}, \ldots, v_{n}$ the associated left and right orthonormal singular vectors. Suppose that $\sigma_j$ has multiplicity $m$, that is,
$$
\sigma_{j_0-1} > \sigma_{j_0} = \cdots = \sigma_j =\cdots = \sigma_{j_0+m-1} > \sigma_{j_0+m}.
$$
Then, the $j$th singular value of $A+\eta B$ satisfies
\[
\sigma_j(A + \eta B) = \sigma_j + \eta\, \lambda_{j - j_0 + 1} + \mathcal{O}(\eta^2), \quad \eta \to 0^+,
\]
where $\lambda_{j}$ is the $j$th largest eigenvalue of $\tfrac{1}{2}(U^T B V + V^T B^T U)$
with
\[
U = \begin{bmatrix} u_{j_0} & \cdots & u_{j_0+m-1} \end{bmatrix} \quad \text{and}\quad V = \begin{bmatrix} v_{j_0} & \cdots & v_{j_0+m-1} \end{bmatrix}.
\]
\end{lemma}
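A concrete instance (our illustration, not from \cite{lippertFixingTwoEigenvalues2005}): the matrix $A = \operatorname{diag}(2,1,1)$ has the singular value $1$ with multiplicity $m=2$. Perturbing only inside that block by the symmetric matrix $B_2$ with zero diagonal and unit off-diagonal entries, whose eigenvalues are $\pm 1$, the lemma predicts that the double singular value splits into $1+\eta$ and $1-\eta$; in this block-diagonal example the prediction is even exact, without the $\mathcal{O}(\eta^2)$ term.

```python
import math

# Lemma on perturbed singular values, illustrated on A = diag(2, 1, 1):
# sigma = 1 has multiplicity 2 (j0 = 2, m = 2).  Perturbing the repeated block
# by B2 = [[0, 1], [1, 0]] (eigenvalues +1 and -1), the block of A + eta*B
# becomes I_2 + eta*B2, which is symmetric positive definite for eta < 1, so
# its singular values equal its eigenvalues 1 + eta and 1 - eta.
def sym2_eigs(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]], largest first."""
    mean = (a + d) / 2
    r = math.hypot((a - d) / 2, b)
    return mean + r, mean - r

eta = 0.05
s2, s3 = sym2_eigs(1.0, eta, 1.0)
assert abs(s2 - (1 + eta)) < 1e-12   # sigma_2(eta) = 1 + eta * lambda_1(B2)
assert abs(s3 - (1 - eta)) < 1e-12   # sigma_3(eta) = 1 + eta * lambda_2(B2)
```
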
\begin{proof}[Proof of Proposition~\ref{prop:suff_small}]
For ease of notation, let $X:= X_t$ and $X_+ := X_{t+1}$ such that $\mathcal{X}_t = \myspan(X)$ and $\mathcal{X}_{t+1} = \myspan(X_+)$. By definition of the exponential map on Grassmann, the next iterate of the Riemannian SD iteration~\eqref{eq:GD} with step $\eta$ satisfies
\begin{equation*}
X_+=X V \cos(\eta \Sigma) V^T + U \sin(\eta \Sigma) V^T
\end{equation*}
with
\begin{equation*}
U \Sigma V^T = -\grad f(\mathcal{X}_t).
\end{equation*}
Since $V$ is orthogonal, we can write
\begin{equation*}
U \sin(\eta \Sigma) V^T = U (\eta \Sigma) V^T V \left(\frac{\sin(\eta \Sigma)}{\eta \Sigma} \right) V^T = -\eta \grad f(\mathcal{X}_t) V \left(\frac{\sin(\eta \Sigma)}{\eta \Sigma} \right) V^T
\end{equation*}
where we use the conventions $1/\Sigma:=\Sigma^{-1}$ and $\frac{\sin 0}{0} := 1$.
Taking Taylor expansions of $\sin$ and $\cos$,
\begin{align*}
V \cos(\eta \Sigma) V^T &= V \left(I - \mathcal{O}(\eta^2) \right) V^T = I-\mathcal{O}(\eta^2) \\
V \frac{\sin(\eta \Sigma)}{\eta \Sigma} V^T &= V \left(I - \mathcal{O}(\eta^2) \right) V^T = I-\mathcal{O}(\eta^2),
\end{align*}
we obtain
\begin{align}\label{eq:V_a_X_t_plus_1}
{V}_{\alpha}^T X_+ & = {V}_{\alpha}^T X (I-\mathcal{O}(\eta^2)) + {V}_{\alpha}^T (- \eta \grad f(\mathcal{X})) (I-\mathcal{O}(\eta^2)) \notag \\
& = {V}_{\alpha}^T (X- \eta \grad f(\mathcal{X}_t)) (I-\mathcal{O}(\eta^2))
\end{align}
since $\|V_\alpha\|_2 = \| X \|_2=1$.
Let now $\theta$ be the vector of $k$ principal angles between $\mathcal{X}_t$ and $\mathcal{V}_\alpha$. As in~\eqref{eq:SVD_Va_X} and~\eqref{eq:SVD_Vb_X}, we therefore have the SVDs
\begin{equation}\label{eq:Va_X_Vb_X}
V_\alpha^T X = U_1 \cos \theta\, V_1^T \qquad \text{and} \qquad V_\beta^T X = \tilde U_2 \sin \theta\, V_1^T,
\end{equation}
where $U_1,V_1 \in \mathbb{R}^{k \times k}$ and $\tilde U_2 \in \mathbb{R}^{(n-k) \times k}$ have orthonormal columns. Next, we write~\eqref{eq:V_a_X_t_plus_1} in terms of
\[
M :=\sin^2 \theta \, U_1^T \Lambda_{\alpha} U_1 \cos \theta - \cos \theta \sin \theta \, \tilde U_2^T \Lambda_{\beta} \tilde U_2 \sin \theta.
\]
Since $\grad f(\mathcal{X}_t) = -2 (I-XX^T)AX$, the identity~\eqref{eq:grad_with_Va} gives
\[
{V}_{\alpha}^T (X- \eta \grad f(\mathcal{X}_t)) = V_\alpha^T X +2 \eta \Lambda_\alpha V_\alpha^T X - 2 \eta V_\alpha^T X X^T A X.
\]
After substituting~\eqref{eq:XAX worked out} and~\eqref{eq:Va_X_Vb_X}, a short calculation using $\cos^2 \theta = I - \sin^2 \theta$ and the orthogonality of $U_1$ and $V_1$ then shows
\[
V_{\alpha}^T (X- \eta \grad f(\mathcal{X}_t)) = U_1 (\cos \theta + 2\eta M) V_1 ^T.
\]
Relating back to~\eqref{eq:V_a_X_t_plus_1}, we thus obtain
\begin{align*}
V_{\alpha}^T {X}_{+} &= U_1 (\cos \theta + 2\eta M) V_1 ^T (I-\mathcal{O}(\eta^2)) \\
&= U_1 (\cos \theta + 2\eta M) (I - V_1 ^T \mathcal{O}(\eta^2) V_1)V_1 ^T \\
&= U_1 (\cos \theta + 2\eta M - \mathcal{O}(\eta^2)) V_1^T.
\end{align*}
The singular values of $V_{\alpha}^T {X}_{+}$ are therefore the same as the singular values of the matrix $\cos \theta + 2\eta M +\mathcal{O}(\eta^2)$.
By Weyl's inequality (see, e.g., \cite[Cor.~7.3.5]{hornMatrixAnalysis2012a}), each singular value of $\cos \theta + 2\eta M +\mathcal{O}(\eta^2)$ is $\mathcal{O}(\eta^2)$ close to some singular value of $\cos \theta + 2\eta M$. Let $1 \leq j \leq k$ and denote the $j$th singular value of $\cos \theta + 2\eta M$ by $\sigma_j(\eta)$, to which we will apply Lemma~\ref{lem:perturb_sing_value}. Since $\cos \theta$ is a diagonal matrix with nonincreasing diagonal, its $j$th singular value equals $\cos \theta_j$ and its associated left/right singular vector is the $j$th canonical vector $e_j$.
Denoting
\[
E = \begin{bmatrix} e_{j_0} & \cdots & e_{j_0+m-1} \end{bmatrix},
\]
observe that $\cos \theta \, E = \cos \theta_{j_0} \, E$ (here, $\cos \theta$ is a diagonal matrix and $\cos \theta_{j_0} $ is a scalar) and likewise for $\sin \theta \, E$. We thus get
\[
E^T M E = \sin^2 \theta_{j_0} \cos \theta_{j_0} \, E^T (U_1^T \Lambda_{\alpha} U_1 - \tilde U_2^T \Lambda_{\beta} \tilde U_2 ) E.
\]
In the proof of Proposition~\ref{prop:weak-quasi-convexity}, we showed that the matrix in brackets above is symmetric and positive semi-definite (see~\eqref{eq:La_min_Lb_is_PSD}), and hence so is its congruence transformation with $E$. Since $0 \leq \theta_{j_0} \leq \pi/2$, the eigenvalues of $E^T M E$ are therefore all non-negative. Lemma~\ref{lem:perturb_sing_value} thus gives that $\sigma_j(\eta) \geq \sigma_j$ for sufficiently small and positive $\eta$. Since the singular values of $V_\alpha^T X_+$ are the cosines of the principal angles between $\mathcal{V}_\alpha$ and $\mathcal{X}_{t+1}$ for step size $\eta \geq 0$, we conclude that there exists $\bar \eta>0$ such that for all $\eta \in [0,\bar\eta]$ it holds that
\begin{equation*}
\theta_j(\mathcal{X}_{t+1}, \mathcal{V}_{\alpha}) \leq \theta_j(\mathcal{X}_t, \mathcal{V}_{\alpha}).
\end{equation*}
Since $j$ was arbitrary, this finishes the proof.
\end{proof}
\section{Conclusion and future work}
We provided the first systematic study of Riemannian steepest descent on the Grassmann manifold for computing a subspace spanned by $k$ leading eigenvectors of a symmetric and positive semi-definite matrix $A$.
Our main idea was to exploit a convexity-like structure of the block Rayleigh quotient, which can be of broader interest than the analysis of steepest descent alone. One example is line search methods, which usually have favourable properties compared to vanilla steepest descent. Also, weakly-quasi-convex functions have been proven to admit accelerated algorithms \cite{nesterov2020primal}, while accelerated or almost accelerated Riemannian algorithms have been developed in \cite{zhang2018towards,alimisis2020continuous,alimisis2021momentum}. It would naturally be interesting to examine whether a provably accelerated method can be developed for the block Rayleigh quotient on the Grassmann manifold. This would hopefully reduce the dependence of the iteration complexity on the eigengap $\delta$ from $\mathcal{O}(1/\delta)$ to $\mathcal{O}(1/\sqrt{\delta})$.
Another interesting direction is to extend the analysis of \cite{alimisis2021distributed} from the computation of just one leading eigenvector to the computation of a whole subspace, using the generalized machinery developed in this work, or to develop a noisy version of steepest descent and compare it with the noisy power method \cite{hardt2014noisy}.
\paragraph{Acknowledgements}
This work was supported by the SNSF under research project 192363.
\section{Geodesic convexity} \label{sec:convexity}
Let $\delta > 0$, so that $\mathcal{V}_\alpha$ is the unique minimizer of $f$. Define the following neighbourhood of $\mathcal{V}_\alpha$ in $\Gr(n,k)$:
\begin{equation}\label{eq:geodesic_region_N}
N_*(\varphi) = \{ \mathcal{X} \in \Gr(n,k) \colon \theta_k(\mathcal{X}, \mathcal{V}_\alpha) < \varphi \} \qquad \text{with $\varphi \in [0, \pi/4]$}.
\end{equation}
Here, $\theta_k(\mathcal{X}, \mathcal{V}_\alpha)$ denotes the largest principal angle between $\mathcal{X}$ and $\mathcal{V}_\alpha$. Since $\theta_k$ is a metric on $\Gr(n,k)$ (see~\cite{qiuUnitarilyInvariantMetrics2005}), any two subspaces $\mathcal{X},\mathcal{Y} \in N_*(\varphi)$ satisfy $\theta_k(\mathcal{X},\mathcal{Y}) < \pi /2$ by the triangle inequality. They thus have a unique connecting geodesic. It is shown in \cite[Lemma~2]{ahn2021riemannian} that for any fixed $\varphi \in [0, \pi/4]$ this geodesic remains in $N_*(\varphi)$. Each set $N_*(\varphi)$ is thus an open totally geodesically convex set as defined in, e.g., \cite[Def.~11.16]{boumal2022intromanifolds}.
One of the main results in \cite{ahn2021riemannian}, namely Cor.~4, states that $f$ is geodesically convex on $N_*(\pi/4)$. This is unfortunately incorrect, as the following small counterexample shows.
\paragraph{Counterexample for Cor.~4 in \cite{ahn2021riemannian}.}
Here we use the notation of \cite{ahn2021riemannian}, to which we refer the reader for the relevant definitions.
Take $c:=\cos(\pi/4) = \sqrt{2}/2$ and $0 \le \varepsilon<1$. Define the matrices
$$
X_p := \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad
U_p := \begin{pmatrix} c & 0 \\ 0 & c \\ c & 0 \\ 0 & c \end{pmatrix}, \quad
M := U_p \begin{pmatrix} 1 & 0 \\ 0 & \varepsilon \end{pmatrix}.
$$
These matrices satisfy the conditions posed in \cite{ahn2021riemannian}:
\begin{itemize}
\item Principal alignment: $X_p^T U_p = \begin{pmatrix} c & 0 \\ 0 & c \end{pmatrix}$.
\item Principal angles between $X_p$ and $U_p$ are in $[0,\pi/4]$.
\item $U = U_p$ since $Q=I$.
\end{itemize}
Now consider the following tangent vector of unit Frobenius norm:
$$
\Delta = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}.
$$
It is clearly a tangent vector of $[X_p]$ since $X_p^T \Delta = 0$.
The Hessian of $f_{full}$ at $[X_p]$ in the direction of $\Delta$ satisfies (see equation (4.2) in \cite{ahn2021riemannian})
$$
\textnormal{Hess}f_{full}([X_p])[\Delta,\Delta] = -2 \Tr(M^T \Delta \Delta^T (I-X_p X_p^T) M) + \|(\Delta X_p^T + X_p\Delta ^T) M \|_F^2.
$$
Simple calculation shows that
$$
\textnormal{Hess}f_{full}([X_p])[\Delta,\Delta] = -2 c^2 + (1+\varepsilon^2) c^2.
$$
Hence for $\varepsilon<1$ we have $\textnormal{Hess}f_{full}([X_p])[\Delta,\Delta]<0$, so $f_{full}$ is non-convex, contradicting Corollary 4. \qed
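As a quick numerical sanity check of the counterexample (a sketch using NumPy, with the arbitrary choice $\varepsilon = 1/2$; the matrices are exactly those above):

```python
import numpy as np

# Matrices from the counterexample; eps = 1/2 is an arbitrary choice with eps < 1
c = np.sqrt(2) / 2
eps = 0.5
Xp = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])
Up = c * np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])
M = Up @ np.diag([1., eps])
Delta = np.array([[0., 0.], [0., 0.], [0., 1.], [0., 0.]])  # unit-norm tangent vector

# Hessian quadratic form of f_full at [X_p] in direction Delta, following eq. (4.2)
P = np.eye(4) - Xp @ Xp.T
hess = (-2 * np.trace(M.T @ Delta @ Delta.T @ P @ M)
        + np.linalg.norm((Delta @ Xp.T + Xp @ Delta.T) @ M, 'fro') ** 2)
# Closed form: -2c^2 + (1 + eps^2)c^2 = c^2(eps^2 - 1), negative for eps < 1
```

The computed value equals $c^2(\varepsilon^2-1) = -0.375 < 0$, exhibiting a direction of negative curvature.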
\vspace{8mm}
Instead, our Theorem~\ref{eq:g-convex_domain} guarantees convexity when $\varphi$ depends on the spectral gap.
Since $f$ is smooth, the function is geodesically convex on $N_*(\varphi)$ if and only if its Riemannian Hessian is positive semidefinite on $N_*(\varphi)$; see, e.g., \cite[Thm.~11.23]{boumal2022intromanifolds}. We will therefore compute the eigenvalues of $\Hess f$ based on its matrix representation. This requires us to first vectorize the tangent space.
From~\eqref{eq:def_TXGr}, a matrix $G$ is a tangent vector if and only if $G^TX = 0$. Hence, taking $X_\perp \in \mathbb{R}^{n \times (n-k)}$ orthonormal such that $\mathcal{X}^\perp=\myspan(X_\perp)$, we have the equivalent definition
\[
T_X \Gr(n,k) = \{ X_\perp M \colon M \in \mathbb{R}^{(n-k) \times k} \}.
\]
The matrix $M$ above can be seen as the coordinates of $G=X_\perp M $ in the basis $X_\perp$. More specifically, by using the linear isomorphism $\vecop \colon \mathbb{R}^{n \times k} \to \mathbb{R}^{nk}$ that stacks all columns of a matrix under each other, we can define the tangent vectors of $\Gr(n,k)$ as standard (column) vectors in the following way:
\[
\vecop(G) = \vecop(X_\perp M ) = (I_k \otimes X_\perp) \vecop(M).
\]
Here, the Kronecker product $\otimes$ appears due to \cite[Lemma 4.3.1]{hornTopicsMatrixAnalysis1991}. By well-known properties of $\otimes$ (see, e.g., \cite[Chap.~4.2]{hornTopicsMatrixAnalysis1991}), the matrix $I_k \otimes X_\perp$ has orthonormal columns. We have thus obtained an orthonormal basis for the (vectorized) tangent space. With this setup, we can now construct the Hessian.
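The vectorization identity and the orthonormality of $I_k \otimes X_\perp$ are easy to verify numerically (a small sketch; the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 7, 3

# Orthonormal basis X_perp of an (n-k)-dimensional subspace
X_perp = np.linalg.qr(rng.standard_normal((n, n - k)))[0]
M = rng.standard_normal((n - k, k))

vec = lambda A: A.reshape(-1, order='F')   # stack columns under each other

# vec(X_perp M) = (I_k kron X_perp) vec(M)
lhs = vec(X_perp @ M)
rhs = np.kron(np.eye(k), X_perp) @ vec(M)

B = np.kron(np.eye(k), X_perp)   # basis of the vectorized tangent space
gram = B.T @ B                   # orthonormality: should equal I_{k(n-k)}
```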
\begin{lemma}\label{lem:matrix_Hx}
Let $I_k \otimes X_\perp$ be the orthonormal basis for the vectorization of $T_{\mathcal{X}} \Gr(n,k)$. Then the Riemannian Hessian of $f$ at $\mathcal{X}$ in that basis has the symmetric matrix representation
\begin{equation}\label{eq:matrix_Hx}
H_X = 2 (X^T A X \otimes I_{n-k} - I_k \otimes X_\perp^T A X_\perp).
\end{equation}
Furthermore, with $1 \leq i \leq k$ and $1 \leq j \leq n-k$ its $k(n-k)$ eigenvalues satisfy
\[
\lambda_{i,j}(H_X) = 2(\lambda_i(X^T A X) - \lambda_j(X_\perp^T A X_\perp)).
\]
\end{lemma}
\begin{proof}
Since $\vecop$ is a linear isomorphism, the symmetric matrix $H_X$ satisfies
\[
\Hess f(X)[X_\perp M, X_\perp M] = \langle \vecop(M), H_X \vecop(M) \rangle, \qquad \forall M \in \mathbb{R}^{(n-k) \times k},
\]
where $\langle \cdot, \cdot \rangle$ is the Euclidean inner product. Define $m = \vecop(M)$. Plugging in the formula~\eqref{eq:Hessian_f_inner_product} for $\Hess f$, we calculate
\begin{align*}
\Hess f(X)[X_\perp M, X_\perp M] &= 2\langle X_\perp M, X_\perp M X^T A X - AX_\perp M \rangle \\
&= 2\langle (I \otimes X_\perp) m, (X^T A X \otimes X_\perp) m - (I \otimes AX_\perp) m \rangle \\
&= 2\langle m, (I \otimes X_\perp)^T (X^T A X \otimes X_\perp - I \otimes AX_\perp) m \rangle \\
&= 2\langle m, (X^T A X \otimes I - I \otimes X_\perp^T AX_\perp) m \rangle
\end{align*}
Here, we used typical calculus rules for the Kronecker product (see, e.g., \cite[Chap.~4.2]{hornTopicsMatrixAnalysis1991}). We recognize the matrix $H_X$ directly.
The eigenvalues of~\eqref{eq:matrix_Hx} can be directly obtained using \cite[Thm.~4.4.5]{hornTopicsMatrixAnalysis1991}.
\end{proof}
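As a sanity check of the lemma, the following sketch compares, for a random symmetric $A$, the eigenvalues of $H_X$ with the predicted differences, and the quadratic form of $H_X$ with the Hessian formula~\eqref{eq:Hessian_f_inner_product} (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3

# Random symmetric A and a random orthonormal X with complement X_perp
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
X, X_perp = Q[:, :k], Q[:, k:]

# Matrix representation H_X of the Riemannian Hessian
H = 2 * (np.kron(X.T @ A @ X, np.eye(n - k)) - np.kron(np.eye(k), X_perp.T @ A @ X_perp))

# Eigenvalues predicted by the lemma: 2(lambda_i(X^T A X) - lambda_j(X_perp^T A X_perp))
mu = np.linalg.eigvalsh(X.T @ A @ X)
nu = np.linalg.eigvalsh(X_perp.T @ A @ X_perp)
predicted = np.sort(2 * (mu[:, None] - nu[None, :]).ravel())
computed = np.sort(np.linalg.eigvalsh(H))

# Consistency with the quadratic form Hess f(X)[G, G] for G = X_perp M
Mdir = rng.standard_normal((n - k, k))
G = X_perp @ Mdir
quad_direct = 2 * np.sum(G * (G @ (X.T @ A @ X) - A @ G))
m = Mdir.reshape(-1, order='F')
quad_matrix = m @ H @ m
```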
Taking $X=V_\alpha$ and $X_\perp = V_\beta$, Lemma~\ref{lem:matrix_Hx} shows immediately that the minimal eigenvalue of $\Hess f(\mathcal{V}_\alpha)$ is equal to $2\delta = 2(\lambda_k - \lambda_{k+1})$. Since $\delta > 0$, $\Hess f$ will remain strictly positive definite in a neighbourhood of $\mathcal{V}_\alpha$ by continuity. To quantify this neighbourhood, we will connect $\mathcal{V}_\alpha$ to an arbitrary $\mathcal{X}$ using a geodesic and see how this influences the bounds of Lemma~\ref{lem:matrix_Hx}. This also requires connecting $\mathcal{V}_\beta$ to $\mathcal{X}^\perp$. The next lemma shows that both geodesics are closely related. Recall that $\sin(t\theta)$ and $\cos(t\theta)$ denote diagonal matrices of size $k \times k$. For convenience, we will denote by $O$ a zero matrix whose dimensions are clear from the context and is not always square.
\begin{lemma}\label{lem:geodesic_perp}
Let $X,Y \in \mathbb{R}^{n \times k}$ be such that $X^T X = Y^T Y = I_k$ with $k \leq n/2$. Denote the principal angles between $\myspan(X)$ and $\myspan(Y)$ by $\theta_1 \leq \cdots \leq \theta_k$ and assume that $\theta_k< \pi/2$. Choose $X_\perp, Y_\perp \in \mathbb{R}^{n \times (n-k)}$ such that $X_\perp^T X_\perp = Y_\perp^T Y_\perp = I_{n-k}$ and $\myspan(X_\perp) = \myspan(X)^\perp$, $\myspan(Y_\perp) = \myspan(Y)^\perp$. Define the curves
\begin{align*}
\gamma(t) &\colon [0,1] \to \mathbb{R}^{n \times k} , & t &\mapsto X V_1 \cos(t\theta) + X_\perp V_2 \begin{bmatrix} O \\ \sin(t\theta) \end{bmatrix}, \\
\gamma_\perp(t) &\colon [0,1] \to \mathbb{R}^{n \times (n-k)} , & t &\mapsto X_\perp V_2 \begin{bmatrix} I \\ & \cos(t\theta) \end{bmatrix} - X V_1 \begin{bmatrix} O & \sin(t\theta) \end{bmatrix},
\end{align*}
where the orthogonal matrices $V_1,V_2$ are the same as in Lemma~\ref{lemma:CS_square_blocks}. Then $\myspan(\gamma(t))$ is the connecting geodesic on $\Gr(n,k)$ from $\myspan(X)$ to $\myspan(Y)$. Likewise, $\myspan(\gamma_\perp(t))$ is a connecting geodesic on $\Gr(n,n-k)$ from $\myspan(X_\perp)$ to $\myspan(Y_\perp)$. Furthermore, $\gamma(t)$ and $\gamma_\perp(t)$ are orthonormal matrices for all $t$.
\end{lemma}
\begin{proof}
Assume $\theta_1 = \cdots = \theta_r = 0$, where $r=0$ means that $\theta_1 > 0$. As in the proof of Prop.~\ref{prop:weak-quasi-convexity}, the CS decomposition of $X$ and $Y$ from Lemma~\ref{lemma:CS_square_blocks} can be written in terms of their principal angles $\theta_1, \ldots, \theta_k$. Since $\theta_k < \pi/2$ and $k \leq n/2$, partitioning certain block matrices gives the relations
\begin{align*}
Y^T X &= U_1 \, \cos(\theta) \, V_1^T, &
Y^T X_\perp &= U_1 \begin{bmatrix}O_{k \times (n-2k)} & \sin(\theta) \end{bmatrix} V_2^T \\
Y_\perp^T X &= U_2 \begin{bmatrix}O_{(n-2k) \times k} \\ \sin(\theta) \end{bmatrix} V_1^T, & Y_\perp^T X_\perp &= U_2 \begin{bmatrix}-I_{n-2k} \\ & -\cos(\theta) \end{bmatrix} V_2^T,
\end{align*}
where $U_1, V_1$ and $U_2,V_2$ are orthogonal matrices of size $k \times k$ and $(n-k) \times (n-k)$, resp.
Denote $\mathcal{X}=\myspan(X)$ and $\mathcal{Y}=\myspan(Y)$. By definition, the connecting geodesic $\gamma(t)$ is determined by the tangent vector $ \Log_{\mathcal{X}}(\mathcal{Y})$, which can be computed from \eqref{eq:log formula}. To this end, we first need the compact SVD of $M:= X_\perp X_\perp^T Y (X^T Y)^{-1}$. Substituting the results from above, we get (cfr.~\eqref{eq:Log_with_V2})
\[
M
= X_\perp V_2 \begin{bmatrix} O_{(n-2k) \times k} \\ \sin(\theta) \end{bmatrix} U_1^ T U_1 \, (\cos(\theta))^{-1}\, V_1^T =
X_\perp V_2 \begin{bmatrix} O_{(n-2k) \times k} \\ I_k \end{bmatrix} \, \tan(\theta) \, V_1^T.
\]
Observe that this is a compact SVD. Applying \eqref{eq:log formula}, we therefore get
\[
G := \Log_{\mathcal{X}}(\mathcal{Y}) = U \Sigma V^T \quad \text{with $U = X_\perp V_2 \begin{bmatrix} O \\ I_k \end{bmatrix}, \ \Sigma = \theta, \ V = V_1$}
\]
and from~\eqref{eq:formula_geo}, the connecting geodesic satisfies
\[
\Exp_{\mathcal{X}}(tG) = \myspan( \, X V_1 \cos(t\theta) + X_\perp V_2 \begin{bmatrix} O \\ I_k \end{bmatrix} \sin(t\theta) \, ).
\]
We have proven the stated formula for $\gamma(t)$. Verifying that $\gamma(t)^T \gamma(t) = I_k$ follows from a simple calculation that uses $\cos^2(t \theta) + \sin^2(t \theta) = I_k$.
Denote $\mathcal{X}^\perp=\myspan(X_\perp)$ and $\mathcal{Y}^\perp=\myspan(Y_\perp)$. To prove $\gamma_\perp(t)$, we proceed similarly by computing $G^\perp:= \Log_{\mathcal{X}^\perp}(\mathcal{Y}^\perp)$, which requires now the SVD of $M^\perp:= X X^T Y_\perp (X_\perp^T Y_\perp)^{-1}$. Again substituting the results from the CS decomposition, we get
\begin{align*}
M^\perp &= X V_1 \begin{bmatrix}O_{k \times (n-2k)} & \sin(\theta) \end{bmatrix} U_2^ T U_2 \begin{bmatrix} -I_{n-2k} \\ & - \cos(\theta) \end{bmatrix}^{-1} V_2^T \\
&= X V_1 \begin{bmatrix}O_{k \times (n-2k)} & -\tan(\theta) \end{bmatrix} V_2^T.
\end{align*}
Since \eqref{eq:log formula} requires a compact SVD with a \emph{square} $\Sigma$, we rewrite this as
\[
M^\perp = \begin{bmatrix} \widetilde X & X V_1 \end{bmatrix} \begin{bmatrix}O_{(n-2k) \times (n-2k)} \\ & -\tan(\theta) \end{bmatrix} V_2^T
\]
where $\widetilde X$ contains $n-2k$ orthonormal columns that are orthogonal to those of $X$ (the final result will not depend on $\widetilde X$). Let $\theta^\perp_1 \leq \cdots \leq \theta^\perp_{n-k}$ denote the principal angles between $\mathcal{X}^\perp$ and $\mathcal{Y}^\perp$. Up to zero angles, they are the same as those between $\mathcal{X}$ and $\mathcal{Y}$. Since $k \leq n/2$, we thus have
\[
\theta^\perp_1 = \cdots = \theta^\perp_{n-2k} = 0, \ \theta^\perp_{n-2k+1} = \theta_1, \ldots, \theta^\perp_{n-k} = \theta_k.
\]
Applying \eqref{eq:log formula} with these principal angles, we obtain
\[
G^\perp := \Log_{\mathcal{X}^\perp}(\mathcal{Y^\perp}) = U \Sigma V^T \quad \text{with $U = -\begin{bmatrix} \widetilde X & X V_1 \end{bmatrix} , \ \Sigma = \theta^\perp, \ V = V_2$}.
\]
From \eqref{eq:formula_geo}, the corresponding geodesic satisfies
\begin{align*}
\Exp_{\mathcal{X}^\perp}(tG^\perp) &= \myspan( \, X_\perp V_2 \cos(t\theta^\perp) - \begin{bmatrix} \widetilde X & X V_1 \end{bmatrix} \sin(t\theta^\perp) \, ) \\
&= \myspan( \, X_\perp V_2 \begin{bmatrix} I_{n-2k} \\ & \cos(t\theta) \end{bmatrix} - \begin{bmatrix} O_{n \times (n-2k)} & X V_1 \sin(t\theta) \end{bmatrix} \, ).
\end{align*}
Rewriting the block matrix, we have proven $\gamma_\perp(t)$. Its orthonormality is again a straightforward verification.
\end{proof}
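The lemma can be checked numerically without constructing $V_1,V_2$ explicitly, by computing both geodesics directly through the log/exp formulas \eqref{eq:log formula} and \eqref{eq:formula_geo} (a sketch; the helper \texttt{geodesic} is our own and assumes all principal angles are strictly below $\pi/2$):

```python
import numpy as np

def geodesic(X, Y, t):
    """Point at time t on the Grassmann geodesic from span(X) to span(Y);
    assumes X^T Y is invertible (all principal angles < pi/2)."""
    M = (Y - X @ (X.T @ Y)) @ np.linalg.inv(X.T @ Y)   # = X_perp X_perp^T Y (X^T Y)^{-1}
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    theta = np.arctan(s)
    return X @ Vt.T @ np.diag(np.cos(t * theta)) + U @ np.diag(np.sin(t * theta))

rng = np.random.default_rng(2)
n, k = 9, 3
QX = np.linalg.qr(rng.standard_normal((n, n)))[0]
QY = np.linalg.qr(rng.standard_normal((n, n)))[0]
X, X_perp = QX[:, :k], QX[:, k:]
Y, Y_perp = QY[:, :k], QY[:, k:]

# Along both geodesics the two subspaces remain orthogonal complements
max_inner = max(np.abs(geodesic(X, Y, t).T @ geodesic(X_perp, Y_perp, t)).max()
                for t in np.linspace(0.0, 1.0, 11))

# The endpoint of the first geodesic spans Y (compare orthogonal projectors)
G1 = geodesic(X, Y, 1.0)
end_err = np.linalg.norm(G1 @ G1.T - Y @ Y.T)
ortho_err = np.linalg.norm(G1.T @ G1 - np.eye(k))
```

At every $t$ the two curves span orthogonal complements of each other, which is the content of the lemma.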
With the previous lemma, we can now investigate the Riemannian Hessian of $f$ near $\mathcal{V}_\alpha$ when it is given in the matrix form $H_X$ of Lemma~\ref{lem:matrix_Hx}. Let $\mathcal{X} = \myspan(X) \in \Gr(n,k)$ with orthonormal $X$. Its principal angles with $\mathcal{V}_\alpha$ are $\theta_1 \leq \cdots \leq \theta_k < \pi/2$.
Use the substitutions $X \mapsto V_\alpha, Y \mapsto X$ and $X_\perp \mapsto V_\beta, Y_\perp \mapsto X_\perp$ in Lemma~\ref{lem:geodesic_perp} to define the geodesics $\gamma(t)$ and $\gamma_\perp(t)$ that connect $\mathcal{V}_\alpha$ to $\mathcal{X}$, and $\mathcal{V}_\beta$ to $\mathcal{X}^\perp$, resp. Denoting
\[
C := \cos(\theta), \ S := \sin(\theta),
\ \widetilde C := \begin{bmatrix} I \\ & C \end{bmatrix},
\ \widetilde S := \begin{bmatrix} O \\ S \end{bmatrix},
\]
we get the following expressions for the geodesics evaluated at $t=1$:
\[
\gamma(1) = V_\alpha V_1 C + V_\beta V_2 \widetilde S, \quad
\gamma_\perp(1) = V_\beta V_2 \widetilde C - V_\alpha V_1 \widetilde S^T.
\]
Recall that $H_X$ is defined using $X^T A X$ and $X_\perp^T A X_\perp$. Since $\gamma(1) = XQ_1$ and $\gamma_\perp(1) = X_\perp Q_2$ for some orthogonal matrices $Q_1, Q_2$, we can write with $A = V_\alpha \Lambda_{\alpha} V_\alpha^T + V_\beta \Lambda_{\beta} V_\beta^T$ that
\begin{equation}\label{eq:XtAX_XptAXp_geodesic}
\begin{aligned}
Q_1^T X^T A X Q_1 &= \gamma(1)^T A \gamma(1) \\
&= C \, (V_1^T \Lambda_{\alpha} V_1) \, C + \widetilde S^T \, (V_2^T \Lambda_{\beta} V_2) \, \widetilde S \\
Q_2^TX_\perp^T A X_\perp Q_2&= \gamma_\perp(1)^T A \gamma_\perp(1) \\
&= \widetilde C \, (V_2^T \Lambda_{\beta} V_2) \, \widetilde C + \widetilde S \, (V_1^T \Lambda_{\alpha} V_1) \, \widetilde S^T.
\end{aligned}
\end{equation}
Here we used simplifications like $V_\beta^T AV_\alpha = V_\beta^T V_\alpha \Lambda_{\alpha} = 0$.
Bounding the eigenvalues of these two matrices now yields the main result.
\begin{theorem}\label{eq:g-convex_domain}
Let $k \leq n/2$. Define the neighbourhood
\[
B_* = \left\{ \mathcal{X} \in \Gr(n,k) \colon \sin^2 (\theta_k(\mathcal{X}, \mathcal{V}_\alpha)) \leq \frac{\delta}{\lambda_1 + \lambda_k} \right\}.
\]
Then $f$ is geodesically convex on $B_*$.
\end{theorem}
\begin{proof}
Our aim is to show that the eigenvalues $\lambda_{i,j}(H_X)$ remain nonnegative given the bound on $\theta_k$. From Lemma~\ref{lem:matrix_Hx}, we see that
\begin{equation}\label{eq:condition_Hess_pos}
\lambda_{\min}(H_X) \geq 0 \quad \iff \quad \lambda_{\min}(X^T A X) \geq \lambda_{\max}(X_\perp^T A X_\perp).
\end{equation}
Since $Q_1,Q_2$ are orthogonal in~\eqref{eq:XtAX_XptAXp_geodesic}, it suffices to find a lower and upper bound of, resp.,
\begin{align*}
\lambda_{\min}(X^T A X) &= \lambda_{\min}(C \, (V_1^T \Lambda_{\alpha} V_1) \, C + \widetilde S^T \, (V_2^T \Lambda_{\beta} V_2) \, \widetilde S) \\
\lambda_{\max}(X_\perp^T A X_\perp) &= \lambda_{\max}(\widetilde C \, (V_2^T \Lambda_{\beta} V_2) \, \widetilde C + \widetilde S \, (V_1^T \Lambda_{\alpha} V_1) \, \widetilde S^T).
\end{align*}
Standard eigenvalue inequalities for symmetric matrices (see, e.g., \cite[Cor.~4.3.15]{hornMatrixAnalysis2012a}) give
\begin{align*}
\lambda_{\min}(X^T A X) &\geq \lambda_{\min}(C \, (V_1^T \Lambda_{\alpha} V_1) \, C) + \lambda_{\min}(\widetilde S^T \, (V_2^T \Lambda_{\beta} V_2) \, \widetilde S) \\
\lambda_{\max}(X_\perp^T A X_\perp) &\leq \lambda_{\max}(\widetilde C \, (V_2^T \Lambda_{\beta} V_2) \, \widetilde C) + \lambda_{\max}(\widetilde S \, (V_1^T \Lambda_{\alpha} V_1) \, \widetilde S^T).
\end{align*}
Recall that $\lambda_1 \geq \cdots \geq \lambda_n$ are the eigenvalues of $A$.
Since $\widetilde S$ is a tall rectangular matrix, we apply the generalized version of Ostrowski's theorem from~\cite[Thm.~3.2]{highamModifyingInertiaMatrices1998} to each term above\footnote{Observe that the cited theorem orders the eigenvalues inversely to the convention used in this paper.} and obtain
\begin{align*}
\lambda_{\min}(C \, (V_1^T \Lambda_{\alpha} V_1) \, C) &\geq \lambda_{\min}(C^2) \lambda_{\min}(\Lambda_{\alpha}) = \cos^2(\theta_k) \lambda_k \\
\lambda_{\min}(\widetilde S^T \, (V_2^T \Lambda_{\beta} V_2) \, \widetilde S) &\geq \lambda_{\min}(\widetilde S^T \widetilde S) \lambda_{\min}(\Lambda_{\beta}) = \sin^2(\theta_1) \lambda_n,
\end{align*}
since the matrices $V_1,V_2$ are orthogonal and $\theta_1 \leq \cdots \leq \theta_k < \pi/2$. Adding this gives the lower bound
\begin{equation}\label{eq:convex_intermediate_lower_bound}
\lambda_{\min}(X^T A X) \geq \cos^2(\theta_k) \lambda_k + \sin^2(\theta_1) \lambda_n \geq \cos^2(\theta_k) \lambda_k.
\end{equation}
Likewise, using the block structure of $\widetilde S$, we get
\begin{align*}
\lambda_{\max}(\widetilde C \, (V_2^T \Lambda_{\beta} V_2) \, \widetilde C) &\leq \lambda_{\max}(C^2) \lambda_{\max}(\Lambda_{\beta}) = \cos^2(\theta_1) \lambda_{k+1} \\
\lambda_{\max}(\widetilde S \, (V_1^T \Lambda_{\alpha} V_1) \, \widetilde S^T) &=
\lambda_{\max}(S \, (V_1^T \Lambda_{\alpha} V_1) \, S) \\
&\leq \lambda_{\max}(S^2) \lambda_{\max}(\Lambda_{\alpha}) = \sin^2(\theta_k) \lambda_1
\end{align*}
and thus
\begin{equation}\label{eq:convex_intermediate_upper_bound}
\lambda_{\max}(X_\perp^T A X_\perp) \leq \cos^2(\theta_1) \lambda_{k+1} + \sin^2(\theta_k) \lambda_1 \leq \lambda_{k+1} + \sin^2(\theta_k) \lambda_1.
\end{equation}
The condition~\eqref{eq:condition_Hess_pos} is thus satisfied when
\[
\cos^2(\theta_k) \lambda_k = \lambda_k - \sin^2(\theta_k) \lambda_k \geq \lambda_{k+1} + \sin^2(\theta_k) \lambda_1,
\]
which reduces to the bound on $\theta_k$ in the statement of the theorem.
It remains to show that $B_*$ is an open totally geodesically convex set. Since $\lambda_1 \geq \lambda_k \geq \lambda_{k+1} \geq 0$, we get
\[
\frac{\lambda_k - \lambda_{k+1}}{\lambda_1 + \lambda_{k}} \leq \frac{\lambda_k }{2\lambda_{k}} = \frac{1}{2}.
\]
Hence, $B_* = N_*(\varphi)$ with $\varphi \leq \pi/4$ since $\sin^2(\pi/4) = 1/2$.
\end{proof}
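As a numerical illustration of the theorem (not part of the proof), the following sketch fixes a spectrum, samples points whose largest principal angle with $\mathcal{V}_\alpha$ lies strictly inside $B_*$, and checks that the Hessian matrix $H_X$ from Lemma~\ref{lem:matrix_Hx} stays positive semidefinite; the spectrum, dimensions, and sampling scheme are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 8, 3
lam = np.array([9., 7., 5., 3., 2., 1.5, 1., 0.5])   # eigenvalues of A, descending, >= 0
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = V @ np.diag(lam) @ V.T
V_a, V_b = V[:, :k], V[:, k:]
delta = lam[k - 1] - lam[k]
theta_max = np.arcsin(np.sqrt(delta / (lam[0] + lam[k - 1])))   # boundary of B_*

def min_hess_eig(theta_top):
    """lambda_min of H_X at a random X whose largest principal angle
    with V_alpha equals theta_top."""
    theta = np.sort(rng.uniform(0.0, theta_top, k)); theta[-1] = theta_top
    W = np.linalg.qr(rng.standard_normal((k, k)))[0]        # rotation inside V_alpha
    U = np.linalg.qr(rng.standard_normal((n - k, k)))[0]    # directions inside V_beta
    Xmat = V_a @ W @ np.diag(np.cos(theta)) + V_b @ U @ np.diag(np.sin(theta))
    Q = np.linalg.qr(Xmat, mode='complete')[0]
    Xp = Q[:, k:]
    H = 2 * (np.kron(Xmat.T @ A @ Xmat, np.eye(n - k))
             - np.kron(np.eye(k), Xp.T @ A @ Xp))
    return np.linalg.eigvalsh(H)[0]

# Strictly inside B_* the Hessian must stay positive semidefinite
min_inside = min(min_hess_eig(0.95 * theta_max) for _ in range(50))
```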
If $k=1$, the proof above can be simplified.
\begin{corollary}\label{cor:g-convex_domain_sphere}
Let $k=1$ and define the neighbourhood
\[
B_* = \left\{ \mathcal{X} \in \Gr(n,1) \colon \sin^2 (\theta_1(\mathcal{X}, \mathcal{V}_\alpha)) \leq \frac{\delta}{\delta + \lambda_1 - \lambda_n} \right\}.
\]
Then $f$ is geodesically convex on $B_*$.
\end{corollary}
\begin{proof}
Since $k=1$, there is no need to simplify the bounds~\eqref{eq:convex_intermediate_lower_bound} and \eqref{eq:convex_intermediate_upper_bound} as was done above. This gives that $f$ is convex as long as
\[
\cos^2(\theta_1) \lambda_1 + \sin^2(\theta_1) \lambda_n \geq \cos^2(\theta_1) \lambda_{2} + \sin^2(\theta_1) \lambda_1.
\]
Rewriting leads directly to the stated condition on $\sin^2(\theta_1)$.
\end{proof}
Remark that optimizing $f$ on $\Gr(n,1)$ is equivalent to
\begin{equation}\label{eq:min_f_sphere}
\min_{x \in \mathbb{R}^n} - x^T A x \qquad \text{s.t.} \quad \|x\| = 1,
\end{equation}
which is the problem of maximizing the Rayleigh quotient over the unit sphere $S^{n-1} = \{ x \in \mathbb{R}^n \colon x^T x = 1 \}$.
Cor.~\ref{cor:g-convex_domain_sphere} can therefore also be phrased in terms of a geodesically convex region for this problem. Denoting a unit norm top eigenvector of $A$ by $v_1$ and using that $\sin^2 \theta_1 = 1 - \cos^2 \theta_1$, we get that~\eqref{eq:min_f_sphere} is geodesically convex on
\[
\hat B_* = \left\{ x \in S^{n-1} \colon (x^T v_1)^2 \geq 1 - \frac{\delta}{\delta + \lambda_1 - \lambda_n} \right\}.
\]
This result can now be directly compared to \cite[Lemma 7]{pmlr-v119-huang20e} where the corresponding region is defined as $(x^T v_1)^2 \geq 1 - \frac{\delta}{\delta + \lambda_1}$. This is a stricter condition and our result is therefore a small improvement.
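The corollary admits the same kind of numerical check in the sphere formulation (a sketch with an arbitrary spectrum; points are sampled with $\sin^2\theta_1$ strictly inside the stated region):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
lam = np.array([5., 4., 2.5, 1.5, 1., 0.5])
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = V @ np.diag(lam) @ V.T
v1 = V[:, 0]
delta = lam[0] - lam[1]
sin2_max = delta / (delta + lam[0] - lam[-1])   # bound from the corollary

def min_hess_eig(sin2):
    """lambda_min of the Riemannian Hessian of f at a point x on the sphere
    with sin^2(theta_1(x, v1)) = sin2 (the k = 1 case of the Hessian lemma)."""
    w = rng.standard_normal(n); w -= (v1 @ w) * v1; w /= np.linalg.norm(w)
    x = np.sqrt(1.0 - sin2) * v1 + np.sqrt(sin2) * w
    Q = np.linalg.qr(x.reshape(-1, 1), mode='complete')[0]
    Xp = Q[:, 1:]
    H = 2 * ((x @ A @ x) * np.eye(n - 1) - Xp.T @ A @ Xp)
    return np.linalg.eigvalsh(H)[0]

min_inside = min(min_hess_eig(0.95 * sin2_max) for _ in range(50))
```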
\section{Convergence of steepest descent with step $\frac{1}{\gamma}$}
\label{sec:big_step}
We now prove convergence of steepest descent with a more tractable choice of step size than in the analysis of the main paper, where the step size depended on the weak-quasi-convexity constant $a(\mathcal{X})$. However, this requires a slightly better initialization: at most $\frac{\pi}{4}$ away from the minimizer, which guarantees that all iterates remain within distance $\frac{\pi}{2 \sqrt{2}}$.
\subsection{Maximum extent of the iterates}
We first prove that, while steepest descent with step size at most $\frac{1}{\gamma}$ does not guarantee contraction of the Riemannian distance to the global minimizer, the distance after $t$ steps is always at most a constant factor (independent of $t$) times the initial distance.
\begin{Proposition} \label{prop:big_step_distance}
Consider steepest descent applied to $f$ with step-size $\eta \leq \frac{1}{\gamma}$. If the iterates $\mathcal{X}_t$ satisfy $\theta_k(\mathcal{X}_t,\mathcal{V}_{\alpha})<\frac{\pi}{2}$, then they also satisfy
\begin{equation*}
\textnormal{dist}^2(\mathcal{X}_t, \mathcal{V}_{\alpha}) \leq 2 \textnormal{dist}^2(\mathcal{X}_0, \mathcal{V}_{\alpha}).
\end{equation*}
\end{Proposition}
\begin{proof}
Consider the discrete Lyapunov function
\begin{equation*}
\mathcal{E}(t)= \frac{1}{\gamma} (f(\mathcal{X}_t)-f^*)+\frac{1}{2} \textnormal{dist}^2(\mathcal{X}_t, \mathcal{V}_{\alpha}).
\end{equation*}
Then
\begin{equation*}
\mathcal{E}(t+1)-\mathcal{E}(t) = \frac{1}{\gamma}(f(\mathcal{X}_{t+1})-f(\mathcal{X}_t))+\frac{1}{2} ( \textnormal{dist}^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha})-\textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) ).
\end{equation*}
Recall that $\mathcal{X}_{t+1} = \Exp_{\mathcal{X}_t}(-\eta \grad f(\mathcal{X}_t))$.
By $\gamma$-smoothness of $f$ (cfr.~\eqref{eq:quadratic_upper_bound}), we have
\begin{align}
f(\mathcal{X}_{t+1})-f(\mathcal{X}_t) &\leq \langle \textnormal{grad}f(\mathcal{X}_t),\textnormal{Log}_{\mathcal{X}_t}(\mathcal{X}_{t+1}) \rangle +\frac{\gamma}{2} \textnormal{dist}(\mathcal{X}_t,\mathcal{X}_{t+1})^2\notag \\
&= \left( -\eta +\frac{\gamma}{2} \eta^2 \right) \|\textnormal{grad}f(\mathcal{X}_t) \| ^2. \label{eq:diff_X_gamma_smoothness}
\end{align}
We also know by Proposition \ref{prop:weak-quasi-convexity} that
\begin{equation*}
\langle \textnormal{grad}f(\mathcal{X}), -\Log_{\mathcal{X}}(\mathcal{V}_{\alpha}) \rangle \geq 0,
\end{equation*}
for any $\mathcal{X}$ with $\theta_k(\mathcal{X},\mathcal{V}_\alpha) < \pi/2$.
By the fact that the sectional curvatures of the Grassmann manifold are non-negative, we have
\begin{align*}
\textnormal{dist}^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha}) & \leq \textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) + \textnormal{dist}^2(\mathcal{X}_{t+1},\mathcal{X}_t) - 2 \langle \Log_{\mathcal{X}_{t}}(\mathcal{X}_{t+1}), \Log_{\mathcal{X}_t}(\mathcal{V}_{\alpha}) \rangle \\
& = \textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) + \eta^2 \| \textnormal{grad}f(\mathcal{X}_t) \|^2 + 2 \eta \langle \textnormal{grad}f(\mathcal{X}_t) , \Log_{\mathcal{X}_t}(\mathcal{V}_{\alpha}) \rangle \\ & \leq \textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) + \eta^2 \| \textnormal{grad}f(\mathcal{X}_t) \|^2.
\end{align*}
From $\eta \leq \frac{1}{\gamma}$, we therefore get
\begin{align*}
\mathcal{E}(t+1)-\mathcal{E}(t) &\leq \left(-\frac{\eta}{\gamma}+\frac{\eta^2}{2} \right)\| \textnormal{grad}f(\mathcal{X}_t) \| ^2+ \frac{\eta^2}{2}\| \textnormal{grad}f(\mathcal{X}_t) \|^2 \\
& = \left(-\frac{\eta}{\gamma}+\eta^2 \right) \| \textnormal{grad}f(\mathcal{X}_t) \|^2 \leq 0.
\end{align*}
Since $\mathcal{E}(t)$ does not increase, we have
\begin{align*}
\frac{1}{2} \textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) & \leq \mathcal{E}(t) \leq \mathcal{E}(0) = \frac{1}{\gamma} (f(\mathcal{X}_0)-f^*)+\frac{1}{2} \textnormal{dist}^2(\mathcal{X}_0, \mathcal{V}_{\alpha}) \\ & \leq \frac{1}{2} \textnormal{dist}^2(\mathcal{X}_0, \mathcal{V}_{\alpha}) + \frac{1}{2} \textnormal{dist}^2(\mathcal{X}_0, \mathcal{V}_{\alpha})= \textnormal{dist}^2(\mathcal{X}_0, \mathcal{V}_{\alpha})
\end{align*}
and the desired result follows.
\end{proof}
\subsection{Convergence under positive eigengap}
When $\delta>0$, we can use gradient dominance to prove convergence of steepest descent to the (unique) minimizer in terms of function values.
\begin{Proposition}
Steepest descent with step-size $\eta = \frac{1}{\gamma}$ initialized at $\mathcal{X}_0$ such that
\begin{equation*}
\textnormal{dist}(\mathcal{X}_0,\mathcal{V}_{\alpha}) \leq \frac{\pi}{4}
\end{equation*}
satisfies
\begin{equation*}
f(\mathcal{X}_t)-f^* \leq \left(1-0.32 c_Q \frac{\delta}{\gamma} \right)^t (f(\mathcal{X}_0)-f^*).
\end{equation*}
\end{Proposition}
\begin{proof}
By the previous result and an induction argument guaranteeing that the largest angle between $\mathcal{X}_t$ and $\mathcal{V}_{\alpha}$ stays strictly less than $\pi/2$, we can bound the quantities $a(\mathcal{X}_t)$ uniformly from below. Indeed, since $\textnormal{dist}(\mathcal{X}_t,\mathcal{V}_{\alpha}) \leq \sqrt{2} \cdot \textnormal{dist}(\mathcal{X}_0,\mathcal{V}_{\alpha}) \leq \frac{\sqrt{2} \pi}{4}$, we have
\begin{equation*}
a(\mathcal{X}_t) \geq \cos(\theta_k(\mathcal{X}_t,\mathcal{V}_{\alpha})) \geq \cos(\textnormal{dist}(\mathcal{X}_t,\mathcal{V}_{\alpha})) \geq \cos\left(\frac{\sqrt{2} \pi}{4} \right) \geq 0.4.
\end{equation*}
Since the step size $\eta = \frac{1}{\gamma}$, the bound~\eqref{eq:diff_X_gamma_smoothness} implies
\begin{equation*}
f(\mathcal{X}_{t+1})-f(\mathcal{X}_t) \leq -\frac{\| \textnormal{grad}f(\mathcal{X}_t) \| ^2}{2 \gamma}.
\end{equation*}
Applying gradient dominance (Proposition \ref{prop:PL}), we therefore obtain
\begin{equation*}
f(\mathcal{X}_{t+1})-f(\mathcal{X}_t) \leq -\frac{2 c_Q \delta a^2(\mathcal{X}_t)}{\gamma}(f(\mathcal{X}_t)-f^*)
\end{equation*}
and thus
\begin{equation*}
f(\mathcal{X}_{t+1})-f^* \leq \left(1-2 c_Q a^2(\mathcal{X}_t ) \frac{\delta}{\gamma}\right ) (f(\mathcal{X}_t)-f^*) \leq \left(1-0.32 c_Q \frac{\delta}{\gamma} \right) (f(\mathcal{X}_t)-f^*).
\end{equation*}
By induction the desired result follows.
\end{proof}
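The following sketch illustrates the two statements above for $k=1$, where steepest descent on $\Gr(n,1)$ can be run through the unit sphere. We take $\gamma = 2(\lambda_1 - \lambda_n)$, which is a valid (possibly loose) smoothness constant because the Hessian eigenvalues in Lemma~\ref{lem:matrix_Hx} all lie in $[-2(\lambda_1-\lambda_n),\, 2(\lambda_1-\lambda_n)]$; the spectrum and the seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
lam = np.array([6., 4., 3., 2.5, 2., 1.5, 1., 0.5])   # eigengap delta = 2
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = V @ np.diag(lam) @ V.T
v1 = V[:, 0]
gamma = 2 * (lam[0] - lam[-1])   # valid smoothness constant (Hessian eigenvalue bound)
eta = 1.0 / gamma
fstar = -lam[0]

# Initialize at Riemannian distance pi/6 <= pi/4 from the minimizer
w = rng.standard_normal(n); w -= (v1 @ w) * v1; w /= np.linalg.norm(w)
x = np.cos(np.pi / 6) * v1 + np.sin(np.pi / 6) * w
d0 = np.pi / 6

dists, fvals = [], []
for _ in range(300):
    dists.append(np.arccos(min(1.0, abs(x @ v1))))   # distance on Gr(n, 1)
    fvals.append(-x @ A @ x)
    g = -2 * (A @ x - (x @ A @ x) * x)               # Riemannian gradient of f
    v = -eta * g
    nv = np.linalg.norm(v)
    if nv > 0:
        x = np.cos(nv) * x + np.sin(nv) * v / nv     # exponential map on the sphere
        x /= np.linalg.norm(x)

max_dist = max(dists)
final_gap = fvals[-1] - fstar
monotone = all(fvals[i + 1] <= fvals[i] + 1e-12 for i in range(len(fvals) - 1))
```

One can check that the run satisfies the distance bound of Proposition~\ref{prop:big_step_distance}, decreases $f$ monotonically, and converges to $f^*$.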
We now state the iteration complexity of the steepest descent algorithm:
\begin{theorem}
Steepest descent with step-size $\frac{1}{\gamma}$ starting from a subspace $\mathcal{X}_0$ with Riemannian distance at most $\frac{\pi}{4}$ from $\mathcal{V}_{\alpha}$ computes an estimate $\mathcal{X}_T$ of $\mathcal{V}_{\alpha}$ such that $\textnormal{dist}(\mathcal{X}_T,\mathcal{V}_{\alpha})\leq \epsilon$ in at most
\begin{equation*}
T=\mathcal{O}\left( \frac{\gamma}{\delta} \log \frac{f(\mathcal{X}_0)-f^*}{\delta \epsilon} \right).
\end{equation*}
\end{theorem}
\begin{proof}
For $\textnormal{dist}(\mathcal{X}_T,\mathcal{V}_{\alpha})< \epsilon$, it suffices to have
\begin{equation*}
f(\mathcal{X}_T)-f^* \leq c_Q \epsilon^2 \delta
\end{equation*}
by quadratic growth of $f$ in Proposition \ref{prop:quadratic growth}. Using $(1-c)^T \leq \exp(-c T)$ for all $T \geq 0$ and $0 \leq c \leq 1$, the previous result gives that it suffices to choose $T$ as the smallest integer such that
\begin{equation*}
f(\mathcal{X}_T)-f^* \leq \exp\left(- 0.32 c_Q \frac{\delta}{\gamma} T \right) (f(\mathcal{X}_0)-f^*) \leq c_Q \epsilon^2 \delta.
\end{equation*}
Solving for $T$ and substituting $c_Q = 4/\pi^2$, we get the required result.
\end{proof}
\begin{remark} The step size in the above theorems satisfies $\eta =\frac{1}{\gamma}$, which may seem unrealistic since the exact smoothness constant $\gamma$ is rarely known. Since any overestimate of $\gamma$ is still a valid (but less tight) smoothness constant, the previous theorems can also be phrased for a step size $\eta \leq \frac{1}{\gamma}$.
\end{remark}
\subsection{Gap-less result}
We also prove a convergence result for the function values when $\delta$ is unknown and can be, in particular, equal to $0$.
\begin{theorem}
Steepest descent with step-size $\eta=\frac{1}{\gamma}$
initialized at $\mathcal{X}_0$ such that
\begin{equation*}
\textnormal{dist}(\mathcal{X}_0,\mathcal{V}_{\alpha}) \leq \frac{\pi}{4}
\end{equation*}
satisfies
\begin{equation*}
f(\mathcal{X}_t)-f^* \leq \frac{f(\mathcal{X}_0)-f^*+\frac{\gamma}{2}\textnormal{dist}^2(\mathcal{X}_0, \mathcal{V}_{\alpha})}{0.4 t + 1} = \mathcal{O}\left(\frac{1}{t}\right).
\end{equation*}
\end{theorem}
\begin{proof}
By Proposition \ref{prop:big_step_distance}, we have that $\textnormal{dist}(\mathcal{X}_t,\mathcal{V}_{\alpha}) \leq \frac{\sqrt{2} \pi}{4}$, and $f$ satisfies the weak-quasi-convexity inequality at every iterate $\mathcal{X}_t$ of steepest descent with constant $C_0:=0.4$.
Consider the discrete Lyapunov function
\begin{equation*}
\mathcal{E}(t)= \frac{C_0t+1}{\gamma} (f(\mathcal{X}_t)-f^*)+\frac{1}{2} \textnormal{dist}^2(\mathcal{X}_t, \mathcal{V}_{\alpha})
\end{equation*}
We have that
\begin{align*}
\mathcal{E}(t+1)-\mathcal{E}(t) = & \frac{C_0t+C_0+1}{\gamma}(f(\mathcal{X}_{t+1})-f^*)-\frac{C_0t+1}{\gamma} (f(\mathcal{X}_t)-f^*)
\\ & +\frac{1}{2}(\textnormal{dist}^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha})-\textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha})).
\end{align*}
We now bound the difference $\textnormal{dist}^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha})-\textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha})$.
By $\gamma$-smoothness of $f$ and denoting
$\Delta_t=f(\mathcal{X}_t)-f^*$ we have
\begin{equation*}
\Delta_{t+1}-\Delta_t \leq \langle \textnormal{grad}f(\mathcal{X}_t),\textnormal{Log}_{\mathcal{X}_t}(\mathcal{X}_{t+1}) \rangle +\frac{\gamma}{2} \textnormal{dist}^2(\mathcal{X}_t,\mathcal{X}_{t+1})= -\frac{\|\textnormal{grad}f(\mathcal{X}_t) \|^2}{2\gamma}
\end{equation*}
By $C_0$-weak-strong-convexity of $f$ and the fact that the Grassmann manifold has non-negative sectional curvature, we have
\begin{equation*}
C_0\Delta_t \leq \frac{\gamma}{2}(\textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha})-\textnormal{dist}^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha}))+\frac{ \| \textnormal{grad}f(\mathcal{X}_t) \|^2}{2\gamma}
\end{equation*}
Combining this with the previous inequality, we get
\begin{equation*}
\textnormal{dist}^2(\mathcal{X}_{t+1},\mathcal{V}_{\alpha})-\textnormal{dist}^2(\mathcal{X}_t,\mathcal{V}_{\alpha}) \leq \frac{2}{\gamma} ((1-C_0)(f(\mathcal{X}_t)-f(\mathcal{X}_{t+1}))-C_0(f(\mathcal{X}_{t+1})-f^*)).
\end{equation*}
Thus
\begin{align*}
\mathcal{E}(t+1)-\mathcal{E}(t) & \leq \frac{C_0t+ 1 }{\gamma} (f(\mathcal{X}_{t+1})-f(\mathcal{X}_t))+\frac{C_0}{\gamma} (f(\mathcal{X}_{t+1})-f^*) \\ & + \frac{1 - C_0}{\gamma} (f(\mathcal{X}_t)-f(\mathcal{X}_{t+1}))- \frac{C_0}{\gamma} (f(\mathcal{X}_{t+1})-f^*)
\\&=\frac{C_0t+C_0}{\gamma} (f(\mathcal{X}_{t+1})-f(\mathcal{X}_t)) \leq 0.
\end{align*}
Thus $\mathcal{E}(t) \leq \mathcal{E}(0)$ and the result follows.
\end{proof}
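The $\mathcal{O}(1/t)$ bound can be checked on a generic instance as well, again for $k=1$ through the sphere and with the same (possibly loose) smoothness constant $\gamma = 2(\lambda_1-\lambda_n)$; the instance below happens to have a small nonzero gap, but the bound itself does not use the gap:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
lam = np.array([6., 5.5, 3., 2.5, 2., 1.5, 1., 0.5])
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = V @ np.diag(lam) @ V.T
v1 = V[:, 0]
gamma = 2 * (lam[0] - lam[-1]); eta = 1.0 / gamma
fstar = -lam[0]

# Initialize at Riemannian distance pi/6 <= pi/4 from the minimizer
w = rng.standard_normal(n); w -= (v1 @ w) * v1; w /= np.linalg.norm(w)
x = np.cos(np.pi / 6) * v1 + np.sin(np.pi / 6) * w
d0 = np.pi / 6
f0 = -x @ A @ x
C0 = 0.4

bound_ok = True
for t in range(200):
    gap = (-x @ A @ x) - fstar
    bound = (f0 - fstar + gamma / 2 * d0 ** 2) / (C0 * t + 1)   # theorem's bound
    bound_ok = bound_ok and (gap <= bound + 1e-12)
    g = -2 * (A @ x - (x @ A @ x) * x)
    v = -eta * g
    nv = np.linalg.norm(v)
    if nv > 0:
        x = np.cos(nv) * x + np.sin(nv) * v / nv
        x /= np.linalg.norm(x)

final_gap = (-x @ A @ x) - fstar
```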
Several recent experiments have exhibited the loophole-free violation of a
Bell inequality \cite{Hensen,Shalm,Giustina}. The result has
been interpreted as the ``death by experiment for local realism'', this
being the hypothesis that ``the world is made up of real stuff, existing in
space and changing only through local interactions ...about the most
intuitive scientific postulate imaginable''\cite{Wiseman}. In this paper I
will argue that the claimed death of local realism requires some refinements.
It is common wisdom that the most celebrated supporter of local realism was
Albert Einstein, whence recalling his views may clarify the subject. His
opinions about realism will not be discussed here (see, e.g., \cite{Harrigan}), but it is appropriate to comment on his idea about (relativistic)
locality, stated as ``On one supposition we should, in my opinion,
absolutely hold fast: the real factual situation of the system S2 is
independent of what is done with the system S1 , which is spatially
separated from the former.''\cite{Einstein}. This quotation is usually
interpreted as Einstein's support for ``relativistic causality'', the
latter being used as a synonym of locality, as for instance in the pioneering
paper by John Bell\cite{Bell}. However this interpretation is misleading, as
explained in
the following.
Causality is commonly viewed as the assumption that the present may
influence the future, but not the past, which in (special) relativity would
mean that an event may be influenced only by events in its past light cone,
that is neither by spacelike separated events nor by events in the future
light cone. However Einstein's sentence did not exclude influences by events
in the future light cone. Indeed he was well aware that the laws of physics
do not distinguish future from past, as in the often quoted passage from his
letter of condolences upon the death of his friend Michele Besso: ``Michele
has left this strange world just before me. This is of no importance. For us
convinced physicists the distinction between past, present and future is an
illusion, although a persistent one.''\cite{EinsteinBesso}. Indeed the
concept of temporal causality, stating that an event may influence its
future but not its past, is related to our experience as living beings, but
it is alien to the laws of physics.
The main purpose of this paper is to stress that a (loophole-free) violation
of a Bell inequality does not imply influences between spacelike separated
events provided that we allow influences of the future on the past.
\section{The arrow of time vs. microscopic reversibility}
The name ``arrow of time'' was introduced by Arthur Eddington in 1927. He
wrote ``I shall use the phrase time's arrow to express this one-way property
of time which has no analogue in space''\cite{Eddington}. Thus the arrow of
time refers to the distinction between past and future that we observe in
nature. At present it is used more specifically with reference to the
problem of explaining the irreversibility that we experience, which is not
trivial taking into account that the laws of nature are invariant under time
reversal (except for a small violation in the decay of some elementary
particles like $K$ mesons that will be ignored here). There are many books
and articles devoted to (or discussing) the arrow of time, and a review is
beyond the scope of this paper, where I will only discuss a few points that
sometimes have been the source of confusion.
The existence of an arrow of time was formalized by Clausius with the
concept of entropy and its postulated increase for any spontaneous evolution
of an isolated system. The entropy was introduced in physics as a kind of
measure of the ``quality'' of energy. For instance mechanical and
gravitational energy have high quality because they may be transformed
completely into other forms, but this is not the case for heat because only a
part of it can be transformed into work (mechanical energy). In the particular
case of energy transfer taking place exclusively in the form of heat, a
simple quantitative calculation of the entropy change, $\Delta S,$ of a
system is possible, namely
\begin{equation}
\Delta S=\int \frac{dQ}{T}, \label{2.2}
\end{equation}
$Q$ being the heat entering the system and $T$ the absolute temperature. For
other cases the calculation is more involved. Clausius realized that in the
processes that are possible in the laboratory the total entropy never
decreases. This led him to postulate that entropy never decreases in closed
systems, which was the first scientific statement about the existence of an
arrow of time. For instance if we put a hot body in contact with a cold one
the heat goes spontaneously from the former to the latter until they have
equal temperature. This fits with the increase of entropy, as is easily derived
from eq.$\left( \ref{2.2}\right) $ leading to
\[
\Delta S=\int \left( \frac{dQ}{T_{cold}}-\frac{dQ}{T_{hot}}\right) >0,
\]
which is positive taking into account that $dQ>0$ ($dQ<0)$ is defined as
energy that enters (leaves) the body and obviously $T_{hot}>T_{cold}.$
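As a numerical illustration (the heat and temperature values below are example figures of my own, not taken from the text), the Clausius balance for heat flowing from a hot body to a cold one can be computed directly:

```python
# Entropy balance for heat Q flowing between two bodies whose temperatures
# stay approximately constant during the transfer. The numbers below are
# illustrative example values, not taken from the text.

def entropy_change(Q, T_hot, T_cold):
    """Total dS = Q/T_cold - Q/T_hot for heat Q (J) leaving the hot body
    at T_hot (K) and entering the cold body at T_cold (K)."""
    return Q / T_cold - Q / T_hot

dS = entropy_change(Q=1000.0, T_hot=400.0, T_cold=300.0)
print(dS)  # positive whenever T_hot > T_cold
```

The result is positive for any choice of $T_{hot}>T_{cold}$, in agreement with the inequality above.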
The fundamental step towards the solution of the apparent contradiction
between the \textit{irreversibility of spontaneous (macroscopic) evolution}
vs. \textit{reversibility of the fundamental (microscopic) laws of nature}
was made by Boltzmann, who gave a microscopic interpretation of entropy.
Boltzmann realized that irreversibility is always associated with macroscopic
systems and he proposed that it is due to the tendency towards more probable
states in the spontaneous evolution. Then Boltzmann introduced a relation
between the entropy, $S$, of a composite system and the number $N$ of
microscopic states of the system that correspond to a given macroscopic
state, that is
\begin{equation}
S=k_{B}\log N, \label{2.3}
\end{equation}
where $k_{B}$ is today named the Boltzmann constant. A standard example is a box
divided in two equal parts by a wall with a small hole on it, filled with an
amount of gas consisting of $n$ molecules. If we define a microscopic state
by specifying which gas molecules are present in each part of the box, there
is only one state with all the molecules on the left (or on the right). In this
state $N=1$ and eq.$\left( \ref{2.3}\right) $ gives $S=0.$ If at time $t=0$
the box starts in this state, after some time $t=T$ there will be several,
say $j,$ molecules on the left and $n-j$ on the right. Hence the number of
microstates equals the number of ways to choose $j$ molecules amongst $n$,
that is
\[
N=\frac{n!}{j!(n-j)!}>1\Rightarrow S>0.
\]
The most probable state will correspond to $j=n/2$ whence,
\[
S_{\max }=k_{B}\log N_{\max }\simeq k_{B}n\log 2.
\]
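Boltzmann's formula for the box example can be evaluated directly; the sketch below uses $n=100$ molecules as an illustrative choice of my own:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(n, j):
    """S = k_B log N with N = n!/(j!(n-j)!) microstates:
    j of the n molecules are on the left side of the box."""
    return k_B * math.log(math.comb(n, j))

n = 100  # illustrative number of molecules
print(boltzmann_entropy(n, 0))       # 0.0: a single microstate (N = 1)
print(boltzmann_entropy(n, n // 2))  # most probable macrostate
print(k_B * n * math.log(2))         # the Stirling estimate k_B n log 2
```

The exact value for $j=n/2$ is already within a few percent of the estimate $k_{B}n\log 2$ at $n=100$.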
Boltzmann's work was one of the great achievements in the history of
physics, but it did not solve the problem of the arrow of time as was soon
pointed out by several authors, in particular Loschmidt and Poincar\'{e}. I
think that in order to clarify the subject it is important to distinguish
between the evolution of systems in experiments made in the laboratory and
what happens on Earth.
\section{Evolution of closed systems in the laboratory}
I will speak about LAB experiments in a wide sense, including processes
induced by human beings, like those of the chemical industry. In any case I will
refer only to evolution of isolated systems because it is obvious that
evolution subject to external influences may present irreversibility induced
by them. In the example of the box, commented in the previous section, the
irreversibility is related to
\[
S(T)>S(0).
\]
The Loschmidt argument applied to this example is as follows. If the system
had been isolated since well before $t=0$, then at time $t=-T$ the
gas would be filling both parts of the box. In fact the evolution backwards
in time between $t=0$ and $t=-T$ would be identical to the evolution forward
in time between $t=0$ and $t=T$ with all velocities reversed at time
$t=0$. Therefore in terms of the entropy we may write
\[
S(-T)=S(T)>S(0).
\]
The reversal of velocities is appropriate for classical mechanical systems
consisting of particles. In quantum physics the complex conjugation of the
wavefunction is substituted for the velocity reversal.
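The exact reversibility of microscopic dynamics can be illustrated with any invertible map. The discrete map below (Arnold's cat map, an illustrative choice of my own, not a model from the text) plays the role of reversible dynamics, and running its exact inverse plays the role of the velocity reversal:

```python
# Microscopic reversibility illustrated with an invertible discrete map
# (Arnold's cat map on an N x N lattice -- an illustrative stand-in for
# reversible microscopic dynamics, not a model taken from the text).

N = 101

def step(state):
    x, y = state
    return ((2 * x + y) % N, (x + y) % N)

def step_back(state):  # exact inverse of `step`
    x, y = state
    return ((x - y) % N, (2 * y - x) % N)

state0 = (3, 7)
state = state0
for _ in range(50):            # evolve "forward in time"
    state = step(state)
for _ in range(50):            # run the inverse dynamics, the analogue
    state = step_back(state)   # of reversing all velocities
print(state == state0)  # True: the evolution retraces itself exactly
```

However chaotic the forward evolution looks, the reversed dynamics brings the state back exactly, which is the content of the Loschmidt argument.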
Any reader will immediately argue that nobody has ever seen an isolated box
with a quantity of gas having a homogeneous density (say at time $t=-T$)
evolve spontaneously towards a state with all the gas concentrated in one part
of the box (at time $t=0$). This is true, but the point is that we, human
beings, are able to prepare a box having gas in only one part and then
observe the evolution towards the future, $t=T$, but we are unable to
observe, towards the past, $t=-T$, the evolution of an isolated system
prepared at time $t=0$. \textit{That is, the irreversibility in the LAB is
not a feature of the material systems themselves, but it derives from our
fundamental irreversibility as living beings.} This irreversibility
constrains us to observe what happens at times $t>0$ to a system prepared by
us at time $t=0$, but we are unable to prepare an isolated system in such a
way that we could observe its evolution towards the past. In section 5 we
shall see that apparently it is possible to derive the existence of
influences ``towards the past'' from actual experiments.
The conclusion is that closed (isolated) systems are reversible, this being
a straightforward consequence of the reversibility of the fundamental laws
of physics. In particular if a system is isolated between times $-T$ and $T$
and at time $t=0$ it is out of equilibrium, then it will be closer to
equilibrium both at time $T$ and at time $-T$. Of course this does not apply
to the Earth as a whole or to the living beings, including humans, because
they are not isolated. This point will be commented in more detail in the
next section.
\section{The irreversibility of the Earth, the living beings and the
universe.}
Explaining the irreversibility of living beings, including humans, is rather
trivial once we know that the universe is expanding. The universe may be
assumed an isolated system, governed by reversible laws, but its initial
state was very special. In that state it was far from equilibrium and
consequently its evolution has been irreversible. The expansion combined
with the attractive nature of gravity caused the initial almost
homogeneous plasma to evolve, giving rise to galaxies and stars. The stars
frequently have associated planets, giving rise to solar systems. Every
planet receives energy from its star, this causing irreversible evolution.
Incidentally, in a stationary universe the existence of (irreversible) living
beings would be difficult to explain except by introducing additional
assumptions.
Our solar system was formed about 5 billion years ago. After some period
the Earth, initially very hot, became cold arriving at an approximate
stationary state with a separation of the solid crust, the sea and the
atmosphere. In that cold Earth life emerged and then evolved until the
appearance of human beings. The evolution in that period has been clearly
irreversible and the reason is obvious. The (stationary) Earth is not an
isolated system. Aside from minor perturbations, the main cause of
irreversibility is the fact that it is receiving energy at high temperature
($T_{in}\simeq 5800K$) from the Sun and sending away a similar power by
radiation at lower temperature ($T_{out}\simeq 300K$). This produces a net
increase of entropy of the universe at a rate
\[
\frac{dS}{dt}=\frac{W}{T_{out}}-\frac{W}{T_{in}}>0,
\]
where $W$ is the average power received from the Sun or emitted by the Earth
to outer space. The irreversibility of the Earth is responsible for the
irreversibility of the living beings, including us. That is, life on Earth is
an irreversible process because living beings are interacting with the
environment and the process increases the entropy.
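The rate $dS/dt$ can be estimated numerically with the temperatures quoted above; the intercepted solar power $W$ below is a rough order-of-magnitude figure of my own, not taken from the text:

```python
# Rough estimate of the entropy production rate dS/dt = W/T_out - W/T_in.
# W is the solar power intercepted by the Earth; the value used here is an
# illustrative order-of-magnitude figure, not taken from the text.

W = 1.7e17      # W, intercepted solar power (rough estimate)
T_in = 5800.0   # K, effective temperature of the incoming solar radiation
T_out = 300.0   # K, temperature of the radiation emitted by the Earth

dS_dt = W / T_out - W / T_in
print(f"{dS_dt:.2e} W/K")  # positive and of order 1e14 W/K
```

Since $T_{out}\ll T_{in}$, the outgoing term dominates and the entropy production is manifestly positive.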
In summary all closed (isolated) systems are reversible. However any
macroscopic system that at a given time, say $t=0$, is out of equilibrium
would evolve towards equilibrium both towards the past and the future, as
long as the system remains isolated. This implies that, if we study the
system only towards the future, it will evolve irreversibly approaching
equilibrium. This is the case for the universe as a whole, which we can study
only \textit{after} the big bang.
\section{Acausality in Bell experiments}
The consequence of the facts commented in the previous sections is that
locality interpreted as \textit{relativistic (temporal) causality does not
follow from relativity theory} because the theory is time reversal
invariant. Therefore if two events $A$ and $B$ are timelike separated it is
equally correct to say that $A$ is the cause of $B$ or that $B$ is the cause
of $A$. That is, the fact that $B$ happens later or earlier than $A$ is
irrelevant. Thus in physics we should speak about correlation between
timelike events rather than causality. In sharp contrast, in biology or
social sciences the concept of causality attached to time ordering is very
relevant, the systems studied by these sciences being essentially open and,
consequently, irreversible.
In a Bell experiment\cite{Shalm},\cite{Giustina} there are two parties,
Alice and Bob, measuring some observable property of one particle each. I
will label $A(B)$ the observable measured by Alice (Bob). Typically $A$ may
be one of two possible photon polarizations and similar for $B$. I shall
label the results of the measurements $a$ and $b$ respectively. Pairs of
particles in an appropriate (entangled) state are produced in the source.
Bell's proposal for the expectation of the product of observables,
$\left\langle AB\right\rangle $, in what he named ``local hidden variables
(LHV) model'', was
\begin{equation}
\left\langle AB\right\rangle =\int \rho (\lambda )a\left( A,\lambda \right)
b\left( B,\lambda \right) d\lambda , \label{4.0}
\end{equation}
where $\lambda $ labels the state produced in the source (typically two
entangled photons), $\rho $ is the probability density of states and $a(b)$
is the result obtained by Alice (resp. Bob), typically $a=1$ (detection) or
$a=0$ (absence of detection), and similarly for $b$. (Bell considered
deterministic LHV models\cite{Bell}, but the generalization to probabilistic
models is straightforward\cite{Santos}). Bell pointed out that the result $a$
should not depend on what Bob is measuring, say $B$, and similarly $b$
should not depend on $A$. In loophole-free tests these conditions are
carefully implemented via performing the measurements by Alice and Bob in
spacelike separated regions. This requirement was strongly supported by
Einstein in the paragraph that we reproduce in the introduction of this paper
\cite{Einstein}. However Bell also demanded that $\rho $ should not depend
on $A$ or $B$ (neither on $a$ or $b$), the reason being the fact that the
measurements are in the future light cone of the state production in the
source, a condition that Bell included under the concept of locality. In
order to see more clearly how Bell's locality condition agrees with
(relativistic) causality, we may substitute $\sigma \left( \lambda ,\mu
\right) $ for $\rho (\lambda )$ in eq.$\left( \ref{4.0}\right) $, where $\mu
$ represents all relevant events in the backward light cone with influence
on the state preparation (e.g. the properties of the laser and the nonlinear
crystal where the entangled photon pair is produced). Therefore Bell's
correlation formula eq.$\left( \ref{4.0}\right) $ may be written more
explicitly
\begin{equation}
\left\langle AB\right\rangle =\int d\lambda \int d\mu \sigma (\lambda ,\mu
)a\left( A,\lambda \right) b\left( B,\lambda \right) . \label{4.2}
\end{equation}
It is easy to see that eq.$\left( \ref{4.2}\right) $ implies eq.$\left( \ref
{4.0}\right) $ provided that we identify
\begin{equation}
\int d\mu \sigma (\lambda ,\mu )=\rho (\lambda ). \label{4.3}
\end{equation}
However \textit{influences from the forward light cone are not forbidden by
relativity theory}. Thus we should substitute
\[
\left\langle AB\right\rangle =\int d\lambda \int d\mu \sigma (\lambda ,\mu
,a,b)a\left( A,\lambda \right) b(B,\lambda )
\]
for eq.$\left( \ref{4.2}\right) ,$ thus including the possible influence of
the most relevant events in the future of the state preparation, namely the
absorption, or not, of the corresponding photon by Alice or Bob. With the
identification eq.$\left( \ref{4.3}\right) $ this becomes
\begin{equation}
\left\langle AB\right\rangle =\int \rho (\lambda ,a,b)a\left( A,\lambda
\right) b\left( B,\lambda \right) d\lambda , \label{4.1}
\end{equation}
rather than eq.$\left( \ref{4.0}\right) ,$ as appropriate for models of
correlation. It may be interpreted saying that the probability of the state
in the source depends on whether the photons will be detected or not, which
of course depends on what measurements Alice and Bob are to perform, this
being governed by the results of two independent random
generators\cite{Shalm},\cite{Giustina}. In actual experiments the state
created in the source is spacelike separated from both random generations
and these are spatially separated from each other. However both the state
production in the source and Alice's random generation are in the past light
cone of Alice's measurement, and similarly for Bob. Hence eq.$\left( \ref{4.1}\right) $
is consistent with no influences between spacelike separated events, which
should be the real meaning of locality.
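As an illustrative numerical check (the response functions, the distribution of $\lambda $ and the measurement angles below are choices of my own, not taken from the experiments), any model of the form of eq.$\left( \ref{4.0}\right) $ obeys the CHSH form of the Bell inequality, $|E(A,B)+E(A,B^{\prime })+E(A^{\prime },B)-E(A^{\prime },B^{\prime })|\leq 2$, whereas quantum mechanics reaches $2\sqrt{2}$:

```python
import math
import random

random.seed(0)

def sign(x):
    return 1.0 if x >= 0.0 else -1.0

# Deterministic local responses a(A, lam), b(B, lam) in {-1, +1}
# (illustrative choices; any such pair respects the CHSH bound):
def a(A, lam):
    return sign(math.cos(lam - A))

def b(B, lam):
    return sign(math.cos(lam - B))

def E(A, B, samples=100_000):
    """Monte Carlo estimate of <AB> = integral of rho(lam) a(A,lam) b(B,lam),
    with rho uniform on [0, 2*pi)."""
    total = 0.0
    for _ in range(samples):
        lam = random.uniform(0.0, 2.0 * math.pi)
        total += a(A, lam) * b(B, lam)
    return total / samples

# Angles chosen so that this particular local model saturates the bound:
A0, A1 = 0.0, math.pi / 2
B0, B1 = math.pi / 4, -math.pi / 4
S = abs(E(A0, B0) + E(A0, B1) + E(A1, B0) - E(A1, B1))
print(S)  # close to 2, the LHV bound; quantum mechanics reaches 2*sqrt(2)
```

For this model the exact correlation is $E(A,B)=1-2|A-B|/\pi $, so the chosen angles give $S=2$ exactly; no choice of angles or local responses can exceed the bound, which is the content of Bell's theorem.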
The experiments\cite{Shalm},\cite{Giustina} have refuted
eq.$\left( \ref{4.0}\right) $ because they have violated its consequence,
namely the Bell inequality. In sharp contrast, a Bell inequality cannot be
derived from eq.$\left( \ref{4.1}\right) $. Therefore the theoretical
arguments provided in this paper show that the empirical evidence supports
the thesis that eq.$\left( \ref{4.1}\right) $ rather than
eq.$\left( \ref{4.0}\right) $ is the correct starting point to understand
correlations, including quantum correlations associated with entanglement.
Consequently eq.$\left( \ref{4.1}\right) $ should be the basis for hidden
variables models consistent with relativity theory.
Many people are aware of the fact that the (loophole-free) violation of a
Bell inequality seems to create a conflict with relativity theory. The most
popular escapes from this conclusion are the following \cite{Brunner}. Some
authors simply reject the need (or even the possibility) of hidden variables
models. For other people the solution is more sophisticated: they
distinguish superluminal influences from superluminal signals and assume
that only the latter are forbidden by relativity theory. Indeed superluminal
signals are also forbidden by quantum mechanics (no-signalling theorem).
Other less popular solutions are absolute determinism or the assumption
that some (causal) common influence correlates the random generations with
the system preparation in the source. The latter would amount to assuming that
$\lambda $ is correlated with $A$ and/or $B$ due to some events in the
common backward light cone, a possibility certainly compatible with
relativity but more implausible than eq.$\left( \ref{4.1}\right) $ in my
opinion.
In conclusion I propose that the loophole-free violation of the Bell
inequality should be interpreted as showing that an event may influence
other events in its \textit{past} light cone, whence
eq.$\left( \ref{4.1}\right) $, rather than the more restrictive
eq.$\left( \ref{4.0}\right) ,$ should be the basis for hidden variables
models compatible with relativity.
Eq.$\left( \ref{4.1}\right) $ might be interpreted in ``human language'' as
saying that the system in the source ``knows'' in advance whether every
photon will be ``later'' detected or not. This statement sounds rather
counterintuitive, but it fits in relativity theory. In contrast suggesting
that influences may travel with superluminal speed may sound less
counterintuitive, but in my opinion violates relativity theory.
An interpretation of quantum mechanics that takes into account the possible
influence of the future on the past has been proposed under the name
\textit{transactional interpretation}\cite{Cramer}. The relation of that
interpretation with the proposal made here will not be discussed further in
this paper.
\section{Introduction}
Homomorphism preservation theorems relate the syntactic shape of a sentence with the semantic property of being preserved under homomorphisms between structures.
Recall that a first-order sentence $\phi$ in a vocabulary $\tau$ is said to be \emph{preserved under homomorphisms} if, whenever there is a homomorphism of $\tau$-structures $\As\to \Bs$ and ${\As\vDash \phi}$, then also $\Bs\vDash\phi$.
Further, an \emph{existential positive sentence} is a first-order sentence that uses only the connectives $\vee,\wedge$ and the quantifier $\exists$.
The following classical result, known as the \emph{homomorphism preservation theorem}, is due to {\L}o{\'s}, Lyndon and Tarski \cite{Los1955, Lyndon1959, Tarski1955} and applies to arbitrary (first-order) vocabularies.
\begin{theorem}\label{th:HPT}
A first-order sentence is preserved under homomorphisms if, and only if, it is equivalent to an existential positive sentence.
\end{theorem}
The homomorphism preservation theorem is a fairly straightforward consequence of the compactness theorem, see e.g.~ \cite[Lemma~3.1.2]{TZ2012}. However, applying the compactness theorem means that we lose control over the syntactic shape of an existential positive sentence $\psi$ that is equivalent to a sentence $\phi$ preserved under homomorphisms. In particular, it is an ineffective approach if we want to determine to which extent the passage from $\phi$ to $\psi$ increases the ``complexity'' of the former sentence.
One way to measure the complexity of a formula $\phi$ is in terms of its \emph{quantifier rank}, i.e.~ the maximum number of nested quantifiers appearing in $\phi$.
Rossman's \emph{equirank homomorphism preservation theorem}~\cite{Rossman2008}, which applies to relational vocabularies (i.e.~ vocabularies that contain no constant or function symbols), shows that it is possible to find a $\psi$ whose quantifier rank is less than or equal to the quantifier rank of~$\phi$:
\begin{theorem}\label{th:equirank-HPT}
A first-order sentence of quantifier rank at most $k$ is preserved under homomorphisms if, and only if, it is equivalent to an existential positive sentence of quantifier rank at most $k$.
\end{theorem}
This is a considerable improvement on the classical homomorphism preservation theorem and was proved by Rossman on the way to his celebrated \emph{finite homomorphism preservation theorem}, stating that Theorem~\ref{th:HPT} admits a relativisation to finite structures.\footnote{The finite homomorphism preservation theorem is a major result in finite model theory, as well as a surprising one given that most preservation theorems fail when restricted to finite structures. Note that the finite homomorphism preservation theorem and the classical one are incomparable results.} In the proof of the equirank homomorphism preservation theorem, the application of the compactness theorem is replaced with a model construction which is similar, in spirit, to the construction of a saturated elementary extension of a given structure.
The main contribution of this paper consists in laying out a categorical framework in which ``equi-resource'' homomorphism preservation theorems can be proved in an axiomatic fashion.
In \cite{abramsky2017pebbling,DBLP:conf/csl/AbramskyS18}, \emph{game comonads} were introduced to capture in a structural way a number of logic fragments and corresponding combinatorial and game-theoretic notions, both at the level of finite and infinite structures. For a recent survey article, see~\cite{emerging2022}. The template of game comonads was axiomatised in~\cite{AR2021icalp}, see also the extended version~\cite{AR2022}, by means of the notion of \emph{arboreal category}.
Our proof strategy consists in establishing abstract homomorphism preservation theorems at the level of arboreal categories and then instantiating these results for specific choices of game comonads.
We thus obtain novel equi-resource homomorphism preservation theorems for modal and guarded logics (Theorems~\ref{th:hpt-graded-modal-logic} and~\ref{th:hpt-guarded}, respectively), along with relativisations to appropriate subclasses of structures---e.g.~ the class of finite structures (Theorems~\ref{th:hpt-graded-modal-logic-finite} and~\ref{th:hpt-guarded-logics-finite}, respectively). Further, we derive a relativisation result (Theorem~\ref{t:equirank-hpt-relative}) which refines Rossman's equirank homomorphism preservation theorem.
This paper is organised as follows. In Section~\ref{s:prelim-game-comonads} we provide a brief introduction to game comonads, and in Section~\ref{s:prelim-arboreal} we recall the necessary definitions and facts concerning arboreal categories. Homomorphism preservation theorems are recast into categorical statements and proved at the level of arboreal categories in Sections~\ref{s:logics-HPTs} and~\ref{s:axiomatic}. Finally, Section~\ref{s:proof-mc} contains the proof of our main technical result, namely Theorem~\ref{t:model-construction}.
Throughout this article, we shall assume the reader is familiar with the basic notions of category theory; standard references include e.g.~ \cite{adamek2004abstract,MacLane}.
\section{Logic Fragments and Game Comonads}\label{s:prelim-game-comonads}
We shall mainly deal with two types of vocabularies:
\begin{itemize}
\item \emph{Relational vocabularies}, i.e.~ first-order vocabularies that contain no function or constant symbols.
\item \emph{Multi-modal vocabularies}, i.e.~ relational vocabularies in which every relation symbol has arity at most $2$.
\end{itemize}
Multi-modal vocabularies will be referred to simply as \emph{modal vocabularies}. If $\sigma$ is a modal vocabulary, we can assign to each unary relation symbol $P\in \sigma$ a propositional variable $p$, and to each binary relation symbol $R\in \sigma$ modalities $\Diamond_R$ and $\Box_R$. We refer to $\sigma$-structures as \emph{Kripke structures}. For any Kripke structure $\As$, the interpretation of $P$ in $\As$, denoted by $P^{\As}$, corresponds to the valuation of the propositional variable $p$, and the binary relation $R^{\As}$ to the accessibility relation for the modalities $\Diamond_R$ and $\Box_R$.
For our running examples and intended applications, we will be interested in the following resource-bounded fragments of first-order logic (always including the equality symbol) and modal logic, for a relational vocabulary $\sigma$ and positive integer~$k$:
\begin{itemize}
\item $\FO_k$ and $\EFO_k$: these denote, respectively, the set of sentences of quantifier rank $\leq k$ in the vocabulary $\sigma$, and its existential positive fragment.
\item $\ML_k$ and $\exists^+\ML_k$: if $\sigma$ is a modal vocabulary, $\ML_k$ is the set of modal formulas of modal depth $\leq k$ in the vocabulary $\sigma$ (recall that the \emph{modal depth} is the maximum number of nested modalities in a formula). Moreover, $\exists^+\ML_k$ denotes the existential positive fragment of $\ML_k$, i.e.~ the set of formulas in $\ML_k$ that use only the connectives $\vee,\wedge$ and the diamond modalities.
\item $\ML_k(\#)$: this is the extension of $\ML_k$ with \emph{graded modalities}. Recall that graded modalities have the form $\Diamond_R^n$ and $\Box_R^n$, with $n\geq 0$, and $\As, a \models {\Diamond_R^n} \, \phi$ if there are at least $n$ $R$-successors of $a$ satisfying $\phi$ (and $\Box_R^n\phi = \neg \Diamond_R^n \neg \phi$).
\end{itemize}
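As a small illustration of the fragments just defined (the encoding of formulas and the example structures below are our own illustrative choices, not taken from the text), existential positive modal formulas can be evaluated recursively, and their truth transfers along homomorphisms of pointed Kripke structures:

```python
def holds(M, w, phi):
    """Evaluate a formula at world w of a Kripke structure M = (val, rel):
    val maps worlds to sets of propositional variables, rel maps relation
    names to sets of pairs. Formulas: ("p",) atoms, ("or"/"and", f, g),
    and diamonds ("dia", R, f) -- the existential positive fragment."""
    val, rel = M
    op = phi[0]
    if op == "or":
        return holds(M, w, phi[1]) or holds(M, w, phi[2])
    if op == "and":
        return holds(M, w, phi[1]) and holds(M, w, phi[2])
    if op == "dia":
        _, R, f = phi
        return any(holds(M, v, f) for (u, v) in rel[R] if u == w)
    return op in val[w]  # propositional variable

# Truth of existential positive formulas transfers along homomorphisms:
A = ({0: set(), 1: {"p"}}, {"R": {(0, 1)}})
B = ({"x": set(), "y": {"p"}}, {"R": {("x", "y"), ("y", "y")}})
h = {0: "x", 1: "y"}  # a homomorphism of pointed structures (A, 0) -> (B, "x")
phi = ("dia", "R", ("p",))  # modal depth 1, existential positive
print(holds(A, 0, phi), holds(B, h[0], phi))  # True True
```

This is the modal instance of preservation under homomorphisms: since $\phi$ contains no negation or box, a homomorphism maps each witness for a diamond to a witness in the target structure.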
Further logics, namely \emph{guarded logics}, will be considered in Section~\ref{ss:tame}.
Any logic fragment\footnote{We employ the nomenclature ``logic fragment'', rather than the more customary term ``theory'', as we are mainly interested in the situation where $\LL$ is defined by constraining the syntactic shape~of~sentences.} $\LL$, i.e.~ any set of first-order sentences in a vocabulary $\tau$, induces an equivalence relation $\equiv^\LL$ on $\tau$-structures defined as follows: for all $\tau$-structures $\As, \Bs$,
\[
\As \equiv^{\LL} \Bs \ \ \Longleftrightarrow \ \ \forall \phi\in\LL. \, (\As\vDash \phi \, \Leftrightarrow \, \Bs\vDash \phi).
\]
If $\LL$ consists of all first-order sentences, $\equiv^\LL$ coincides with elementary equivalence.
Syntax-free characterisations of the equivalence relations $\equiv^\LL$ play an important role in model theory. For example, the Keisler-Shelah theorem states that two $\tau$-structures are elementarily equivalent if, and only if, they have isomorphic ultrapowers. A different approach is through \emph{model comparison games}. These have a wide range of applications in model theory, see e.g.~ \cite[\S 3]{Hod93}, and are central to finite model theory where tools such as the compactness theorem and ultraproducts are not available. Model comparison games lead to a perspective which may be described as ``model theory without compactness''.
Game comonads arise from the insight that model comparison games can be seen as semantic constructions in their own right. Although we shall not employ games as a tool, we recall two examples of games to motivate the framework of game comonads.
Henceforth we shall work with a relational vocabulary $\sigma$.
Let $\As,\Bs$ be $\sigma$-structures. Both types of game are two-player games played between Spoiler and Duplicator. Whereas Spoiler aims to show that $\As$ and $\Bs$ are different, Duplicator aims to show that they are similar. Each game is played in a number of rounds:
\begin{description}
\item[Ehrenfeucht-Fra\"{i}ss\'{e}~game] In the $i$th round, Spoiler chooses an element in one of the structures and Duplicator responds by choosing an element in the other structure. After $k$ rounds have been played, we have sequences $[a_1, \ldots, a_k]$ and $[b_1, \ldots, b_k]$ of elements from $\As$ and $\Bs$ respectively. Duplicator wins this play if the ensuing relation $r\coloneqq \{(a_i, b_i) \mid 1 \leq i \leq k \}$ is a partial isomorphism. Duplicator wins the $k$-round game if they have a strategy which is winning after $i$ rounds, for all $1\leq i\leq k$.
\item[Bisimulation game] Suppose $\sigma$ is a modal vocabulary. The game is played between pointed Kripke structures $(\As, a)$ and $(\Bs, b)$, where $a\in \As$ and $b\in \Bs$. The initial position is $(a_0,b_0)=(a,b)$. In the $i$th round, with current position $(a_{i-1},b_{i-1})$, Spoiler chooses one of the two structures, say $\As$, a binary relation symbol $R$ in $\sigma$, and an element $a_{i}\in \As$ such that $(a_{i-1},a_i)\in R^{\As}$. Duplicator responds by choosing an element in the other structure, say $b_i\in \Bs$, such that $(b_{i-1},b_i)\in R^{\Bs}$. If Duplicator has no such response, they lose. Duplicator wins the $k$-round game if, for all unary relation symbols $P$ in $\sigma$, we have $P^{\As}(a_i) \Leftrightarrow P^{\Bs}(b_i)$ for all $0\leq i\leq k$.
\end{description}
Assume the vocabulary $\sigma$ is finite. The classical Ehrenfeucht-Fra\"{i}ss\'{e}~theorem \cite{Ehrenfeucht1960,Fraisse1954} states that Duplicator has a winning strategy in the $k$-round Ehrenfeucht-Fra\"{i}ss\'{e}~game played between $\As$ and $\Bs$ if, and only if, $\As$ and $\Bs$ satisfy the same first-order sentences of quantifier rank at most $k$. Similarly, Duplicator has a winning strategy in the $k$-round bisimulation game played between pointed Kripke structures $(\As,a)$ and $(\Bs, b)$ if, and only if, $(\As,a)$ and $(\Bs, b)$ satisfy the same modal formulas of modal depth at most $k$~\cite{HM1980}.
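The $k$-round bisimulation game can be decided by a direct recursion. The sketch below uses our own encoding of Kripke structures and assumes both structures use the same relation names; the example pairs a reflexive point with a finite chain, which Duplicator can match for $2$ rounds but not for $3$ (the reflexive point satisfies $\Diamond\Diamond\Diamond\top$, a formula of modal depth $3$, while the chain does not):

```python
def duplicator_wins(A, B, a, b, k):
    """Duplicator wins the k-round bisimulation game from position (a, b)
    iff the atoms agree and every Spoiler move in either structure has a
    reply. A structure is a pair (val, rel): val maps worlds to sets of
    propositional variables, rel maps relation names to sets of pairs."""
    valA, relA = A
    valB, relB = B
    if valA[a] != valB[b]:
        return False
    if k == 0:
        return True
    for Rname in relA:  # assumes both structures share relation names
        succA = [y for (x, y) in relA[Rname] if x == a]
        succB = [y for (x, y) in relB.get(Rname, set()) if x == b]
        for y in succA:  # Spoiler moves in A, Duplicator answers in B
            if not any(duplicator_wins(A, B, y, z, k - 1) for z in succB):
                return False
        for z in succB:  # Spoiler moves in B, Duplicator answers in A
            if not any(duplicator_wins(A, B, y, z, k - 1) for y in succA):
                return False
    return True

A = ({"a": set()}, {"R": {("a", "a")}})                      # reflexive point
B = ({"b0": set(), "b1": set(), "b2": set()},                # chain of length 2
     {"R": {("b0", "b1"), ("b1", "b2")}})
print(duplicator_wins(A, B, "a", "b0", 2))  # True
print(duplicator_wins(A, B, "a", "b0", 3))  # False
```

In line with the theorem just quoted, the two pointed structures agree on all modal formulas of modal depth at most $2$ but are separated at depth $3$.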
\begin{remark}
In both Ehrenfeucht-Fra\"{i}ss\'{e}~and bisimulation games, the resource parameter is the number of rounds. This need not be the case in general. For instance, the resource parameter for pebble games \cite{Barwise1977,Immerman1982}, which correspond to finite-variable fragments of first-order logic, is the number of pebbles available to the players.
\end{remark}
Next, we introduce the comonads corresponding to Ehrenfeucht-Fra\"{i}ss\'{e}~and bisimulation games, respectively.
For each $\sigma$-structure $\As$, denote by $\Ek(\As)$ the set of all non-empty sequences of length at most $k$ of elements from $\As$. In other words, $\Ek(\As)$ is the set of all plays in $\As$ in the $k$-round Ehrenfeucht-Fra\"{i}ss\'{e}~game. The interpretations of the relation symbols can be lifted from $\As$ to $\Ek(\As)$ as follows. Let $\epsilon_{\As}\colon \Ek(\As) \to \As$ be the function sending a sequence to its last element. For each relation symbol $R\in\sigma$ of arity~$j$, we define $R^{\Ek(\As)}$ to be the set of all tuples $(s_1,\ldots,s_j)\in \Ek(\As)^j$ such that:
\begin{enumerate}[label=(\roman*)]
\item The sequences $s_1,\ldots,s_j$ are pairwise comparable in the prefix order.
\item $(\epsilon_{\As}(s_1),\ldots,\epsilon_{\As}(s_j))\in R^{\As}$.
\end{enumerate}
For every homomorphism $f\colon \As\to \Bs$, let
\[
\Ek(f)\colon \Ek(\As)\to \Ek(\Bs), \ \ [a_1,\ldots,a_l]\mapsto [f(a_1),\ldots, f(a_l)].
\]
This yields a comonad (in fact, a \emph{family} of comonads indexed by $k>0$) on the category $\mathbf{Struct}(\sg)$ of $\sigma$-structures and their homomorphisms, known as the \emph{Ehrenfeucht-Fra\"{i}ss\'{e}~comonad}~\cite{DBLP:conf/csl/AbramskyS18}. The underlying functor of this comonad is $\Ek\colon \mathbf{Struct}(\sg)\to \mathbf{Struct}(\sg)$, the counit is $\epsilon$, and the comultiplication at $\As$ is the homomorphism
\[
\Ek(\As)\to \Ek\Ek(\As), \ \ [a_1,\ldots,a_l]\mapsto [[a_1],[a_1,a_2],\ldots,[a_1,\ldots,a_l]].
\]
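Ignoring the relational structure, the underlying construction is the truncated non-empty list comonad. As an informal sanity check, outside the paper's formalism, the following Python sketch implements the counit and comultiplication above and verifies the comonad laws pointwise on a two-element carrier; plays are encoded as tuples.

```python
# Illustrative sketch (ignoring the relational structure): on carriers,
# Ek is the truncated non-empty list comonad.  Plays are encoded as tuples.

from itertools import product

def Ek_carrier(A, k):
    """All non-empty sequences of length at most k over the carrier A."""
    return [s for l in range(1, k + 1) for s in product(A, repeat=l)]

def counit(s):
    """epsilon: the last element of a play."""
    return s[-1]

def comult(s):
    """delta: the sequence of non-empty prefixes of a play."""
    return tuple(s[:i] for i in range(1, len(s) + 1))

# Pointwise check of the comonad laws on a small carrier:
for s in Ek_carrier({0, 1}, 3):
    assert counit(comult(s)) == s                        # counit after delta
    assert tuple(counit(p) for p in comult(s)) == s      # Ek(counit) after delta
    assert comult(comult(s)) == tuple(comult(p) for p in comult(s))  # coassoc.
```

Note that $\delta$ lands in sequences of length at most $k$ again, since a play has at most $k$ non-empty prefixes.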
A similar construction applies to $k$-round bisimulation games.
Suppose $\sigma$ is a modal vocabulary and let $(\As,a)$ be a pointed Kripke structure. We define a Kripke structure $\Mk(\As,a)$ whose carrier is the set of all paths $p$ of length at most $k$ starting from~$a$:
\[ a \xrightarrow{R_1} a_1 \xrightarrow{R_2} a_2 \to \cdots \xrightarrow{R_n} a_n \]
where $R_1, \dots, R_n$ are binary relation symbols in $\sigma$.
If $P\in\sigma$ is unary, then $P^{\Mk(\As,a)}$ is the set of paths $p$ whose last element $a_n$ belongs to $P^{\As}$. For a binary relation symbol $R$ in $\sigma$, $R^{\Mk(\As,a)}$ is the set of pairs of paths $(p,p')$ such that $p'$ is obtained by extending $p$ by one step along $R$. The distinguished element of the Kripke structure $\Mk(\As,a)$ is the trivial path $\langle a\rangle$ of length~$0$, and the function $\epsilon_{(\As,a)}\colon (\Mk(\As,a),\langle a\rangle) \to (\As,a)$ sending a path to its last element is a morphism of pointed Kripke structures. By an argument similar to the one above, the assignment $(\As,a)\mapsto (\Mk(\As,a),\langle a\rangle)$ can be upgraded to a comonad on the category $\CSstar$ of pointed Kripke structures and their homomorphisms, the \emph{modal comonad}~\cite{DBLP:conf/csl/AbramskyS18}.
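For finite structures, the carrier of $\Mk(\As,a)$ can be enumerated directly. A hypothetical Python sketch, outside the paper's formalism, with the successor relations encoded as sets of pairs and a path represented as a tuple $(a, (R_1, a_1), \ldots, (R_n, a_n))$:

```python
# Hypothetical encoding (not from the paper): `succ` maps each binary
# relation symbol to its set of pairs; a path is represented as a tuple
# (a, (R1, a1), ..., (Rn, an)).

def last(p):
    """The last element of a path."""
    return p[-1][1] if len(p) > 1 else p[0]

def paths(succ, a, k):
    """All labelled paths of length at most k starting from a."""
    result, frontier = [], [(a,)]
    for _ in range(k + 1):
        result.extend(frontier)          # paths of the current length
        frontier = [p + ((R, y),)        # extend each path by one step
                    for p in frontier
                    for R, pairs in succ.items()
                    for (x, y) in pairs
                    if x == last(p)]
    return result

# a --R--> b --R--> c, plus a --S--> b: five paths of length at most 2.
succ = {'R': {('a', 'b'), ('b', 'c')}, 'S': {('a', 'b')}}
assert len(paths(succ, 'a', 2)) == 5
assert ('a',) in paths(succ, 'a', 2)     # the trivial path
```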
In addition to the examples mentioned above, the framework of game comonads covers a number of model comparison games, cf.~ e.g.~ \cite{abramsky2017pebbling,Guarded2021,Hybrid2022,conghaile2021game,FVM2022,Paine2020}. In each case, they yield structural (syntax-free) characterisations of equivalence in the corresponding logic fragments. This will be illustrated from an axiomatic perspective in Section~\ref{s:prelim-resource-ind-arb-adj}.
\section{Arboreal Categories}\label{s:prelim-arboreal}
In this section, we recall from~\cite{AR2022} the basic definitions and facts concerning arboreal categories.
All categories under consideration are assumed to be locally small and \emph{well-powered}, i.e.~ every object has a \emph{small} set of subobjects (as opposed to a proper class).
\subsection{Proper factorisation systems}
Given arrows $e$ and $m$ in a category $\C$, we say that $e$ has the \emph{left lifting property} with respect to $m$, or that $m$ has the \emph{right lifting property} with respect to $e$, if for every commutative square as on the left-hand side~below
\begin{equation*}
\begin{tikzcd}
{\cdot} \arrow{d} \arrow{r}{e} & {\cdot} \arrow{d} \\
{\cdot} \arrow{r}{m} & {\cdot}
\end{tikzcd}
\ \ \ \ \ \ \ \ \ \ \ \ \
\begin{tikzcd}
{\cdot} \arrow{d} \arrow{r}{e} & {\cdot} \arrow{d} \arrow{dl}[description]{d} \\
{\cdot} \arrow{r}{m} & {\cdot}
\end{tikzcd}
\end{equation*}
there is an arrow $d$ such that the right-hand diagram above commutes.
For any class $\mathscr{H}$ of morphisms in $\C$, let ${}^{\pitchfork}\mathscr{H}$ (respectively $\mathscr{H}^{\pitchfork}$) be the class of morphisms having the left (respectively right) lifting property with respect to every morphism in $\mathscr{H}$.
\begin{definition}\label{def:weak-f-s}
A pair of classes of morphisms $(\Q,\M)$ in a category $\C$ is a \emph{weak factorisation system} provided it satisfies the following conditions:
\begin{enumerate}[label=(\roman*)]
\item Every morphism $f$ in $\C$ can be written as $f = m \circ e$ with $e\in \Q$ and $m\in \M$.
\item $\Q={}^{\pitchfork}\M$ and $\M=\Q^{\pitchfork}$.
\end{enumerate}
A \emph{proper factorisation system} is a weak factorisation system $(\Q,\M)$ such that each arrow in $\Q$ is epic and each arrow in $\M$ is monic.
We refer to $\M$-morphisms as \emph{embeddings} and denote them by $\emb$. $\Q$-morphisms will be referred to as \emph{quotients} and denoted by~$\epi$.
\end{definition}
Next, we state some well known properties of proper factorisation systems (cf.~ e.g.~ \cite{freyd1972categories} or~\cite{riehl2008factorization}) which will be used throughout the paper without further reference.
\begin{lemma}\label{l:factorisation-properties}
Let $(\Q,\M)$ be a proper factorisation system in $\C$. The following hold:
\begin{enumerate}[label=(\alph*)]
\item\label{compositions} $\Q$ and $\M$ are closed under compositions.
\item\label{isos} $\Q$ contains all retractions, $\M$ contains all sections, and $\Q\cap\M=\{\text{isomorphisms}\}$.
\item\label{pullbacks} The pullback of an $\M$-morphism along any morphism, if it exists, is in $\M$.
\item\label{cancellation-e} $g\circ f\in \Q$ implies $g\in\Q$.
\item\label{cancellation-m} $g\circ f\in\M$ implies $f\in\M$.
\end{enumerate}
\end{lemma}
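The prototypical proper factorisation system is (surjections, injections) on sets. The following Python sketch, an informal illustration rather than part of the abstract development, factors a finite function through its image and checks the relevant properties; functions are encoded as dictionaries.

```python
# Illustrative sketch of the prototypical proper factorisation system:
# (surjections, injections) on finite sets, with functions as dicts.

def factor(f):
    """Factor f as m . e: a surjection e onto the image, then an inclusion m."""
    image = set(f.values())
    e = dict(f)                # same assignments, corestricted to the image
    m = {y: y for y in image}  # the inclusion of the image
    return e, m

f = {'x': 0, 'y': 0, 'z': 1}
e, m = factor(f)
assert all(m[e[a]] == f[a] for a in f)   # the factorisation commutes
assert set(e.values()) == set(m)         # e is surjective onto dom(m)
assert len(set(m.values())) == len(m)    # m is injective
```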
Assume $\C$ is a category admitting a proper factorisation system $(\Q,\M)$. In the same way that one usually defines the poset of subobjects of a given object $X\in\C$, we can define the poset $\Emb{X}$ of $\M$-subobjects of $X$. Given embeddings $m\colon S\emb X$ and $n\colon T\emb X$, let us say that $m\trianglelefteq n$ provided there is a morphism $i\colon S\to T$ such that $m=n\circ i$ (if it exists, $i$ is necessarily an embedding).
\[\begin{tikzcd}
S \arrow[rightarrowtail]{r}{m} \arrow[dashed, swap]{d}{i} & X \\
T \arrow[rightarrowtail]{ur}[swap]{n} & {}
\end{tikzcd}\]
This yields a preorder on the class of all embeddings with codomain $X$. The symmetrisation~$\sim$ of~$\trianglelefteq$ is given by $m\sim n$ if, and only if, there exists an isomorphism $i\colon S\to T$ such that $m=n\circ i$. Let $\Emb{X}$ be the class of $\sim$-equivalence classes of embeddings with codomain $X$, equipped with the natural partial order $\leq$ induced by~$\trianglelefteq$. We shall systematically represent a $\sim$-equivalence class by any of its representatives. As $\C$ is well-powered and every embedding is a monomorphism, $\Emb{X}$ is a set.
For any morphism $f\colon X\to Y$ and embedding $m\colon S\emb X$, consider the (quotient, embedding) factorisation of $f\circ m$:
\[\begin{tikzcd}
S \arrow[twoheadrightarrow]{r} & \exists_f S \arrow[rightarrowtail]{r}{\exists_f m} & Y.
\end{tikzcd}\]
This yields a monotone map $\exists_f\colon \Emb{X}\to\Emb{Y}$ sending $m$ to $\exists_f m$. Note that the map $\exists_f$ is well-defined because factorisations are unique up to isomorphism. If $f$ is an embedding (or, more generally, $f\circ m$ is an embedding), $\exists_f m$ can be identified with $f\circ m$. For the following observation, cf.~ e.g.~ \cite[Lemma~2.7(a)]{AR2022}.
\begin{lemma}\label{l:emb-quo-order-embeddings}
Let $\C$ be a category equipped with a proper factorisation system and let $f\colon X\emb Y$ be an embedding in $\C$. Then $\exists_f \colon \Emb{X}\to \Emb{Y}$ is an order-embedding.
\end{lemma}
\subsection{Arboreal categories}
Let $\C$ be a category endowed with a proper factorisation system $(\Q,\M)$.
\begin{definition}
An object $X$ of $\C$ is called a \emph{path} provided the poset $\Emb{X}$ is a finite chain. Paths will be denoted by $P,Q$ and variations thereof.
\end{definition}
The collection of paths is closed under embeddings and quotients. That is, given an arrow $f\colon X\to Y$, if $f$ is an embedding and $Y$ is a path then $X$ is a path, and if $f$ is a quotient and $X$ is a path then $Y$ is a path~\cite[Lemma~3.5]{AR2022}.
A \emph{path embedding} is an embedding $P\emb X$ whose domain is a path.
We let $\Path{X}$ denote the sub-poset of $\Emb{X}$ consisting of the path embeddings.
Because paths are closed under quotients, for any arrow $f\colon X\to Y$ the monotone map $\exists_f \colon \Emb{X}\to \Emb{Y}$ restricts to a monotone map
\begin{equation}\label{eq:Path-functor}
\Path{f}\colon \Path{X}\to\Path{Y}, \ \ (m\colon P\emb X)\mapsto (\exists_f m\colon \exists_f P\emb Y).
\end{equation}
For any object $X$ of $\C$, we have a diagram with vertex $X$ consisting of all path embeddings with codomain $X$. The morphisms between paths are those making the obvious triangles commute:
\[\begin{tikzcd}[column sep=1.2em, row sep=2em]
& X & \\
P \arrow[bend left=20,rightarrowtail]{ur} \arrow[rightarrowtail]{rr} & & Q \arrow[bend right=20,rightarrowtail]{ul}
\end{tikzcd}\]
Choosing representatives in an appropriate way, this yields a cocone over the small diagram $\Path{X}$. We say that $X$ is \emph{path-generated} provided this is a colimit cocone in $\C$.
Suppose for a moment that coproducts of sets of paths exist in $\C$. An object $X$ of $\C$ is \emph{connected} if, for all non-empty sets of paths $\{P_i\mid i\in I\}$ in~$\C$, any morphism
\[
X\to \coprod_{i\in I}{P_i}
\]
factors through some coproduct injection $P_j\to \coprod_{i\in I}{P_i}$.
In order to state the definition of arboreal category, let us say that a proper factorisation system is \emph{stable} if, for any quotient $e$ and embedding $m$ with common codomain, the pullback of $e$ along $m$ exists and is a quotient.
\begin{definition}
An \emph{arboreal category} is a category $\C$, equipped with a stable proper factorisation system, that satisfies the following conditions:
\begin{enumerate}[label=(\roman*)]
\item\label{ax:colimits} $\C$ has all coproducts of sets of paths.
\item\label{ax:2-out-of-3} For any paths $P,Q,Q'$ in $\C$, if a composite $P\to Q \to Q'$ is a quotient then so is $P\to Q$.
\item\label{ax:path-generated} Every object of $\C$ is path-generated.
\item\label{ax:connected} Every path in $\C$ is connected.
\end{enumerate}
\end{definition}
\begin{remark}
Item~\ref{ax:2-out-of-3} in the previous definition is equivalent to the following \emph{2-out-of-3 condition}: For any paths $P,Q,Q'$ and morphisms
\[\begin{tikzcd}
P \arrow{r}{f} & Q \arrow{r}{g} & Q',
\end{tikzcd}\]
if any two of $f$, $g$, and $g\circ f$ are quotients, then so is the third. See \cite[Remark~3.8]{AR2022}. Moreover, item~\ref{ax:path-generated} is equivalent to saying that the inclusion functor $\Cp\hookrightarrow \C$ is dense, where $\Cp$ is the full subcategory of $\C$ defined by the paths \cite[Lemma~5.1]{AR2022}.
Finally, note that any arboreal category admits an initial object, obtained as the coproduct of the empty set, and any initial object is a path because its poset of $\M$-subobjects has a single element---namely, the equivalence class of the identity.
\end{remark}
Let $(P, {\leq})$ be a poset. A subset $C \subseteq P$ is a \emph{chain} if it is linearly ordered, and $(P,\leq)$ is a \emph{forest} if, for all $x\in P$, the set $\down x\coloneqq \{y\in P\mid y\leq x\}$ is a finite chain.
The \emph{height} of a forest is the supremum of the cardinalities of its chains.
The \emph{covering relation} $\cvr$ associated with a partial order $\leq$ is defined by $u\cvr v$ if and only if $u<v$ and there is no $w$ such that $u<w< v$.
The \emph{roots} of a forest are the minimal elements, and a \emph{tree} is a forest with at most one root.
Morphisms of forests are maps that preserve roots and the covering relation.
The category of forests is denoted by $\F$, and the full subcategory of trees by~$\T$.
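As an informal aside, the order-theoretic notions just introduced are easy to compute for finite posets. The following Python sketch, with a poset encoded as a carrier together with a set of pairs (assumed reflexive and transitive), is purely illustrative.

```python
# Illustrative sketch of the order-theoretic notions above, for a finite
# poset given by its carrier `elems` and a set of pairs `leq` (assumed
# reflexive and transitive).

def down(x, elems, leq):
    return {y for y in elems if (y, x) in leq}

def is_chain(S, leq):
    return all((x, y) in leq or (y, x) in leq for x in S for y in S)

def is_forest(elems, leq):
    return all(is_chain(down(x, elems, leq), leq) for x in elems)

def height(elems, leq):
    # in a forest, the longest chain is a principal down-set
    return max(len(down(x, elems, leq)) for x in elems)

def roots(elems, leq):
    return {x for x in elems if down(x, elems, leq) == {x}}

# The tree with branches r < a < b and r < c:
elems = {'r', 'a', 'b', 'c'}
leq = ({(x, x) for x in elems}
       | {('r', 'a'), ('r', 'b'), ('r', 'c'), ('a', 'b')})
assert is_forest(elems, leq) and roots(elems, leq) == {'r'}
assert height(elems, leq) == 3           # the chain r < a < b
```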
\begin{example}\label{ex:RE}
Let $\sigma$ be a relational vocabulary.
A \emph{forest-ordered $\sigma$-structure} $(\As, {\leq})$ is a $\sigma$-structure $\As$ equipped with a forest order $\leq$.
A morphism of forest-ordered $\sigma$-structures $f\colon (\As, {\leq}) \to (\Bs, {\leq'})$ is a $\sigma$-homomorphism $f\colon \As \to \Bs$ that is also a forest morphism. This determines a category $\R(\sigma)$.
We equip $\R(\sigma)$ with the factorisation system given by (surjective morphisms, embeddings), where an embedding is a morphism which is an embedding \textit{qua} $\sigma$-homomorphism.
Let $\RT(\sigma)$ be the full subcategory of $\R(\sigma)$ determined by those objects $(\As, {\leq})$ satisfying the following condition:
\begin{enumerate}[label=\textnormal{(E)}]
\item\label{E} If $a,b\in\As$ are distinct elements that appear in a tuple of related elements $(a_1,\ldots,a_l)\in R^{\As}$ for some $R\in\sigma$, then either $a<b$ or $b<a$.\footnote{I.e.,~ if $a$ and $b$ are adjacent in the \emph{Gaifman graph} of $\As$, then they are comparable in the forest order.}
\end{enumerate}
For each $k>0$, let $\RTk(\sigma)$ be the full subcategory of $\RT(\sigma)$ of forest-ordered structures of height $\leq k$. In \cite[Theorem~9.1]{AS2021}, it is shown that $\RTk(\sigma)$ is isomorphic to the category of coalgebras for the Ehrenfeucht-Fra\"{i}ss\'{e}~comonad $\Ek$ on $\mathbf{Struct}(\sg)$. The objects $(\As, {\leq})$ of $\RTk(\sigma)$ are forest covers of $\As$ witnessing that its \emph{tree-depth} is at most $k$ \cite{nevsetvril2006tree}.
The category $\RT(\sigma)$ is arboreal when equipped with the restriction of the factorisation system on $\R(\sigma)$. The paths in $\RT(\sigma)$ are those objects in which the order is a finite chain. Similarly, $\RTk(\sigma)$ is an arboreal category for all $k>0$, when equipped with the restriction of the factorisation system on $\R(\sigma)$. See~\cite[Examples~5.3]{AR2022}.
\end{example}
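Condition \ref{E} can be checked mechanically on finite forest-ordered structures. An illustrative Python sketch, with an ad hoc encoding of the relations and the order:

```python
# Illustrative sketch: checking condition (E), i.e. that any two distinct
# elements occurring in a common tuple of some relation are comparable in
# the forest order (encoded as a set of pairs).

def satisfies_E(relations, leq):
    for tuples in relations.values():
        for t in tuples:
            for a in t:
                for b in t:
                    if a != b and (a, b) not in leq and (b, a) not in leq:
                        return False
    return True

# Forest order with 0 < 1 and 0 < 2, but 1 and 2 incomparable:
forest_leq = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}
assert satisfies_E({'R': {(0, 1), (0, 2)}}, forest_leq)
assert not satisfies_E({'R': {(1, 2)}}, forest_leq)
```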
\begin{example}\label{ex:RM}
Assume that $\sigma$ is a modal vocabulary. Let $\RM(\sigma)$ be the full subcategory of $\R(\sigma)$ consisting of the tree-ordered $\sigma$-structures $(\As, {\leq})$ satisfying:
\begin{enumerate}[label=\textnormal{(M)}]
\item\label{M} For $a,b\in\As$, $a \cvr b$ if and only if $(a,b)\in R^{\As}$ for some unique binary relation $R$.
\end{enumerate}
For each $k>0$, the full subcategory $\RMk(\sigma)$ of $\RM(\sigma)$ consisting of the tree-ordered structures of height at most $k$ is isomorphic to the category of coalgebras for the modal comonad $\Mk$ on $\CSstar$~\cite[Theorem~9.5]{AS2021}. When equipped with the restriction of the factorisation system on $\R(\sigma)$, the category $\RM(\sigma)$ is arboreal and its paths are those objects in which the order is a finite chain. Likewise for $\RMk(\sigma)$.
\end{example}
It follows from the definition of path that, for any object $X$ of an arboreal category, the poset $\Path{X}$ is a tree; in fact, a non-empty tree. Crucially, this assignment extends to a functor into the category of trees (for a proof, see~\cite[Theorem~3.10]{AR2022}):
\begin{theorem}
Let $\C$ be an arboreal category.
The assignment $f\mapsto \Path{f}$ in equation~\eqref{eq:Path-functor} induces a functor $\Path\colon \C\to\T$ into the category of trees.
\end{theorem}
Finally, recall from~\cite[\S 5]{AR2022} the following properties of paths and posets of embeddings.
\begin{lemma}\label{l:arboreal:properties}
The following statements hold in any arboreal category $\C$:
\begin{enumerate}[label=(\alph*)]
\item\label{at-most-one-emb} Between any two paths there is at most one embedding.
\item\label{SX-complete-lattice} For all objects $X$ of $\C$, the poset $\Emb{X}$ of its $\M$-subobjects is a complete lattice.\footnote{In fact, $\Emb{X}$ is a \emph{perfect} lattice, cf.~ \cite{Raney1952} or~\cite{DP2002}.}
\item\label{join-irred} Let $X$ be an object of $\C$ and let $\U\subseteq \Path{X}$ be a non-empty subset. A path embedding $m\in\Path{X}$ is below $\bigvee{\U}\in\Emb{X}$ if, and only if, it is below some element~of~$\U$.
\end{enumerate}
\end{lemma}
If it exists, the unique embedding between paths $P,Q$ in an arboreal category is denoted by
\[
!_{P,Q}\colon P\emb Q.
\]
If no confusion arises, we simply write $!\colon P\emb Q$.
\subsection{Bisimilarity and back-and-forth systems}
Throughout this section, we fix an arbitrary arboreal category $\C$.
A morphism $f\colon X\to Y$ in $\C$ is said to be \emph{open} if it satisfies the following path-lifting property: Given any commutative square
\[\begin{tikzcd}
P \arrow[rightarrowtail]{r} \arrow[rightarrowtail]{d} & Q \arrow[rightarrowtail]{d} \arrow[dashed]{dl} \\
X \arrow{r}{f} & Y
\end{tikzcd}\]
with $P,Q$ paths, there is an arrow $Q\to X$ making the two triangles commute. (If such an arrow exists, it is automatically an embedding.)
Further, $f$ is a \emph{pathwise embedding} if, for all path embeddings $m\colon P\emb X$, the composite $f\circ m\colon P \to Y$ is a path embedding.
Combining these notions, we can define a bisimilarity relation between objects of $\C$:
\begin{definition}
Two objects $X,Y$ of $\C$ are said to be \emph{bisimilar} if there exist an object $Z$ of $\C$ and a span of open pathwise embeddings $X\leftarrow Z \rightarrow Y$.
\end{definition}
\begin{remark}
The definition of open morphism given above is a refinement of the one introduced in~\cite{JNW1993} (cf.~ \cite[\S 4.1]{AR2022} for a discussion of the relation between these notions), which is a special case of the concept of open geometric morphism between toposes~\cite{JM1994}.
\end{remark}
As we shall see next, if $\C$ has binary products, the bisimilarity relation can be characterised in terms of back-and-forth systems. Let $X,Y$ be objects of $\C$. Given $m\in\Path{X}$ and $n\in\Path{Y}$, we write $\br{m,n}$ to indicate that $\dom(m)\cong \dom(n)$.
Intuitively, the pair $\br{m,n}$ encodes a partial isomorphism between $X$ and $Y$ ``of shape $P$'', with $P$ a path.
\begin{definition}\label{def:back-and-forth}
A \emph{back-and-forth system} between objects $X$ and $Y$ of $\C$ is a set
\[
\B=\{\br{m_i,n_i}\mid m_i\in\Path{X}, \, n_i\in\Path{Y}, \, i\in I\}
\]
satisfying the following conditions:
\begin{enumerate}[label=(\roman*)]
\item\label{initial} $\br{\bot_X,\bot_Y}\in\B$, where $\bot_X,\bot_Y$ are the roots of $\Path{X}$ and $\Path{Y}$, respectively.
\item\label{forth} If $\br{m,n}\in\B$ and $m'\in\Path{X}$ are such that $m\cvr m'$, there exists $n'\in\Path{Y}$ satisfying $n\cvr n'$ and $\br{m',n'}\in\B$.
\item\label{back} If $\br{m,n}\in\B$ and $n'\in\Path{Y}$ are such that $n\cvr n'$, there exists $m'\in\Path{X}$ satisfying $m\cvr m'$ and $\br{m',n'}\in\B$.
\end{enumerate}
Two objects $X$ and $Y$ of $\C$ are said to be \emph{back-and-forth equivalent} if there exists a back-and-forth system between them.
\end{definition}
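For finite trees of path embeddings, the three conditions above can be verified directly. The following Python sketch is a simplification outside the paper's formalism: the trees $\Path{X}$ and $\Path{Y}$ are given abstractly by their covering relations and roots, and the requirement that the two components of each pair $\br{m,n}$ have isomorphic domains is taken as given.

```python
# Hypothetical encoding (not from the paper): Path(X) and Path(Y) are
# finite trees given by their covering relations (sets of pairs u -< v)
# and their roots.  A candidate system B is a set of pairs of nodes.

def is_back_and_forth(B, cvr_X, root_X, cvr_Y, root_Y):
    if (root_X, root_Y) not in B:                           # condition (i)
        return False
    for (m, n) in B:
        for (u, m2) in cvr_X:                               # (ii): forth
            if u == m and not any(v == n and (m2, n2) in B
                                  for (v, n2) in cvr_Y):
                return False
        for (v, n2) in cvr_Y:                               # (iii): back
            if v == n and not any(u == m and (m2, n2) in B
                                  for (u, m2) in cvr_X):
                return False
    return True

# Two isomorphic one-step chains admit a back-and-forth system...
assert is_back_and_forth({('x0', 'y0'), ('x1', 'y1')},
                         {('x0', 'x1')}, 'x0', {('y0', 'y1')}, 'y0')
# ...but a two-step chain against a one-step chain does not.
assert not is_back_and_forth({('x0', 'y0'), ('x1', 'y1')},
                             {('x0', 'x1'), ('x1', 'x2')}, 'x0',
                             {('y0', 'y1')}, 'y0')
```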
For a proof of the following result, see~\cite[Theorem~6.4]{AR2022}.
\begin{theorem}\label{th:bisimilar-iff-strong-back-forth}
Let $X,Y$ be objects of an arboreal category admitting a product. Then $X$ and $Y$ are bisimilar if, and only if, they are back-and-forth equivalent.
\end{theorem}
The existence of a back-and-forth system between~$X$ and~$Y$ can be equivalently described in terms of the existence of a Duplicator winning strategy in a two-player game played between~$\Path{X}$ and~$\Path{Y}$~\cite[\S 6.2]{AR2022}. Since winning strategies can be composed to yield again a winning strategy, in any arboreal category with binary products the bisimilarity relation is transitive, hence an equivalence relation.
\subsection{Resource-indexed arboreal adjunctions}\label{s:prelim-resource-ind-arb-adj}
Let $\C$ be an arboreal category, with full subcategory of paths $\Cp$. We say that $\C$ is \emph{resource-indexed} if for all positive integers $k$ there is a full subcategory $\Cp^k$ of $\Cp$ closed under embeddings\footnote{\label{fn:closure-emb}That is, for any embedding $P\emb Q$ in $\C$ with $P,Q$ paths, if $Q\in \Cp^k$ then also $P\in \Cp^k$. We shall further assume that each category $\Cp^k$ contains the initial object of $\C$.} with
\[ \Cp^1 \hookrightarrow \Cp^2 \hookrightarrow \Cp^3 \hookrightarrow \cdots \]
This induces a corresponding tower of full subcategories $\C_k$ of $\C$, where the objects of $\C_k$ are those objects of $\C$ whose cocone of path embeddings with domain in $\Cp^k$ is a colimit cocone in $\C$.
It turns out that each category $\C_k$ is arboreal. Furthermore, the paths in $\C_k$ are precisely the objects of $\Cp^k$, i.e.~ $(\C_k)_p = \Cp^k$. Cf.~ \cite[Proposition~7.6]{AR2022} and its proof.
\begin{example}\label{ex:resource-ind-arb-cat}
Consider the arboreal category $\RT(\sigma)$ from Example~\ref{ex:RE}. This can be regarded as a resource-indexed arboreal category by taking as $\Cp^k$ the full subcategory of $\RT(\sigma)$ consisting of the objects in which the order is a finite chain of cardinality $\leq k$. The generated subcategory $\C_k$ then coincides with $\RTk(\sigma)$.
Similar reasoning shows that the arboreal category $\RM(\sigma)$ from Example~\ref{ex:RM} can also be regarded as a resource-indexed arboreal category.
\end{example}
\begin{definition}
Let $\{\C_k\}$ be a resource-indexed arboreal category and let $\E$ be a category. A \emph{resource-indexed arboreal adjunction} between $\E$ and $\C$ is an indexed family of adjunctions
\[ \begin{tikzcd}
\C_k \arrow[r, bend left=25, ""{name=U, below}, "L_k"{above}]
\arrow[r, leftarrow, bend right=25, ""{name=D}, "R_k"{below}]
& \E.
\arrow[phantom, "\textnormal{\footnotesize{$\bot$}}", from=U, to=D]
\end{tikzcd}
\]
A \emph{resource-indexed arboreal cover} of $\E$ by $\C$ is a resource-indexed arboreal adjunction between $\E$ and $\C$ such that all adjunctions $L_k\dashv R_k$ are comonadic, i.e.~ for all $k>0$ the comparison functor from $\C_k$ to the category of Eilenberg-Moore coalgebras for the comonad $G_k\coloneqq L_k R_k$ is an isomorphism.
\end{definition}
\begin{example}\label{ex:res-ind-arb-cover}
Let $\sigma$ be a relational vocabulary and let $\E=\mathbf{Struct}(\sg)$. Consider the resource-indexed arboreal category $\RT(\sigma)$ in Example~\ref{ex:resource-ind-arb-cat}. For each $k > 0$, there is a forgetful functor
\[
\LE_k\colon \RTk(\sigma) \to \mathbf{Struct}(\sg)
\]
which forgets the forest order. This functor is comonadic. The right adjoint $\RE_k$ sends a $\sigma$-structure $\As$ to $\Ek(\As)$ equipped with the prefix order, and the comonad induced by this adjunction coincides with the $k$-round Ehrenfeucht-Fra\"{i}ss\'{e}~comonad $\Ek$. This gives rise to a resource-indexed arboreal cover of $\mathbf{Struct}(\sg)$ by $\RT(\sigma)$.
Similarly, if $\sigma$ is a modal vocabulary, there is a resource-indexed arboreal cover of $\CSstar$ by $\RM(\sigma)$ such that each adjunction $L^M_k \dashv R^M_k$ induces the $k$-round modal comonad $\Mk$.
\end{example}
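As an informal sanity check on the example above, outside the paper's formalism, the prefix order used by the right adjoint $\RE_k$ is indeed a forest order of height $k$: every down-set is the chain of non-empty prefixes of a play. A small Python sketch over a two-element carrier:

```python
# Illustrative sketch: the prefix order on non-empty sequences of length
# at most k is a forest order of height k (over a two-element carrier).

from itertools import product

k, A = 3, (0, 1)
carrier = [s for l in range(1, k + 1) for s in product(A, repeat=l)]

def is_prefix(s, t):
    return t[:len(s)] == s

for t in carrier:
    down_t = [s for s in carrier if is_prefix(s, t)]
    # the down-set of t is exactly the chain of its non-empty prefixes
    assert sorted(down_t, key=len) == [t[:i] for i in range(1, len(t) + 1)]

assert max(len(t) for t in carrier) == k     # the forest has height k
```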
\begin{example}\label{ex:res-ind-arb-adj}
To deal with the equality symbol in the logic, it is useful to consider resource-indexed arboreal adjunctions constructed as follows.
Let
\[\sigma^I\coloneqq \sigma\cup \{I\}
\]
be the vocabulary obtained by adding a fresh binary relation symbol~$I$ to~$\sigma$.
Any $\sigma$-structure can be expanded to a $\sigma^I$-structure by interpreting $I$ as the identity relation. This yields a fully faithful functor $J\colon \mathbf{Struct}(\sg)\to \CSplus$. The functor $J$ has a left adjoint $H\colon \CSplus\to \mathbf{Struct}(\sg)$ which sends a $\sigma^I$-structure $\As$ to the quotient of the $\sigma$-reduct of $\As$ by the equivalence relation generated by $I^{\As}$ \cite[Lemma~25]{DJR2021}. We can then compose the adjunction $H\dashv J$ with e.g.~ the Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal cover of $\CSplus$ by $\RT(\sigma^I)$ from Example~\ref{ex:res-ind-arb-cover}.
\begin{equation*}
\begin{tikzcd}
{\RTk(\sigma^I)} \arrow[r, bend left=25, ""{name=U, below}, "\LE_k"{above}]
\arrow[r, leftarrow, bend right=25, ""{name=D}, "\RE_k"{below}]
& {\CSplus} \arrow[r, bend left=25, ""{name=U', below}, "H"{above}]
\arrow[r, leftarrow, bend right=25, ""{name=D'}, "J"{below}] & {\mathbf{Struct}(\sg)}
\arrow[phantom, "\textnormal{\footnotesize{$\bot$}}", from=U, to=D]
\arrow[phantom, "\textnormal{\footnotesize{$\bot$}}", from=U', to=D']
\end{tikzcd}
\end{equation*}
The composite adjunctions $H \LE_k \dashv \RE_k J$, which are not comonadic, yield the \emph{Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction} between $\mathbf{Struct}(\sg)$ and $\RT(\sigma^I)$.
\end{example}
Crucially, a resource-indexed arboreal adjunction between $\E$ and $\C$ can be used to define several resource-indexed relations between objects of $\E$:
\begin{definition}\label{def:resource-indexed-relations}
Consider a resource-indexed arboreal adjunction between $\E$ and $\C$, with adjunctions $L_k \dashv R_k$, and any two objects $a,b$ of $\E$. For all $k>0$, we define:
\begin{itemize}
\item $a \rightarrow_k^{\C} b$ if there exists a morphism $R_k a \to R_k b$ in $\C_k$.
\item $a \eqbCk b$ if $R_k a$ and $R_k b$ are bisimilar in $\C_k$.
\item $a \eqcCk b$ if $R_k a$ and $R_k b$ are isomorphic in $\C_k$.
\end{itemize}
\end{definition}
Further, we write $\eqaCk$ for the symmetrisation of the preorder $\rightarrow_k^{\C}$. There are inclusions
\[
{\eqcCk} \ \subseteq \ {\eqbCk} \ \subseteq \ {\eqaCk}.
\]
The first inclusion is trivial; the second follows from \cite[Lemma~6.20]{AR2022}. For a proof of the following easy observation, see~\cite[Lemma~7.18]{AR2022}.
\begin{lemma}\label{l:equiv-rel-properties}
Consider a resource-indexed arboreal adjunction between $\E$ and~$\C$, with adjunctions $L_k \dashv R_k$. The following hold for all $a,b\in \E$ and all $k > 0$:
\begin{enumerate}[label=(\alph*)]
\item\label{hom-k-hom} If there exists a morphism $a\to b$ in $\E$ then $a \rightarrow_k^{\C} b$.
\item\label{k-equiv} $a\eqaCk L_k R_k a$.
\end{enumerate}
\end{lemma}
To conclude, we recall how the relations in Definition~\ref{def:resource-indexed-relations} capture, in our running examples, preservation of the logics introduced at the beginning of Section~\ref{s:prelim-game-comonads}.
Given a set of sentences (or modal formulas) $\LL$, let $\Rrightarrow^\LL$ be the preorder on (pointed) $\sigma$-structures given by
\[
\As \Rrightarrow^{\LL} \Bs \ \ \Longleftrightarrow \ \ \forall \phi\in\LL. \, (\As\vDash \phi \, \Rightarrow \, \Bs\vDash \phi).
\]
The equivalence relation $\equiv^\LL$ is the symmetrisation of $\Rrightarrow^\LL$.
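As a loose illustration, outside the paper's setting, if "sentences" are modelled simply as boolean predicates on structures, the preorder $\Rrightarrow^\LL$ and its symmetrisation can be computed directly; the encoding below is entirely hypothetical.

```python
# Illustrative sketch (a loose analogy, not the paper's setting): a set of
# "sentences" is modelled as a list of boolean predicates on structures.

def entails(L, A, B):
    """A =>^L B : every phi in L true in A is also true in B."""
    return all(phi(B) for phi in L if phi(A))

def equivalent(L, A, B):
    """The symmetrisation of the preorder."""
    return entails(L, A, B) and entails(L, B, A)

# Two "sentences" about finite sets: "has more than one element", "contains 0".
L = [lambda s: len(s) > 1, lambda s: 0 in s]
assert entails(L, {0}, {0, 1})           # {0} =>^L {0, 1} ...
assert not entails(L, {0, 1}, {0})       # ... but not conversely
assert equivalent(L, {0, 1}, {1, 0})
```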
\begin{example}\label{logical-rel-EF}
Let $\sigma$ be a finite relational vocabulary.
Consider the Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction between $\mathbf{Struct}(\sg)$ and $\RT(\sigma^I)$ in Example~\ref{ex:res-ind-arb-adj}, and write $\rightarrow_k^{E}$ and $\leftrightarrow_k^E$ for the relations on $\mathbf{Struct}(\sg)$ induced according to Definition~\ref{def:resource-indexed-relations}.
For all $\sigma$-structures $\As,\Bs$ and all $k>0$, we have
\[
\As \rightarrow_k^{E} \Bs \ \Longleftrightarrow \ \As \Rrightarrow^{\EFO_k} \Bs
\]
and
\[
\As \leftrightarrow_k^{E} \Bs \ \Longleftrightarrow \ \As \equiv^{\FO_k} \Bs.
\]
Cf.~ \cite[Theorems~3.2 and~5.1]{AS2021} and~\cite[Theorem~10.5]{AS2021}, respectively. We also mention that $\cong_k^E$ coincides with equivalence in the extension of $\FO_k$ with \emph{counting quantifiers} \cite[Theorem~5.3(2)]{AS2021}, although we shall not need this fact.
\end{example}
\begin{example}\label{logical-rel-modal}
Suppose $\sigma$ is a finite modal vocabulary and consider the relations $\rightarrow_k^{M}$ and $\cong_k^M$ on $\CSstar$ induced by the modal resource-indexed arboreal cover of $\CSstar$ by $\RM(\sigma)$ in Example~\ref{ex:res-ind-arb-cover}. For all pointed Kripke structures $(\As,a),(\Bs,b)$ and all $k>0$, we have
\[
(\As,a)\rightarrow_k^{M} (\Bs,b) \ \Longleftrightarrow \ (\As,a)\Rrightarrow^{\exists^+\ML_k} (\Bs,b),
\]
see \cite[Theorem~9]{DBLP:conf/csl/AbramskyS18}.
Furthermore,
\[
(\As,a)\cong_k^M (\Bs,b) \ \Longrightarrow \ (\As,a)\equiv^{\ML_k(\#)} (\Bs,b),
\]
cf.~ \cite[Proposition~15]{DBLP:conf/csl/AbramskyS18} and~\cite[Proposition~3.6]{deRijke2000}. We mention in passing that the relation $\leftrightarrow_k^M$ coincides with equivalence in $\ML_k$ \cite[Theorem~10.13]{AS2021}.
\end{example}
\section{Homomorphism Preservation Theorems}\label{s:logics-HPTs}
In this section, we recast the statement of a generic equi-resource homomorphism preservation theorem into a property \textnormal{(HP)}---and its strengthening \textnormal{(HP${}^\#$)}---that a resource-indexed arboreal adjunction may or may not satisfy.
We then identify a class of ``tame'' resource-indexed arboreal adjunctions, namely those satisfying the \emph{bisimilar companion property}, for which \textnormal{(HP)} always holds. In the absence of the bisimilar companion property, one may try to ``force'' it; this leads to the notion of \emph{extendability}, inspired by the work of Rossman~\cite{Rossman2008}.
Finally, we provide simple sufficient conditions under which properties \textnormal{(HP)} and \textnormal{(HP${}^\#$)} admit a relativisation to a full subcategory.
\subsection{\textnormal{(HP)} and \textnormal{(HP${}^\#$)}}\label{s:HP-HPplus}
Given a first-order sentence $\phi$ in a relational vocabulary~$\sigma$, its ``model class'' $\Mod(\phi)$ is the full subcategory of $\mathbf{Struct}(\sg)$ defined by the $\sigma$-structures $\As$ such that $\As\vDash\phi$.
To motivate the formulation of properties \textnormal{(HP)} and \textnormal{(HP${}^\#$)}, we recall a well-known characterisation of model classes of sentences in $\FO_k$, i.e.~ first-order sentences of quantifier rank at most $k$, and in its existential positive fragment $\EFO_k$. Since a sentence can only contain finitely many relation symbols, for the purpose of investigating homomorphism preservation theorems we can safely assume that $\sigma$ is finite.
For a full subcategory $\D$ of a category $\A$, and a relation $\nabla$ on the class of objects of $\A$, we say that $\D$ is \emph{upwards closed (in $\A$) with respect to $\nabla$} if
\[
\forall a,b \in \A, \ \text{ if } \ a\in\D \ \text{ and } \ a \nabla b \ \text{ then } \ b\in\D.
\]
If $\nabla$ is an equivalence relation and the latter condition is satisfied, we say that $\D$ is \emph{saturated under $\nabla$}.
The following lemma follows from the fact that, for all $k \geq 0$, there are finitely many sentences in $\FO_k$ up to logical equivalence. Cf.~ e.g.~ \cite[Lemma~3.13]{Libkin2004}.
\begin{lemma}\label{concrete-charact-log-eq}
The following hold for all $k\geq 0$ and all full subcategories $\D$ of $\mathbf{Struct}(\sg)$:
\begin{enumerate}[label=(\alph*)]
\item\label{synt-free} $\D=\Mod(\phi)$ for some $\phi\in\FO_k$ if, and only if, $\D$ is saturated under $\equiv^{\FO_k}$.
\item\label{synt-free-EP} $\D=\Mod(\psi)$ for some $\psi\in\EFO_k$ if, and only if, $\D$ is upwards closed with respect to $\Rrightarrow^{\EFO_k}$.
\end{enumerate}
\end{lemma}
\begin{remark}\label{rem:finite-fragments}
The previous lemma remains true if $\FO_k$ is replaced with any fragment of first-order logic that is closed under Boolean connectives and contains, up to logical equivalence, finitely many sentences.
\end{remark}
Now, fix a resource-indexed arboreal adjunction between $\E$ and $\C$, with adjunctions
\[\begin{tikzcd}
\C_k \arrow[r, bend left=25, ""{name=U, below}, "L_k"{above}]
\arrow[r, leftarrow, bend right=25, ""{name=D}, "R_k"{below}]
& \E.
\arrow[phantom, "\textnormal{\footnotesize{$\bot$}}", from=U, to=D]
\end{tikzcd}\]
Let us say that a full subcategory $\D$ of $\E$ is \emph{closed (in $\E$) under morphisms} if, whenever there is an arrow $a\to b$ in $\E$ with $a\in \D$, also $b\in \D$.
Note that, when $\E=\mathbf{Struct}(\sg)$ and $\D=\Mod(\phi)$ is the model class of some sentence $\phi$, the category $\D$ is closed under morphisms precisely when $\phi$ is preserved under homomorphisms.
Consider the following statement, where $\prCk$ and $\eqbCk$ are the relations on the objects of $\E$ induced by the resource-indexed arboreal adjunction as in Definition~\ref{def:resource-indexed-relations}:
\begin{enumerate}[label=\textnormal{(HP)}]
\item\label{HP-abstract} For any full subcategory $\D$ of $\E$ saturated under $\eqbCk$, $\D$ is closed under morphisms precisely when it is upwards closed with respect to $\prCk$.
\end{enumerate}
Replacing the relation $\eqbCk$ with $\eqcCk$, we obtain a strengthening of \ref{HP-abstract}, namely:
\begin{enumerate}[label=\textnormal{(HP${}^\#$)}]
\item\label{HPplus-abstract} For any full subcategory $\D$ of $\E$ saturated under $\eqcCk$, $\D$ is closed under morphisms precisely when it is upwards closed with respect to $\prCk$.
\end{enumerate}
Just recall that ${\eqcCk} \subseteq {\eqbCk}$, and so any full subcategory $\D$ saturated under $\eqbCk$ is also saturated under $\eqcCk$. Thus, \ref{HPplus-abstract} entails~\ref{HP-abstract}.
\begin{remark}\label{r:easy-dir-HPTs}
By Lemma~\ref{l:equiv-rel-properties}\ref{hom-k-hom}, any full subcategory of $\E$ that is upwards closed with respect to $\prCk$ is closed under morphisms. Hence, the right-to-left implications in~\ref{HP-abstract} and~\ref{HPplus-abstract} are always satisfied.
\end{remark}
In view of Example~\ref{logical-rel-EF} and Lemma~\ref{concrete-charact-log-eq}, for the Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction between $\mathbf{Struct}(\sg)$ and $\RT(\sigma^I)$, property \ref{HP-abstract} coincides with Rossman's equirank homomorphism preservation theorem (Theorem~\ref{th:equirank-HPT}).
In Section~\ref{s:axiomatic} we will prove that \ref{HP-abstract} holds for any resource-indexed arboreal adjunction satisfying appropriate properties (see Corollary~\ref{cor:HPT-axiomatic}), which are satisfied in particular by the Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction.
\subsection{Tame: bisimilar companion property and idempotency}\label{ss:tame}
For all $k>0$, write $G_k\coloneqq L_k R_k$ for the comonad on $\E$ induced by the adjunction ${L_k\dashv R_k \colon \E \to \C_k}$.
\begin{definition}
A resource-indexed arboreal adjunction between $\E$ and $\C$, with induced comonads $G_k$, has the \emph{bisimilar companion property} if $a \eqbCk G_k a$ for all $a\,{\in}\, \E$ and $k > 0$.
\end{definition}
\begin{proposition}\label{p:HPT-tame}
\ref{HP-abstract} holds for any resource-indexed arboreal adjunction between $\E$ and $\C$ satisfying the bisimilar companion property.
\end{proposition}
\begin{proof}
For the left-to-right implication in~\ref{HP-abstract}, let $\D$ be a full subcategory of $\E$ closed under morphisms and saturated under~$\eqbCk$. Suppose that $a\prCk b$ for objects $a,b$ of $\E$. By definition, this means that there is an arrow $R_k a \to R_k b$ and so, as $L_k$ is left adjoint to $R_k$, there is an arrow $G_k a \to b$. Using the bisimilar companion property, we get
\[
a \, \eqbCk \, G_k a\, \to \, b.
\]
Therefore, if $a\in \D$ then also $b\in \D$. That is, $\D$ is upwards closed with respect to $\prCk$.
The converse direction follows from Remark~\ref{r:easy-dir-HPTs}.
\end{proof}
In order to establish a similar result for property \ref{HPplus-abstract}, recall that a comonad $G$ is \emph{idempotent} if its comultiplication $G \Rightarrow G G$ is a natural isomorphism.
\begin{definition}
A resource-indexed arboreal adjunction between $\E$ and $\C$ is \emph{idempotent} if so are the induced comonads $G_k$, for all $k>0$.
\end{definition}
\begin{proposition}\label{p:HPT-graded}
\ref{HPplus-abstract} holds for any idempotent resource-indexed arboreal adjunction between $\E$ and $\C$.
\end{proposition}
\begin{proof}
Recall that $G_k$ is idempotent if, and only if, $\eta R_k$ is a natural isomorphism, where $\eta$ is the unit of the adjunction $L_k \dashv R_k$. In particular, for any $a\in \E$, the component of $\eta R_k$ at $a$ yields an isomorphism $R_k a \cong R_k G_k a$ in $\C$. Hence, $a \eqcCk G_k a$ for all $a\in\E$.
Reasoning as in the proof of Proposition~\ref{p:HPT-tame}, it is easy to see that~\ref{HPplus-abstract} holds.
\end{proof}
\begin{remark}
Consider an idempotent resource-indexed arboreal adjunction between $\E$ and $\C$ with induced comonads $G_k$ on $\E$. The previous proof shows that, for all $a\in \E$ and $k>0$, we have $a \eqcCk G_k a$. A fortiori, $a \eqbCk G_k a$. Therefore, any idempotent resource-indexed arboreal adjunction satisfies the bisimilar companion property.
\end{remark}
Next, we show how Propositions~\ref{p:HPT-tame} and~\ref{p:HPT-graded} can be exploited to obtain equi-resource homomorphism preservation theorems for (graded) modal logic and guarded first-order logics. Relativisations of these results to subclasses of structures, e.g.~ to the class of all finite structures, are discussed in Section~\ref{s:relativisation}.
\subsection*{Graded modal logic}
Let $\sigma$ be a finite modal vocabulary. As observed in~\cite[\S 9.3]{AS2021}, the modal comonads $\Mk$ on $\CSstar$ are idempotent, hence so is the modal resource-indexed arboreal cover of $\CSstar$ by $\RMk(\sigma)$. This corresponds to the fact that a tree-ordered Kripke structure is isomorphic to its tree unravelling.
Thus, Proposition~\ref{p:HPT-graded} entails the following \emph{equidepth homomorphism preservation theorem} for graded modal formulas (i.e.,~ modal formulas that possibly contain graded modalities):
\begin{theorem}\label{th:hpt-graded-modal-logic}
The following statements are equivalent for any graded modal formula~$\phi$ of modal depth at most $k$ in a modal vocabulary:
\begin{enumerate}
\item $\phi$ is preserved under homomorphisms between pointed Kripke structures.
\item $\phi$ is logically equivalent to an existential positive modal formula of modal depth at most $k$.
\end{enumerate}
\end{theorem}
\begin{proof}
Fix a graded modal formula $\phi\in\ML_k(\#)$. Since a single modal formula contains only finitely many modalities and propositional variables, we can assume without loss of generality that $\phi$ is a formula in a finite modal vocabulary. By Example~\ref{logical-rel-modal} we have
\[
{\rightarrow_k^{M}} = {\Rrightarrow^{\exists^+\ML_k}} \ \text{ and } \ {\cong_k^M} \subseteq {\equiv^{\ML_k(\#)}}.
\]
In particular, the latter inclusion entails that the full subcategory $\Mod(\phi)$ of $\CSstar$ is saturated under $\cong_k^M$.
As the modal resource-indexed arboreal cover is idempotent, Proposition~\ref{p:HPT-graded} implies that $\Mod(\phi)$ is closed under morphisms if, and only if, it is upwards closed with respect to $\to_k^M$. Note that $\Mod(\phi)$ is closed under morphisms precisely when $\phi$ is preserved under homomorphisms between pointed Kripke structures. On the other hand, the equality ${\rightarrow_k^{M}} = {\Rrightarrow^{\exists^+\ML_k}}$ implies that $\Mod(\phi)$ is upwards closed with respect to $\to_k^M$ if, and only if, $\Mod(\phi)=\Mod(\psi)$ for some $\psi\in \exists^+\ML_k$ (this is akin to Lemma~\ref{concrete-charact-log-eq}\ref{synt-free-EP} and hinges on the fact that $\exists^+\ML_k$ contains finitely many formulas up to logical equivalence). Thus the statement follows.
\end{proof}
\begin{remark}
Forgetting about both graded modalities and modal depth, Theorem~\ref{th:hpt-graded-modal-logic} implies that a modal formula is preserved under homomorphisms if, and only if, it is equivalent to an existential positive modal formula. This improves the well-known result that a modal formula is preserved under simulations precisely when it is equivalent to an existential positive modal formula (see e.g.~ \cite[Theorem~2.78]{blackburn2002modal}).
\end{remark}
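For a concrete folklore instance, not specific to the machinery developed here, consider the basic modal formulas
\[
\phi_1 \coloneqq \Diamond p \ \text{ and } \ \phi_2 \coloneqq \Box p.
\]
The formula $\phi_1$ is existential positive of modal depth $1$, hence preserved under homomorphisms between pointed Kripke structures. By contrast, $\phi_2$ holds in a pointed Kripke structure whose distinguished point has no successors and no propositional labels, and the latter maps homomorphically to a pointed structure whose distinguished point has a successor falsifying $p$. Thus $\phi_2$ is not preserved under homomorphisms and so, by Theorem~\ref{th:hpt-graded-modal-logic}, it is not equivalent to any existential positive modal formula.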
\subsection*{Guarded fragments of first-order logic}
The study of guarded fragments of first-order logic was initiated by Andr\'eka, van Benthem and N\'emeti in~\cite{HNvB1998} to analyse, and extend to the first-order setting, the good algorithmic and model-theoretic properties of modal logic. \emph{Guarded formulas} (over a relational vocabulary $\sigma$) are defined by structural induction, starting from atomic formulas and applying Boolean connectives and the following restricted forms of quantification: if $\phi(\o{x},\o{y})$ is a guarded formula, then so are
\[
\exists \o{x}. \, G(\o{x},\o{y}) \wedge \phi(\o{x},\o{y}) \ \text{ and } \ \forall \o{x}. \, G(\o{x},\o{y}) \to \phi(\o{x},\o{y})
\]
where $G$ is a so-called \emph{guard}. The (syntactic) conditions imposed on guards determine different guarded fragments of first-order logic. We shall consider the following two:
\begin{itemize}
\item \emph{Atom guarded:} $G(\o{x},\o{y})$ is an atomic formula in which all variables in $\o{x},\o{y}$ occur.
\item \emph{Loosely guarded:} $G(\o{x},\o{y})$ is a conjunction of atomic formulas such that each pair of variables, one in $\o{x}$ and the other in $\o{x},\o{y}$, occurs in one of the conjuncts.
\end{itemize}
The atom guarded fragment of first-order logic was introduced in~\cite{HNvB1998} under the name of F2 (``Fragment~$2$''), whereas the loosely guarded fragment was defined by van Benthem in~\cite{vBpieces}. The atom guarded fragment can be regarded as an extension of modal logic, in the sense that the standard translation of the latter is contained in the former. In turn, the loosely guarded fragment extends the atom guarded one and can express e.g.~ (the translation of) the \emph{Until} modality in temporal logic, cf.~ \cite[p.~9]{vBpieces}.
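To illustrate, suppose the vocabulary contains a binary relation symbol $<$ and write $\mathrm{ST}_v$ for the standard translation centred at the variable $v$ (the notation, and the strict reading of \emph{Until}, are merely illustrative choices). The translation of $\phi\,\mathbf{U}\,\psi$ then takes the shape
\[
\exists y. \, x<y \wedge \mathrm{ST}_y(\psi) \wedge \big( \forall z. \, (x<z \wedge z<y) \to \mathrm{ST}_z(\phi) \big).
\]
The existential quantifier is guarded by the atom $x<y$, whereas the universal quantifier is guarded by the conjunction $x<z \wedge z<y$, in which the quantified variable $z$ occurs together with each of the free variables $x$ and $y$. Since no atom of this vocabulary can contain all of $x,y,z$, the inner quantification is loosely guarded but not atom guarded.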
For each notion of guarding $\mathfrak{g}$ (atom or loose), denote by
\[
\mathfrak{g}\FO^n \ \text{ and } \ \exists^+\mathfrak{g}\FO^n,
\]
respectively, the $n$-variable $\mathfrak{g}$-guarded fragment of first-order logic and its existential positive fragment.
In~\cite{Guarded2021}, \emph{guarded comonads} $\mathbb{G}_n^{\mathfrak{g}}$ on $\mathbf{Struct}(\sg)$ are defined for all $n>0$. The associated categories of Eilenberg-Moore coalgebras are arboreal and induce the \emph{$\mathfrak{g}$-guarded resource-indexed arboreal cover} of $\mathbf{Struct}(\sg)$ with resource parameter $n$. For an explicit description of the resource-indexed arboreal category in question, cf.~ \cite[\S IV]{Guarded2021}.
Assume the vocabulary $\sigma$ is finite and let $\rightarrow_n^{\mathfrak{g}}$ and $\leftrightarrow_n^{\mathfrak{g}}$ be the resource-indexed relations on $\mathbf{Struct}(\sg)$ induced by the $\mathfrak{g}$-guarded resource-indexed arboreal cover. It follows from \cite[Theorems~III.4 and~V.2]{Guarded2021} that, for all $\sigma$-structures $\As,\Bs$ and all $n>0$,
\[
\As \rightarrow_n^{\mathfrak{g}} \Bs \ \Longleftrightarrow \ \As \Rrightarrow^{\exists^+\mathfrak{g}\FO^n} \Bs
\]
and
\[
\As \leftrightarrow_n^{\mathfrak{g}} \Bs \ \Longleftrightarrow \ \As \equiv^{\mathfrak{g}\FO^n} \Bs.
\]
To obtain an analogue of Lemma~\ref{concrete-charact-log-eq}, we consider finite fragments of $\mathfrak{g}\FO^n$ by stratifying in terms of \emph{guarded-quantifier rank} (cf.~ Remark~\ref{rem:finite-fragments}).
Note that, as guarded quantifiers bind \emph{tuples} of variables, rather than single variables, the guarded-quantifier rank of a guarded formula is typically lower than its ordinary quantifier rank. Nevertheless, for all $k\geq 0$, the fragment $\mathfrak{g}\FO^n_k$ of $\mathfrak{g}\FO^n$ consisting of those sentences with guarded-quantifier rank at most $k$ contains finitely many sentences up to logical equivalence.
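For a simple illustration, assuming (as the terminology suggests) that the guarded-quantifier rank counts nested guarded quantifier blocks, each of which may bind a tuple of variables, suppose the vocabulary contains a binary relation symbol $R$ and consider the atom guarded sentence
\[
\exists x \, y. \, R(x,y) \wedge \big( \exists z. \, R(y,z) \wedge R(z,z) \big).
\]
Its guarded-quantifier rank is $2$, one for each guarded block, whereas its ordinary quantifier rank is $3$.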
This stratification can be modelled in terms of comonads $\mathbb{G}_{n,k}^{\mathfrak{g}}$ on $\mathbf{Struct}(\sg)$, for all $n,k>0$, as explained in \cite[\S VII]{Guarded2021}. Fixing $n$ and letting $k$ vary, we obtain an \emph{$n$-variable $\mathfrak{g}$-guarded resource-indexed arboreal cover} of $\mathbf{Struct}(\sg)$, with resource parameter $k$. The induced relations $\rightarrow_{n,k}^{\mathfrak{g}}$ and $\leftrightarrow_{n,k}^{\mathfrak{g}}$ on $\mathbf{Struct}(\sg)$ coincide, respectively, with preservation of $\exists^+\mathfrak{g}\FO^n_k$ and equivalence in $\mathfrak{g}\FO^n_k$. Thus, for any full subcategory $\D$ of $\mathbf{Struct}(\sg)$:
\begin{itemize}
\item $\D=\Mod(\phi)$ for some $\phi\in\mathfrak{g}\FO^n_k$ if, and only if, $\D$ is saturated under $\leftrightarrow_{n,k}^{\mathfrak{g}}$.
\item $\D=\Mod(\psi)$ for some $\psi\in\exists^+\mathfrak{g}\FO^n_k$ if, and only if, $\D$ is upwards closed with respect to $\rightarrow_{n,k}^{\mathfrak{g}}$.
\end{itemize}
As observed in \cite[\S 6.1]{Hybrid2022}, the $\mathfrak{g}$-guarded resource-indexed arboreal cover of $\mathbf{Struct}(\sg)$ satisfies the bisimilar companion property, and so does the $n$-variable $\mathfrak{g}$-guarded resource-indexed arboreal cover for all $n>0$. Therefore, Proposition~\ref{p:HPT-tame} implies the following \emph{equirank-variable homomorphism preservation theorem} for guarded logics:
\begin{theorem}\label{th:hpt-guarded}
Let $\mathfrak{g}$ be a notion of guarding (either atom or loose).
The following statements are equivalent for any $\mathfrak{g}$-guarded sentence~$\phi$ in $n$ variables of guarded-quantifier rank at most $k$ in a relational vocabulary:
\begin{enumerate}
\item $\phi$ is preserved under homomorphisms.
\item $\phi$ is logically equivalent to an existential positive $\mathfrak{g}$-guarded sentence in $n$ variables of guarded-quantifier rank at most $k$.
\end{enumerate}
\end{theorem}
\subsection{Not-so-tame: extendability}
A resource-indexed arboreal adjunction may fail to satisfy the bisimilar companion property, in which case Proposition~\ref{p:HPT-tame} does not apply. This is the case e.g.~ for the Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction:
\begin{example}\label{ex:EF-bcp-fails}
The Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction between $\mathbf{Struct}(\sg)$ and $\RT(\sigma^I)$ does not have the bisimilar companion property. Suppose that $\sigma=\{R\}$ consists of a single binary relation symbol and let $\As$ be the $\sigma$-structure with underlying set $\{a,b\}$ satisfying $R^{\As}=\{(a,b),(b,a)\}$. In view of Examples~\ref{ex:res-ind-arb-adj} and~\ref{logical-rel-EF}, it suffices to find $k>0$ and a first-order sentence $\phi$ of quantifier rank $\leq k$ such that
\[
\As \vDash \phi \ \text{ and } \ H\Ek J\As \not\vDash \phi.
\]
Let $\phi$ be the sentence
$
\forall x \forall y \ (x\neq y \Rightarrow xRy)
$
of quantifier rank $2$ stating that any two distinct elements are $R$-related. Then $\phi$ is satisfied by $\As$ but not by $H\Ek J\As$, because the sequences $[a]$ and $[b]$ are not $R$-related in $H\Ek J\As$. This shows that the bisimilar companion property fails for all $k\geq 2$.
\end{example}
When the bisimilar companion property fails, i.e.~ $a \not\eqbCk G_k a$ for some $a\in \E$ and $k > 0$, we may attempt to force it by finding appropriate extensions $a^*$ and $(G_k a)^*$ of $a$ and $G_k a$, respectively, satisfying $a^* \eqbCk (G_k a)^*$. This motivates the notion of \emph{$k$-extendability} (see Definition~\ref{d:k-extendable} below), inspired by the work of Rossman~\cite{Rossman2008} and its categorical interpretation in~\cite{ABRAMSKY2020}.
To start with, we introduce the following notations.
Given objects $a,b$ of a category~$\A$, we write $a\to b$ to denote the existence of an arrow from $a$ to $b$. Further, we write $a\rightleftarrows b$ to indicate that $a\to b$ and $b\to a$, i.e.~ $a$ and $b$ are \emph{homomorphically equivalent}.
This applies in particular to coslice categories.
Recall that, for any $c\in\A$, the \emph{coslice category} $c/{\A}$ (also known as \emph{under category}) has as objects the arrows in $\A$ with domain~$c$. For any two objects $m\colon c\to a$ and $n\colon c\to b$ of $c/{\A}$, an arrow $f\colon m\to n$ in $c/{\A}$ is a morphism $f\colon a\to b$ in $\A$ such that $f\circ m = n$. Hence, $m\rightleftarrows n$ in $c/{\A}$ precisely when there are arrows $f\colon a\to b$ and $g\colon b\to a$ in $\A$ satisfying $f\circ m = n$ and $g\circ n = m$. We shall represent this situation by means of the following diagram:
\[\begin{tikzcd}[column sep=2em]
{} & c \arrow{dl}[swap]{m} \arrow{dr}{n} & {} \\
a \arrow[yshift=3pt]{rr}{f} & & b \arrow[yshift=-3pt]{ll}{g}
\end{tikzcd}\]
\begin{remark}\label{rem:sections-coslice}
Note that $m\rightleftarrows n$ in $c/{\A}$ whenever there is a section $f\colon a \to b$ in $\A$ satisfying $f\circ m = n$.
Just observe that the left inverse $f^{-1}$ of $f$ satisfies
\[
f^{-1} \circ n = f^{-1} \circ f\circ m = m
\]
and so $n\to m$. Further, $f\circ m = n$ entails $m\to n$. Hence, $m\rightleftarrows n$.
\end{remark}
Now, let $a$ be an object of an arboreal category $\C$ and let $m\colon P\emb a$ be a path embedding.
As $\Emb{a}$ is a complete lattice by Lemma~\ref{l:arboreal:properties}\ref{SX-complete-lattice}, the supremum $\bigvee{{\uparrow}m}$ in $\Emb{a}$ of all path embeddings above $m$ exists, and we shall denote it by
\[
\inc{m}\colon \Sg{m}\emb a.
\]
Clearly, $m\leq \inc{m}$ in $\Emb{a}$, and so there is a path embedding
\[
\co{m}\colon P\emb \Sg{m}
\]
satisfying $\inc{m} \circ \co{m} = m$. (Note that $\inc{m}$ is well defined only up to isomorphism in the coslice category ${\C}/a$, but as usual we work with representatives for isomorphism classes.)
\begin{remark}
To provide an intuition for the previous definition, let us say for the sake of this remark that a path embedding $m\colon P\emb a$ is ``dense in $a$'' if all elements of $\Path{a}$ are comparable with $m$. Then Lemma~\ref{l:corestriction-properties}\ref{comparable-subtree} below implies that $\inc{m}\colon \Sg{m}\emb a$ is the largest $\M$-subobject of $a$ in which $m$ is dense.
\end{remark}
\begin{lemma}\label{l:corestriction-properties}
The following statements hold for all path embeddings ${m\colon P\emb a}$:
\begin{enumerate}[label=(\alph*)]
\item\label{comparable-subtree} $\Path{\Sg{m}}$ is isomorphic to the subtree of $\Path{a}$ consisting of the elements that are comparable with $m$.
\item\label{corestriction-arrows-finer} For all path embeddings $n\colon P\emb b$ and arrows $f\colon a\to b$ such that ${f\circ m = n}$, there is a unique $g\colon \Sg{m}\to \Sg{n}$ making the following diagram commute.
\[\begin{tikzcd}
\Sg{m} \arrow[rightarrowtail]{d}[swap]{\inc{m}} \arrow[dashed]{r}{g} & \Sg{n} \arrow[rightarrowtail]{d}{\inc{n}} \\
a \arrow{r}{f} & b
\end{tikzcd}\]
\item\label{corestriction-arrows} For all path embeddings $n\colon P\emb b$, if $m\to n$ then $\co{m}\to \co{n}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{comparable-subtree} The map $\inc{m}\circ - \colon \Emb{\Sg{m}}\to \Emb{a}$ is an order-embedding by Lemma~\ref{l:emb-quo-order-embeddings}, and so its restriction $\Path{\Sg{m}}\to \Path{a}$ is an injective forest morphism. Hence, $\Path{\Sg{m}}$ is isomorphic to the subtree of $\Path{a}$ consisting of those elements that factor through $\inc{m}$, i.e.~ that are below $\bigvee{{\uparrow}m}$. By Lemma~\ref{l:arboreal:properties}\ref{join-irred}, an element of $\Path{a}$ is below $\bigvee{{\uparrow}m}$ precisely when it is below some element of ${\uparrow}m$. In turn, the latter is equivalent to being comparable with~$m$.
\ref{corestriction-arrows-finer} Since $\Sg{m}$ is path-generated, it is the colimit of the canonical cocone $C$ of path embeddings over the small diagram $\Path{\Sg{m}}$ which, by item~\ref{comparable-subtree}, can be identified with the subdiagram of $\Path{a}$ consisting of those elements comparable with $m$. As $\Path{(f\circ \inc{m})}$ is monotone, it sends path embeddings comparable with $m$ to path embeddings comparable with $n$, and so the cocone $\{f\circ \inc{m}\circ p\mid p\in C\}$ factors through $\inc{n}\colon \Sg{n}\emb b$. Hence, there is $g\colon \Sg{m}\to \Sg{n}$ such that $\inc{n} \circ g = f\circ \inc{m}$. Finally, note that if $g'\colon \Sg{m}\to \Sg{n}$ satisfies $\inc{n} \circ g' = f\circ \inc{m}$ then we have $\inc{n} \circ g = \inc{n} \circ g'$, and so $g=g'$ because $\inc{n}$ is monic.
\ref{corestriction-arrows} Suppose there exists $f\colon a\to b$ such that $f\circ m = n$. By item~\ref{corestriction-arrows-finer}, there is $g\colon \Sg{m}\to \Sg{n}$ such that $\inc{n} \circ g = f\circ \inc{m}$. Therefore,
\[
\inc{n}\circ g\circ \co{m} = f\circ \inc{m} \circ \co{m} = f\circ m = n = \inc{n} \circ \co{n}
\]
and so $g\circ \co{m} = \co{n}$ because $\inc{n}$ is a monomorphism.
\end{proof}
\begin{remark}\label{rem:idempotent}
Lemma~\ref{l:corestriction-properties}\ref{corestriction-arrows} entails that $\co{m}\to \co{n}$ whenever $\co{m}\to n$. Just observe that $\co{(\co{m})}$ can be identified with $\co{m}$.
\end{remark}
\begin{definition}\label{d:k-extendable}
Consider a resource-indexed arboreal adjunction between $\E$ and $\C$, with adjunctions $L_k\dashv R_k$.
An object $a$ of $\E$ is \emph{$k$-extendable} if it satisfies the following property for all $e\in\E$: For all path embeddings $m\colon P\emb R_k a$ and $n \colon P\emb R_k e$ such that $\co{m} \rightleftarrows \co{n}$ in the coslice category $P/{\C_k}$ (see the leftmost diagram below),
%
\begin{center}
\begin{tikzcd}[column sep=2em]
{} & P \arrow[rightarrowtail]{dl}[swap]{\co{m}} \arrow[rightarrowtail]{dr}{\co{n}} & {} \\
\Sg{m} \arrow[yshift=3pt]{rr} & & \Sg{n} \arrow[yshift=-3pt]{ll}
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}[column sep=2em]
{} & Q \arrow[rightarrowtail, dashed]{dl}[swap]{\co{m'}} \arrow[rightarrowtail]{dr}{\co{n'}} & {} \\
\Sg{m'} \arrow[yshift=3pt, dashed]{rr} & & \Sg{n'} \arrow[yshift=-3pt, dashed]{ll}
\end{tikzcd}
\end{center}
if $n'\colon Q\emb R_k e$ is a path embedding satisfying $n\leq n'$ in $\Path{(R_k e)}$, there is a path embedding $m'\colon Q\emb R_k a$ such that $m\leq m'$ in $\Path{(R_k a)}$ and $\co{m'}\rightleftarrows \co{n'}$ in $Q/{\C_k}$ (as displayed in the rightmost diagram above).
\end{definition}
We shall see in Proposition~\ref{p:extendable} below that, under appropriate assumptions, the $k$-extendability property allows us to upgrade the relation $\eqaCk$ to the finer relation $\eqbCk$.
For the next lemma, recall that a category is \emph{locally finite} if there are finitely many arrows between any two of its objects.
\begin{lemma}\label{l:quotients-inverse}
Let $\C$ be an arboreal category whose full subcategory $\Cp$ consisting of the paths is locally finite. If $f\colon P\epi Q$ and $g\colon Q\epi P$ are quotients between paths, then $f$ and $g$ are inverse to each other.
\end{lemma}
\begin{proof}
The set $M$ of quotients $P\epi P$ is a monoid with respect to composition, which is finite because $\Cp$ is locally finite, and it satisfies the right-cancellation law because every quotient is an epimorphism. As any finite monoid satisfying a cancellation law is a group, $M$ is a group, and so $g\circ f\in M$ has an inverse. It follows that $g\circ f$ is an embedding. Because there is at most one embedding between any two paths by Lemma~\ref{l:arboreal:properties}\ref{at-most-one-emb}, $g\circ f = \id_P$. By symmetry, also $f\circ g = \id_Q$.
\end{proof}
\begin{proposition}\label{p:extendable}
Consider a resource-indexed arboreal adjunction between $\E$ and $\C$ such that $\Cp^k$ is locally finite for all $k > 0$. For all $k$-extendable objects $a,b$ of $\E$ admitting a product, we have $a\eqaCk b$ if and only if $a\eqbCk b$.
\end{proposition}
\begin{proof}
Fix an arbitrary $k > 0$ and recall that $\C_k$ is an arboreal category.
The ``if'' part of the statement follows from the inclusion ${\eqbCk} \subseteq {\eqaCk}$.
For the ``only if'' part suppose that $a\eqaCk b$, i.e.~ $R_k a$ and $R_k b$ are homomorphically equivalent in $\C_k$. To improve readability, let $X\coloneqq R_k a$ and $Y\coloneqq R_k b$. We must prove that $X$ and $Y$ are bisimilar.
As $X$ and $Y$ admit a product in $\C_k$, namely the image under $R_k$ of the product of $a$ and $b$ in $\E$, by Theorem~\ref{th:bisimilar-iff-strong-back-forth} it suffices to show that $X$ and $Y$ are back-and-forth equivalent.
Fix arbitrary morphisms $f\colon X\to Y$ and $g\colon Y\to X$, and let $m$ and $n$ denote generic elements of $\Path{X}$ and $\Path{Y}$, respectively. We claim that
\[
\B\coloneqq\{\br{m,n}\mid \exists \ s\colon \Sg{m}\to \Sg{n}, \, t\colon \Sg{n} \to \Sg{m} \ \text{s.t.} \ \Path{s}(\co{m})=\co{n} \ \text{and} \ \Path{t}(\co{n})=\co{m} \}
\]
is a back-and-forth system between $X$ and $Y$, i.e.~ it satisfies items~\ref{initial}--\ref{back} in Definition~\ref{def:back-and-forth}.
For item~\ref{initial}, let $\bot_X,\bot_Y$ be the roots of $\Path{X}$ and $\Path{Y}$, respectively. Note that $\Sg{\bot_X}$ and $\Sg{\bot_Y}$ can be identified, respectively, with $X$ and $Y$. As $\Path{f}$ and $\Path{g}$ are forest morphisms, $\Path{f}(\bot_X)=\bot_Y$ and $\Path{g}(\bot_Y)=\bot_X$. So, $\br{\bot_X,\bot_Y}\in \B$.
For item~\ref{forth}, suppose $\br{m,n}\in\B$ and let $m'\in\Path{X}$ satisfy $m\cvr m'$. We seek $n'\in\Path{Y}$ such that $n\cvr n'$ and $\br{m',n'}\in\B$. By assumption, there are arrows $s\colon \Sg{m}\to \Sg{n}$ and $t\colon \Sg{n}\to \Sg{m}$ such that $\Path{s}(\co{m})=\co{n}$ and $\Path{t}(\co{n})=\co{m}$. Writing $P\coloneqq \dom(m)$ and $P'\coloneqq \dom(n)$, we have the following diagrams
\begin{center}
\begin{tikzcd}[row sep = 3em]
P \arrow[twoheadrightarrow]{r}{e} \arrow[rightarrowtail]{d}[swap]{\co{m}} & {\cdot} \arrow[rightarrowtail]{d}[description]{\Path{s}(\co{m})} \arrow{r}{\phi} & P' \arrow[rightarrowtail, bend left=30]{dl}{\co{n}} \\
\Sg{m} \arrow{r}{s}& \Sg{n} &
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}[row sep = 3em]
P' \arrow[twoheadrightarrow]{r}{e'} \arrow[rightarrowtail]{d}[swap]{\co{n}} & {\cdot} \arrow[rightarrowtail]{d}[description]{\Path{t}(\co{n})} \arrow{r}{\psi} & P \arrow[rightarrowtail, bend left=30]{dl}{\co{m}} \\
\Sg{n} \arrow{r}{t}& \Sg{m} &
\end{tikzcd}
\end{center}
where $\phi$ and $\psi$ are isomorphisms. By Lemma~\ref{l:quotients-inverse}, $\phi\circ e$ and $\psi\circ e'$ are inverse to each other, thus the left-hand diagram below commutes.
\begin{center}
\begin{tikzcd}[column sep=2em]
{} & P \arrow[rightarrowtail]{dl}[swap]{\co{m}} \arrow[rightarrowtail]{dr}{\co{n}\circ \phi\circ e} & {} \\
\Sg{m} \arrow[yshift=3pt]{rr}{s} & & \Sg{n} \arrow[yshift=-3pt]{ll}{t}
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}[column sep=2em]
{} & Q \arrow[rightarrowtail]{dl}[swap]{\co{m'}} \arrow[rightarrowtail, dashed]{dr}{\co{n'}} & {} \\
\Sg{m'} \arrow[yshift=3pt, dashed]{rr}{s'} & & \Sg{n'} \arrow[yshift=-3pt, dashed]{ll}{t'}
\end{tikzcd}
\end{center}
Let $Q\coloneqq \dom(m')$. Since $b$ is $k$-extendable, there exist a path embedding $n'\colon Q\emb Y$ such that $n\leq n'$ in $\Path{Y}$, and arrows $s'\colon \Sg{m'}\to \Sg{n'}$ and $t'\colon \Sg{n'}\to \Sg{m'}$ making the right-hand diagram above commute.
It follows that $\br{m',n'}\in\B$; just observe that $\Path{s'}(\co{m'})=\co{n'}$ because the composite $s'\circ \co{m'}$ is an embedding, and similarly $\Path{t'}(\co{n'})=\co{m'}$.
It remains to show that $n\cvr n'$.
For any element $x$ of a tree, denote by $\htf(x)$ its height. As $n\leq n'$ in $\Path{Y}$, it is enough to show that $\htf(n')=\htf(n)+1$. Since forest morphisms preserve the height of points and $\Path{s}(\co{m})=\co{n}$, we have $\htf(m)=\htf(n)$. Applying the same facts to $s'$, for which $\Path{s'}(\co{m'})=\co{n'}$, we get
\[
\htf(n') = \htf(\co{n'}) =\htf(\co{m'})= \htf(m') = \htf(m)+1 = \htf(n)+1.
\]
Item~\ref{back} is proved in a similar way using the fact that $a$ is $k$-extendable.
\end{proof}
\begin{definition}
Consider a resource-indexed arboreal adjunction between $\E$ and $\C$, and an object $a\in\E$. For all $k > 0$, a \emph{$k$-extendable cover} of $a$ is a section $a \to a^*$ in $\E$ such that $a^*$ is $k$-extendable.
\end{definition}
\begin{proposition}\label{p:axiomatic-HP}
\ref{HP-abstract} holds for any resource-indexed arboreal adjunction between $\E$ and $\C$ satisfying the following properties for all $k >0$:
\begin{enumerate}[label=(\roman*)]
\item $\Cp^k$ is locally finite.
\item $\E$ has binary products and each of its objects admits a $k$-extendable cover.
\end{enumerate}
\end{proposition}
\begin{proof}
Fix a full subcategory $\D$ of $\E$ saturated under $\eqbCk$. For the non-trivial implication in \ref{HP-abstract}, assume $\D$ is closed under morphisms and let $a,b\in\E$ satisfy $a\prCk b$ and $a\in \D$.
If $G_k\coloneqq L_k R_k$, then $a\prCk b$ implies $G_k a\to b$. Let $s\colon a\to a^*$ and $t\colon G_k a\to (G_k a)^*$ be sections such that $a^*$ and $(G_k a)^*$ are $k$-extendable. It follows from Lemma~\ref{l:equiv-rel-properties}\ref{hom-k-hom} that $a\eqaCk a^*$ and $G_k a\eqaCk (G_k a)^*$. By Lemma~\ref{l:equiv-rel-properties}\ref{k-equiv} we have $a\eqaCk G_k a$ and so, by transitivity, $a^*\eqaCk (G_k a)^*$. An application of Proposition~\ref{p:extendable} yields $a^*\eqbCk (G_k a)^*$. We thus have the following diagram, where $t^{-1}$ denotes the left inverse of $t$.
\[\begin{tikzcd}[column sep=0.1em]
a^* & {\eqbCk} & (G_k a)^* &&&&& &&&&& &&&&& \\
a \arrow{u}{s} & {\eqaCk} & G_k a \arrow[leftarrow]{u}[swap]{t^{-1}} \arrow{rrrrrrrrrrrrrrr}{} &&&&& &&&&& &&&&& b
\end{tikzcd}\]
Since $a\in\D$, and $\D$ is saturated under $\eqbCk$ and closed under morphisms, all the objects in the diagram above sit in $\D$. In particular, $b\in \D$ and thus \ref{HP-abstract} holds.
\end{proof}
\subsection{Relativising to full subcategories}\label{s:relativisation}
Let $\E'$ be a full subcategory of $\E$.
We say that \ref{HP-abstract} holds \emph{relative to $\E'$} if the following condition is satisfied: For any full subcategory $\D$ of $\E$ saturated under $\eqbCk$, $\D \cap \E'$ is closed under morphisms in $\E'$ precisely when it is upwards closed in $\E'$ with respect to $\prCk$. Likewise for \ref{HPplus-abstract}.
For the next proposition, observe that in order for a comonad $G$ on $\E$ to restrict to a full subcategory $\E'$ of $\E$ it is necessary and sufficient that $Ga\in \E'$ for all $a\in \E'$.
\begin{proposition}\label{p:relative-HPT-and-HPTplus}
Consider a resource-indexed arboreal adjunction between $\E$ and $\C$, and let $\E'$ be a full subcategory of $\E$ such that the induced comonads $G_k\coloneqq L_k R_k$ restrict to $\E'$. If the resource-indexed arboreal adjunction has the bisimilar companion property then \ref{HP-abstract} holds relative to $\E'$. If it is idempotent then \ref{HPplus-abstract} holds relative to $\E'$.
\end{proposition}
\begin{proof}
The same, mutatis mutandis, as for Propositions~\ref{p:HPT-tame} and~\ref{p:HPT-graded}, respectively.
\end{proof}
Since the modal comonads $\Mk$ restrict to finite pointed Kripke structures, the previous result yields a variant of Theorem~\ref{th:hpt-graded-modal-logic} for finite structures:
\begin{theorem}\label{th:hpt-graded-modal-logic-finite}
The following statements are equivalent for any graded modal formula~$\phi$ of modal depth at most $k$ in a modal vocabulary:
\begin{enumerate}
\item $\phi$ is preserved under homomorphisms between finite pointed Kripke structures.
\item $\phi$ is logically equivalent over finite pointed Kripke structures to an existential positive modal formula of modal depth at most $k$.
\end{enumerate}
\end{theorem}
Similarly, since the guarded comonads $\mathbb{G}_{n,k}^{\mathfrak{g}}$ restrict to finite structures, we obtain the following variant of Theorem~\ref{th:hpt-guarded} for finite structures:
\begin{theorem}\label{th:hpt-guarded-logics-finite}
Let $\mathfrak{g}$ be a notion of guarding (either atom or loose).
The following statements are equivalent for any $\mathfrak{g}$-guarded sentence~$\phi$ in $n$ variables of guarded-quantifier rank at most $k$ in a relational vocabulary:
\begin{enumerate}
\item $\phi$ is preserved under homomorphisms between finite structures.
\item $\phi$ is logically equivalent over finite structures to an existential positive $\mathfrak{g}$-guarded sentence in $n$ variables of guarded-quantifier rank at most $k$.
\end{enumerate}
\end{theorem}
In all our examples of resource-indexed arboreal adjunctions and covers, the counits of the induced comonads $G_k\coloneqq L_k R_k$ are componentwise surjective. The next easy observation, combined with Proposition~\ref{p:relative-HPT-and-HPTplus}, then provides a useful criterion to relativise equi-resource homomorphism preservation theorems to subclasses of structures. Recall that a \emph{negative formula} is one obtained from negated atomic formulas and $\vee, \wedge, \exists, \forall$.
\begin{lemma}\label{l:surj-counit-negpos-relat}
Let $G$ be a comonad on $\mathbf{Struct}(\sg)$ whose counit is componentwise surjective and let $T$ be a set of negative sentences. Then $G$ restricts to $\Mod(T)$.
\end{lemma}
\begin{proof}
If $\psi\in T$, its negation is logically equivalent to a positive sentence $\chi$. Positive sentences are preserved under surjective homomorphisms and so, for all $\As\in \mathbf{Struct}(\sg)$, considering the component of the counit $G\As\epi \As$ we obtain
\[
G\As\models \chi \ \Longrightarrow \ \As \models \chi.
\]
That is, $G$ restricts to $\Mod(\psi)$. As $\Mod(T)= \bigcap_{\psi\in T}{\Mod(\psi)}$, the statement follows.
\end{proof}
The counits of the guarded comonads $\mathbb{G}_{n,k}^{\mathfrak{g}}$ are componentwise surjective, thus the equirank-variable homomorphism preservation theorem for guarded logics and its finite variant (Theorems~\ref{th:hpt-guarded} and~\ref{th:hpt-guarded-logics-finite}, respectively) admit a relativisation to any full subcategory of the form $\Mod(T)$ where $T$ is a set of negative $\mathfrak{g}$-guarded sentences.
The counits of the modal comonads $\Mk$ are also componentwise surjective. As positive modal formulas are preserved under surjective homomorphisms between pointed Kripke structures, a slight variant of Lemma~\ref{l:surj-counit-negpos-relat} shows that the comonads $\Mk$ restrict to any full subcategory of the form $\Mod(T)$, with $T$ a set of negative modal formulas. Hence, the equidepth homomorphism preservation theorem for graded modal logic and its finite version (Theorems~\ref{th:hpt-graded-modal-logic} and~\ref{th:hpt-graded-modal-logic-finite}, respectively) can be relativised to any such~subcategory.
Relativisations to subclasses of structures can be obtained even in the absence of the bisimilar companion property; in that case, we need to ensure that $k$-extendable covers can be constructed within the subclass. We defer the statement of this result to Section~\ref{s:axioms-adj} (see Corollary~\ref{c:forcing-bisim-comp-prop-relative}).
\section{An Axiomatic Approach}\label{s:axiomatic}
In this section, we identify sufficient conditions on a resource-indexed arboreal adjunction between $\E$ and $\C$ ensuring that property~\ref{HP-abstract} is satisfied (see Corollary~\ref{cor:HPT-axiomatic}). We introduce first conditions \ref{lim-colim}--\ref{fact-syst} on $\E$ in Section~\ref{s:axioms-extensional} and then, in Section~\ref{s:axioms-adj}, conditions \ref{paths-finite}--\ref{path-restriction-prop} on the adjunctions. In Section~\ref{s:equirank-proof}, we derive a slight generalisation of the equirank homomorphism preservation theorem by showing that these conditions are satisfied by the Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction.
\begin{remark}
Let us point out that we cannot derive from this axiomatic approach an \emph{equivariable} homomorphism preservation theorem, whereby the number of variables in a sentence is preserved, let alone an \emph{equirank-variable} one. In fact, the corresponding ($k$-round) $n$-pebble comonads do not satisfy property~\ref{path-emb} below. On the other hand, under the additional assumption that $k\leq n+2$, where $k$ is the quantifier rank and $n$ is the number of variables in a sentence, an equirank-variable homomorphism preservation theorem was proved by Paine in~\cite{Paine2020}. Also, our approach does not readily apply to \emph{hybrid logic} because the hybrid comonads $\mathbb{H}_k$ in~\cite{Hybrid2022} do not satisfy the path restriction property~\ref{path-restriction-prop} (cf.~ Definition~\ref{def:path-re}).
\end{remark}
\subsection{Axioms for the extensional category}\label{s:axioms-extensional}
We require that the category $\E$ have the following properties:
\begin{enumerate}[label=\textnormal{(E\arabic*)}]
\item\label{lim-colim} $\E$ has all finite limits and small colimits.
\item\label{fact-syst} $\E$ is equipped with a proper factorisation system such that:
\begin{itemize}[leftmargin=*]
\item Embeddings are stable under pushouts along embeddings.
\item Pushout squares of embeddings are also pullbacks.
\item Pushout squares of embeddings are stable under pullbacks along embeddings.
\end{itemize}
\end{enumerate}
\begin{remark}
Note that property~\ref{fact-syst} only involves one half of the factorisation system, namely the embeddings. In fact, it could be weakened to the requirement that $\E$ admit a class of monomorphisms $\mathscr{N}$ satisfying appropriate properties. When $\mathscr{N}$ is the class of all monomorphisms, these are akin to the conditions for an \emph{adhesive category}, cf.~ \cite{Adhesive2004}.
\end{remark}
\begin{example}\label{ex:structures-axioms}
If $\sigma$ is a relational vocabulary, $\mathbf{Struct}(\sg)$ satisfies \ref{lim-colim}--\ref{fact-syst}. In fact, it is well known that $\mathbf{Struct}(\sg)$ is complete and cocomplete, hence it satisfies~\ref{lim-colim}. For~\ref{fact-syst}, consider the proper factorisation system given by surjective homomorphisms and embeddings.
Up to isomorphism, embeddings can be identified with inclusions of induced substructures. The pushout of a span of embeddings in $\mathbf{Struct}(\sg)$ can be identified with a union of structures, and so embeddings are stable under pushouts along embeddings. The remaining two conditions in~\ref{fact-syst} hold because they are satisfied in $\Set$ by the class of monomorphisms, and the forgetful functor $\mathbf{Struct}(\sg)\to \Set$ preserves and reflects pullback and pushout diagrams consisting of embeddings.
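Concretely, if $\As\emb \Bs$ and $\As\emb \Cs$ are inclusions of induced substructures, the pushout $\Bs +_{\As} \Cs$ can be computed as the union $\Bs\cup\Cs$, i.e.~ the structure whose universe is the union of the two universes and which interprets each relation symbol as the union of its interpretations in $\Bs$ and $\Cs$; the pushout injections are then again inclusions of induced substructures.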
\end{example}
We note in passing that properties \ref{lim-colim}--\ref{fact-syst} are stable under taking coslices:
\begin{lemma}\label{l:extensional-ax-coslices}
If a category $\E$ satisfies~\ref{lim-colim}--\ref{fact-syst}, then so does $e/{\E}$ for all $e\in \E$.
\end{lemma}
\begin{proof}
Fix an arbitrary object $e\in\E$. It is well known that limits and colimits in $e/{\E}$ are inherited from $\E$, so $e/{\E}$ satisfies~\ref{lim-colim} because $\E$ does.
By assumption, $\E$ admits a proper factorisation system satisfying~\ref{fact-syst}. Let $\Q$ and $\M$ be the classes of arrows in $e/{\E}$ whose underlying morphisms in $\E$ are quotients and embeddings, respectively. It is folklore that $(\Q,\M)$ is a factorisation system in $e/{\E}$. Moreover, this factorisation system is proper because the codomain functor $\cod\colon e/{\E} \to \E$ is faithful.
Recall that $\cod\colon e/{\E} \to \E$ preserves pushouts, so embeddings in $e/{\E}$ are stable under pushouts along embeddings because the corresponding property is satisfied in~$\E$. The remaining two properties in~\ref{fact-syst} follow by a similar reasoning, using the fact that $\cod\colon e/{\E} \to \E$ preserves and reflects limits and pushouts.
\end{proof}
\begin{example}
It follows from Example~\ref{ex:structures-axioms} and Lemma~\ref{l:extensional-ax-coslices} that, for all relational vocabularies $\sigma$, the category $\CSstar$ of pointed $\sigma$-structures satisfies \ref{lim-colim}--\ref{fact-syst}.
\end{example}
\subsection{Axioms for the resource-indexed adjunctions}\label{s:axioms-adj}
We now assume that the extensional category $\E$ satisfies \ref{lim-colim}--\ref{fact-syst}, and proceed to introduce conditions on the resource-indexed arboreal adjunction between $\E$ and $\C$.
To start with, consider an arbitrary adjunction $L\dashv R \colon \E \to \C$. As with any adjunction, there are hom-set bijections
\begin{equation*}
\E(L c, e) \longrightarrow \C(c, R e), \enspace f\mapsto f^\flat
\end{equation*}
natural in $c\in\C$ and $e\in\E$. Explicitly, $f^\flat$ is defined as the composite
\[\begin{tikzcd}
c \arrow{r}{\eta_c} & R L c \arrow{r}{R f} & R e
\end{tikzcd}\]
where $\eta$ is the unit of the adjunction $L\,{\dashv}\, R$. The inverse of the function $f\mapsto f^\flat$ sends $g\in \C(c, R e)$ to the morphism $g^\#$ given by the composition
\[\begin{tikzcd}
L c \arrow{r}{L g} & L R e \arrow{r}{\epsilon_{e}} & e,
\end{tikzcd}\]
where $\epsilon$ is the counit of the adjunction.
Naturality of these bijections means that
\[
(f_1\circ f_2)^\flat = R f_1 \circ f_2^\flat
\]
for all morphisms $f_1\colon e\to e'$ and $f_2\colon L c \to e$ in $\E$, and
\[
(g_1\circ g_2)^\#=g_1^\# \circ L g_2
\]
for all morphisms $g_1\colon c\to R e$ and $g_2\colon c'\to c$ in $\C$.
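As for any adjunction, the two assignments are mutually inverse. For instance, naturality of $\epsilon$ and the triangle identity $\epsilon_{L c}\circ L\eta_c = \mathrm{id}_{L c}$ yield
\[
(f^\flat)^\# \,=\, \epsilon_e \circ L(R f\circ \eta_c) \,=\, f\circ \epsilon_{L c}\circ L \eta_c \,=\, f,
\]
and, dually, $(g^\#)^\flat = g$ follows from naturality of $\eta$ and the triangle identity $R\epsilon_e\circ \eta_{R e} = \mathrm{id}_{R e}$.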
Next, we introduce the path restriction property for resource-indexed arboreal adjunctions. In a nutshell, this states that whenever $a\in \E$ embeds into the image under $L_k$ of a path, $a$ itself can be equipped with a path structure. Furthermore, these path structures can be chosen in a coherent fashion. We start with an auxiliary definition:
\begin{definition}
Consider a resource-indexed arboreal adjunction between $\E$ and $\C$. A path $P\in\Cp^k$ is \emph{smooth} if there exist $e\in \E$ and an embedding $P\emb R_k e$.
\end{definition}
\begin{remark}
The motivation for considering smooth paths arises from the fact that, when considering a fresh binary relation symbol $I$ modelling equality in the logic (cf.\ Example~\ref{ex:res-ind-arb-adj}), the interpretation of $I$ in these paths is always an equivalence relation.
\end{remark}
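\begin{example}
In the Ehrenfeucht-Fra\"{i}ss\'{e}~setting of Example~\ref{ex:res-ind-arb-adj}, not every path is smooth. For instance, let $P=(\As,\leq)$ be a two-element chain $x < y$ with $I^{\As}=\{(x,y)\}$. In any $R_k e$ the interpretation of $I$ relates each play to itself (cf.~ the description of $R_k$ in Section~\ref{s:equirank-proof}), so an embedding $P\emb R_k e$ would force $(x,x)\in I^{\As}$; hence no such embedding exists.
\end{example}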
\begin{definition}\label{def:path-re}
A resource-indexed arboreal adjunction between $\E$ and $\C$ has the \emph{path restriction property} if, for all smooth paths $Q\in\Cp^k$ and embeddings ${j\colon a\emb L_k Q}$, there is a path $Q_a\in\Cp^k$ such that $L_k Q_a \cong a$ and the following conditions are satisfied:
\begin{enumerate}[label=(\roman*)]
\item\label{path-re-1} For all path embeddings $!_{P,Q}\colon P\emb Q$ in $\C_k$ and all commutative diagrams
\[\begin{tikzcd}
L_k P \arrow[rr, relay arrow=2ex, "L_k(!_{P,Q})"] \arrow[rightarrowtail]{r}{f} & a \arrow[rightarrowtail]{r}{j} & L_k Q
\end{tikzcd}\]
there is an arrow $\ell\colon P\to Q_{a}$ such that $L_k \ell = f$.
\item\label{path-re-2} For all path embeddings $!_{P,Q}\colon P\emb Q$ such that $L_k(!_{P,Q})$ is an embedding, the pullback of $L_k(!_{P,Q})$ along $j$ is of the form $L_k \ell$ for some $\ell\colon Q_b\to Q_a$.
\[\begin{tikzcd}
b \arrow[dr, phantom, "\lrcorner", very near start] \arrow[rightarrowtail]{r} \arrow[rightarrowtail]{d}[swap]{L_k \ell} & L_k P \arrow[rightarrowtail]{d}{L_k (!_{P,Q})} \\
a \arrow[rightarrowtail]{r}{j} & L_k Q
\end{tikzcd}\]
\end{enumerate}
\end{definition}
Finally, recall that an object $a$ of a category $\A$ is \emph{finitely presentable} if the associated hom-functor $\A(a,-)\colon \A\to \Set$ preserves directed colimits.
With regards to the resource-indexed arboreal adjunction, we assume the following properties are satisfied for all $k > 0$ and all paths $P\in\Cp^k$:
\begin{enumerate}[label=\textnormal{(A\arabic*)}]
\item\label{paths-finite} The category $\Cp^k$ is locally finite and has finitely many objects up to isomorphism.
\item\label{paths-Lk-fp} $L_k P$ is finitely presentable in $\E$.
\item\label{path-emb} For all arrows $m\colon P\to R_k a$ in $\C_k$, if $m$ is an embedding then so is $m^\# \colon L_k P\to a$. The converse holds whenever $P$ is smooth.
\item\label{path-restriction-prop} The path restriction property is satisfied.
\end{enumerate}
\begin{theorem}\label{t:model-construction}
Consider a resource-indexed arboreal adjunction between $\E$ and $\C$ satisfying \ref{lim-colim}--\ref{fact-syst} and \ref{paths-finite}--\ref{path-restriction-prop}. For all $a\in\E$ and all $k > 0$, there exists a $k$-extendable cover of $a$.
\end{theorem}
The proof of the previous key fact is deferred to Section~\ref{s:proof-mc}. Let us point out the following immediate consequence:
\begin{corollary}\label{cor:HPT-axiomatic}
\ref{HP-abstract} holds for all resource-indexed arboreal adjunctions satisfying \ref{lim-colim}--\ref{fact-syst} and \ref{paths-finite}--\ref{path-restriction-prop}.
\end{corollary}
\begin{proof}
By Proposition~\ref{p:axiomatic-HP} and Theorem~\ref{t:model-construction}.
\end{proof}
We can also deduce the following relativisation result.
Let us say that a full subcategory $\D$ of a category $\A$ is \emph{closed (in $\A$) under co-retracts} provided that, whenever $A\in \D$ and there is a section $A\to B$ in $\A$, also $B\in \D$.
\begin{corollary}\label{c:forcing-bisim-comp-prop-relative}
Let $\sigma$ be a relational vocabulary and consider a resource-indexed arboreal adjunction between $\mathbf{Struct}(\sg)$ and $\C$ satisfying \ref{paths-finite}--\ref{path-restriction-prop}. The following hold:
\begin{enumerate}
\item\label{rel-co-retr} If $\D$ is a full subcategory of $\mathbf{Struct}(\sg)$ closed under co-retracts such that each induced comonad $G_k\coloneqq L_k R_k$ restricts to $\D$, then \ref{HP-abstract} holds relative to $\D$.
\item\label{rel-negated-pos} If the counits of the comonads $G_k$ are componentwise surjective and $T$ is a set of negative sentences in the vocabulary $\sigma$, then \ref{HP-abstract} holds relative to $\Mod(T)$.
\end{enumerate}
\end{corollary}
\begin{proof}
The proof of item~\ref{rel-co-retr} is the same, mutatis mutandis, as for Proposition~\ref{p:axiomatic-HP}, using Theorem~\ref{t:model-construction} and the fact that if $\D$ is closed under co-retracts then $k$-extendable covers can be constructed within $\D$.
Item~\ref{rel-negated-pos} is an immediate consequence of item~\ref{rel-co-retr}.
Just observe that the comonads $G_k$ restrict to $\Mod(T)$ by Lemma~\ref{l:surj-counit-negpos-relat}, and $\Mod(T)$ is closed under co-retracts (cf.~ the proof of the aforementioned lemma).
\end{proof}
\subsection{The equirank homomorphism preservation theorem}\label{s:equirank-proof}
As observed in Section~\ref{s:HP-HPplus}, Rossman's equirank homomorphism preservation theorem is equivalent to property~\ref{HP-abstract} for the Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction between $\mathbf{Struct}(\sg)$ and $\RT(\sigma^I)$. In turn, by Corollary~\ref{cor:HPT-axiomatic}, to establish~\ref{HP-abstract} it suffices to show that the latter resource-indexed arboreal adjunction satisfies \ref{lim-colim}--\ref{fact-syst} and \ref{paths-finite}--\ref{path-restriction-prop}. By Example~\ref{ex:structures-axioms}, the category $\mathbf{Struct}(\sg)$ satisfies \ref{lim-colim}--\ref{fact-syst} when equipped with the (surjective homomorphisms, embeddings) factorisation system, so it remains to show that \ref{paths-finite}--\ref{path-restriction-prop} hold. Before doing so, note that Corollary~\ref{c:forcing-bisim-comp-prop-relative} yields the following slight generalisation of the equirank homomorphism preservation theorem (just observe that the counits of the induced comonads on $\mathbf{Struct}(\sg)$ are componentwise surjective).
\begin{theorem}\label{t:equirank-hpt-relative}
Let $\sigma$ be a relational vocabulary and let $\D$ be a full subcategory of $\mathbf{Struct}(\sg)$ closed under co-retracts such that the comonads on $\mathbf{Struct}(\sg)$ induced by the Ehrenfeucht-Fra\"{i}ss\'{e}~resource-indexed arboreal adjunction restrict to $\D$. Then the equirank homomorphism preservation theorem holds relative to $\D$.
In particular, the equirank homomorphism preservation theorem holds relative to $\Mod(T)$ whenever $T$ is a set of negative sentences in the vocabulary $\sigma$.
\end{theorem}
\begin{remark}
A consequence of the first part of Theorem~\ref{t:equirank-hpt-relative} is that the equirank homomorphism preservation theorem admits a relativisation to any class of structures that is \emph{co-homomorphism closed}, i.e.~ downwards closed with respect to the homomorphism preorder on $\mathbf{Struct}(\sg)$, a fact already pointed out by Rossman in~\cite[\S 7.1.2]{Rossman2008}.
\end{remark}
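\begin{example}
For a concrete instance of Theorem~\ref{t:equirank-hpt-relative}, let $\sigma=\{E\}$ with $E$ binary, and let $T=\{\forall x\, \neg E(x,x)\}$, a set consisting of a single negative sentence. Then the equirank homomorphism preservation theorem holds relative to the class of loop-free directed graphs. Note that this class is also co-homomorphism closed: a homomorphism maps any loop to a loop, so every structure admitting a homomorphism into a loop-free graph is itself loop-free.
\end{example}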
We proceed to verify conditions \ref{paths-finite}--\ref{path-restriction-prop} for the adjunctions $L_k\dashv R_k$, where
\[
L_k \coloneqq H \LE_k \ \text{ and } \ R_k\coloneqq \RE_k J
\]
with the notation of Example~\ref{ex:res-ind-arb-adj}.
Note that, since a first-order sentence contains only finitely many relation symbols, in order to deduce the equirank homomorphism preservation theorem, as well as Theorem~\ref{t:equirank-hpt-relative} above, we can assume without loss of generality that the relational vocabulary $\sigma$ is finite.
\ref{paths-finite} Recall from Example~\ref{ex:RE} that, for all $k>0$, the paths in $\RTk(\sigma^I)$ are those forest-ordered $\sigma^I$-structures $(\As,\leq)$ such that the order is a chain of cardinality at most~$k$. Thus, $\As$ has cardinality at most $k$. It follows at once that there are finitely many paths in $\RTk(\sigma^I)$ up to isomorphism, and at most one arrow between any two paths.
\ref{paths-Lk-fp} For any path $P=(\As,\leq)$ in $\RTk(\sigma^I)$, $L_k P$ is the quotient of the $\sigma$-reduct of $\As$ with respect to the equivalence relation generated by $I^{\As}$. As $\As$ is finite, so is $L_k P$. The finitely presentable objects in $\mathbf{Struct}(\sg)$ are precisely the finite $\sigma$-structures (see e.g.~ \cite[\S 5.1]{AR1994}), hence $L_k P$ is finitely presentable.
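To illustrate the quotient construction: if $P=(\As,\leq)$ is the chain $x_1 < x_2$ with $I^{\As}=\{(x_1,x_2)\}$, the equivalence relation generated by $I^{\As}$ identifies $x_1$ with $x_2$, so $L_k P$ is a one-element $\sigma$-structure in which a relation symbol $S$ holds of (tuples of) the unique element precisely when $S^{\As}$ is non-empty.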
\ref{path-emb} Consider an arrow $m\colon P\to R_k \Bs$ in $\RTk(\sigma^I)$, with $P=(\As,\leq)$ a path and $\Bs$ a $\sigma$-structure.
Let $\Bs'\coloneqq J(\Bs)$ be the $\sigma^I$-structure obtained from $\Bs$ by interpreting $I$ as the identity relation.
Then $R_k \Bs$ is obtained by equipping $\Ek(\Bs')$ with the prefix order. Consider the $\sigma$-homomorphism
\[
L_k m = H m \colon H(\As)\to H\Ek(\Bs').
\]
For convenience of notation, given an element $x\in\As$ we write $[x]$ for the corresponding element of $H(\As)$, and likewise for elements of $\Ek(\Bs')$.
Then $m^\#\colon L_k P \to \Bs$ is the composite of $Hm$ with the homomorphism $H\Ek(\Bs') \to \Bs$ sending the equivalence class of an element of $\Ek(\Bs')$ to the last element of any of its representatives. This map is well defined because, for any pair of sequences in $I^{\Ek(\Bs')}$, their last elements coincide.
Suppose $m$ is an embedding. If $m^\#([x])=m^\#([y])$ then $(m(x),m(y))$ belongs to the equivalence relation generated by $I^{\Ek(\Bs')}$, and so $(x,y)$ belongs to the equivalence relation generated by $I^{\As}$. It follows that $[x]=[y]$ in $H(\As)$, and so $m^\#$ is injective. The same argument, mutatis mutandis, shows that $m^\#$ reflects the interpretation of the relation symbols, hence is an embedding.
Conversely, suppose that $m^\#$ is an embedding and $P$ is smooth. Consider an embedding $n\colon P\emb R_k \Cs$ with $\Cs\in \mathbf{Struct}(\sg)$. Note that the restriction of $I^{\Ek(\Cs')}$ to the image of $n$ is an equivalence relation, hence $I^{\As}$ is an equivalence relation. As any forest morphism whose domain is linearly ordered is injective, $m$ is injective. So, it remains to show that $m$ reflects the interpretation of the relation symbols. For all $x,y\in \As$, if
\[
(m(x),m(y))\in I^{\Ek(\Bs')}
\] then $m^\#([x])=m^\#([y])$ and so $[x]=[y]$ because $m^\#$ is injective. That is, $(x,y)\in I^{\As}$, showing that $m$ reflects the interpretation of the relation $I$. Suppose now that $S$ is a relation symbol different from~$I$. For convenience of notation we shall assume that $S$ has arity $2$; the general case is a straightforward adaptation. For all $x,y\in \As$, if
\[
(m(x),m(y))\in S^{\Ek(\Bs')}
\]
then $(m^\#([x]), m^\#([y]))\in S^{\Bs}$ and so $([x], [y])\in S^{H(\As)}$ because $m^\#$ is an embedding. That is, there are $x',y'\in \As$ such that $(x,x'),(y,y')\in I^{\As}$ and $(x',y')\in S^{\As}$. We claim that the following property holds, from which it follows that $m$ is an embedding:
\begin{equation}\label{eq:smooth-property}
(x,x'),(y,y')\in I^{\As} \ \text{ and } \ (x',y')\in S^{\As} \ \Longrightarrow \ (x,y)\in S^{\As}. \tag{$\ast$}
\end{equation}
In turn, this is a consequence of the fact that $n$ is an embedding and, in $\Ek(\Cs')$,
\[
(n(x),n(x')),(n(y),n(y'))\in I^{\Ek(\Cs')} \ \text{ and } \ (n(x'),n(y'))\in S^{\Ek(\Cs')}
\]
imply $(n(x),n(y))\in S^{\Ek(\Cs')}$.
\ref{path-restriction-prop} Finally, we show that the path restriction property is satisfied. Let $Q=(\Bs,\leq)$ be a smooth path in $\RTk(\sigma^I)$, and let $j\colon \As\emb H(\Bs)$ be an embedding in $\mathbf{Struct}(\sg)$. Without loss of generality, we can identify $\As$ with a substructure of $H(\Bs)$, and $j$ with the inclusion map. As observed above, since $Q$ is smooth, $I^{\Bs}$ is an equivalence relation and property~\eqref{eq:smooth-property} is satisfied (with $\Bs$ in place of $\As$). If $q_{\Bs}\colon \Bs\twoheadrightarrow H(\Bs)$ is the canonical quotient map, let $Q_{\As}$ denote the substructure of $\Bs$ whose underlying set is
\[
\{x\in \Bs \mid q_{\Bs}(x) \in \As\}.
\]
Then $Q_{\As}$ is a path in $\RTk(\sigma^I)$ when equipped with the restriction of the order on~$\Bs$ and, using~\eqref{eq:smooth-property}, we get $H(Q_{\As})\cong\As$.
It follows from the definition of $Q_{\As}$ that item~\ref{path-re-1} in Definition~\ref{def:path-re} is satisfied. Just observe that any substructure of $Q_{\As}$ that is downwards closed in $Q$ is also downwards closed in $Q_{\As}$. With regards to item~\ref{path-re-2}, consider a path embedding $!_{P,Q}\colon P\emb Q$ with $P=(\Cs,\leq)$ and form the following pullback square in $\mathbf{Struct}(\sg)$.
\[\begin{tikzcd}
\Ds \arrow[dr, phantom, "\lrcorner", very near start] \arrow[rightarrowtail]{r} \arrow[rightarrowtail]{d} & H(\Cs) \arrow[rightarrowtail]{d}{L_k (!_{P,Q})} \\
\As \arrow[rightarrowtail]{r}{j} & H(\Bs)
\end{tikzcd}\]
Identifying $H(\Cs)$ with a substructure of $H(\Bs)$, we can assume $\Ds$ is the intersection of $\As$ and $H(\Cs)$.
Because $\Cs$ is a substructure of $\Bs$, it follows that $Q_{\Ds}$ is a substructure of~$Q_{\As}$. Moreover, because $\Cs$ is downwards closed in~$\Bs$, $Q_{\Ds}$ is downwards closed in~$Q_{\As}$. That is, there is an inclusion $Q_{\Ds}\emb Q_{\As}$ whose image under $L_k$ coincides with the pullback of $L_k (!_{P,Q})$ along $j$. Hence the path restriction property holds.
\section{Proof of Theorem~\ref{t:model-construction}}\label{s:proof-mc}
For the remainder of this section, we fix an arbitrary resource-indexed arboreal adjunction between $\E$ and~$\C$, with adjunctions $L_k\dashv R_k\colon \E\to \C_k$, satisfying \ref{lim-colim}--\ref{fact-syst} and \ref{paths-finite}--\ref{path-restriction-prop}.
\subsection{Relative extendability}\label{ss:relative-ext}
For all $k > 0$, we denote by $\o{\C}_k$ the full subcategory of $\C_k$ whose objects are colimits of finite diagrams of embeddings in $\Cp^k$. Further, we write $L_k[\o{\C}_k]$ for the full subcategory of $\E$ defined by the objects of the form $L_k c$ for $c\in \o{\C}_k$.
\begin{remark}\label{rem:finitely-many-finite-colim}
It follows from~\ref{paths-finite} that $\o{\C}_k$ is equivalent to a finite category. Therefore $L_k[\o{\C}_k]$ contains, up to isomorphism, only finitely many objects.
\end{remark}
As we shall see in the following lemma, every path embedding $P\emb R_k a$ is homomorphically equivalent to one of the form $P\emb R_k \tilde{a}$ with $\tilde{a}\in L_k[\o{\C}_k]$. Consequently, in the definition of $k$-extendable object (see Definition~\ref{d:k-extendable}) we can assume without loss of generality that $e\in L_k[\o{\C}_k]$. This observation, combined with Remark~\ref{rem:finitely-many-finite-colim}, will allow us to control the size of the diagrams featuring in the proof of Theorem~\ref{t:model-construction}.
\begin{lemma}\label{l:fg-equiv-finite}
For all path embeddings $m\colon P\emb R_k a$, there are $\tilde{a}\in L_k[\o{\C}_k]$ and a path embedding $\tilde{m}\colon P\emb R_k \tilde{a}$ such that $m\rightleftarrows \tilde{m}$ in $P/{\C_k}$.
\end{lemma}
\begin{proof}
Fix an arbitrary path embedding $m\colon P\emb R_k a$. By \ref{paths-finite}, there is a finite set of paths $\mathscr{P}=\{P_1,\ldots,P_r\}\subseteq \Cp^k$ such that each path in $\C_k$ is isomorphic to exactly one member of $\mathscr{P}$. We can assume without loss of generality that $P\in\mathscr{P}$.
For each path embedding $p\in \Path{(R_k a)}$, denote by $T_p$ the tree obtained by first considering the tree ${\uparrow} p\subseteq \Path{(R_k a)}$ and then replacing each node $q$ (which is an isomorphism class of a path embedding) with the unique path $P_i\in\mathscr{P}$ such that $P_i\cong \dom(q)$. We assume that $T_p$ is \emph{reduced}, i.e.~ given any two nodes $x$ and $y$ of $T_p$ that cover the same node, if the trees ${\uparrow} x$ and ${\uparrow} y$ are equal then $x=y$. (If $T_p$ is not reduced, we can remove branches in the obvious manner to obtain a maximal reduced subtree $T'_p$.)
We refer to $T_p$ as the \emph{type} of $p$; note that this is a finite tree.
In particular, if $\bot$ is the root of $\Path{(R_k a)}$, we get a finite tree $T_{\bot}$.
Now, for each node $x$ of $T_\bot$, we shall define a path embedding $m_x$ into $R_k a$ whose domain belongs to $\mathscr{P}$. The definition of $m_x$ is by induction on the height of $x$.
Suppose $x$ has height $0$, i.e.~ $x$ is the root of $T_\bot$. Then $x=P_i$ for a unique $i\in \{1,\ldots,r\}$. Define $m_x$ as the restriction of $m$ to $P_i$, i.e.~ the composition of $m\colon P\emb R_k a$ with the unique embedding $P_i\emb P$. Next, suppose $m_z$ has been defined for all nodes $z$ of height at most $l$, and let $x$ be a node of height $l+1$ labeled by some $P_j$. We distinguish two cases:
\begin{itemize}
\item If there is a node $y\geq x$ such that $T_m$ coincides with the tree ${\uparrow} y\subseteq T_{\bot}$, then we let $m_x$ be the restriction of $m$ to $P_j$.
Note that, in this case, the type of $m_x$ coincides with the tree ${\uparrow} x\subseteq T_{\bot}$. Moreover, if $z$ is the predecessor of $x$ then $m_z$ will also be an appropriate restriction of $m$, and thus $m_x$ extends $m_z$.
\item Otherwise, we let $m_x\colon P_j\emb R_k a$ be any path embedding such that:
\begin{enumerate}[label=(\roman*)]
\item The type of $m_x$ coincides with the tree ${\uparrow} x\subseteq T_{\bot}$.
\item $m_x$ extends $m_z$, where $z$ is the predecessor of $x$.
\end{enumerate}
Note that such an embedding $m_x$ exists because $x\in {\uparrow} z\subseteq T_{\bot}$ and, by inductive hypothesis, ${\uparrow} z$ coincides with the type of $m_z$.
\end{itemize}
The set
\[
V\coloneqq \{m_x\mid x\in T_\bot\}
\]
is finite and contains $m$. We regard $V$ as a cocone over a finite diagram $D$ of paths and embeddings between them. Let $\tilde{a}\coloneqq L_k(\operatornamewithlimits{colim} D)$ and note that $\tilde{a}\in L_k[\o{\C}_k]$. The functor $L_k$ preserves colimits because it is left adjoint, hence $\tilde{a}$ is the colimit in $\E$ of the diagram $L_k D$.
The cocone $\{n^\#\mid n\in V\}$ with vertex $a$ over $L_k D$ then factors through a unique morphism $f\colon \tilde{a}\to a$.
By construction, $m\colon P\emb R_k a$ factors through $R_k f$, and so there is $\tilde{m}\colon P\emb R_k \tilde{a}$ such that $m=R_k f\circ \tilde{m}$. Hence, $\tilde{m}\to m$.
Next, with the aim of showing that $m\to \tilde{m}$, we shall define a morphism ${R_k a \to R_k \tilde{a}}$. As $R_k a$ is path generated, it suffices to define a cocone
\[
W=\{\phi_p\mid p\in \Path{(R_k a)}\}
\]
with vertex $R_k \tilde{a}$ over the diagram of path embeddings into $R_k a$. Suppose $p\in \Path{(R_k a)}$. We define the corresponding arrow $\phi_p$ by induction on the height of $p$:
\begin{enumerate}[label=(\roman*)]
\item If $p$ is the root of $\Path{(R_k a)}$, then it factors through $R_k f\colon R_k \tilde{a}\emb R_k a$, and so it yields an embedding $\phi_p\colon \dom(p)\emb R_k \tilde{a}$.
\item Suppose that $p$ has height $l+1$ and $\phi_q$ has been defined whenever $q$ has height at most $l$. We distinguish two cases: if $p$ factors through $R_k f\colon R_k \tilde{a}\emb R_k a$, i.e.~ $p= R_k f\circ s_p$ for some embedding $s_p$, then we set $\phi_p\coloneqq s_p$. This is the case, in particular, when $p\leq m$ in $\Path{(R_k a)}$. Clearly, if $p$ extends $q$ then $\phi_p$ extends~$\phi_q$.
Otherwise, let $q$ be such that $p\succ q$. By inductive hypothesis, we can suppose that $R_k f\circ \phi_q$ coincides with an embedding $m_x\colon P_j\emb R_k a$ in $V$ (up to an isomorphism $\dom(q)\cong P_j$) whose type coincides with the tree ${\uparrow} x\subseteq T_{\bot}$. As $p$ corresponds to a node $y$ covering $x$ labeled by some $P_h\cong \dom(p)$, by definition of $V$ there is an embedding $m_y\colon P_h\emb R_k a$ in $V$ such that $m_y$ extends $m_x$, and the type of $m_y$ coincides with ${\uparrow} y$. Since $m_y$ factors through $R_k f$, precomposing with the isomorphism $\dom(p)\cong P_h$ we get an embedding $\phi_p\colon \dom(p)\emb R_k \tilde{a}$. Observe that $\phi_p$ extends $\phi_q$.
\end{enumerate}
The compatibility condition for the cocone $W$ states that $\phi_p$ extends $\phi_q$ whenever $p$ extends $q$, which is ensured by the definition above.
Thus, $W$ induces a morphism $g\colon R_k a \to R_k \tilde{a}$ and, by construction, $g\circ m= \tilde{m}$. Hence, $m\to \tilde{m}$.
\end{proof}
The construction of $k$-extendable covers is akin to that of $\omega$-saturated elementary extensions in model theory, where one starts with a first-order structure $M$ and constructs an elementary extension $M_1$ of $M$ in which all types over (finite subsets of) $M$ are realised, then an elementary extension $M_2$ of $M_1$ in which all types over $M_1$ are realised, and so forth. The union of the induced elementary chain of models yields the desired $\omega$-saturated elementary extension of $M$.
In the same spirit, we introduce a notion of $k$-extendability relative to a homomorphism, which models the one-step construction just outlined.
\begin{definition}\label{def:relative-extendability}
Let $h\colon a\to b$ be an arrow in $\E$. We say that $b$ is \emph{$k$-extendable relative to $h$} if the following property is satisfied for all $e\in L_k[\o{\C}_k]$: For all path embeddings $m\colon P\emb R_k a$ and $n\colon P\emb R_k e$ such that $\co{m}\rightleftarrows \co{n}$,
\[\begin{tikzcd}[column sep=2em]
{} & P \arrow[rightarrowtail]{dl}[swap]{\co{m}} \arrow[rightarrowtail]{dr}{\co{n}} & {} \\
\Sg{m} \arrow[yshift=3pt]{rr} & & \Sg{n} \arrow[yshift=-3pt]{ll}
\end{tikzcd}\]
if $n'\colon Q\emb R_k e$ is a path embedding such that $n\leq n'$ in $\Path{(R_k e)}$, there is a path embedding $m'\colon Q\emb R_k b$ such that the leftmost diagram below commutes and $\co{m'}\rightleftarrows \co{n'}$.
\begin{center}
\begin{tikzcd}
P \arrow[rightarrowtail]{r}{!} \arrow[rightarrowtail]{d}[swap]{m} & Q \arrow[rightarrowtail,dashed]{d}{m'} \\
R_k a \arrow{r}{R_k h} & R_k b
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}[column sep=2em]
{} & Q \arrow[rightarrowtail]{dl}[swap]{\co{m'}} \arrow[rightarrowtail]{dr}{\co{n'}} & {} \\
\Sg{m'} \arrow[yshift=3pt,dashed]{rr} & & \Sg{n'} \arrow[yshift=-3pt,dashed]{ll}
\end{tikzcd}
\end{center}
\end{definition}
Suppose that, given an object $a\in\E$, we are able to construct a section $s\colon a\to b$ such that $b$ is $k$-extendable relative to $s$. Iterating this process countably many times, we obtain a $k$-extendable cover $a\to a^*$, thus settling Theorem~\ref{t:model-construction}. The main hurdle consists in establishing the following proposition; a proof is offered in Section~\ref{s:proof-one-step-ext}.
\begin{proposition}\label{pr:relative-extension}
For all $a\in\E$ and all $k > 0$ there is a section $s\colon a\to b$ such that $b$ is $k$-extendable relative to $s$.
\end{proposition}
We can finally prove Theorem~\ref{t:model-construction}:
\begin{proof}[Proof of Theorem~\ref{t:model-construction}]
Let $a\in \E$. By applying Proposition~\ref{pr:relative-extension} repeatedly, we obtain a chain of sections
\[\begin{tikzcd}
a \arrow{r}{s_1} & b_1 \arrow{r}{s_2} & b_2 \arrow{r}{s_3} & b_3 \arrow{r}{s_4} & \cdots
\end{tikzcd}\]
such that $b_i$ is $k$-extendable relative to $s_i$, for all $i\geq 1$. Denote the previous diagram by $D$ and let $a^*$ be the colimit of $D$ in $\E$, which exists by~\ref{lim-colim}. Let $h_i\colon b_i\to a^*$ be the colimit map with domain $b_i$, and $s\colon a\to a^*$ the one with domain~$a$. As all the arrows in $D$ are sections, so are the colimit maps; in particular, $s$ is a section.
We claim that $a^*$ is $k$-extendable. Suppose $m\colon P\emb R_k(a^*)$ and $n\colon P\emb R_k e$ are path embeddings such that $\co{m}\rightleftarrows \co{n}$.
By Lemma~\ref{l:fg-equiv-finite}, we can assume without loss of generality that $e\in L_k[\o{\C}_k]$.
By~\ref{paths-Lk-fp}, $L_k P$ is finitely presentable in $\E$ and so $m^\#\colon L_k P \to a^*$ factors through one of the colimit maps. Assume without loss of generality that $m^\#$ factors through $h_j \colon b_j \to a^*$ for some $j\,{\geq}\, 1$, so there is an arrow $r\colon L_k P\to b_j$ satisfying $m^\# = h_j \circ r$. If $m_j\coloneqq r^\flat$, it follows that $m = R_k h_j \circ m_j$. In particular, $m_j$ is an embedding because so is $m$. Since $R_k h_j$ is a section, Remark~\ref{rem:sections-coslice} entails $m_j\rightleftarrows m$, and so $\co{(m_j)}\rightleftarrows \co{m}$ by Lemma~\ref{l:corestriction-properties}\ref{corestriction-arrows}. Because $\co{m}\rightleftarrows\co{n}$, also $\co{(m_j)}\rightleftarrows \co{n}$.
Now, let $n'\colon Q\emb R_k e$ be any path embedding such that $n\leq n'$ in~$\Path{(R_k e)}$. Since $b_{j+1}$ is $k$-extendable relative to $s_{j+1}$, there is a path embedding $m'\colon Q\emb R_k b_{j+1} $ such that $\co{m'}\rightleftarrows \co{n'}$ and the following diagram commutes.
\begin{equation*}
\begin{tikzcd}[column sep=4em]
P \arrow[rightarrowtail]{r}{!} \arrow[rightarrowtail]{d}[swap]{m_j} & Q \arrow[rightarrowtail]{d}{m'} \\
R_k b_j \arrow{r}{R_k s_{j+1}} & R_k b_{j+1}
\end{tikzcd}
\end{equation*}
It follows that
\[
m = R_k h_j \circ m_j = R_k h_{j+1} \circ R_k s_{j+1} \circ m_j = R_k h_{j+1} \circ m'\circ {!}
\]
and so $m''\coloneqq R_k h_{j+1} \circ m'\colon Q\emb R_k(a^*)$ satisfies $m\leq m''$ in $\Path{(R_k(a^*))}$. Again by Remark~\ref{rem:sections-coslice} and Lemma~\ref{l:corestriction-properties}\ref{corestriction-arrows} we get $\co{m''}\rightleftarrows \co{m'}$, and thus $\co{m''}\rightleftarrows \co{n'}$. This shows that $a^*$ is $k$-extendable.
\end{proof}
\subsection{Proof of Proposition~\ref{pr:relative-extension}}\label{s:proof-one-step-ext}
Fix an object $a$ of $\E$ and a positive integer $k$.
To improve readability, we drop the subscript from $L_k$ and $R_k$, and simply write $L$ and $R$ (but continue to denote by $\C_k$ the arboreal category).
We must find a section $s\colon a\to b$ such that $b$ is $k$-extendable relative to $s$. Consider all pairs of path embeddings
\[
(u\colon P\emb Ra, v\colon P\emb Re)
\]
in $\C_k$ such that $e\in L[\o{\C}_k]$ and $L\co{v}\to u^\#$ in $LP/{\E}$.
\begin{remark}
Note that $L\co{v}\to u^\#$ entails that $L\co{v}$ is an embedding. Just observe that $u^\#$ is an embedding by the first part of~\ref{path-emb}.
\end{remark}
Each such pair $(u,v)$ induces a pushout square in $\E$ as follows.
\[\begin{tikzcd}[column sep=3em]
L P \arrow[rightarrowtail]{d}[swap]{u^\#} \arrow[rightarrowtail]{r}{L\co{v}} & L\Sg{v} \arrow[rightarrowtail]{d}{\lambda_{(u,v)}} \\
a \arrow[rightarrowtail]{r}{\iota_{(u,v)}} & a +_{L P} L \Sg{v} \arrow[ul, phantom, "\ulcorner", very near start]
\end{tikzcd}\]
This pushout square exists by~\ref{lim-colim} and consists entirely of embeddings by virtue of~\ref{fact-syst}.
\begin{lemma}\label{l:iota-sections}
$\iota_{(u,v)}$ is a section.
\end{lemma}
\begin{proof}
Just observe that, since $L\co{v}\to u^\#$, there is $g\colon L \Sg{v} \to a$ such that ${g\circ L\co{v} = u^\#}$. By the universal property of the pushout, there is an arrow $h\colon a +_{L P} L \Sg{v}\to a$ such that $h\circ \iota_{(u,v)}$ is the identity of $a$.
\end{proof}
We let $D$ be the diagram in $\E$ consisting of all the morphisms
\[
\iota_{(u,v)}\colon a\to a +_{L P} L \Sg{v}
\]
as above. Because $\C_k$ is locally finite and $e$ varies among the objects of $L[\o{\C}_k]$, by choosing representatives of isomorphism classes appropriately we can assume, by Remark~\ref{rem:finitely-many-finite-colim}, that $D$ is a small diagram.
By~\ref{lim-colim}, $D$ admits a colimit $b\coloneqq \operatornamewithlimits{colim} D$. In other words, $b$ is obtained as a \emph{wide pushout} in $\E$. Denote by $s\colon a \to b$ the colimit map with domain $a$, and by
\[
t_{(u,v)}\colon a +_{L P} L \Sg{v}\to b
\]
the colimit map corresponding to the arrow $\iota_{(u,v)}$. As all arrows in $D$ are sections by Lemma~\ref{l:iota-sections}, so are the colimit maps. In particular, $s\colon a \to b$ is a section.
We claim that $b$ is $k$-extendable relative to $s$, thus settling Proposition~\ref{pr:relative-extension}. Assume we are given path embeddings $m\colon P\emb Ra$ and $n\colon P\emb Re$, with $e\in L[\o{\C}_k]$, such that $\co{m}\rightleftarrows\co{n}$ as displayed in the leftmost diagram below.
\begin{equation*}
\begin{tikzcd}[column sep=2em]
{} & P \arrow[rightarrowtail]{dl}[swap]{\co{m}} \arrow[rightarrowtail]{dr}{\co{n}} & {} \\
\Sg{m} \arrow[yshift=3pt]{rr}{f} & & \Sg{n} \arrow[yshift=-3pt]{ll}{g}
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}[column sep=2em]
{} & L P \arrow[rightarrowtail]{dl}[swap]{m^{\#}} \arrow[rightarrowtail]{dr}{L \co{n}} & {} \\
a & & L \Sg{n} \arrow{ll}[swap]{\inc{m}^\#\circ Lg}
\end{tikzcd}
\end{equation*}
If $\inc{m}\colon \Sg{m}\emb Ra$ is the canonical embedding, we get a commutative triangle as on the right-hand side above. Just observe that
\[
\inc{m}^\#\circ Lg \circ L\co{n} = \inc{m}^\#\circ L\co{m} = (\inc{m} \circ \co{m})^\# = m^\#.
\]
Hence $L\co{n}\to m^\#$. Let
\[
\iota_{(m, n)}\colon a \to a +_{L P} L \Sg{n}
\]
be the corresponding arrow in the diagram $D$.
To improve readability we shall write, respectively, $\iota$, $\lambda$ and $t$ instead of $\iota_{(m, n)}$, $\lambda_{(m, n)}$ and $t_{(m, n)}$.
Let $n'\colon Q\emb R e$ be a path embedding with $n\leq n'$ in $\Path{(R e)}$.
We must exhibit a path embedding $m'\colon Q\emb R b$ such that the leftmost square below commutes and $\co{m'}\rightleftarrows \co{n'}$.
\begin{equation}\label{eq:two-prop-rel}
\begin{tikzcd}
P \arrow[rightarrowtail]{r}{!} \arrow[rightarrowtail]{d}[swap]{m} & Q \arrow[rightarrowtail,dashed]{d}{m'} \\
R a \arrow{r}{R s} & R b
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}[column sep=2em]
{} & Q \arrow[rightarrowtail]{dl}[swap]{\co{m'}} \arrow[rightarrowtail]{dr}{\co{n'}} & {} \\
\Sg{m'} \arrow[yshift=3pt,dashed]{rr} & & \Sg{n'} \arrow[yshift=-3pt,dashed]{ll}
\end{tikzcd}
\end{equation}
Note that, because $n\leq n'$, we get $\Sg{n'}\leq \Sg{n}$ in $\Emb{Re}$. Thus, $\co{n'}$ can be identified with a path embedding into $\Sg{n}$.
Consider the arrow
\[
\xi\coloneqq (\lambda\circ L \co{n'})^\flat\colon Q \to R(a +_{L P} L\Sg{n}).
\]
\begin{lemma}\label{l:xi-emb}
$\xi$ is an embedding.
\end{lemma}
\begin{proof}
By the first part of~\ref{path-emb}, $(n')^\#$ is an embedding. Then $Ln'$ is an embedding since $(n')^\# = \epsilon_e \circ Ln'$, and so is $L\co{n'}$. It follows that $\lambda\circ L \co{n'}$ is an embedding because it is a composition of embeddings, and $\xi$ is an embedding by the second part of~\ref{path-emb}.
\end{proof}
Lemma~\ref{l:xi-emb}, combined with the fact that $Rt$ is a section (hence an embedding), entails that the composite $m'\coloneqq Rt\circ \xi \colon Q\emb R b$ is an embedding. Moreover
\begin{align*}
m'\circ {!} &= ((m'\circ {!})^{\#})^\flat = ((m')^\# \circ L{!})^\flat = (t\circ \lambda \circ L\co{n'} \circ L{!})^\flat \\
&= (t\circ \lambda \circ L\co{n})^\flat = (t\circ \iota\circ m^\#)^\flat = (s\circ m^\#)^\flat = Rs \circ m,
\end{align*}
showing that the leftmost diagram in equation~\eqref{eq:two-prop-rel} commutes.
Since $Rt$ is a section, we have $\xi\rightleftarrows m'$ by Remark~\ref{rem:sections-coslice}, and so $\co{\xi}\rightleftarrows \co{m'}$ by Lemma~\ref{l:corestriction-properties}\ref{corestriction-arrows}. Therefore, in order to show that $\co{m'}\rightleftarrows \co{n'}$ it suffices to prove that $\co{\xi}\rightleftarrows \co{n'}$. We have
\[
\lambda^\flat \circ \co{n'} = ((\lambda^\flat \circ \co{n'})^\#)^\flat = (\lambda \circ L\co{n'})^\flat = \xi
\]
and thus $\co{n'}\to \xi$. It follows from Remark~\ref{rem:idempotent} that $\co{n'}\to \co{\xi}$.
It remains to show that $\co{\xi}\to \co{n'}$; the proof of this fact will occupy us for the rest of this section.
As $\C_k$ is an arboreal category, $\Sg{\xi}$ is the colimit of its path embeddings. Thus, in order to define a morphism $\Sg{\xi} \to \Sg{n'}$ it suffices to define a compatible cocone with vertex $\Sg{n'}$ over the diagram of path embeddings into $\Sg{\xi}$. By Lemma~\ref{l:corestriction-properties}\ref{comparable-subtree}, the path embeddings into $\Sg{\xi}$ can be identified with the path embeddings into $R(a +_{L P} L\Sg{n})$ that are comparable with~$\xi$. For each such path embedding $q\colon Q'\emb R(a +_{L P} L\Sg{n})$, we shall define an arrow $\zeta_q\colon Q'\to Re$ and prove that these form a compatible cocone.
We will then deduce, using the induced mediating morphism $\Sg{\xi} \to Re$, that $\co{\xi}\to \co{n'}$.
Fix an arbitrary path embedding $q\colon Q'\emb R(a +_{L P} L\Sg{n})$ above $\xi$ and consider the following diagram in $\E$, where the four vertical faces are pullbacks.
\begin{equation}\label{eq:cube-of-embeddings}
\begin{tikzcd}[row sep=1em, column sep=2.5em]
\o{L P} \arrow[rr,rightarrowtail,"\nu"] \arrow[dr,swap,rightarrowtail,"\mu_1"] \arrow[dd,rightarrowtail,"\mu_2",swap] &&
\o{L\Sg{n}} \arrow[dd,rightarrowtail,"\tau_2",near end] \arrow[dr,rightarrowtail,"\tau_1"] \\
& \o{a} \arrow[rr,crossing over,rightarrowtail,"\sigma_1" near start] &&
LQ' \\
LP \arrow[rr,rightarrowtail,"L\co{n}", near end] \arrow[dr,rightarrowtail,"m^\#",swap] && L\Sg{n} \arrow[dr,rightarrowtail,"\lambda"] \\
& a \arrow[rr,rightarrowtail,"\iota"] \arrow[uu,leftarrowtail,crossing over,"\sigma_2", near end] & & a +_{L P} L\Sg{n} \arrow[uu,swap,leftarrowtail,"q^\#"]
\end{tikzcd}
\end{equation}
Note that the previous diagram consists entirely of embeddings because $q^\#$ is an embedding by the first part of~\ref{path-emb}, and the pullback in~$\E$ of an embedding exists by~\ref{lim-colim} and is again an embedding.
Because $q$ is above $\xi$, there is an embedding $Q\emb Q'$, and thus also an embedding $P\emb Q'$. By the universal property of pullbacks, there are unique arrows
\[
\theta\colon LP\emb \o{a} \ \ \text{ and } \ \ \Delta\colon LP \emb \o{LP}
\]
making the following diagrams commute.
\begin{equation*}
\begin{tikzcd}[column sep = 3em]
LP \arrow[rr, relay arrow=2ex, rightarrowtail, "L!"]
\arrow[bend right=30,rightarrowtail]{dr}[swap, description]{m^{\#}} \arrow[dashed,rightarrowtail]{r}{\theta} & \o{a} \arrow[dr, phantom, "\lrcorner", very near start] \arrow[rightarrowtail]{r}{\sigma_1} \arrow[rightarrowtail]{d}[swap]{\sigma_2} & LQ' \arrow[rightarrowtail]{d}{q^\#} \\
{} & a \arrow[rightarrowtail]{r}{\iota} & {a +_{L P} L\Sg{n}}
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}[column sep = 3em]
LP \arrow[rr, relay arrow=2ex, "\theta", rightarrowtail]
\arrow[bend right=30]{dr}[swap, description]{\id_{LP}} \arrow[dashed,rightarrowtail]{r}{\Delta} & \o{LP} \arrow[dr, phantom, "\lrcorner", very near start] \arrow[rightarrowtail]{r}{\mu_1} \arrow[rightarrowtail]{d}[swap]{\mu_2} & \o{a} \arrow[rightarrowtail]{d}{\sigma_2} \\
{} & LP \arrow[rightarrowtail]{r}{m^{\#}} & a
\end{tikzcd}
\end{equation*}
Note in particular that $\mu_2$ is a retraction whose right inverse is $\Delta$. As $\mu_2$ is also an embedding, it must be an isomorphism with (two-sided) inverse $\Delta$.
By~\ref{path-restriction-prop} (more precisely, by item~\ref{path-re-1} in Definition~\ref{def:path-re}) there is an arrow $w\colon P\to Q_{\o{a}}$ between paths such that $Lw = \theta$. Hence, we can consider $\sigma_2^\flat\colon Q_{\o{a}}\to Ra$. Note that
\[
(\sigma_2^\flat \circ w)^\# = \sigma_2 \circ Lw = \sigma_2 \circ \theta = m^\#,
\]
and so $\sigma_2^\flat \circ w = m$. In particular, $w$ is an embedding. As $Q_{\o{a}}$ is a path, $\inc{w}\colon \Sg{w}\emb Q_{\o{a}}$ can be identified with the identity $Q_{\o{a}} \to Q_{\o{a}}$. By Lemma~\ref{l:corestriction-properties}\ref{corestriction-arrows-finer}, there is a unique arrow $\psi_q\colon Q_{\o{a}}\to \Sg{m}$ making the following diagram commute.
\[\begin{tikzcd}
Q_{\o{a}} \arrow[dashed]{rr}{\psi_q} \arrow[rightarrowtail]{dr}[swap]{\sigma_2^\flat} & & \Sg{m} \arrow[rightarrowtail]{dl}{\inc{m}} \\
{} & Ra & {}
\end{tikzcd}\]
\begin{lemma}\label{l:gamma-tilde-triangle}
The following diagram commutes.
\[\begin{tikzcd}
{} & \o{LP} \arrow[rightarrowtail]{dl}[swap]{\mu_1} \arrow[rightarrowtail]{dr}{L\co{n}\circ \mu_2} & {} \\
\o{a} \arrow{rr}{L(f\circ \psi_q)} & & L\Sg{n}
\end{tikzcd}\]
\end{lemma}
\begin{proof}
Note that
\[\inc{m}\circ \psi_q\circ w = \sigma_2^\flat \circ w = m = \inc{m}\circ \co{m}
\]
and so $\psi_q\circ w = \co{m}$ since $\inc{m}$ is a monomorphism.
Applying the functor $L$ to the outer commutative diagram on the left-hand side below, we obtain the commutative diagram on the right-hand side.
\begin{center}
\begin{tikzcd}[row sep = 2.5em, column sep = 2.5em]
{} & P \arrow[rightarrowtail]{dl}[swap]{w} \arrow[rightarrowtail]{d}[description]{\co{m}} \arrow[rightarrowtail]{dr}{\co{n}} & {} \\
Q_{\o{a}} \arrow{r}{\psi_q} & \Sg{m} \arrow{r}{f} & \Sg{n}
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}[row sep = 2.5em]
{} & LP \arrow[rightarrowtail]{dl}[swap]{\theta} \arrow[rightarrowtail]{dr}{L\co{n}} & {} \\
\o{a} \arrow{rr}{L(f\circ \psi_q)} & & L\Sg{n}
\end{tikzcd}
\end{center}
Hence, precomposing with $\mu_2$ we get
\[
L\co{n} \circ \mu_2 = L(f\circ \psi_q) \circ \theta \circ \mu_2 = L(f\circ \psi_q) \circ \mu_1. \qedhere
\]
\end{proof}
For convenience of notation, let us write
\[
\tilde{\gamma}_q\coloneqq L(f\circ \psi_q)\colon \o{a}\to L\Sg{n} \ \text{ and } \ \gamma_q \coloneqq \epsilon_{e}\circ L\inc{n}\circ \tilde{\gamma}_q\colon \o{a}\to e.
\]
With this notation we have
\begin{align*}
\gamma_q \circ \mu_1 &= \epsilon_{e}\circ L\inc{n}\circ \tilde{\gamma}_q \circ \mu_1 \\
&= \epsilon_{e}\circ L\inc{n}\circ L\co{n}\circ \mu_2 \tag*{Lemma~\ref{l:gamma-tilde-triangle}} \\
&= \epsilon_{e}\circ L\inc{n}\circ \tau_2 \circ \nu
\end{align*}
and so the leftmost diagram below commutes.
\begin{center}
\begin{tikzcd}
\o{L P} \arrow[rightarrowtail]{r}{\nu} \arrow[rightarrowtail]{d}[swap]{\mu_1} & \o{L\Sg{n}} \arrow[rightarrowtail]{d}{\epsilon_{e}\circ L\inc{n}\circ \tau_2} \\
\o{a} \arrow{r}{\gamma_q} & e
\end{tikzcd}
\ \ \ \ \ \ \
\begin{tikzcd}
\o{L P} \arrow[rightarrowtail]{r}{\nu} \arrow[rightarrowtail]{d}[swap]{\mu_1} & \o{L\Sg{n}} \arrow[rightarrowtail]{d}{\tau_1} \\
\o{a} \arrow[rightarrowtail]{r}{\sigma_1} & L Q' \arrow[ul, phantom, "\ulcorner", very near start]
\end{tikzcd}
\end{center}
Now, note that by~\ref{fact-syst} the top face of diagram~\eqref{eq:cube-of-embeddings}, displayed in the rightmost diagram above, is a pushout in $\E$.
By the universal property of pushouts, there is a unique $\delta_q\colon LQ' \to e$ satisfying
\[
\delta_q\circ \sigma_1=\gamma_q \ \text{ and } \ \delta_q\circ \tau_1= \epsilon_{e}\circ L\inc{n} \circ \tau_2.
\]
Define $\zeta_q\coloneqq (\delta_q)^\flat\colon Q'\to Re$ for all path embeddings $q\colon Q'\emb R(a +_{L P} L\Sg{n})$ above $\xi$. Further, if $q$ is below $\xi$, we let $\zeta_q$ be the obvious restriction of $\zeta_{\xi}$.
\begin{lemma}
The following family of arrows forms a compatible cocone over the diagram of path embeddings into $\Sg{\xi}$:
\[
\{\zeta_q \mid q\colon Q'\emb R(a +_{L P} L\Sg{n}) \ \text{is a path embedding comparable with $\xi$}\}.
\]
\end{lemma}
\begin{proof}
Fix arbitrary path embeddings
\[
q\colon Q'\emb R(a +_{L P} L\Sg{n}) \ \text{ and } \ q'\colon Q''\emb R(a +_{L P} L\Sg{n})
\]
comparable with $\xi$.
The compatibility condition for the cocone states that $\zeta_q$ extends $\zeta_{q'}$ whenever $q\geq q'$. It suffices to settle the case where $\xi\leq q'\leq q$. Also, it is enough to show that $\delta_q$ extends $\delta_{q'}$, i.e.\ $\delta_q \circ L{!} = \delta_{q'}$ where ${!}\colon Q''\emb Q'$ is the unique embedding. Just observe that $\delta_q \circ L{!} = \delta_{q'}$ entails
\[
\zeta_q\circ {!} = \delta_q^\flat \circ {!} = ((\delta_q^\flat \circ {!})^\#)^\flat = (\delta_q \circ L{!})^\flat = \delta_{q'}^\flat = \zeta_{q'}.
\]
Consider the following diagram, all of whose vertical faces are pullbacks, and note that by~\ref{path-restriction-prop}, and more precisely by item~\ref{path-re-2} in Definition~\ref{def:path-re}, the pullback of $L!$ along $\sigma_1$ is of the form $L\ell$ for some arrow $\ell\colon Q_{\o{\o{a}}}\to Q_{\o{a}}$.
\[\begin{tikzcd}[row sep=1em, column sep=2.5em]
\o{\o{L P}} \arrow[rr,rightarrowtail,""] \arrow[dr,swap,rightarrowtail,""] \arrow[dd,rightarrowtail,"",swap] &&
\o{\o{L\Sg{n}}} \arrow[dd,rightarrowtail,"\o{\tau}_2",near end] \arrow[dr,rightarrowtail,"\o{\tau}_1"] &&\\
& \o{\o{a}} \arrow[rr,crossing over,rightarrowtail,"\o{\sigma}_1", near start] &&
LQ'' \arrow[dd,rightarrowtail,"L{!}"] \arrow[dddr,"\delta_{q'}", bend left = 30] && \\
\o{L P} \arrow[rr,rightarrowtail,"\nu", near end] \arrow[dr,swap,rightarrowtail,"\mu_1"] \arrow[dd,rightarrowtail,"\mu_2",swap] &&
\o{L\Sg{n}} \arrow[dd,rightarrowtail,"\tau_2",near end] \arrow[dr,rightarrowtail,"\tau_1"] && \\
& \o{a} \arrow[rr,crossing over,rightarrowtail,"\sigma_1" near start] \arrow[uu,leftarrowtail,crossing over,"L\ell", near end] &&
LQ' \arrow[dr,"\delta_q"] && \\
LP \arrow[rr,rightarrowtail,"L\co{n}", near end] \arrow[dr,rightarrowtail,"m^\#",swap] && L\Sg{n} \arrow[dr,rightarrowtail,"\lambda"] & & e \\
& a \arrow[rr,rightarrowtail,"\iota"] \arrow[uu,leftarrowtail,crossing over,"\sigma_2", near end] & & a +_{L P} L\Sg{n} \arrow[uu,swap,leftarrowtail,"q^\#"] &&
\end{tikzcd}\]
In view of the definition of $\delta_{q'}$ in terms of the universal property of pushouts, it suffices to show that $\delta_q \circ L{!}$ satisfies
\[
(\delta_q \circ L{!})\circ \o{\sigma}_1 = \gamma_{q'} \ \text{ and } \ (\delta_q \circ L{!}) \circ \o{\tau}_1= \epsilon_{e}\circ L\inc{n} \circ \tau_2\circ \o{\tau}_2.
\]
The latter equation follows at once from the identity $\delta_q \circ \tau_1 = \epsilon_{e} \circ L\inc{n} \circ \tau_2$. As for the former, we have
\[
(\delta_q \circ L{!})\circ \o{\sigma}_1 = \delta_q \circ \sigma_1 \circ L\ell = \gamma_q \circ L\ell.
\]
Thus it suffices to show that $\gamma_q \circ L\ell = \gamma_{q'}$, and this clearly follows if we prove that $\tilde{\gamma}_q \circ L\ell = \tilde{\gamma}_{q'}$.
Recall that $\psi_{q'}$ is the unique morphism such that the composite
\[\begin{tikzcd}
Q_{\o{\o{a}}} \arrow{r}{\psi_{q'}} & \Sg{m} \arrow[rightarrowtail]{r}{\inc{m}} & Ra
\end{tikzcd}\]
coincides with $(\sigma_2 \circ L\ell)^\flat$. But
\[
(\sigma_2 \circ L\ell)^\flat = ((\sigma_2^\flat \circ \ell)^\#)^\flat = \sigma_2^\flat \circ \ell,
\]
so $\inc{m}\circ \psi_{q}\circ \ell = \sigma_2^\flat \circ \ell$ entails $\psi_{q'} = \psi_q \circ \ell$. Therefore,
\[
\tilde{\gamma}_{q'} = L(f\circ \psi_{q'}) = L(f\circ \psi_q \circ \ell) = \tilde{\gamma}_{q}\circ L\ell. \qedhere
\]
\end{proof}
The previous lemma entails the existence of a unique morphism $h\colon \Sg{\xi} \to Re$ satisfying $h\circ q = \zeta_q$ for all path embeddings $q$ into $R(a +_{L P} L\Sg{n})$ that are comparable with $\xi$. In order to conclude that $\co{\xi} \to \co{n'}$ as desired, we prove the following useful property of the cocone consisting of the morphisms $\zeta_q$.
\begin{lemma}
Let $G\coloneqq LR$ and consider the composite morphism
\[\begin{tikzcd}
RL\Sg{n} \arrow{r}{RL \inc{n}} & RGe \arrow{r}{R\epsilon_e} & Re.
\end{tikzcd}\]
If $q= R\lambda\circ \alpha$ for some arrow $\alpha\colon Q'\emb RL\Sg{n}$, then $\zeta_q = R\epsilon_e \circ RL \inc{n} \circ \alpha$.
\end{lemma}
\begin{proof}
Suppose that $q= R\lambda\circ \alpha$ for some $\alpha\colon Q'\emb RL\Sg{n}$.
We have
\[
(R\epsilon_e \circ RL \inc{n} \circ \alpha)^\# = ((\epsilon_e \circ L \inc{n} \circ \alpha^\#)^\flat)^\# = \epsilon_e \circ L \inc{n} \circ \alpha^\#.
\]
By the universal property of $\delta_q = \zeta_q^\#$, $\zeta_q = R\epsilon_e \circ RL \inc{n} \circ \alpha$ if, and only if,
\begin{equation}\label{eq:cond-1}
(\epsilon_e \circ L \inc{n} \circ \alpha^\#) \circ \tau_1 = \epsilon_{e}\circ L\inc{n} \circ \tau_2
\end{equation}
and
\begin{equation}\label{eq:cond-2}
(\epsilon_e \circ L \inc{n} \circ \alpha^\#) \circ \sigma_1 = \epsilon_{e}\circ L\inc{n} \circ \tilde{\gamma}_q.
\end{equation}
Observe that $\alpha^\# \circ \tau_1 = \tau_2$ because
\[
\lambda\circ \alpha^\# \circ \tau_1 = q^\# \circ \tau_1 = \lambda \circ \tau_2
\]
and $\lambda$ is a monomorphism. Thus, equation~\eqref{eq:cond-1} holds.
Further, note that
\[
q^\# = (R\lambda\circ \alpha)^\# = ((\lambda\circ \alpha^\#)^\flat)^\# = \lambda\circ \alpha^\#
\]
and so $\tau_1$ in diagram~\eqref{eq:cube-of-embeddings} is an isomorphism. As pushout squares of embeddings in $\E$ are also pullbacks by~\ref{fact-syst}, $\mu_1$ in diagram~\eqref{eq:cube-of-embeddings} is also an isomorphism.
Therefore,
\begin{align*}
\lambda \circ \alpha^\# \circ \sigma_1 &= q^\# \circ \sigma_1 = q^\# \circ \sigma_1 \circ \mu_1 \circ \mu_1^{-1} = \lambda\circ L\co{n} \circ \mu_2 \circ \mu_1^{-1} \\
& = \lambda\circ \tilde{\gamma}_q \circ \mu_1 \circ \mu_1^{-1} = \lambda\circ \tilde{\gamma}_q
\end{align*}
and so $\alpha^\# \circ \sigma_1 = \tilde{\gamma}_q$. Equation~\eqref{eq:cond-2} then follows at once.
\end{proof}
Since $\xi = R\lambda \circ (L\co{n'})^\flat$, recalling that we identify $\co{n'}$ with a path embedding into $\Sg{n}$, an application of the previous lemma with $\alpha\coloneqq (L\co{n'})^\flat$ yields
\[
\zeta_{\xi} = R\epsilon_e \circ RL \inc{n} \circ (L\co{n'})^\flat = (\epsilon_e \circ L\inc{n} \circ L\co{n'})^\flat = (\epsilon_e \circ L n')^\flat = ((n')^\#)^\flat = n'.
\]
In other words, $h\circ \co{\xi}=n'$ and so $\co{\xi}\to n'$. It follows from Remark~\ref{rem:idempotent} that $\co{\xi} \to \co{n'}$, thus concluding the proof of Proposition~\ref{pr:relative-extension}.
\bibliographystyle{amsplain-nodash}
The equation $(x+y)^p=x^p+y^p$ is valid in a field of prime
characteristic~$p$. Thus an apparent error can be a legitimate deduction
in the right circumstances.
Denote the $k$th derivative of
$t^n$ by $n^{\underline{k}}t^{n-k}$, where $n^{\underline{k}}$ equals
$n(n-1)\cdots(n-k+1)$ for $k>0$, and 1 if $k=0$. We pronounce $n^{\underline{k}}$
as ``$n$ falling factorial $k$''. Observe that
$n^{\underline{k}}$ is divisible by each integer in $\{1,\dots,k\}$, and $n^{\underline{k}}=0$ for
$n<k$. Also the $k$th derivative $f^{(k)}(t)$ of a power series $f(t)=\sum_{n\ge0}f_nt^n$ equals $\sum_{n\ge0}f_nn^{\underline{k}}t^{n-k}=\sum_{n\ge k}f_nn^{\underline{k}}t^{n-k}$.
Let $\alpha\in\mathbb{Z}$ satisfy $\alpha^n\equiv1\pmod n$. Thus $\alpha$ is a root, modulo~$n$, of the polynomial $t^n-1=(t-1)f(t)$, where $f(t)=\sum_{i=0}^{n-1} t^i$.
It is clear that $f^{(0)}(\alpha) \equiv 0 \pmod n$ when
$\alpha\equiv 1 \pmod n$. However, it seems unreasonable to expect that
$f^{(k)}(\alpha)= \sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv 0 \pmod n$ holds for all $k\ge0$. What looks like a blunder turns out to be true under the
(unreasonably) weak assumptions of Theorem~\ref{T1}.
\begin{theorem}\label{T1}
Suppose $k\ge0$, $n\ge1$, $\alpha\in\mathbb{Z}$ where $\alpha^n\equiv1\pmod n$.
Then
\begin{equation}\label{E3}
f^{(k)}(\alpha)= \sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv 0 \pmod n.
\end{equation}
if and only if at least one of the following hold:
\begin{enumerate}[{\rm (a)}]
\item $k+1\not\in\{4,q\}$ where $q$ is prime, or
\item $k+1=4$ and $4\nmid n$, or
\item $k+1$ is a prime $q$, and $q\nmid n$ or $\alpha\not\equiv1 \pmod q$.
\end{enumerate}
\end{theorem}
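Before turning to the proof, one can confirm the stated equivalence by brute force over small parameters. The Python sketch below is ours and purely illustrative: `lhs_holds` encodes the congruence~\eqref{E3}, and `rhs_holds` the disjunction of conditions (a)--(c).

```python
def falling(i, k):
    # i^{k underline}, empty product 1 when k = 0
    r = 1
    for j in range(k):
        r *= i - j
    return r

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def lhs_holds(n, alpha, k):
    # congruence (E3); terms with i < k vanish because i^{k underline} = 0 there
    return sum(falling(i, k) * alpha ** (i - k) for i in range(k, n)) % n == 0

def rhs_holds(n, alpha, k):
    q = k + 1
    if q != 4 and not is_prime(q):                         # condition (a)
        return True
    if q == 4 and n % 4 != 0:                              # condition (b)
        return True
    return is_prime(q) and (n % q != 0 or alpha % q != 1)  # condition (c)

# exhaustive check over a small parameter range
for n in range(1, 25):
    for alpha in range(1, n + 1):
        if pow(alpha, n, n) != 1 % n:
            continue  # hypothesis alpha^n ≡ 1 (mod n) fails
        for k in range(8):
            assert lhs_holds(n, alpha, k) == rhs_holds(n, alpha, k)
print("Theorem 1 checked for n < 25, k < 8")
```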
The motivation for Theorem~\ref{T1} came from a (presently
unfinished~\cite{BDG}) study of input-output
automata on a group $G$. We considered
the finite groups $G$ for which there exists a `constant' $k\in G$ and
a function $f\colon G\to G$ satisfying $f(xk)=xf(x)$ for all $x\in G$.
We call these $J$-groups (as they are related to the Jacobson radical of
a near-ring).
A simple argument shows that $J$-groups must have odd order, and hence
are solvable by~\cite{FT}. We conjectured~\cite{BDG} that any nilpotent group of
odd order is a $J$-group. To prove that many metacyclic groups
are $J$-groups required the $k=0$ and $k=1$ cases of Theorem~\ref{T1}. The proof for
all $k\ge0$ is not much harder.
\section{The proofs}\label{s:proofs}
We first establish some preliminary results before proving Theorem~\ref{T1}.
Henceforth, $n,i,j,k$ will be integers.
A sum $\sum_{i=n_0}^{n_1-1}g(i)$ collapses if we find a function
$G$ such that $g(i)=G(i+1)-G(i)$ for $n_0\leq i<n_1$. Then
$\sum_{i=n_0}^{n_1-1}g(i)=G(n_1)-G(n_0)$. By analogy with differentiation, we write
$(\Delta G)(i)=G(i+1)-G(i)$. For example, if $g(i)=i^{\underline{k}}$, then it
follows from $\Delta(i^{\underline{k+1}})=(i+1)i^{\underline{k}}-i^{\underline{k}}(i-k)
=(k+1)i^{\underline{k}}$ that $G(i)=i^{\underline{k+1}}/(k+1)$. Hence
\begin{equation}\label{E1}
\sum_{i=n_0}^{n_1-1}i^{\underline{k}}
=\sum_{i=n_0}^{n_1-1}\Delta\left(\frac{i^{\underline{k+1}}}{k+1}\right)=
\frac{n_1^{\underline{k+1}}}{k+1}-\frac{n_0^{\underline{k+1}}}{k+1}
=\frac{n_1^{\underline{k+1}}-n_0^{\underline{k+1}}}{k+1}.
\end{equation}
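Equation~\eqref{E1} can be sanity-checked numerically; the short Python fragment below (ours, for illustration only) compares both sides over a small range of $k$, $n_0$ and $n_1$.

```python
def falling(n, k):
    # n^{k underline}, empty product 1 when k = 0
    r = 1
    for j in range(k):
        r *= n - j
    return r

# check: sum_{i=n0}^{n1-1} i^{k underline}
#          = (n1^{k+1 underline} - n0^{k+1 underline}) / (k+1)
for k in range(6):
    for n0 in range(10):
        for n1 in range(n0, 14):
            lhs = sum(falling(i, k) for i in range(n0, n1))
            num = falling(n1, k + 1) - falling(n0, k + 1)
            assert num % (k + 1) == 0 and lhs == num // (k + 1)
print("telescoping identity verified on the sampled range")
```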
Clearly $k$ divides $n^{\underline{k}}$ for all $n\ge0$ and $k\ge1$.
The $p$-adic valuation $\nu_p(n)$ of an integer $n\ne 0$ is defined by
$\nu_p(n)=\log_p(n_p)$ where $n_p$ is the largest $p$-power divisor of $n$.
This (additive) valuation extends to $\mathbb{Q}^\times$ by defining $\nu_p(r/s)$ to be $\nu_p(r)-\nu_p(s)$.
\begin{lemma}\label{L1}
Suppose $k\ge1$ and $n\ge1$. Let $p\mid (k+1)$ where $p$ is a prime, and let $e=\nu_p(k+1)\ge 1$.
\begin{enumerate}[{\rm (a)}]
\item If $k+1\ne p^e$, then
$\nu_p((n-1)^{\underline{k}})\ge e$.
\item If $k+1=p^e$, then $\nu_p((n-1)^{\underline{k}})\ge e-1$ where equality holds only if $p\mid n$.
\item $\nu_p((n-1)^{\underline{k}}/(k+1))<0$ if and only if $k+1\in\{4,p\}$
and $(k+1)\mid n$, in which case $\nu_p((n-1)^{\underline{k}}/(k+1))=-1$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a)~Suppose first that
$k+1$ is not a $p$-power and write $k+1=ab$ where $\gcd(a,b)=1$ and
$1<a<b<k+1$. Since $a,b\le k$ it follows that $a$ and $b$, and hence
$k+1=ab$, divide $(n-1)^{\underline{k}}$.
Hence $e=\nu_p(k+1)\le\nu_p((n-1)^{\underline{k}})$. This proves~(a).
(b)~Suppose now that $k+1=p^e$. As $p^{e-1}\leq k$, we deduce that
$p^{e-1}\mid (n-1)^{\underline{k}}$, and so $\nu_p((n-1)^{\underline{k}})\ge e-1$. Suppose $\nu_p((n-1)^{\underline{k}})=e-1$. As $k+1$ divides $n^{\underline{k+1}}=n(n-1)^{\underline{k}}$
but not $(n-1)^{\underline{k}}$, we deduce that $p$ divides $n$.
This proves~(b).
(c)~Assume first that $\nu_p((n-1)^{\underline{k}}/(k+1))<0$, that is, $\nu_p((n-1)^{\underline{k}})<\nu_p(k+1)=e$. Part~(a) implies $k+1=p^e$, and Part~(b) implies $\nu_p((n-1)^{\underline{k}})= e-1$ and $p\mid n$, so that $\nu_p((n-1)^{\underline{k}}/(k+1))=-1$.
Since $p\mid n$, each factor of $(n-1)^{\underline{k}}$ of the form $n-jp$ with
$1\le j\le p^{e-1}-1$ is a multiple of $p$, and so $\nu_p((n-1)^{\underline{k}})\geq p^{e-1}-1$. Therefore
$p^{e-1}-1\le e-1$, that is, $p^{e-1}\le e$.
The latter inequality is true
for $e=1$ and all primes $p$, and for $e=2$ and $p=2$, and false otherwise.
If $e=1$, then $k+1=p\mid n$. If $e=2$ and $p=2$, then $k+1=4$, $2\mid n$ and $\nu_2((n-1)^{\underline{3}})= 1$. Thus $n-1$ and $n-3$ are odd while $n-2\equiv 2\pmod 4$. It follows that $n\equiv 0\pmod 4$, and so in both cases $k+1$ divides $n$.
Conversely, assume that $k+1\in\{4,p\}$ and $(k+1)\mid n$.
If $k+1=4\mid n$, then $(n-1)^{\underline{k}}=(n-1)(n-2)(n-3)$ where $n-i\equiv 4-i\pmod 4$. Thus $\nu_2((n-1)^{\underline{3}})= 1<\nu_2(k+1)=2$.
If $k+1=p\mid n$, then $(n-1)^{\underline{k}}=(n-1)(n-2)\cdots(n-p+1)$ where $n-i\equiv -i \not\equiv 0\pmod p$ for $1\le i\le p-1$. Thus, in both cases, we have $\nu_p((n-1)^{\underline{k}})= e-1<\nu_p(k+1)=e$, as desired.
\end{proof}
\begin{corollary}\label{C}
By Lemma~$\ref{L1}$, we have that $(n-1)^{\underline{k}}/(k+1)$ is an integer unless
\begin{enumerate}[{\rm (i)}]
\item $k+1=4$ and $4\mid n$, or
\item $k+1$ is a prime $p$, and $p\mid n$.
\end{enumerate}
Moreover, $\nu_p((n-1)^{\underline{k}}/(k+1))\ge-1$ and
$\nu_p((n-1)^{\underline{k}}/(k+1))\ge0$ if $(k+1)\nmid n$.
\end{corollary}
To prove Theorem~\ref{T1}, we will also need the following lemma.
\begin{lemma}\label{L2}
Suppose $k\geq0$, $n\ge1$ and $\alpha\equiv1\pmod p$ where $p$ is prime. Then
\[
\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv \frac{(n-1)^{\underline{k}}}{k+1}n\pmod {p^\ell}\qquad\textup{where $\ell:=\nu_p(n)$.}
\]
\end{lemma}
\begin{proof}
Since $\alpha\equiv1\pmod p$, we have $\alpha=1+y$ where $p\mid y$. Using $i^{\underline{k}}=0$ for $0\le i<k$
together with Eq.~\eqref{E1}, we obtain:
\begin{align}\label{E4}
\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}&=
\sum_{i=k}^{n-1}i^{\underline{k}}(1+y)^{i-k}=
\sum_{i=k}^{n-1}i^{\underline{k}}\sum_{j=0}^{i-k}\binom{i-k}{j}y^j\notag\\
&=\sum_{j=0}^{n-1-k}y^j\sum_{i=k+j}^{n-1}i^{\underline{k}}\binom{i-k}{j}=\sum_{j=0}^{n-1-k}\frac{y^j}{j!}\sum_{i=k+j}^{n-1}i^{\underline{k+j}}\notag\\
& =\sum_{j=0}^{n-1-k}\frac{y^j}{j!}\left(\frac{n^{\underline{k+j+1}}}{k+j+1}
-\frac{(k+j)^{\underline{k+j+1}}}{k+j+1}\right) \notag\\
& =\sum_{j=0}^{n-1-k}\frac{y^j}{j!}\frac{n^{\underline{k+j+1}}}{k+j+1} =\sum_{j=0}^{n-1-k}\frac{y^j}{j!}\frac{(n-1)^{\underline{k+j}}}{k+j+1}n.
\end{align}
Consider the summands in~\eqref{E4} with $j\ge 1$.
By Legendre's formula, $\nu_p(j!)=\sum_{i=1}^\infty \lfloor\frac{j}{p^i}\rfloor$.
(The sum is finite as $\lfloor\frac{j}{p^i}\rfloor=0$ for $p^i>j$.)
Thus $\nu_p(j!)<\sum_{i=1}^\infty \frac{j}{p^i}=\frac{j}{p-1}$. Therefore
$\nu_p(j!)< j$ for $j\ge1$, and hence $p$ divides $y^j/j!$.
Since $\nu_p(y^j/j!)\ge 1$ for all $j\ge1$, and
$\nu_p(\frac{(n-1)^{\underline{k+j}}}{k+j+1})\ge -1$ by Corollary~\ref{C}, we have
$\nu_p(\frac{y^j}{j!}\frac{(n-1)^{\underline{k+j}}}{k+j+1})\ge 0$. However $p^\ell\mid n$, and so $\frac{y^j}{j!}\frac{(n-1)^{\underline{k+j}}}{k+j+1}n\equiv 0\pmod {p^\ell}$ for each $j\geq 1$.
Hence
\[
\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv \frac{(n-1)^{\underline{k}}}{k+1}n\pmod {p^\ell}.\qedhere
\]
\end{proof}
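Lemma~\ref{L2} can also be tested numerically. Since the right-hand side may be a non-integral rational, we read the congruence as $\nu_p(\mathrm{lhs}-\mathrm{rhs})\ge\ell$; the Python sketch below (ours, with ad hoc names such as `lemma2_holds`) checks this with exact arithmetic.

```python
from fractions import Fraction

def falling(n, k):
    # n^{k underline}, empty product 1 when k = 0
    r = 1
    for j in range(k):
        r *= n - j
    return r

def vp(x, p):
    # p-adic valuation of a nonzero rational x
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def lemma2_holds(n, alpha, k, p):
    # congruence of rationals mod p^l, read as v_p(lhs - rhs) >= l
    l = vp(n, p)
    lhs = sum(falling(i, k) * alpha ** (i - k) for i in range(k, n))
    rhs = Fraction(falling(n - 1, k), k + 1) * n
    diff = lhs - rhs
    return diff == 0 or vp(diff, p) >= l

for p in (2, 3, 5):
    for n in range(1, 30):
        for alpha in range(1, 16):
            if alpha % p != 1:
                continue  # hypothesis alpha ≡ 1 (mod p)
            for k in range(6):
                assert lemma2_holds(n, alpha, k, p)
print("Lemma 2 checked on the sampled range")
```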
\medskip
\begin{proof}[Proof of Theorem~\ref{T1}]
When $n=1$, Eq.~\eqref{E3} is trivially true.
Moreover, one of (a), (b) or (c) is true when $n=1$ since $4\nmid n$ and $q\nmid n$ for any prime $q$.
We now assume $n>1$.
Suppose that $n=n_1\cdots n_r$ where the $n_j$ are pairwise coprime
and $n_j>1$ for each $j$.
Given the ring isomorphism $\mathbb{Z}_n\to\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_r}$, Eq.~\eqref{E3} holds if and only if $n_j\mid\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}$
for each $j$. Suppose that $n_j=p_j^{\ell_j}$ where each $p_j$ is prime.
Fix a prime factor $p$ of $n$, and set $\ell:=\nu_p(n)$.
We divide the proof into two cases.
{\sc Claim~1:} If $\alpha\not\equiv1\pmod p$, then $\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv0\pmod {p^\ell}$.
Suppose that $\alpha\not\equiv1\pmod p$.
Consider the identity $f(t)=\sum_{i=0}^{n-1}t^i=f_1(t)f_2(t)$ where
$f_1(t)=t^n-1$ and $f_2(t)=(t-1)^{-1}$. The $k$-fold derivative of the product
$f_1f_2$ is
$(f_1f_2)^{(k)}=\sum_{i=0}^k\binom{k}{i}f_1^{(k-i)}f_2^{(i)}$ by Leibniz's formula. We have
$f_1^{(i)}(t)=n^{\underline{i}}t^{n-i}$ for $i>0$, and
$f_2^{(i)}(t)=(-1)^ii!(t-1)^{-1-i}=-i!(1-t)^{-1-i}$ for $i\ge0$. Hence, for $t\neq 1$,
\[
f^{(k)}(t)=\sum_{i=0}^{n-1}i^{\underline{k}}t^{i-k}
=f_1^{(0)}(t)f_2^{(k)}(t)-\sum_{i=0}^{k-1}\binom{k}{i}n^{\underline{k-i}}t^{n-k+i}i!(1-t)^{-1-i}.
\]
Replacing $\binom{k}{i}i!$ with $k^{\underline{i}}$ gives
\[
\sum_{i=0}^{n-1}i^{\underline{k}}t^{i-k}
=-(t^n-1)k!(1-t)^{-1-k}-\sum_{i=0}^{k-1}k^{\underline{i}}n^{\underline{k-i}}t^{n-k+i}(1-t)^{-1-i}.
\]
Substituting $t=\alpha$, note that $\alpha^n\equiv1\pmod {p^\ell}$ because $p^\ell\mid n$, and that
$1-\alpha$ is a unit modulo~$p^\ell$ because $\alpha\not\equiv1\pmod p$. Hence the first term
vanishes modulo $p^\ell$, and for $k>0$ each remaining summand vanishes as well,
because $n$ divides $n^{\underline{k-i}}$ for $i<k$ and $p^\ell\mid n$. Therefore
$\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv0\pmod {p^\ell}$.
This proves Claim~1.
{\sc Claim~2:} If $\alpha\equiv1\pmod p$ and at least one of the conditions (a), (b), or~(c) hold, then
$\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv 0\pmod {p^\ell}$.
Suppose that $\alpha\equiv1\pmod p$ and at least one of the conditions (a), (b), or~(c) hold.
We argue that $\nu_p\bigl(\frac{(n-1)^{\underline{k}}}{k+1}\bigr)\ge0$. Suppose not.
Then by Lemma~\ref{L1}(c), $k+1\in\{4,p\}$ and $(k+1)\mid n$.
If $k+1=4$, then $4\mid n$ and none of (a), (b), (c) can hold; if $k+1=p$, then $p\mid n$ and
$\alpha\equiv1\pmod p$, so again none of (a), (b), (c) can hold. Either way we contradict our
assumption. This shows that $\nu_p\bigl(\frac{(n-1)^{\underline{k}}}{k+1}\bigr)\ge0$; as $p^\ell\mid n$,
we get $\frac{(n-1)^{\underline{k}}}{k+1}n\equiv0\pmod{p^\ell}$, and hence Claim~2 is true by Lemma~\ref{L2}.
In summary, we have $n_j\mid\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}$ for each $j$, and so~\eqref{E3} holds if at least one of the conditions (a), (b), or (c) holds.
To finish the proof, we now prove the converse.
Assume~\eqref{E3} holds but (a) does not hold. Then $n_j\mid\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}$
for each $j$, and $k+1\in\{4,q\}$ where $q$ is a prime. We must show that condition~(b) or~(c) holds.
Suppose $k+1=4$. Assume, for contradiction, that $4\mid n$. Consider the prime factor $2$ of $n$.
Since $\alpha^n\equiv1\pmod n$, we have $\alpha\equiv1\pmod 2$. Thus Lemma~\ref{L2} implies
$\sum_{i=0}^{n-1}i^{\underline{3}}\alpha^{i-3}\equiv \frac{(n-1)^{\underline{3}}}{4}n\pmod {2^\ell}$ where $\nu_2(n)=\ell$.
By Lemma~\ref{L1}(c), $\nu_2((n-1)^{\underline{3}}/4)<0$, and so $2^\ell$ does not divide $\frac{(n-1)^{\underline{3}}}{4}n$, a contradiction. Hence, if $k+1=4$,
then $4\nmid n$ and (b) holds.
Suppose $k+1$ is a prime $q$ and $\alpha\equiv 1 \pmod q$. Assume, for contradiction, that $q\mid n$.
Consider the prime factor $q$ of $n$. Then
$\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv \frac{(n-1)^{\underline{k}}}{k+1}n\pmod {q^\ell}$ where $\nu_q(n)=\ell$, by Lemma~\ref{L2}. However, Lemma~\ref{L1}(c) shows $\nu_q((n-1)^{\underline{k}}/(k+1))<0$, and so $q^\ell$ does not divide $\frac{(n-1)^{\underline{k}}}{k+1}n$, a contradiction.
Hence, if $k+1=q$ with $\alpha\equiv 1 \pmod q$, then $q\nmid n$ and condition (c) holds.
\end{proof}
Finally, observe that the requirement $\alpha\not\equiv 1\pmod q$
in Theorem~\ref{T1}(c) is needed. For example take $\alpha=5$,
$n=6$, $k=2$.
Then $\sum_{i=0}^{n-1}i^{\underline{k}}\alpha^{i-k}\equiv 0\pmod n$ and $q=k+1=3$.
Thus conditions~(a) and (b) do not hold, and the only part of~(c) that holds is
$\alpha\not\equiv1\pmod q$.
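This counterexample is small enough to check by direct computation; a quick script (an illustrative sketch, not part of the text):

```python
# Check the example alpha = 5, n = 6, k = 2 (so q = k+1 = 3).
alpha, n, k = 5, 6, 2

def falling(a, k):
    # falling factorial a*(a-1)*...*(a-k+1)
    r = 1
    for j in range(k):
        r *= (a - j)
    return r

# Terms with i < k vanish, so the sum below is an ordinary integer.
total = sum(falling(i, k) * alpha**(i - k) for i in range(k, n))

assert total == 2832 and total % n == 0  # the congruence holds: 2832 = 6*472
assert n % (k + 1) == 0                  # yet q | n, so "q does not divide n" fails
assert alpha % (k + 1) != 1              # only alpha not 1 (mod q) saves condition (c)
```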
In condition (b), we do not need to add ``or $\alpha\not\equiv 1 \pmod 2$'' because that never happens when $4\mid n$: in that case $\alpha$ is necessarily odd, so $\alpha\equiv1\pmod 2$.
\vskip2mm{\sc Acknowledgement.}
We thank the referee for their helpful suggestions.
\section{Basic Definitions on Compact Group Actions}
This article gives only a short survey of the determination of
orbit spaces of compact linear groups. More details can be found in
\cite{6st,sar,stv}.
When a physical system exhibits a symmetry of nature, it must
be described mathematically on a representation space of a
group $G$ in such a way that all physically relevant
quantities are invariant with respect to $G$-transformations.\\
Often the group $G$ is compact and its representation space is finite
dimensional.
In this case one may suppose, in full generality, that $G$
is a group of real orthogonal matrices acting on
the real vector space $\real^n$.
In the following it will be assumed that $G \subseteq O(n)$ and that
the origin of $\real^n$ is the only point left fixed by all transformations
of $G$.
The orbit through $x \in \real^n$ is the subset of $\real^n$ formed by all
points connected to $x$ by $G$-transformations:
$$\Omega(x)=\{g\cdot x \mid g \in G \},\qquad x \in \real^n$$
The isotropy subgroup $G_x$ of the point $x$ is the subgroup of $G$
that leaves $x$ fixed:
$$G_x=\{g \in G \mid g \cdot x = x \},\qquad x \in \real^n$$
All the points in the same orbit $\Omega(x)$ have isotropy subgroups in the same
conjugacy class $[G_x]$ of subgroups of $G$,
called the {\em orbit type} of $\Omega(x)$; in fact one has:
$$G_{g\cdot x}=g \cdot G_x \cdot g^{-1},\qquad \forall g\in G,\qquad x\in \real^n$$
To each orbit type $[H]$ is associated
a {\em stratum} $\Sigma_{[H]}$, formed by all points of $\real^n$ that have
isotropy subgroups in $[H]$:
$$\Sigma_{[H]}=\{x\in \real^n \mid G_x \in [H]\}$$
The orbits (and the strata) are disjoint subsets of $\real^n$, as each orbit
has one and only one orbit type.\\
The orbits and the strata can be partially ordered according to their
orbit types. The orbit type $[H]$ is said to be
{\em smaller} than the orbit type $[K]$, written $[H]<[K]$,
if $H' \subset K'$ for some $H'\in [H]$ and $K'\in [K]$; then
$[K]$ is {\em greater} than $[H]$.
Due to the compactness of $G$, the number of different orbit types is finite
and there is a unique minimal orbit type. The unique stratum of smallest
orbit type is called the {\em principal stratum}.
All other strata are called {\em singular}.
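These notions can be illustrated by a simple example, added here for concreteness: the rotation group of the plane.

```latex
% Example: G = SO(2) acting on R^2 by rotations.
% The orbits are the circles centred at the origin:
\[
\Omega(x)=\{\,y\in\real^2 \mid |y|=|x|\,\},\qquad
G_x=\begin{cases} SO(2), & x=0,\\[2pt] \{e\}, & x\neq 0.\end{cases}
\]
% There are only two orbit types, [{e}] < [SO(2)]: the principal
% stratum is R^2 minus the origin, and the origin is the unique
% singular stratum.
```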
The {\em orbit space} is the quotient space $\real^n/G$ defined through the
equivalence relation relating points belonging to the same orbit.
The natural projection $\pi:\real^n \to \real^n/G$ maps the orbits of $\real^n$
into single points of $\real^n/G$. Projections of strata of $\real^n$
define strata of
$\real^n/G$. The principal stratum of $\real^n/G$ is always {\em open
connected} and {\em dense} in $\real^n/G$, so also $\real^n/G$ is connected.
If $[K]>[H]$, then $\pi(\Sigma_{[K]})$ lies in the boundary
of $\pi(\Sigma_{[H]})$,
so to greater orbit types correspond strata of smaller dimension and
the boundary of the principal stratum contains all singular strata.
Clearly there is a one to one correspondence between the strata of $\real^n$
and the strata of $\real^n/G$, so $\real^n$ and $\real^n/G$ are stratified
in exactly the same manner.
All $G$-invariant functions satisfy $f(g\cdot x)=f(x),\ \forall g\in G,\ x\in
\real^n$, so the invariant functions are constant on the orbits. They can then
be thought of as functions defined on the orbit space; in this way one eliminates the
degeneracy among the points belonging to the same orbit, on which $f(x)$ is
constant.\\
The isotropy subgroup of the minimum point of an invariant potential function
determines the true symmetry group of a physical system, and if this minimum
is not at the origin one has a symmetry breaking.
The potential may depend on some variable parameters, so the location of
the minimum point also depends on these parameters, and a
phase transition takes place when the minimum point changes stratum.
Keeping these facts in mind, one realizes that many properties of
invariant functions and of phase transitions can be better
studied in the orbit spaces.
\section{Orbit Spaces in $\real^q$ and $\widehat P$-matrices}
A concrete mathematical description of the orbit space is achieved through
an {\em integrity basis} (IB) $\ p_1(x),\ldots,p_q(x)\ $ for the ring
$\ \real[\real^n]^G\ $ of the $G$-invariant
polynomial functions defined on $\real^n$. All $G$-invariant polynomial
(or $C^\infty$) functions can be expressed as polynomials (or $C^\infty$)
functions of the finite number $q$ of basic polynomial invariant functions
forming the IB:
$$f(x)=\widehat f(p_1(x),\ldots,p_q(x)),\qquad \forall f \in \real[\real^n]^G \qquad
(\mbox{or}\ \forall f \in C^\infty[\real^n]^G)$$
The IB is assumed to be minimal, i.e.\ no proper subset of the IB is itself an IB,
and to be formed by homogeneous polynomial functions.
The choice of the IB is not unique, but the group fixes the number $q$
of its elements and their degrees $d_1,\ldots,d_q$.\\
We assume that the basic invariants $p_i(x)$ are labelled in such a way that
$d_i \ge d_{i+1}$. As there are no fixed
points except the origin of $\real^n$, $d_q\geq 2$,
and, because of the orthogonality of $G$, we may take
$\ p_q(x)=\sum_{i=1}^n x_i^2\ $.
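A minimal example of an IB obeying these conventions (an illustration added here, not taken from the references): let $G$ be the group of the two sign changes $x_i\to\pm x_i$ acting on $\real^2$.

```latex
% q = 2 basic invariants, degrees d_1 = d_2 = 2, with p_q the sum of squares:
\[
p_1(x)=x_1^2-x_2^2,\qquad p_2(x)=x_1^2+x_2^2 .
\]
% The invariant ring is R[x_1^2, x_2^2] = R[p_1, p_2]: indeed
% x_1^2 = (p_2+p_1)/2 and x_2^2 = (p_2-p_1)/2, and no single invariant
% generates the ring, so the basis is minimal.
```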
All IB transformations (IBTs):
$$p_i'(x) =p_i'(p_1(x),\ldots,p_q(x)),\qquad i=1,\ldots,q-1,\qquad p_q'(x)=p_q(x)$$
that satisfy the conventions adopted must have Jacobian matrix with elements
$J_{ij}(x)=\partial p_i'(x)/\partial p_j(x)$
that are 0 or $G$-invariant homogeneous polynomial functions of degree:
$\deg(J_{ij}) =d_i - d_j $.
Then, $J(x)$ is an upper block triangular matrix and $\det J(x)$
is a non-vanishing constant.
The IB can be used to represent the orbits of $\real^n$ as points of
$\real^q$. In fact, given an orbit $\Omega$, the vector function
$(p_1(x), p_2(x),\ldots, p_q(x))\ $is constant on $\Omega$,
because the $p_i(x)$ are $G$-invariant. The $q$
numbers $p_i=p_i(x),\ x \in \Omega$, determine a point
$p=(p_1,p_2,\ldots,p_q)\in\real^q$, which can be
considered the image in $\real^q$ of $\Omega$.
No other orbit of $\real^n$ is represented in $\real^q$ by the same point
because the IB separates the orbits.\\
The vector map:
$$p:\real^n \to \real^q:x \to (p_1(x), p_2(x),\ldots,p_q(x))$$
is called the {\it orbit map}. It maps $\real^n$ onto the subset
${\cal S}\subset \real^q$:
$${\cal S}=p(\real^n) \subset\real^q$$
such that each orbit of $\real^n$ is mapped onto one and only one point of ${\cal S}$.\\
The orbit map $p$ induces a one to one correspondence
between $\real^n/G$ and ${\cal S}$ so that ${\cal S}$ can be concretely
identified with the orbit space of the $G$-action.\\
${\cal S}$ is a closed connected semi-algebraic
proper subset of $\real^q$ stratified in exactly the same manner as $\real^n$.
All the strata $\sigma$ of ${\cal S}$ are images of the strata $\Sigma$
of $\real^n$ through the orbit map and if
$\Sigma'$ is of greater orbit type than $\Sigma$,
then $\sigma'=p(\Sigma')$ lies in the boundary of $\sigma=p(\Sigma)$.
The interior of ${\cal S}$ hosts the principal stratum and all singular strata
lie in the bordering surface of ${\cal S}$. Like all semi-algebraic sets, ${\cal S}$ is
stratified in primary strata, and each primary stratum is the image of a
connected component of a stratum of $\real^n$.
The origin of $\real^n$ is the only stratum of the greatest orbit
type $[G]$ and its image through the orbit map is always the origin of
$\real^q$, because of the homogeneity of the IB. The origin of
$\real^q$ lies then in the boundary of all other strata of ${\cal S}$.\\
${\cal S}$ is unbounded because $\forall x\in \real^n$ the points $x$ and
$\lambda x$, $\forall \lambda\in \real,\ \lambda\neq 0,$ belong to the same
stratum, because of the linearity of the $G$-action; so,
as $x$ belongs to the sphere with equation $p_q(x)=(x,x)=x^2$,
all the positive $p_q$ axis of $\real^q$ must belong to ${\cal S}$.
Any spherical surface of $\real^n$ with equation $(x,x)=r^2>0$ intersects
all strata of $\real^n$ except the origin.
Consequently, any plane $\Pi_r$ of $\real^q$ with equation $p_q=r^2>0$ intersects all
strata of $\real^q$ except the origin. As the sphere is a compact set,
${\cal S}\cap \Pi_r$ gives a compact connected section of the orbit space ${\cal S}$.
This section is sufficient to imagine the whole shape of ${\cal S}$: going
down to the origin along the $p_q$ axis the section must
contract, reducing at the end to the origin, and
going up to infinity it must expand, maintaining in any case its
topological shape.
As an example, Figure 1 shows the orbit space classified in
\cite{6st} as III.2 (Table~\ref{tabR} lists the coregular groups
that have this orbit space) and a section of this orbit space with
a plane $p_3=\mbox{constant}$.
\begin{figure}[h]
\vfill
\begin{minipage}{0.5\linewidth}
\hskip 0.7cm
\includegraphics{fig1tal.eps}
\end{minipage} \hfill
\begin{minipage}{0.5\linewidth}
\vskip 0.2cm
\includegraphics{fig2tal.eps}
\end{minipage} \hfill
\caption{\protect\footnotesize Orbit space III.2 and its section
with a plane $p_3=\mbox{constant}$.}
\end{figure}
All $G$-invariant $C^\infty$ functions $f(x)$, defined on
$\real^n$, can be expressed as $C^\infty$ functions of the basic
invariants, so they define $C^\infty$ functions $\widehat f(p)$,
defined on $\real^q$:
$$f(x) = \widehat f(p_1(x),\ldots,p_q(x)) \to \widehat f(p_1,\ldots,p_q)$$
The functions $\widehat f(p)$ are defined also at
points $p\notin {\cal S}$, but only the restriction $\widehat f(p)\mid_{p\in {\cal S}}$
has the same range as $f(x),\ x \in \real^n$; in fact $f(x)=\widehat f(p)$ if
$p=p(x)$, that is, if $p\in {\cal S}$.\\
All $G$-invariant $C^\infty$ functions can then be studied in the orbit
space ${\cal S}$ but one needs to know
exactly all equations and inequalities defining ${\cal S}$ and its strata.\\
A polynomial $f(p)$ is said to be {\it $w$-homogeneous} of {\it weight} $d$
if the polynomial $f(p(x))$ is homogeneous of degree $d$.
Each coordinate $p_1,\ldots,p_q$ of $\real^q$ has
then a weight $d_1,\ldots,d_q$.
The IBTs can be viewed as coordinate transformations of $\real^q$:
$$p_i' =p_i'(p_1,\ldots,p_q),\qquad
i=1,\ldots,q-1,\qquad p_q'=p_q$$
The Jacobian matrix $J(p)$ inherits the
properties of $J(x)$, in particular its matrix elements
$J_{ij}(p)=\partial p_i'(p)/\partial p_j$ are 0 or
$w$-homogeneous polynomials in $p$ of weight $d_i-d_j$ and $\det J(p)$ is
a non-vanishing constant.
The only coordinate transformations of $\real^q$
of interest are those corresponding to IBTs and they are called
again IBTs (of $\real^q$).\\
The IBTs change the form of ${\cal S}$ but not its topological shape and
stratification.
An IB is said to be {\em regular} if there exists no polynomial function
$f(p)$, not identically zero, such that:
$$f(p_1(x),\ldots,p_q(x))=0\qquad \forall x \in \real^n$$
Otherwise the IB is said to be {\em non-regular}. Obviously $f(p)$
cannot be solved with respect to any of the variables, otherwise the basis
would not be minimal.\\
A linear group $G$ with a regular $IB$ is said {\em coregular}.
Otherwise it is said {\em non-coregular}.\\
If the basis is non-regular, then generally there exists an ideal of
polynomial relations among the elements of the IB, and the (independent)
generators of this ideal, $f_1,\ldots,f_r$, can always be chosen homogeneous.
The orbit space ${\cal S}$ must then be contained in the surface ${\cal Z}$ of
$\real^q$ defined by the equations:
$$f_1(p)=0,\ldots,\quad f_r(p)=0$$
and has the same dimension $q-r$ as ${\cal Z}$. ${\cal Z}$ is called the
{\em surface of the relations} and the pair $(q,q-r)$ the
{\em regularity type} (or $r$-type) of the IB and of the group $G$.\\
When $G$ is coregular there are no relations, $r=0$, and ${\cal Z}\equiv \real^q$,
so ${\cal S}$ is $q$-dimensional.
At a point $x\in\Sigma \subset \real^n$, the number of
linearly independent gradients of the basic invariants is equal to the
dimension of the stratum $p(\Sigma)\subset {\cal S}$.
It is convenient to construct the $q \times q$
Gram matrix $P(x)$ with elements $P_{ab}(x)$
given by the scalar products of the gradients of the basic invariants:
$$ P_{ab}(x) = (\nabla p_a(x),\nabla p_b(x))$$
$P(x)$ is then positive semidefinite $\forall x \in \real^n$, and for
$x\in \Sigma \subset \real^n$, $\mbox{rank}\, P(x)$ equals the dimension of the stratum
$p(\Sigma)$ of ${\cal S}$.\\
Because of the covariance of the gradients of $G$-invariant
functions ($\nabla f(g \cdot x)=g \cdot \nabla f(x)$)
and of the orthogonality of $G$ (which implies the invariance of
the scalar products),
the matrix elements
$P_{ab}(x)$ are $G$-invariant homogeneous polynomial functions of
degree $d_a+d_b-2$.
Then, all the matrix elements of $P(x)$ can be expressed as
polynomials of the basic invariants.
One can then define a matrix $\widehat P (p)$ in
$\real^q$ such that:
$$ P_{ab}(x) =\widehat P_{ab}(p_1(x),\ldots,p_q(x))=\widehat P_{ab}(p)
\quad \forall x \in \real^n\ \mbox{and}\ p=p(x)$$
At the point $p=p(x)$, image in $\real^q$
of the point $x\in\real^n$ through the orbit map, the matrix $\widehat P(p)$
is the same as the matrix $P(x)$.
The matrix $\widehat P(p)$ is however defined in all $\real^q$, also
outside ${\cal S}$, but only in ${\cal S}$ it reproduces $P(x),\ \forall x\in\real^n$.\\
The properties of the matrix $\widehat P(p)$ follow from the definition of
$P(x)$ and are the following:
\begin{enumerate}
\item $\widehat P(p)$ is a real, symmetric $q \times q$ matrix.
\item $\widehat P(p)$ is positive semidefinite in ${\cal S}$.
${\cal S}$ is the {\em only} region of ${\cal Z}$ where $\widehat P(p)\geq 0$.
\item $\mbox{rank}\widehat P(p)$ equals the dimension of the stratum
containing $p$.
\item the matrix elements
$\widehat P_{ab}(p)$ are $w$-homogeneous polynomial functions of weight
$d_a+d_b-2$ and the last row and column of $\widehat P(p)$ have the fixed form:
$$\widehat P_{qa}(p) =\widehat P_{aq}(p) = 2 d_a p_a\qquad \forall a=1,\ldots, q$$
\item $\widehat P(p)$ transforms as a contravariant tensor
under IBTs:
$$ \widehat P_{ab}'(p') = J_{ai}(p) J_{bj}(p) \widehat P_{ij}(p)$$
\end{enumerate}
The matrix $\widehat P(p)$ completely determines ${\cal S}$ and its stratification.
Defining ${\cal S}_k$ the union of all $k$-dimensional strata of ${\cal S}$, one has:
$${\cal S}=\{p\in {\cal Z} \mid \widehat P(p)\geq 0\}$$
$${\cal S}_k=\{p\in {\cal Z} \mid \widehat P(p)\geq 0,\ \mbox{rank}\,\widehat P(p)=k\}$$
The principal stratum has dimension $q-r$, equal to the maximum rank of
$\widehat P(p)$, with $r$ the number of independent relations among the $p_i(x)$,
and coincides with ${\cal S}_{q-r}$.
(Obviously if $G$ is coregular $r=0$ and ${\cal Z}\equiv \real^q$).\\
To find ${\cal S}_k$ one may impose that all the principal minors of $\widehat P(p)$
of order greater than $k$ vanish and that at least one of
those of order $k$ is different from zero.
From this one sees that ${\cal S}$ is semialgebraic because it is defined through
algebraic equations and inequalities.\\
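As an illustrative sketch of these formulas (computed with the computer-algebra package {\tt sympy}; the example group, the sign changes of $\real^2$ with IB $p_1=x^2-y^2$, $p_2=x^2+y^2$, is an addition of ours, not taken from the references):

```python
import sympy as sp

x, y, p1, p2 = sp.symbols('x y p1 p2', real=True)
basis = [x**2 - y**2, x**2 + y**2]          # p_1, p_2 (d_1 = d_2 = 2)

# Gram matrix of the gradients: P_ab(x) = (grad p_a, grad p_b)
grads = [sp.Matrix([sp.diff(f, x), sp.diff(f, y)]) for f in basis]
P = sp.Matrix(2, 2, lambda a, b: sp.expand((grads[a].T * grads[b])[0, 0]))

# Re-express the entries in terms of the basic invariants themselves
Phat = P.subs({x**2: (p2 + p1) / 2, y**2: (p2 - p1) / 2}).applyfunc(sp.expand)
assert Phat == sp.Matrix([[4*p2, 4*p1], [4*p1, 4*p2]])

# det Phat = 16 (p2^2 - p1^2): Phat >= 0 exactly on the wedge
# S = {(p1, p2) : p2 >= |p1|}; on its boundary the rank drops to 1
# (singular strata), and at the origin it drops to 0.
detP = sp.expand(Phat.det())
assert detP == 16*p2**2 - 16*p1**2
assert Phat.subs({p1: 0, p2: 1}).is_positive_definite   # interior point
assert Phat.subs({p1: 1, p2: 1}).rank() == 1            # boundary point
```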
In order to classify orbit spaces, it is sufficient
to classify the corresponding matrices $\widehat P(p)$, as they determine ${\cal S}$
completely and this has been done in \cite{6st,7st}.
\section {Determination of the Allowable $\widehat P$-matrices of
Compact Linear Groups through the Canonical Equation}
The polynomials defining the singular strata of ${\cal S}$ (and also the principal
stratum if $G$ is non-coregular) must satisfy a differential relation
(that characterizes them) that has been crucial in \cite{6st,7st,stv}
to determine the $\widehat P$-matrices without the explicit use of
the IB's. Let us examine the origin of this relation.\\
Let $\sigma$ be a primary stratum of ${\cal S}$ and $I(\sigma)$ the ideal of the
polynomials defined in $\real^q$ vanishing in $\sigma$.
Then $\Sigma=\{x\in \real^n \mid p(x)\in \sigma \}$ is a connected component
of a stratum of $\real^n$. For all $\widehat f\in I(\sigma)$,
$f(x)=\widehat f(p(x))$ is a $G$-invariant function that vanishes
on $\Sigma$.
Then at all regular points $x$ of $\Sigma$ one must have both that
$\ \nabla f(x) |_{x\in \Sigma}\; \perp \; \Sigma\ $,
because $f(x)$ is constant on $\Sigma$, and that
$\ \nabla f(x) |_{x\in \Sigma}\ $ is tangent to $\Sigma$,
because $f(x)$ is $G$-invariant and the gradients of $G$-invariant functions
are tangent to the strata. The only possibility is then that:
$$\nabla f(x) |_{x\in \Sigma}\ = \ 0$$
Applying the chain rule one finds
$$\nabla f(x) |_{x\in \Sigma}=
\sum_{b=1}^q \nabla p_b(x)
\left. \frac{\partial \widehat f(p(x))}{\partial p_b}\right|_{x\in \Sigma}=0$$
and taking the scalar product with $\nabla p_a(x)$, $a=1,\ldots,q$, one obtains
$q$ relations expressed only in terms of the $p_i(x)$, which can therefore be
read also in $\real^q$:
$$0=\sum_{b=1}^q (\nabla p_a(x),\nabla p_b(x))
\left.\frac{\partial \widehat f(p(x))}{\partial p_b}\right|_{x\in \Sigma}=
\sum_{b=1}^q {\widehat P}_{ab}(p(x))
\left.\frac{\partial \widehat f(p(x))}{\partial p_b}\right|_{x\in \Sigma}=
\sum_{b=1}^q {\widehat P}_{ab}(p)
\left.\frac{\partial \widehat f(p)}{\partial p_b}\right|_{p\in \sigma},\qquad a=1,\ldots,q$$
This means that:
$$\sum_{b=1}^q {\widehat P}_{ab}(p)
\partial_b \widehat f(p)\in I(\sigma),\qquad a=1,\ldots,q$$
where $\partial_b$ means partial derivation with respect to $p_b$.\\
It is convenient to distinguish two cases:
\begin{enumerate}
\item
$I(\sigma)$ has only one generator $a(p)$.
In this case one obtains the following relations:
$$\sum_{b=1}^q {\widehat P}_{ab}(p) \partial_b a(p) =
\lambda_a(p) a(p) \qquad a=1,\ldots,q$$
with the $\lambda_a(p)$ $w$-homogeneous polynomials of weight $d_a-2$.
In this case $\sigma$ is a surface of dimension $q-1$ and its intersection
with the region $\widehat P(p)\geq 0$ gives a
singular stratum of maximal dimension if $G$ is coregular, or the
principal stratum if $G$ is non-coregular of $r$-type $(q,q-1)$.
In this last case the equation $a(p)=0$ defines ${\cal Z}$.
\item
$I(\sigma)$ has more independent generators $a^{(1)}(p),\ldots, a^{(s)}(p)$.
In this case one obtains the following relations:
$$\sum_{b=1}^q {\widehat P}_{ab}(p) \partial_b a^{(j)}(p)=
\sum_{i=1}^s \lambda_{a}^{(i,j)}(p) a^{(i)}(p) \qquad a=1,\ldots,q,\ j=1,\ldots,s$$
with the $\lambda_{a}^{(i,j)}(p)$ $w$-homogeneous polynomials of weight
$d_a-2+w(a^{(j)})-w(a^{(i)})$.
$\sigma$ in this case is a surface of dimension $q-s$.
It can be a principal stratum only if $G$ is non-coregular of $r$-type
$(q,q-s)$. In all other cases it is a singular stratum.
\end{enumerate}
Both relations above are called {\em master relations}.
At the moment only the case of $(q-1)$-dimensional strata has been
investigated: the coregular case is presented in \cite{6st} and
the non-coregular case of $r$-type $(q,q-1)$ in \cite{stv}, and all that
follows concerns only $(q-1)$-dimensional strata.
It has been proved in \cite{6st} that all irreducible polynomials $a(p)$
that satisfy the master relation must be factors of $\det{\widehat P}(p)$.
Any product of them satisfies the master relation too.
All these polynomials are called {\em active}.
The product $A(p)$ of all irreducible active polynomials is called
the {\em complete} (active) factor of $\det{\widehat P}(p)$.
The surface $\sigma=\{p\in {\cal Z} \mid \widehat P(p)\geq 0,\ A(p)=0\}$ coincides
with the whole boundary of ${\cal S}$ if $G$ is coregular
(${\cal Z}\equiv \real^q$ in this case) or with
the whole principal stratum of ${\cal S}$ if $G$ is non-coregular of $r$-type $(q,q-1)$.
$\det {\widehat P}(p)$ may contain a non active factor $B(p)$ that is called
{\it passive}. $A(p)$ and $B(p)$ are uniquely defined up to
non-zero constant factors.\\
The properties of the master relation with respect to IBTs are studied
in \cite{6st}; the main results are the following:
\begin{enumerate}
\item In some IB, called {\em $A$-bases}, the master relation
has the following {\it canonical} form:
$$\sum_{b=1}^q \widehat P_{ab}(p) \partial_b A(p) = 2 w(A)\delta_{aq}A(p) \qquad a=1,\ldots,q$$
The case $a=q$ is just a homogeneity condition on $A(p)$ and only the cases
$a=1,\ldots,q-1$ are characteristic of the $(q-1)$-dimensional strata.
The variable $p_q$ can then be easily eliminated from the equation if one
restricts to a plane $\Pi$ of constant $p_q$, for example
$\Pi=\{p \in \real^q \mid p_q=1\}$.
\item
In all $A$-bases the restriction $A(p)\mid_{p\in\Pi}$
has at most one local non-degenerate extremum in $\Pi$, outside the region
$A(p)\mid_{p\in\Pi}\;=0$, at the point $p_{0}=(0,\ldots,0,1)$.
\end{enumerate}
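For the sign-change example group of $\real^2$ (an illustration of ours: IB $p_1=x^2-y^2$, $p_2=x^2+y^2$, $\widehat P$-matrix with rows $(4p_2,4p_1)$ and $(4p_1,4p_2)$, complete factor $A(p)=p_2^2-p_1^2$), the canonical form and property 2. can be verified directly; this basis is already an $A$-basis:

```python
import sympy as sp

p1, p2 = sp.symbols('p1 p2', real=True)
Phat = sp.Matrix([[4*p2, 4*p1], [4*p1, 4*p2]])   # example P-hat matrix
A = p2**2 - p1**2                                 # complete factor, w(A) = 4
wA, q, pvars = 4, 2, [p1, p2]

# Canonical equation: sum_b Phat[a,b] d_b A = 2 w(A) delta_{aq} A
for a in range(q):
    lhs = sp.expand(sum(Phat[a, b] * sp.diff(A, pvars[b]) for b in range(q)))
    rhs = 2 * wA * A if a == q - 1 else 0
    assert sp.expand(lhs - rhs) == 0

# On the plane Pi = {p2 = 1}, A reduces to 1 - p1^2: its only extremum
# is the non-degenerate maximum at p0 = (0, 1), as stated in property 2.
A_on_Pi = A.subs(p2, 1)
assert sp.diff(A_on_Pi, p1).subs(p1, 0) == 0
assert sp.diff(A_on_Pi, p1, 2) == -2
```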
If $G$ is coregular one has also the following results:
\begin{enumerate}
\item[3.]
$p_{0}$ always exists and lies in the interior of ${\cal S}$.
\item[4.]
In some $A$-bases, the {\it standard} $A$-bases, $\widehat P(p)$
evaluated at $p_0$ is diagonal.
\end{enumerate}
Properties 3. and 4. exclude linear terms in
$A(p)\mid_{p\in\Pi}$ and require that all the quadratic terms $p_i^2$
have coefficients of equal sign (negative if one requires $A(p)\mid_{p\in\Pi}$
to have a maximum at $p_0$) and that there are no mixed quadratic terms $p_i p_j$.
The weight $w(A)$ of $A(p)$ is then bounded:
$2 d_1 \leq w(A)\leq w(\det \widehat P) = 2 \sum_{i=1}^q (d_i-1)$.\\
If $G$ is non-coregular properties 3. and 4. are not valid but $A(p)$ must
satisfy in addition a second order boundary condition \cite{stv}.
Given the IB, and following the lines reported above,
it is easy to determine the matrix $\widehat P(p)$, the complete factor $A(p)$,
and the subset ${\cal S}$ of $\real^q$ that represents the orbit space of the
group action. The $\widehat P$-matrices can however be calculated and classified
without the knowledge of the IB's.
The many conditions found on the form of
a general $\widehat P$-matrix and on the complete factor
$A$ allow one to find all possible solutions of the
{\em canonical equation} that are compatible with these
conditions (the master relation in its canonical form is
now used to find the $\widehat P$-matrices, so it is better to call it an equation
rather than a relation).
In \cite{6st,7st} the canonical equation is solved for the case of coregular
groups. We fixed only the number $q$ of the basic invariants and
considered all the matrix elements
$\widehat P_{ab}(p)$, $A(p)$, and $B(p)$ as unknown $w$-homogeneous polynomials
satisfying the following {\em initial conditions}:
\begin{enumerate}
\item $\widehat P(p)$ is a real, symmetric $q \times q$ matrix;
\item the matrix elements
$\widehat P_{ab}(p)$ are $w$-homogeneous polynomial functions of weight
$d_a+d_b-2$ and the last row and column of $\widehat P(p)$ have the fixed form:
$$\widehat P_{qa}(p) =\widehat P_{aq}(p) = 2 d_a p_a\qquad \forall a=1,\ldots, q$$
\item $\widehat P(p_0)$ is diagonal and positive definite;
\item $A(p)$ is $w$-homogeneous of weight $w(A)$ such that:
$2 d_1\leq w(A)\leq w(\det{\widehat P})$.
Its restriction to the plane $p_q=1$ has no linear terms and
only quadratic terms of the type $-k_i p_i^2$ with $k_i>0$.
\item $A(p)$ is a factor of $\det{\widehat P}$, so the equation
$A(p)B(p) = \det{\widehat P}(p)$ defines $B(p)$.
\item the canonical equation must be satisfied.
\end{enumerate}
In all these unknown polynomials the dependence
on the variable $p_1$ can always be made explicit, even if all the
degrees $d_i,\ i\neq q$, are unknown.
Items 5. and 6. give a system of coupled differential equations that
must be solved in terms of unknown $w$-homogeneous polynomial functions.
The initial conditions imposed
were so strong that this system could be solved analytically, and it
gave only a finite number of different solutions for each value
of the dimension $q$ ($q = 2, 3, 4$; there is no reason to believe that
this will not remain true for higher values of $q$).
The matrices $\widehat P (p)$ that, together with the corresponding complete factor
$A(p)$, satisfy these initial conditions are such that $\widehat P(p)\geq 0$
only in a connected subset ${\cal S}$ of $\real^q$ (whose boundary satisfies the
equation $A(p)=0$) and are called {\em allowable $\widehat P$-matrices}.
The name originates from the fact that they are potentially determined by
an IB of an existing group $G$, but it is not known in general whether
that group really exists. It is clear, however, that all $\widehat P$-matrices
determined by the IB's of the existing compact coregular
groups are allowable $\widehat P$-matrices.
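As an illustration of the last statement, here is a sketch (with the assumption, standard for the dihedral reflection group $D_s$ acting on $\real^2$, that an IB is $p_1=\mathrm{Re}\,(x+iy)^s$, $p_2=x^2+y^2$) checking that the resulting $\widehat P$-matrix falls in the class $I_2$ of Table 1, with degrees $[s,2]$ and $w(A)=2d_1$:

```python
import sympy as sp

x, y, p1, p2 = sp.symbols('x y p1 p2', real=True)
s = 3                                     # one value of the class parameter

z = x + sp.I * y
b1 = sp.re(sp.expand(z**s))               # degree-s invariant Re(z^s)
b2 = x**2 + y**2                          # p_q = sum of squares

grads = [sp.Matrix([sp.diff(f, x), sp.diff(f, y)]) for f in (b1, b2)]
P = sp.Matrix(2, 2, lambda a, b: sp.expand((grads[a].T * grads[b])[0, 0]))

# Entries in terms of the invariants: P11 = s^2 p2^(s-1), P12 = 2 s p1
assert sp.simplify(P[0, 0] - s**2 * b2**(s - 1)) == 0
assert sp.expand(P[0, 1] - 2 * s * b1) == 0
assert sp.expand(P[1, 1] - 4 * b2) == 0

# Hence det Phat = 4 s^2 (p2^s - p1^2): the complete factor
# A = p2^s - p1^2 has weight 2s = 2 d_1, as listed for class I_2.
Phat = sp.Matrix([[s**2 * p2**(s - 1), 2 * s * p1], [2 * s * p1, 4 * p2]])
assert sp.expand(Phat.det() - 4 * s**2 * (p2**s - p1**2)) == 0
```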
This approach has been recently applied with success to the determination of
the allowable $\widehat P$-matrices associated to compact non-coregular linear groups
of $r$-type $(q,q-1)$ \cite{stv}.
The initial conditions 3. and 4. for the case of coregular groups are no longer
valid and must be replaced by the following two conditions:
\begin{enumerate}
\item [3.] $A(p)$ is $w$-homogeneous of weight $w(A)\leq w(\det \widehat P)$.
\item [4.] $A(p)$ satisfies the second order boundary condition \cite{stv}.
\end{enumerate}
Then one proceeds as in the coregular case. Here not all solutions found
are such that $\widehat P(p)\geq 0$ in a connected subset of $\real^q$, so, a
posteriori, one has to discard some solutions.
It can however be proved that when the point
$p_0$, where $A(p)\mid_{p\in\Pi}$ has an extremum, exists and does not
belong to ${\cal Z}$, then the allowable $\widehat P$-matrices of the
non-coregular groups of $r$-type $(q,q-1)$ are necessarily allowable
$\widehat P$-matrices of coregular groups with $q$ basic invariants.
In this case the orbit spaces ${\cal S}'$ of the non-coregular groups always lie
in some connected $(q-1)$-dimensional singular stratum of the orbit space
${\cal S}$ of a coregular group.
In Table 1 I report all 2-, 3- and 4-dimensional
allowable $\widehat P$-matrices for coregular groups,
showing the corresponding weights $[d_1,\ldots,d_{q-1},2]$
of the $p_i$ and the weight
$w(A)$ of the complete factor $A(p)$. The parameters $j_i$ and $s$ that
appear in the table are arbitrary positive integers limited only by
$d_1\geq \cdots \geq d_{q-1} \geq 2$.\\ The explicit forms of
these allowable $\widehat P$-matrices are given in \cite{6st,7st}.\\
These allowable $\widehat P$-matrices share the following
properties:
\begin{enumerate}
\item For each number $q$ there is a finite number of classes of
allowable $q\times q$ $\widehat P$-matrices.
In each class the $\widehat P$-matrices differ only in some positive integer
parameters $j_i$ and in a scale factor $s$. These parameters
also fix the values of the degrees $[d_1,d_2,\ldots,d_{q-1}]$. In
$\Pi=\{p\in \real^q \mid p_q=1\}$ all the matrices that differ only in the
value of $s$ become identical.
\item
In convenient $A$-bases all coefficients of the
$\widehat P$-matrix elements are integer numbers.
\end{enumerate}
In \cite{7st,t1} the classes $A9(j_1)$ and $A10(j_1)$ were overlooked.
These classes of solutions have been determined by applying the induction rules
\cite{st} to the case $q=3$. These induction rules permit one to write down
easily most of the solutions of the $(q+1)$-dimensional coregular case
once those of the $q$-dimensional case are known.
The 4 induction rules discovered up to now, when applied to the solutions
corresponding to $q=2,3$, give all solutions of the cases $q=3,4$, except
when the complete factor contains a term in $p_1^q$ (and this seems to be the
case for all irreducible finite groups generated by reflections) and except the
class $D2$ (which probably can be derived from the class III.3 through
an induction rule not yet discovered). The induction rules probably reflect
some properties of the groups but they are not yet understood.
In \cite{stv} all the allowable $\widehat P$-matrices are determined for non-coregular
groups of $r$-type $(q,q-1)$, that is, groups
with 3 invariants among which there is one algebraic relation. The result is
only one class of allowable $\widehat P$-matrices for these groups,
and these $\widehat P$-matrices are the same as those of the coregular case with
3 invariants, classified in \cite{6st} as $I(1,1)$.
The only difference is that the principal stratum is now the 2-dimensional
surface that forms the boundary of ${\cal S}$ in the coregular case.
It is possible that all orbit spaces of non-coregular groups of $r$-type
$(q,q-1)$ occur at the border of the orbit spaces of coregular groups.
\begin{table}[h]\begin{center}\caption{ALLOWABLE $\widehat P$-MATRICES
OF COREGULAR GROUPS OF DIMENSION $q = 2, 3, 4$.}\label{tab0} \vskip
0.4cm\begin{tabular}{|c|l|c|c|} \hline
$q$&CLASS& $w(A)$ & $[d_1,\ldots,d_q]$ \\
\hline
2&$I_2$ & $ 2 d_1$ & $[s,2]$ \\
\hline
3&$I(j_1,j_2)$ &
$ 2 d_1 $ & $ [s(j_1+j_2)/2, s,2]$\\
&$II(j_1) $ & $ 2 d_1+d_2 $ & $
[s(j_1+1), 2s,2]$ \\
&$III.1$ & $ 3
d_1 $ & $ [4s, 3s,2]$ \\
&$III.2$
& $ 3 d_1 $ & $ [6s, 4s,2]$
\\
&$III.3$ & $ 3 d_1 $ & $ [10s,6s,2]$ \\
\hline
4&$A1(j_1,j_2,j_3,j_4)$ &
$ 2 d_1 $ &$[s(j_1+j_2)(j_3+j_4)/4, s(j_1+j_2)/2,s,2]$\\
&$A3(j_1,j_2,j_3)$ & $ 2 d_1+d_3 $& $
[s(j_1+1)(j_2+j_3)/2, s(j_1+1), 2s,2] $ \\
&$A2(j_1,j_2,j_3,j_4)$ & $
2 d_1 $ & $ [s((j_1+1)(j_2+j_3)/2+j_4), s(j_1+1), 2s,2]$\\
&$A4(j_1,j_2) $ & $ 2 d_1+j_1 d_3$ & $
[2s j_2, s(j_1+j_2), 2s,2] $ \\
&$A5(j_1,j_2,j_3)$ & $ 2
d_1+d_2 $ & $ [s (j_1(j_2+1)+j_3), 2s(j_2+1), 2s,2] $\\
&$A6(j_1,j_2,j_3)$ & $ 2 d_1+d_2 $ & $
[s(j_1+j_2)(j_3+1)/2, s(j_1+j_2), s,2]$ \\
&$A7(j_1,j_2)$ & $
2 d_1+d_2+d_3$ & $ [s j_1(j_2+1), 2s j_1, 2s,2] $\\
&$A8(j_1,j_2)$ & $ 2 d_1+2 d_2 $ &$[s(j_1+1), s(j_2+1), 2s,2] $ \\
&$A9(j_1)$ & $ 2 d_1+ d_3 $ &$[2s(j_1+1), s(j_1+1), 2s,2] $ \\
&$A10(j_1)$ & $ 2 d_1+ d_2 $ &$[s(j_1+1), 2s, s,2] $ \\
&$B1(j_1) $ & $ 2
d_1 $ & $ [6s j_1, 4s, 3s,2] $ \\
&$B2$ & $ 3 d_1 $ & $ [4s, 3s, 3s,2]
$ \\
&$B3(j_1,j_2)$ & $ 3 d_1 $ & $
[2s(j_1+j_2), 3s(j_1+j_2)/2, s,2]$ \\
&$B4(j_1) $ & $
3 d_1+d_3 $ & $ [4s j_1, 3s j_1, 2s,2] $\\
&$C1(j_1,j_2)$ & $ 2 d_1 $ & $
[3s(j_1+2j_2), 6s, 4s,2] $ \\
&$C2(j_1) $ & $ 2
d_1+d_2 $ & $ [6s j_1, 6s, 4s,2] $ \\
&$C3(j_1)
$ & $ 2 d_1+2 d_2 $ & $ [3s(j_1+1), 6s, 4s,2]$ \\
&$C4 $ & $ 3 d_1 $
& $ [6s, 4s, 3s,2] $ \\
&$C5(j_1,j_2)$
& $ 3 d_1 $ & $ [3s(j_1+j_2), 2s(j_1+j_2), s,2] $\\
&$C6(j_1) $ & $ 3 d_1+d_3 $ & $ [6s j_1,4s j_1, 2s,2] $ \\
&$D1(j_1) $ & $ 2 d_1
$ & $ [15s j_1, 10s , 6s,2] $ \\
&$D2 $
& $ 3 d_1 $ & $ [10s, 6s, 4s,2] $
\\
&$D3(j_1,j_2)$ & $ 3 d_1 $ & $
[5s(j_1+j_2), 3s(j_1+j_2), s,2] $ \\
&$D4(j_1) $ & $
3 d_1+d_3 $ & $ [10s j_1, 6s j_1, 2s,2] $ \\
&$E1$
& $ 4 d_1 $ &$ [5s, 4s, 3s,2]
$ \\
&$E2$ & $ 4 d_1 $ &$
[6s, 4s, 4s,2] $ \\
&$E3$ & $ 4
d_1 $ &$ [8s, 6s, 4s,2] $ \\
&$E4$
& $ 4 d_1 $ &$ [12s, 8s, 6s,2]
$ \\
&$E5$ & $ 4 d_1 $ &$ [30s, 20s, 12s,2] $ \\ \hline
\end{tabular}\end{center}
{\bf Notation}.
$q$: number of basic invariants and dimension of the $\widehat P$-matrices.
CLASS: class of $\widehat P$-matrices in the notation of \cite{6st,7st}.
$w(A)$: degree of the active factor $A(p)$ determining the boundary of the
orbit space.
$[d_1,\ldots,d_q]$: degrees of the basic invariants.
\end{table}
\section{Orbit Spaces of Coregular Compact Groups}
All linear compact groups generate a $\widehat P$-matrix that must be an
allowable $\widehat P$-matrix. The converse may also be true, but one must then
find the linear groups corresponding to a given allowable $\widehat P$-matrix.
In the case of irreducible finite groups generated by reflections the IB
are well known, and when these IB contain 2, 3 or 4 basic invariants
the corresponding $\widehat P$-matrices have been determined \cite{sv}.
All these $\widehat P$-matrices, after a proper IBT, exactly coincide with
those in the classification of allowable $\widehat P$-matrices reported
in \cite{6st,7st}.
In the case of compact Lie groups the IB is known in only a few cases.
G. W. Schwarz \cite{sch} gave a classification of all complex coregular
representations of complex simple Lie groups,
together with the number and degrees of the basic invariants,
and, for some groups, also some hints on how to write down the integrity basis.\\
From this classification it is possible to deduce all real coregular
representations of compact simple Lie groups. Below I shall sketch
how this has been done. Proofs and details can be found in \cite{t0,stv2}.
\begin{itemize}
\item
Let $G$ be a compact Lie group and $G_c$ be its {\it complexification}.
$G_c$ is the {\em smallest} complex Lie group that
contains $G$ and $G$ is a {\it maximal compact subgroup} of $G_c$.
\item
Let $\varphi$ be a real representation of $G$ in a real vector space
$W_{\varphi }$. $\varphi$ defines uniquely a complex representation of $G_c$
in the complex space $V_{\varphi}=\complex \otimes W_{\varphi}$.
Vice versa, a complex representation $\varphi$ of $G_c$ in the complex space
$V_{\varphi }$ defines a complex representation of $G$ in $V_{\varphi }$.
All these representations are completely reducible and
$G$ and $G_c$ are then {\em reductive} groups.
\item
Every complex reductive Lie group may be identified with a complex linear
algebraic group so that its complex analytic representations coincide
exactly with its rational representations. Then the linear group
$\varphi(G_c)$ is a complex algebraic group as well as a complex Lie group.
\item
Every linear group $H\subset GL(n,\complex)$ and its
Zariski closure $\mbox{cl}(H)$ have the same polynomial invariants:
$$\complex[\complex^n]^H=\complex[\complex^n]^{\mbox{{\small cl}}(H)}$$
\item
The linear group $\varphi(G_c)$ is the Zariski closure of the linear group
$\varphi(G)$, so $\varphi(G)$ and $\varphi(G_c)$
have equal rings of invariant polynomials, and in particular these rings
have the same IB.
\item
Some complex representations $\varphi$ of $G_c$ admit a {\em real form} for the
maximal compact subgroup $G$ and in this case the linear group $\varphi(G)$
is equivalent to a group of real orthogonal matrices. All these
representations are well known and classified.
Given a complex representation $\varphi$ of $G_c$ that does not admit a real
form for $G$, one may form a real representation for $G$ in
$\varphi+\overline{\varphi}$, with $\overline{\varphi}$ the complex conjugate
representation of $\varphi$, called the {\em realification} of $\varphi$.
\item
If the complex representation $\varphi$ of $G_c$ admits a real form for $G$
then the ring of $\varphi(G)$-invariant polynomials, with real coefficients,
$\real [ \real^n]^{\varphi (G)}$, admits a real IB.
The ring of $\varphi(G_c)$-invariant polynomials, with complex coefficients,
is exactly the ring obtained from $\real [ \real^n]^{\varphi (G)}$
allowing the coefficients of the polynomials to be complex numbers:
$$ \complex [ \complex^n ]^{\varphi (G_c)}
\simeq \complex \otimes \real [ \real^n ]^{\varphi (G)}$$
(The space $\real^n$ or
$\complex^n$ where the polynomials are defined is irrelevant here,
as one may consider that all of them are defined in terms of $n$
abstract indeterminates).\\
Then, both $ \complex [ \complex^n ]^{\varphi (G_c)}$ and
$\real [ \real^n ]^{\varphi (G)}$ admit the same IB formed by
polynomials with real coefficients.
\item
A representation $\varphi(G)$ in $\real^n$ is coregular if and only if the
representation $\varphi(G_c)$ in $\complex^n$ is coregular. In fact
any algebraic relation $\hat f$ among the $p_i(x)$ (it doesn't matter if the
coefficients in $\hat f$ are real or complex) implies that both $\varphi(G)$
and $\varphi(G_c)$ are non-coregular.
\item
Every subrepresentation $\varphi'$ of a coregular representation $\varphi$
is coregular too.
The basic invariants of $\varphi'$ are a proper subset of the basic
invariants of $\varphi$.
\end{itemize}
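As a concrete illustration of the items above (standard examples, not taken
from the classification of \cite{sch}), the complexifications of the
classical compact groups are

```latex
$$ (SO(n))_c \simeq SO(n,\complex), \qquad
   (SU(n))_c \simeq SL(n,\complex), \qquad
   (Sp(2n))_c \simeq Sp(2n,\complex), $$
```

and the defining representation of $SO(n)$ on $\real^n$ is a real form of the
defining representation of $SO(n,\complex)$ on $\complex^n$: both linear
groups have the single basic invariant $p_1(x)=\sum_i x_i^2$, with real
coefficients, as stated above.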
From the classification of G. W. Schwarz \cite{sch} one may then
recover the classification of the real coregular representations of the
compact simple Lie groups. All one has to do is select in
\cite{sch} the representations of the complex simple Lie
groups $G_c$ that are complexifications
of real representations of the maximal compact subgroups $G$ of $G_c$.
This gives the classification of all coregular real linear representations
of compact simple Lie groups, together with the number and degrees of
the basic invariants, and it is reported in \cite{stv2}.
(The IB of the real linear groups $\varphi(G)$ and of their complexifications
$\varphi(G_c)$ can be chosen to be real and the same for the two groups).
The next step is to determine the orbit spaces, or equivalently the
$\widehat P$-matrices, of the real coregular representations of compact simple
Lie groups.\\
For each real coregular representation $\varphi$ of a compact simple
Lie group that has an IB with $q\leq 4$ basic invariants,
we select all possible candidates in the list of the allowable
$\widehat P$-matrices given in table 1 that have the same
number and degrees of the basic invariants.
In some cases we are
left with only one possibility, but in others there are several different
choices.\\
When there is more than one candidate $\widehat P$-matrix,
we must select the right one among them.
In the case of adjoint representations the choice is easy, because
in this case the orbit space is the same as that of the corresponding
Weyl group \cite{stv} and one already knows the $\widehat P$-matrices
of the irreducible finite reflection groups. In all other
cases the hints given by Schwarz to construct the IB
are sufficient to determine the right choice, even if the IB is not known
completely. These calculations are reported in \cite{stv}.
Table~\ref{tabFG} lists the $\widehat P$-matrices of the irreducible
representations of coregular finite groups with $q\leq 4$ basic invariants
and tables~\ref{tabLG1} and \ref{tabLG2} list the $\widehat P$-matrices of coregular
representations of compact simple Lie groups with $q\leq 4$ basic invariants.
To denote the irreducible representation with maximal weight
$\Lambda=(\Lambda_1,\Lambda_2,\ldots)$ I shall use the notation:
${\varphi_1}^{\Lambda_1}{\varphi_2}^{\Lambda_2}\ldots$, omitting
${\varphi_i}^{\Lambda_i}$ when ${\Lambda_i}=0$ and omitting the exponent
${\Lambda_i}$ when ${\Lambda_i}=1$.
The notation here sometimes differs from that used in several group theory
texts for the ordering of the roots in the Dynkin diagrams, but
I prefer to maintain the notation of \cite{sch}.
As an aid to the reader, the dimension of the representation is also
reported in tables \ref{tabLG1} and \ref{tabLG2},
so no ambiguity can occur.\\
In the following tables~\ref{tabLG1} and \ref{tabLG2},
in the column $G$, the (compact, real) Lie group is indicated by
the symbol of the Lie algebra of its complexification;
this means that $A_n$ is written for $SU(n+1)$,
$B_n$ for $SO(2n+1)$, $C_n$ for $Sp(2n)$, $D_n$ for $SO(2n)$.\\
In the cases of one or two invariants there is only one class of
allowable $\widehat P$-matrices (up to equivalence).
I shall denote with $I_1$ and $I_2$
the classes of $\widehat P$-matrices of dimension 1 and 2 respectively.\\
In tables~\ref{tabLG1} and \ref{tabLG2}, to avoid isomorphisms, the ranks are limited in the
following way: $B_n,n\ge 2$; $C_n,n\ge 3$; $D_n,n\ge 4 $.
Representations that differ only by the exchanges
$\varphi_i \leftrightarrow \varphi_{n-i}$, $i=1,\ldots,[\frac{n}{2}]\ $
for $A_n$,
by the exchange $\varphi_{n-1} \leftrightarrow \varphi_n\ $ for $D_n$,
or by permutations of $\varphi_1$, $\varphi_3$, $\varphi_4\ $ for $D_4$,
are omitted.
\section{Conclusions}
The main conclusions of our calculations are as follows:
\begin{enumerate}
\item
It is possible to classify all allowable $\widehat P$-matrices of compact
coregular linear groups \cite{6st,7st} or of compact non-coregular linear
groups of $r$-type $(q,q-1)$ \cite{stv}. These matrices uniquely determine
the sets ${\cal S}\subset \real^q$ that represent the orbit spaces.
\item
This classification is done without using the integrity basis and without
knowing any specific information about the group structure, using
only some very general algebraic conditions.
\item
All existing compact linear groups determine $\widehat P$-matrices of
the same form (possibly after an integrity basis transformation)
as an allowable $\widehat P$-matrix.
When for a given set of degrees $d_1,\ldots,d_q$ there is no allowable
$\widehat P$-matrix, then there is also no compact linear group with basic
invariants of those degrees.
\item
Finite groups and compact Lie groups may share the same orbit space structure.
\end{enumerate}
The main open problems in all this subject are the following:
\begin{enumerate}
\item
Given an allowable $\widehat P$-matrix $\widehat P$, does there always exist a compact linear
group whose integrity basis defines $\widehat P$? When such a group exists,
what are the group and its integrity basis?
\item
What is the meaning of the induction rules and what is their relation with
group theory?
\item
Is it always true that the allowable $\widehat P$-matrices of non-coregular groups
of $r$-type $(q,q-1)$, that is, with only one relation among the basic
invariants, coincide with the allowable $\widehat P$-matrices of the coregular
groups?
\end{enumerate}
The results reviewed here are partial, but they point to a very strong
relation with group theory and with
invariant theory which ought to be investigated further.\\
One fact that appears clearly from table~\ref{tabR}
is that different representations of different groups may share the same
orbit space. The orbit spaces of finite groups and of Lie groups may also
be the same. This happens because the orbit spaces are determined only by the
$\widehat P$-matrices, that is, only by the way the scalar products between the
gradients of the basic invariants $p_i(x)$ are expressed in terms of the
$p_i(x)$. When these expressions are the same for two integrity bases,
then the $\widehat P$-matrices and the orbit spaces are the same.
From table~\ref{tabR} one sees that this happens often, even
considering only coregular groups.
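A minimal worked example of such a coincidence (a standard computation,
included here only for illustration): the reflection group $A_1$ acting on
$\real$, with basic invariant $p_1(x)=x^2$, and the defining representation
of $SO(n)$ on $\real^n$, with basic invariant $p_1(x)=\sum_i x_i^2$, yield
the same $1\times 1$ $\widehat P$-matrix,

```latex
$$ \widehat P_{11}(p) \;=\; \nabla p_1 \cdot \nabla p_1
   \;=\; \sum_i \left( \frac{\partial p_1}{\partial x_i} \right)^{\!2}
   \;=\; 4 \sum_i x_i^2 \;=\; 4\, p_1 , $$
```

so both linear groups realise the class $I_1$ and have the same orbit
space, although one group is finite and the other is a Lie group.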
Some future work might be oriented towards the following goals:
\begin{enumerate}
\item
Find the 5-dimensional allowable $\widehat P$-matrices of coregular groups.
This will clarify the induction rules.
\item
Find the 4- and 5-dimensional allowable $\widehat P$-matrices of non-coregular
groups of $r$-type $(q,q-1)$. This will clarify the link between the
coregular and non-coregular case.
\item
Find the groups generating the allowable $\widehat P$-matrices.
\item
Study if and how the link between a group and one of its subgroups or the
link between a direct product group and its factor groups gives some
links also between the corresponding $\widehat P$-matrices.
\end{enumerate}
\begin{table}[h]\begin{center}
\caption{ORBIT SPACES OF REAL COREGULAR REPRESENTATIONS OF COMPACT
SIMPLE LIE GROUPS WITH $q=1, 2, 3, 4$ BASIC INVARIANTS. I.} \label{tabLG1}
\vskip 0.4cm\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Entry
&$G$ & $\varphi$ &{\it dim}&i/r& $q$& $d_i$ &\#$P$ &$P$ \\
\hline
1 &$A_1$ & $2 \varphi_1 $ & 4 & i & 1 & 2 & 1 & $I_1$ \\
2 & & $ {\varphi_1}^2=Ad$ & 3 & i & 1 & 2 & 1 & $I_1$ \\
3 & & $ 2{\varphi_1}^2$ & 3+3 & r & 3 & 2,2,2 & 2 & $I(1,1)$ \\
4 & & $ {\varphi_1}^4$ & 5 & i & 2 & 3,2 & 1 & $I_2$ \\
5 &$A_{n\ge2}$
& $\varphi_1+\varphi_n$
&$2(n+1)$ & i & 1 & 2 & 1 & $I_1$ \\
6 & & $ 2\cdot(\varphi_1+\varphi_n)$
&$2\cdot2(n+1)$
& r & 4 & 2,2,2,2 & 5 & $A1(1,1,1,1)$ \\
7 &$A_2$ & $ \varphi_1 \varphi_2=Ad$
& 8 & i & 2 & 3,2 & 1 & $I_2$ \\
8 & & $ \varphi_1^2+ \varphi_2^2$
& 12 & i & 4 & 4,3,3,2 & 1 & $B2$ \\
9 &$A_3$ & $ \varphi_1 \varphi_3=Ad$
& 15 & i & 3 & 4,3,2 & 1 & $III.1$ \\
10 & & $ \varphi_2$ & 6 & i & 1 & 2 & 1 & $I_1$ \\
11 & & $ \varphi_1+ \varphi_2+ \varphi_3$
& 8+6 & r & 2 & 2,2 & 1 & $I_2$ \\
12 & & $ 2\varphi_2$ & 6+6 & r & 3 & 2,2,2 & 2 & $I(1,1)$ \\
13 &$A_4$ & $ \varphi_1 \varphi_4=Ad$
& 24 & i & 4 & 5,4,3,2 & 1 & $E1$ \\
14 & & $ \varphi_2+\varphi_3$
& 20 & i & 2 & 4,2 & 1 & $I_2$ \\
15 &$A_5$ & $ \varphi_2+\varphi_4$
& 30 & i & 4 & 4,3,3,2 & 1 & $B2$ \\
16 &$A_6$ & $ \varphi_2+\varphi_5$
& 42 & i & 3 & 6,4,2 & 3 & $III.2$ \\
17 &$A_8$ & $ \varphi_2+\varphi_7$
& 72 & i & 4 & 8,6,4,2 & 4 & $E3$ \\
18 &$B_{n\ge2}$
& $ \varphi_1$ & $2n+1$ & i & 1 & 2 & 1 & $I_1$\\
19 & & $ 2\varphi_1$ &$2\cdot(2n+1)$
& r & 3 & 2,2,2 & 2 & $I(1,1)$\\
20 &$B_2$ & $ \varphi_1^2$ & 14 & i & 4 & 5,4,3,2 & 1 & $E1$ \\
21 & & $ \varphi_2=Ad$ & 10 & i & 2 & 4,2 & 1 & $I_2$\\
22 &$B_3$ & $ \varphi_2=Ad$ & 21 & i & 3 & 6,4,2 & 3 & $III.2$\\
23 & & $ \varphi_3$ & 8 & i & 1 & 2 & 1 & $I_1$\\
24 & & $\varphi_1+\varphi_3$& 7+8 & r & 2 & 2,2 & 1 & $I_2$\\
25 & & $2\varphi_1+\varphi_3$
& 7+7+8 & r & 4 & 2,2,2,2 & 5 & $A3(1,1,1)$\\
26 & & $ 2\varphi_3$ & 8+8 & r & 3 & 2,2,2 & 2 & $I(1,1)$\\
27 &$B_4$ & $ \varphi_2=Ad$ & 36 & i & 4 & 8,6,4,2 & 4 & $E3$\\
28 & & $ \varphi_4$ & 16 & i & 1 & 2 & 1 & $I_1$\\
29 & & $ \varphi_1+\varphi_4$
& 9+16 & r & 3 & 3,2,2 & 2 & $I(2,1)$\\
30 & & $ 2\varphi_4$ & 16+16 & r & 4 & 4,2,2,2 & 10& $A1(1,1,2,2)$\\
31 &$D_{n\ge4}$
& $ \varphi_1$ & $2n$ & i & 1 & 2 & 1 & $I_1$\\
32 & & $ 2\varphi_1$ &$2n+2n$ & r & 3 & 2,2,2 & 2 & $I(1,1)$\\
33 &$D_4$ & $ \varphi_2=Ad$ & 28 & i & 4 & 6,4,4,2 & 7 & $E2$\\
34 & & $ \varphi_1+\varphi_3$
& 8+8 & r & 2 & 2,2 & 1 & $I_2$\\
35 & & $ \varphi_1+\varphi_3+\varphi_4$
& 8+8+8 & r & 4 & 3,2,2,2 & 6 & $A2(1,1,1,1)$\\
36 & & $ \varphi_1+2\varphi_3$
& 8+8+8 & r & 4 & 2,2,2,2 & 5 & $ A3(1,1,1)$\\
37 &$D_5$ & $ \varphi_4+\varphi_5$
& 32 & i & 2 & 4,2 & 1 & $I_2$\\
\hline\end{tabular}\end{center}
\end{table}
\begin{table}[h]\begin{center}
\caption{ORBIT SPACES OF REAL COREGULAR REPRESENTATIONS OF COMPACT
SIMPLE LIE GROUPS WITH $q=1, 2, 3, 4$ BASIC INVARIANTS. II.} \label{tabLG2}
\vskip 0.4cm\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Entry
& $\ G\ $ & $\qquad \varphi\qquad $ &$\ $\quad{\it dim}\quad$\ $&i/r& $q$& $d_i$ &\#$P$ &$\qquad P \qquad $\\
\hline
38 &$C_3$ & $ 2\varphi_1 $ & 12 & i & 1 & 2 & 1 & $I_1$\\
39 & & $ \varphi_1^2=Ad$ & 21 & i & 3 & 6,4,2 & 3 & $III.2$\\
40 & & $ \varphi_2 $ & 14 & i & 2 & 3,2 & 1 & $I_2$\\
41 &$C_4$ & $ \varphi_1^2=Ad$ & 36 & i & 4 & 8,6,4,2 & 4 & $E3$\\
42 & & $ \varphi_2 $ & 27 & i & 3 & 4,3,2 & 1 & $III.1$\\
43 &$C_5$ & $ \varphi_2 $ & 44 & i & 4 & 5,4,3,2 & 1 & $E1$\\
44 &$E_6$ & $ \varphi_1+\varphi_5$
& 54 & i & 4 & 4,3,3,2 & 1 & $B2$\\
45 &$F_4$ & $ \varphi_1$ & 26 & i & 2 & 3,2 & 1 & $I_2$\\
46 & & $ \varphi_4=Ad$ & 52 & i & 4 & 12,8,6,2& 3 & $E4$\\
47 &$G_2$ & $ \varphi_1$ & 7 & i & 1 & 2 & 1 & $I_1$\\
48 & & $ 2\varphi_1$ & 7+7 & r & 3 & 2,2,2 & 2 & $I(1,1)$\\
49 & & $ \varphi_2=Ad$ & 14 & i & 2 & 6,2 & 1 & $I_2$\\
\hline\end{tabular}\end{center}
{\bf Notation for tables \ref{tabLG1}, \ref{tabLG2}}.
Entry: line number.
$G$: Compact Lie group (indicated by the Lie algebra of its complexification).
$\varphi$: real representation of $G$.
{\it dim}: real dimension of $\varphi$.
i/r: reducibility of the representation (i: irreducible, r: reducible).
$q$: number of basic invariants.
$d_i$: degrees of the basic invariants.
\#$P$: number of different allowable $\widehat P$-matrices with degrees $d_i$.
$P$: $\widehat P$-matrix and corresponding orbit space of the linear group
$(G,\varphi)$.
\end{table}
\begin{table}[h]\begin{center}
\caption{ORBIT SPACES OF REAL IRREDUCIBLE REPRESENTATIONS OF COREGULAR
FINITE GROUPS WITH $q=1, 2, 3, 4$ BASIC INVARIANTS. } \label{tabFG}
\vskip 0.4cm\begin{tabular}{|c|c|c|c|c|}
\hline
$G$ & $dim=q$& $d_i$ &\#$P$ &$P$\\
\hline
$A_1$ & 1 & 2 & 1 & $I_1$ \\
$I_2(m)$ & 2 & $m$,2 & 1 & $I_2$ \\
$A_3$ & 3 & 4,3,2 & 1 & $III.1$ \\
$B_3$ & 3 & 6,4,2 & 3 & $III.2$\\
$H_3$ & 3 & 10,6,2 & 3 & $III.3$\\
$A_4$ & 4 & 5,4,3,2 & 1 & $E1$ \\
$D_4$ & 4 & 6,4,4,2 & 7 & $E2$\\
$B_4$ & 4 & 8,6,4,2 & 4 & $E3$\\
$F_4$ & 4 & 12,8,6,2& 3 & $E4$\\
$H_4$ & 4 & 30,20,12,2 & 2 & $E5$\\
\hline\end{tabular}\end{center}
{\bf Notation}.
$G$: Finite group. {\it dim}: dimension of the representation.
$q$: number of basic invariants.
$d_i$: degrees of the basic invariants.
\#$P$: number of different allowable $\widehat P$-matrices with degrees $d_i$.
$P$: $\widehat P$-matrix and corresponding orbit space of the group $G$.
\end{table}
\begin{table}
\begin{center}
\caption{ORBIT SPACES OCCURRING IN TABLES 2, 3 AND 4} \label{tabR}
\vskip 0.4cm\begin{tabular}{|c|c|cc|}
\hline
$P$-matrix &$d_i$ & & $G$ \\
\hline
$I_1 $& 2 & $A_1$,& $<\!1\!>$, $<\!2\!>$, $<\!5\!>$, $<\!10\!>$, $<\!18\!>$, $<\!23\!>$,\\
& & & $<\!28\!>$, $<\!31\!>$, $<\!38\!>$, $<\!47\!>$\\
$I_2 $& $d$,2 &$I_2(d)$,& $<\!4\!>$, $<\!7\!>$, $<\!11\!>$, $<\!14\!>$, $<\!21\!>$, $<\!24\!>$, \\
& & & $<\!34\!>$, $<\!37\!>$, $<\!40\!>$, $<\!45\!>$, $<\!49\!>$\\
$I(1,1) $&2,2,2 & & $<\!3\!>$, $<\!12\!>$, $<\!19\!>$, $<\!26\!>$, $<\!32\!>$, $<\!48\!>$\\
$I(2,1) $&3,2,2 & & $<\!29\!>$\\
$III.1 $&4,3,2 &$A_3$, & $<\!9\!>$, $<\!42\!>$\\
$III.2 $&6,4,2 & $B_3$, & $<\!16\!>$, $<\!22\!>$, $<\!39\!>$\\
$III.3 $&10,6,2 & $H_3$ & \\
$A1(1,1,1,1) $&2,2,2,2 & &$<\!6\!>$\\
$A1(1,1,2,2) $&4,2,2,2 & &$<\!30\!>$\\
$A2(1,1,1,1) $&3,2,2,2 & &$<\!35\!>$\\
$A3(1,1,1) $&2,2,2,2 & &$<\!25\!>$, $<\!36\!>$\\
$B2 $&4,3,3,2 & &$<\!8\!>$, $<\!15\!>$, $<\!44\!>$\\
$E1 $&5,4,3,2 &$A_4$, & $<\!13\!>$, $<\!20\!>$, $<\!43\!>$\\
$E2 $&6,4,4,2 &$D_4$, & $<\!33\!>$\\
$E3 $&8,6,4,2 &$B_4$, & $<\!17\!>$, $<\!27\!>$, $<\!41\!>$\\
$E4 $&12,8,6,2&$F_4$, & $<\!46\!>$\\
$E5 $&30,20,12,2& $H_4$ &\\
\hline\end{tabular}\end{center}
{\bf Notation}.
$P$-matrix: Type of $\widehat P$-matrix.
$d_i$: degrees of the basic invariants.
$G$: linear group, indicated by its symbol if it is a finite group or by its
entry number in tables~\ref{tabLG1} and \ref{tabLG2} if it is a Lie group.
\end{table}
\section{Introduction}
When the stars in the Solar neighbourhood (Snhd) are binned by age, the
velocity dispersion of each bin increases with its age. This age-velocity
dispersion relation (AVR) has been known and studied for decades (e.g.
\citealp{stromberg,parenago, wielen}) and similar relations have now also
been inferred for external galactic discs (\citealp{beasley} for M33,
\citealp{dorman} for M31). It is generally agreed that understanding the
physics that establishes the AVR would be a significant step towards
understanding how galaxies form and evolve.
Despite many efforts, the shape of the AVR is still not adequately constrained.
The major constraints on the AVR come from observations of stars in the Snhd.
The measured ages of stars suffer from substantial errors, and samples of
stars with measured ages typically have a selection function that favours
young stars over old \citep{nordstroem}. Alternatively modelling the velocity
dispersions as functions of stellar colour has been used to determine the
shape of the AVR \citep[hereafter AB09]{ab09}.
Whereas the vertical density profile of the Milky Way (MW) is well fitted by
a double-exponential in $|z|$, the AVR is typically described as a simple
power-law in age $\sigma(\tau)\propto \tau^{\beta}$ with exponent $\beta$.
\cite{quillen} claimed to detect a jump in the vertical AVR for ages $\tau>
10\,{\rm Gyr}$ and connected this jump to the double-exponential nature of the
density profile. However, more recent studies find that the AVR in the Snhd
can be reasonably described by a single power-law (\citealp{holmberg}, AB09).
Moreover, \citet{sb09} demonstrated that the observed vertical density
profile is fitted well by a model in which the histories of star formation and disc heating are
continuous, and \cite{bovy12} showed that subsets of stars selected at
different points in the $([\alpha/\hbox{Fe}],[\hbox{Fe/H}])$ abundance plane have scale heights that
vary continuously, with no evidence for a dichotomy.
To constrain the heating processes responsible for the Snhd AVR, three
diagnostics have been used in addition to the value of $\beta$: (i) the axis
ratios $\sigma_z/\sigma_R$ of the velocity ellipsoids of old and young
populations; (ii) the magnitude $\sigma_{{\rm old}}$ of $\sigma_z$ for the
oldest populations; (iii) the vertical density profile of the MW's disc.
Several physical processes work together to establish the AVR. The goals
of this paper are to deepen our understanding of the interplay of secular
heating processes and to show how the AVR responds to one or the other process
playing a more prominent role. External heating due to interaction with dark
substructure is not considered in this paper.
\citet{spitzer} showed that scattering of stars off small-scale
irregularities in the potential of the Galactic disc would transform the
nearly circular orbits of young stars, which reflect their origin from the
gas disc, into orbits with higher radial and vertical energy. The discovery
of giant molecular clouds (GMCs) with masses $M_{\rm GMC}\sim 10^{5-7}\,M_\odot$
provided a suitable candidate for the heating agent. Using analytic
arguments, \citet{lacey} showed that GMC heating could have contributed
significantly to the observed AVR, but that the dispersion $\sigma_{{\rm old}}$ of
the oldest stars could be reproduced only if the masses and/or number
density of GMCs were significantly higher in the past than they are now.
Lacey further derived values $\sigma_z/\sigma_R\sim 0.8$ and $\beta\sim 0.25$
that are, respectively, larger and smaller than the data indicate.
\citet{ida} used analytical calculations more sophisticated than those of
Lacey to show that scattering by GMCs in discs actually yields values
$\sigma_z/\sigma_R\sim 0.5-0.6$ that are consistent with observations. This
was confirmed with particle simulations by \citet{shiidsuka} and
\citet{haenninen}: to obtain correct values it is essential to take into
account that in a thin disc impact parameters are concentrated towards the
galactic plane, whereas Lacey had assumed an isotropic distribution of impact
parameters \citep{sellwood}. However, the conflict between observations and
Lacey's values for $\beta$ and $\sigma_{{\rm old}}$ remained
\citep{haenninen}.
GMCs are not the only sources of a non-axisymmetric component to the
gravitational field experienced by disc stars, and any such component will
heat the disc. \citet{barbanis} showed that spiral structure could
significantly heat the disc when the spiral pattern either has a very high density contrast
or is of a transient and recurring nature. Two-dimensional simulations of
discs continuously fed with young, cold particles showed that transient and
recurrent spiral structure is always present in star forming discs and
provides sufficient heating to explain the in-plane AVR \citep{selcar,
carsel}. From such two-dimensional simulations and analytical
arguments for vertical cloud heating, \citet{carlberg} concluded
that a combination of GMC and spiral heating could explain the observations.
However, spirals do not directly increase $\sigma_z$ significantly (e.g.\
\citealp{sellwood13, martinez}), and the question remained open whether deflections by
GMCs can convert in-plane heat to vertical heat in an appropriate manner.
\citet{jenkins} used analytic arguments to examine this question for growing
discs. They concluded that the observed value of $\sigma_z/\sigma_R$ could be
explained, but the observed value of $\sigma_{{\rm old}}$ was problematic.
Another continuous secular heating process that has been discussed in the literature
is heating by the bar \citep{saha, grand}. The interaction of galactic discs with satellite
galaxies and the corresponding dark matter substructure can also cause disc heating \citep{velazquez}.
However, in the case of the MW this process has likely only made a minor contribution to the observed AVRs
\citep{just}.
Another contributor to the AVR could be a decline over cosmic time in the
velocity dispersion of stars at their time of birth as discs have become less
gas-rich and less turbulent \citep{bournaud, forbes}. These models are
motivated by observations of gas kinematics in disc galaxies at various redshifts,
mostly based on H$\alpha$ emission. These observations have
revealed significantly ($\sim3-10$ times) higher velocity dispersions $\sigma$
at redshifts $z\sim 2-4$ than in corresponding observations of the local universe (e.g.
\citealp{sins}), and a decline of $\sigma$ with decreasing redshift \citep{wisn}.
It is, however, unclear how the kinematics of young stars, which form from
cold gas, relate to these observations.
Fully cosmological hydrodynamical simulations of galaxy formation have recently
reached reasonable levels of success in reproducing MW like disc galaxies and
the AVRs in some of these simulations have been studied \citep{house, bird, martig,
grand}. At $z=0$ the stellar populations in the majority of these simulations
are significantly hotter than the stars in the MW at all ages (but see model
\emph{g92} of Martig et al.). Young stars in particular have overly high $\sigma_z$,
which has been linked to numerical effects and insufficient resolution and shown
to depend on the specifics of the numerical sub-grid star-formation models
\citep{house, martig}. Better agreement with the Snhd AVR is generally found
for galaxies unaffected by mergers. Martig et al. find that the thin disc stars
in their more successful models are born cold and heated with time, which they
attribute to heating by spirals and overdensities and possibly the coupling between
transient spirals and weak bending waves \citep{masset}. Note that GMCs are not
resolved in these simulations.
For many years it was assumed that the chemodynamical evolution of any
annulus of the Galactic disc could be modelled in isolation from other annuli.
Now there is clear evidence that radial migration of stars within discs is an
important process \citep{sellwoodb,roskar, sb09, kordopatis}, with the
consequence that the production of a hot population in one annulus, for
example through the action of a bar, can subsequently endow a distant annulus
with a hot population that could not have been locally heated. On account of
radial migration, it is essential to understand the origin of the AVR
\emph{globally}, that is by tracking the evolution of the disc at all radii.
In general we expect the mean birth radius of a coeval cohort of Snhd stars
will decrease with increasing age, and on account of the radial gradient in
velocity dispersion the decrease in birth radius will be reflected in the AVR
\citep{sb09}.
In this paper we use the simulations presented in \citet[hereafter
ABS16]{abs16} to study the formation of the AVR. Unlike the previously cited
studies, these simulations include simultaneously all the following important
aspects: growing discs with multiple coeval populations, GMCs, recurring
spiral structure with evolving properties, a bar, an evolving GMC-to-stellar
mass fraction, radial migration and sufficiently cold young stars. ABS16 showed
that although the vertical
profiles of their models do not show a thick disc like that of the MW, some
models do provide quite good fits to the AVR of the Snhd. Hence they
concluded that the thick disc requires additional sources of heat, but the
thin disc can be explained by combined GMC and spiral heating. They showed
that the efficiency of GMC heating declines over time because the fraction of
the disc's mass contained in GMCs falls steadily as a consequence of a
declining star-formation rate (SFR) and a growing disc mass. Their
simulations are thus a promising tool to study what shapes the AVR in thin
galactic discs.
Two major conclusions will emerge from our study: (i) biased ages and age
uncertainties cause measured AVRs to deviate significantly from the true
AVRs; (ii) it is vital to distinguish between an AVR $\sigma(\tau)$, which
gives velocity dispersion as a function of age for stars that are now
co-located, and a \emph{heating history} $\sigma(t-t_{\rm b})$, which gives the
velocity dispersion as a function of time for a cohort of currently
co-located stars that were born at a given time $t_{\rm b}$. Whereas the AVR
$\sigma(\tau)$ for $\tau\simeq4.5\,{\rm Gyr}$ quantifies the current kinematics of stars
born contemporaneously with the Sun, the heating history $\sigma(t-t_{\rm b})$ for $t-t_{\rm b}\simeq4.5\,{\rm Gyr}$
quantifies the kinematics of stars $4.5\,{\rm Gyr}$ after they were born, which
would be $5.5\,{\rm Gyr}$ ago in the case of $10\,{\rm Gyr}$ old stars in the disc. If stars
were born into a statistically stationary environment providing heating
processes which are constant in time, the cohort born
$10\,{\rm Gyr}$ ago would $5.5\,{\rm Gyr}$ ago have been in the same dynamical state that the
Sun's cohort is in now. That is, given a stationary environment the AVR would be
the same function of $\tau$ that the heating history is of $t-t_{\rm b}$.
If a galaxy undergoes a major merger, stars born before and
after the merger will undergo different heating histories $\sigma(t-t_{\rm b})$.
Here we argue that, even in the absence of mergers or declining birth dispersions,
the thin discs of galaxies change beyond recognition over cosmological timescales,
so the environment is very far from stationary, and the heating
experienced by stars born $10\,{\rm Gyr}$ ago during the first Gyr of their lives was
very different from that experienced by recently born stars during
the first Gyr of their lives. Consequently heating histories are described
by entirely different functions from the AVR.
Nevertheless we will find that both AVRs and
heating histories can be well approximated by the modified power law
\begin{equation}
\sigma(x)=\sigma_{10}\left({{x + x_1} \over {10\,{\rm Gyr} + x_1}}\right)^{\beta}
\label{eq:heatlaw}
\end{equation}
used by AB09, with $x=\tau$ or $t$. To differentiate between parameters derived from AVRs
and heating histories, we will mark the latter with a tilde, i.e. $\tilde{\beta}$,
$\tilde{\sigma}_{10}$ etc. We will find that the indices $\beta$ and $\tilde{\beta}$
of these power laws are often dissimilar. Moreover, we find that in the case of a
heating history the value of $\tilde{\sigma}_{10}$ can evolve strongly with the
time $t_{\rm b}$ of the cohort's birth, whereas in most models $\tilde{\beta}$ evolves only mildly.
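For illustration, the modified power law of equation (\ref{eq:heatlaw}) can be
sketched numerically as follows; the parameter values used here are arbitrary
placeholders, not fits to any of the models discussed in this paper:

```python
def sigma(x, sigma_10=25.0, x_1=0.1, beta=0.5):
    """Modified power law of eq. (1): velocity dispersion [km/s] as a
    function of x [Gyr], where x is the age tau (for an AVR) or the
    elapsed time t - t_b (for a heating history).
    sigma_10, x_1 and beta are placeholder values, not fitted ones."""
    return sigma_10 * ((x + x_1) / (10.0 + x_1)) ** beta

# By construction sigma(10 Gyr) = sigma_10, and x_1 > 0 keeps the
# dispersion of newly born stars (x = 0) finite rather than zero.
```

The same functional form serves for both an AVR $\sigma(\tau)$ and a heating
history $\tilde{\sigma}(t-t_{\rm b})$; only the fitted parameters differ.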
Our paper is organised as follows: In Section \ref{sec:simulations} we
briefly describe the simulations. In Section \ref{sec:biases} we examine the
effects of observational age errors and biases on the AVR. In Section
\ref{sec:AVR} we describe the model AVRs and compare them to local data.
Topics discussed include the uncertainties of the comparisons arising from
azimuthal variations in the model AVRs (Sections \ref{sec:azimuth} and
\ref{sec:specifics}), the
diagnostic content of the axis ratios of velocity ellipsoids (Section
\ref{sec:arat}), and power-law fits to AVRs (Section \ref{sec:heatIndex}).
In Section \ref{sec:heathistory} we consider the heating histories for
different populations of coeval model stars and show how these relate to
AVRs. In Section~\ref{sec:discuss} we relate our findings to the physics of
star scattering, and we conclude in Section \ref{sec:conclude}.
\tabcolsep=4.5pt
\begin{table*}
\vspace{-0cm}
\caption{List of models analysed in this paper.
{\it 1st Column}: Model Name;
{\it 2nd Column}: Initial Conditions;
{\it 3rd Column}: Total baryonic IC mass $M_{\rm{b,i}}$;
{\it 4th Column}: IC DM halo scalelength {$a_{\rm halo}$};
{\it 5th Column}: IC radial disc scalelength $h_{R,{\rm disc}}$;
{\it 6th Column}: IC vertical disc scaleheight $z_{0,{\rm disc}}$;
{\it 7th Column}: GMCs Yes/No;
{\it 8th Column}: Cutoff: Adaptive (no new particles in bar region) or
fixed (pre-defined evolving inner cutoff for new particles);
{\it 9th Column}: Total inserted baryonic model mass $M_{\rm f}$
(including initial baryonic mass);
{\it 10th Column}: Initial disc scalelength $h_{R, {\rm i}}$;
{\it 11th Column}: Final disc scalelength $h_{R, {\rm f}}$;
{\it 12th Column}: Scalelength growth parameter $\xi$;
{\it 13th Column}: Exponential decay timescale $t_{\rm SFR}$
for the star formation rate;
{\it 14th Column}: Initial velocity dispersion for inserted stellar
particles, {$\sigma_0$};
{\it 15th Column}: GMC star formation efficiency $\zeta$;
}
\begin{tabular}{@{}ccccccccccccccc@{}}\hline
1st & 2nd & 3rd & 4th & 5th & 6th &7th & 8th & 9th & 10th & 11th & 12th & 13th & 14th & 15th \\
{Name}&{ICs} &{$M_{\rm{b,i}}$} &{$a_{\rm halo}$}&{$h_{R,{\rm disc}}$}&{$z_{0,{\rm disc}}$}&{GMCs}&{Cutoff}&{$M_{\rm f}/\,M_\odot$}&{$h_{R, {\rm i}}$}& {$h_{R, {\rm f}}$}& {$\xi$} & {$t_{\rm SFR}$}&{$\sigma_0$}& {$\zeta$}\\
& &{$[10^{9}\,M_\odot]$}&{$\,{\rm kpc}$} &{$\,{\rm kpc}$} &{$\,{\rm kpc}$} & & &{$[10^{10}]$} &{$\,{\rm kpc}$} & {$\,{\rm kpc}$} & & {$[\rm Gyr]$}&{$[\,{\rm km\,s^{-1}}]$} &\\ \hline
Y1 & Y & 5 & 30.2 & 1.5 & 0.1 & Yes & Adap & 5 & 1.5 & 4.3 & 0.5 & 8.0 & 6 & 0.08 \\
Y1s2 & Y & 5 & 30.2 & 1.5 & 0.1 & Yes & Adap & 5 & 1.5 & 4.3 & 0.5 & 16.0 & 6 & 0.08 \\
Y1$\zeta $- & Y & 5 & 30.2 & 1.5 & 0.1 & Yes & Adap & 5 & 1.5 & 4.3 & 0.5 & 8.0 & 6 & 0.04 \\
Y1f$\sigma$ & Y & 5 & 30.2 & 1.5 & 0.1 & Yes & Fix & 5 & 1.5 & 4.3 & 0.5 & 8.0 & 10 & 0.08 \\
Y2 & Y & 5 & 30.2 & 1.5 & 0.1 & Yes & Adap & 5 & 2.5 & 2.5 & 0.0 & 8.0 & 6 & 0.08 \\
Y2Mb- & Y & 5 & 30.2 & 1.5 & 0.1 & Yes & Adap & 3 & 2.5 & 2.5 & 0.0 & 8.0 & 6 & 0.08 \\
Y2Mb+ & Y & 5 & 30.2 & 1.5 & 0.1 & Yes & Adap & 7.5 & 2.5 & 2.5 & 0.0 & 8.0 & 6 & 0.08 \\
Y4f$\zeta$- & Y & 5 & 30.2 & 1.5 & 0.1 & Yes & Fix & 5 & 1.5 & 2.2 & 0.5 & 8.0 & 6 & 0.04 \\
YG1 & YG & 5 & 30.2 & 1.5 & 0.1 & Yes & Adap & 5 & 1.5 & 4.3 & 0.5 & 8.0 & 6 & 0.08 \\
YN1 & Y & 5 & 30.2 & 1.5 & 0.1 & No & Adap & 5 & 1.5 & 4.3 & 0.5 & 8.0 & 6 & -- \\
A2$\tau$ & A & 10 & 30.2 & 1.5 & 0.8 & Yes & Adap & 5 & 2.5 & 2.5 & 0.0 & 8.0&$6 + 30\,{\rm e}^{-t/1.5\,{\rm Gyr}}$ & 0.08 \\
E2 & E & 15 & 30.2 & 2.5 & 1.2 & Yes & Adap & 5 & 2.5 & 2.5 & 0.0 & 8.0 & 6 & 0.08 \\
F2 & F & 5 & 51.7 & 1.5 & 0.1 & Yes & Adap & 5 & 2.5 & 2.5 & 0.0 & 8.0 & 6 & 0.08 \\ \hline
\end{tabular}
\label{modeltable}
\end{table*}
\section{Simulations}
\label{sec:simulations}
The simulations analysed in this paper are a subset of the models presented
in ABS16. These are simulations of growing disc galaxies within non-growing
live dark matter haloes made using the Tree Smoothed Particle Hydrodynamics
(TreeSPH) code GADGET-3, last described in \citet{gadget2}. We focus on
standard-resolution models, which contain $N=5\times10^6$ particles in the
final disc and the same number of halo particles. Most of the simulations are
collisionless, but a subset contains an isothermal gas component with
pressure $P=\rho c_s^2$ and sound speed $c_s=10\,{\rm km\,s^{-1}}$. The global gas fraction in
these discs is kept roughly constant over time at $f_g=10$ per cent. In addition,
most simulations contain a population of short-lived, massive
particles representing GMCs. The force softening lengths are $\epsilon_{\rm
bar}=30\,{\rm pc}$ for baryonic particles (including GMCs) and $\epsilon_{\rm
DM}=134\,{\rm pc}$ for DM particles.
Table~\ref{modeltable} gives an overview of the models cited here.
We give only a brief account of the meaning of the model parameters -- a full
description can be found in ABS16. Standard models contain GMCs but no gas,
and all models are evolved over a simulation time of $t_{\rm f}=10\,{\rm Gyr}$.
The presence of gas in a model is marked by a `G' in its name, while the
absence of GMCs is marked by an `N'.
The initial conditions (ICs) were created using the GALIC code \citep{yurin}.
The details of the ICs can be found in Table 1 of ABS16.
The models discussed here all start with a spherical
dark matter halo with a \citet{hernquist} profile and a mass of $10^{12}\,M_\odot$.
The F model differs from the others in that the scale length of its halo is
$a_{\rm halo}=51.7\,{\rm kpc}$ rather than $30.2\,{\rm kpc}$.
All models analysed here contain an IC disc with a mass profile
\begin{equation}
{\rho_{\rm{disc,i}}(R,z)} = \frac{M_{\rm{b,i}}}{4\pi\, z_{0,{\rm disc}}\, h_{R,{\rm disc}}^2}\,
\sech^2\!\left(\frac{z}{z_{0,{\rm disc}}}\right)
\exp\!\left(-\frac{R}{h_{R,{\rm disc}}}\right).
\end{equation}
Here $h_{R,{\rm disc}}$ is the IC disc exponential scalelength and
a radially constant isothermal vertical profile with scaleheight $z_{0, {\rm disc}}$
is assumed. The Y and F models start with a baryonic disc of mass
$M_{\rm b,i}=5\times10^9\,M_\odot$, which is compact ($h_{R,{\rm disc}}=1.5\,{\rm kpc}$)
and thin ($z_{0, {\rm disc}}=0.1\,{\rm kpc}$). The A models contain a thicker and more
massive IC disc ($z_{0, {\rm disc}}=0.8\,{\rm kpc}$, $M_{\rm b,i}=10\times10^9\,M_\odot$)
and the IC disc in the E models is even more massive, thicker and more extended
($h_{R,{\rm disc}}=2.5\,{\rm kpc}$, $z_{0, {\rm disc}}=1.2\,{\rm kpc}$, $M_{\rm b,i}=15\times10^9\,M_\odot$).
Stellar particles are continuously added to the disc on near-circular orbits.
The young stellar populations are assigned birth velocity dispersions
$\sigma_0$ in all three directions $R$, $\phi$ and $z$. The standard choice is
$\sigma_0=6\,{\rm km\,s^{-1}}$, but we also consider a simulation with $\sigma_0=10\,{\rm km\,s^{-1}}$
(Y1f$\sigma$) and one (A2$\tau$) in which the birth velocity dispersion declines
exponentially with time
\begin{equation}\label{sigmazerot}
\sigma_0(t)=\left(6+30\,{\rm e}^{-t/1.5\,{\rm Gyr}}\right)\,{\rm km\,s^{-1}}.
\end{equation}
The star-formation rate is
\begin{equation}\label{eq:SFTt}
{\rm SFR}(t)={\rm SFR}_0 \times \exp({-t/t_{\rm SFR}}),
\end{equation}
with $t_{\rm SFR}=8$ or $16\,{\rm Gyr}$. The constant ${\rm SFR}_0$ is adjusted to produce at $t=t_{\rm f}$
a target final baryonic mass $M_{\rm f}$ in the range $3\times10^{10}$--$7.5\times10^{10}\,M_\odot$.
Mass growth is smooth in time and sufficiently slow for the process to
be effectively adiabatic.
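The normalisation ${\rm SFR}_0$ follows in closed form: integrating equation \eqref{eq:SFTt} from $0$ to $t_{\rm f}$ and equating the result to the mass to be inserted, $M_{\rm f}-M_{\rm b,i}$, gives ${\rm SFR}_0=(M_{\rm f}-M_{\rm b,i})/\left[t_{\rm SFR}\left(1-{\rm e}^{-t_{\rm f}/t_{\rm SFR}}\right)\right]$. A minimal numerical sketch (the function name is ours, not from the ABS16 code):

```python
import numpy as np

def sfr_normalisation(M_f, M_bi, t_sfr, t_f=10.0):
    """SFR_0 such that the integral of SFR_0*exp(-t/t_sfr) from 0 to t_f
    equals the mass to be inserted, M_f - M_bi (masses in Msun, times in Gyr)."""
    integral = t_sfr * (1.0 - np.exp(-t_f / t_sfr))  # Gyr
    return (M_f - M_bi) / integral                   # Msun / Gyr

# Standard choice: M_f = 5e10 Msun, initial disc 5e9 Msun, t_SFR = 8 Gyr.
sfr0 = sfr_normalisation(5e10, 5e9, 8.0)
mass_check = sfr0 * 8.0 * (1.0 - np.exp(-10.0 / 8.0))  # should recover 4.5e10 Msun
```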
Particles are added every $5\,{\rm Myr}$, randomly distributed in azimuth,
with an exponential radial density profile $\Sigma_{\rm SF}(R)\propto\exp(-R/h_R(t))$.
The scalelength $h_R(t)$ of the newly added particles grows in time as
\begin{equation}\label{eq:hRt}
h_R(t)=h_{R,\rm i}+(h_{R,\rm f}-h_{R,\rm i})(t/t_{\rm f})^\xi.
\end{equation}
To avoid inserting particles in the bar region, where near-circular orbits
do not exist, particles are not added inside the cutoff radius $R_{\rm cut}$,
which is either determined by the current bar length (`adaptive cutoff'), or given by a
pre-determined formula $R_{\rm cut}(t)=\left(0.67+0.33\,\frac{t}{1\,{\rm Gyr}}\right) \,{\rm kpc}$
(`fixed cutoff').
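The insertion scheme just described amounts to sampling radii from the exponential target profile with the time-dependent scalelength of equation \eqref{eq:hRt}, rejecting draws inside $R_{\rm cut}$. The sketch below (our own, with Model Y1 parameters and an assumed cutoff, not the ABS16 code) exploits the fact that a surface density $\propto\exp(-R/h_R)$ corresponds to a radial probability density $\propto R\exp(-R/h_R)$, i.e. a Gamma$(2,h_R)$ distribution:

```python
import numpy as np

def insertion_radii(n, t, h_Ri=1.5, h_Rf=4.3, xi=0.5, t_f=10.0, R_cut=2.0, rng=None):
    """Draw n insertion radii (kpc) from Sigma_SF(R) ~ exp(-R/h_R(t)),
    rejecting R < R_cut; azimuths are uniform in [0, 2*pi)."""
    rng = rng or np.random.default_rng(0)
    h_R = h_Ri + (h_Rf - h_Ri) * (t / t_f) ** xi  # growing scalelength h_R(t)
    R, phi = [], []
    while len(R) < n:
        # Radial pdf ~ R * exp(-R/h_R) is a Gamma(shape=2, scale=h_R) distribution.
        r = rng.gamma(shape=2.0, scale=h_R, size=n)
        keep = r[r >= R_cut][: n - len(R)]
        R.extend(keep)
        phi.extend(rng.uniform(0.0, 2.0 * np.pi, size=keep.size))
    return np.array(R), np.array(phi)

R, phi = insertion_radii(10000, t=5.0)
```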
GMCs are modelled as a population of massive particles drawn from a mass
function of the form ${\rm d}N/{\rm d}M\propto M^\gamma$ with lower and upper mass
limits $M_{\rm low}=10^5\,M_\odot$ and $M_{\rm up}=10^7\,M_\odot$ and a power law exponent
$\gamma=-1.6$. Their radial density is proportional to $\Sigma_{\rm SF}$,
and their azimuthal density is given by
\begin{equation}
\rho_{\rm GMC}(\phi)\propto \left[\rho_{\rm ys}(\phi)\right]^\alpha,
\end{equation}
where $\rho_{\rm ys}$ is the density of young stars and $\alpha=1$. The
mass in GMCs is determined by the SFR efficiency $\zeta$. Specifically, for
each $\Delta m_{\rm stars}$ of stars formed, a total GMC mass $\Delta
m_{\rm GMC} = \Delta m_{\rm stars}/\zeta$ is created. GMC particles live for
only $50\,{\rm Myr}$: for $25\,{\rm Myr}$ their masses grow with time, and for the
final $25\,{\rm Myr}$ of their lives their masses are constant.
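Because $\gamma\neq-1$, masses can be drawn from this power-law mass function with a closed-form inverse CDF. A minimal sketch (function name and sample size are ours):

```python
import numpy as np

def sample_gmc_masses(n, gamma=-1.6, m_low=1e5, m_up=1e7, rng=None):
    """Draw GMC masses from dN/dM ~ M**gamma on [m_low, m_up]
    by inverting the cumulative distribution function."""
    rng = rng or np.random.default_rng(42)
    u = rng.uniform(size=n)
    g1 = gamma + 1.0  # -0.6 for the standard slope
    # CDF(M) = (M**g1 - m_low**g1) / (m_up**g1 - m_low**g1); solve CDF = u for M.
    return (m_low**g1 + u * (m_up**g1 - m_low**g1)) ** (1.0 / g1)

masses = sample_gmc_masses(200000)
```

For these parameters roughly 20 per cent of GMCs by number lie above $10^6\,M_\odot$, so the mass budget is dominated by the rare massive clouds.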
In ABS16 we presented an overview of the properties of the simulated
galaxies. Important findings of ABS16 that are relevant to the results
of this paper are:
\begin{enumerate}
\item{In the absence of GMCs, the models are too cold vertically to
explain the vertical profile of the MW thin disc. Heating by GMCs creates
remarkably exponential vertical profiles, the scaleheights of which agree roughly with that
inferred for the MW thin disc for our standard GMC mass function and $\zeta=0.04$ (Y1$\zeta$-).
These discs have radially constant vertical profiles.}
\item{GMC heating is particularly efficient early on, when SFRs are high and stellar
disc masses are small.}
\item{No thick discs similar to the one observed in the MW form in the models with thin IC discs. Thicker
discs can form in models with high baryon fractions (Y4f$\zeta$-, F2), but they are too hot radially.}
\item{Spurious two-body heating due to halo particles is negligible when the halo
is resolved with at least $5\times10^{6}$ particles.}
\item{The output scalelength $h_R$ of a stellar population can differ significantly from the
input scalelength, as bars and spirals drive radial redistribution and lead to an increase
of $h_R$ in the outer disc.}
\item{Bars are stronger for more compact models and weaker for models with an isothermal gas component.}
\item{Isothermal gas components mildly increase the efficiency of GMC heating, presumably
as GMC particles attract wakes of gas which increase their effective mass.}
\end{enumerate}
Unless otherwise noted, we will analyse our models at a solar-like radius $R_0=8\,{\rm kpc}$. The local
exponential scalelength $h_R$ at $R_0$ can differ significantly between models ($2$--$5\,{\rm kpc}$), but the
extreme values are caused by deviations from simple exponential profiles. The differences in
final surface density at $R_0$ are small (Fig. 12 in ABS16) and for our standard
disc mass the models agree reasonably with estimates for the Snhd \citep{flynn}.
\section{The impact on AVRs of biases and errors in ages}
\label{sec:biases}
\begin{figure*}
\centering
\vspace{-0.cm}
\includegraphics[width=18cm]{figs/p8-fig1.pdf}\\
\caption
{Correcting for age bias and errors to compare models to the GCS data. Left:
The black curve shows the age distribution of stars in Model Y1 at $10\,{\rm Gyr}$ and
at $R=8\pm0.5\,{\rm kpc}$ and $z=0\pm0.1\,{\rm kpc}$. If we assume observational $1\sigma$
errors of 20 per cent we predict the green line. The blue line shows the age
distribution in the GCS sample. The red line is what we get when we weight stars
in the model so that they match the blue line and then assume 20 per cent age
errors. Right: Vertical AVRs $\sigma_z(\tau)$ for
Models Y1 (green) and E2 (red) at $10\,{\rm Gyr}$ and at $R=8\pm0.5\,{\rm kpc}$ and $|z|\le0.1\,{\rm kpc}$.
Dashed lines are for the true age distribution (black line in left panel) and
solid lines are for the degraded age distribution (red line in left panel).}
\label{gcs}
\end{figure*}
Before we can compare the AVRs from the \emph{Geneva-Copenhagen Survey} (GCS)
\citep{nordstroem, casagrande} to
data from simulations, we need to model the impact on the observations of the
survey's selection function and errors in the estimated ages of stars
(see also \citealp{holmberg7, martig}).
We start by selecting GCS stars with $\left[{\rm Fe/H}\right]>-0.8$ and heliocentric azimuthal
velocity $V>-150\,{\rm km\,s^{-1}}$ and a `good' age determination: from Casagrande et al., we use the
maximum likelihood ages $\tau$ and the ages $\tau_{16}$ and $\tau_{84}$ of the
16 and 84 per cent quantiles of the probability distribution in age. If either
$\tau_{84}-\tau_{16} < 2\,{\rm Gyr}$ or $2\left(\tau_{84}-\tau_{16}\right) /
\left(\tau_{84}+\tau_{16}\right) < 0.5$ the age $\tau$ is deemed good and the
star enters our sample. These $\sim7500$ stars are ordered by age and placed
in bins with 200 stars each to calculate $\sigma(\tau)$. Every 10 stars a new point
is plotted and thus every 20th point is statistically independent of its predecessors.
The blue curve in
Fig.~\ref{gcs} shows the strongly biased age distribution of this sample.
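The overlapping-bin estimate of $\sigma(\tau)$ described above can be sketched as follows (mock ages and velocities, not GCS values; with a bin of 200 stars and a stride of 10, every 20th plotted point is statistically independent):

```python
import numpy as np

def running_dispersion(tau, v, bin_size=200, step=10):
    """Order stars by age and compute the velocity dispersion in overlapping
    bins of bin_size stars, advancing by step stars per plotted point."""
    order = np.argsort(tau)
    tau_s, v_s = tau[order], v[order]
    centres, sigmas = [], []
    for i in range(0, len(tau_s) - bin_size + 1, step):
        sl = slice(i, i + bin_size)
        centres.append(np.median(tau_s[sl]))  # bin centre: median age in the bin
        sigmas.append(np.std(v_s[sl]))
    return np.array(centres), np.array(sigmas)

# Illustrative sample of ~7500 stars whose dispersion grows with age.
rng = np.random.default_rng(3)
tau = rng.uniform(0.0, 10.0, 7500)
v = rng.normal(0.0, 10.0 + 2.0 * tau)
centres, sigmas = running_dispersion(tau, v)
```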
In addition to the bias of this mid-plane survey towards kinematically colder
and hence younger stars, the GCS is largely magnitude limited, favouring
more luminous, younger stars. The exclusion of hot/blue stars with
$T_{\rm eff} \gtrsim 7000\,{\rm K}$ biases the sample against very young stars,
and in particular excludes massive stars that could have safe age-determinations
with $\tau \lesssim 1.5 \,{\rm Gyr}$. The small upturn in the number densities at
$\tau < 0.5 \,{\rm Gyr}$ can be mostly ascribed to a pile-up of maximum likelihood
values from main-sequence stars with temperature and/or metallicity over-estimates,
and hence underestimated maximum likelihood ages. Consistently, there is little
evolution in GCS velocity dispersions for ages below $\tau \sim2\,{\rm Gyr}$, stalling
at values typical for $\sim 1.5 \,{\rm Gyr}$ old stars and far above the values
derived by AB09 for their bluest stars.
Blue \emph{Hipparcos} stars can be used to determine velocity dispersions of
very young stars. The lowest dispersions found by AB09 for their bluest bins
in $B-V$ are $\sigma_z=5.5 \,{\rm km\,s^{-1}}$ and $\sigma_R=8 \,{\rm km\,s^{-1}}$. These stars will,
however, be kinematically biased, as a majority of them belong to a small number of
moving groups of young stars. It is interesting that $\sigma_R$ increases
very strongly between $B-V=-0.2$ and $B-V=0$, so the smallest value of
$\sigma_z/\sigma_R\sim1/3$ occurs at $B-V\sim0$.
We impose the GCS age bias and errors of $\sim 20$ per cent in ages on data from
a model galaxy as follows: first we select star particles at
$R=8\pm0.5\,{\rm kpc}$ and $|z|\le 0.1\,{\rm kpc}$, which approximates the GCS volume.
Stars already present in the ICs are randomly assigned ages $\tau \in \left[t_{\rm
f}, t_{\rm f} + 1 \,{\rm Gyr} \right]$. To each star $i$ we assign a weight $w_i\ge1$
so that the distribution of true ages of the weighted sample agrees with
that of the GCS sample (blue curve in Fig.~\ref{gcs}). We then determine $w_i$
`observed' ages for each star by assuming a Gaussian error distribution with
standard deviation 20 per cent of the true age. The resulting distribution
of all assigned ages is shown as the red curve in Fig.~\ref{gcs}. For the
models all bins have width $\Delta\tau=0.25\,{\rm Gyr}$. Note that the GCS selection function
also depends on metallicity, which is not taken into account here.
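In outline, the degradation procedure replicates each star according to its weight and scatters the replicated true ages with a Gaussian error of 20 per cent of the true age. A minimal sketch with uniform weights (the GCS-matched weights themselves depend on the survey's age histogram and are not reproduced here):

```python
import numpy as np

def degrade_ages(tau_true, weights, err_frac=0.2, rng=None):
    """For each star draw round(w_i) 'observed' ages from a Gaussian centred on
    the true age with standard deviation err_frac * tau_true."""
    rng = rng or np.random.default_rng(7)
    counts = np.maximum(np.round(weights).astype(int), 1)  # weights w_i >= 1
    tau_rep = np.repeat(tau_true, counts)
    return rng.normal(tau_rep, err_frac * tau_rep)

tau_true = np.random.default_rng(5).uniform(0.5, 10.0, 5000)
weights = np.ones_like(tau_true)   # placeholder for the GCS-matched weights
tau_obs = degrade_ages(tau_true, weights)
```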
The black curve in the left panel of Fig.~\ref{gcs} shows for Model Y1 at
$t_{\rm f}=10\,{\rm Gyr}$ the actual age distribution of `solar neighbourhood' stars. This
distribution peaks at the youngest ages because young stars are confined to
the plane, where we select particles. At intermediate ages the distribution
is rather flat because heating and inside-out growth are balanced by the
declining SFR. The oldest stars are underrepresented as they are hot and
centrally concentrated. The green curve is obtained by convolving the black
curve with our assumed errors in age. Now stars with ages above
$5\,{\rm Gyr}$ are smeared into a tail that extends to $14\,{\rm Gyr}$. The striking
difference between this green curve and the blue curve for the age
distribution of our GCS sample shows how strongly the survey's selection
function biases the data.
In the right panel of Fig.~\ref{gcs} we show the derived AVRs $\sigma_z (\tau)$ for Models
Y1 (green) and E2 (red). The dashed lines show
the AVRs for true age distributions, and the
solid lines the AVRs for the GCS-like samples just described. The
jump at $10\,{\rm Gyr}$ in the red dashed curve reflects the hot proto-thick disc
included in the ICs of Model E2. The main difference between the dashed and
solid curves is the extension of the end point from 10 to $14\,{\rm Gyr}$ and the elimination
of the step in the dashed curve for E2 (see also \citealp{martig}). At low ages
the solid curves lie slightly above the dashed curves as a consequence of stars
with ages $\sim2\,{\rm Gyr}$ being scattered to lower `observed' ages.
\section{AVRs and velocity ellipsoid shapes}
\label{sec:AVR}
\begin{figure*}
\vspace{-0.cm}
\includegraphics[width=18cm]{figs/p8-fig2.pdf}\\
\caption
{The AVRs of a variety of models. For each model
we show $\sigma_R(\tau)$ (top), $\sigma_z(\tau)$ (middle) and
$\sigma_z/\sigma_R (\tau)$ (bottom). The pink asterisks are extracted from the
GCS Snhd data of \citet{casagrande}. The blue lines are the azimuthally averaged
AVRs from the models for $|z|<0.1\,{\rm kpc}$ and the red lines are for $|z|<0.3\,{\rm kpc}$.
The grey lines show 36 equally spaced azimuthal bins at $|z|<0.3\,{\rm kpc}$.
All model data are for $R=8\pm0.5\,{\rm kpc}$. The green lines at low ages mark the
lowest values found by AB09 for each quantity with blue \emph{Hipparcos} stars.
The GCS stars with good ages are placed in bins with 200 stars each and every 10 stars a new point
is plotted, so that every 20th point is statistically independent.
At $\tau\sim 2\,{\rm Gyr}$, there are 200 stars per $\sim100\,{\rm Myr}$, whereas at $\tau>6\,{\rm Gyr}$
there are only $\sim 1000$ stars in total.}
\label{azi}
\end{figure*}
In the last section we explained how we extract comparable AVRs from the
models and the GCS. Here we compare $\sigma_z(\tau)$, $\sigma_R(\tau)$
and the ratio $\sigma_z/\sigma_R (\tau)$ between models and the GCS sample.
\subsection{Azimuthal variation of the AVR}
\label{sec:azimuth}
Whereas in our models we can select star particles from every azimuth, all
GCS stars are drawn from azimuths near that of the Sun. Just as the density
of stars in disc galaxies varies with azimuth, we expect
the age and velocity distributions of stars to vary as well. Moreover, a
bar or spiral arm drives non-axisymmetric streaming (e.g. \citealp{dehnen}),
and streaming velocities will boost the recovered velocity dispersion when
star particles are binned regardless of azimuth.
Here we attempt to quantify the azimuthal variation of the AVR in our models.
We make no attempt to take into account the actual inter-arm location of the Sun
because this task requires (i) reliable knowledge of the Galaxy's
non-axisymmetries, (ii) precise control of the non-axisymmetries within the
models, (iii) sufficient star particles within $\sim0.1\,{\rm kpc}$ of the Sun to make
Poisson noise unimportant, and (iv) detailed modelling of the selection function.
None of these four conditions being satisfied, we
confine ourselves to the estimation of the uncertainties arising from
uncharted non-axisymmetric structures.
For each model we divided the annulus $R=7.5-8.5\,{\rm kpc}$ at
$\left|z\right|<0.3\,{\rm kpc}$ into 36 sectors of 10 deg width and determined
mean motions and dispersions for each sector. These 36 separate AVRs are
displayed as grey lines in Fig.~\ref{azi}. The area occupied by these
lines should be regarded as the region within which the AVR of a mock Snhd
could fall. On average we find that higher velocity dispersions are
found in regions of higher star density, but the scatter is significant. For
most models a typical azimuthal spread in $\sigma_z(\tau)$ is $\sim \pm 1
\,{\rm km\,s^{-1}}$ at young ages and $\sim2-3\,{\rm km\,s^{-1}}$ at older ages.
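The sector statistics are straightforward to reproduce: assign each star in the annulus to one of 36 ten-degree azimuthal bins and compute a dispersion per bin. A sketch with isotropic mock velocities (so the sector-to-sector scatter here is pure Poisson noise, unlike in the models, where non-axisymmetric structure dominates):

```python
import numpy as np

def sector_dispersions(phi, vz, n_sectors=36):
    """Split stars in an annulus into n_sectors equal azimuthal bins
    and return the vertical velocity dispersion in each."""
    width = 2.0 * np.pi / n_sectors                       # 10 deg in radians
    idx = np.floor((phi % (2.0 * np.pi)) / width).astype(int)
    return np.array([np.std(vz[idx == s]) for s in range(n_sectors)])

rng = np.random.default_rng(11)
phi = rng.uniform(0.0, 2.0 * np.pi, 36000)   # ~1000 stars per sector
vz = rng.normal(0.0, 20.0, 36000)            # isotropic 20 km/s mock dispersion
sig = sector_dispersions(phi, vz)
```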
Models such as Y4f$\zeta$- and E2, with bars that nearly reach $R=8\,{\rm kpc}$, show
larger spreads, even at young ages, because bars have strong effects on
stellar orbits and significant differences in orbit populations occur in
regions positioned differently with respect to the bar. Bars and spiral
structure excite streaming motions more parallel to the plane than
vertically, so the fractional azimuthal spreads are larger for
$\sigma_R(\tau)$ than for $\sigma_z(\tau)$.
In Fig.~\ref{azi} we also plot in red the azimuthal average for
$|z|<0.3\,{\rm kpc}$ and in blue for $|z|<0.1\,{\rm kpc}$, which is the most relevant region
to compare to the GCS data. Unfortunately, when we subdivide the data by
azimuth, Poisson noise is unacceptably large in the data for $|z|<0.1\,{\rm kpc}$.
The biggest discrepancies between red and blue lines are found for
$\sigma_z(\tau)$ in Models E2 and F2, which have thicker discs, and
in Model YN1, which has no GMCs. The discrepancy indicates vertical
dispersions which increase with altitude at a given age, as the red lines probe
higher altitudes. For most models dispersions hardly change with altitude,
and red and blue lines are very similar, indicating that our use of bins
extending to $|z|=0.3\,{\rm kpc}$ will not mislead.
In Fig.~\ref{azi} the Snhd data from the GCS are shown in pink. Green horizontal lines
at low ages indicate the lowest values found for each of the three
quantities for blue \emph{Hipparcos} stars by AB09 to give an indication of
the uncertainties at low ages.
\subsection{Specific models}\label{sec:specifics}
We begin by analysing models that ABS16 considered to have problematic AVRs,
reconsidering this judgement in light of the azimuthal spreads shown in
Fig.~\ref{azi}.
Models Y2Mb+ and Y2Mb- have an abnormally high and low final disc mass,
respectively. In these models the azimuthal variations are too small to
account for the conflict with observation. Similarly, the vertical
dispersions in Model YN1, which lacks GMCs, remain too small and the
dispersions in the thickest Y model, Y4f$\zeta$-, remain too high. Finally,
the characteristic feature in the radial dispersion at $\tau\sim 4 \,{\rm Gyr}$ for
Y4f$\zeta$- shows up clearly at all azimuthal positions.
ABS16 showed that models that start from IC F, which has a low-density dark
halo, produce thicker discs, which are, however, too hot. In particular
Model F2 has a vertical profile that is significantly thicker than those of
our standard models, run from initial condition Y, and is slightly
double-exponential, but still significantly thinner than that of the MW.
When the azimuthal variation in its AVRs is taken into account, Model F2
becomes marginally acceptable because at the right end of the lower
row of panels in Fig.~\ref{azi}, the lowest grey curve for
$\sigma_R(\tau)$ is consistent with the observations.
We now consider models run from the E and Y ICs. These models have standard dark haloes
and total baryonic masses $M_{\rm f}=5\times10^{10}\,M_\odot$. Model Y1 is
slightly too cold radially for old stars, and slightly too cold vertically
for stars of intermediate age. The more compact Model Y2 has a radial AVR
that fits observations better because it has higher surface densities and
thus stronger non-axisymmetric structures. In Model Y1s2 the SF timescale is
longer so the disc grows more slowly. The consequence is old populations
that are too cold radially and vertically because the total mass and the GMC
mass fraction at early times are both low. Model Y1f$\sigma$, which uses a
fixed cutoff and has a higher value for the input velocity dispersion
$\sigma_0$, shows a mildly better fit to the data than Y1, as does Model YG1,
which has an isothermal gas component. The best fits by a Y model are
provided by Model Y1$\zeta$-, which has a lower SF efficiency and thus a
higher total GMC mass at all times.
Whereas the Y models start from a small, compact and thin disc, Model
E2 starts from a more extended and more massive thick disc. If we consider
that we should not trust any points at $\tau>12 \,{\rm Gyr}$ and that the blue lines
in Fig.~\ref{azi} are more relevant than the red lines, E2 also provides
acceptable fits to both $\sigma_R(\tau)$ and $\sigma_z(\tau)$. At old ages
its AVR $\sigma_z(\tau)$ differs from that of Model Y2 on account of the
thick IC, and at young ages it differs on account of its extended and
buckling bar.
\subsection{Shape of the velocity ellipsoid}
\label{sec:arat}
We now consider $\sigma_z/\sigma_R (\tau)$, which is plotted for each model
in the third and sixth rows of Fig.~\ref{azi}. The (pink) observational
values show substantial scatter around $\sigma_z/\sigma_R\sim 0.5$, with
lower values at young ages and higher values at old ages. Although the
\citet{casagrande} values lie in $(0.4,0.6)$, AB09 found values as low as
0.33 for the bluest stars in the \emph{Hipparcos} catalogue, which, as we
noted above, are excluded from the GCS sample, but may be biased by moving
groups. At old ages, $\tau \gtrsim 7 \,{\rm Gyr}$, $\sigma_z/\sigma_R\approx0.55-0.6$
appears rather constant. Values of this order are predicted by simulations
of heating by GMCs \citep{haenninen, sellwood}.
In all models stars are, by construction, added with
$\sigma_z=\sigma_R=\sigma_0$. Non-axisymmetries almost instantaneously increase
$\sigma_R$ to a value $\gg\sigma_0$, while $\sigma_z$ increases much more
gradually. Consequently, quite soon $\sigma_z/\sigma_R<0.5$, as observed for
the youngest stars. Moreover, $\sigma_z/\sigma_R$ increases with age, again
as observed.
Although Model Y2Mb- with a low-mass disc provides a very good fit to the observed
$\sigma_z/\sigma_R (\tau)$, its velocity dispersions are too low to be
consistent with observations. Model F2 with a low-density halo and a
marginally acceptable AVR, can now be excluded because in it
$\sigma_z/\sigma_R (\tau)$ is too low at young ages.
The E and Y models have standard dark haloes. In Model Y1 $\sigma_z/\sigma_R
(\tau)$ is lower than in the observations at $\tau\sim5\,{\rm Gyr}$,
and higher for $\tau\ga10\,{\rm Gyr}$. Model Y2 with a more compact disc fits
$\sigma_z/\sigma_R$ less well. Model Y1f$\sigma$, which has an abnormally
high value of the birth dispersion parameter $\sigma_0=10\,{\rm km\,s^{-1}}$, fits the observations
better. Model Y1$\zeta$-, which has an abnormally low star-formation
efficiency, provides an excellent fit except at ages $\tau\ga8\,{\rm Gyr}$. Model
Y1s2, which has a more slowly declining SFR, provides a similar quality of
fit to that of Model Y1.
Model E2, which has a massive and extended primordial thick disc, provides a
good fit to the observed $\sigma_z/\sigma_R(\tau)$ at $\tau<9\,{\rm Gyr}$, but
provides unacceptably high values at older ages. However, this conclusion must
be moderated by two caveats: (i) the grey lines should be moved downwards by
the separation between the blue line for $|z|<0.1\,{\rm kpc}$ and the red line for
$|z|<0.3\,{\rm kpc}$; (ii) no model has stars with $\tau>11\,{\rm Gyr}$ (IC stars are assigned
ages of $10$--$11\,{\rm Gyr}$), so we must be careful when drawing conclusions regarding
this age range. In light of these caveats, we consider that Model E2 also
provides an acceptable fit to $\sigma_z/\sigma_R (\tau)$.
As expected for a model without GMCs, $\sigma_z/\sigma_R(\tau)$ is too low at all
ages $\tau$ in YN1. We note that the increase in $\sigma_z/\sigma_R(\tau)$ for older
stars may in part be caused by spurious collisional relaxation \citep{sellwood13}.
\begin{figure*}
\centering
\includegraphics[width=18cm]{figs/p8-fig3.pdf}\\
\caption
{Asterisks display AVRs for all three directions $R$ (black), $\phi$ (green)
and $z$ (grey) at $R=8\pm0.5\,{\rm kpc}$, $z=0\pm0.1\,{\rm kpc}$ and $t=10\,{\rm Gyr}$ for several models. Overplotted are fits of the form
$\sigma_{10}\left[\left(\tau+\tau_1\right)/ \left(10\,{\rm Gyr} + \tau_1\right)\right]^{\beta}$
in red ($R$), pink ($\phi$) and blue ($z$). For each model we show AVRs for both
true (left) and degraded ages (right) according to Section \ref{sec:biases}. The
fitted values for $\beta$ are displayed in corresponding colours in the bottom
right of each panel.
}
\label{heatfits}
\end{figure*}
\subsection{Which models can reproduce the Snhd AVR?}
\label{sec:whichModel}
Combining the results of all three quantities plotted in Fig.~\ref{azi}, we
conclude that only models with a standard halo (Y and E models) and standard
baryonic mass ($M_{\rm f}=5\times10^{10}\,M_\odot$) are compatible with the Snhd
data. Lower or higher disc masses, or lower density haloes, all fail to
reproduce the data. As far as models with thicker disc components are
concerned, inclusion of a thick disc in the ICs (Model E2) is clearly
favoured over a thick disc that emerges during the simulations (Models F2 and
Y4f$\zeta$-).
Since our models include significant idealizations, we cannot
expect any model to provide a perfect fit to the data and we do not seek to
reproduce the MW disc precisely. Not only do the models still lack heating
processes, such as interactions with satellite galaxies (e.g.
\citealp{just}), that will modify the predictions, but even after our effort
to take age biases and errors into account, the comparison of data from
models and observations must be imperfect. Moreover, the resolution of our
models is still too low and the history of the MW is too rich in events to
allow the detailed modelling of velocity distributions of stars in the $\sim
100\,{\rm pc}$ sphere covered by the GCS.
Considering all the shortcomings listed above and despite their failure to
model properly the thick disc component of the MW, models such as Y1$\zeta$-
or E2 provide very good fits to the data. We thus emphasise the conclusions
already drawn in ABS16: (i) combined disc heating by GMCs and
non-axisymmetries in the disc is very likely responsible for the overall
shape of the AVR in the Snhd; (ii) the models clearly favour a
baryonic disc mass $5\times10^{10}\,M_\odot$ and the cosmologically inferred dark-halo
mass parameters.
\subsection{Power-law indices of AVRs}
\label{sec:heatIndex}
We now examine the values of $\beta$ that are obtained by fitting AVRs
to equation \eqref{eq:heatlaw} with $x=\tau$. Small values indicate that the
difference in velocity dispersion between young stars and stars of intermediate
age is larger than that between stars of intermediate and old ages.
Over the last half century
values of $\beta$ between 0.25 and 0.6 have been found
from various samples of the Snhd stars (see e.g. Table 1 in
\citealp{haenninen}). Data from the \emph{Hipparcos} and
\emph{GCS} surveys indicate that $\beta_z\sim0.45-0.53$ is
larger than $\beta_R\sim0.31-0.39$ (\citealp{holmberg}, AB09). In view of
the influence of age biases and errors on empirical AVRs, the spread of
values is not surprising.
Fig.~\ref{heatfits} displays for the endpoints of several models the AVRs for the
stars in the annulus $R=8\pm0.5\,{\rm kpc}$ and at $|z|<0.1\,{\rm kpc}$. There are two panels
for each model because the AVRs using true ages are plotted in the left
panel, while the right panel shows the AVRs yielded by ages degraded as
described in Section~\ref{sec:biases}. The model data are displayed as black
($\sigma_R$), green ($\sigma_\phi$) and grey points ($\sigma_z$). Fits of
equation \eqref{eq:heatlaw} to these data are over-plotted in red
($\sigma_R$), pink ($\sigma_\phi$) and blue ($\sigma_z$). The corresponding
heating exponents $\beta_i$ are displayed in the lower right corner of each
panel.
In this section we also analyse the azimuthal velocity dispersion
$\sigma_{\phi}$. For the most part we exclude $\sigma_{\phi}$ from our
analysis because (i) the skewness of the $v_{\phi}$ distribution (see e.g.
\citealp{gd2}, Section 4.4.3) renders $\sigma_\phi$ hard to interpret, and
(ii) it is tightly coupled by dynamics to $\sigma_R$, so it does not provide
independent diagnostic information. Models yield $\sigma_{\phi}/\sigma_{R}\sim
0.75-0.8$ at $R=8\,{\rm kpc}$ and $|z|<0.1\,{\rm kpc}$, whereas observations of the Snhd
yield smaller values, $\sigma_{\phi}/\sigma_{R}\sim 0.65$. Azimuthal
variation can lower the observed value by $0.05-0.1$ for some azimuths, which
only brings some models to marginal agreement with the Snhd data. We
interpret this discrepancy as a result of selection biases. Any selection
bias that favours particular metallicities tends to lower
$\sigma_{\phi}/\sigma_{R}$. This kind of bias has been discussed in
\citet{sbd10}. We leave the detailed investigation of this phenomenon to a
future paper.
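For orientation, a textbook epicycle estimate (not a result of this paper) ties the dispersion ratio to the local rotation-curve slope via $\sigma_\phi/\sigma_R\simeq\kappa/2\Omega$; a flat rotation curve gives $1/\sqrt{2}\approx0.71$, which lies between the quoted model and Snhd values:

```python
import numpy as np

def sigma_ratio(vc_slope):
    """Epicyclic estimate sigma_phi/sigma_R = kappa / (2 Omega) for a
    power-law rotation curve v_c ~ R**vc_slope (vc_slope = 0: flat).
    Uses kappa**2 = 2 * Omega**2 * (1 + dln v_c / dln R)."""
    return np.sqrt((1.0 + vc_slope) / 2.0)

flat_ratio = sigma_ratio(0.0)    # flat rotation curve: 1/sqrt(2) ~ 0.71
solid_body = sigma_ratio(1.0)    # solid-body rotation: 1.0
```

The epicyclic formula holds only for small random velocities, so it is a rough guide rather than a prediction for the hot old populations.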
Fig.~\ref{heatfits} shows that the data can usually be nicely fitted by equation
\eqref{eq:heatlaw}. Notable exceptions are, as expected, the AVRs from true
ages for Model A2$\tau$, which features a declining input dispersion
(eq.~\ref{sigmazerot}), and Model E2, which has a thick
disc in its IC. Consequently, in both models all three dispersions, but especially
$\sigma_z$, have sharp upturns at the oldest ages.
The true AVRs of the other models have heating exponents in the range
$\beta_R=0.24-0.31$, $\beta_{\phi}=0.22-0.27$ and $\beta_z=0.51-1.39$. The
in-plane coefficients show hardly any scatter as heating is dominated by
non-axisymmetries, which are similar in all models. $\beta_{\phi}<\beta_R$
holds in all models. AB09 favoured $\beta_{\phi}>\beta_R$ for the Snhd
data but could not exclude $\beta_{\phi}<\beta_R$.
\begin{figure*}
\centering
\includegraphics[width=18cm]{figs/p8-fig4.pdf}\\
\caption {The AVRs $\sigma_i(\tau)$ at $t=10\,{\rm Gyr}$ for true ages and at radii
$R=3,4,\ldots,10\,{\rm kpc}$ and $z=0\pm0.1\,{\rm kpc}$ in directions $R$ (top row), $\phi$ (second row) and
$z$ (third row). Overplotted are fits of the form
$\sigma_{10}\left[\left(\tau+\tau_1\right)/ \left(10\,{\rm Gyr}
+\tau_1\right)\right]^{\beta}$. The bottom row plots the heating parameters
$\beta$ obtained from fits in all three directions as a function of $R$.
Vertical lines at $R=8\,{\rm kpc}$ show how $\beta$ changes when ages are degraded
by errors and observational bias. } \label{heatrad}
\end{figure*}
The scatter in $\beta_z$ is significant. The lowest value is found for Model
Y1s2. In Section \ref{sec:heathistory}, we will show that this arises from
this model's flatter SF history, which implies a slower decline with time in
the total mass of GMCs. The heating exponent of Model F2 with a low-density
dark halo, $\beta_z=1.12$, is unusually high. In this model an extended bar
forms very early on, which leads to a high inner cutoff radius $R_{\rm cut}$
for the insertion of star and GMC particles. As a consequence, the GMC total
mass at $R=8\,{\rm kpc}$ is at early times higher than in other models and the
decline towards late times is thus stronger. Moreover, in this model vertical
heating by the extended $m=2$ non-axisymmetries is non-negligible. Model YN1,
which lacks GMCs, has the highest value of $\beta_z$ because it is heated
vertically by large-scale non-axisymmetries rather than GMCs. We find that
this heating mechanism generally produces higher values of $\beta_z$.
Degradation of the ages flattens AVRs because the observations are
dominated by stars with true ages $\tau\sim 2 \,{\rm Gyr}$, which are scattered into
all age bins by observational errors. In our models this flattening is
greatest at the oldest ages, but this is to some degree artificial: we
only have stars with true ages up to $11\,{\rm Gyr}$ (IC stars are assigned ages
$10-11\,{\rm Gyr}$) and ages can be scattered up to $14\,{\rm Gyr}$, so we may be overestimating
the total flattening. However, detailed aspects of the selection function,
such as the exclusion of young blue stars, and of how ages were determined,
for example the restriction $\tau\le14\,{\rm Gyr}$ on the ages of GCS stars,
and of the error distribution, which is significantly non-Gaussian, were
not modelled here, and could significantly change the parameters of
observed AVRs.
Flattening of the AVR leads to a noticeable reduction in $\beta$, most prominently for
$\beta_z$. For models with the Y IC that have GMCs (left column in
Fig.~\ref{heatfits}), the reduced ranges are $\beta_R=0.20-0.25$,
$\beta_{\phi}=0.18-0.24$ and $\beta_z=0.41-0.48$. The extremely high values
of $\beta_z$ for Models YN1 and F2 are reduced to 0.61 and 0.59, respectively.
For Models A2$\tau$ and E2 that have hot old components in their ICs,
degradation of the ages smears out the AVRs at old ages and leads to better
fits by equation \eqref{eq:heatlaw}. Nonetheless, even after flattening
$\beta_z=0.64$ for Model A2$\tau$ is high, and for Model
E2 $\sigma_z(\tau)$ is fitted best by an almost straight line: $\beta_z=1.1$.
\begin{figure*}
\centering
\includegraphics[width=18cm]{figs/p8-figX.pdf}\\
\caption {Effect of spatial selection on velocity dispersions. \emph{Left}:
IC stars selected from the model of \citet{as15} at $R=8\pm1\,{\rm kpc}$ and either
$|z|<0.1\,{\rm kpc}$ (solid) or no restriction in z (dashed). We track these stars
back in time and at each output calculate their velocity dispersions in
radial (black) and vertical (red) directions. \emph{Middle and right}: AVRs
for true ages in models YN1 and Y1 in radial (black) and vertical (red)
directions at $R=8\pm0.5\,{\rm kpc}$. Solid lines are for stars at $|z|<100\,{\rm pc}$ and
dashed lines are for all stars irrespective of $z$ position. }
\label{select}
\end{figure*}
Despite the agreement between degraded AVRs of Model E2 and Snhd data noted
in Section \ref{sec:whichModel}, the model's values for $\beta$ conflict with
the data. Most prominently $\beta_z=1.1$ is significantly higher than
$\beta_z\sim0.45-0.53$ found in the Snhd (\citealp{holmberg}, AB09). By
contrast, the Y models with GMCs yield values of $\beta$ that are consistent
with the data, even though these models lack a thick disc. As far as
in-plane heating is concerned, $\beta_R$ is lower in all models than inferred
for the Snhd, where $\beta_R\sim0.31-0.39$.
\subsubsection{Radial variation of AVRs}
Fig.~\ref{heatrad} explores how AVRs are predicted to vary with radius in
four models selected because they show different behaviours.
Each panel in the first three rows shows
eight sets of dots, with each such set showing the AVR given by the true ages
of stars found at a radius in the range $(3,10)\,{\rm kpc}$: the larger the value of
$R$, the lower the curve lies in the panel. Fits of equation
\eqref{eq:heatlaw} to each curve are over-plotted in colours that move
through the spectrum from blue to orange as $R$ increases. The values of $\beta$
for these curves are plotted in the bottom row of panels, with black stars
from $\sigma_R(\tau)$, green crosses from $\sigma_\phi(\tau)$ and red
diamonds from $\sigma_z(\tau)$. For $R=8\,{\rm kpc}$, we also show how using
degraded rather than true ages changes $\beta$. The in-plane values for
$\beta$ are small but increase outwards. The vertical value of $\beta$ is
much larger and its radial variation differs markedly from model to model.
Model Y1s2, shown in the second column of Fig.~\ref{heatrad}, is a typical
model with GMCs and a weak bar at $R\lesssim3\,{\rm kpc}$. Its AVRs are well fitted
by equation \eqref{eq:heatlaw} at all radii. The in-plane velocity
dispersions for the youngest age bins increase mildly with decreasing radius,
whereas young stars at all radii are equally cold vertically. Outside
$R=3\,{\rm kpc}$ the in-plane values of $\beta$ are almost independent of $R$, while
$\beta_z$ declines gently with increasing $R$. The shape of the AVR thus
depends only mildly on radius.
As far as vertical heating in the outer disc is concerned, we note that
stars which have migrated outwards have higher velocity dispersions than
stars which have not migrated, as qualitatively predicted by \citet{sb09,sb12}.
Non-migrating stars still show a clear increase of $\sigma_z$ with age
due to local GMC heating.
In contrast to Model Y1s2, Model Y1, shown in the leftmost column of
Fig.~\ref{heatrad}, has a strong buckled bar of length $L_{\rm bar}\sim 4
\,{\rm kpc}$. Its vertical AVRs behave like those of Model Y1s2, but the bar
significantly changes the in-plane AVRs. In particular, at the youngest ages
the velocity dispersion increases rapidly with decreasing $R$ because the
bar's gravitational field deforms orbits from circular ones. On
account of our use of an adaptive cutoff, no young stars were inserted into
the bar region after $t\sim 7 \,{\rm Gyr}$, and the youngest stars at $R=3-4\,{\rm kpc}$
are there because they have been captured onto bar orbits. The influence of
the bar moves the disc region, in which the shape of the AVR is almost
independent of $R$, outwards to $R\gtrsim6\,{\rm kpc}$.
Model Y4f$\zeta$-, shown in the third column of Fig.~\ref{heatrad},
has a thicker disc due to the formation of extended ($R\sim10\,{\rm kpc}$) $m=3$ and
$m=2$ non-axisymmetric structures at $t\sim 6\,{\rm Gyr}$. At young ages the
in-plane AVRs of this model
are affected even more strongly by global non-axisymmetries. A feature in the
AVR caused by the event at $t\sim 6\,{\rm Gyr}$ is visible at $\tau\sim 4\,{\rm Gyr}$ at
all displayed radii for the in-plane dispersions. On account of these
features, equation \eqref{eq:heatlaw} provides an unusually poor fit to the
data.
Model E2 in the rightmost column of Fig.~\ref{heatrad} features a thick disc
in its IC and has a long bar ($L_{\rm bar}\sim 6 \,{\rm kpc}$) at $t=10\,{\rm Gyr}$. In the
bottom right panel we see the impact on the vertical AVR of the buckling of
this bar, which created an X-shaped region out to $R\sim 4\,{\rm kpc}$. In contrast
to the other models, at the end of the third row of Fig.~\ref{heatrad} we see
that at the youngest ages $\sigma_z$ increases significantly with decreasing
radius. The thick disc from the IC shows up in the AVR as an abrupt increase
in $\sigma(\tau)$ at $\tau=10\,{\rm Gyr}$: this increase is particularly evident for
$\sigma_z(\tau)$ at the largest radii. On account of this step in $\sigma$,
equation \eqref{eq:heatlaw} provides an unusually poor fit to the data.
\section{Heating histories of coeval populations}
\label{sec:heathistory}
We now extract the intrinsic heating histories of coeval populations that
make up the AVR at a certain time and place. To relate this to the Snhd AVR
we select stars at $t=10\,{\rm Gyr}$ and $R=8\pm1\,{\rm kpc}$, then divide
them into age groups $\Delta \tau =0.05 \,{\rm Gyr}$ wide and track them from their
time of birth $t_{\rm b}$ until $t_{\rm f}=10\,{\rm Gyr}$. In this way, we
calculate velocity dispersions of each group at every output and thus
assemble a heating history for each coeval cohort.
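In outline, the cohort bookkeeping works as in the sketch below, here run on synthetic data in place of simulation snapshots; the assumed intrinsic heating law and self-similar velocity growth exist only to exercise the machinery:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
t_b = rng.uniform(0.0, 9.5, n)    # birth times (Gyr) of stars selected at t_f = 10
g = rng.normal(size=n)            # each star's fixed, normalised velocity

def sigma_model(age):
    # assumed intrinsic heating history sigma(t - t_b) in km/s (toy values)
    return 20.0 * ((age + 0.1) / 10.1) ** 0.3

def cohort_dispersion(lo, hi, t):
    """Dispersion at output time t of the coeval cohort born in [lo, hi)."""
    m = (t_b >= lo) & (t_b < hi) & (t_b <= t)
    v = sigma_model(t - t_b[m]) * g[m]   # self-similar velocity growth
    return v.std()

# Heating history of the cohort born at 2.00-2.05 Gyr, sampled at 50 Myr outputs
outputs = 2.05 + 0.05 * np.arange(159)
history = np.array([cohort_dispersion(2.0, 2.05, t) for t in outputs])
```

Repeating this for every $0.05\,{\rm Gyr}$ birth-time bin yields one $\sigma(t-t_{\rm b})$ curve per cohort, which can then be fitted as described below.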
\subsection{Selection effects}
Selecting stars from a limited spatial volume introduces phase correlations
between stars which influence the values of velocity dispersions at the time
of selection and before. For example, when stars are selected close to $z=0$,
they are all close to their maxima in $|v_z|$. Tracking them back in time,
their vertical velocity dispersions thus have to be lower than at the time of
selection. Unfortunately, the output frequency (snapshots at $50\,{\rm Myr}$
intervals) of the simulations discussed here is too low to demonstrate this
effect clearly. So we illustrate the effect with a simulation from
\citet{as15} that is similar to YN1, but has a lower number of particles in
the dark halo and a gas component. For this model we have snapshots at
$1\,{\rm Myr}$ intervals.
\begin{figure*}
\centering
\includegraphics[width=18cm]{figs/p8-fig5.pdf}\\
\caption
{The evolution with time $t-t_{\rm b}$ of the vertical and radial velocity
dispersions in Model Y1 for stars with different birth times $t_{\rm b}$.
The stars are selected to be at $R=8\pm0.5\,{\rm kpc}$ at $t=10\,{\rm Gyr}$.
Displayed are curves for ten different age cohorts. Overplotted are fits of
$\sigma(t)=\tilde{\sigma}_{10}\left[\left(t+\tilde{\tau}_1\right) / \left(10\,{\rm Gyr} + \tilde{\tau}_1\right)\right]^{\tilde{\beta}}$.
The fitted values of $\tilde{\beta}$ are displayed in red ($\tilde{\beta}_R$) and green ($\tilde{\beta}_z$).
}
\label{allheat}
\end{figure*}
From the \citet{as15} model, we select IC (i.e. old) stars at $t=7.8\,{\rm Gyr}$,
$R=8\pm1\,{\rm kpc}$ and $|z|\le0.1\,{\rm kpc}$ and track them back for $200\,{\rm Myr}$. From the
full red curve in the
left panel of Fig.~\ref{select} we see that the vertical dispersion
$\sigma_z$ of these stars undergoes an oscillation with a period of $\sim
50\,{\rm Myr}$, which is only distinctly visible in the last period before selection.
This period is the average vertical oscillation period of these stars, and
phase mixing diminishes, with increasing time before selection, the phase
correlation imposed by selecting near the midplane. Repeating the experiment with only radial and no vertical
selection bounds imposed (dashed lines), the effect disappears and $\sigma_z$
is almost constant.
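A toy ensemble of uncoupled vertical harmonic oscillators (an assumed, highly simplified stand-in for the simulation; units are arbitrary) reproduces this behaviour: stars caught near $z=0$ are near maximal $|v_z|$, so the tracked-back $\sigma_z$ of the selected subsample is lower, while without a $z$ cut the ensemble is stationary:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
# Oscillators with periods near 0.05 Gyr plus a spread that mimics phase mixing
omega = rng.normal(2.0 * np.pi / 0.05, 8.0, n)   # frequencies in 1/Gyr
amp = rng.rayleigh(0.3, n)                       # vertical amplitudes
phase = rng.uniform(0.0, 2.0 * np.pi, n)

def z_of(t):
    return amp * np.sin(omega * t + phase)

def vz_of(t):
    return amp * omega * np.cos(omega * t + phase)

# "Observe" at t = 0 and keep only stars very close to the midplane
sel = np.abs(z_of(0.0)) < 0.02

sigma_now = vz_of(0.0)[sel].std()      # selected stars: near maximal |v_z|
sigma_back = vz_of(-0.012)[sel].std()  # ~quarter period earlier: much smaller
```

Computed over all stars instead of the midplane subsample, $\sigma_z$ is the same at both times to within sampling noise, consistent with the near-constant dashed curves discussed above.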
Selecting stars in radius also imposes phase correlations. E.g., if a disc
galaxy has an old, centrally concentrated disc population, old stars at
$R=8\pm1\,{\rm kpc}$ will be dominated by stars on eccentric orbits, which have
their guiding centre radii further in and will therefore be selected close to
apocentre. This effect alone however cannot explain the tracked-back radial
velocity dispersions (black lines) in the left panel of Fig.~\ref{select}.
As predicted, they show oscillations of $\sim 10$ per cent, but these
oscillations appear to have more than one underlying period of $\sim 100\,{\rm Myr}$,
irrespective of the $z$ selection of the stars.
On account of these selection effects, we decided to track heating histories
in the following way: we select stars at $t=10\,{\rm Gyr}$ and $R=8\pm1\,{\rm kpc}$, but do
not select in $z$. To prevent oscillations impacting our results, we exclude
the last $500\,{\rm Myr}$ from our analysis -- that is, we fit $\sigma(t-t_{\rm b})$ between
$t=t_{\rm b}$ and $t=9.5\,{\rm Gyr}$. We fit the curves $\sigma(t-t_{\rm b})$ to equation
\eqref{eq:heatlaw} with $x=t-t_{\rm b}$. To avoid confusion, we mark parameters derived
from heating histories with a tilde, i.e. we write $\tilde{\beta}$,
$\tilde{\sigma}_{10}$ etc.
We compare these parameters to the parameters from AVRs determined at
$t=10\,{\rm Gyr}$ and $R=8\pm0.5\,{\rm kpc}$, without a $z$ selection. Fig.~\ref{azi} has
already shown that AVRs depend little on vertical selection range. We
reiterate this point in the middle and right panels of Fig.~\ref{select},
where we show for models YN1 and Y1 radial and vertical AVRs for $|z|<100\,{\rm pc}$
(solid lines) and for all stars irrespective of $z$ position (dashed lines).
Standard models with a single-exponential vertical profile, such as Y1, show
no significant difference between solid and dashed lines. Other models, like
YN1, show only mildly higher dispersions for old stars when considering stars
at all $z$. The AVR parameters thus differ little between the two ways of
selection of stars for all models. The most notable difference is that
$\beta$ tends to be mildly higher when the selection of stars is unrestricted in $z$.
\subsection{How heating histories shape AVRs}
In Fig.~\ref{allheat} we plot the heating histories of several coeval
populations from Model Y1 that end up at $R=8\,{\rm kpc}$. The extracted data are
shown as black ($\sigma_R$) and blue asterisks ($\sigma_z$), whereas the fits
are overplotted as red ($\sigma_R$) and green lines ($\sigma_z$). Equation
\eqref{eq:heatlaw} again provides very good fits to the data, regardless of
the population's time of birth. In each panel, we give the fitted values
of $\tilde{\beta}$. We note that $\tilde{\beta}_R$ fluctuates between 0.18 and 0.26, whereas
$\tilde{\beta}_z$ slowly increases with $t_{\rm b}$ from 0.21 to 0.41. Comparing these
values to the ones found for Y1 in the AVRs of Fig.~\ref{heatfits} we note
that $\tilde{\beta}_R$ shows similar values as $\beta_{R}$, whereas $\tilde{\beta}_z$
is always smaller than $\beta_z$, irrespective of the choice of true or degraded ages.
In Fig.~\ref{evoheat} we plot the heating curves shown in
Fig.~\ref{allheat} in a different way by showing several curves corresponding
to populations of different birth times (encoded in colour) on top of each other.
The left panel shows $\sigma_z$, whereas the right panel shows $\sigma_R$.
In dotted lines of corresponding colours we overplot fits of equation
\eqref{eq:heatlaw} to the curves and extend these fitted
curves to $10\,{\rm Gyr}$ to show how the populations would evolve if they continued
to heat according to equation \eqref{eq:heatlaw}.
\begin{figure*}
\centering
\includegraphics[width=18cm]{figs/p8-fig6.pdf}\\
\caption
{The evolution with time $t-t_{\rm b}$ of the vertical (left panel) and radial
(right panel) velocity dispersions in Model Y1 at $R=8\,{\rm kpc}$ for stars with
different birth times ranging from IC stars (black) to stars born after $t=8\,{\rm Gyr}$
(orange). Overplotted as dotted lines of the same colour are fits of equation
\eqref{eq:heatlaw} to the curves, which are extended to 10 Gyr.
}
\label{evoheat}
\end{figure*}
The curves for $\sigma_z(t-t_{\rm b})$ clearly demonstrate that the heating
histories vary with time of birth. This has already been indicated by the
increase in $\tilde{\beta}_z$ with $t_{\rm b}$ shown in Fig.~\ref{allheat}. But this
alone would not cause the large differences. The value that $\sigma_z(t-t_{\rm b})$
would reach after 10 Gyr, as encoded by the parameter $\tilde{\sigma}_{10}$ from
equation \eqref{eq:heatlaw}, also decreases strongly with increasing $t_{\rm b}$.
The reason is easily understood. As ABS16 showed, vertical heating in
these models is dominated by GMCs, and for standard parameters (e.g.,
Model Y1), the fraction of disc mass residing in GMCs is $\sim 30$ per cent
at early times and decreases steadily thereafter. At $t=10\,{\rm Gyr}$, $<5$ per cent of the
mass remains in GMCs, consistent with observations of the MW. Consequently,
the global efficiency of GMC heating decreases and the intrinsic heating
history evolves towards smaller $\tilde{\sigma}_{10}$ with mildly varying $\tilde{\beta}_z$.
The AVR for any model is given by the lower envelope of the solid heating curves
in its analogue of Fig.~\ref{evoheat}. A glance at the left panel of
Fig.~\ref{evoheat} shows that the exponent $\beta_z$ required to fit this
envelope must be higher than the values of $\tilde{\beta}_z$ associated with
any individual heating history. Consequently the AVR $\sigma_z(\tau)$ is
shaped by the \emph{evolution} of the heating history and there is no
universal heating history valid for all stellar populations as was assumed
by AB09 and many other authors.
The situation regarding the heating histories $\sigma_R(t-t_{\rm b})$ in the
right panel of Figure \ref{evoheat} is markedly different. The tendency to
stronger heating of populations born at the earliest times remains visible,
but curves for $t_{\rm b}\ga1\,{\rm Gyr}$ lie almost on top of one another, indicating
that after $\sim 1\,{\rm Gyr}$ the heating history hardly changes. From this fact
it follows that the difference between the in-plane AVRs and heating
histories is mild. In the models of ABS16, after the first Gyr, when the
high mass fraction of GMCs can lead to powerful radial heating by clumps
of GMC particles, in-plane heating is dominated by spiral structure and the
bar. These non-axisymmetries are constantly excited by the addition of cold
stellar populations: in the region of the disc in which the rotation curve
is not dominated by the contribution from the dark halo, and which lies
outside the bar, Toomre's $Q$ parameter \citep{toomre} settles to a
characteristic value $Q\sim 1.5$. This requires the velocity dispersion
$\sigma_R$ of the entire disc to increase (see Fig. 9 in ABS16). In
consequence, young stars which are born cold need to be heated efficiently at all
times, which results in almost constant in-plane heating histories.
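The dispersion level this self-regulation implies can be estimated from Toomre's criterion for a stellar disc, $Q=\sigma_R\kappa/(3.36\,G\Sigma)$; the numbers below are illustrative MW-like values (assumed here, not taken from the models):

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def sigma_R_from_Q(Q, Sigma, kappa):
    """Radial dispersion implied by Toomre's Q = sigma_R * kappa / (3.36 G Sigma)."""
    return 3.36 * G * Sigma * Q / kappa

# Assumed values near R = 8 kpc: Sigma ~ 50 Msun/pc^2, flat curve with v_c = 220 km/s
Sigma = 5.0e7                          # disc surface density in Msun / kpc^2
kappa = np.sqrt(2.0) * 220.0 / 8.0     # epicyclic frequency, km/s/kpc (flat curve)
sigma_R = sigma_R_from_Q(1.5, Sigma, kappa)   # ~28 km/s
```

As the growing disc raises $\Sigma$, holding $Q\simeq1.5$ forces $\sigma_R$ of the whole disc upwards, which is the mechanism invoked above.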
\subsection{Heating histories in standard models}
In Fig.~\ref{betaE} we plot for the heating histories of several models the
parameters $\tilde{\beta}$ (first and fourth rows), $\tilde{\sigma}_{10}$ (second and fifth
rows) and $\tilde{\sigma}(t=0)$ (third and sixth rows) as functions of $t_{\rm b}$
for both vertical (red asterisks) and radial (black asterisks) heating histories
extracted from analogues of Fig.~\ref{allheat}. $\tilde{\sigma}(t=0)$ is determined
by the parameter $\tilde{\tau}_1$ of equation \eqref{eq:heatlaw}, which is required
to be $\tilde{\tau}_1\ge0$, so that $\tilde{\sigma}(t=0)\ge0$. To display the differences
between heating histories and AVRs at $R=8\,{\rm kpc}$ we show in each panel by
horizontal lines the values of the vertical (in gold) and radial heating
parameters (blue) extracted from the corresponding AVRs
at $R=8\pm0.5\,{\rm kpc}$ without $z$ restriction: full lines for the
values obtained using true ages and dashed lines for values obtained using
degraded ages. When $\beta>1$, so the line lies beyond the top of the panel,
the value is printed in the panel. Note that due to the difference in $z$ selection
the AVR parameters differ mildly from those found in Section \ref{sec:heatIndex}.
The models analysed in the first three rows of Fig.~\ref{betaE} are variations on
the standard Model Y1. They all have single-exponential vertical profiles and
bars of varying extent and strength. We note that the $\tilde{\beta}$ values change
little with the time of birth of the populations. The analytic treatment of
scattering by GMCs in \citet{lacey} yielded $\tilde{\beta}=0.25$, and \citet{haenninen}
found $\tilde{\beta}_R=0.21$ and $\tilde{\beta}_z=0.26$ from numerical simulations of heating
by GMCs. \citet{desimone} found that $\tilde{\beta}_R$ caused by transient spirals
can lead to a wide range of $\tilde{\beta}_R\sim0.2-0.7$ depending on properties
of the spirals. These models did not strictly distinguish between AVRs and
heating histories. As their heating indices were derived from the evolution
of velocity dispersions, we assign tildes. The values for the heating index
$\tilde{\beta}$ from our simulations agree well with these results, as
$0.15<\tilde{\beta}_R<0.25$ and $0.20<\tilde{\beta}_z<0.33$.
\begin{figure*}
\centering
\includegraphics[width=18cm]{figs/p8-fig7.pdf}\\
\caption {Parameters from fits of equation \eqref{eq:heatlaw} to heating
histories $\sigma(t-t_{\rm b})$ as functions of time of birth $t_{\rm b}$ of stars found
at $R=8\,{\rm kpc}$ after $t=10\,{\rm Gyr}$. Three panels are shown for each model: $\tilde{\beta}$
(upper); $\tilde{\sigma}_{10}$ (middle); $\tilde{\sigma} (t=0)$ (lower) (the latter is
determined by the parameter $\tilde{\tau}_1>0$). Parameter values $\beta$, $\sigma_{10}$
and $\sigma (t=0)$ found by fitting
equation \eqref{eq:heatlaw} to the AVRs are indicated by horizontal lines:
solid lines when true ages are used and dashed lines when degraded ages are
used. When the AVR yields $\beta>1$, the value is printed on the panel's left
side. } \label{betaE}
\end{figure*}
For the oldest populations in all models, we find $\tilde{\beta}_z\approx\tilde{\beta}_R$,
whereas for the younger populations $\tilde{\beta}_z>\tilde{\beta}_R$. Lower values of $\tilde{\beta}$
indicate a stronger initial increase in velocity dispersions and an earlier saturation of heating.
For the younger populations the explanation is likely that after
in-plane heating, driven by spiral structure, has essentially saturated, GMCs
continue to increase $\sigma_z$ by deflecting stars from eccentric
near-planar orbits to less eccentric and more highly inclined orbits.
For the populations born early on, GMCs have a high mass fraction in the disc and
can efficiently heat the disc both vertically and radially. Consequently, the high
GMC mass fraction in Y1$\zeta$- leads to $\tilde{\beta}_z\approx\tilde{\beta}_R$
for a wider range of $t_{\rm b}$, whereas in Y2Mb- the lower disc mass leads to less
efficient spiral heating and thus a longer period of GMCs heating the disc
both radially and vertically.
For Models Y1, Y1s2 and Y1$\zeta$-, which grow inside-out,
there is a mild increase in $\tilde{\beta}_z$ visible with
increasing time of birth whereas for Models Y2 and Y2Mb-, which have a
constant input scalelength $h_R=2.5\,{\rm kpc}$, there is a very mild decrease in
$\tilde{\beta}_R$. It is striking that $\tilde{\beta}_z$ for all five of these models and for
the vast majority of times of birth is lower than the values of $\beta_z$
inferred from the AVR. In contrast, the values of $\tilde{\beta}_R$
scatter around the values $\beta_R$ from AVRs for all models with the exception
of Model Y1$\zeta$-, which has a higher fraction of its mass in GMCs and for which
the value of $\beta_R$ is consistently higher than $\tilde{\beta}_R$ by $\sim
0.05-0.15$ depending on which ages are used.
For all models, the values of $\tilde{\sigma}_{10}$ scatter around the values
$\sigma_{10}$ found from AVRs. Naively one might expect the heating history of
the oldest cohort to agree with the AVR at the oldest ages. However, in
several panels of Fig.~\ref{betaE} $\tilde{\sigma}_{10,R}$ for $t_{\rm b}=0$
is larger than expected by this reckoning. This discordance arises, as
Fig.~\ref{evoheat} illustrates, because equation \eqref{eq:heatlaw} often
provides a poor fit to the heating history of the very oldest stars, and
tends to over-estimate their $\sigma$ at large values of $t-t_{\rm b}$: in
reality for these stars $\sigma$ saturates earlier than equation
\eqref{eq:heatlaw} predicts.
Fits to $\sigma_z(t-t_{\rm b})$ yield values of $\tilde{\sigma}_{10}$ that decrease
with $t_{\rm b}$, just as Fig.~\ref{evoheat} indicates for Model Y1. This
decline is weaker in Y1s2 as its SFR and thus its total GMC mass decline on a
longer timescale. Consequently, the AVR of this model yields a smaller value
of $\beta_z$ than the AVRs of other models. As far as radial heating
histories are concerned, Y1$\zeta$- and to a lesser degree Y2 show a more
significant decline with increasing $t_{\rm b}$ in $\tilde{\sigma}_{R,10}$ than Y1, which explains why AVR values
of $\beta_R$ are mildly larger than the $\tilde{\beta}_R$ values in these models.
$\tilde{\sigma}_{R}(t=0)$ for all models mostly scatters between 10 and $15\,{\rm km\,s^{-1}}$, whereas
$\tilde{\sigma}_{z}(t=0)$ mostly lies between 0 and $6\,{\rm km\,s^{-1}}$. We note that the fitted
$\tilde{\sigma}_{i}(t=0)$ does not necessarily describe the actual $\sigma_{i}$ of very
young stars well, as a best fit can deviate from the fitted data. Still, this
finding reiterates that the radial dispersions of stars increase almost
instantaneously after insertion in reaction to the local non-axisymmetries,
whereas the vertical dispersions increase more slowly. For some fits to radial
AVRs, $\sigma_{R}(t=0)=0$ is favoured and thus differs from
the typical $\tilde{\sigma}_R (t=0)$ values, whereas for others the values from AVRs
and heating histories are similar. We note that for low $t_{\rm b}$ in Model
Y1$\zeta$-, we find $\tilde{\sigma}_{R}(t=0)>20\,{\rm km\,s^{-1}}$, higher than average, indicating
that a high mass fraction of GMCs can significantly influence the radial heating.
\subsection{Heating histories in non-standard models}
In the fourth to sixth rows of Figure \ref{betaE}, we show models that do not
finish with single-exponential vertical profiles. Model F2
has a lower density halo than Model Y2, but
shows similar evolution of the heating exponents. It develops $m=2$
non-axisymmetries that extend to $R\sim 10 \,{\rm kpc}$. These lead to smaller
values $\tilde{\beta}_R\sim 0.1$ for stars born at late times. The long bar
causes high values of $\tilde{\sigma}_{R,10}\sim60\,{\rm km\,s^{-1}}$. The values of
$\tilde\beta$ and $\tilde\sigma_{10}$ scatter much less than do the values of
$\tilde{\sigma}_{R}(t=0)$, which cover the range $(0,35)\,{\rm km\,s^{-1}}$, presumably
because this parameter has the smallest influence on the fits. Although we
have seen that the AVRs of Models F2 and Y2 yield significantly different
values of $\beta_z$, the heating histories of these two models yield almost
identical values of $\tilde{\beta}_z$. As was discussed in Section
\ref{sec:heatIndex}, Model F2 has a large bar early on and thus at early
times the cutoff radius and the early surface density of GMCs at $R\sim 8
\,{\rm kpc}$ are both larger than in Y2. Consequently, the efficiency of GMC
heating and thus $\tilde{\sigma}_{z,10}$ decline more strongly in F2, causing
the fit to the AVR $\sigma_z(\tau)$ to yield a larger value of $\beta_z$.
Direct vertical heating by the bar likely plays a role for this model as
well, but disentangling the effects is beyond the scope of this paper.
Model A2$\tau$ has a thicker disc in its ICs than Model Y2 and an input
velocity dispersion $\sigma_0$ that decreases with time
(eq.~\ref{sigmazerot}), so stars at
early times are born significantly hotter than stars at late times, which are
born as cold as those in Model Y2. The declining input dispersions are nicely
visible in $\tilde{\sigma}(t=0)$. As noted in ABS16, vertical dispersions for younger
stars are lower than input dispersions $\sigma_0$ as all stars are added at $z=0$
and thus lose kinetic energy when moving away from the midplane. Interestingly,
this decline in $\tilde{\sigma}(t=0)$ is also reflected in a milder decline in $\tilde{\sigma}_{10}$
and smaller values of $\tilde{\beta}$ in both vertical and radial directions
for stars born at early times. Small values of $\tilde{\beta}$ are likely connected to an earlier
saturation of heating due to higher initial dispersions. At late times we find only
mild differences between the heating indices of Model A2$\tau$ and Model Y2.
Model E2 has an even thicker and more extended disc in its IC than Model
A2$\tau$, and the same small, constant input dispersion $\sigma_0$ as Model
Y2. At early times the thick, extended disc suppresses spiral and bar
formation below the level seen in the thin, compact disc in the IC of Model
Y2. Early on, radial heating is therefore less powerful in Model E2, so
the oldest populations heat more slowly and
thus have large $\tilde{\beta}_R>0.3$ as the saturation phase happens later (fourth
panel in second row). A long ($L_{\rm bar}\sim 6\,{\rm kpc}$), strong and buckled
bar forms in E2 after $t\sim 6\,{\rm Gyr}$. This bar significantly influences the
heating of the populations which end up at $R=8\,{\rm kpc}$. For populations born
after bar formation, $\tilde{\beta}_R$ decreases strongly as a strong bar leads to a
very fast increase in $\sigma_R$ for young stars and thus a quicker
saturation and a low value of $\tilde{\beta}_R$. At the same time $\tilde{\sigma}_{R,10}$
decreases, implying that populations born at late times are expected to
attain lower velocity dispersions after $10\,{\rm Gyr}$. This is likely driven by
the lower $\tilde{\beta}_R$, as for these stars there is no information
available at $t-t_{\rm b}\gtrsim4\,{\rm Gyr}$ and thus the fits likely over-predict the saturation effect
and thus under-predict the continuous in-plane heating. By adding an
additional vertical heating mechanism for young stars, bar buckling increases
$\tilde{\beta}_z$ and $\tilde{\sigma}_{z,10}$. Then the fits very likely over-predict the
vertical dispersion these stars would attain after $10\,{\rm Gyr}$.
A different situation is found in the leftmost panel of the second row for
Model Y4f$\zeta$-. In this model, which has a very compact feeding history, a
fixed cutoff and a high GMC mass fraction, strong and extended $m=3$ and
$m=2$ modes form around $t\sim 6\,{\rm Gyr}$. These events lead to deviations
from simple $t^{\beta}$ AVRs and heating histories as was shown in ABS16
and in Fig.~\ref{heatrad}. Consequently the quality of fits of equation
\eqref{eq:heatlaw} to the heating histories is worse than for other models
and there are strong variations in the radial heating parameters. $\tilde{\beta}_R$
varies between 0 and 1 for stars born after $5\,{\rm Gyr}$, $\tilde{\sigma}_{R, 10}$ is generally
high at $\sim 60\,{\rm km\,s^{-1}}$ due to the strong non-axisymmetries and also scatters strongly
for stars born after $5\,{\rm Gyr}$, and $\tilde{\sigma}_{R}(t=0)$ varies systematically between
0 and $50\,{\rm km\,s^{-1}}$. The decrease in $\tilde{\sigma}_{z,10}$ is strong, which explains the
high AVR value of $\beta_z$.
Finally, we also show one model without GMCs, Model YN1 (second panel second
row). Both the radial and vertical heating histories have $\tilde{\beta}$ values that
differ sharply from those of the standard Y models as the heating curves are
shaped only by non-axisymmetric structure. $\tilde{\beta}_R$ decreases from
0.4 to 0.15 with increasing time of birth, whereas $\tilde{\beta}_z$ is rather high,
with typical values $\tilde{\beta}_z\sim 0.5-0.6$ and a tendency to increase with
$t_{\rm b}$. As in model E2, the decreases in $\tilde{\beta}_R$ and in
$\tilde{\sigma}_{R, 10}$ for stars born at late times are likely caused by a
strong bar, which in YN1 forms early and extends to $R\sim5\,{\rm kpc}$ by
the end of the simulation. On account of the change in the main source of
vertical heat, the value of $\tilde{\beta}_z$ for Model YN1
differs sharply from the values yielded by models with GMCs.
\section{Discussion}
\label{sec:discuss}
In a galactic disc the random velocities of stars are increased by the
fluctuating non-axisymmetric component of the gravitational field. GMCs and
spiral arms are both major contributors to this component.
In a naive picture of the heating of the solar neighbourhood, stars diffuse
through velocity space from the velocities of circular orbits under the
influence of fluctuations that constitute a stationary random process
\citep{wielen}. If the diffusion coefficient that governed this process were
independent of ${\bf v}$, the velocity dispersion of a coeval cohort of stars
would grow as $(t-t_{\rm b})^{1/2}$. In reality the diffusion coefficient
must be a declining function of $|{\bf v}|$ because a given potential
fluctuation deflects fast stars through smaller angles than slow stars
(typically $\delta\theta\propto v^{-2}$). Hence
if the fluctuating component of the gravitational potential is statistically
stationary, the exponent in the {\it heating law} $\sigma\propto(t-t_{\rm
b})^{\tilde{\beta}}$ has to be less than $0.5$: \citet{lacey} found $\tilde{\beta}=0.25$ for
analytical models of GMC heating. In the case of spiral structure the concept
of a deflection angle is problematic, but stars with significant random
velocities and therefore eccentric orbits, encounter a given spiral wave at a
variety of orbital phases, so the time-averaged impact of the wave on a fast
star is small. That is, very general physical principles leave no doubt that
in the presence of statistically stationary fluctuations, the exponent in any
heating law is $\la0.25$ \citep{binneyNAR}.
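The scaling behind this bound can be illustrated with a toy numerical sketch (ours, not from the paper): if the velocity-space diffusion coefficient declines as $D(\sigma)\propto\sigma^{-2}$, then ${\rm d}\sigma^2/{\rm d}t\propto\sigma^{-2}$, so $\sigma^4$ grows linearly with $t$ and the heating exponent tends to the Lacey value of $0.25$.

```python
import numpy as np

# Toy model (illustrative assumption, not the paper's calculation): a
# diffusion coefficient D(sigma) ~ sigma^-2 gives d(sigma)/dt = D0/(2 sigma^3),
# whose exact solution is sigma^4 = sigma0^4 + 2 D0 t.
def heating_history(D0=1.0, sigma0=1.0, t_max=1.0e4, n=200_000):
    t = np.linspace(0.0, t_max, n)
    dt = t[1] - t[0]
    sigma = np.empty(n)
    sigma[0] = sigma0
    for i in range(1, n):
        # forward-Euler step of d(sigma)/dt = D0 / (2 sigma^3)
        sigma[i] = sigma[i - 1] + dt * D0 / (2.0 * sigma[i - 1] ** 3)
    return t, sigma

t, sigma = heating_history()
late = t > 0.1 * t[-1]  # fit only the late-time, power-law regime
beta = np.polyfit(np.log(t[late]), np.log(sigma[late]), 1)[0]
```

The fitted log-log slope approaches $0.25$ once $t$ is large, as expected from $\sigma^4\propto t$.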
The black and red asterisks in the first and fourth rows of
Fig.~\ref{betaE} plot the exponents of heating
histories rather than heating laws: they relate to the variation with
$t-t_{\rm b}$ of $\sigma_R$ and $\sigma_z$ for a group of stars that are now
at $R=8\,{\rm kpc}$ rather than the variation with $t-t_{\rm b}$ of a group of stars
that formed at $R=8\,{\rm kpc}$. Nonetheless, these two groups are similar, so it is
interesting to see that in the standard Y models $\tilde{\beta}_R$ tends to be
constant at a value $\la0.25$, consistent with fairly stationary
fluctuations. In some Y models $\tilde{\beta}_z$ is constant at a similar or slightly
larger value than $\tilde{\beta}_R$, and in other models it increases mildly
with $t_{\rm b}$, remaining at $\tilde{\beta}_z<0.33$ for most coeval cohorts.
Spiral structure in discs with $Q>1$ is maintained by swing amplification of
noise \citep{toomre2,fouvry}. Since the efficacy of swing amplification rises
very sharply as Toomre's
\begin{equation}
Q\equiv{\sigma_R\kappa\over3.36G\Sigma}
\end{equation}
drops towards unity, a cold ($Q\la1$) stellar disc very rapidly heats until
$Q\simeq1.5$. Hence from an early time, a galactic disc will have
$Q\simeq1.5$. As the disc grows in mass, its surface density $\Sigma$ rises
and $\sigma_R$ is driven upwards by spiral structure to keep $Q$
approximately constant (see Fig. 9 in ABS16). Hence, the time dependence of
$\sigma_R$ for stars of all ages at a given radius is essentially
prescribed by the history of gas accretion. As a consequence, young stars
which are born cold are heated efficiently by constantly excited
spiral structure at all times.
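The self-regulation argument can be made concrete with a small numerical sketch; the epicycle frequency and surface densities below are illustrative assumptions, not values from the simulations.

```python
# Sketch of Q-regulated heating: the radial dispersion needed to hold
# Toomre's Q fixed scales linearly with the disc surface density Sigma.
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def sigma_R_for_Q(Q, kappa, Sigma):
    # Invert Q = sigma_R * kappa / (3.36 G Sigma) for sigma_R
    return 3.36 * G * Sigma * Q / kappa

kappa = 37.0                     # epicycle frequency in km/s/kpc (assumed)
Sigmas = [1.0e7, 2.0e7, 4.0e7]   # growing surface density in Msun/kpc^2 (assumed)
sigmas = [sigma_R_for_Q(1.5, kappa, S) for S in Sigmas]
```

Doubling $\Sigma$ doubles the $\sigma_R$ demanded by $Q\simeq1.5$, so the in-plane heating rate tracks the disc's accretion history.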
Spiral structure heats within the plane but does not directly increase
$\sigma_z$. GMCs, by contrast, heat the disc vertically as well as
horizontally. Consequently, the ratio $\sigma_z/\sigma_R$ is a measure of
the relative importance of GMCs and spirals as heating agents.
Fig.~\ref{betaE} reveals that for almost all heating histories,
$\tilde{\beta}_z>\tilde{\beta}_R$. Consequently, the velocity ellipsoids of groups of coeval
stars tend to become rounder over time. The natural explanation is that after
in-plane heating, driven by spiral structure, has essentially saturated, GMCs
continue to increase $\sigma_z$ by deflecting stars from eccentric
near-planar orbits to less eccentric and more highly inclined orbits.
One may plausibly argue, as ABS16 did, that the mass of gas in all GMCs is
proportional to the SFR, since over the lifetime of a GMC a fraction $\zeta$
of the GMC's mass is converted to stars. Hence, the rate at which GMCs heat
the disc declines with the SFR. Moreover, early on the disc has a low mass,
so each GMC represents a larger fraction of the total disc mass. Since the
rate at which an individual GMC heats scales with its mass relative to the
mass of the Galaxy interior to its orbit, each GMC is individually a more
effective heating agent early in the life of the disc. Hence over time the
heating power of the ensemble of GMCs declines faster than the SFR. Large-scale
spiral structure, by contrast, is associated with the self-gravity of the
stellar disc, which grows steadily over time from a small initial value. That
is, over the life of the disc, the impact of spiral structure has increased
relative to that of GMCs.
If the fluctuations that heat the disc were a stationary random process and
the disc were homogeneous, heating histories would be independent of $t_{\rm
b}$ and have the same functional form as the AVR. We have seen that in the
models heating histories do depend on $t_{\rm b}$ and are very different from
the AVR. While the dependence of heating histories on $t_{\rm b}$ and their
deviation from the AVR could be entirely attributed to the non-stationary
nature of the fluctuations, a contributing factor is undoubtedly radial
migration \citep{sellwoodb,sb09}, which adds significant complexity to the
problem by mixing stars that were born at small and large radii. Motivated by
the desire to understand data for the Snhd, we have studied samples of stars
that are currently at $R\simeq R_0$. In a future paper we will consider
groups of stars with a common birth radius.
\section{Conclusions}
\label{sec:conclude}
In this paper, we have used a series of $N$-body simulations of growing disc
galaxies (ABS16) to study (i) age-velocity-dispersion relations (AVRs)
and (ii) the heating histories of the coeval cohorts of stars which make up the AVRs.
As these models feature heavy GMC particles, secular heating is dominated by a
combination of scattering of stars off GMCs and non-axisymmetric disc structures.
To be able to compare these simulations to observational data from the Snhd,
we analysed the impact on the AVR of biases and errors in
measured stellar ages. Stars with ages $\tau\sim2\,{\rm Gyr}$ are very much
over-represented in the GCS data (Fig.~\ref{gcs}). Scattering of such stars to
young ages artificially boosts $\sigma(\tau)$ at the youngest ages, and depresses
$\sigma(\tau)$ at the oldest ages. When a power law in $\tau$ is fitted to
the measured AVR, lower values of the exponent $\beta$ are recovered than would be in
the absence of errors (see also \citealp{martig}). The reduction in $\beta$
is particularly marked in the case of $\sigma_z$ (Fig.~\ref{heatfits}).
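The flattening effect of age errors can be reproduced with a simple Monte Carlo experiment; the true AVR exponent, the age distribution and the 30 per cent error level below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch (all numbers assumed): a true AVR sigma(tau) = 10 tau^0.5
# observed through 30 per cent Gaussian age errors. The errors mix old and
# young stars between age bins and flatten the fitted exponent.
n = 200_000
tau_true = rng.uniform(0.1, 10.0, n)            # true ages in Gyr
v = rng.normal(0.0, 10.0 * np.sqrt(tau_true))   # one velocity component
tau_obs = np.abs(tau_true * (1.0 + 0.3 * rng.normal(size=n)))

def fitted_beta(tau, v, bins=20):
    edges = np.linspace(0.3, 9.0, bins + 1)
    idx = np.digitize(tau, edges) - 1
    centers, sigmas = [], []
    for b in range(bins):
        sel = idx == b
        if sel.sum() > 50:
            centers.append(0.5 * (edges[b] + edges[b + 1]))
            sigmas.append(v[sel].std())
    return np.polyfit(np.log(centers), np.log(sigmas), 1)[0]

beta_true = fitted_beta(tau_true, v)   # recovers ~0.5
beta_obs = fitted_beta(tau_obs, v)     # biased low by the age errors
```

The exponent fitted to error-scattered ages comes out systematically below the true value, in the sense described above.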
On account of spiral structure and bars, AVRs vary with azimuth as well as
radius. Fig.~\ref{azi} quantifies the extent of this azimuthal variation,
which must be borne in mind when considering whether a given model is
consistent with data for the Snhd, which are measured at one particular
azimuth. After taking azimuthal variation into account, we concluded that the
GCS data are consistent with some models in ABS16 that have the expected disc mass
($5\times10^{10}\,M_\odot$) and the cosmologically motivated dark halo
($M=10^{12}\,M_\odot$, $a=30.2\,{\rm kpc}$). Models with a significantly different disc
mass or a less concentrated dark halo are inconsistent with data for the
Snhd. The data also favour the model that starts with a massive, extended
thick disc over models in which (a rather inadequate) thick disc forms
as a consequence of powerful non-axisymmetries developing in the thin disc.
As we do not self-consistently form appropriate thick discs and as
we lack heating by dark matter substructure, which may contribute a minor
part of the observed disc heating, we are not able to put tight constraints
on our model parameters.
AVRs vary with radius. At locations currently inside the bar, the AVR's index
$\beta_R$ is generally very small, $\beta_R<0.1$, as there are no circular orbits
and young stars thus acquire high $\sigma_R$ rapidly. At the end of the bar,
$\beta_R$ rises abruptly and is thereafter constant or slowly rising with $R$
(Fig.~\ref{heatrad}). By contrast, $\beta_z$ sometimes increases and
sometimes decreases at the end of the bar. A buckling bar can lead to
exceptionally high $\sigma_z$ for young stars in the bar regions.
The heating history, $\sigma(t-t_{\rm b})$, of stars now in the Snhd that were
born at time $t_{\rm b}$ can also be fitted by the power-law \eqref{eq:heatlaw}.
We mark the corresponding parameters with a tilde.
For standard models, the heating history depends on $t_{\rm b}$ more strongly
in the case of $\sigma_z$ than $\sigma_R$. Smaller values of the exponent $\tilde{\beta}$ are
required to fit heating histories than AVRs. In fact, values of $\tilde{\beta}_R$ are
consistent with the predictions of dynamics in the case that the fluctuating
gravitational potential is a stationary random process. The values of
$\tilde{\beta}_z$ are generally somewhat larger than is consistent
with a stationary random process, but in agreement with numerical simulations
of stationary GMC heating \citep{haenninen}.
The AVR reflects the history of star and bar formation. The past SFR strongly
affects the AVR for two reasons: the time integral of the SFR
determines the mass of the disc, and thus the fraction of the gravitational
force on a star that derives from the disc rather than the dark halo. At
early times this fraction is small, so spiral arm formation is suppressed
even at a low value of $\sigma_R$. As the mass of the disc increases, spiral
structure increases $\sigma_R$ to keep Toomre's $Q$ nearly constant. If the
SFR is rapidly declining, the rate at which $\sigma_R$ increases will
decline rapidly, and a relatively small value of $\tilde{\beta}_R$ will be required
to fit the heating histories of the oldest stellar groups.
In contrast, the vertical heating is dominated by GMCs. By analysing
histories of stars that end up at $R=8\,{\rm kpc}$, we showed that the heating
histories of older stars reach higher $\sigma_z$ after $10\,{\rm Gyr}$
($\tilde{\sigma}_{10}$) than those of younger stars. The corresponding
$\tilde{\beta}_z$ values only vary mildly with $t_{\rm b}$. This decline in heating
efficiency is connected to the declining influence of GMCs, the total mass of
which declines due to a declining SFR and the mass fraction of which declines
due to a growing stellar disc. When coeval cohorts, whose
$\tilde{\sigma}_{10}$ values
decline with $t_{\rm b}$, are superposed to form an AVR, a value of
$\beta_z$ in excess of $0.5$ is needed to fit the curve.
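A toy superposition shows how cohorts with mild individual exponents but $\tilde{\sigma}_{10}$ values that decline with birth time combine into a steep AVR; all numbers below are illustrative assumptions, not fitted values.

```python
import numpy as np

# Each cohort of present-day age tau heats with a mild exponent beta_cohort,
# but older cohorts (born earlier) reach a higher sigma_10. Superposing them
# steepens the apparent AVR exponent well beyond beta_cohort.
beta_cohort = 0.3
tau = np.linspace(0.5, 10.0, 50)           # cohort ages today, in Gyr
sigma_10 = 10.0 + 2.0 * tau                # assumed decline with birth time
sigma_now = sigma_10 * (tau / 10.0) ** beta_cohort

beta_avr = np.polyfit(np.log(tau), np.log(sigma_now), 1)[0]
```

Although every cohort heats with exponent $0.3$, the composite AVR is fitted by an exponent above $0.5$.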
By combining all these results we have been able to clarify long-standing
discrepancies between the observed AVR and theoretical predictions for
combined spiral and GMC heating. Some of our models correctly reproduce the
general shape of both $\sigma_R(\tau)$ \emph{and} $\sigma_z(\tau)$ as
observed in the Snhd and thus also the ratio of the two components. The key
ingredient is that each coeval cohort of stars that contributes to the AVR
has undergone a different heating history and the AVR is not produced by a
single stationary heating law. We conclude that combined GMC and spiral/bar
heating has likely shaped the MW thin disc AVR.
\section*{Acknowledgements}
We thank the referee for comments that helped improve the paper.
This work was supported by the UK Science and Technology Facilities Council (STFC)
through grant ST/K00106X/1 and by the European Research Council under the European
Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no.~321067.
This work used the following compute clusters of the STFC DiRAC HPC Facility
(www.dirac.ac.uk): i) The COSMA Data Centric system at Durham University, operated
by the Institute for Computational Cosmology. This equipment was funded by a BIS
National E-infrastructure capital grant ST/K00042X/1, STFC capital grant
ST/K00087X/1, DiRAC Operations grant ST/K003267/1 and Durham University.
ii) The DiRAC Complexity system, operated by the University of Leicester
IT Services. This equipment is funded by BIS National E-Infrastructure capital
grant ST/K000373/1 and STFC DiRAC Operations grant ST/K0003259/1.
iii) The Oxford University Berg Cluster jointly funded by STFC, the Large
Facilities Capital Fund of BIS and the University of Oxford.
DiRAC is part of the National E-Infrastructure.
\section{Introduction}
The idea that spacetime might have more than four dimensions is
being considered widely, from particle physics to cosmology.
String theory, in particular, provides a strong motivation for
considering higher dimensions. Recent developments in string
theory have led to diverse cosmological scenarios, for example,
D-brane inflation, moduli inflation, cyclic and ekpyrotic
scenarios, mirage cosmology, and string or brane gas cosmology.
One of the primary goals of string cosmology is to achieve string
compactification which can produce inflation successfully.
Early in the study of cosmology based on string theory, it was
realized in Refs.~\cite{bv,tv} that the presence of a gas of strings
plays an important role in the evolution of the universe. They
suggested a mechanism to generate dynamically the spatial
dimensionality of spacetime and to explain the problem of initial
singularity. With the symmetry of string theory, called T-duality,
spacetime has the topology of a nine-dimensional torus, and its
dynamics is driven by a gas of fundamental strings. In string
theory the winding modes can annihilate the anti-winding modes if
they meet. Once the windings are removed from some dimensions,
these dimensions can expand. Since strings have (1+1)-dimensional
world volumes, they can intersect efficiently in $2(1+1)=4$ spacetime
dimensions or less. Thus, three spatial dimensions can become
large with a gas of strings.
A gas of fundamental strings was generalized to a gas of various
branes to accommodate the development of D-branes in string theory
\cite{magrio,psl,abe}. Many studies of string cosmology with
string or brane gases followed (see \cite{bW0510022} and references
therein for comprehensive reviews). The key point of replacing
string gas with brane gas is
that a hierarchy of scales can be achieved between wrapped and
unwrapped dimensions. It has been checked whether the unwrapped
configuration of branes can generate the inflation successfully
\cite{bem,biswas,campos1,bbst}. Also it is known that string or
brane gases of purely winding modes are not enough to stabilize
the extra dimensions. For example, in eleven dimensional
supergravity, Easther {\it et al.} \cite{egjk} succeeded in
producing anisotropic expansion by selecting a certain wrapping
matrix. However the radions (scales of the extra dimensions) were
not stabilized.
Stabilization of the radion has been the focus of research in
string or brane gas cosmology
\cite{wb0307044,pb,watson,rador0504047,chatrabhuti,kim0403096,cwb,
kaya,bbc,patil,cw,bw0403075,ks,rador0701029,akk}. One way to
obtain the stabilization of the extra dimensions is to introduce
bulk fields \cite{alexander,fgpt,fgm,campos2}. In the previous
work of the author \cite{kim0608131}, it was shown that the extra
dimensions can be stabilized by including a bulk Ramond-Ramond
(RR) flux in the brane gas formalism. For the specific
configuration of brane gas and RR flux where effectively
six-dimensional brane gas is wrapping the extra dimensions and
4-form RR flux is in the unwrapped dimensions, the flux can cause
a logarithmic bounce to the effective potential as the volume of
the extra dimensions shrinks.
Considering the quantum aspect of the string or brane gas, there
will be a large amount of energy when winding modes and
anti-winding modes of branes annihilate with each other. For
example, string winding modes and anti-winding modes can
annihilate into unwound closed string loops which can be treated
as supergravity particles or radiation. Thus it will be
interesting to see how the simplified stabilization mechanism by
brane gas and flux can be modified if we include supergravity
particles. The purpose of this paper is to extend the previous
study by including supergravity particles.
\section{Brane gas dynamics with flux and supergravity particles}
We consider the ten-dimensional supergravity after the dilaton is
stabilized. We start from the point that winding modes are
annihilated in three spatial dimensions causing them free to
expand while the brane gas remains in the extra six dimensions by
the mechanism of Brandenberger and Vafa \cite{bv}. With a gas of
branes only, the extra dimensions will shrink to zero size and our
assumptions of brane gas cosmology are no longer valid. To
prevent this collapse we consider the RR flux in the transverse
dimensions. Thus the bulk effective action consists of graviton
and gauge fields representing the four-form RR flux. If we
consider only the bosonic sector, the effective action can be
written as \cite{kim0608131}
\begin{equation}
S = \frac{1}{2 \kappa^2} \int d^{D+1}x \sqrt{-g} \Bigl( R
- \frac{1}{2 \cdot 4!} F_4^2 \Bigr), \label{effact}
\end{equation}
where $R$ is the scalar curvature, $F_4$ is the field strength of
the bulk gauge field, $D=9$, and $\kappa$ is the ten-dimensional
gravitational constant $\kappa^2 = \frac{1}{M^{D-1}_*}$ with $M_*$
being the $(D+1)$-dimensional unification scale.
The gravitational equations of motion are given by
\begin{equation}
G^{MN} = - \kappa^2 ( T_g^{MN} + T_m^{MN} ), \label{einsteineq}
\end{equation}
\begin{equation}
\nabla_M F^{MNIJ} = 0, \label{gaugeeq}
\end{equation}
where $T_g^{MN}$ is the energy-momentum tensor from four-form
RR-flux
\begin{equation}
T_g^{MN} = \frac{1} {12 \kappa^2} ( F^M_{~~IJK} F^{NIJK}
- \frac{1}{8} g^{MN} F_{IJKL} F^{IJKL} ) , \label{emtgauge}
\end{equation}
and $T_m^{MN}$ is the averaged energy-momentum tensor coming from
all the other matter contributions. Also we have the Bianchi
identity since $F$ is an exact form
\begin{equation}
\nabla_{[I} F_{JKLM ]} = 0 . \label{Bianchi}
\end{equation}
We assume that background fields and matter sources are
homogeneous within the dimensions where they exist. Then we can
treat them as functions of time only. Considering the spatial
section to be a $D$-dimensional torus $T^D$, we write the metric
as
\begin{equation}
ds^2 = - dt^2 + \sum_{k=1}^D a^2_k(t) (dx^k)^2 ,
~~~~(0 \le x_k \le 1).
\label{metric}
\end{equation}
The non-vanishing components of the Einstein tensor are
\begin{eqnarray}
G^t{}_t & = & \frac {1}{2} \sum_{k \not= l} \frac{\dot{a}_k}{a_k}
\frac {\dot{a}_l}{a_l}, \\
G^i{}_i & = & \sum_{k \not= i} \frac {\ddot{a}_k}{ a_k}
+ \frac{1}{2} \sum_{k \not= l} \frac{\dot{a}_k }{a_k} \frac {\dot{a}_l}{ a_l}
- \sum_{k \not= i} \frac{\dot{a}_k}{a_k} \frac{\dot{a}_i}{a_i}.
\end{eqnarray}
As in \cite{kim0608131}, we assume that the RR-flux is confined to
$(3+1)$-dimensional submanifold
\begin{equation}
F^{\alpha\beta\gamma\delta} =
\frac {\epsilon^{\alpha\beta\gamma\delta} }{\sqrt{-g_4} } F(t),
\end{equation}
where the Greek indices belong to $\{ 0, 1, 2, 3 \}$ and $g_4$ is
the induced metric on the $(3+1)$-dimensional submanifold.
With this choice, the Bianchi identity is automatically satisfied
and the solution for $F(t)$ is given by
\begin{equation}
F(t) = \frac{2Q a_1 a_2 a_3 }{ V}, \label{fsol}
\end{equation}
where $Q$ is an integration constant and $V$ is the total spatial
volume $V = \prod_{k=1}^D a_k$. Then the components of the
energy-momentum tensor by the RR-flux are calculated as
\begin{eqnarray}
(T_g)_{~t}^t &=& - \frac{1}{\kappa^2} \Bigl( \frac{F(t)}{2}
\Bigr)^2, \\
(T_g)_{~1}^1 &=& (T_g)_{~2}^2 = (T_g)_{~3}^3
= - \frac{1}{\kappa^2} \Bigl( \frac{F(t)}{2} \Bigr)^2, \\
(T_g)_{~4}^4 &=& \cdots = (T_g)_{~D}^D
= \frac{1}{\kappa^2} \Bigl( \frac{F(t)}{2} \Bigr)^2.
\end{eqnarray}
This corresponds to the energy momentum tensor of a fluid with
\begin{equation}
\rho_g = \frac{1}{\kappa^2} \Bigl( \frac{F(t)}{2} \Bigr)^2,
~~~ p_g^1 = p_g^2 = p_g^3 = -\rho_g, ~~~
p_g^4 = \cdots = p_g^D = \rho_g . \label{fluxpressure}
\end{equation}
For other sources of matter, firstly we consider the massless
supergravity particles present in the early universe. The effect
of this source can be expressed by a gas with energy density
$\rho_s$ and pressure $p_s$. We take the gas to be homogeneous and
isotropic perfect fluid with the equation of state
$p_s = \rho_s /D$. The corresponding energy-momentum tensor is
\begin{eqnarray}
(T_s)_{~t}^t &=& - \rho_s , \\
(T_s)_{~1}^1 &=& (T_s)_{~2}^2 = \cdots = (T_s)_{~D}^D
= p_s. \label{sugrapressure}
\end{eqnarray}
If we assume that this energy-momentum tensor is covariantly conserved $\nabla_M
T_s^{MN} = 0 $, the energy density scales with time as
\begin{equation}
\rho_s (t) = \rho_s^0 \Big( \frac{V_0}{V(t)}
\Big)^{\frac{D+1}{D} } ,
\end{equation}
where $\rho_s^0$ and $V_0$ are the energy density and spatial
volume at time $t_0$.
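A quick symbolic check (a sketch using sympy, not part of the paper's derivation) confirms this scaling: for a homogeneous fluid the conservation law reduces to $\dot\rho_s + (\rho_s + p_s)\,\dot V/V = 0$ with $p_s=\rho_s/D$, and the trial solution $\rho_s = C\,V^{-(D+1)/D}$ satisfies it identically.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
D, C = sp.symbols('D C', positive=True)
V = sp.Function('V', positive=True)(t)

# Trial solution rho_s = C * V^{-(D+1)/D}
rho = C * V ** (-(D + 1) / D)

# Conservation for a homogeneous fluid with p = rho / D:
# d(rho)/dt + (rho + p) * (dV/dt)/V = 0
residual = sp.simplify(rho.diff(t) + (rho + rho / D) * V.diff(t) / V)
```

The residual vanishes for arbitrary $V(t)$ and $D$, confirming the quoted scaling.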
The second source of matter comes from a gas of branes, wrapped on
the various cycles of the torus. The matter contribution of a
single $p$-brane to the action, if the dilaton is stabilized, is
represented by the Dirac-Born-Infeld (DBI) action
\begin{equation}
S_{\rm p} = - T_p \int d^{p+1} \xi \sqrt{ - {\rm det} ( {\hat
g}_{\mu\nu} + {\hat B}_{\mu\nu} + 2 \pi \alpha^\prime {F}_{\mu\nu}
) } , \label{pbract}
\end{equation}
where $T_p$ is the tension of $p$-brane and ${\hat g}_{\mu\nu}$ is
the induced metric to the brane
\begin{equation}
{\hat g}_{\mu\nu} = g_{MN} \frac{\partial X^M}{ \partial \xi^\mu}
\frac{\partial X^N}{ \partial \xi^\nu}.
\end{equation}
Here $M,N$ are the indices of $(D+1)$ dimensional bulk spacetime
and $\mu,\nu$ are those of brane. ${\hat B}_{\mu\nu}$ is the
induced antisymmetric tensor field and ${F}_{\mu\nu}$ is the field
strength tensor of gauge fields $A_\mu$ living on the brane. The
fluctuations of the brane coordinates and other fields within the
branes are negligible when the temperature is low enough and the
radii are large enough. So we neglect ${\hat B}_{\mu\nu}$ and
${F}_{\mu\nu}$ terms. Ignoring the running of the dilaton, we have
absorbed the effect of constant dilaton into the redefinition of
brane tension so that the Einstein frame and the string frame are
equivalent.
We start, after the thermal stage in the early universe by the
mechanism of Brandenberger and Vafa, from the moment that three
dimensions are completely unwrapped. We take the three dimensions
in which the RR flux exists as the unwrapped ones. The other
$(D-3)$ dimensions are wrapped with gases of branes whose
dimensions are less than or equal to $(D-3)$. Assuming each type
of brane gas makes a comparable contribution, we consider a gas
of effective $(D-3)$-branes whose tension we denote by $T_{D-3}$.
Then the energy-momentum tensor for a gas of these branes can be
written as
\begin{eqnarray}
(T_B)_{~t}^t &=& - \frac{T_{D-3}}{a_1 a_2 a_3} , \\
(T_B)_{~1}^1 &=& (T_B)_{~2}^2 = (T_B)_{~3}^3 = 0, \\
(T_B)_{~4}^4 &=& \cdots = (T_B)_{~D}^D
= - \frac{T_{D-3}}{a_1 a_2 a_3} .
\end{eqnarray}
Since the $SO(D)$ Poincar\'e invariance is broken down to $SO(3)
\times SO(D-3)$ by RR-flux and $(D-3)$-brane gas, we denote the
scale factor of three dimensional space by $a$ and
$(D-3)$-dimensional subspace by $b$. Then the density and pressure
of the brane gas can be written as
\begin{equation}
\rho_B = \frac{T_{D-3}}{a^3} ,~~~
p_B^1= p_B^2= p_B^3 = 0, ~~~
p_B^4 = \cdots = p_B^D
= - \frac{T_{D-3}}{a^3} . \label{branepressure}
\end{equation}
Finally we include the cosmological constant term which can be
interpreted as the space filling branes
\begin{equation}
(T_{\Lambda})_{~N}^M = {\rm diag} ( -\rho_\Lambda, p_\Lambda ) ,
\end{equation}
where $\rho_\Lambda = \Lambda$ and $p_\Lambda = - \Lambda$.
Now we insert the energy-momentum tensors, $(T_g)^{MN}$ and
$(T_m)^{MN} = (T_s + T_B + T_{\Lambda})^{MN}$, into the right hand
side of the Einstein equation (\ref{einsteineq}). After some
algebra, the time component of the Einstein equation can be
expressed as, taking units in which $2 \kappa^2 = 1$ for
simplicity,
\begin{equation}
6(\frac{\dot a}{a})^2 + (D-3)(D-4) (\frac{\dot b}{b})^2 + 6(D-3)
\frac{\dot a}{a} \frac{\dot b}{b} = \rho_s + 2
\frac{Q^2}{b^{2(D-3)}} + \frac{T_{D-3}}{a^3} + \Lambda .
\label{eomzerozero}
\end{equation}
The spatial components for $SO(3)$ and $SO(D-3)$ subspaces are given by
\begin{equation}
\frac{\ddot a}{a} + 2(\frac{\dot a}{a})^2 + (D-3)
\frac{\dot a}{a} \frac{ \dot b}{b}
= -\frac{2(D-4)}{D-1} \frac{Q^2}{b^{2(D-3)}}
+ \frac{\rho_s}{2D}
+ \frac{D-2}{2(D-1)} \frac{T_{D-3}}{a^{3}}
+ \frac{\Lambda}{D-1}, \label{eomso3a}
\end{equation}
\begin{equation}
\frac{\ddot b}{b} + (D-4)(\frac{\dot b}{b})^2
+ 3 \frac{\dot a}{a} \frac{ \dot b}{b}
= \frac{6}{D-1} \frac{Q^2}{b^{2(D-3)}}
+ \frac{\rho_s}{2D}
- \frac{1}{2(D-1)} \frac{T_{D-3}} { a^{3}}
+ \frac{\Lambda}{D-1} . \label{eomso6b}
\end{equation}
The key parameters controlling the relative growth rates of $a$
and $b$ are their accelerations, not their velocities. For
the configuration in which $a$ exceeds $b$ by many orders of
magnitude, the source terms in Eqs.~(\ref{eomso3a}) and
(\ref{eomso6b}) must produce slow-roll conditions for $b$, making
its acceleration small or negative, while keeping the
acceleration of $a$ positive.
\section{Effective potential}
To study the functional behavior analytically, we rewrite the equations of
motion in terms of $\zeta = a^3$ and $\xi = b^{D-3}$ corresponding to the
volumes of $SO(3)$ and $SO(D-3)$ subspaces. The constraint equation
(\ref{eomzerozero}) can be written as
\begin{equation}
\rho_s = \frac{2}{3} (\frac{\dot \zeta}{\zeta})^2
+ \frac{D-4}{D-3} (\frac{\dot \xi}{\xi})^2 + 2
\frac{\dot \zeta}{\zeta} \frac{\dot \xi}{\xi} - 2
\frac{Q^2}{\xi^2} - \frac{T_{D-3}}{\zeta} - \Lambda .
\label{constrainteq}
\end{equation}
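The kinetic part of this constraint follows directly from that of Eq.~(\ref{eomzerozero}) under the substitutions $\zeta=a^3$, $\xi=b^{D-3}$, since $\dot\zeta/\zeta = 3\,\dot a/a$ and $\dot\xi/\xi = (D-3)\,\dot b/b$. A sympy sketch verifying the change of variables:

```python
import sympy as sp

t, D = sp.symbols('t D', positive=True)
a = sp.Function('a', positive=True)(t)
b = sp.Function('b', positive=True)(t)
zeta = a ** 3
xi = b ** (D - 3)

# Kinetic terms of the constraint in the original scale factors...
lhs_ab = (6 * (a.diff(t) / a) ** 2
          + (D - 3) * (D - 4) * (b.diff(t) / b) ** 2
          + 6 * (D - 3) * (a.diff(t) / a) * (b.diff(t) / b))
# ...and in the subvolume variables zeta and xi
lhs_zx = (sp.Rational(2, 3) * (zeta.diff(t) / zeta) ** 2
          + (D - 4) / (D - 3) * (xi.diff(t) / xi) ** 2
          + 2 * (zeta.diff(t) / zeta) * (xi.diff(t) / xi))

difference = sp.simplify(lhs_ab - lhs_zx)
```

The two forms agree identically for arbitrary $D$.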
Using (\ref{constrainteq}), the second derivative equations can be
written as
\begin{eqnarray}
\frac{1}{3} \frac{\ddot \zeta}{\zeta} &+& \frac{D-3}{3D} \frac{\dot \zeta}{\zeta}
\frac{\dot \xi}{\xi} = \frac{1}{3D} (\frac{\dot \zeta}{\zeta})^2
+ \frac{D^2-3D+1}{2D(D-1)} \frac{T_{D-3}}{\zeta} \nonumber \\
&+& \frac{D-4}{2D(D-3)} (\frac{\dot \xi}{\xi})^2
+ \frac{-2D^2 + 7D+1}{D(D-1)}\frac{Q^2}{\xi^2}
+ \frac{D+1}{2D(D-1)} \Lambda, \label{zetatwodoteq}
\end{eqnarray}
\begin{eqnarray}
\frac{1}{D-3} \frac{\ddot \xi}{\xi} &+& \frac{3}{D(D-3)} \frac{\dot \zeta}{\zeta}
\frac{\dot \xi}{\xi} = \frac{1}{3D} (\frac{\dot \zeta}{\zeta})^2
+ \frac{-2D+1}{2D(D-1)} \frac{T_{D-3}}{\zeta} \nonumber \\
&+& \frac{D-4}{2D(D-3)} (\frac{\dot \xi}{\xi})^2
+ \frac{5D+1}{D(D-1)}\frac{Q^2}{\xi^2}
+ \frac{D+1}{2D(D-1)} \Lambda. \label{xitwodoteq}
\end{eqnarray}
Removing the coupled first derivative terms ($\frac{\dot
\zeta}{\zeta} \frac{\dot \xi}{\xi}$), we have
\begin{eqnarray}
&&\frac{1}{D-3} \Big\{
\frac{\ddot \zeta}{\zeta} + \frac{D-6}{9} (\frac{\dot \zeta}{\zeta})^2
- \frac{9(D^2-3D+1) + (D-3)^2(2D-1)} {6D(D-1)} \frac{T_{D-3}}{\zeta}
\Big\} \nonumber \\
&& = \frac{1}{3} \Big\{ \frac{\ddot \xi}{\xi}
- \frac{(D-6)(D-4)}{2(D-3)^2} (\frac{\dot \xi}{\xi})^2
- \frac{(D-3)^2 (5D+1) + 9(2D^2 - 7 D -1)} {D(D-1)(D-3)} \frac{Q^2}{\xi^2}
\nonumber \\
&& - \frac{(D-6)(D+1)}{2(D-1)(D-3)} \Lambda \Big\} .
\label{separationofvariable}
\end{eqnarray}
Since the left-hand side is a function of $\zeta$ and the
right-hand side is a function of $\xi$, we take the simplest case
by equating them to a constant $E$ to decouple the variables
$\zeta$ and $\xi$:
\begin{equation}
\frac{\ddot \zeta}{\zeta} + \frac{D-6}{9} (\frac{\dot \zeta}{\zeta})^2
- \frac{9(D^2-3D+1) + (D-3)^2(2D-1)} {6D(D-1)} \frac{T_{D-3}}{\zeta}
-(D-3)E = 0, \label{eomzeta}
\end{equation}
\begin{eqnarray}
\frac{\ddot \xi}{\xi}
- \frac{(D-6)(D-4)}{2(D-3)^2} (\frac{\dot \xi}{\xi})^2
&-& \frac{(D-3)^2 (5D+1) + 9(2D^2 - 7 D -1)} {D(D-1)(D-3)} \frac{Q^2}{\xi^2}
\nonumber \\
&-& \frac{(D-6)(D+1)}{2(D-1)(D-3)} \Lambda - 3 E = 0 .
\label{eomxi}
\end{eqnarray}
Putting $D=9$, we have
\begin{equation}
\frac{\ddot \zeta}{\zeta} + \frac{1}{3} (\frac{\dot \zeta}{\zeta})^2
- \frac{41} {16} \frac{T_6}{\zeta}
- 6E = 0, \label{eomzetad9}
\end{equation}
\begin{equation}
\frac{\ddot \xi}{\xi}
- \frac{5}{24} (\frac{\dot \xi}{\xi})^2
- \frac{47} {8} \frac{Q^2}{\xi^2}
- \frac{5}{16} \Lambda - 3 E = 0 .
\label{eomxid9}
\end{equation}
We remove the first-order derivative terms with $\zeta = f^{3
\over 4}$, $\xi = g^{24 \over 19}$, then the equations reduce to
the motions of a particle with unit mass in one dimension
\begin{equation}
{\ddot f} - \frac{41}{12} T_6 f^{1 \over 4} - 8 E f = 0,
\label{eomf}
\end{equation}
\begin{equation}
{\ddot g} - \frac{893}{192} Q^2 g^{- \frac{29}{19}}
- {19 \over 24} (3 E + {5 \over 16} \Lambda ) g = 0.
\label{eomg}
\end{equation}
Now we can analyze the behavior of the two subvolumes by
considering the effective potential as in \cite{kim0608131},
$ {\ddot f} = - \frac{d V_{\rm eff} (f) }{df}$,
$ {\ddot g} = - \frac{d V_{\rm eff} (g) }{dg}$.
The effective potentials are calculated as
\begin{equation}
V_{\rm eff} (f)
= - \frac{41}{15} T_6 f^{5 \over 4} - 4 E f^2 ,
\end{equation}
\begin{equation}
V_{\rm eff} (g)
= \frac{16967}{1920} Q^2
g^{- \frac{10}{19}}
- \frac{19}{48} (3 E + \frac{5}{16} \Lambda ) g^2 .
\end{equation}
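These potentials can be checked against Eqs.~(\ref{eomf}) and (\ref{eomg}) with sympy, verifying that ${\ddot f}=-dV_{\rm eff}/df$ and ${\ddot g}=-dV_{\rm eff}/dg$ reproduce the force terms (a verification sketch, not part of the paper):

```python
import sympy as sp

f, g, T6, Q, E, Lam = sp.symbols('f g T_6 Q E Lambda', positive=True)

V_f = -sp.Rational(41, 15) * T6 * f ** sp.Rational(5, 4) - 4 * E * f ** 2
V_g = (sp.Rational(16967, 1920) * Q ** 2 * g ** sp.Rational(-10, 19)
       - sp.Rational(19, 48) * (3 * E + sp.Rational(5, 16) * Lam) * g ** 2)

# Force terms of the decoupled D = 9 equations of motion
force_f = sp.Rational(41, 12) * T6 * f ** sp.Rational(1, 4) + 8 * E * f
force_g = (sp.Rational(893, 192) * Q ** 2 * g ** sp.Rational(-29, 19)
           + sp.Rational(19, 24) * (3 * E + sp.Rational(5, 16) * Lam) * g)

res_f = sp.simplify(-sp.diff(V_f, f) - force_f)
res_g = sp.simplify(-sp.diff(V_g, g) - force_g)
```

Both residuals vanish, so the quoted potentials are consistent with the equations of motion.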
To make the $SO(3)$ subvolume become large indefinitely, $E$ must
be positive and the shape of $V_{\rm eff} (f)$ is given in Fig. 1.
\begin{figure}
\includegraphics[angle=270 , width=8cm ]{fbouncing.eps} \caption{
Typical shape of the effective potential $V_{\rm eff} (f)$ for the
unwrapped subvolume for $E > 0$. The plot is for $T_6 = 15/41$ and
$E=1/4$.} \label{fig1}
\end{figure}
The effective potential for the $SO(6)$ subvolume shows a
bouncing behavior (the $Q^2$ term) for small values of $g$, so
the overall shape of the potential depends on the sign of $3 E +
\frac{5}{16} \Lambda $. For $3 E + \frac{5}{16} \Lambda > 0$, the
shape is given in Fig. 2.
\begin{figure}
\includegraphics[angle=270 , width=8cm ]{gbouncing.eps} \caption{
Typical shape of effective potential $V_{\rm eff} (g)$ for
$3 E + \frac{5}{16} \Lambda > 0$. The plot is for $Q^2 = 1920/16967$ and
$3 E + \frac{5}{16} \Lambda = 48/19$.} \label{fig2}
\end{figure}
In this case, both the unwrapped three dimensions and the wrapped
six dimensions expand monotonically. For $3 E + \frac{5}{16}
\Lambda < 0$, the shape is given by Fig. 3.
\begin{figure}
\includegraphics[angle=270 , width=8cm ]{gconfining.eps} \caption{
Typical shape of effective potential $V_{\rm eff} (g)$ for
$3 E + \frac{5}{16} \Lambda < 0$. The plot is for $Q^2 = 1920/16967$ and
$3 E + \frac{5}{16} \Lambda = - 48/19$.} \label{fig3}
\end{figure}
In this case, the wrapped internal subvolume can oscillate between
two turning points or sit at the minimum of the potential $g_{\rm
min}$ while the unwrapped subvolume expands indefinitely.
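In the confining case the minimum can be located explicitly: writing $V_{\rm eff}(g)=A\,g^{-10/19} + |B|\,g^2$, the condition $dV_{\rm eff}/dg=0$ gives $g_{\rm min}=\big[5A/(19|B|)\big]^{19/48}$. A numerical sketch with the same illustrative parameter values as Fig.~3:

```python
import numpy as np

# Parameter values of Fig. 3: Q^2 = 1920/16967 and 3E + 5*Lambda/16 = -48/19,
# so the potential reduces to V(g) = g^(-10/19) + g^2.
A = (16967.0 / 1920.0) * (1920.0 / 16967.0)  # flux coefficient -> 1
B = (19.0 / 48.0) * (-48.0 / 19.0)           # quadratic coefficient -> -1 (confining)

def V_eff(g):
    return A * g ** (-10.0 / 19.0) - B * g ** 2

g = np.linspace(0.01, 3.0, 200_000)
g_min_numeric = g[np.argmin(V_eff(g))]
g_min_closed = (5.0 * A / (19.0 * abs(B))) ** (19.0 / 48.0)  # from dV/dg = 0
```

Both estimates agree, locating the stabilized internal subvolume at $g_{\rm min}\approx0.59$ for these parameters.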
In \cite{kim0608131}, the existence of 4-form RR flux in the
unwrapped subspace induces a logarithmic bounce in the effective
potential for small values of $\xi$, and this term prevents the
internal subvolume from collapsing. Including the supergravity
particles in the analysis makes the bounce steeper than in the case
with RR flux alone. The reason can be understood by looking at the
signs of the pressures in Eqs. (\ref{fluxpressure}),
(\ref{sugrapressure}), and (\ref{branepressure}). The brane gas
wrapping the internal dimensions exerts negative pressure and
makes the internal subvolume contract. However, RR flux and
supergravity particles exert positive pressure which prevents the
internal subvolume from collapsing. The internal volume can thus be
stabilized by the competition between positive and negative
pressures, and our result realizes this possibility.
\section{Conclusion}
We have studied the anisotropic evolution of spatial dimensions
and the stability of the extra dimensions with particular emphasis
on the role of supergravity particles. We took a perfect fluid
form for their energy-momentum tensor which drives expansion.
Assuming the dilaton is stabilized, we focused on the late-time
behavior of brane gas cosmology where windings of branes are
completely removed from three dimensions and RR flux exists in
these unwrapped dimensions. We investigated how the existence of
supergravity particles affects the asymmetric evolution of the
universe by reducing the Einstein equations to the motion of a
particle in a one-dimensional effective potential. The shape of the
potential for the three-dimensional subvolume is barrier-type, so
that it can expand indefinitely. However, the shape of the
potential for the extra-dimensional subvolume can be well-type, so
that it can oscillate between two turning points or be fixed at
the minimum of the potential.
In most approaches to the stabilization in string cosmology, it is
crucial that the dilaton runs to a weak coupling. The scale factor
in the Einstein frame is a linear combination of string frame
dilaton and scale factors. If the dilaton is not fixed, this can
cause serious problems in the late-time evolution of the universe
since the Newton constant is not fixed. The extra dimensions can
be unstable as long as the dilaton evolves. It will be an important
challenge to include the running of the dilaton into the
stabilization of the radion.
Recently it was shown by Danos, Frey, and Brandenberger
\cite{dfb} that dilaton stabilization and radion stabilization are
compatible. They identified a stable fixed point corresponding to
the dilaton sitting at the minimum of the potential and the radion
taking on the value at which the enhanced symmetry states are
massless. The stability of this fixed point was analyzed by
studying the linearized equations of motion around the fixed
point. The solution shows a damped oscillatory behavior confirming
the compatibility of two types of moduli stabilization. Despite
the promising result, we have to be very careful when stabilizing
both the dilaton and the radion simultaneously. Cremonini and Watson
\cite{cw} have discussed the stabilization of moduli in eleven
dimensional supergravity. They found that the production rate of
the BPS bound states could be significant for a modified brane
spectrum with enhanced symmetry. These states can lead to a
stabilization by an attractor mechanism. However, the
stabilization drives the evolution to a region where a
perturbative description of the string dynamics fails. That is,
the supergravity approximation is not valid in this region. It is
important not to forget the string theory origin of the low-energy
effective action.
Realizing the transitions between the different thermodynamic
phases of a string gas is important in string cosmology. In
\cite{kalwat}, it was pointed out that the dilaton field may
obstruct the transition from the Hagedorn phase of a hot and dense
string gas to the expanding FRW phase of a dilute string gas. The
authors categorized the possible branches of the solutions
according to the sign of the dimensionally reduced effective
dilaton and noticed that branch changing is impossible as long as
the energy density of the universe is not negative. Finding the
appropriate energy sources that enable the phase change appears to
be another challenge in string/brane gas cosmology.
\begin{acknowledgments}
This work was supported by research funds of Kunsan National
University.
\end{acknowledgments}
\section{Introduction}
Gene copy number variation is a ubiquitous phenomenon that manifests itself in the multiplication of gene fragments, single genes, or groups of genes, up to the whole genome. Duplicated genes contribute to gene evolution; subsequent mutations may turn one of the gene copies into an inactive pseudo-gene, which may accumulate further mutations without affecting the phenotype \cite{krebs2013lewin,li2005expression,lynch2000evolutionary}. Gene copies may be parts of either chromosomal or extra-chromosomal DNA. In bacterial cells, low-copy plasmids appear in numbers of copies characteristic of the plasmid type, and these numbers are conserved during cell division \cite{krebs2013lewin}. Bacterial plasmids, as well as the circular DNA molecules found in mitochondria and chloroplasts, may also appear in high copy numbers (e.g., $20$-$40$ for chloroplasts of higher plants \cite{krebs2013lewin}).
Variation in the number of copies of a particular gene in a living cell may strongly affect the concentration of the protein encoded by that gene. This in turn may have a profound impact on the phenotype, and hence on the fitness of the organism. The relationship between copy number variation and phenotype is of great interest in higher eukaryotes such as mammals, including humans, where gene copy number variation is known to be related not only to differences in concentrations of some enzymes (e.g., starch amylase, \cite{perry2007diet}) but also to several genetic diseases \cite{roper2006understanding,conrad2007gene} as well as cancer \cite{pollack2002microarray}. However, it is usually easier to study experimentally the effects of copy number variation in model unicellular organisms, such as \textit{E. coli} or \textit{S. cerevisiae}; strains of such organisms differing by gene copy number may be relatively easily constructed \cite{kittleson2011rapid,volfson2006origins}. Yet, within the existing mathematical models of gene expression \cite{hornos2005self,friedman2006linking,shahrezaei2008analytical,ochab2010bimodal,bokes2012exact,aquino2012stochastic,tsimring2014noise, ochab2015transcriptional}, usually a single gene copy is considered, and the influence of gene copy number on gene expression is neglected. To the best of our knowledge, there are only a few papers providing a theoretical description of the influence of copy number variation on gene expression \cite{mileyko2008small,volfson2006origins,loinger2009analysis,stewart2010construction,stewart2013under,miekisz2013gene,van2014space}.
In particular, in Ref. \cite{mileyko2008small}, the influence of copy number variation on the gene expression level was studied in the case of four different network motifs, from a simple auto-activated gene (positive feedback) to more complicated, two- and three-gene circuits. This analysis, although thorough and throwing much light on the subject, was nonetheless based on a deterministic approach, so it neglected the molecular noise inherent to biochemical systems as small as living cells. In the present paper, we will focus on how the noise produced by a self-regulating gene depends on the copy number of that gene.
The dependence of gene expression noise on the strength of negative self-regulation of two gene copies was analysed in Refs. \cite{stewart2013under, stewart2010construction}. It was concluded that gene expression noise, measured there by the Fano factor, may prevent the evolution of strong negative auto-regulation in diploid cells, and this was proposed as a possible explanation of the observed difference in abundance of negative auto-regulation between \textit{E. coli} (where negative auto-regulation is a frequently appearing network motif) and \textit{S. cerevisiae} and other eukaryotic species (where it is much less frequent). The authors pointed out that it may also account for the fact that duplicated copies of negatively self-regulating genes are relatively rare in \textit{E. coli}, despite the fact that roughly half of all known transcription factors of \textit{E. coli} take part in negative auto-regulation \cite{stewart2013under}. We will show, however, that the widely used quantitative measures of noise, the Fano factor and the coefficient of variation, may behave in different ways as the gene copy number is varied, so any conclusions about evolutionary selection based on gene expression noise are highly speculative as long as it is not known how natural evolution measures the noise to select for its most advantageous amount.
Volfson et al. studied gene expression variability as a function of gene copy number \cite{volfson2006origins} in five strains of \textit{S. cerevisiae} differing by the number of gene-promoter inserts of the GAL system. They used a simple scaling argument to determine whether the fluctuations in protein concentration were of intrinsic or extrinsic origin. According to the standard distinction between the two types of noise, \textit{intrinsic noise} is defined as a side effect of the specific reactions that result in gene expression, when a small number of molecules takes part in these reactions. On the other hand, \textit{extrinsic noise}, also affecting these reactions, is that produced by some unspecified external processes, e.g., fluctuations in the accessibility of transcriptional machinery or fluctuations of the environment. However, it should be noted that in \cite{volfson2006origins} a tacit assumption was made that, in order for the simple scaling to hold, the gene of interest should not be self-regulating, i.e., if there are any fluctuations of TF concentration affecting the state of the promoter, they are of extrinsic origin. In such a case, the mean protein concentration scales linearly with the gene copy number $G$, whereas the coefficient of variation (standard deviation divided by the mean) scales as $G^{-1/2}$ for purely intrinsic fluctuations and is independent of $G$ for purely extrinsic fluctuations. In the present paper we will show, however, that this scaling cannot be assumed in the case of self-regulating genes, because intrinsic noise in their protein products simultaneously affects their promoters through fluctuations in TF concentration.
We study how the expression of a positively or negatively self-regulating gene (cf. Fig. \ref{fig:figure_01}) depends on gene copy number. We assume that this number does not change during the cell's lifetime and that the gene copies are coupled only by their protein products, which are their own transcription factors (TFs). Another assumption is fast on/off switching of the promoter state, which allows us to describe its regulation by TFs in terms of Hill kinetics; recent experimental observations seem to support this assumption \cite{sherman2015cell,sepulveda2016measurement}. We use the analytical framework proposed in Ref. \cite{friedman2006linking}: The protein is assumed to be produced in exponentially distributed stochastic bursts \cite{cai2006stochastic, yu2006probing}, whereas mRNA, whose dynamics is much faster than that of the protein, is not explicitly present in the model. Analytical expressions for the steady-state distribution of protein concentration can be derived for an arbitrary number of gene copies, not necessarily identical in terms of their affinity for TF, provided all copies are coding for the same protein. We analyse the influence of mutations changing gene copy number and auto-regulation strength on the shape of the steady-state protein probability distribution.
The model analysed here is one of the few analytically tractable stochastic models of multiple gene copies that can be constructed from the single-gene models currently known in the literature. Although the system we study is one of the simplest genetic circuits, we will show further on that it can still produce some behaviours that are unintuitive or have not been associated with this type of gene system, and that the interpretation of its behaviours still gives rise to some confusion when experimental data are analysed in terms of the amount of noise present in the circuit.
In Section \ref{Identical gene copies} we model $G$ identical copies of a self-regulating gene. We note that the two measures of noise in the system, the Fano factor and the coefficient of variation, may behave in a qualitatively different manner as functions of gene copy number. We also point out that experimental data acquired from the one-colour assay \cite{stewart2012cellular}, performed on two gene copies, may be interpreted incorrectly in the case of self-regulating genes. In Section \ref{Two nonequivalent gene copies} we study two non-identical copies of a self-regulating gene, which differ in their auto-regulation strength. We show that such a gene pair can exhibit a mixed, binary-graded response to an external signal, an effect that has not, to date, been associated with gene duplication. We show that the mean expression of two gene copies can scale in a rather unintuitive way as compared to the mean expression of a single gene copy, depending on how much the two copies differ in their auto-regulation strength and on the maximal mean frequency of protein bursts, which may have an impact on evolutionary accumulation or extinction of gene duplications. We also point out possible qualitative differences in the behaviour of the Fano factor and the coefficient of variation in the case of non-equivalent gene copies.
\section{Theory \label{Theory and Model}}
\subsection{Model}
We model $G$ copies of a gene in a cell, $G$ being a fixed parameter. The copies may not be identical, due to mutations in the operator or promoter region of each copy. Still, we assume that the gene product (protein) is identical for all of them. We start from the following scheme:
\begin{eqnarray}
(\text{DNA})_{1} & \xrightarrow{\tilde{k}_{11}(x)} & \nonumber \\
(\text{DNA})_{2} & \xrightarrow{\tilde{k}_{12}(x)} & \nonumber \\
& \hdots & \text{mRNA} \xrightarrow{k_{2}} \text{Protein} \nonumber \\
(\text{DNA})_{G} & \xrightarrow{\tilde{k}_{1G}(x)} &
\label{biochemical reaction scheme synthesis}
\end{eqnarray}
\begin{eqnarray}
\text{mRNA} & \xrightarrow{\gamma_{1}} & \emptyset, ~~~ \text{Protein} \xrightarrow{\gamma_{2}} \emptyset.
\label{biochemical reaction scheme degradation}
\end{eqnarray}
\begin{figure}
\begin{center}
\rotatebox{0}{\scalebox{0.2}{\includegraphics{Figure_01.eps}}}
\end{center}
\caption{ Schematic representation of the system consisting of two self-regulating and mutually regulating gene copies. The copies can differ in their operator-TF affinity. The strength of TF binding to the operators can be modified by signal molecules. Here, positive auto-regulation is shown (as depicted by large arrows), but we also consider the case of negative auto-regulation. More than two gene copies are also considered.}
\label{fig:figure_01}
\end{figure}
mRNA production takes place on each of the $G$ gene copies. Transcription and translation add mRNA and protein molecules to the common pool because these molecules are assumed to be identical for all copies (\ref{biochemical reaction scheme synthesis}). Similarly, the degradation processes of both mRNA and protein are common to the products of all gene copies (\ref{biochemical reaction scheme degradation}).
Both translation as well as mRNA and protein degradation processes (\ref{biochemical reaction scheme degradation}) are treated here as simple first-order reactions, with the rate constants $k_{2}$, $\gamma_{1}$ and $\gamma_{2}$, respectively (see Table \ref{tab:kinetics} in Appendix \ref{Detailed scheme of biochemical reactions}). However, due to auto-regulation (Fig. \ref{fig:figure_01}), transcription rates depend on the protein concentration $x$; the effective rate constants are given by $\tilde{k}_{1j}(x) = k_{1j} h_{j}(x)$, where $k_{1j}$ is the bare rate constant and $ h_{j}(x)$ is the transfer function of the $j$-th gene copy
\begin{eqnarray}\label{eq:h}
h_{j}(x) = (1-\epsilon_j)H_{j}(x) + \epsilon_j, ~~~~~ j =1, 2, \ldots, G.
\label{h general definition}
\end{eqnarray}
$\epsilon_j = k_{1 j \epsilon}/k_{1j}$ is the measure of transcriptional leakage, and thus $\epsilon_j<h_j(x)<1$; $k_{1j}$ has an interpretation of the transcription rate constant for a fully active operator of $j$-th copy, whereas $k_{1 j \epsilon}$ is the corresponding quantity for inactive operator (basal transcription), cf. Appendix \ref{Detailed scheme of biochemical reactions} and Ref. \cite{ochab2015transcriptional}.
The present model assumes that TF binding to the operator is governed by Hill kinetics, i.e. the binding/unbinding rates of a TF molecule to the operator are fast compared to the time scales of other reactions \cite{alon2006introduction, ochab2010bimodal}. In the case of cooperative TF binding, the regulatory function $H_{j}(x)$ in Eq. (\ref{h general definition}) is given by
\begin{eqnarray}
H_{j}(x) &=& \left[1+\left( \frac{x}{K_j} \right)^{n_j}\right]^{-1}.
\label{H general definition cooperative}
\end{eqnarray}
Cooperative TF binding means that the TFs effectively activate/repress the gene only when $n_j$ of them are bound to the operator, e.g. when the TF occurs as multimer or when there are $n_j$ TF binding sites on the operator and each of them, when occupied, makes it more probable for the TF to bind to other binding sites. Cooperativity $n_j$ thus governs the steepness of $H_{j}(x)$, whereas $K_j$ measures the regulation strength of the $j$-th gene copy. $n_j>0$ denotes negative auto-regulation and for $n_j<0$ the feedback is positive. Note that the Hill function $H(x)$ in Eq. (\ref{eq:h}) is multiplied by the ($1-\epsilon$) factor. This is in contrast with the formulation of Ref. \cite{friedman2006linking}, where nonzero leakage introduced only the additive term ($\epsilon$). According to the rules of chemical kinetics, the present formulation is universal (its derivation being explained in detail in \cite{ochab2015transcriptional}) whereas that of Ref. \cite{friedman2006linking} is only valid for small leakage.
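A minimal numerical sketch of the transfer function $h(x)$ and the Hill function $H(x)$ defined above may be helpful here; the parameter values ($K=70$, $\epsilon=0.05$) are illustrative only, with $n>0$ corresponding to a repressor and $n<0$ to an activator:

```python
import math

# The transfer function h(x) = (1-eps)*H(x) + eps with the cooperative
# Hill function H(x) = 1/(1 + (x/K)^n).  K and eps are illustrative.
K, eps = 70.0, 0.05

def H(x, n):
    return 1.0 / (1.0 + (x / K) ** n)

def h(x, n):
    return (1.0 - eps) * H(x, n) + eps

for n in (4, -4):                       # repressor / activator
    for x in (1.0, 70.0, 5000.0):
        assert eps < h(x, n) < 1.0      # leakage bounds the response
# At x = K the gene is half-induced up to leakage: h(K) = (1+eps)/2.
print(h(K, 4), (1.0 + eps) / 2.0)
```

The leakage $\epsilon$ bounds the response strictly away from $0$ and $1$ for any finite $x$, and $x=K$ marks the half-induction point.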
Because usually both the mRNA production and degradation reactions are much faster than the corresponding processes for the protein, mRNA concentration is assumed to be a fast degree of freedom and is eliminated from the model \cite{friedman2006linking,shahrezaei2008analytical} (see also \cite{crudu2009hybrid,crudu2012convergence,bokes2012exact,yvinec2014adiabatic,popovic2016geometric} for detailed studies of time scale reduction from the full kinetic scheme in related models). In effect, protein production takes the form of stochastic bursts of a random size \cite{friedman2006linking}. In the case of $G$ gene copies, the probability $p(x,t)$ that at the time $t$ the protein concentration is equal to $x$ satisfies the Master equation
\begin{eqnarray}
\frac{\partial p(x,t)}{\partial t} &=& \gamma_2 \sum_{j=1}^{G} a_{j} \int_{0}^{x} w(x-x^{\prime}) h_{j}(x^{\prime}) p(x^{\prime},t) dx^{\prime} \nonumber \\
&+& \gamma_{2} \frac{\partial }{\partial x}\left[x p(x,t)\right].
\label{generalised t dependent ME of Friedman}
\end{eqnarray}
In the above, the protein concentration $x \geq 0$ is a continuous variable, $w(u) = \nu(u) - \delta(u)$, where $u = x - x^{\prime}$ is the burst size and $\nu(u) = (1/b)\exp(-u/b)$ is the burst size probability distribution (note that the burst sizes are identically distributed for each gene copy), whereas $a_{j}$ and $b$ are defined by
\begin{eqnarray}
a_{j} &\equiv& \frac{k_{1j}}{\gamma_{2}}, ~~~~~ b \equiv \frac{k_{2}}{\gamma_{1}},
\label{a j and b definitions}
\end{eqnarray}
and $\delta(u)$ is the Dirac delta distribution \cite{friedman2006linking}.
The stationary solution of Eq. (\ref{generalised t dependent ME of Friedman}), with the normalisation constant $A$, follows from Eq. (8) of Ref. \cite{friedman2006linking}: %
\begin{eqnarray}
p(x) &=& A x^{-1} e^{-x/b} \prod_{j=1}^{G} \exp\left[ a_{j} \int \frac{h_{j}(x)}{x} dx\right].
\label{solution of ME of Friedman stationary general}
\end{eqnarray}
In the case of cooperative TF binding, $H_{j}(x)$ is given by Eq. (\ref{H general definition cooperative}) and from Eq. (\ref{solution of ME of Friedman stationary general}) we obtain
\begin{eqnarray}
p(x) &=& A x^{-1} e^{-x/b} \prod_{j=1}^{G} x^{a_{j}} H_j(x)^{\frac{a_{j}(1-\epsilon_j)}{n_{j}}}.
\label{solution of ME of Friedman stationary cooperative}
\end{eqnarray}
(The functional form of $p(x)$ (\ref{solution of ME of Friedman stationary general}) for non-cooperative TF binding is given in Appendix \ref{Noncooperative transcription factor binding}.)
It should be noted that, in the present model, the bursting of each gene copy is a Poisson process, independent of the bursting of all other copies. Thus, their protein production rates are coupled only by the common pool of proteins that regulate the genes as their TFs.
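As a numerical sketch, the stationary density above can be evaluated directly on a grid. The example below (plain Python; all parameter values are illustrative, in the same range as used in the figures) takes two copies that differ only in their regulation strengths $K_j$, normalises the density by direct quadrature, and computes the mean protein level, working with logarithms for numerical stability:

```python
import math

# Sketch: numerical evaluation of the stationary solution p(x) for G = 2
# gene copies that share a, n, eps but differ in regulation strength K_j.
# Parameter values are illustrative (same range as in the figures).
b = 20.0
copies = [dict(a=10.0, K=70.0, n=4, eps=0.05),
          dict(a=10.0, K=140.0, n=4, eps=0.05)]

def log_p_unnorm(x):
    """Logarithm of the unnormalised stationary density (product form)."""
    s = -math.log(x) - x / b
    for c in copies:
        H = 1.0 / (1.0 + (x / c["K"]) ** c["n"])
        s += c["a"] * math.log(x) \
            + (c["a"] * (1.0 - c["eps"]) / c["n"]) * math.log(H)
    return s

N, x_max = 100_000, 1500.0
dx = x_max / N
xs = [dx * (i + 1) for i in range(N)]          # avoid x = 0
m = max(log_p_unnorm(x) for x in xs)           # subtract max before exp
w = [math.exp(log_p_unnorm(x) - m) for x in xs]
Z = sum(w) * dx                                 # normalisation integral
mean = sum(x * wi for x, wi in zip(xs, w)) * dx / Z
print(mean)
```

The same routine works for any number of (possibly non-identical) copies by extending the `copies` list.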
\subsection{Terminology}
In this subsection we briefly explain the meaning of the terms that will be used further on in the paper.
\textit{Influence of external signal on gene regulation.} A TF can bind one or more signalling (effector) molecules (Fig. \ref{fig:figure_01}) or undergo phosphorylation, which changes the TF affinity to the operator \cite{alon2006introduction, ochab2015transcriptional}. In our model, the presence of signalling molecules is taken into account only implicitly, by assuming that the value of $K_j$ in Eq. (\ref{H general definition cooperative}) depends not only on the TF-operator affinity but also on the fraction of active TFs that are able to bind the operator. This fraction depends on the concentration of the signalling molecules (see Appendix \ref{Detailed scheme of biochemical reactions} in this paper and Ref. \cite{ochab2015transcriptional}, Appendix A therein, for details). In other words, $K_j$, which sets the threshold of the Hill function, can be used as a measure of the intensity of an external signal that activates or deactivates the TFs.
\textit{Unimodal vs. bimodal distributions. } A distribution of concentrations of a protein in cell population is unimodal when it has a single maximum and it is bimodal when it has two maxima.
\textit{Graded response vs. binary response. } This concept concerns the changes of the distribution's shape due to variation of the signal intensity that defines the fraction of active TFs able to control the promoter. As the signal level is varied, gene expression varies between its minimal and maximal values. When the protein distribution is unimodal for all signal intensities, such that the signal level only defines the position of the single peak of the distribution, then the response of the gene is graded. On the other hand, the response is binary when the protein distribution changes its shape from unimodal at minimal expression to bimodal at intermediate expression level, and then it settles down again to a fixed unimodal distribution at maximal expression \cite{ochab2015transcriptional}. Further on, we will show that a mixed response is also possible, if, after the unimodal-bimodal-unimodal transition, the distribution does not become fixed but it shifts, now in a graded manner, towards some higher maximum of gene expression.
\section{Results}
In this Section we present the results obtained by numerical evaluation of the probability distributions (\ref{solution of ME of Friedman stationary cooperative}) or their moments.
\begin{figure}[t!]
\begin{center}
\rotatebox{270}{\scalebox{0.3}{\includegraphics{Figure_02a.eps}}}
\rotatebox{270}{\scalebox{0.3}{\includegraphics{Figure_02b.eps}}}
\end{center}
\caption{ Average protein number may depend on gene copy number in a nonlinear manner in self-regulating genes. A: Negative auto-regulation ($n = 4$). B: Positive auto-regulation ($n = -4$). The abrupt increase for $K=700$ and $G=8$ is due to the transition of the protein number distribution through bimodality, cf. Fig. \ref{fig:figure_11_bis} in Appendix \ref{app:pass_bimod_1}. Feedback strength parameter $K=0$ (empty circles), $K=7$ (triangles), $K=70$ (squares), $K=700$ (pentagons), and $K=\infty$ (full circles). Maximum mean burst frequency $a=10$. Mean burst size $b=20$. Leakage $\epsilon=0.05$. Lines are a guide for the eye only. }
\label{fig:figure_07}
\end{figure}
\begin{figure*}[t]
\begin{center}
\rotatebox{0}{\scalebox{0.3}{\includegraphics{Figure_03.eps}}}
\end{center}
\caption{ In self-regulating genes, the Fano factor and the coefficient of variation may depend on gene copy number in a qualitatively different manner. A, B: Negative auto-regulation, $n = 4$. A: Depending on the feedback strength parameter $K$, the Fano factor $F=\sigma^2/\langle x \rangle$ may decrease, increase or vary in a non-monotonic manner as gene copy number $G$ is varied. B: Coefficient of variation $\eta = \sigma/\langle x \rangle$ is a monotonically decreasing function of gene copy number $G$. C, D: Positive auto-regulation, $n = -4$. Here, for $K=700$, the Fano factor $F(G)$ has just one maximum (C), whereas the coefficient of variation $\eta(G)$ has two clear maxima (D). The sharp maximum for $K=700$ and $G=8$ is due to the transition of the protein number distribution through bimodality, cf. Fig. \ref{fig:figure_11_bis} in Appendix \ref{app:pass_bimod_1}. In the absence of gene regulation ($K=0$ and $K=\infty$), $F=b$ and $\eta \sim G^{-1/2}$. For negatively self-regulating genes, $F(G)<b$, and for positive auto-regulation, $F(G)>b$. Parameters and the corresponding symbols are the same as in Fig. \ref{fig:figure_07}.}
\label{fig:figure_09}
\end{figure*}
\subsection{Identical gene copies \label{Identical gene copies}}
The assumption of equivalent gene copies is legitimate when the differences between the local genetic contexts (the neighbourhoods of the individual gene copies) are negligible, and in the case of some engineered genetic circuits \cite{kittleson2011rapid}.
\subsubsection{The maximum burst frequency scales linearly with gene copy number }\label{subsubsec:a}
From Eq. (\ref{solution of ME of Friedman stationary general}) it follows that if the regulatory functions of each gene are identical, $h_j(x) = h(x)$, then their burst frequencies simply add up. In particular, if the whole gene (i.e. its protein-coding and regulatory parts) is present in $G$ copies such that the maximum burst frequency $a_j = a$, then the system is equivalent to a single copy of a gene, with the parameter re-scaling:
\begin{equation}\label{scaling of the a parameter}
a \to G a.
\end{equation}
For self-regulating genes, the probability density function for the protein number reads therefore
\begin{eqnarray}
p_G(x) &=& A x^{G a - 1} e^{-x/b} \left [ 1+\left(\frac{x}{K} \right)^{n}\right]^{- \frac{ G a(1-\epsilon)}{n}}.
\label{solution of ME of Friedman stationary cooperative G-degenerate}
\end{eqnarray}
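The rescaling can be verified directly. In the sketch below (plain Python; the parameter values are illustrative), the log-density built from the product over $G$ identical copies coincides pointwise with the log-density of a single copy with burst frequency $Ga$:

```python
import math

# Check of the rescaling a -> G a: the unnormalised stationary density
# of G identical copies (product form) equals that of a single copy
# with burst frequency G*a.  Parameter values are illustrative.
a, b, K, n, eps, G = 10.0, 20.0, 70.0, 4, 0.05, 4

def log_factor(x, a_j):
    # contribution of one copy: x^(a_j) * H(x)^(a_j*(1-eps)/n)
    return a_j * math.log(x) \
        - (a_j * (1.0 - eps) / n) * math.log(1.0 + (x / K) ** n)

def log_p(x, freqs):
    # x^(-1) * exp(-x/b) * product of per-copy factors, in log form
    return -math.log(x) - x / b + sum(log_factor(x, aj) for aj in freqs)

for x in (1.0, 30.0, 70.0, 150.0, 400.0):
    lhs = log_p(x, [a] * G)        # G identical copies
    rhs = log_p(x, [G * a])        # one copy with a -> G*a
    assert abs(lhs - rhs) < 1e-9
```

The identity holds exactly, since the per-copy factors multiply (their logarithms add) into a single factor with $a \to Ga$.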
\subsubsection{Non-linear scaling of the average protein number, Fano factor and coefficient of variation with gene copy number}
Cell fitness may depend not only on protein concentration, but also on the level of gene expression noise. Two standard quantitative measures of gene expression noise, the Fano factor $F = \sigma^2/\mu_1$ and the coefficient of variation $\eta = \sigma/\mu_1$, are used interchangeably in the literature \cite{volfson2006origins, friedman2006linking, cai2006stochastic, yu2006probing}. $F$ is a natural quantity to measure the deviation of a given probability distribution from the Poisson distribution, for which $F=1$. Under the assumption of no extrinsic noise, for unregulated gene expression $h(x)=const.$ and $p_G(x)$ is then a gamma distribution \cite{friedman2006linking}. The mean protein number and the measures of gene expression noise then have a simple dependence on $G$: $\langle x \rangle \sim G^{1}$, $\eta \sim G^{-\frac{1}{2}}$, $F \sim G^{0}$ \cite{volfson2006origins}.
Self-regulation leads to deviations from the above scaling. The dependence of the average protein number $\langle x \rangle$ on gene copy number $G$ is in general non-linear (Fig. \ref{fig:figure_07}). The Fano factor $F$ is no longer independent of $G$ (Figs. \ref{fig:figure_09}A,C); the coefficient of variation $\eta$ no longer scales like $G^{-1/2}$ (Figs. \ref{fig:figure_09}B,D).
Strikingly, the influence of gene copy number $G$ on gene expression noise depends on the particular measure of noise (Fig. \ref{fig:figure_09}). In some of the considered cases $F$ and $\eta$ exhibit qualitatively different dependences on $G$: For example, in negatively self-regulating genes, $\eta$ may decrease while at the same time $F$ is an increasing, decreasing or even non-monotonic function of $G$ (Fig. \ref{fig:figure_09}A). For positive auto-regulation, $\eta(G)$ may have two maxima whereas $F(G)$ has just one clear maximum (Fig. \ref{fig:figure_09}C,D, $K=700$). In the case of positive auto-regulation, it is in accordance with intuition that the abrupt changes shared by both measures of noise, $F(G)$ and $\eta(G)$, are associated with the transition of the protein number distribution through bimodality (cf. Fig. \ref{fig:figure_11_bis} in Appendix \ref{app:pass_bimod_1}). However, other cases of non-monotonic behaviour of $F(G)$ and $\eta(G)$, including those for negative auto-regulation, are non-intuitive. Since the behaviours of the two measures of noise may differ quite significantly, statements like `gene duplication increases the noise of the protein distribution' are meaningless until a particular measure of noise is chosen.
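The negative auto-regulation case can be checked with a short numerical sketch (plain Python; the parameter values correspond to the $K=70$ curve of the figures, $a=10$, $b=20$, $\epsilon=0.05$, $n=4$), which computes $F(G)$ and $\eta(G)$ by direct quadrature of $p_G(x)$:

```python
import math

# Fano factor F = var/mean and coefficient of variation eta = std/mean
# of p_G(x) for a negatively self-regulating gene (illustrative values).
a, b, K, n, eps = 10.0, 20.0, 70.0, 4, 0.05

def noise_measures(G, x_max=2500.0, N=100_000):
    dx = x_max / N
    def logp(x):   # log of the unnormalised p_G(x)
        return (G * a - 1.0) * math.log(x) - x / b \
            - (G * a * (1.0 - eps) / n) * math.log(1.0 + (x / K) ** n)
    xs = [dx * (i + 1) for i in range(N)]
    m = max(logp(x) for x in xs)
    w = [math.exp(logp(x) - m) for x in xs]
    Z = sum(w) * dx
    mean = sum(x * wi for x, wi in zip(xs, w)) * dx / Z
    var = sum((x - mean) ** 2 * wi for x, wi in zip(xs, w)) * dx / Z
    return var / mean, math.sqrt(var) / mean

results = {}
for G in (1, 2, 4, 8):
    F, eta = noise_measures(G)
    results[G] = (F, eta)
    # Negative feedback keeps F below the unregulated value F = b,
    # while eta decreases with G (cf. Fig. 3A,B).
    print(G, round(F, 2), round(eta, 3))
```

For this feedback strength the computed Fano factors stay below the unregulated value $F=b$, while $\eta$ decreases with $G$, in line with Fig. \ref{fig:figure_09}A,B.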
\begin{figure*}[t]
\begin{center}
\rotatebox{0}{\scalebox{0.35}{\includegraphics{Figure_04.eps}}}
\end{center}
\caption{ {Combined effect of transcription-factor noise in self-regulating genes and global extrinsic noise may cause zero covariance between the expression of one and two gene copies, which means that the covariance is not a good indicator of the presence of extrinsic noise, e.g. in one-reporter assay as presented in \cite{stewart2012cellular}. A: Extrinsic noise modelled by gamma-distributed fluctuations in mean protein burst size, $b$, e.g. due to a variable concentration of ribosomes. The distribution $g(b)$ (Eq. (\ref{eq:gamma_b})) has parameters at which the covariance (\ref{eq:cov}) is approximately equal to 0: $\langle b \rangle=20$, $k=5.5$, $\theta=\langle b \rangle / k$, $(var(b))^{1/2}=\theta k^{1/2}$. B: Protein distributions with the contribution of the extrinsic noise $g(b)$ for a single and duplicated self-regulating gene ($q_1(x)$ and $q_2(x)$, Eq. (\ref{eq:q}), solid lines). The clear bimodality of $q_2(x)$ is the effect of strongly bimodal contributions for some values of $b<20$. For comparison, protein distributions for zero extrinsic noise are shown ($p_1(x)$ and $p_2(x)$ with non-fluctuating $b=20$, dashed lines). Parameters: $a=10$, $K=70$, $\epsilon=0.05$, $n=-4$. C: Covariance between the expression of one and two gene copies (Eq. (\ref{eq:cov}), re-scaled by the mean protein number squared), as a function of varying parameters $k$ and $\theta$ in $g(b)$, such that the mean value of $b$ is fixed: $\langle b \rangle=k \theta=20$. The arrow indicates the level of extrinsic noise shown in Fig. A and by solid lines in Fig. B, where the fluctuations of $b$ compensate the transcription-factor noise, so that the covariance is very close to zero. This example shows that zero covariance does not imply that the extrinsic noise is zero. }}
\label{fig:figure_15}
\end{figure*}
\subsubsection{Interpretation of the one-reporter assay} In a large-scale experiment, Stewart-Ornstein et al. \cite{stewart2012cellular} measured the contribution of extrinsic noise to the expression of \textit{S. cerevisiae} genes using the one-reporter assay. The classical two-reporter assay \cite{elowitz2002stochastic} consists in measuring the expression of two reporter genes that produce fluorescent proteins of different colours; the correlation between their fluorescence levels provides information about the intensity of the extrinsic noise that affects both promoters globally. The concept of the one-reporter assay, instead, consists in comparing the statistics of the expression level $x_1$ of a single reporter gene with the statistics of the expression level $x_1+x_2$ of two copies of that same reporter gene, both producing identical fluorescent proteins. Extrinsic noise was defined in \cite{stewart2012cellular} as $[{ cov(x_1,x_2) / ( \langle x_1 \rangle \langle x_2 \rangle)}]^{1/2}$, where the covariance is defined by the variances of the expression of a single gene copy and of two identical gene copies \cite{volfson2006origins}:
\begin{eqnarray} \label{eq:cov}
cov(x_1,x_2) = [ var(x_1+x_2) - 2 var(x_1) ]/2,
\end{eqnarray}
assuming that $\langle x_1\rangle=\langle x_2\rangle$ and $var(x_1) = var(x_2)$, because the products of the two gene copies, and the copies themselves, are identical. However, this definition of extrinsic noise becomes problematic if applied to \textit{self-regulating} genes. Here, $var(x_1+x_2) \neq 2 var(x_1)$ even in the absence of any extrinsic factors affecting both gene copies globally. Moreover, it is possible that $cov(x_1,x_2)<0$, which would yield the square root of a negative number as the value of extrinsic noise defined according to \cite{stewart2012cellular}. In Fig. \ref{fig:figure_13}, Appendix \ref{app:neg_cov}, we show examples of negative covariance produced by negatively and positively self-regulating genes.
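Eq. (\ref{eq:cov}) can be checked numerically: for two statistically identical copies, $[var(x_1+x_2) - 2\,var(x_1)]/2$ recovers the covariance estimated directly. A minimal Python sketch; the simulated shared fluctuation and all numerical values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two exchangeable "gene copy" expression levels sharing a global
# (extrinsic-like) fluctuation z; all numbers are illustrative.
z = rng.gamma(shape=5.0, scale=4.0, size=n)   # shared component, var = 5*4^2 = 80
x1 = z + rng.normal(0.0, 3.0, size=n)
x2 = z + rng.normal(0.0, 3.0, size=n)

# Eq. (cov): cov(x1, x2) = [var(x1 + x2) - 2 var(x1)] / 2,
# valid when the copies are statistically identical.
cov_eq = (np.var(x1 + x2) - 2.0 * np.var(x1)) / 2.0
cov_direct = float(np.cov(x1, x2)[0, 1])
print(cov_eq, cov_direct)   # both close to var(z) = 80
```

The two estimates agree up to sampling error, and both equal the variance of the shared component, which is the usual signature of a global extrinsic factor.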
It should be noted that the occurrence of negative covariance is, in itself, nothing unusual. What is problematic is the definition of extrinsic noise as measured by the covariance, because it implies that zero covariance should indicate zero extrinsic noise. We therefore argue that the interpretation of experimental results from the one-reporter assay in terms of intrinsic and extrinsic noise must be done with caution: A distinction is needed between (i) extrinsic noise as a factor \textit{external to the promoter only}, which affects the state of the promoter, e.g. through the concentration of TFs \cite{ochab2010bimodal}, even if these TFs are produced by the gene they regulate, and (ii) extrinsic noise as a global factor affecting both gene copies \textit{independently of their expression} (e.g., the variability in concentration of RNA polymerases or ribosomes).
One can, for example, imagine that two gene copies are self-regulating, which causes negative covariance of their expression, but are simultaneously affected by a global noise source that increases the covariance, in such a way that the total covariance sums to zero. Interpreting this result using the definition of extrinsic noise as measured by covariance, proposed by Stewart-Ornstein et al. \cite{stewart2012cellular}, would lead to the erroneous conclusion that these genes are not affected by extrinsic noise at all.
We show an example of such a situation in Fig. \ref{fig:figure_15}, where one and two copies of a positively self-regulating gene are additionally affected by global fluctuations in the mean size of a protein burst $b$, e.g. due to varying ribosome concentration in cells. We assume that $b$ is gamma distributed,
\begin{equation} \label{eq:gamma_b}
g(b) = \frac{b^{k -1} e^{-b/\theta}}{\theta^k \Gamma(k)},
\end{equation}
with $\langle b \rangle = k \theta$ and $var(b) = k \theta^2$. Then, the distribution of proteins in the cell population is \cite{taniguchi2010quantifying}
\begin{equation} \label{eq:q}
q_G(x) = \int_0^\infty p_G(x,b) g(b) db.
\end{equation}
At a certain width of the distribution of $b$, the covariance $cov(x_1,x_2)$ between the expression of one and two gene copies is zero, because the global fluctuations in the ribosome concentration compensate the negative covariance that results from self-regulation. The fact that $cov(x_1,x_2) =0$ does not imply here that extrinsic noise is absent.
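The compensation scenario can be explored numerically. The sketch below builds the mixture (\ref{eq:q}) with the gamma weight (\ref{eq:gamma_b}) and checks its normalisation and mean; since the regulated $p_G(x,b)$ is defined earlier in the paper, the unregulated Friedman-model limit, a Gamma$(a,b)$ protein distribution, is used here as an assumed stand-in conditional:

```python
import numpy as np
from math import gamma as gamma_fn

trapz = getattr(np, "trapezoid", None) or np.trapz  # NumPy 2.x renamed trapz

a = 10.0                     # maximum mean burst frequency (paper's a)
k, theta = 5.5, 20.0 / 5.5   # gamma parameters of b: <b> = k*theta = 20

def gamma_pdf(y, shape, scale):
    return y**(shape - 1) * np.exp(-y / scale) / (scale**shape * gamma_fn(shape))

# g(b), Eq. (gamma_b): extrinsic fluctuations of the mean burst size
b = np.linspace(1e-3, 100.0, 1500)
g = gamma_pdf(b, k, theta)

# Stand-in conditional p(x|b): the unregulated Friedman-model limit,
# a Gamma(a, b) protein distribution (the paper's regulated p_G is richer).
x = np.linspace(1e-3, 1500.0, 3000)
q = trapz(gamma_pdf(x[:, None], a, b[None, :]) * g[None, :], b, axis=1)

norm = trapz(q, x)
mean = trapz(x * q, x)
print(norm, mean)  # ~1 and ~ a*<b> = 200
```

The mixture stays normalised and its mean is $a\langle b\rangle$, while its shape broadens relative to the fixed-$b$ case, which is the mechanism by which $g(b)$ adds positive covariance between copies.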
\subsection{Two non-equivalent gene copies\label{Two nonequivalent gene copies}}
We now turn to the situation when the promoters or operators of different gene copies are not identical. This may happen due to mutational changes in one of the initially identical copies, or due to mutations leading to duplication of an incomplete gene with missing fragments of the regulatory parts. Gene duplication may also result in two copies which are non-equivalent due to their different neighbourhoods (different genetic contexts).
For simplicity, we confine our attention to two gene copies ($G=2$). We are interested in the effects of mutations affecting TF affinity to the operator region of one of the two copies, such that $K_1 \neq K_2$. For cooperative TF binding, assuming identical $b$ for both copies, the steady-state distribution of protein concentration is given by Eq. (\ref{solution of ME of Friedman stationary cooperative}) for $G=2$:
\begin{equation}
p(x) = A e^{-x/b} x^{a_{1}+a_{2}-1} [H_{1}(x)]^{\frac{a_{1}(1-\epsilon_1)}{n_{1}}} [H_{2}(x)]^{\frac{a_{2}(1-\epsilon_2)}{n_{2}}}.
\label{p(x) for nonequivalent copies}
\end{equation}
In contrast to the case of equivalent gene copies, $p(x)$ (\ref{p(x) for nonequivalent copies}) cannot be obtained from the single gene copy case $p_1(x)$ (\ref{solution of ME of Friedman stationary cooperative G-degenerate}) just by the simple scaling (\ref{scaling of the a parameter}), even for $n_1=n_2\equiv n$, $a_1=a_2\equiv a$, and $\epsilon_1=\epsilon_2$ (these equalities will hold further on). In the following considerations, we have chosen example values of parameters, $n=\pm 4$ and $b=20$.
\begin{figure}
\begin{center}
\rotatebox{0}{\scalebox{0.68}
{\includegraphics{Figure_05.eps}}}
\end{center}
\caption{ {When two positively self-regulating gene copies have different sensitivities to TF, the geometric construction (A) may predict a mixed, binary$+$graded, response (B). Binary response is seen for the distribution peaks in the range $0 < x < 200$, and graded response for $200 < x < 400$. Parameters: $n=-4$, $a=10$, $b=20$, $\epsilon_1= \epsilon_2=0.07$. }}
\label{fig:figure_21}
\end{figure}
\begin{figure}
\begin{center}
\rotatebox{0}{\scalebox{0.68}
{\includegraphics{Figure_06.eps}}}
\end{center}
\caption{ { {Each of the genes, whose collective behaviour was shown in Fig. \ref{fig:figure_21}, has a binary response when present in the cell in a single copy. Parameters are the same as in Fig. \ref{fig:figure_21}. In Fig. B, the orange, yellow, green and blue curves overlap. In Fig. D, the yellow, orange and red curves overlap. }}}
\label{fig:figure_22}
\end{figure}
\subsubsection{Non-equivalent copies of a self-regulating gene may have a mixed binary$+$graded response to a signal.}
A well-known fact is that the distribution of protein concentrations produced by a positively self-regulating gene can be bimodal \cite{friedman2006linking,mackey2011molecular,ochab2015transcriptional,Pajaro2015}. Thus, the conditions for bimodality also hold for $G$ identical copies of such a gene, with the re-scaling (\ref{scaling of the a parameter}). For easier visualisation of the parameter regions where bimodality is present, one can use a geometric construction that indicates the number and positions of the extrema of $p_G(x)$ \cite{ochab2015transcriptional}:
\begin{equation}
h(x) =\frac{1}{a b G} x + \frac{1}{a G}.
\label{condition for extrema of of ME of Friedman stationary cooperative G-degenerate}
\end{equation}%
The positions of the intersections of the transfer function $h(x)$ with the straight line correspond to the positions of the extrema of the distribution. If the construction shows two intersections of $h(x)$ and the straight line, then there is one additional maximum at $x=0$. It should be noted, however, that even if the geometric construction predicts the existence of multiple extrema, they may not always be clearly visible in the plotted distributions: One maximum may be much smaller than the other even if the points of intersection seem well separated in the plot of the geometric construction. A particular example illustrating such a situation of apparent unimodality is presented in Fig. \ref{fig:figure_11_bis}, Appendix \ref{app:pass_bimod_1}. Yet, the geometric construction is a convenient tool for gaining a qualitative understanding of the system's behaviour (see also \cite{Pajaro2015} for a more detailed analysis of distribution properties based on the construction). The construction turns out to be especially instructive for the case of two non-equivalent gene copies: Now, it contains the regulatory functions of both genes,
\begin{eqnarray}
\frac{1}{ab} x + \frac{1}{a} &=& h_{1}(x)+h_{2}(x),
\label{nondeg SDGC}
\end{eqnarray}
and it allows us to visualise an example of a nontrivial response of the two-gene system to a varying signal that modifies the TF binding strength \cite{ochab2015transcriptional}. In Fig. \ref{fig:figure_21}, the sensitivities of both gene copies to TF differ by a factor of 16, which is reflected by the corresponding ratio of $K_1$ to $K_2$. An external signal {of a certain intensity}, e.g. the presence of a certain concentration of a ligand that binds to TF, or phosphorylation of a certain fraction of TFs, changes both coefficients $K_1$ and $K_2$ proportionally, which changes the steepness of the regulatory functions, $h_{1}(x)+h_{2}(x)$. For positive self-regulation, the geometric construction predicts that when both gene copies have different sensitivities to TF, a mixed response to the signal is possible, which is a combination of binary and graded responses. First, the more sensitive copy responds to the signal in a binary manner: The probability mass is transferred between two peaks of $p(x)$. But when the binary response is over, i.e., $p(x)$ becomes again unimodal, gene expression does not saturate at a fixed level. Instead, the single peak moves towards an even higher expression level, now reflecting the (graded) response of the second, less sensitive, gene copy. {One might naively expect that this hybrid behaviour occurs because one of the genes has a graded response and the other has a binary response. However, this is not the case: In our example, each gene has a binary response when present in the cell in just one copy (Fig. \ref{fig:figure_22}).} The mixed type of cellular response was experimentally found in different contexts \cite{ruf2006mixed, porpiglia2012stat5}, but, to date, it {has not been} associated with gene duplication.\\
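The transition behind the binary part of the response can be illustrated by counting intersections in the construction (\ref{nondeg SDGC}) as a signal rescales both binding constants. The Hill form of $h_i$ and the baseline $K$ values below are illustrative stand-ins (the paper's exact $h_i$ is defined earlier), with $a$, $b$ and $\epsilon$ taken from Fig. \ref{fig:figure_21}:

```python
import numpy as np

a, b, eps = 10.0, 20.0, 0.07          # values from Fig. (figure_21)
K1_0, K2_0 = 70.0, 16 * 70.0          # illustrative baseline sensitivities, ratio 16

# Stand-in increasing Hill function for positive auto-regulation
# (rises from eps to 1; the paper's exact h_i is defined earlier in the text).
def h(x, K):
    s = (x / K) ** 4
    return (eps + s) / (1.0 + s)

def n_extrema(signal):
    """Count intersections of h1 + h2 with the line x/(ab) + 1/a
    when a signal rescales both binding constants proportionally."""
    K1, K2 = signal * K1_0, signal * K2_0
    x = np.linspace(0.0, 600.0, 60_001)
    f = h(x, K1) + h(x, K2) - (x / (a * b) + 1.0 / a)
    return int(np.sum(np.diff(np.sign(f)) != 0))

print(n_extrema(0.1), n_extrema(1.0))
```

With these stand-in parameters the count changes from one intersection (unimodal regime) at strong binding to three (bimodal regime) at the baseline, which is the kind of transition that underlies the binary part of the mixed response.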
\subsubsection{ Evolutionary accumulation of gene duplications may non-trivially depend on the inherent frequency of bursting of a given gene \label{subsubsec:mean} }
Since both copies of the self-regulating gene also regulate each other, the mean expression $\langle x_1 + x_2 \rangle$ of the two copies is, in general, not equal to twice the mean expression $\langle x_1 \rangle$ in cells where just a single gene copy is present. The ratio $\langle x_1 + x_2 \rangle/\langle x_1 \rangle$ depends on the regulation strengths $K_1$, $K_2$ of both genes but also on the parameter $a$, which describes the maximum mean burst frequency and can be considered, alongside the Fano factor and the coefficient of variation, as another measure of the noise present in the system -- the larger $a$, the closer the behaviour of the system is to the deterministic model \cite{ochab2015transcriptional}. Fig. \ref{fig:mean_negative_reg} shows the behaviour of $\langle x_1 + x_2 \rangle/\langle x_1 \rangle$ as a function of $K_2/K_1$ for an example set of parameters. For a given $a$, the function increases (for auto-repression) or decreases (for auto-activation), but the magnitude and threshold of this change vary with $a$ in a rather unintuitive way. One can, on the other hand, see that $\langle x_1 + x_2 \rangle/\langle x_1 \rangle$ depends non-monotonically on the noisiness of the system, measured by $1/a$. In Fig. \ref{fig:mean_a} we plot cross-sections of Fig. \ref{fig:mean_negative_reg} as functions of $a$. Based on Fig. \ref{fig:mean_negative_reg}, we can discuss two scenarios of gene duplication:
1. The gene copies are identical (green lines in Fig. \ref{fig:mean_a}). In general, gene duplication increases total gene expression, but by a different factor, depending on regulation type and $a$. Duplication of a negatively self-regulating gene (Fig. \ref{fig:mean_a}A, green line) causes a smaller change in its expression than duplication of a positively self-regulating gene (Fig. \ref{fig:mean_a}B, green line). This effect is easy to explain intuitively: in the former case, additional repressors are produced, in contrast to additional activators in the latter case. In the case of auto-activation, there is a value of $a$ at which the increase of expression is maximal. This corresponds to activation of the previously inactive gene due to duplication. The effect can be intuitively visualised using the geometric construction, even though it shows extrema of the protein number distributions and not their means: Since the slope of $x/(ab) + 1/a$ in Eq. (\ref{nondeg SDGC}) depends on $a$, this parameter changes the relative distance between the points of intersection of the straight line with $h_1(x)$ and with $h_1(x)+h_2(x)$ (see Appendix \ref{app:means}, Fig. \ref{fig:activation_duplication}).
If there exists a threshold of natural selection above which gene expression is too high and the cell is eliminated, then only those duplications will remain for which the expression is increased the least. Thus, duplications of those auto-repressed genes whose noisiness, measured by $1/a$, is low have a greater chance to survive (Fig. \ref{fig:mean_a}; see also the geometric construction in Appendix \ref{app:means}, Fig. \ref{fig:neg_a}). On the other hand, for auto-activation, the duplications of genes with a small or large, but not intermediate, $a$ have a greater chance to survive; this corresponds to situations when either the duplication does not suffice to induce the genes, or when the gene has already been induced before duplication.
2. The gene copies differ in their operator-TF affinity parameters $K_i$, which can occur when the new copy is placed in a different genetic context than the original (e.g. where the exposition of the operator to TFs is better or worse) or when the promoter is not fully copied. Changes lowering the operator-TF affinity ($K_2>K_1$) are more probable. Intuitively, one can predict that a gene copy with such a defective operator will increase the total expression in the case of auto-repression, or decrease it in the case of auto-activation, as compared to a perfect duplication. Interestingly, however, when the copies of a negatively self-regulating gene differ in their affinities to TF to a sufficiently large extent, an optimal value of $a$ occurs at which their expression increases the least as compared to the expression of a single gene copy (Fig. \ref{fig:mean_a}A, blue and magenta lines; see also the geometric construction in Appendix \ref{app:means}, Fig. \ref{fig:neg_a2}). This means that survival of a defective duplication is more probable for a rather small $a$ (around its optimal value), in contrast to the case of a perfect duplication, where large $a$ increased the probability of survival.
On the other hand, in the case of auto-repressed genes, evolution may lead to accumulation of those rare cases of duplications in which the operator-TF affinity is increased (i.e., $K_2$ decreased). For auto-activated genes, such increase would not be evolutionarily preferred.
Accumulation of gene duplications may thus depend not only on the type of regulation (negative/positive) but also on the amount of noise in the system, measured by the maximum mean burst frequency $a$. This dependence is, however, non-trivial because it may be different for perfect and imperfect duplications.
\begin{figure}[t!]
\begin{center}
\rotatebox{0}{\scalebox{0.3}{\includegraphics{Figure_07a.eps}}}
\rotatebox{0}{\scalebox{0.3}{\includegraphics{Figure_07b.eps}}}
\end{center}
\caption{Relative change in the average protein concentration before and after gene duplication, as a function of the relative affinity of both genes for TF, $K_2/K_1$. A: Negative auto-regulation, $n=4$. B: Positive auto-regulation, $n=-4$. Other parameters: $b=20$, $K_1=70$, and $\epsilon_1=\epsilon_2=0.05$. Horizontal dashed lines mark the levels of $1$ (green) and $2$ (blue) for comparison.}
\label{fig:mean_negative_reg}
\end{figure}
\begin{figure}[t!]
\begin{center}
\rotatebox{0}{\scalebox{0.615}{\includegraphics{Figure_08.eps}}}
\end{center}
\caption{Relative change in the average protein concentration before and after gene duplication, as a function of the maximum mean burst frequency $a$, for various values of $K_2/K_1$. A: Negative auto-regulation, $n=4$. B: Positive auto-regulation, $n=-4$. Parameters and the meaning of the horizontal dashed lines are the same as in Fig. \ref{fig:mean_negative_reg}.}
\label{fig:mean_a}
\end{figure}
\begin{figure*}[t!]
\begin{center}
\rotatebox{0}{\scalebox{0.3}{\includegraphics{Figure_09.eps}}}
\end{center}
\caption{Two non-equivalent copies of a negatively (A,B) and positively (C,D) self-regulating gene: Different measures of noise, Fano factor $F$ and coefficient of variation $\eta$, may show differences in their behaviour as functions of the relative sensitivity $K_2/K_1$ of both promoters to auto-regulation. For negative auto-regulation, $n = 4$ (A,B), the positions and depth of minima are different for $F$ and $\eta$. For positive auto-regulation, $n = -4$ (C,D), the maxima of both measures of noise roughly correspond to the transition through bimodal distributions, see Fig. \ref{fig:figure_25_bis}. The exact positions and height of the maxima are, however, different for $F$ and $\eta$. Additionally, for both positive and negative auto-regulation, $F$ varies non-monotonically with $a$, whereas the dependence of $\eta$ on $a$ is monotonic. Parameters: $b = 20$, $K_1 = 70$, $\epsilon_1 = \epsilon_2 = 0.05$. }
\label{fig:figure_25}
\end{figure*}
\subsubsection{Fano factor and coefficient of variation behave differently as the function of TF-operator affinity}
According to earlier findings, two identical copies of negatively self-regulating genes are characterised by smaller gene expression noise than heterozygotes ($K_2 \neq K_1$) \cite{stewart2010construction}. However, a careful analysis of the behaviour of our model at some exemplary values of parameters (Fig. \ref{fig:figure_25}) shows that this rule is not universal. We observe that the Fano factor $F$ and the coefficient of variation $\eta$ may behave differently as functions of the relative promoter sensitivity $K_2/K_1$, depending on the maximal burst frequency $a$. In the case of two copies of a negatively self-regulating gene (Fig. \ref{fig:figure_25}A,B), for large $a$, the Fano factor has minima in the vicinity of the homozygous case, $K_2=K_1$. For small $a$, the minimum of $F$ occurs when there is a two-fold difference in sensitivity between the promoters (for $a=1$, $K_1=70$, $K_2 \approx 35$). On the other hand, the coefficient of variation shows shallower minima, and their positions differ from those for the Fano factor. For small noise, the minimum of $\eta$ also occurs at a two-fold difference in sensitivity between the promoters, but at different values than in the case of $F$ (for $a=10$, $K_1=70$, $K_2 \approx 140$). If gene pairs with $K_2 \ll K_1$ and $K_2\gg K_1$ are compared, large differences in the Fano factor occur for large $a$, but, at the same time, large differences in the coefficient of variation occur for small $a$. In the case of two copies of a positively self-regulating gene (Fig. \ref{fig:figure_25}C,D), both measures of noise have maxima (roughly corresponding to the transition through bimodal distributions, see Fig. \ref{fig:figure_25_bis} in Appendix \ref{app:pass_bimod_2}), but again their positions differ for $F$ and $\eta$. In the case of the coefficient of variation, the maxima are less pronounced and they disappear above some value of $a$, which is, however, different than for the maxima of $F$.
We also note that, for both negatively and positively self-regulating genes, coefficient of variation varies monotonically with $a$, whereas the behaviour of Fano factor is non-monotonic.
Therefore, if we attempted to draw any conclusions from these examples about evolutionary optimisation of promoter sensitivity with respect to noise, these conclusions would differ depending on whether they are based on the behaviour of the Fano factor or of the coefficient of variation. If one of two initially identical genes undergoes mutation, these two measures of noise would give different predictions as to what type of mutation is more beneficial: the one increasing or the one decreasing the sensitivity of the promoter.
\section{Discussion}
\subsection{Conclusions}
In the present paper, we have studied the influence of gene copy number, auto-regulation strength, and transcriptional leakage on the properties of the simplest genetic circuit, a self-regulating gene.
Although this genetic circuit is extremely simple, the analysis of how it behaves depending on the number of gene copies may be crucial for correct interpretation of experimental results. In a large-scale experiment (456 genes), Stewart-Ornstein et al. \cite{stewart2012cellular} used the one-reporter assay to measure covariance between the expression of single genes and two identical copies of those genes in \textit{Saccharomyces cerevisiae}. Adopting the ideas of Volfson et al. \cite{volfson2006origins} who studied non-auto-regulated genes, the authors of \cite{stewart2012cellular} interpreted the covariance as a measure of extrinsic noise affecting the genes. However, if the studied genes were self-regulating, this interpretation would break down because, as we have shown in the present paper, the transcription-factor noise may cause negative covariance, whereas global extrinsic noise, e.g. due to cell-to-cell differences in ribosome concentration, may compensate it, such that the total covariance is zero. In that case, the interpretation proposed in \cite{stewart2012cellular} would lead to an erroneous conclusion that the genes are not affected by extrinsic noise.
{An obvious observation is that, within the studied model, a system of multiple \textit{identical} gene copies can be equally well interpreted as a single ``super-gene'', whose transfer function $h(x)$ is the same as in a single gene copy whose transcription rate is accordingly multiplied. On the other hand, when the gene copies are \textit{non-identical} due to promoter mutation affecting the TF binding, the effective transfer function has a nontrivial shape {(a case that is rather difficult to interpret in terms of some molecular mechanisms affecting a single ``super-gene'')}. We have shown that this may lead to a mixed, binary$+$graded response of the gene system to external signal modulating the TF activity: In a certain range of the signal, the histogram of gene expression is bimodal with the height of the peaks varying as the signal is varied, but when that range is exceeded, the gene expression does not saturate. Instead, a single peak gradually changes its position as the signal intensity is further increased. {This behaviour is the result of mutual regulation of both genes: It may occur even if each of the genes alone has a binary response when present in the cell in a single copy.} The hybrid response was observed in different cellular contexts (nuclear phosphorylated ERK as well as Egr1 and its mRNA, induced by gonadotropin-releasing hormone in L$\beta$T2 mouse cells \cite{ruf2006mixed}, phosphorylated Stat5 induced by erythropoietin in foetal erythroblasts \cite{porpiglia2012stat5}). However, to date, this type of response { has not been} associated with gene duplication.}
Our analysis of the relative change in gene expression before and after gene duplication suggests that the evolutionary survival of additional gene copies may depend not only on whether the auto-regulation is negative or positive, but also on the amount of noise in the system, measured by the inherent maximal mean burst frequency $a$ of a given gene. The dependence for perfect duplications (identical gene copies) may be different than for defective duplications (the operator of the new copy having a lower affinity for TF): In the case of perfect duplications of auto-repressed genes, there may be a preference for the accumulation of such duplications when the genes are characterised by a high burst frequency $a$. On the other hand, some cases of defective duplications may survive when the genes have an optimal, low burst frequency $a$. In the case of auto-activated genes, evolution may avoid accumulation of duplications of those genes for which $a$ is in an intermediate regime, because such duplications of an uninduced gene may lead to exceeding of the induction threshold. Finally, there may also be a (more obvious) preference for those rare cases of duplications of auto-repressed genes in which the operator-TF affinity is increased, whereas such an increase would not be preferred in the case of auto-activated genes. The above predictions can be tested experimentally by checking which types of gene duplications (perfect or imperfect) and in what types of genes (the ones with frequent or infrequent protein bursts) tend to accumulate in the course of evolution.
In order to investigate gene expression noise, we have computed two standard measures of noise (the Fano factor $F$ and the coefficient of variation $\eta$). It turns out that $F$ and $\eta$ behave differently as functions of gene copy number and, in the case of two non-identical gene copies, as functions of the relative auto-regulation strengths of the two genes. Consequently, in any analysis of gene expression noise, the outcome depends on which measure of noise is used. This makes any statements on the influence of gene expression noise on cell fitness ambiguous. On one hand, it seems that the coefficient of variation, as a dimensionless quantity, may be a more reasonable choice. On the other hand, even if the qualitative behaviour of $F(G)$ and $\eta(G)$ is similar, it is not guaranteed that a definite conclusion regarding the selective role of gene expression noise can be drawn. For example, in the case of a bet-hedging strategy, cell fitness depends on the shape of the protein concentration distribution (bimodal vs. unimodal) in a way that cannot always be captured by simple measures of noise like $\eta$ or $F$ (e.g., a bimodal distribution with strongly defined peaks can have the same $\eta$ as a wide unimodal distribution). We do not know which measure of gene expression noise (if any) is used by Nature to quantify the influence of noise on cell fitness, and it is likely that such a measure is to be found individually for each system of interest.
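The last caveat is easy to demonstrate: a sharply bimodal mixture and a broad unimodal distribution that share the same mean and variance have, by construction, identical $F$ and $\eta$ despite their very different shapes. A quick numerical illustration (both distributions are arbitrary stand-ins, not outputs of the model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Sharply bimodal: equal mixture of narrow peaks at 50 and 350 proteins.
centers = rng.choice([50.0, 350.0], size=n)
bimodal = centers + rng.normal(0.0, 5.0, size=n)

# Broad unimodal: same mean (200) and variance (150^2 + 5^2) by construction.
unimodal = rng.normal(200.0, np.sqrt(150.0**2 + 5.0**2), size=n)

def fano_and_cv(x):
    m, v = x.mean(), x.var()
    return v / m, np.sqrt(v) / m   # Fano factor F, coefficient of variation eta

F_b, cv_b = fano_and_cv(bimodal)
F_u, cv_u = fano_and_cv(unimodal)
print(F_b, F_u, cv_b, cv_u)   # near-identical noise measures, very different shapes
```

Any fitness effect that depends on having two distinct subpopulations (bet-hedging) is invisible to both $F$ and $\eta$ here.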
\subsection{Limitations of the model}
The main limitation of the present approach is that it only allows us to treat the case of gene copies coding for an identical gene product. Since the gene copies are assumed to be coupled only by the total protein concentration, the model does not take into account other coupling mechanisms affecting the burst rates of all genes, e.g. general DNA remodelling. Also, the present formalism cannot be used to investigate more complicated genetic circuits (e.g., the toggle switch). Moreover, it is likely that in real systems the same point mutation may simultaneously affect the transcription rate (hence, the burst frequency), the auto-regulation strength, and the basal transcription level. In such a case, the model considered here is only a first approximation; within a more involved description, some model parameters ($a$, $\epsilon$, and $K$) should not be treated as independent. Another simplification is that the model is one-dimensional. This may neglect some effects that are possible only in higher dimensions, e.g. oscillations.
We have also assumed here that the gene copy number $G$ is identical for all cells in the population. However, this may not be the case, and we may instead deal with a distribution of $G$ values, e.g. when high-copy plasmids are used to construct multi-copy strains. In such a situation, our model may be easily generalised by introducing a probability distribution $p(G)$ for different values of $G$, a conditional probability of finding $x$ proteins in a cell containing $G$ gene copies, $p(x|G)$, and finally the joint probability $p(x,G) = p(x|G)p(G)$, $G \in \mathbb{N}$. The marginal probability distribution $p(x) \equiv \sum_{G} p(x,G)$ should then be used as the correct protein number probability distribution in the population.
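As an illustration of this generalisation, the sketch below mixes conditional protein distributions over a hypothetical copy-number distribution $p(G)$ (a truncated Poisson stand-in for plasmid statistics) and checks that the marginal is normalised with mean $ab\langle G\rangle$; the Gamma$(Ga,b)$ conditional is the unregulated Friedman-model limit, used here as an assumption:

```python
import numpy as np
from math import lgamma, exp, factorial

a, b = 10.0, 20.0      # per-copy burst frequency and mean burst size (illustrative)
lam = 3.0              # illustrative mean plasmid copy number

# p(G): truncated Poisson over G = 1..12, a stand-in for the actual
# copy-number statistics of a high-copy plasmid.
Gs = np.arange(1, 13)
pG = np.array([lam**G * exp(-lam) / factorial(int(G)) for G in Gs])
pG /= pG.sum()

# p(x|G): for G unregulated copies the Friedman picture gives a
# Gamma(G*a, b) protein distribution (assumed conditional; log-space pdf
# avoids overflow for large shape parameters).
x = np.linspace(1e-3, 5000.0, 20_000)
def gamma_pdf(x, shape, scale):
    return np.exp((shape - 1) * np.log(x) - x / scale
                  - shape * np.log(scale) - lgamma(shape))

# Marginal p(x) = sum_G p(x|G) p(G)
p_marg = sum(w * gamma_pdf(x, G * a, b) for G, w in zip(Gs, pG))

dx = x[1] - x[0]
norm = p_marg.sum() * dx
mean = (x * p_marg).sum() * dx
print(norm, mean)   # ~1 and ~ a*b*<G>
```

The resulting marginal is broader (and can be multi-peaked) compared to any fixed-$G$ conditional, which is why copy-number heterogeneity must be mixed in before comparing with population data.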
Since nuclear transport is neglected within the present model, our results seem to be more relevant to prokaryotes than to eukaryotes. However, in most papers devoted to copy number variation in eukaryotes, the division of the cell into nucleus and cytoplasm is not taken into account, and exactly the same models are used to describe gene expression in both groups of organisms.
Most proteins in \textit{E. coli} occur at relatively high concentrations \cite{ishihama2008protein, ishihama2014intracellular}, and therefore the discreteness of the protein number is not taken into account within the present approach. Our model with a continuous $x$ variable may thus incorrectly describe systems containing small numbers of proteins. Note that this may also include cases where the total number of TFs is large but the number of active TFs (not taken explicitly into account in our model) is very small. The presence of discrete states of the promoter is taken into account here only in an effective manner, by making use of the Hill function. On the other hand, the discrete counterpart of the analytical framework proposed in Ref. \cite{friedman2006linking} is known \cite{aquino2012stochastic}, and it seems adaptable to the study of a system of multiple copies of a self-regulating gene.
Finally, it should be noted that the present model does not allow us to study changes of the gene copy number $G$ in time. We only compare stationary expression of gene systems containing different fixed numbers of gene copies. However, in some cases, the number of gene copies may change on time scales as short as a fraction of a cell cycle ($10^{3}$-$10^{4}$ s), the most obvious example being chromosome replication during the replication cycle. In rapidly growing and dividing bacteria, DNA replication leads to a more than two-fold increase of the copy number of some genes (multi-forked chromosomes) \cite{krebs2013lewin}. Another example of rapid copy number variation is the change of the viral genome copy number during multiple bacteriophage infections of bacteria \cite{kobiler2005quantitative, weitz2008collective}. Modelling of time-dependent gene expression in such cases would require a different theoretical approach.
\begin{acknowledgments} We would like to thank Marek Skoneczny and Marcin Tabaka for helpful discussions.
The research was supported by the Ministry of Science and Higher Education Grant No. 0501/IP1/2013/72 (Iuventus Plus).
\end{acknowledgments}
\section*{Funding Information}
This work is supported by the Fundamental Research Funds for the Central Universities, Program for
Key Science and Technology Innovative Research Team of Shaanxi Province (2013KCT-05), and the
National Natural Science Foundation of China (Grant Nos. 11374008, 11374238, 11374239 and 11534008).
\section{Introduction}
\label{sec:intro}
The discovery of the Higgs boson at the Large Hadron Collider (LHC)~\cite{Aad:2012tfa,Chatrchyan:2012xdj} is a triumph for particle physics, as it was one of the last missing pieces needed to understand the origin of the masses of the Standard Model (SM) particles. Although the SM is now consistent up to high scales, some aspects of the theory still appear contrived. In particular, the mass of the Higgs boson, 125 GeV, is sensitive to physics at much higher scales. At one-loop, the SM Higgs mass squared receives sizable corrections that depend quadratically on the cutoff energy scale, $\Lambda$, as follows
\begin{equation}
m_h^2 = m_0^2 + {1 \over 16 \pi^2} \left( {3 \over 4} g_1^2 + {9 \over 4} g_2^2 + 3 \lambda_h^2 - 12\lambda_t^2 \right)\Lambda^2 \,,
\label{eq:diverge}
\end{equation}
where
$\lambda_h$, $\lambda_t$, $v$, $g_2$ and $g_1$ are the Higgs quartic coupling, the top Yukawa coupling, the electroweak vev, and the $SU(2)_L$ and hypercharge gauge couplings, respectively. $m_0$ is the bare Higgs mass parameter that appears in the Lagrangian prior to renormalization, and $m_h$ is the renormalized quadratic term which determines the value of the physical Higgs mass. $\Lambda$ is a momentum cutoff, presumably the next fundamental scale of new physics. In the minimal Standard Model, with a cutoff of $\Lambda \sim 5$~TeV, the value of $m_0$ must be fine-tuned
at the level of one part in 100. %
This hefty dependence of the physical Higgs mass on the cutoff requires a cancellation between physics at the cutoff scale and physics below it, which is at odds with the expectation that physics at low energies should not be highly sensitive to the short-distance theory~\cite{Wilson:1971bg,Wilson:1973jj}.
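As a quick back-of-the-envelope check (ours, not part of the original analysis), the quoted level of tuning can be reproduced by evaluating the coefficient of $\Lambda^2$ with representative weak-scale couplings. The numerical inputs below are approximate, and the quartic term follows the normalization of equation~\leqn{eq:diverge} literally:

```python
import math

# Representative SM couplings at the weak scale (approximate inputs):
g1, g2 = 0.36, 0.65          # hypercharge and SU(2)_L gauge couplings
lam_h = 0.13                 # Higgs quartic, roughly m_h^2 / (2 v^2)
lam_t = 0.94                 # top Yukawa
mh = 125.0                   # physical Higgs mass in GeV

def tuning(cutoff):
    """|one-loop quadratic correction| / m_h^2 for the coefficient
    structure quoted in the text, with cutoff in GeV."""
    coeff = 0.75 * g1**2 + 2.25 * g2**2 + 3 * lam_h**2 - 12 * lam_t**2
    return abs(coeff / (16 * math.pi**2) * cutoff**2) / mh**2
```

For $\Lambda = 5$~TeV this gives a ratio of order $10^2$, consistent with the one-part-in-100 tuning quoted above, and the tuning grows quadratically with the cutoff.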
Before the discovery of the Higgs boson, accompanied by the lack of discovery of supersymmetry or any other mechanism for cancelling quadratic divergences, it had been expected that the scale of new physics beyond the SM would be around or below a TeV in order to avoid fine-tuning the SM parameters. The LHC, however, can directly probe much higher energy scales, as high as several TeV, and has discovered no new physics beyond the SM. Moreover, precision electroweak measurements, which favored a light SM Higgs boson, imply that $\Lambda$ is likely large ($\Lambda \gtrsim 5$ TeV)~\cite{Barbieri:2000gf,Ciuchini:2014dea} if the new physics particles have electroweak quantum numbers and couple to the SM Higgs.
This gap between the scale of new physics implied by avoidance of fine tuning and the scale of new physics forecasted by precision electroweak measurements is known as the little hierarchy problem (LHP).
The traditional way to address the LHP has been to add new particles
and symmetries to the SM
to ensure the one loop quadratic divergence in
equation~\leqn{eq:diverge}
is cancelled~\cite{Schmaltz:2002wx} without large precision electroweak corrections.
Notably, since the largest divergence
is generated by top quarks, extending the SM with fermionic or bosonic top partners is sufficient to quantitatively solve the LHP~\cite{Katz:2005au,Cohen:1996vb}. %
Introducing such top partners has thus played a large role in theoretical particle physics for the last thirty-five years. Prominent examples include extra-dimensions~\cite{Hosotani:1983xw,Davies:1987ei,Hatanaka:1998yp,Randall:1999ee}, Intermediate Higgs (rebranded in the literature as natural composite Higgs)~\cite{Katz:2005au}, Little Higgs~\cite{ArkaniHamed:2002qy}, Twin Higgs~\cite{Chacko:2005pe} and Supersymmetry~\cite{Dimopoulos:1981zb,Cohen:1996vb}. In order for the cancellations to be effective in these models, though, the top partners must have a mass not too far above the top quark mass. However, the LHC has placed such strong lower bounds on the masses of colored top partners~\cite{ICHEP:2016ab} that many of the scenarios listed here are now fine-tuned in most of their parameter space. %
While the colored top partners are currently under siege, an extended Higgs sector is still relatively unconstrained by the LHC. Although the Higgs couplings to gauge bosons have been measured with high precision, the uncertainties on its couplings to fermions, notably to top quarks, are relatively large~\cite{Aaboud:2017jvq,Sirunyan:2018hoz,Khachatryan:2016vau}. In the light of these results, it is particularly tempting to look for a solution to the fine-tuning problem in either or both of these blind spots. %
In what follows, we study how to exploit these uncertainties to cancel or at least alleviate the fine-tuning of the Higgs boson mass.
\subsection{Parametric Naturalness}
\label{sec:idea}
Instead of the divergence structure shown in equation~\leqn{eq:diverge}, we consider a scenario where the largest one-loop corrections to the Higgs mass take the following form\footnote{We note that the different sectors in the equation below could conceivably have different cutoffs. We restrict to equation~\leqn{eq:newdiverge} for simplicity.}
\begin{eqnarray}
m_h^2 &=& m_0^2 \label{eq:newdiverge} \\
&+&{1 \over 16 \pi^2} \left( {3 \over 4} g_1^2 + {9 \over 4} g_2^2 + 3 \lambda_h^2 - 12\lambda_t^2 + \sum_i c_i \,\lambda_i^2 \right)\Lambda^2 \,.\nonumber
\end{eqnarray}
Here for simplicity we assume a common cutoff scale, $\Lambda$. The $\lambda_i$ typically correspond to new quartic couplings present in extended Higgs sectors. Given a cutoff $\Lambda > m_h$, there are several strategies to make $m_h$ naturally small:
\begin{itemize}
\item[1.] The couplings $\lambda_i$ can be chosen so that the terms in parentheses in equation~\leqn{eq:newdiverge} sum to a small number. The physical mass, $m_h$, is then mostly set by the bare mass, $m_0$.
\end{itemize}
This requirement is known as Veltman's condition~\cite{Veltman:1980mj}. Veltman, inspired by Wilsonian effective field theory~\cite{Wilson:1971bg,Wilson:1973jj}, insisted that the dimensionless parameters in equation~\leqn{eq:diverge} sum to zero.
For the minimal SM, this requirement predicted a 316~GeV Higgs mass, which is not realized in nature.
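For reference, the 316~GeV figure follows from the mass form of Veltman's condition, $m_H^2 = 4 m_t^2 - 2 m_W^2 - m_Z^2$. A minimal numerical check (our sketch, using approximate present-day masses):

```python
import math

# Veltman's condition in its mass form, with approximate masses in GeV:
mt, mw, mz = 174.0, 80.4, 91.2

# Higgs mass that would make the SM one-loop quadratic divergence vanish
mh_veltman = math.sqrt(4 * mt**2 - 2 * mw**2 - mz**2)
```

The result is about 316~GeV, far from the observed 125~GeV, which is why the pure Veltman condition fails in the minimal SM.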
In order to balance the top Yukawa coupling contribution in equation~\leqn{eq:newdiverge}, in a renormalizable theory at least some of the new particles running in the loop must be bosonic so that the coefficients $c_i$ are positive. New gauge bosons with electroweak quantum numbers are generically constrained by the LHC to be at least multi-TeV in mass~\cite{Aaboud:2017buh,Aaboud:2017efa,CMS:2017ilm}. The LHC constraints on new scalars, although weaker, can still push their masses up to almost a TeV. When the scalar masses are set by their quartic couplings and vevs alone, this in turn leads to Landau poles well before the LHP can be solved~\cite{Barbieri:2006dq,Barbieri:2005kf}. In order to ensure a sizable separation between the new and SM Higgs masses, we therefore consider models involving dimensionful trilinear couplings between the different scalars. Due to gauge invariance, such models will necessarily involve not only doublets, but also at least one scalar singlet or triplet that gets a vev.
Using extended Higgs sectors to address Veltman's condition has been done in the literature in various ways. For example, \cite{Masina:2013wja,Karahan:2014ola,Chakrabarty:2018yoy,Darvishi:2017bhf} feature an extended Higgs sector with masses that generate additional quadratic divergences and hierarchy problems. For many extended Higgs sectors in the literature, it is thus impossible to eliminate all of the quadratic divergences~\cite{Chakrabarty:2018yoy,Kim:2018ecv,Darvishi:2017bhf,Biswas:2016bth,Khan:2017xyh,Chabab:2016vqn,Plascencia:2015xwa,Biswas:2014uba,Kobakhidze:2014afa,Chakraborty:2014oma,Chakraborty:2014xqa,Karahan:2014ola,Antipin:2013exa,Masina:2013wja,Jora:2013opa,Bazzocchi:2012de,Chakraborty:2012rb,Bazzocchi:2012pp,Casas:2004gh,Calmet:2003uj,Kundu:1994bs,Ma:2014cfa,Ma:2001sj}.
We introduce two new variations of the Veltman condition, always assuming a single cutoff scale for all loops with a simple momentum cutoff:
\newcounter{saveenum}
\begin{itemize}
\item[2.] The bare mass, $m_0(\Lambda)$, is evaluated at the cutoff and is zero. Then the renormalized Higgs mass is set by the radiative correction and is proportional to the new cutoff scale, $\Lambda'$.
\end{itemize}
In this scenario, the coefficient of $\Lambda^2$ in equation~\leqn{eq:newdiverge} must be small but not zero. The $16\, \pi^2$ suppression helps tremendously; however,
near-complete (though not exact) cancellations between the fermionic and bosonic Higgs couplings are still required.
This requirement is a mixture between Veltman's condition and an implied aim of Coleman and Weinberg to obtain electroweak symmetry breaking entirely from radiative corrections~\cite{Coleman:1973jx}. We assume that
at the cutoff, the UV physics gives no contribution to $m_0$, and then that the $\lambda_i$ are such that the quantum corrections yield the observed mass of the SM Higgs at the weak scale.
The last and least restrictive variation of the Veltman condition is simply motivated by avoiding fine-tuned cancellations between physics at different scales.
\begin{itemize}
\item[3.] Both $m_0$ and the radiative correction are non-zero. \end{itemize}
This might seem like no constraint at all, but we will search for regions in parameter space where the cancellations between $m_0$ and the 1-loop corrections to the Higgs mass are not finely tuned, or at least less fine tuned than in the SM.
\section{A Minimal Model}
\label{sec:model}
We focus on building a theory that addresses the LHP without adding new symmetries and partners to cancel the one-loop correction shown in equation~\leqn{eq:diverge}.
In part we are exploiting the fact that the top Yukawa coupling to the 125 GeV Higgs is still not measured precisely at the LHC. The most precise direct measurement allows this coupling to be reduced by as much as 26\% at the 95\% c.l.~\cite{Aaboud:2017jvq,Sirunyan:2018hoz,Khachatryan:2016vau}. This can allow for a reduction of the overall fine-tuning. It is possible that the top quark receives part of its mass from the vev of a heavier Higgs. We thus consider a two Higgs doublet model. We also
add a neutral, real scalar field, $\Phi$, in order to allow for soft trilinear scalar couplings. We consider the standard Type-II two-Higgs-doublet model (THDM)~\cite{Gunion:1989we}, although, since the main fermion coupling we are interested in is the top, we expect similar results in other THDM variants. We could also consider allowing both doublets to couple to the top quark, which could introduce flavor-changing neutral scalar couplings into the up-quark sector. Since these couplings to light quarks are very small, this type of FCNC is typically compatible with the current experimental constraints.
\subsection{Requirements}
A first step in constructing a mechanism to alleviate the fine-tuning problem is to realize that, in the SM, the corrections to the Higgs mass shown in equation~\leqn{eq:diverge} are largely dominated by the top quark contribution. The size of this contribution, however, strongly depends on the size of the top Yukawa coupling to the Higgs, $\lambda_t$. Although this coupling is about one in the SM, it has not yet been precisely measured at the LHC. A lower Yukawa coupling would significantly reduce the amount of fine-tuning currently associated with many theories of new physics, whose energy scales have been pushed to a few TeV by the LHC. In what follows, we therefore investigate how simple extensions of the SM could lead to reduced top quark couplings to the Higgs and to what extent these different models would reduce the fine-tuning of the Higgs mass.
The large value of the top Yukawa coupling in the Standard Model is necessary to explain the observed top quark mass of $174$~GeV. In order to reduce the large contribution to the Higgs mass radiative correction associated with this coupling, it is therefore necessary to introduce either new vector-like fermions that mix with the top quark, or new Higgs bosons that provide additional contributions to the top mass. The first approach has already been thoroughly explored in~\cite{Katz:2005au}. Here, we focus on the second scenario and investigate models involving two Higgs doublets $H_1$ and $H_2$ and a Higgs singlet $\Phi$. Although all three Higgses get vevs, only $H_2$ couples to the top quark. The observed light Higgs is a linear combination of $H_1$ and $H_2$ and so can have a reduced top quark Yukawa coupling, while the Higgs with a larger coupling to the top can be much heavier. The next section details the couplings of this model and the associated constraints from LHC searches. After EWSB, one of the Higgs mass eigenstates becomes the $125$~GeV Higgs and couples to the top with a strength proportional to the mixing angle between the neutral Higgses. Flavor-changing neutral currents (FCNC) can be completely avoided if the other fermions also couple only to $H_2$, or if the leptons and/or the down-type quarks couple only to $H_1$. As the couplings of the Higgses to fermions other than the top quark do not lead to meaningful constraints from fine-tuning, and can always be made consistent with the FCNC constraints, we do not study them in detail.
Before the discovery of the Higgs boson, it was well-known that a heavy SM Higgs boson would help to alleviate the little hierarchy problem. \cite{Barbieri:2006dq,Barbieri:2005kf} used this fact to generate a heavy SM Higgs boson with a naturally raised cutoff. Such a scenario, however, required large dimensionless couplings in the scalar potential. This inevitably led to low scale Landau poles. %
Given this, \cite{Barbieri:2006dq,Barbieri:2005kf} were able to raise the cutoff only to $\Lambda \sim 1.5$ TeV for a SM Higgs mass of about $400-600$~GeV. In a one Higgs model, the only way to raise the Higgs mass is to have a large quartic coupling which leads to a Landau pole at a low scale. In multi-Higgs doublet models the possibilities are more diverse. In order to ameliorate any potential problems with large quadratic corrections from the scalar sector or low scale Landau poles, we need to limit the size of the dimensionless couplings. For two-Higgs doublet models in which we also do not allow large bare quadratic mass terms, this constraint forbids Higgs masses beyond $\mathcal{O}(100)$~GeV and therefore considerably limits the extent to which the fine-tuning can be reduced.
In order to allow for large Higgs masses, we introduce an additional Higgs singlet $\Phi$ that interacts with the two Higgs doublets $H_1, H_2$ via a trilinear term of the form
\begin{align}
\mathcal{L}_{\Phi H_1 H_2}\sim A_h\,\Phi\, H_1\,H_2.
\end{align}
The coupling $A_h$ is now dimensionful, and its value is constrained only by perturbative unitarity and vacuum stability~\cite{Schuessler:2007av,Betre:2014fva}. The contribution of this coupling to the Higgs mass divergence is at best logarithmic,
\begin{equation}
\delta m_h^2 \sim {A_h^2 \over 16\pi^2} \log\left(\Lambda^2/m_h^2 \right)\, ,
\end{equation}
although the coefficient of the log divergence can be large when $A_h$ is large.
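To illustrate the softness of this divergence (an illustrative comparison of ours, with the hypothetical values $A_h = 1$~TeV and $\Lambda = 5$~TeV), one can compare the logarithmic trilinear contribution with the quadratic top-loop contribution of equation~\leqn{eq:diverge}:

```python
import math

# Hypothetical sample values in GeV:
Ah, cutoff, mh, lam_t = 1000.0, 5000.0, 125.0, 0.94

# Soft (logarithmic) divergence from the trilinear coupling A_h
soft = Ah**2 / (16 * math.pi**2) * math.log(cutoff**2 / mh**2)

# Hard (quadratic) divergence from the top loop, for comparison
hard = 12 * lam_t**2 / (16 * math.pi**2) * cutoff**2
```

For these inputs the trilinear contribution is more than an order of magnitude below the top-loop one, which is why a large $A_h$ can raise the heavy Higgs masses without reintroducing a severe hierarchy problem.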
\subsection{Scalar Sector}
The fields $H_1$ and $H_2$ and the real scalar singlet $\Phi$ get vevs $v_1$, $v_2$, and $u$ respectively, and have the following structure
\begin{align}
H_1 &= \begin{pmatrix}
G^+ \cos\beta + H^+\sin\beta\\
\dfrac{1}{\sqrt{2}}\left(v_1 + h_1 + i \left(G^0 \cos\beta + A^0\sin\beta \right)\right)
\end{pmatrix}\\
H_2 &= \begin{pmatrix}
G^+ \sin\beta - H^+\cos\beta\\
\dfrac{1}{\sqrt{2}}\left(v_2 + h_2 + i (G^0 \sin\beta - A^0\cos\beta)\right)
\end{pmatrix}\\
\Phi &= u + \phi.
\label{eq:fields}
\end{align}
Hence, in addition to the Goldstone bosons $G^0, G^\pm$, the theory involves three neutral scalars $(h_1, h_2, \phi)$, one pseudoscalar $A^0$ and one charged scalar $H^\pm$. As in the 2HDM, we introduce a mixing angle $\beta$ such that the vevs of the $SU(2)$ doublets can be rewritten as
\begin{align}
v_1 &= v\,\cos\beta\quad\quad v_2 = v\,\sin\beta
\end{align}
where $v = 246$~GeV is the electroweak vev.
As in standard 2HDMs, we require our model to be invariant under a $\mathds{Z}_2$ symmetry. We choose for the Higgs fields to transform under this symmetry as
\begin{align}
h_1 \to h_1 && h_2 \to - h_2 && \Phi \to -\Phi.
\label{eq:shift2}
\end{align}
EWSB, however, causes this $\mathds{Z}_2$ symmetry to be spontaneously broken. This is cosmologically problematic because of the formation of domain walls~\cite{Zeldovich:1974uw,Kibble:1976sj}. We assume one (or more) of the following. First, the discrete symmetry may be broken softly by small terms. Alternatively,
there is a low reheating temperature after inflation~\cite{Vilenkin:1984ib}, below the electroweak scale, so that the temperature never exceeds the phase-transition scale and the domain walls do not form. Finally, the discrete symmetry could originate from a global $U(1)$ at higher energies. At the $U(1)$ symmetry breaking scale, cosmic strings form. When the domain walls later form at the electroweak scale, they end on loops of the previously formed cosmic strings, and the whole string-domain wall network becomes unstable and rapidly disappears by radiating scalars~\cite{Vilenkin:1984ib}.
\subsubsection{Overview of the Higgs potential}
The most generic potential consistent with the $\mathds{Z}_2$ symmetry discussed above and minimized around the vevs $v_1, v_2$, and $u$ is
\begin{widetext}
\begin{eqnarray}\label{eq:potential}
V &=& \lambda_1 \left( H_1^\dagger H_1 - \frac{v_1^2}{2} - {A_h \,u\, v_2 \over \lambda_1 v_1} \right)^2 + \lambda_2 \left( H_2^\dagger H_2 - \frac{v_2^2}{2} - {A_h \,u\, v_1 \over \lambda_2 v_2} \right)^2 + \lambda_3 \left( \Phi^2- u^2 - {A_h \,v_1\, v_2 \over 4\,\lambda_3 u} \right)^2 \label{eq:scalarpotential} \\
&+& \lambda_4 \left( H_1^\dagger H_1 - \frac{v_1^2}{2} + H_2^\dagger H_2 - \frac{v_2^2}{2} \right)^2 + \lambda_5 \biggl( H_1^\dagger H_1 \,H_2^\dagger H_2 - H_1^\dagger H_2 \,H_2^\dagger H_1
\biggr)
+ \lambda_6 \left( H_1^\dagger H_1 - \frac{v_1^2}{2} \right)\left( \Phi^2- u^2 \right) \nonumber \\
&+& \lambda_7 \left( H_2^\dagger H_2 - \frac{v_2^2}{2} \right)\left( \Phi^2- u^2 \right) + A_h \left( \Phi\, H_1 H_2^\dagger + \Phi \,H_1^\dagger H_2 - u\, v_1 v_2\,\cos \xi \right). \nonumber
\end{eqnarray}
\end{widetext}
The last trilinear term in equation~\leqn{eq:potential} generates off-diagonal contributions to the scalar mass matrix of the form $A_h\, u \,h_1 h_2$ and $A_h v_{1,2} \,h_{2,1}\phi$. Minimizing the scalar potential around the vevs also causes this term to contribute to the diagonal elements of this scalar mass matrix as well as to the masses of the pseudoscalar $A^0$ and charged Higgs $H^\pm$. The squared mass matrix for the neutral scalars can then be written as
\begin{widetext}
\begin{eqnarray}\label{eq:massmatrix}
M_h^2 &= \begin{pmatrix}
2 v^2 (\lambda_1 + \lambda_4)\cos^2\beta - A_h u\tan\beta & A_h u + v^2 \lambda_4\sin(2\beta) & v(2 u \lambda_6 \cos\beta + A_h\sin\beta) \\
A_h u + v^2 \lambda_4\sin(2\beta) & -A_h u \cot\beta + 2 v^2 (\lambda_2 + \lambda_4)\sin^2\beta & v (A_h\cos\beta +2 u\lambda_7\sin\beta)\\
v(2u\lambda_6\cos\beta + A_h\sin\beta) & v(A_h\cos\beta + 2 u\lambda_7\sin\beta) & 8 u^2\lambda_3 - \frac{A_h}{2u} v^2 \sin(2\beta)
\end{pmatrix}.
\end{eqnarray}
\end{widetext}
Diagonalizing this matrix will give three scalar mass eigenstates $(h, h', h'')$ with masses $m_h$, $m_{h'}$, and $m_{h''}$. As a convention for the rest of this work, we define $h$, $h'$, and $h''$ as the states with the largest $h_1$, $h_2$, and $\phi$ component respectively. In the limit where $\lambda_i \ll 1$ and $|A_h|, v_1, v_2 \ll u$, the mass eigenstates can be approximated by
\begin{widetext}
\begin{eqnarray}
m_1^2 &=& 2 A_h\, v_1 +\left( 2\left(\lambda_3 \,u^2+\lambda_7 \,u\, v_2+ (\lambda_2+\lambda_4)\,v_2^2 \,\right)- \frac{ \left( \,\left(2 \lambda_3-2 \lambda_6+\lambda_7 \right) u + (2 \lambda_2-2 \lambda_4+\lambda_7) v_2\right)(u+v_2)v_1^2}{u\, v_2}\right) \hspace{0.5cm}\nonumber \\
&+& \frac{1}{2 A_h v_1 v_2^2}\biggl( 2 \lambda_3 \left(\lambda_3 u^2 -(\lambda_2+\lambda_4)v_2^2\right) u^2 v_2^2 \\
&+& \left(\lambda_3^2\, u^4+\lambda_3 (2 \lambda_2-3 \lambda_3-6 \lambda_4+4 \lambda_6) \,u^2 v_2^2 - (\lambda_2+\lambda_4) (3 \lambda_2-2 \lambda_3-5 \lambda_4+4 \lambda_6) \,v_2^4 \right)v_1^2 \biggr)+ \ldots \nonumber \\
m_2^2 &=& -2 A_h v_1 + \left( 2 \lambda_3\, u^2 - 2 \lambda_7 \,u \,v_2+2 (\lambda_2+\lambda_4)\,v_2^2 +\frac{ \left( (2 \lambda_3-2 \lambda_6+\lambda_7)\,u - (2 \lambda_2-2 \lambda_4+\lambda_7)\,v_2 \right)(u-v_2)\,v_1^2}{u \,v_2} \right)\nonumber \\
&-& \frac{1}{2 A_h v_1 v_2^2}\biggl(2 \lambda_3 u^2 v_2^2 \left(\lambda_3 u^2-2 (\lambda_2+\lambda_4)v_2^2\right)+ \bigl(\lambda_3^2 \,u^4+\lambda_3 (2 \lambda_2-3 \lambda_3-6 \lambda_4+4 \lambda_6)\, u^2 v_2^2 \\
&-& (\lambda_2+\lambda_4) (3 \lambda_2-2 \lambda_3-5 \lambda_4+4 \lambda_6) v_2^4 \bigr) v_1^2 \bigr) + \ldots \nonumber \\
m_3^2 &=& 4 \left(\lambda_1+\lambda_2+\lambda_3-\lambda_6+\lambda_7 \right) v_1^2 -A_h \left(\frac{v_1 \,v_2}{u} + \frac{u \,v_1}{v_2} + \frac{u \,v_2}{v_1}\right) + \ldots
\end{eqnarray}
\end{widetext}
In Appendix~\ref{sec:eig}, we give the neutral mass eigenstates for $\lambda_i \ll 1$ and $v_1, v_2, u \ll |A_h|$. Throughout this work, the scalar field with the largest $h_1$ component $h$ is taken to be the observed Higgs boson, with $m_h = 125$~GeV. In order for $h'$ and $h''$ to be heavier than $h$, $A_h$ should be negative. Thus, the addition of the singlet $\phi$ and its associated trilinear coupling to the model helps to establish a significant mass hierarchy between the $125$~GeV Higgs and the extra scalar fields without having large quartic couplings. The mass eigenstates $(h, h', h'')$ are related to the interaction eigenstates $(h_1, h_2, \phi)$ by the following rotation matrix
\begin{align}
\label{eq:rotmatrix}
\begin{pmatrix}
h\\
h'\\
h''
\end{pmatrix} &=
\begin{pmatrix}
c_1 c_3 - c_2 s_1 s_3 & c_3 s_1 + c_2 c_1 s_3 & s_2 s_3\\
-c_1 s_3 - c_2 c_3 s_1 & c_1 c_2 c_3 - s_1 s_3 & c_3 s_2\\
s_1 s_2 & -c_1 s_2 & c_2
\end{pmatrix}\begin{pmatrix}
h_1\\
h_2\\
\phi
\end{pmatrix}
\end{align}
where the $c_i$, $s_i$ represent the cosines and the sines of the Euler angles $\alpha_i$. We can see that the $\alpha_2$ angle drives the mixing of $\phi$ with the $SU(2)$ doublet states. When this mixing is small, the angle $\alpha = \alpha_1 + \alpha_3$ can be interpreted as the mixing angle between $h_1$ and $h_2$. In this limit, our model becomes similar to a two-Higgs doublet model. As we will see in the rest of this section, the main difference from a standard 2HDM is that in our scenario the up-type quarks couple preferentially to the BSM Higgs bosons in the limit where $\alpha$ is small.
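As an illustration, the spectrum can be obtained numerically by diagonalizing the matrix of equation~\leqn{eq:massmatrix}. The sketch below (ours, with hypothetical parameter values chosen only to yield a light eigenstate near the weak scale, not a fit) uses a standard symmetric-eigenvalue routine:

```python
import numpy as np

# Illustrative parameter point (hypothetical values, not a fit):
v, u, beta = 246.0, 500.0, 0.9           # GeV, GeV, radians
Ah = -100.0                               # GeV; negative so h', h'' are heavy
l1, l2, l3, l4, l6, l7 = 0.2, 0.2, 0.2, 0.05, 0.01, 0.01

sb, cb = np.sin(beta), np.cos(beta)
s2b = np.sin(2 * beta)

# Squared mass matrix for (h1, h2, phi), transcribed from the text
M2 = np.array([
    [2*v**2*(l1 + l4)*cb**2 - Ah*u*np.tan(beta),
     Ah*u + v**2*l4*s2b,
     v*(2*u*l6*cb + Ah*sb)],
    [Ah*u + v**2*l4*s2b,
     -Ah*u/np.tan(beta) + 2*v**2*(l2 + l4)*sb**2,
     v*(Ah*cb + 2*u*l7*sb)],
    [v*(2*u*l6*cb + Ah*sb),
     v*(Ah*cb + 2*u*l7*sb),
     8*u**2*l3 - Ah*v**2*s2b/(2*u)],
])

evals, evecs = np.linalg.eigh(M2)   # eigenvalues in ascending order
masses = np.sqrt(evals)             # scalar masses in GeV
```

For this sample point the lightest eigenstate sits near the weak scale while the other two are several hundred GeV, illustrating the hierarchy generated by the $A_h\,u$ terms.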
The masses of the charged ($H^{\pm}$) and CP-odd neutral ($A^0$) Higgs bosons can be obtained analytically as follows
\begin{eqnarray}
m_{H^{\pm}}^2 &=& \frac{\lambda_5}{2} v^2 -\frac{A_h u}{\cos\beta\sin\beta} \\
m_{A^0}^2 &=& -{A_h \,u \over \cos\beta\sin\beta}.
\label{eq:mAH}
\end{eqnarray}
Here, the role of the trilinear term $A_h\,\Phi\, H_1 H_2$ in generating a mass hierarchy between the SM Higgs $h$ and the BSM Higgses is manifest, as this term leads to the $|A_h|\,u$ contributions in equation~\eqref{eq:mAH}.
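A quick positivity check of these formulas (ours, using the same kind of hypothetical inputs as before): for $A_h < 0$ both squared masses are positive, and $m_{H^\pm} > m_{A^0}$ whenever $\lambda_5 > 0$.

```python
import numpy as np

# Hypothetical sample values (not from the paper):
v, u, beta, Ah, l5 = 246.0, 500.0, 0.9, -100.0, 0.1

# Pseudoscalar and charged Higgs squared masses from eq. (mAH)
mA2 = -Ah * u / (np.cos(beta) * np.sin(beta))
mHpm2 = 0.5 * l5 * v**2 + mA2

mA, mHpm = np.sqrt(mA2), np.sqrt(mHpm2)
```

With these inputs both states land above $300$~GeV even for modest $|A_h|$ and $u$, again without large quartic couplings.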
\subsubsection{Vacuum stability and perturbativity}
In order for the Higgs potential in equation~\leqn{eq:potential} to be viable, it must be bounded from below, and the quartic and trilinear couplings must satisfy perturbativity and unitarity requirements. To prevent the potential from running to minus infinity for large values of the scalar fields, we require the quartic couplings to satisfy the following conditions, taken from~\cite{Drozd:2014yla}
\begin{widetext}
\begin{align}
&\lambda_{1,2} + \lambda_4 > 0 && |\lambda_{6}| < 4\sqrt{\lambda_3 (\lambda_1 + \lambda_4)}\\
&\lambda_3 > 0 && |\lambda_{7}| < 4\sqrt{\lambda_3 (\lambda_2 + \lambda_4)} \\
& \lambda_4 + \lambda_5> -\sqrt{(\lambda_1 + \lambda_4)(\lambda_2 + \lambda_4)} &&\lambda_4 + \lambda_5/2 > -\sqrt{(\lambda_1 + \lambda_4)(\lambda_2 + \lambda_4)} .
\end{align}
In addition for $\lambda_6 < 0$ or $\lambda_7 < 0$, we also require
\begin{align}
-\frac{1}{2}\lambda_6\lambda_7 + 4\lambda_3 (2\lambda_4 + \lambda_5) &> -\sqrt{4\,(4 \lambda_3 (\lambda_1 + \lambda_4) - \lambda_6^2/4)(4 \lambda_3 (\lambda_2 + \lambda_4) - \lambda_7^2/4)} \\
-\frac{1}{2}\lambda_6\lambda_7 + 8\lambda_3 (\lambda_4 + \lambda_5) &> -\sqrt{4\,(4 \lambda_3 (\lambda_1 + \lambda_4) - \lambda_6^2/4)(4 \lambda_3 (\lambda_2 + \lambda_4) - \lambda_7^2/4)}.
\end{align}
\end{widetext}
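These boundedness conditions are straightforward to scan over numerically. Below is a sketch (ours) transcribing them into a single check, which can be useful when sampling parameter space; it implements the inequalities exactly as quoted above:

```python
import math

def stable(l1, l2, l3, l4, l5, l6, l7):
    """Tree-level boundedness-from-below check, transcribing the
    conditions quoted in the text (illustrative implementation)."""
    # Positivity of the diagonal directions
    if not (l1 + l4 > 0 and l2 + l4 > 0 and l3 > 0):
        return False
    # Singlet-doublet portal bounds
    if abs(l6) >= 4 * math.sqrt(l3 * (l1 + l4)):
        return False
    if abs(l7) >= 4 * math.sqrt(l3 * (l2 + l4)):
        return False
    # Doublet-doublet bounds
    root = math.sqrt((l1 + l4) * (l2 + l4))
    if l4 + l5 <= -root or l4 + l5 / 2 <= -root:
        return False
    # Extra conditions when a portal coupling is negative
    if l6 < 0 or l7 < 0:
        rhs = -math.sqrt(4 * (4*l3*(l1 + l4) - l6**2/4)
                           * (4*l3*(l2 + l4) - l7**2/4))
        if (-0.5*l6*l7 + 4*l3*(2*l4 + l5) <= rhs or
                -0.5*l6*l7 + 8*l3*(l4 + l5) <= rhs):
            return False
    return True
```

For instance, the sample point used earlier passes, while flipping the sign of $\lambda_3$ immediately fails the positivity requirement.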
In addition to the above requirements, the quartic couplings $\lambda_i$ also need to remain perturbative up to at least the cutoff scale $\Lambda$ at which new physics should appear. In the rest of this work, for each scale $\Lambda$ that we consider, we require the scalar quartic couplings to satisfy $|\lambda_i| < 4\pi$ and to still fulfil the vacuum stability conditions at the cutoff. Our RGEs for the $\lambda_i$, $A_h$, and the top quark Yukawa coupling are shown in Appendix~\ref{sec:RGE}. Besides an extended Higgs sector, our model has modified couplings of the $125$~GeV Higgs to fermions and gauge bosons. While the couplings of the Higgs to gauge bosons have been measured to be within $10$\% of the SM ones, the measurements of the Higgs couplings to fermions have either been indirect or carry large uncertainties. In the rest of this section, we discuss the impact of the Higgs-gauge coupling measurements on the mixing angles of the neutral scalars and how the large uncertainty on the Higgs-top coupling measurement can be exploited to alleviate the fine-tuning problem.
\subsection{Yukawa couplings to fermions}
\label{sec:yukawa}
In standard two-Higgs-doublet models, the up-type quarks couple preferentially to the interaction eigenstate that contributes the most to the SM-like Higgs $h$ in the low-mixing limit. In our model, in order to obtain a suppressed coupling of the $125$~GeV Higgs to the top quark, we assume that $h$ is mostly $H_1$ while the up-type quarks couple only to $H_2$. This constraint can result from a $\mathds{Z}_2$ symmetry under which the right-handed up-type quarks such as $t_R$ transform as
\begin{equation}
t_R \to - t_R
\end{equation}
and $H_2\to -H_2$. In order to avoid problematic FCNCs, we may take the down-type quarks and leptons to couple only to either $H_1$ or to $H_2$. The couplings of the Higgs bosons to the quarks can then be written as
\begin{equation}
\mathcal{L}_Y = \lambda_u \,\bar{q}_L\, u_R \,\tilde{H}_2 + \lambda_d \, \bar{q}_L\, d_R\, H_1 + \mathrm{h.c.}
\label{eq:lambdaud}
\end{equation}
The couplings of the Higgses to leptons will not be relevant for this study. In the rest of this work, we will focus particularly on the couplings of the Higgses to the top and the bottom quarks, that can be written as
\begin{equation}
\mathcal{L}_t = \lambda \,\bar{q}_L\, t_R \,\tilde{H}_2 + \lambda_b \, \bar{q}_L\, b_R\, H_1 + \mathrm{h.c.}
\label{eq:lambda}
\end{equation}
In order to obtain the correct top and bottom quark masses $m_t$ and $m_b$, the strength of the couplings $\lambda$ and $\lambda_b$ should be the following
\begin{align}
\lambda &= \frac{\lambda_t^\mathrm{SM}}{\sin\beta}\quad\quad\lambda_b = \frac{\lambda_b^\mathrm{SM}}{\cos\beta}.
\end{align}
The coupling of the top quark to $h'$, which we denote $\lambda_t'$, is thus typically larger than $1$, which could lead to low-scale Landau poles. Requiring no Landau poles for $\lambda_t'$ up to a given cutoff scale $\Lambda$ leads to an upper bound on the value of $\lambda_t'$ at the EW scale, which translates in turn into a lower bound on $\tan\beta$. This lower bound is shown as a function of $\Lambda$ in figure~\ref{fig:ypert}. The RGEs for the top quark Yukawa and strong couplings that we solved to obtain this result are shown in Appendix~\ref{sec:RGE}. Conversely, the LHC measurements of the $125$~GeV Higgs coupling to $b\bar{b}$, as well as perturbativity requirements on $\lambda_b$, should provide an upper bound on $\tan\beta$. Since the uncertainties on the LHC measurements of the Higgs coupling to $b$ quarks are large and $\lambda_b^\mathrm{SM}$ is very small, however, this upper bound is expected to be extremely loose, and we do not take it into account in the rest of this work.
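The appendix RGEs are not reproduced here, but the origin of the bound can be sketched in closed form (our truncation, not the paper's computation) by keeping only the $y^3$ term of a generic one-loop top-Yukawa RGE, $16\pi^2\, dy/d\ln\mu = (9/2)\,y^3$. Neglecting the gauge contributions makes the estimated pole scale conservative (they slow the running and push the pole higher); the starting scale and SM Yukawa value below are approximate inputs:

```python
import math

MT = 174.0    # GeV, scale where the running starts (approximate)
LSM = 0.94    # assumed SM-like top Yukawa at that scale

def landau_pole(y0):
    """Scale (GeV) where y diverges for 16*pi^2 dy/dt = (9/2) y^3,
    t = log(mu); closed-form solution of the truncated RGE."""
    return MT * math.exp(16 * math.pi**2 / (9 * y0**2))

def min_sin_beta(cutoff):
    """Smallest sin(beta) keeping the Landau pole of
    lambda_t' = LSM / sin(beta) above the cutoff, in this truncation."""
    y_max = math.sqrt(16 * math.pi**2 / (9 * math.log(cutoff / MT)))
    return LSM / y_max
```

A larger initial Yukawa always lowers the pole, and demanding no pole below a $5$~TeV cutoff translates into a lower bound on $\sin\beta$ (and hence on $\tan\beta$), which is the qualitative behavior shown in figure~\ref{fig:ypert}.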
\begin{figure}
\includegraphics[width=\columnwidth]{Plots/tbmin.pdf}
\caption{\label{fig:ypert} Minimum value of $\tan\beta$ allowed by requiring no Landau poles below $\Lambda$ for the top quark coupling to $h'$.}
\end{figure}
When the mixing angles between the scalars are small, the top quark couples preferentially to $h'$ ---the mass eigenstate most similar to $h_2$--- while the top coupling to $h$, $\lambda_t$, is mixing suppressed. This coupling can be written as a function of the mixing angles in equation~\eqref{eq:rotmatrix} to obtain
\begin{align}
\label{eq:lambdatrot}
\lambda_t &= \lambda_t^\mathrm{SM} \frac{c_3 s_1 + c_2 c_1 s_3}{s_\beta}.
\end{align}
In the limit where the $\phi$ mixing to $h_1$ and $h_2$ is small, this coupling can be approximated by
\begin{align}
\label{eq:lambdatrot2}
\lambda_t &\approx \lambda_t^\mathrm{SM} \frac{\sin\alpha}{\sin\beta}
\end{align}
where $\alpha = \alpha_1 + \alpha_3$ is the mixing angle between $h_1$ and $h_2$. In order for this coupling to be significantly lower than the SM coupling, it is therefore crucial to depart from the alignment limit, with $\alpha < \beta$. Going away from the alignment limit could, however, significantly modify the couplings of the $125$~GeV Higgs to other SM particles. In most cases, these deviations are not strongly constrained directly at the LHC, since the Higgs couplings to the other SM fermions have not been precisely measured to date. The Higgs couplings to photons and gluons do constrain the fermion couplings more precisely, albeit indirectly; however, these loop-induced couplings can also be easily modified by introducing higher-dimensional operators or other new physics, so we do not necessarily need to consider these indirect constraints on the fermion couplings. The Higgs couplings to vector bosons, on the other hand, arise at tree level and have been measured to a fairly good level of precision at the LHC. It is therefore crucial to determine how a modification of $\lambda_t$ would affect these couplings in our model.
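One can check numerically how well the small-$\alpha_2$ approximation of equation~\eqref{eq:lambdatrot2} tracks the exact expression of equation~\eqref{eq:lambdatrot}; the Euler angles below are hypothetical sample values:

```python
import math

# Hypothetical Euler angles (radians); alpha2 small so phi decouples
a1, a2, a3 = 0.30, 0.05, 0.20
beta = 0.90
lt_sm = 0.94    # assumed SM-like top Yukawa

# Exact mixing factor from eq. (lambdatrot)
exact = lt_sm * (math.cos(a3) * math.sin(a1)
                 + math.cos(a2) * math.cos(a1) * math.sin(a3)) / math.sin(beta)

# Small-alpha2 approximation, eq. (lambdatrot2), with alpha = a1 + a3
approx = lt_sm * math.sin(a1 + a3) / math.sin(beta)
```

For $\alpha_2 \lesssim 0.1$ the two expressions agree to better than a percent, and both exhibit the $\sin\alpha/\sin\beta$ suppression relative to the SM coupling.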
\subsection{Gauge Sector}
\label{sec:gauge}
The electroweak gauge bosons couple to the Higgs doublets through the standard covariant derivative
\begin{equation}
D \mathcal{H} = \partial \mathcal{H} - i g_2 W \mathcal{H} - i g_1 Y \mathcal{H} \, .
\end{equation}
Here $\mathcal{H}$ runs over the doublets $H_1$ and $H_2$, and $g_2$ and $g_1$ are the $SU(2)_L$ and $U(1)_Y$ gauge couplings. Using the definitions introduced in equation~\leqn{eq:fields} for the Higgs fields, we can readily check that we obtain the correct masses for the gauge bosons.
Using the notation defined in equation~\eqref{eq:rotmatrix} for the mixing angles, the light Higgs coupling to the gauge bosons reads as
\begin{align}
g_{hVV} &= g_{hVV}^{\mathrm{SM}} [c_3\,\cos(\beta - \alpha_1) + c_2 s_3\,\sin(\beta - \alpha_1)]
\end{align}
which is never larger than one. In order to be consistent with the current LHC results~\cite{Khachatryan:2016vau}, $g_{hVV}$ needs to be at least $90\%$ of the SM value. In order to understand the implications of this constraint, we first consider the case where $c_2 \approx 1$, which corresponds to a scenario where $\phi$ mixes very little with the scalar components of the Higgs doublets. In this limit, the coupling between $h$ and the gauge bosons becomes
\begin{align}
g_{hVV} &= g_{hVV}^{\mathrm{SM}} \cos(\beta - \alpha)
\label{eq:gaugecoupling}
\end{align}
where again $\alpha = \alpha_1 + \alpha_3$. In order for this coupling to be close to the SM value, we therefore need to be in the alignment limit, where $\alpha$ and $\beta$ are close to each other. This requirement might be in tension with our end goal of reducing the top coupling to $h$ defined in equation~\leqn{eq:lambdatrot}, the latter being close to its SM value in this limit. The current LHC results, however, still leave significant freedom, since the requirement that $g_{hVV} \geq 0.9\, g_{hVV}^\mathrm{SM}$ translates into
\begin{align}
|\beta - \alpha| \leq 0.45.
\end{align}
The values of $\lambda_t$ and $\lambda_t'$ for $\alpha = \beta - 0.45$ are shown as a function of $\beta$ in figure~\ref{fig:lambdat}. As can be inferred from both this figure and figure~\ref{fig:ypert}, it is a priori possible to considerably reduce the value of the top coupling to the $125$~GeV Higgs without significantly reducing its coupling to gauge bosons or introducing Landau poles below at least $5$~TeV from the running of $\lambda_t'$. Note that the top Yukawa coupling is most suppressed for low values of $\beta$, so the coupling of the $125$~GeV Higgs to bottom quarks is SM-like and well within the LHC limits.
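The tradeoff described here can be summarized in a few lines (our sketch; $\lambda_t^\mathrm{SM}$ is an approximate input): for a given $\beta$, saturating the gauge-coupling bound fixes $\alpha = \beta - \arccos(0.9)$, and the suppressed top coupling then follows directly.

```python
import math

lt_sm = 0.94                  # assumed SM-like top Yukawa
delta = math.acos(0.9)        # max |beta - alpha| from g_hVV >= 0.9 g_SM

def couplings(beta):
    """(g_hVV/g_SM, lambda_t/lambda_t_SM, lambda) at the extreme
    alpha = beta - delta allowed by the gauge-coupling measurement."""
    alpha = beta - delta
    return (math.cos(beta - alpha),           # gauge coupling ratio
            math.sin(alpha) / math.sin(beta), # top coupling suppression
            lt_sm / math.sin(beta))           # coupling lambda to H_2
```

For small $\beta$, e.g. $\beta \approx 0.6$, the top coupling to $h$ drops below a third of its SM value while $g_{hVV}$ stays at $90\%$ of the SM value and $\lambda$ remains below $2$, consistent with the behavior shown in figure~\ref{fig:lambdat}.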
\begin{figure}
\includegraphics[width=\columnwidth]{Plots/tcoup.pdf}
\caption{\label{fig:lambdat} Value of $\lambda_t'$ when the mixing angle $\alpha$ between the two Higgs doublets is set to $\alpha = \beta - 0.45$, that is, the minimum value allowed by the Higgs-gauge coupling measurements (blue line). The red line shows $\lambda$, the top coupling to $H_2$ defined in equation~\leqn{eq:lambda}. We can see that, if the mixing angle $\alpha$ is free to vary as far from $\beta$ as allowed by the LHC measurements, it is possible to get an extremely suppressed top coupling to the $125$~GeV Higgs while still having $\lambda$ remain perturbative up to at least $5$~TeV.}
\end{figure}
When the singlet $\phi$ mixes with $h_1$ and $h_2$, the reasoning outlined above still applies. Since the mixing between the scalar singlet and the scalar components of the doublets is governed by $\alpha_2$, increasing this mixing would translate into decreasing $c_2$. In order for the $g_{hVV}$ coupling in equation~\leqn{eq:gaugecoupling} to remain close to the SM value, $c_3\cos(\beta - \alpha_1)$ needs to increase. This constraint pushes us into a region of parameter space where $\alpha_3$ is small and the mixing between $h_1$ and $h_2$ is governed by $\alpha_1$. This scenario is qualitatively similar to the one where the singlet $\phi$ is decoupled, and the steps detailed above can be repeated with $\alpha_1$ substituted for $\alpha$.
As our results show, strongly reducing the $125$~GeV Higgs coupling to the top quark without creating tension with the current LHC results or introducing low scale Landau poles can be achieved for certain values of the mixing angles between the neutral scalars. It is crucial in particular that the effective mixing angle between $h_1$ and $h_2$ is as far from $\beta$ as allowed by the current Higgs-gauge coupling measurements. This requirement, however, constrains the product $\sqrt{|A_h| u}$, appearing in the mass matrix~\leqn{eq:massmatrix} to not be too large compared to the EW scale. Consequently, the regions of parameter space where the $125$~GeV Higgs coupling to the top quarks is the lowest are also regions where the other Higgses have $\mathcal{O}(100)$~GeV masses. These other Higgses, however, can exhibit large quadratic divergences that could be reduced only by reintroducing sizable fine-tuning. In particular, by construction, $h'$, $A^0$, and $H^\pm$ are associated with order one top Yukawa couplings and therefore need to be much heavier than $h$. In what follows, we discuss how we impose the naturalness requirement and compute the dominant fine-tuning factors.
\section{Naturalness}
\label{sec:natural}
At one loop the quadratic divergences generated by the minimal model for the masses of any of the Higgses can be represented as
\begin{equation}
\delta m_{h_i}^2 = \alpha_t \,\Lambda^2_t + \alpha_g\,\Lambda^2_g + \alpha_h \Lambda_h^2
\end{equation}
where we have followed the notation in~\cite{Barbieri:2006dq}. In what follows, we will neglect the contributions from the gauge boson loops $\alpha_g$ due to the low values of the weak couplings. The quadratic divergences from the top and Higgs boson loops in our model can then be derived from the Coleman-Weinberg potential
\begin{eqnarray}\label{eq:quadraticCW}
V_\mathrm{quadratic} &=&
{\Lambda^2 \over 32\,\pi^2} \left( \lambda_1+2 \lambda_4+\frac{1}{2}\lambda_5 +\frac{1}{2}\lambda_6\right) H_1^\dagger H_1 \nonumber\\
&+&{\Lambda^2 \over 32\,\pi^2} \left( \lambda_2+2 \lambda_4+\frac{1}{2}\lambda_5+\frac{1}{2}\lambda_7 \right) H_2^\dagger H_2 \\
&+& {\Lambda^2 \over 32\,\pi^2} \left(\lambda_3+{\lambda_6 \over 2}+\frac{\lambda_7}{2}\right) \Phi ^2 - {3\,\lambda^2\,\Lambda^2 \over 8\,\pi^2} H_2^\dagger H_2. \nonumber
\end{eqnarray}
We discuss our derivation in more details in appendix~\ref{sec:CWpotential}. The values of the fine-tuning factors $\alpha_i$ can be deduced by rotating into the mass basis and computing the derivatives of $V_\mathrm{quadratic}$ with respect to the different fields. For the light Higgs $h$, we obtain
\begin{eqnarray}
\alpha_{th} &=& {3 \lambda_t^2 \over 4 \pi^2} \\
\alpha_{hh} &=& \alpha_{h11} (c_1 c_3 - c_2 s_1 s_3)^2 + \alpha_{h22} (s_1 c_3 + c_2 c_1 s_3)^2 \nonumber\\
&& + \alpha_{h33} (s_2s_3)^2\nonumber
\end{eqnarray}
where $\lambda_t$ is defined in equation~\leqn{eq:lambdatrot} and the $\alpha_{hii}$ are defined by
\begin{align}
\alpha_{h11} &= -{(2\lambda_1 + 4\lambda_4 + \lambda_5 + \lambda_6)\over 16 \pi^2 }\\
\alpha_{h22} &= -{(2\lambda_2 + 4\lambda_4 + \lambda_5 + \lambda_7)\over 16 \pi^2 }\\
\alpha_{h33} &= -{(2\lambda_3 + \lambda_6 + \lambda_7)\over 16 \pi^2 }
\end{align}
and are weighted by combinations of the cosines and sines of the mixing angles $\alpha_{i}$, defined in equation~\eqref{eq:rotmatrix}. The coupling $\lambda_t$ as well as the Higgs quartic couplings are evaluated at the cutoff scale $\Lambda$. Usually, when the quartic couplings $\lambda_i$ are taken to be perturbative, these quadratic divergences lead to a lower fine-tuning than the ones from the top loops. Similarly, we derive the fine-tuning factors for the other two Higgses:
\begin{eqnarray}
\alpha_{th'} &=& {3 \lambda_t'^2 \over 4 \pi^2}\nonumber \\
\alpha_{hh'} &=& \alpha_{h11} (-c_1 s_3 - c_2 s_1 c_3)^2 + \alpha_{h22} (c_1 c_2 c_3 - s_1 s_3)^2 \nonumber\\
&& + \alpha_{h33} (s_2c_3)^2\nonumber\\
\alpha_{th''} &=& {3 \lambda^2 (c_1 s_2)^2 \over 4 \pi^2} \\
\alpha_{hh''} &=& \alpha_{h11} (s_1s_2)^2 + \alpha_{h22} (c_1s_2)^2 + \alpha_{h33} c_2^2\nonumber.
\end{eqnarray}
Finally, the pseudoscalar and charged Higgses are also associated with large quadratic divergences. The corresponding fine-tuning factors are
\begin{eqnarray}
\alpha_{t\{A,H^\pm\}} &=& {3 \lambda^2\cos^2\beta \over 4 \pi^2} \\
\alpha_{h\{A,H^\pm\}} &=& \alpha_{h11} \sin^2\beta + \alpha_{h22} \cos^2\beta \nonumber.
\end{eqnarray}
The sensitivity of the SM Higgs masses to a given cutoff scale $\Lambda_i$ is given by the formula
\begin{align}
D(m_h) = \left|{\partial \log m_h^2 \over \partial \log \Lambda^2_i} \right| = {|\alpha_i| \, \Lambda^2_i \over m_h^2}.
\end{align}
In the rest of this work, the fine-tuning factor that we consider at a given scale will be the maximal value of the fine-tunings associated with the top, gauge boson, or Higgs loops, over all the Higgs bosons
\begin{align}
D_\mathrm{max}(\Lambda) = \mathrm{max}_{j=\{h,h',h'',A,H^\pm\}} \left\{|\alpha_{tj} + \alpha_{hj}| \, \Lambda^2 \over m_j^2\right\}.
\label{eq:finetuning}
\end{align}
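As an illustrative arithmetic sketch of this measure (the values of $\lambda_t$ and of the scalar coefficient $\alpha_h$ below are assumptions chosen for illustration, not output of the scan):

```python
import math

LAMBDA = 5000.0   # cutoff in GeV, the value used in the scan of this paper
M_H = 125.0       # GeV

def alpha_top(lam_t):
    """Top-loop coefficient, alpha_t = 3 lam_t^2 / (4 pi^2)."""
    return 3.0 * lam_t ** 2 / (4.0 * math.pi ** 2)

def fine_tuning(alpha_t, alpha_h, m):
    """D = |alpha_t + alpha_h| Lambda^2 / m^2; alpha_h is negative for
    positive quartics, allowing a partial cancellation against the top loop."""
    return abs(alpha_t + alpha_h) * LAMBDA ** 2 / m ** 2

# Assumed illustrative point: a mildly suppressed top coupling plus a
# partially cancelling scalar contribution.
D = fine_tuning(alpha_top(0.7), -0.03, M_H)
print(D < 20.0)  # -> True
```

The partial top/scalar cancellation is what keeps $D$ low here; with $\alpha_h = 0$ the same point would give $D \approx 60$.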
This estimate is conservative since it assumes that the cutoff scales for all the loop contributions to the Higgs masses are simultaneously at their lowest possible values for a given $D_\mathrm{max}$. Since $\alpha_{tj}$ and $\alpha_{hj}$ are of opposite signs, they are expected to cancel, either partially or, if the first Veltman condition is fulfilled, totally.
Besides looking for parameters with low total fine-tuning, we also must consider the current LHC searches for new bosons. In the next section, we detail what searches and decay channels are relevant to our models and how we implement the corresponding constraints.
\section{LHC phenomenology}
\label{sec:lhc}
As highlighted in section~\ref{sec:gauge}, in order to be as far as possible from the alignment limit, it is necessary for either $h'$ or $h''$ to be light, with masses typically below a TeV. If $h'$ is light, in particular, the model will also involve a light pseudoscalar $A^0$ and charged Higgs $H^\pm$, which could both be within the reach of the corresponding LHC searches. At low $\tan\beta$ in the MSSM, which is also the preferred region for our low fine-tuning models as discussed in section~\ref{sec:yukawa}, pseudoscalar Higgses are excluded up to about $400$~GeV. It is therefore crucial to investigate how the different Higgs searches at $13$~TeV LHC will constrain our models.
As discussed in detail in section~\ref{sec:gauge}, the couplings of the $125$~GeV Higgs to the gauge bosons are constrained to be SM-like and the couplings to the bottom quark are expected to be much smaller than one. The couplings of these particles to the new Higgs bosons will therefore be suppressed and the corresponding LHC searches should not be particularly sensitive to our model. Similarly, the top quark coupling to $h''$ is expected to be mixing suppressed, which would lead to a reduced gluon fusion production rate. The second Higgs $h'$, as well as $A^0$ and $H^\pm$, however, have the following couplings to the top quarks
\begin{align}
\lambda'_t \approx \frac{\lambda_t^\mathrm{SM}}{\sin\beta}\quad\quad \lambda_{t}^{A^0, H^\pm} = \frac{\lambda_t^\mathrm{SM}}{\tan\beta}.
\end{align}
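A minimal numerical sketch of these two relations (the SM top Yukawa value used below is an assumption for illustration):

```python
import math

LAM_T_SM = 0.94   # assumed approximate SM top Yukawa at the TeV scale

def top_couplings(tan_beta):
    """Couplings of h' and of A0/H+- to the top, from the relations above."""
    beta = math.atan(tan_beta)
    lam_hp = LAM_T_SM / math.sin(beta)   # lambda_t' = lambda_t^SM / sin(beta)
    lam_a = LAM_T_SM / tan_beta          # lambda_t^{A0,H+-} = lambda_t^SM / tan(beta)
    return lam_hp, lam_a

# At tan(beta) = 1 both couplings already match or exceed the SM value, so
# gluon-fusion production of the new Higgses is unsuppressed.
lam_hp, lam_a = top_couplings(1.0)
print(lam_hp >= LAM_T_SM and lam_a >= LAM_T_SM)  # -> True
```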
When $\tan\beta$ is of order one or lower, these couplings will be of the same order as, if not larger than, the SM top Yukawa coupling. These new Higgses will therefore have sizable production rates through gluon fusion at the LHC and should be severely constrained by the current searches.
In MSSM models with $\tan\beta\lesssim 3$, heavy pseudoscalar and charged Higgs bosons have been already excluded up to about $350$~GeV~\cite{Aaboud:2017gsl,Aaboud:2017rel,Aaboud:2017cxo,CMS:2017vpy,CMS:2016ncz}. In order to account for the possible mild suppressions of the production rates of these particles in our model, we focus on parameter points where $h'$, $A^0$, and $H^\pm$ all have masses larger than $250$~GeV, which corresponds to the lowest masses explored by the $13$~TeV LHC Higgs searches. For these masses, the main decay modes are
\begin{align}
\label{eq:decay}
h', h'' &\rightarrow t\bar{t}, b\bar{b}, ZZ, hh, W^+W^-\\
A^0 &\rightarrow Zh, t\bar{t}, b\bar{b}\\
H^\pm &\rightarrow tb, W^\pm h
\end{align}
and the main production modes are
\begin{align}
&gg, VV \rightarrow h', h''\\
&gg \rightarrow A^0\\
&b\bar{b} \rightarrow A^0\\
&g\bar{b} \rightarrow t H^+.
\end{align}
The $13$~TeV LHC searches for heavy BSM Higgs bosons are given in~\cite{Aaboud:2017gsl,Aaboud:2017rel,ATLAS:2016qmt,TheATLAScollaboration:2016ibb,Aaboud:2017cxo,ATLAS:2016qiq,CMS:2017vpy,CMS:2016ncz} and target all the decay channels shown in~\eqref{eq:decay} except $h'/A\rightarrow t\bar{t}$ and $H^\pm \rightarrow W^\pm h$, the latter being expected to be largely subdominant to $H^\pm \rightarrow tb$. In what follows, for each parameter point of our model, we compute the branching ratios corresponding to the decay modes shown in~\eqref{eq:decay} using the formulae given in~\cite{Djouadi:2005gj}. We take the production cross sections from the LHC Higgs cross section working group and rescale them by the following $\kappa$ factors
\begin{align}
\kappa_{ggh'} &= \frac{(c_1c_2c_3 - s_1s_3)^2}{\tan^2\beta}\\
\kappa_{ggA^0} &= \frac{1}{\tan^2\beta}\\
\kappa_{bbA^0} &= \tan^2\beta\\
\kappa_{VVh'} &= [-s_3 \cos(\beta - \alpha_1) + c_2 c_3 \sin(\beta - \alpha_1)]^2\\
\kappa_{gbH^\pm} &= \left[-\frac{1}{\tan\beta} + \frac{m_b}{m_t}\tan\beta\right]^2
\end{align}
where the mixing parameters $\alpha_1$, $c_{123}$, and $s_{123}$ are defined in~\eqref{eq:rotmatrix}. For each channel, we finally compare the values of $\sigma \times \mathrm{Br}$ to the corresponding LHC limits.
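The $\kappa$ factors above are straightforward to evaluate numerically; the sketch below does so at one illustrative parameter point (not taken from the scan), with the singlet decoupled ($\alpha_2 = \alpha_3 = 0$), where $\kappa_{VVh'}$ reduces to $\sin^2(\beta - \alpha_1)$:

```python
import math

def kappas(beta, a1, a2, a3):
    """Rescaling (kappa) factors listed in the text; a1..a3 are the
    neutral-scalar mixing angles alpha_1..alpha_3."""
    c1, c2, c3 = math.cos(a1), math.cos(a2), math.cos(a3)
    s1, s3 = math.sin(a1), math.sin(a3)
    tb = math.tan(beta)
    return {
        "ggh'": (c1 * c2 * c3 - s1 * s3) ** 2 / tb ** 2,
        "ggA0": 1.0 / tb ** 2,
        "bbA0": tb ** 2,
        "VVh'": (-s3 * math.cos(beta - a1) + c2 * c3 * math.sin(beta - a1)) ** 2,
    }

# Illustrative point: decoupled singlet, a2 = a3 = 0.
k = kappas(beta=0.9, a1=0.45, a2=0.0, a3=0.0)
print(round(k["VVh'"], 3))  # -> 0.189, i.e. sin^2(0.45)
```

Note that $\kappa_{ggA^0}\,\kappa_{bbA^0} = 1$ identically, reflecting the opposite $\tan\beta$ scalings of the two production modes.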
\section{Parameter Space and Results}
\label{sec:parameter}
We now scan over the parameter space of our multiple Higgs model to determine how much the fine-tuning can be lowered without breaking perturbativity or being at odds with the LHC results. Our model involves ten parameters: seven quartic couplings $\lambda_i$, the mixing angle $\beta$ between the vevs of the Higgs doublets, the trilinear coupling $A_h$, and the vev $u$ of the singlet $\Phi$. After requiring $m_h= 125$~GeV, the parameter space is then nine-dimensional. Such a large parameter space is particularly difficult to explore. We would therefore like to stress that our final result will be conservative, as narrow regions with low fine-tuning might have been overlooked.
In what follows, we choose a cutoff scale $\Lambda = 5$~TeV and perform a uniform random scan over the following parameters
\begin{align}
u &\in [0, 5]~\mathrm{TeV}\\
\lambda_i' &\in [-2, 2]\\
\beta &\in \left[0, \frac{\pi}{2}\right]
\end{align}
and fix $A_h$ by setting the lightest Higgs mass to be $125$~GeV. We emphasize that choosing a common cutoff scale $\Lambda$ is very conservative: the gauge boson, scalar, and top sectors could have different cutoffs, which would allow a much larger number of models to meet our scan criteria. The $\lambda_i'$ couplings are linear combinations of the $\lambda_i$ couplings and are defined in Appendix~\ref{sec:RGE}. These combinations are the ones that enter the RGEs and are therefore the more relevant quantities to scan over from a perturbativity point of view.
We scan over $10^9$ points and select the models verifying the vacuum stability and perturbativity constraints discussed in section~\ref{sec:model} and for which the couplings of the $125$~GeV Higgs to the gauge bosons are within $10$\% of the corresponding SM values. Additionally, in order to ensure that our results will not be influenced by the physics at the cutoff scale we consider only models where the new particles have masses below $1$~TeV. Finally, we select all points for which the fine-tuning factor $D$ is less than $100$. These points will be represented in blue in the figures shown in this section. For each of these points and for the different BSM Higgs bosons, we compute the cross-section times branching ratio for each of the production and decay channels listed in section~\ref{sec:lhc} and compare it to the results from the ATLAS searches~\cite{Aaboud:2017gsl,Aaboud:2017rel,ATLAS:2016qmt,TheATLAScollaboration:2016ibb,Aaboud:2017cxo,ATLAS:2016qiq}. We consider that a parameter point is not excluded at the LHC if $h'$, $A^0$, and $H^\pm$ are all heavier than $250$~GeV and, for each detection channel, the ratio of the $\sigma \times \mathrm{Br}$ over the $95$\% confidence limit found by ATLAS is less than $1$. In order to account for the important fluctuations of the ATLAS exclusion bounds at low Higgs masses as well as estimate the reach of the future LHC searches, we also define a ``safe'' region where the ratio of the $\sigma \times \mathrm{Br}$ over the $95$\% ATLAS confidence limit for each detection channel is less than $0.1$.
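The structure of this scan can be sketched as follows. The helper that would compute the spectrum, couplings, and fine-tuning from a sampled point is omitted here (it requires the full model); only the sampling ranges and the selection cuts follow the text:

```python
import random

def sample_point():
    """Uniform draw over the scan ranges quoted in the text."""
    return {
        "u": random.uniform(0.0, 5000.0),                      # GeV
        "lam": [random.uniform(-2.0, 2.0) for _ in range(7)],  # lambda_i'
        "beta": random.uniform(0.0, 3.14159265 / 2.0),
    }

def passes_cuts(g_v, masses, fine_tuning):
    """Selection from the text: SM-like hVV coupling, sub-TeV new states,
    and fine-tuning factor D below 100."""
    return g_v > 0.9 and all(m < 1000.0 for m in masses) and fine_tuning < 100.0

point = sample_point()
print(passes_cuts(g_v=0.95, masses=[300.0, 450.0, 600.0], fine_tuning=15.0))  # -> True
```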
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Plots/FT.png}
\caption{\label{fig:FT} Fine-tuning factor $D$ for all the points with $g_V > 0.9$ and $D < 100$ (blue) and for the subset of these points that satisfy the safe LHC constraints defined in the main text (yellow). The vertical axis is in arbitrary units with different scales for the blue and for the yellow.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Plots/FT_vs_topcoupling.png}
\caption{\label{fig:FTtop} Fine-tuning $D$ versus $\lambda_t$ for all the points with $g_V > 0.9$ and $D < 100$ (blue dots), the points that satisfy both $D < 20$ and the LHC constraints (red triangles), and the subset of these points that satisfy the ``safe'' LHC constraints (yellow stars). Both LHC constraints are defined in Section~V.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Plots/u_vs_Ah.png}
\caption{\label{fig:uAh} $u$ versus $A_h$ for all the points with $g_V > 0.9$ and $D < 100$ (blue dots), the points that satisfy both $D < 20$ and the LHC constraints (red triangles), and the subset of these points that satisfy the ``safe'' LHC constraints (yellow stars).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Plots/mh3_mh2.png}
\caption{\label{fig:m3m2} $m_{h''}$ versus $m_{h'}$ for all the points with $g_V > 0.9$ and $D < 100$ (blue dots), the points that satisfy the LHC constraints (red triangles), and the subset of these points that pass the ``safe'' LHC constraints (yellow stars).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Plots/lambdamax_vs_FT.png}
\caption{\label{fig:lft} Maximal value of the couplings $\lambda_i'$ defined in Appendix~\ref{sec:RGE} at 5~TeV, $|\lambda_i'(5~TeV)|_\mathrm{max}$, versus fine-tuning for all the points with $g_V > 0.9$ and $D < 100$ (blue dots), the points that satisfy the LHC constraints (red triangles), and the subset of these points that pass the ``safe'' LHC constraints (yellow stars).}
\end{figure}
Figure~\ref{fig:FT} shows the normalized distributions of the fine-tuning factors $D$ for all the points with $g_V > 0.9$ and $D < 100$ (blue) as well as the subset of these points that satisfy the LHC constraints defined above (yellow). Whether or not the LHC constraints are taken into account, the fine-tuning factor $D$ can easily reach values smaller than $10$ or even $1$. Thus, lowering the top quark coupling while at the same time exploiting the partial cancellation of the top and scalar one-loop contributions to the Higgs mass could potentially be a way to reduce the fine-tuning in the SM without introducing too much complexity.
In order to understand the interplay between the suppression of the top Yukawa coupling and the cancellation from scalar couplings, we show the fine-tuning factor $D$ as a function of $\lambda_t$ in figure~\ref{fig:FTtop}. When the LHC constraints are not imposed, this figure shows two distinct low fine-tuning regions: one where the top Yukawa coupling is reduced to values as low as $0.55$ for $D < 20$, and one where the top coupling to the SM Higgs is unsuppressed and the reduction of the fine-tuning is entirely due to the other scalars. The latter region involves new Higgses that are typically heavy due to a large quartic coupling, and therefore outside the reach of the LHC. Conversely, most of the first region has already been probed by the LHC. This is because such low values of the fine-tuning require the top coupling $\lambda_t$ of the $125$~GeV Higgs to be significantly reduced. In section~\ref{sec:gauge}, we already argued that $\tan\beta$ cannot be too small in order for the top Yukawa coupling of $h_2$ to remain perturbative, which in turn requires models with suppressed $\lambda_t$ to be far from the alignment limit. This numerical study then shows that these two conflicting constraints prevent models with reduced $\lambda_t$ from having BSM Higgses heavier than a few hundred GeV. In fact, our results show that the only way to obtain suppressed top Yukawa couplings is to have large mixings between the scalars, which can happen only in the low mass regime. Although a few points with $D\lesssim 10$ still survive the current constraints, especially for $\lambda_t \gtrsim 0.7$, they are expected to be probed by the next LHC runs. The hypothesis that the fine-tuning of the $125$~GeV Higgs mass is reduced by suppressing the top quark Yukawa coupling should therefore be fully tested in the near future.
Figures~\ref{fig:uAh} and \ref{fig:m3m2} show the mass scales corresponding to the regions of the parameter space with the lowest fine-tuning, in the $(u, A_h)$ plane and in the $(m_{h'}, m_{h''})$ plane. Although there are narrow regions with either $u \ll |A_h|$ or $m_{h''}\gg m_{h'}$, the low fine-tuning points that pass the safe LHC constraints generally have $|A_h|$ and $u$ being of the same order of magnitude. In most of the parameter space verifying these constraints, we also note that $h''$ is lighter than $h'$. Although most of the fully degenerate limit is already excluded, the points that survive the current LHC constraints generally have $m_{h'}\sim m_{h''}$.
Finally, figure~\ref{fig:lft} shows the maximal value of the $\lambda_i'$ couplings defined in Appendix~\ref{sec:RGE} at the cutoff scale $\Lambda = 5$~TeV as a function of the fine-tuning $D$. Although, in principle, this maximal coupling could be as low as $2$ for $D < 20$, once the LHC constraints are imposed it has to be larger than $4$. This is because the LHC constraints disfavor the regions of parameter space with a low top Yukawa $\lambda_t$, so that large quartic couplings are required in order to cancel the unsuppressed top loop contribution to the Higgs mass, as shown in equation~\leqn{eq:diverge}. Reducing the total fine-tuning in our model therefore implies the existence of a strongly coupled Higgs sector at a few TeV.
\section{Beyond}
We have discussed how the Higgs mass parameter is
calculable from the bare mass and radiative corrections, as shown in equation~\leqn{eq:diverge}. In Section~\ref{sec:idea} we have posed the question of whether the bare mass and the one-loop radiative corrections need to be fine-tuned.
This is not possible in the minimal standard model unless the cutoff scale is below a TeV---a scale the LHC has substantially explored.
In the previous sections, we focused on a minimal model that illustrates aspects of the first and third Veltman conditions in Section~\ref{sec:idea}. We now briefly discuss the second condition.
We briefly note that theories with softly broken shift symmetries can predict light scalars \cite{Georgi:1975tz,ArkaniHamed:2001nc}. %
Extensions of this idea developed into little Higgs model building \cite{ArkaniHamed:2001nc,Schmaltz:2002wx,Katz:2005au}. Here
top partners were introduced to cancel the quadratic sensitivity to the cutoff scale from radiative corrections due to the top coupling. Given the second Veltman condition, it is conceivable that the cancellations just occur between the top and additional scalars for no obvious symmetry reason, or that the observed scalar does not have such a large top coupling as to require top partners. We consider this possibility in future work.
We also note that there is lattice gauge theory evidence that some strongly coupled theories have scalars which are much lighter than the strong coupling scale for a dynamical reason which is not related to a symmetry~\cite{Hasenfratz:2017lne}.
\section{Conclusions}
We have considered the cutoff sensitivity of an extension of the minimal standard model with additional particles which only carry electroweak charges, and argue that they should be scalars. Our goals are more modest than those of theorists who achieve cancellations in the Higgs mass from symmetry considerations, but we do not have to pay the price of introducing new colored top partners or a new strong group, which makes it easier to understand how the new physics has escaped the LHC searches. We do find that there are values of the parameters for which the cancellation between the bare mass and the radiative corrections is not severe and the LHC constraints are satisfied, including points where the top Yukawa coupling to the $125$~GeV Higgs is reduced and the cutoff scale for new physics is $5$~TeV. Our results emphasize the importance of precision Higgs measurements, particularly direct measurements of the top-Higgs coupling: we find there is room in this model for a significant deviation from 1, and this value impacts the degree of fine-tuning and the expectation for the scale of new physics. Searches for additional Higgs bosons are also important in order to understand whether the naturalness paradigm that has played such a big role in theoretical physics could be realized in nature.
\section*{Acknowledgements}
SEH acknowledges support by the NWO Vidi grant ``Self-interacting asymmetric dark matter''. AN is supported in part by the Kenneth Young Memorial Endowed Chair and in part by the DOE under grant DE-SC0011637. DW is supported in part by a Burke faculty fellowship.\\
\section{Introduction}
An open problem in relativistic theories is related to the Hamiltonian
description of particle dynamics for which non-local interactions typically
occur. In this regard, a basic difficulty which is usually met is the lack
of a Hamiltonian formalism for non-local Lagrangian systems. In fact, for
arbitrary non-local Lagrangians it is generally impossible to define the
notion of Legendre transformation \cite{Feytesi}. As a consequence even the
phase-space itself may not be well-defined.
Most approaches to the construction of a Hamiltonian formalism for non-local
first-order Lagrangians have tried to change the functional part of the
Euler-Lagrange equations \cite{Kerner1962,Marnelius1974,Gaida1980,Llosa1993}.
In principle this delivers infinite-order Euler-Lagrange equations and a
corresponding infinite-dimensional phase-space. As an alternative, a finite
dimensional phase-space can be recovered by introducing appropriate
asymptotic approximations, i.e., truncating the expansion of the Lagrangian
in terms of finite-order derivatives \cite{Marnelius1974,LLosa1986}.
A typical situation of this kind occurs for the relativistic equation of
motion for single isolated charged particles, subject both to external and
self EM forces, namely the radiation-reaction (RR) equation. There is an
extensive literature devoted to this subject, most of which deals with
point charges. As remarked by Dorigo \textit{et al.} \cite{Dorigo2008a},
customary formulations based either on the LAD \cite{Lorentz,Abraham1905,Dirac1938} or LL \cite{LL} equations are
\emph{asymptotic,} i.e., obtained by means of asymptotic expansions of different
sort. In particular, as a consequence it follows that the LAD equation is
represented by a third-order ODE, so that it does not admit a Hamiltonian
formulation in the customary sense \cite{Goldstein,Arnold1976}. The LL,
instead, is \emph{intrinsically} non-variational, although it is a
second-order differential equation, being obtained by means of a one-step
\textquotedblleft reduction process\textquotedblright\ from the LAD equation
\cite{Dorigo2008a}. As a consequence, the LAD equation does not define a
dynamical system in the customary sense, since it requires, for non-rotating
particles, a 12-dimensional phase-space involving also the particle
acceleration. Therefore, for different reasons, both the LAD and LL
equations are \textit{manifestly non-Hamiltonian}. In particular, for the LL
equation, this implies that the corresponding phase-space volume is not
conserved. Moreover, within these treatments particles are treated as
point-like, so that non-local EM effects produced by the RR self-interaction
may remain undetermined.
Fundamental problems arise when attempting to formulate classical
statistical mechanics (CSM) for systems of relativistic charged particles
based on the LAD or LL equations. In fact even the proper axiomatic
formulation of the relativistic CSM for radiating particles is missing. This
requires the precise identification of the corresponding phase-space and the
definition of an invariant probability measure on this set. For a system of
charged particles subject solely to an external EM field and the RR
self-force this involves the construction of a Vlasov kinetic treatment. In
this regard, important issues concern:
1) The lack of a standard Hamiltonian formulation of relativistic CSM based
on such asymptotic equations, which implies the lack of a flow-preserving
measure. This feature is shared by both the LAD and LL equations.
2) The proper definition of a phase-space. The problem is relevant for the
LAD equation. In fact, although the construction of kinetic theory is still
formally possible \cite{Hakim,Hakim2}, the corresponding fluid statistical
description seems inhibited.
3) The explicit dependence of the kinetic distribution function (KDF) on the
retarded EM self 4-potential is excluded. In fact, in the LAD
and LL approximations the self-potential does not appear explicitly (see for
example Refs.\cite{Tamburini,Ma2004}). Indeed, within the point-particle
model, underlying both treatments, the retarded self-potential is divergent.
On the other hand, for the fluid treatment:
1) The precise form of the fluid closure conditions may depend on the
approximations adopted in the kinetic description for the representation of
the EM RR self-force. An example-case is provided by Ref.\cite{Ma2004} where
relativistic fluid equations are obtained based on the LL equation. As a
result, it was found that, with the exception of the continuity equation,
all moment equations involve higher-order fluid moments associated to the RR
self-force. It is unclear whether this is an intrinsic physical feature of
the RR interaction or simply a result of the approximations involved.
2) The fluid fields may in principle depend implicitly on the EM self
4-potential. In the framework of the LL equation it is unclear how such an
effect can be dealt with. However, the treatment of such effects seems to
present objective difficulties. In fact, in principle non-local effects
might arise in this way in which \textit{retarded velocity contributions}
appear in the kinetic equation. In such a case the explicit construction of
fluid equations would be ambiguous (and might involve an infinite set of
higher-order moments).
The interesting question is whether \emph{these difficulties can be overcome
in physically realizable situations, namely for exactly solvable classical
systems} (of particles) for which the relativistic equations of
motion are both \emph{variational} and \emph{non-asymptotic}. The
prerequisite is provided by the possibility of constructing an exact
representation for the RR equation for a suitable type of classical charged
particles. In the past, their precise identification with
physically-realizable systems has remained elusive because of the difficulty
of the problem. However, as recently pointed out (see Tessarotto \textit{et
al.} \cite{Tessarotto2008a} and Cremaschini \textit{et al.}
\cite{Cremaschini2011}, hereafter referred to as Paper I) in the framework of
special relativity an exact variational RR equation can be obtained for
classical finite-size charged particles. This refers to particles having a
finite-size mass and charge distributions which are quasi-rigid,
non-rotating, spherically symmetric and radially localized on a spherical
surface\emph{\ }$\partial \Omega $ having a finite radius $\sigma >0$ (see
\cite{Nodvik1964} and the related discussion in Paper I). In this
formulation, contrary to the point-particle case, the retarded EM self
4-potential is well-defined, namely, it does not diverge, and can be
determined analytically. As shown in Paper I, it follows that the RR
equation is variational and the corresponding Hamilton variational
functional is symmetric with respect to the non-local contributions. The
latter are due to the retarded EM self interaction arising from the finite
spatial extension of the charge distribution. As a consequence, the
resulting exact RR equation is a second-order delay-type ODE which admits a
Lagrangian formulation in standard form (see discussion below). Furthermore,
under suitable conditions, the same equation defines a classical dynamical
system (\textit{RR dynamical system}).
In this paper we intend to prove that, based on the results of Paper I, the
RR dynamical system admits also a Hamiltonian representation in terms of an
effective non-local Hamiltonian function $H^{eff}$. This implies that the
exact RR equation can also be cast in the equivalent \emph{standard
Hamiltonian form} represented by first-order delay-type ODEs
\begin{eqnarray}
\frac{dr^{\mu }}{ds} &=&\frac{\partial H^{eff}}{\partial P_{\mu }},
\label{HAMILTON EQUATIONS} \\
\frac{dP_{\mu }}{ds} &=&-\frac{\partial H^{eff}}{\partial r_{\mu }}, \notag
\end{eqnarray}
with $\mathbf{y}=\left( r^{\mu },P_{\mu }\right) $ denoting, in
superabundant variables, the particle canonical state which spans the
eight-dimensional phase-space $\Gamma \equiv \Gamma _{r}\times \Gamma _{u}$,
where $\Gamma _{r}$ and $\Gamma _{u}$ are respectively the Minkowski $M^{4}
-configuration space and the 4-dimensional velocity-space, both with metric
\eta _{\mu \nu }\equiv diag\left( 1,-1,-1,-1\right) $. Remarkably, here it
is found that the Hamiltonian structure can be retained also after the
introduction of a suitable short delay-time approximation of the RR force.
The result is an intrinsic feature of the extended particle model adopted in
the present treatment.
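To illustrate what "first-order delay-type" means in practice, the toy sketch below steps a one-dimensional Hamiltonian system with a retarded force using a stored history buffer. It is purely illustrative: the effective Hamiltonian $H^{eff}$ of the paper is not modeled, and the force $-k\,r(s-\mathrm{delay})$ is a hypothetical stand-in for the non-local RR term.

```python
# Toy delay-type Hamilton equations: dr/ds = p, dp/ds = -k * r(s - delay).
def evolve(steps, dt=0.01, delay=0.1, k=1.0):
    lag = int(delay / dt)
    r_hist = [0.0] * lag + [1.0]   # stored trajectory supplies r(s - delay)
    p = 0.0
    for _ in range(steps):
        r_ret = r_hist[-1 - lag]            # retarded position
        p += -k * r_ret * dt                # dP/ds = -dH^eff/dr (toy force)
        r_hist.append(r_hist[-1] + p * dt)  # dr/ds = dH^eff/dP = p
    return r_hist[-1]

print(abs(evolve(1000)) < 10.0)  # small delay only mildly anti-damps; motion stays bounded here
```

The history buffer is the essential ingredient: unlike an ordinary ODE, a delay-type system needs the trajectory over a finite past interval, not just the instantaneous state, to advance one step.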
As a consequence, the statistical description of the RR dynamical system
follows in a standard way. In particular, here we report both the exact and
asymptotic kinetic and fluid formulations. These are developed for
collisionless relativistic plasmas in the Vlasov-Maxwell description,
including consistently the contribution carried by the RR self-field.
Applications of the theory here developed concern:
1) The kinetic and fluid treatments of relativistic astrophysical plasmas
observed, for example, in accretion disks, relativistic jets, active
galactic nuclei and mass inflows around compact objects.
2) The kinetic and fluid treatments of laboratory plasmas subject to
ultra-intense and pulsed-laser sources.
\subsection{Goals and scheme of the presentation}
In detail, the plan of the paper is as follows. In Section 2 we introduce
the basic definitions concerning the Lagrangian formulation for non-local
interactions and the concept of effective non-local Lagrangian function and
the related covariance properties (THM.1 and Corollary). In Section 3 we
provide an analogous generalization which permits us to introduce the notion of a
non-local Hamiltonian formulation (THM.2 and Corollary). In particular,
within the framework of the theory here developed, the standard form for the
local Legendre transformation is retained, while the concepts of effective
canonical momenta and effective Hamiltonian function are introduced. Then,
in Section 4 the general case of a rotating finite-size and
spherically-symmetric charged particle is discussed as a physical example of
non-local interaction. It is shown that the corresponding Hamilton
variational functional satisfies the requirements of THMs.1 and 2 (see
THM.3) and therefore admits both Lagrangian and Hamiltonian formulations in
standard form. In Section 5 the theory is applied to the specific case of a
non-rotating particle. As a result, based on the analytical results of Paper
I, the corresponding variational and effective Hamiltonian formulations are
presented (THM.4). This provides the explicit form of the effective
Hamiltonian function and a parameter-free representation for the retarded EM
RR self-force. Then, in Section 6 a Hamiltonian asymptotic approximation of
the RR equation is developed (THM.5), based on the retarded-time expansion
holding in the short delay-time ordering. The approximation overcomes basic
inconsistencies of the LAD and LL treatments, applying in the case of
point-particles. As a consequence, in Section 7 the relativistic kinetic
theory for a collisionless plasma with the inclusion of the EM RR effect is
formulated in Hamiltonian form. The use of superabundant canonical variables
allows a precise identification of the phase-space and the consequent
axiomatic formulation of CSM with non-local EM RR interactions. This is
based on the notion of invariant probability measure in such a setting. As a
result, a relativistic Liouville-Vlasov kinetic equation is proved to hold
for the KDF (THM.6). This permits us to achieve a Vlasov-Maxwell description
applicable to relativistic plasmas, in which the RR interaction is
consistently taken into account also in the Maxwell equations both for the
external and self EM fields. In Section 8 the corresponding fluid fields and
fluid moment equations are computed in terms of 4-velocity integrals, which
retain the standard conservative Eulerian form as in the absence of RR
effects. The existence of both explicit and implicit non-local contributions
arising from the RR effect in the fluid equations is discussed. It is shown
that the former are associated with the EM force acting on the fluid, while
the latter enter in the definition of the fluid fields through the effective
momenta. In particular, the explicit dependence of the KDF on the retarded
EM self 4-potential is discussed and an asymptotic estimation of the
implicit contributions is presented. In Section 9 a Lagrangian formulation
of the fluid equations is derived, which allows one to introduce an explicit
parametrization of the non-local RR terms carried by the EM self 4-potential
and the EM RR force. It is shown that the exact fluid equations with the
inclusion of the RR interaction are of delay-type. Section 10 deals instead
with the development of asymptotic approximations of the moment equations.
This is motivated by the requirement of reducing the exact non-local fluid
equations to a local form. Different asymptotic approximations are obtained
for the non-local terms of the RR effect, based both on short-delay time
expansions (THM.7) and an iterative procedure which holds under the
assumption of weak RR self-force (in comparison with external EM and
pressure forces). A detailed analysis of the basic physical properties of
the kinetic and fluid treatments obtained here and a comparison with
previous literature is reported in Section 11. Concluding remarks are
presented in Section 12. Finally, in Appendix A a Green-function approach is
developed for the calculation of the EM self 4-potential, while in Appendix
B the connection with non-canonical representations is provided for the
relativistic kinetic theory.
\section{Non-local Lagrangian formulation}
The natural mathematical apparatus for an abstract description of Lagrangian
and Hamiltonian mechanics is that of variational principles, whose methods
have been studied for a long time by mathematicians and can be found in the
textbooks. Nevertheless, actual problems of interest in classical
relativistic dynamics involving the treatment of non-local interactions have
so far escaped solution. In particular, the prevailing view in the
literature is that, while a non-local variational formulation is possible, a
corresponding Hamiltonian representation is generally excluded. In the
following we intend
to point out that for a particular class of non-local Lagrangian systems the
problem can be given a complete solution. The latter correspond to
variational problems in which the variational functional is symmetric. To
this end, in this section we briefly recall basic notions holding for local
and non-local Lagrangian systems. This task represents a necessary
prerequisite for the establishment of a corresponding Hamiltonian
formulation and for the subsequent investigation of the Hamiltonian dynamics
of finite-size charged particles with the inclusion of the RR self-force.
\bigskip
\textbf{Definition \#1 - Local and non-local Lagrangian systems.}
A local (respectively, non-local) Lagrangian system is defined by the set
$\left\{ \mathbf{x},L\right\} $ such that the following conditions are
satisfied.
\begin{enumerate}
\item $\mathbf{x\equiv }\left( r^{\mu }\left( s\right) ,\frac{dr^{\mu
}\left( s\right) }{ds}\equiv \overset{\cdot }{r}^{\mu }\right) $ is the
Lagrangian state spanning the Lagrangian phase space $\Gamma _{L}\subseteq
\mathbb{R}
^{2N}$.
\item The Lagrangian action functional $S$ is a 4-scalar of the form
\begin{equation}
S=\int_{s_{1}}^{s_{2}}dsL, \label{J-lagr}
\end{equation}
with $L$ to be referred to as \emph{variational Lagrangian function}. In
particular, the functional dependencies of $S$ and $L$ are respectively of
the form:
\begin{itemize}
\item $S\equiv S_{0}\left( r\right) $ and $L\equiv L_{0}\left( r,\frac{dr}{ds}\right) $ for local systems;
\item $S\equiv S_{1}\left( r,\left[ r\right] \right) $ and $L\equiv
L_{1}\left( r,\frac{dr}{ds},\left[ r\right] \right) $ for non-local systems,
with $\left[ r\right] $ denoting non-local dependencies.
\end{itemize}
\item In the functional class
\begin{eqnarray}
\left\{ r^{\mu }\right\} &\equiv &\left\{ r^{\mu }(s):r^{\mu }(s_{i})=r_{i}^{\mu },\text{ for }i=1,2,\text{ }s_{1},s_{2}\in I,\right. \notag \\
&&\left. \text{with }s_{1}<s_{2}\text{ and }r^{\mu }(s)\in C^{2}(I)\right\} ,
\label{fc}
\end{eqnarray}
the synchronous variations $\delta r^{\mu }(s)$ are considered independent
and vanish at the endpoints $r^{\mu }(s_{i})=r_{i}^{\mu }$. Hereafter
$\delta $ denotes, as usual, the Fr\'{e}chet functional derivative. For a
synchronous variational principle the interval $ds$ is such that $\delta
ds=0 $ and is subject to the constraint
\begin{equation}
ds^{2}=g_{\mu \nu }dr^{\mu }\left( s\right) dr^{\nu }\left( s\right) ,
\label{cnstra}
\end{equation}
where $r^{\mu }\left( s\right) $ are the extremal curves.
\item The Lagrangian action (\ref{J-lagr}) admits a unique extremal curve
$r^{\mu }(s)$ such that, for all synchronous variations $\delta r^{\mu }(s)$
in the functional class (\ref{fc}) the Hamilton variational principle
\begin{equation}
\delta S=0 \label{Hamiltonvariational principle II}
\end{equation}
holds identically. For non-local systems the non-local Lagrangian must be
suitably constructed in such a way that the extremal curves $r^{\mu }\left(
s\right) $ satisfy the constraint (\ref{cnstra}).
In particular, for local systems the extremal curves of $S_{0}$ are provided
by the Euler-Lagrange (E-L) equation
\begin{equation}
\frac{\delta S_{0}}{\delta r^{\mu }}\equiv F_{\mu }(r)L_{0}=0,
\label{standard part}
\end{equation}
where, for an arbitrary set of Lagrange coordinates $q^{\mu }$, $F_{\mu }(q)$
denotes the \emph{E-L differential operator}
\begin{equation}
F_{\mu }(q)\equiv \frac{d}{ds}\frac{\partial }{\partial \overset{\bullet }{q}^{\mu }}-\frac{\partial }{\partial q^{\mu }}, \label{E-L operator}
\end{equation}
and $\overset{\bullet }{q}^{\mu }\equiv \frac{d}{ds}q^{\mu }\left( s\right) $.
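As an elementary illustration of the operator (\ref{E-L operator}), the following symbolic sketch (ours, with a purely illustrative one-degree-of-freedom Lagrangian) applies $F_{\mu }(q)$ to a harmonic oscillator in the parameter $s$:

```python
import sympy as sp

s = sp.symbols('s')
m = sp.symbols('m', positive=True)
q_sym, v_sym = sp.symbols('q v')     # placeholders for q and dq/ds
q = sp.Function('q')(s)

# Toy local Lagrangian, written with placeholders so the partial
# derivatives entering the E-L operator are unambiguous.
L0 = sp.Rational(1, 2) * m * v_sym**2 - sp.Rational(1, 2) * q_sym**2

# E-L operator of the text: F(q) L0 = d/ds (dL0/d qdot) - dL0/dq,
# evaluated along the curve q(s).
subs = {q_sym: q, v_sym: sp.diff(q, s)}
dL_dv = sp.diff(L0, v_sym).subs(subs)
dL_dq = sp.diff(L0, q_sym).subs(subs)
EL = sp.diff(dL_dv, s) - dL_dq

print(EL)  # m q'' + q: the expected equation of motion
```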
On the other hand, for non-local systems the extremal curves of the
functional $S_{1}$ are provided by the Euler-Lagrange equation
\begin{equation}
\frac{\delta S_{1}}{\delta r^{\mu }}\equiv \left. \frac{\delta S_{1}}{\delta
r^{\mu }}\right\vert _{\left[ r\right] }+\left. \frac{\delta S_{1}}{\delta
\left[ r^{\mu }\right] }\right\vert _{r}=0, \label{varderlag}
\end{equation}
where $\left. \frac{\delta S_{1}}{\delta r^{\mu }}\right\vert _{\left[ r\right] }$ and $\left. \frac{\delta S_{1}}{\delta \left[ r^{\mu }\right] }\right\vert _{r}$ carry respectively the contributions due to the local and
non-local dependencies.
\end{enumerate}
\bigskip
\textbf{Definition \#2 - Non-local Lagrangian systems in standard form.}
A non-local Lagrangian system $\left\{ \mathbf{x},L_{1}\right\} $ will be
said to admit a \emph{standard form} if the variational derivative (\ref{varderlag}) yields the E-L equations in \emph{the standard form}:
\begin{equation}
\frac{\delta S_{1}}{\delta r^{\mu }}+\frac{\delta S_{1}}{\delta \left[
r^{\mu }\right] }\equiv F_{\mu }(r)L_{eff}=0, \label{STANDARD}
\end{equation}
with
\begin{equation}
L_{eff}\equiv L_{eff}\left( r,\frac{dr}{ds},\left[ r\right] \right)
\label{efflag}
\end{equation}
denoting a suitable \emph{effective non-local Lagrangian.}
\bigskip
On the basis of these definitions, the following theorem holds.
\bigskip
\textbf{THM.1 - Non-local and Effective Lagrangian functions}
\emph{Given validity of the definitions \#1 and \#2, it follows that:}
\emph{T1}$_{1}$\emph{) The non-local Lagrangian }$L_{1}$\emph{\ and the
effective Lagrangian }$L_{eff}$\emph{\ are generally different, namely
\begin{equation}
L_{1}\neq L_{eff}. \label{l1leff}
\end{equation}
\emph{T1}$_{2}$\emph{) If }$S_{1}\left( r,\left[ r\right] \right) $\emph{\
admits the general decomposition}
\begin{equation}
S_{1}(r,\left[ r\right] )=S_{a}(r)+S_{b}(r,\left[ r\right] ), \label{j1dec}
\end{equation}
\emph{with }$S_{a}(r)\equiv \int_{s_{1}}^{s_{2}}dsL_{a}\left( r,\frac{dr}{ds}\right) $ \emph{and }$S_{b}(r,\left[ r\right] )\equiv \int_{s_{1}}^{s_{2}}dsL_{b}\left( r,\frac{dr}{ds},\left[ r\right] \right) $\emph{, and moreover }$S_{b}(r,\left[ r\right] )$\emph{\ defines a symmetric
functional such that}
\begin{equation}
S_{b}(r,\left[ r\right] )=S_{b}(\left[ r\right] ,r), \label{sym1}
\end{equation}
\emph{then the effective Lagrangian }$L_{eff}$\emph{\ is related to the
variational non-local Lagrangian }$L_{1}\equiv L_{a}+L_{b}$\emph{\ as}
\begin{equation}
L_{eff}=L_{a}+2L_{b}=L_{1}+L_{b}. \label{leff-lab}
\end{equation}
\emph{Proof} - T1$_{1}$) The proof is an immediate consequence of Eqs.(\ref{varderlag}) and (\ref{STANDARD}). In fact, by definition the E-L
differential operator $F_{\mu }(r)$ is a local differential operator that is
required to preserve its form also for non-local systems. On the other hand,
the variational derivative (\ref{varderlag}) is different from (\ref{standard part}). Hence, in order to write the E-L equations associated with
the non-local function $L_{1}$ in standard form, a suitable effective
Lagrangian $L_{eff}$ must be introduced, which must differ from $L_{1}$ and
be expressed in such a way that the non-local dependencies contained in
$L_{1}$ can be equivalently treated by means of $F_{\mu }(r)$.
T1$_{2}$) The proof follows by inspecting the general definition (\ref{varderlag}). In this case, in view of the symmetry property (\ref{sym1}),
it follows manifestly that
\begin{equation}
\frac{\delta S_{1}}{\delta r^{\mu }}\equiv \left. \frac{\delta S_{1}}{\delta
r^{\mu }}\right\vert _{\left[ r\right] }+\left. \frac{\delta S_{1}}{\delta
\left[ r^{\mu }\right] }\right\vert _{r}=\left. \frac{\delta S_{a}}{\delta
r^{\mu }}\right\vert _{\left[ r\right] }+2\left. \frac{\delta S_{b}}{\delta
r^{\mu }}\right\vert _{\left[ r\right] }=0.
\end{equation}
Then, by comparing this relation with the definitions both of the E-L
differential operator (\ref{E-L operator}) and the standard form
representation of the E-L equations (\ref{STANDARD}), it follows that the
effective Lagrangian $L_{eff}$ necessarily takes the form given in Eq.(\ref{leff-lab}). This completes the proof of the statement.
\textbf{Q.E.D.}
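The factor of 2 in Eq.(\ref{leff-lab}) can be checked on a discrete toy analogue (ours, purely illustrative): replacing the symmetric functional $S_{b}$ by a quadratic form $r^{T}Kr$ with symmetric kernel $K$, the variation of the first slot alone gives $Kr$, while the full gradient is $2Kr$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Discrete analogue of a symmetric non-local functional:
# S_b(r, [r]) = sum_{i,j} r_i K_ij r_j with K_ij = K_ji (cf. Eq. (sym1)).
K = rng.normal(size=(n, n))
K = 0.5 * (K + K.T)          # enforce the symmetry S_b(r,[r]) = S_b([r],r)
r = rng.normal(size=n)

S_b = lambda x: x @ K @ x

# Central-difference gradient of S_b:
eps = 1e-6
grad = np.array([(S_b(r + eps * e) - S_b(r - eps * e)) / (2 * eps)
                 for e in np.eye(n)])

# Varying the first slot with [r] held fixed gives K @ r; the symmetry
# doubles it -- the discrete counterpart of L_eff = L_a + 2 L_b.
assert np.allclose(grad, 2 * K @ r, atol=1e-5)
print("factor-2 check passed")
```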
\bigskip
A basic consequence of Definition \#2 and THM.1 concerns the covariance
property of the E-L equations (\ref{STANDARD}). The result is stated in the
following Corollary.
\bigskip
\textbf{Corollary 1 to THM.1 - Covariance of the E-L equations for arbitrary
point transformations.}
\emph{The Euler-Lagrange equations (\ref{STANDARD}) are covariant with
respect to arbitrary point transformations}
\begin{equation}
r^{\mu }\rightarrow q^{\mu }(r) \label{pointtr}
\end{equation}
\emph{represented by a diffeomorphism of class }$C^{k}$, \emph{with }$k\geq
2,$ \emph{which requires that they be of the form}
\begin{equation}
F_{\mu }(q)\widetilde{L}_{eff}=\frac{\partial r^{\nu }}{\partial q^{\mu }}F_{\nu }(r)L_{eff}=0, \label{elcov}
\end{equation}
\emph{with }$\widetilde{L}_{eff}$\emph{\ denoting}
\begin{equation}
\widetilde{L}_{eff}\left( q,\frac{dq}{ds},\left[ q\right] \right) \equiv
L_{eff}\left( r,\frac{dr}{ds},\left[ r\right] \right) .
\end{equation}
\emph{As a consequence, Eq.(\ref{STANDARD}) satisfies also the covariance
property with respect to arbitrary infinitesimal Lorentz transformations
(Manifest Lorentz Covariance).}
\emph{Proof}\ - The Euler-Lagrange equations (\ref{STANDARD}) are by
definition covariant provided the variational Lagrangian $L_{1}\left( r,\frac{dr}{ds},\left[ r\right] \right) $ is a 4-scalar (as it is by
construction). Then, it is sufficient to represent the Lagrangian action in
terms of the Lagrangian coordinates $q^{\mu }$, yielding
\begin{equation}
\widetilde{S_{1}}(q,\left[ q\right] )\equiv S_{1}(r,\left[ r\right] ),
\end{equation}
with $\widetilde{S_{1}}(q,\left[ q\right] )$ denoting the transformed action
\begin{equation}
\widetilde{S_{1}}(q,\left[ q\right] )=\int_{s_{1}}^{s_{2}}ds\widetilde{L_{1}}\left( q,\frac{dq}{ds},\left[ q\right] \right)
\end{equation}
and $\widetilde{L_{1}}$ denoting the transformed variational non-local
Lagrangian. Hence, the Hamilton variational principle $\delta \widetilde{S_{1}}(q,\left[ q\right] )=0$ yields precisely the E-L equations (\ref{elcov}). This proves the statement. The covariance property of Eqs.(\ref{STANDARD}) with respect to point transformations (\ref{pointtr}) includes, as a
particular case, Lorentz transformations. Therefore, Eqs.(\ref{STANDARD})
are also \emph{Manifestly Lorentz Covariant} (MLC).
\textbf{Q.E.D.}
\bigskip
We notice the following remarkable features of this treatment:
1) In general, in the absence of any kind of symmetry, a non-local
Lagrangian system does not admit a standard-form representation in terms of
the effective Lagrangian $L_{eff}$ \cite{Feytesi}.
2) As shown in T1$_{2}$, the possibility of getting an explicit relationship
between $L_{1}$ and $L_{eff}$ is a consequence solely of the symmetry
property (\ref{sym1}) of the functional $S_{b}$. This also proves the
existence of $L_{eff}$ and, as a consequence, of the standard form
representation for non-local systems satisfying Eq.(\ref{sym1}).
3) The symmetry assumption (\ref{sym1}) can be effectively realized in
physical systems. As will be shown below, this condition is satisfied by
the variational functional which describes the dynamics of finite-size
classical charged particles with the inclusion of the RR effects arising
from the interaction with the EM self-field.
\bigskip
\section{Non-local Hamiltonian formulation}
In this section we deal with the basic features concerning the Hamiltonian
formulation for non-local systems which admit a variational treatment in
terms of non-local Lagrangian functions. This requires the introduction of
the following preliminary definitions.
\bigskip
\textbf{Definition \#3 - Local and non-local Hamiltonian systems.}
A local (respectively, non-local) Lagrangian system $\left\{ \mathbf{x},L\right\} $ is said to admit a local (non-local) Hamiltonian system
$\left\{ \mathbf{y\equiv (}r^{\mu }\mathbf{,}p_{\mu }\mathbf{),}H\right\} $
provided the following conditions are satisfied.
\begin{enumerate}
\item The \emph{variational Hamiltonian }$H$ is defined as the Legendre
transformation of the local (non-local) variational Lagrangian $L$
\begin{equation}
H=p_{\mu }\frac{dr^{\mu }}{ds}-L, \label{canh}
\end{equation}
with
\begin{equation}
p_{\mu }=\frac{\partial L}{\partial \frac{dr^{\mu }}{ds}} \label{canmom}
\end{equation}
being the corresponding canonical momentum, with associated action
functional $S_{H}\equiv \int_{s_{1}}^{s_{2}}ds\left[ p_{\mu }\frac{dr^{\mu }}{ds}-H\right] $.
\item It is assumed that $H$ is respectively of the form:
\begin{itemize}
\item $H\equiv H_{0}\left( r,p\right) $ for local systems;
\item $H\equiv H_{1}\left( r,p,\left[ r\right] \right) $ for non-local
systems, namely it is a local function of $\left( r,p\right) $ and a
functional of $\left[ r\right] $.
\end{itemize}
The corresponding \emph{Hamilton action functionals} are denoted
respectively as
\begin{equation}
S_{H_{0}}(r,p)=\int_{s_{1}}^{s_{2}}ds\left[ p_{\mu }\frac{dr^{\mu }}{ds}-H_{0}\right]
\end{equation}
for local systems, and as
\begin{equation}
S_{H_{1}}(r,p,\left[ r\right] )=\int_{s_{1}}^{s_{2}}ds\left[ p_{\mu }\frac{dr^{\mu }}{ds}-H_{1}\right]
\end{equation}
for non-local systems.
\item In the functional class
\begin{eqnarray}
\left\{ \mathbf{y\equiv (}r^{\mu }\mathbf{,}p_{\mu }\mathbf{)}\right\}
&\equiv &\left\{ \mathbf{y}(s):\mathbf{y}(s_{i})=\mathbf{y}_{i},\text{ for }i=1,2,\text{ }s_{1},s_{2}\in I,\right. \notag \\
&&\left. \text{with }s_{1}<s_{2}\text{ and }\mathbf{y}(s)\in C^{2}(I)\right\}
\end{eqnarray}
the synchronous variations $\left( \delta r^{\mu }(s),\delta p_{\mu
}(s)\right) $ are all considered independent and vanish at the endpoints
$\mathbf{y}(s_{i})=\mathbf{y}_{i}$. By assumption, synchronous variations
imply that $\delta ds=0$, with the interval $ds$ satisfying the constraint
\begin{equation}
ds^{2}=g_{\mu \nu }dr^{\mu }\left( s\right) dr^{\nu }\left( s\right) ,
\end{equation}
where $r^{\mu }\left( s\right) $ are the extremal curves.
\item The \emph{modified Hamilton variational principle}
\begin{equation}
\delta S_{H}=0
\end{equation}
with variations $\left( \delta r^{\mu }(s),\delta p_{\mu }(s)\right) $ is
equivalent to the Hamilton principle (\ref{Hamiltonvariational principle II}), i.e., it yields the same extremal curves in the functional class $\left\{
\mathbf{y}\right\} $.
In particular, for local systems the extremal curves of $S_{H_{0}}$ can be
cast in the \emph{standard Hamiltonian form as first-order ODEs}
\begin{equation}
\frac{\delta S_{H_{0}}}{\delta p_{\mu }}=\frac{dr^{\mu }}{ds}=\frac{\partial
H_{0}}{\partial p_{\mu }}=\left[ r^{\mu },H_{0}\right] ,
\end{equation}
\begin{equation}
\frac{\delta S_{H_{0}}}{\delta r^{\mu }}=-\frac{dp_{\mu }}{ds}=\frac{\partial H_{0}}{\partial r^{\mu }}=\left[ p_{\mu },H_{0}\right] ,
\end{equation}
where the customary Poisson bracket formalism has been used.
On the other hand, for non-local systems the extremal curves of the
functional $S_{H_{1}}$ are provided by the set of first-order ODEs
\begin{equation}
\frac{\delta S_{H_{1}}}{\delta p_{\mu }}=0, \label{h1}
\end{equation}
\begin{equation}
\frac{\delta S_{H_{1}}}{\delta r^{\mu }}\equiv \left. \frac{\delta S_{H_{1}}}{\delta r^{\mu }}\right\vert _{\left[ r\right] }+\left. \frac{\delta S_{H_{1}}}{\delta \left[ r^{\mu }\right] }\right\vert _{r}=0, \label{h2}
\end{equation}
where $\left. \frac{\delta S_{H_{1}}}{\delta r^{\mu }}\right\vert _{\left[ r\right] }$ and $\left. \frac{\delta S_{H_{1}}}{\delta \left[ r^{\mu }\right] }\right\vert _{r}$ carry respectively the contributions due to the local and
non-local dependencies.
\end{enumerate}
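The Legendre transformation (\ref{canh})-(\ref{canmom}) and the resulting standard Hamiltonian form can be illustrated symbolically for a local toy system (the potential $kr^{2}/2$ below is an illustrative choice of ours, not part of the theory):

```python
import sympy as sp

m, k = sp.symbols('m k', positive=True)
r, p, v = sp.symbols('r p v')   # position, momentum, v = dr/ds

# Local toy Lagrangian L0(r, v).
L0 = sp.Rational(1, 2) * m * v**2 - sp.Rational(1, 2) * k * r**2

# Legendre transformation, Eqs. (canmom) and (canh): p = dL/dv, H = p v - L.
p_of_v = sp.diff(L0, v)                       # p = m v
v_of_p = sp.solve(sp.Eq(p, p_of_v), v)[0]     # invert: v = p/m
H0 = (p * v - L0).subs(v, v_of_p)             # H = p^2/(2m) + k r^2/2

print(sp.expand(H0))
# Hamilton equations in standard first-order form:
print(sp.diff(H0, p))    # dr/ds =  p/m
print(-sp.diff(H0, r))   # dp/ds = -k r
```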
\bigskip
\textbf{Definition \#4 - Non-local Hamiltonian systems in standard form.}
A non-local Hamiltonian system $\left\{ \mathbf{y},H_{1}\right\} $ will be
said to admit a \emph{standard form} if the extremal first-order ODEs (\ref{h1}) and (\ref{h2}) can be cast in the \emph{standard Hamiltonian form} in
terms of the \emph{effective canonical momentum }$P_{\mu }$\emph{\ and
Hamiltonian function }$H_{eff}$ as
\begin{equation}
\frac{\delta S_{H_{1}}}{\delta p_{\mu }}=\frac{dr^{\mu }}{ds}=\frac{\partial
H_{eff}}{\partial P_{\mu }}=\left[ r^{\mu },H_{eff}\right] , \label{djh1}
\end{equation}
\begin{equation}
\frac{\delta S_{H_{1}}}{\delta r^{\mu }}\equiv \left. \frac{\delta S_{H_{1}}}{\delta r^{\mu }}\right\vert _{\left[ r\right] }+\left. \frac{\delta S_{H_{1}}}{\delta \left[ r^{\mu }\right] }\right\vert _{r}=-\frac{dP_{\mu }}{ds}=\frac{\partial H_{eff}}{\partial r^{\mu }}=\left[ P_{\mu },H_{eff}\right] . \label{djh1-1}
\end{equation}
Here both $H_{eff}=H_{eff}\left( r,P,\left[ r\right] \right) $ and $P_{\mu }$
must be defined in terms of the effective Lagrangian function introduced in
Eq.(\ref{STANDARD}) respectively as
\begin{equation}
H_{eff}\equiv P_{\mu }\frac{dr^{\mu }}{ds}-L_{eff} \label{heff}
\end{equation}
and
\begin{equation}
P_{\mu }\equiv \frac{\partial L_{eff}}{\partial \frac{dr^{\mu }}{ds}}.
\label{pleff}
\end{equation}
From this definition it follows that, if the non-local Hamiltonian system
$\left\{ \mathbf{y},H_{1}\right\} $ admits a standard form, then the Poisson
bracket representation holds for $H_{eff}$ and $P_{\mu }$.
\bigskip
The following theorem can be stated concerning the relationship between
$H_{1}$ and $H_{eff}$.
\bigskip
\textbf{THM.2 - Non-local and Effective Hamiltonian functions}
\emph{Given validity of the definitions \#3 and \#4 and the results of
THM.1, if }$S_{H_{1}}\left( r,p,\left[ r\right] \right) $\emph{\ admits the
general decomposition}
\begin{equation}
S_{H_{1}}\left( r,p,\left[ r\right] \right) =S_{H_{a}}(r,p)+S_{H_{b}}(r,p,\left[ r\right] ),
\end{equation}
\emph{with}
\begin{eqnarray}
S_{H_{a}}(r,p) &\equiv &\int_{s_{1}}^{s_{2}}ds\left[ p_{a\mu }\frac{dr^{\mu }}{ds}-H_{a}\left( r,p\right) \right] , \\
S_{H_{b}}(r,p,\left[ r\right] ) &\equiv &\int_{s_{1}}^{s_{2}}ds\left[ p_{b\mu }\frac{dr^{\mu }}{ds}-H_{b}\left( r,p,\left[ r\right] \right) \right] ,
\end{eqnarray}
\emph{where the canonical momenta }$p_{a\mu }$ \emph{and }$p_{b\mu }$
\emph{are defined respectively as}
\begin{eqnarray}
p_{a\mu } &\equiv &\frac{\partial L_{a}}{\partial \frac{dr^{\mu }}{ds}}, \\
p_{b\mu } &\equiv &\frac{\partial L_{b}}{\partial \frac{dr^{\mu }}{ds}},
\end{eqnarray}
\emph{and moreover }$S_{H_{b}}(r,p,\left[ r\right] )$\emph{\ defines a
symmetric functional such that}
\begin{equation}
S_{H_{b}}(r,p,\left[ r\right] )=S_{H_{b}}(\left[ r\right] ,p,r),
\label{sym2}
\end{equation}
\emph{then the effective Hamiltonian }$H_{eff}$\emph{\ is related to the
variational non-local Hamiltonian }$H_{1}\equiv H_{a}+H_{b}$\emph{\ as}
\begin{equation}
H_{eff}=H_{a}+2H_{b}=H_{1}+H_{b}, \label{heff-hab}
\end{equation}
\emph{where, by definition}
\begin{eqnarray}
H_{1} &\equiv &p_{\mu }\frac{dr^{\mu }}{ds}-L_{1}, \label{h1bis} \\
H_{a} &\equiv &p_{a\mu }\frac{dr^{\mu }}{ds}-L_{a}, \label{ha} \\
H_{b} &\equiv &p_{b\mu }\frac{dr^{\mu }}{ds}-L_{b}. \label{hb}
\end{eqnarray}
\emph{Proof} - The proof follows from THM.1 and by invoking the general
definitions (\ref{djh1}) and (\ref{djh1-1}). In fact, in view of the
symmetry property (\ref{sym2}), it follows manifestly that
\begin{equation}
\frac{\delta S_{H_{1}}}{\delta r^{\mu }}\equiv \left. \frac{\delta S_{H_{1}}}{\delta r^{\mu }}\right\vert _{\left[ r\right] }+\left. \frac{\delta S_{H_{1}}}{\delta \left[ r^{\mu }\right] }\right\vert _{r}=\left. \frac{\delta S_{H_{a}}}{\delta r^{\mu }}\right\vert _{\left[ r\right] }+2\left. \frac{\delta S_{H_{b}}}{\delta r^{\mu }}\right\vert _{\left[ r\right] }.
\end{equation}
Then, by comparing this relation with the definitions (\ref{heff})-(\ref{pleff}) for the standard Hamiltonian form and using Eqs.(\ref{h1bis})-(\ref{hb}), from the analogous results in THM.1 concerning the relationship
between $L_{1}$ and $L_{eff}$ in the symmetric case, Eq.(\ref{heff-hab}) is
readily obtained.
\textbf{Q.E.D.}
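The identity (\ref{heff-hab}) can also be verified symbolically: with $L_{eff}=L_{a}+2L_{b}$ from THM.1 and the momenta defined as above, $H_{eff}=H_{a}+2H_{b}$ holds for arbitrary velocity dependence of $L_{a}$ and $L_{b}$. A one-dimensional sketch (ours):

```python
import sympy as sp

v = sp.symbols('v')
La = sp.Function('La')(v)   # "local" part L_a evaluated along the orbit
Lb = sp.Function('Lb')(v)   # symmetric non-local part L_b

# THM.1 result: L_eff = L_a + 2 L_b; effective momentum P = dL_eff/dv.
Leff = La + 2 * Lb
P = sp.diff(Leff, v)

# Legendre transforms with momenta p_a = dL_a/dv, p_b = dL_b/dv (THM.2).
pa, pb = sp.diff(La, v), sp.diff(Lb, v)
Ha = pa * v - La
Hb = pb * v - Lb
Heff = P * v - Leff

# Identity (heff-hab): H_eff = H_a + 2 H_b = H_1 + H_b, with H_1 = H_a + H_b.
H1 = Ha + Hb
assert sp.simplify(Heff - (Ha + 2 * Hb)) == 0
assert sp.simplify(Heff - (H1 + Hb)) == 0
print("H_eff = H_a + 2 H_b = H_1 + H_b verified symbolically")
```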
\bigskip
Finally, as a basic consequence of Definition \#4 and THM.2, the following
Corollary can be stated concerning the covariance property of the Hamilton
equations in standard form.
\bigskip
\textbf{Corollary 1 to THM.2 - Covariance of the Hamilton equations for
arbitrary point transformations.}
\emph{The Hamilton equations (\ref{djh1})-(\ref{djh1-1}) in standard form
are covariant with respect to arbitrary point transformations}
\begin{equation}
r^{\mu }\rightarrow q^{\mu }(r) \label{pointtr-bis}
\end{equation}
\emph{represented by a diffeomorphism of class }$C^{k}$ \emph{with }$k\geq
2,$ \emph{which requires that they be of the form}
\begin{eqnarray}
\frac{dq^{\mu }}{ds} &=&\frac{\partial \widetilde{H}_{eff}}{\partial P_{\left( q\right) \mu }}=\left[ q^{\mu },\widetilde{H}_{eff}\right] ,
\label{cov-h-1} \\
\frac{dP_{\left( q\right) \mu }}{ds} &=&-\frac{\partial \widetilde{H}_{eff}}{\partial q^{\mu }}=\left[ P_{\left( q\right) \mu },\widetilde{H}_{eff}\right] , \label{cov-h-2}
\end{eqnarray}
\emph{with }$\widetilde{H}_{eff}$\emph{\ denoting}
\begin{equation}
\widetilde{H}_{eff}\left( q,P,\left[ q\right] \right) \equiv H_{eff}\left(
r,P,\left[ r\right] \right) \label{heff-tilda}
\end{equation}
\emph{and }$P_{\left( q\right) \mu }$ \emph{being the transformed canonical
momentum. As a consequence, Eqs.(\ref{cov-h-1}) and (\ref{cov-h-2}) satisfy
also the covariance property with respect to arbitrary infinitesimal Lorentz
transformations (Manifest Lorentz Covariance).}
\emph{Proof}\ - In fact, for an arbitrary point transformation of the type
(\ref{pointtr-bis}), the corresponding transformation for the momenta
$P_{\nu }$ is
\begin{equation}
P_{(q)\mu }=\frac{\partial r^{\nu }}{\partial q^{\mu }}P_{\nu },
\end{equation}
which yields
\begin{eqnarray}
\frac{\partial P_{\mu }}{\partial P_{(q)\nu }} &=&\frac{\partial q^{\nu }}{\partial r^{\mu }}, \\
\frac{\partial P_{(q)\mu }}{\partial P_{\nu }} &=&\frac{\partial r^{\nu }}{\partial q^{\mu }}.
\end{eqnarray}
Hence, it follows that
\begin{eqnarray}
\frac{dq^{\mu }}{ds} &=&\frac{\partial \widetilde{H}_{eff}}{\partial P_{(q)\mu }}=\frac{\partial q^{\mu }}{\partial r^{\nu }}\frac{dr^{\nu }}{ds}=\frac{\partial q^{\mu }}{\partial r^{\nu }}\frac{\partial H_{eff}}{\partial P_{\nu }}, \\
\frac{dP_{(q)\mu }}{ds} &=&-\frac{\partial \widetilde{H}_{eff}}{\partial q^{\mu }}=\frac{\partial P_{(q)\mu }}{\partial P_{\nu }}\frac{dP_{\nu }}{ds}=-\frac{\partial r^{\nu }}{\partial q^{\mu }}\frac{\partial H_{eff}}{\partial r^{\nu }},
\end{eqnarray}
which implies
\begin{eqnarray}
\frac{\partial \widetilde{H}_{eff}}{\partial P_{(q)\mu }} &=&\frac{\partial
q^{\mu }}{\partial r^{\nu }}\frac{\partial H_{eff}}{\partial P_{\nu }},
\label{htilde-cov1} \\
\frac{\partial \widetilde{H}_{eff}}{\partial q^{\mu }} &=&\frac{\partial
r^{\nu }}{\partial q^{\mu }}\frac{\partial H_{eff}}{\partial r^{\nu }},
\label{htilde-cov2}
\end{eqnarray}
where $\widetilde{H}_{eff}$ is defined in Eq.(\ref{heff-tilda}) above.
Therefore, the Hamilton equations in standard form for the Lagrangian
coordinates $q^{\mu }$ and the canonical momenta $P_{(q)\mu }$ are
respectively covariant [Eq.(\ref{htilde-cov1})] and contravariant [Eq.(\ref{htilde-cov2})] with respect to the point transformation (\ref{pointtr-bis}). This is true also for arbitrary infinitesimal Lorentz transformations,
which proves the MLC of the Hamilton equations (\ref{cov-h-1}) and (\ref{cov-h-2}) in standard form.
\textbf{Q.E.D.}
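The covariance property can be checked explicitly in one dimension for a hypothetical local Hamiltonian and the illustrative point transformation $q=e^{r}$ (both choices are ours, for illustration only):

```python
import sympy as sp

r, P = sp.symbols('r P')
V = sp.Function('V')

# Hypothetical local Hamiltonian in the original variables (one dimension).
H = P**2 / 2 + V(r)

# Point transformation q = exp(r); the momentum transforms with dr/dq,
# i.e. P_q = (dr/dq) P, so that P dr = P_q dq (canonicity).
q, Pq = sp.symbols('q Pq', positive=True)
r_of_q = sp.log(q)           # inverse transformation
P_of = q * Pq                # from P_q = P / q

Htilde = H.subs({r: r_of_q, P: P_of})

# Covariance of the Hamilton equations: dH~/dPq must equal (dq/dr) dH/dP.
lhs = sp.diff(Htilde, Pq)
rhs = (sp.exp(r) * sp.diff(H, P)).subs({r: r_of_q, P: P_of})
assert sp.simplify(lhs - rhs) == 0
print("Hamilton equations transform covariantly under q = exp(r)")
```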
\bigskip
\section{An example of non-local interaction: the classical EM RR problem}
A crucial issue of the present investigation concerns the possible existence
of physical systems subject to non-local interactions whose dynamics can be
consistently described in terms of a variational action integral and which
admit at the same time both Lagrangian and Hamiltonian formulations in
standard form. In this section we prove that the EM RR problem for classical
finite-size charged particles represents a physical example of non-local
interactions of this kind. The reason behind the choice of considering
extended particles is the necessity of avoiding the intrinsic divergences of
the RR effect characteristic of the point-particle model.
In fact, consider the general form of the Hamilton action functional for the
variational treatment of the dynamics of an extended charged particle in
the presence of an external EM field and with the inclusion of the RR
self-interaction. This can be conveniently expressed as follows:
\begin{equation}
S_{1}\left( z,\left[ z\right] \right) =S_{M}\left( z\right)
+S_{C}^{(ext)}(z)+S_{C}^{(self)}(z,\left[ z\right] ), \label{s1}
\end{equation}
where $S_{M},S_{C}^{(ext)}$ and $S_{C}^{(self)}$ are respectively the
contributions from the inertial mass and the EM coupling with the external
and the self fields. In particular, denoting by $j^{\left( self\right) \mu
}(r)$ the particle 4-current density generated by the particle itself and
observed at a 4-position $r$, the two coupling action integrals are provided
by the following 4-scalars
\begin{eqnarray}
S_{C}^{(ext)}\left( z\right) &=&\frac{1}{c^{2}}\int_{1}^{2}d^{4}rA^{(ext)\mu }\left( r\right) j_{\mu }^{\left( self\right) }(r), \label{sext} \\
S_{C}^{(self)}(z,\left[ z\right] ) &=&\frac{1}{c^{2}}\int_{1}^{2}d^{4}rA^{(self)\mu }\left( r\right) j_{\mu }^{\left( self\right) }(r), \label{sself}
\end{eqnarray}
where $A_{\mu }^{(ext)}$ and $A_{\mu }^{(self)}$ denote the 4-vector
potentials of the external and the self EM fields and $z$ is a state to be
suitably defined (see below). A clarification here is in order. The external
EM 4-potential $A_{\mu }^{(ext)}\left( r\right) $ acting on the charged
particle located at the 4-position $r$ is assumed to be produced only by
prescribed \textquotedblleft external\textquotedblright\ sources, namely,
excluding the particle itself, by the remaining possible EM sources
belonging to the configuration space $\Gamma _{r}$. Within the framework of
special relativity, both the inertial term and the coupling term with the
external field carry only local dependencies, in the sense that they depend
explicitly only on the local 4-position $r$. They provide the classical
dynamics of charged particles in absence of any RR effect. On the other
hand, the functional $S_{C}^{(self)}$ associated to the EM self-interaction
contains both local and non-local contributions. In particular, since the
state $z$ of a finite-size particle must include a 4-position vector $r$, it
follows that $S_{C}^{(self)}$ generally depends explicitly on two different
4-positions, $r$ and $\left[ r\right] $, to be properly defined (see below).
The non-local property of $S_{C}^{(self)}$ represents a characteristic
feature of RR phenomena.
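To make the symmetry property invoked below more transparent, note that once the self 4-potential $A_{\mu }^{(self)}$ is expressed through a Green function (the explicit construction is given in Appendix A), the self-coupling functional (\ref{sself}) takes, schematically, a symmetric double-integral form. The kernel below is only a structural sketch of ours, not the exact expression derived in Appendix A:
\begin{equation}
S_{C}^{(self)}\sim \frac{1}{c^{2}}\int d^{4}r\int d^{4}r^{\prime }\,j^{\left( self\right) \mu }(r)\,G\left( r,r^{\prime }\right) j_{\mu }^{\left( self\right) }(r^{\prime }).
\end{equation}
Because the same current density appears at both integration points, relabeling $r\leftrightarrow r^{\prime }$ leaves the value of the functional unchanged, which is the mechanism underlying the symmetry conditions (\ref{sym1}) and (\ref{sym2}).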
From the relationship (\ref{s1}) it follows that the Hamilton action
functional for the treatment of the RR admits the decomposition (\ref{j1dec}) introduced by THM.1, namely it can be written as the sum of two terms,
carrying respectively only local and both local and non-local dependencies.
In order to prove that the same functional admits also a Lagrangian and a
Hamiltonian representation in standard form it is sufficient to show that
the self-coupling functional is symmetric in $z$ and $\left[ z\right] $, in
the sense defined in THM.1. For this purpose we need to determine explicitly
the general expression of the 4-current and the self 4-potential for a
rotating finite-size charged particle.
The first step consists in constructing a covariant representation for the
4-current density. We follow the approach presented by Nodvik \cite{Nodvik1964}. Thus, we consider an extended charged particle with charge and
mass distributions having the same support $\partial \Omega $, to be
identified with a smooth surface. Denoting by $r^{\mu }\left( s\right) $ the
4-vector position (with proper time $s$) of a reference point belonging to
the internal open domain $\Omega $ and by $\varsigma ^{\mu }$ a generic 4-vector
of $\partial \Omega $, the displacement vector $\xi ^{\mu }$ is defined as
\begin{equation}
\xi ^{\mu }\equiv \varsigma ^{\mu }-r^{\mu }(s).
\end{equation}
The particle model is prescribed by imposing the constraints of rigidity of $\partial \Omega $, namely for all $\varsigma ^{\mu }$ and $r^{\mu }\left( s\right) $ \cite{Nodvik1964}
\begin{eqnarray}
\xi ^{\mu }\xi _{\mu } &=&const., \label{eee1a} \\
\xi _{\mu }u^{\mu }(s) &=&0, \label{eee2a}
\end{eqnarray}
where $u^{\mu }(s)\equiv \frac{d}{ds}r^{\mu }(s)$. In particular, we shall
assume that mass and charge distributions are spherically symmetric and
therefore characterized by a form factor $f\left( \xi ^{2}\right) \equiv
f\left( \xi ^{\mu }\xi _{\mu }\right) $. This allows one to identify $r^{\mu
}\left( s\right) $ as the center-point of $\partial \Omega $. The extended
particle can in principle exhibit both translational and rotational degrees
of freedom. In particular, the translational motion can be described in
terms of $r^{\mu }\left( s\right) $. Instead, the rotational dynamics, which
includes both space-time rotations associated to the so-called Thomas
precession and pure spatial rotations, can be described in terms of the
Euler angles $\alpha (s)\equiv \left\{ \varphi (s),\vartheta (s),\psi
(s)\right\} $. It follows that, in this case, the Lagrangian state $z$ must
be identified with the set of variables $z\equiv \left( r^{\alpha }\left(
s\right) ,\alpha (s)\right) $. In view of these definitions it is immediate
to prove that the 4-current density for the finite-size particle can be
written as follows
\begin{equation}
j^{\left( self\right) \mu }(r)=qc\int_{-\infty }^{+\infty }ds\left\{ u^{\mu
} \left[ 1-\frac{du_{\alpha }}{ds}x^{\alpha }\right] -\frac{1}{c}\omega
^{\mu \nu }x_{\nu }\right\} f(x^{2})\delta (x^{\alpha }u_{\alpha }),
\label{jmu}
\end{equation}
where
\begin{equation}
x^{\mu }=r^{\mu }-r^{\mu }\left( s\right) \label{xx}
\end{equation}
and $\omega ^{\mu \nu }=\omega ^{\mu \nu }\left( s\right) $ is the
antisymmetric angular velocity tensor \cite{Nodvik1964}, which depends on $s$
through the Euler angles $\alpha (s)$. The term $\left[ 1-\frac{du_{\alpha }}{ds}x^{\alpha }\right] $ contains the acceleration of $r^{\mu }\left( s\right) $ and represents the contribution associated to the Thomas precession effect. This can be formally eliminated by using the properties
of the Dirac-delta function, implying the identity
\begin{equation}
\delta (x^{\alpha }u_{\alpha }(s))=\frac{1}{\left\vert \frac{d\left[ x^{\alpha }u_{\alpha }\right] }{ds}\right\vert }\delta (s-s_{1})=\frac{1}{\left\vert 1-\frac{du_{\alpha }}{ds}x^{\alpha }\right\vert }\delta (s-s_{1}),
\label{prop}
\end{equation}
where by definition $s_{1}=s_{1}\left( r\right) $ is the root of the algebraic equation
\begin{equation}
u_{\mu }(s_{1})\left[ r^{\mu }-r^{\mu }\left( s_{1}\right) \right] =0.
\end{equation}
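For completeness, the Jacobian entering Eq.(\ref{prop}) can be verified by direct differentiation. Since Eq.(\ref{xx}) implies $\frac{d}{ds}x^{\mu }=-u^{\mu }(s)$, and the 4-velocity satisfies the normalization $u^{\alpha }u_{\alpha }=1$ adopted here, one finds
\begin{equation}
\frac{d}{ds}\left[ x^{\alpha }u_{\alpha }\right] =-u^{\alpha }u_{\alpha }+x^{\alpha }\frac{du_{\alpha }}{ds}=-\left[ 1-\frac{du_{\alpha }}{ds}x^{\alpha }\right] ,
\end{equation}
whose absolute value evaluated at $s=s_{1}$ reproduces the denominator appearing in Eq.(\ref{prop}).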
As a result, the 4-current can be equivalently expressed as
\begin{equation}
j^{\left( self\right) \mu }(r)=qc\int_{-\infty }^{+\infty }ds\left[ u^{\mu
}\delta (s-s_{1})-\frac{1}{c}\omega ^{\mu \nu }x_{\nu }\delta (x^{\alpha
}u_{\alpha })\right] f(x^{2}). \label{jmu2}
\end{equation}
The second step consists in constructing a Green-function representation for
the EM self-potential $A^{(self)\mu }$ in terms of the 4-current $j^{\left(
self\right) \mu }(r)$. This technique is well-known. Thus, considering the
Maxwell equations in flat space-time, in the Lorentz gauge $A^{(self)\beta
},_{\beta }=0$, the self 4-potential must satisfy the wave equation
\begin{equation}
\square A^{(self)\mu }=\frac{4\pi }{c}j^{\left( self\right) \mu }(r),
\label{6biz}
\end{equation}
where $\square $ represents the D'Alembertian operator and $j^{\left(
self\right) \mu }(r)$ is given by Eq.(\ref{jmu2}). The formal solution of
Eq.(\ref{6biz}) is
\begin{equation}
A^{(self)\mu }(r)=\frac{4\pi }{c}\int d^{4}r^{\prime }G(r,r^{\prime
})j^{\left( self\right) \mu }(r^{\prime }), \label{D-1}
\end{equation}
where $G(r,r^{\prime })$ is the retarded Green's function corresponding to
the prescribed charge density. By construction, it follows that $G\left(
r,r^{\prime }\right) $ is symmetric with respect to $r$ and $r^{\prime }$,
and furthermore - since the particle is finite-size - both the 4-current and
the self-potential are everywhere well-defined.
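For reference, we recall that in standard notation the retarded Green's function of the D'Alembertian for a point source, i.e., the solution of $\square G_{ret}=\delta ^{4}\left( r-r^{\prime }\right) $, is
\begin{equation}
G_{ret}(r,r^{\prime })=\frac{1}{2\pi }\theta \left( t-t^{\prime }\right) \delta \left( \left( r-r^{\prime }\right) ^{\alpha }\left( r-r^{\prime }\right) _{\alpha }\right) ,
\end{equation}
in which the theta function selects the retarded branch of the light-cone, while the Dirac delta depends only on the invariant separation and is therefore unchanged under the exchange of $r$ and $r^{\prime }$.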
From these general results, it is immediate to prove the following theorem.
\bigskip
\textbf{THM.3 - Symmetry properties of }$S_{C}^{(self)}(z,\left[ z\right] )$
\emph{Given validity of Eq.(\ref{jmu2}) for the covariant expression of the
current density for a finite-size charged particle and of Eq.(\ref{D-1}) for
the general expression of the corresponding EM self-potential, it follows
that:}
\emph{T3}$_{1}$\emph{) The functional }$S_{C}^{(self)}(z,\left[ z\right] ),$
\emph{defined in Eq.(\ref{sself}) as an integral over the 4-volume element }$d^{4}r,$ \emph{can be written as a line integral of the form}
\begin{equation}
S_{C}^{(self)}(z,\left[ z\right] )=\int_{-\infty }^{+\infty
}dsL_{C}^{(self)}\left( z,\left[ z\right] \right) , \label{scself-thm3}
\end{equation}
\emph{where }$L_{C}^{(self)}$\emph{\ represents the Lagrangian of the coupling with the EM self-field. This is defined as}
\begin{equation}
L_{C}^{(self)}\left( z,\left[ z\right] \right) \equiv \frac{4\pi q}{c^{2}}\int_{1}^{2}d^{4}r\left\{ \left[ u^{\mu }\delta (s-s_{1})-\frac{1}{c}\omega ^{\mu \nu }x_{\nu }\delta (x^{\alpha }u_{\alpha })\right] f(x^{2})\int d^{4}r^{\prime }G(r,r^{\prime })j_{\mu }^{\left( self\right) }(r^{\prime })\right\} . \label{lcself-thm3}
\end{equation}
\emph{T3}$_{2}$\emph{) The functional }$S_{C}^{(self)}(z,\left[ z\right] )$ \emph{contains both local and non-local dependencies in terms of the variational quantities }$z\equiv z\left( s\right) $\emph{\ and} $\left[ z\right] \equiv \left[ z\left( s\right) \right] .$\emph{\ In particular, it is symmetric in these local and non-local variables, in the sense stated in THM.1, namely}
\begin{equation}
S_{C}^{(self)}(z,\left[ z\right] )=S_{C}^{(self)}(\left[ z\right] ,z).
\label{sym-thm3}
\end{equation}
\emph{T3}$_{3}$\emph{) The functional }$S_{C}^{(self)}(z,\left[ z\right] )$
\emph{contains at most only first-order derivatives of the variational
functions }$z\left( s\right) $.
\emph{Proof} - T3$_{1}$) The proof of the first statement follows by noting
that the action integral $S_{C}^{(self)}(z,\left[ z\right] )$ is a 4-scalar
by definition. Hence, making explicit the expressions of $A^{(self)\mu }$
and $j_{\mu }^{\left( self\right) }$ in Eq.(\ref{sself}) according to the
results in Eqs.(\ref{D-1}) and (\ref{jmu2}), by exchanging the order of the
integrations and invoking the symmetry property of the Green function, the
conclusion can be easily reached. In particular, the variational Lagrangian
is found to be of the general form given in Eq.(\ref{lcself-thm3}).
T3$_{2}$) To prove the second statement we first notice that in Eq.(\ref{scself-thm3}) both $z$ and $z^{\prime }$ are integration variables, while
by definition the variational quantities are identified with $z\left(
s\right) $\emph{\ }and $\left[ z\left( s\right) \right] \equiv z^{\prime
}\left( s^{\prime }\right) $. These dependencies are carried respectively by
the charge current densities $j^{\left( self\right) \mu }(r)$ and $j^{\left(
self\right) \mu }(r^{\prime })$. The result is then reached by noting that
the functional carrying the self-coupling terms is symmetric with respect to
the integrated quantities, and in particular with respect to $j^{\left(
self\right) \mu }(r)$ and $j^{\left( self\right) \mu }(r^{\prime })$. Hence,
exchanging $\left( z,j^{\left( self\right) \mu }(r)\right) $ with $\left(
z^{\prime },j^{\left( self\right) \mu }(r^{\prime })\right) $ does not
affect the form of the functional, with the consequence that Eq.(\ref{sym-thm3}) is identically satisfied.
T3$_{3}$) The proof of the statement is an immediate consequence of the
representation for the current density $j^{\left( self\right) \mu }(r)$
given in Eq.(\ref{jmu2}). In fact, the term proportional to the acceleration $\frac{du_{\alpha }}{ds}$ in Eq.(\ref{jmu}), which is associated to the Thomas precession, does not appear in Eq.(\ref{jmu2}), thanks to the
property of the Dirac-delta function indicated above in Eq.(\ref{prop}).
\textbf{Q.E.D.}
\bigskip
An immediate consequence of THM.3 is that, thanks to THMs.1 and 2, the
variational treatment of the dynamics of finite-size charged particles
subject to the EM RR effect admits both Lagrangian and Hamiltonian
representations in standard form. In particular, in this case, it follows
that the following identification must be introduced
\begin{equation}
L_{b}\equiv L_{C}^{(self)},
\end{equation
where $L_{b}$ is the Lagrangian defined above in THM.1.
\bigskip
\section{Hamiltonian theory for the RR problem}
In this section, based on THMs.1-3 and the theory developed in Paper I, we
proceed constructing the Hamiltonian formulation for the RR problem. For
this purpose, it is convenient to recall the explicit form of the EM self
4-potential obtained in Paper I. For the sake of comparison with traditional
approaches based on point particle models, here we also propose an
alternative approach based on the Green function method. Remarkably, as
pointed out in Appendix A, for the spherically-symmetric and non-rotating
extended particle considered here, the self-potential is proved to be
formally analogous to the well-known solution valid for point charges. This
result holds, however, only in the external domain (with respect to $\partial \Omega $), where $A_{\mu }^{(self)}(r)$ is found to admit the
integral representation (see details in Appendix A)
\begin{equation}
A_{\mu }^{(self)}(r)=2q\int_{1}^{2}dr_{\mu }^{\prime }\delta (\widehat{R}^{\alpha }\widehat{R}_{\alpha }). \label{intrepA}
\end{equation}
Here $\widehat{R}^{\alpha }=r^{\alpha }-r^{\alpha }(s^{\prime })$, with $r^{\alpha }$ and $r^{\prime \alpha }\equiv r^{\alpha }(s^{\prime })$
denoting respectively the generic 4-position and the 4-position of the
center of the charge distribution at proper time $s^{\prime }$. As a
fundamental consequence of the finite extension of the particle and the
restrictions on the domain of validity of Eq.(\ref{intrepA}), the resulting
variational functional and Faraday tensor for the self-field turn out to be
completely different from the point-particle treatment. In particular, the
action integral becomes now a non-local functional with respect to the
4-position $r$. As pointed out in Paper I, this can be written as a line
integral in terms of a variational Lagrangian $L_{1}(r,\left[ r\right] )$ as
follows
\begin{equation}
S_{1}(r,\left[ r\right] )=\int_{-\infty }^{+\infty }dsL_{1}(r,\left[ r\right]
).
\end{equation}
Here $L_{1}(r,\left[ r\right] )$ is defined as
\begin{equation}
L_{1}(r,\left[ r\right] )=L_{M}(r)+L_{C}^{(ext)}(r)+L_{C}^{(self)}(r,\left[ r\right] ), \label{EXTREMAL LAGRANGIAN}
\end{equation}
where
\begin{eqnarray}
L_{M}(r,u) &=&\frac{1}{2}m_{o}c\frac{dr_{\mu }}{ds}\frac{dr^{\mu }}{ds},
\label{LAGRANGIAN -constarint-2} \\
L_{C}^{(ext)}(r) &=&\frac{q}{c}\frac{dr^{\mu }}{ds}\overline{A}_{\mu }^{(ext)}(r), \label{LAGRANGIAN -EXTERNAL EM}
\end{eqnarray}
are the local contributions respectively from the inertial and the external
EM field coupling terms, with $\overline{A}_{\mu }^{(ext)}$ denoting the
surface-averaged external EM potential (see Paper I for its definition). On
the other hand, $L_{C}^{(self)}$ represents the non-local contribution
arising from the EM self-field coupling, which is provided by
\begin{equation}
L_{C}^{(self)}(r,\left[ r\right] )=\frac{2q^{2}}{c}\frac{dr^{\mu }}{ds}\int_{1}^{2}dr_{\mu }^{\prime }\delta (\widetilde{R}^{\alpha }\widetilde{R}_{\alpha }-\sigma ^{2}), \label{LAGRANGIAN-SELF-EM}
\end{equation}
where the 4-scalar $\sigma ^{2}\equiv \xi ^{\mu }\xi _{\mu }$ is the squared radius of the surface distribution with respect to the center $r^{\mu }\left( s\right) $ and $\widetilde{R}^{\mu }$ is defined as
\begin{equation}
\widetilde{R}^{\alpha }\equiv r^{\alpha }\left( s\right) -r^{\alpha
}(s^{\prime }).
\end{equation}
Notice that $\widetilde{R}^{\alpha }$ represents the displacement bi-vector
between the actual position $r^{\alpha }\left( s\right) $ of the charge
center at proper time $s$ and the retarded position $r^{\alpha }(s^{\prime
}) $ of the same point at the retarded proper time $s^{\prime }$. It is
immediate to verify that the representation of $S_{C}^{(self)}$ in terms of
L_{C}^{(self)}$ given in Eq.(\ref{LAGRANGIAN-SELF-EM}) satisfies the
hypothesis of THM.1, and therefore the solution admits a Lagrangian
representation in standard form. As already shown in Paper I and according
to THM.1, this is obtained by setting
\begin{equation}
L_{eff}\equiv L_{M}(r)+L_{C}^{(ext)}(r)+2L_{C}^{(self)}(r,\left[ r\right] ),
\label{extremal Lagrangian-0}
\end{equation}
with $L_{M}(r)$, $L_{C}^{(ext)}(r)$ and $L_{C}^{(self)}$ respectively given
by Eqs.(\ref{LAGRANGIAN -constarint-2})-(\ref{LAGRANGIAN-SELF-EM}). Then,
the corresponding E-L equation is provided by the following covariant
4-vector, second-order delay-type ODE:
\begin{equation}
m_{o}c\frac{du_{\mu }(s)}{ds}=\frac{q}{c}\overline{F}_{\mu \nu
}^{(ext)}(r(s))\frac{dr^{\nu }(s)}{ds}+\frac{q}{c}\overline{F}_{\mu
k}^{\left( self\right) }\left( r\left( s\right) ,r\left( s^{\prime }\right)
\right) \frac{dr^{k}(s)}{ds}, \label{RR}
\end{equation}
where
\begin{equation}
u^{\mu }\left( s\right) \equiv \frac{dr^{\mu }(s)}{ds}. \label{4vel}
\end{equation}
Here the notation is as follows. Denoting by $F_{\mu \nu }\equiv F_{\mu \nu
}^{(ext)}+F_{\mu \nu }^{(self)}$ the total Faraday tensor, $F_{\mu \nu
}^{(ext)}$ and $F_{\mu \nu }^{(self)}$ are respectively the
\textquotedblleft external\textquotedblright\ and \textquotedblleft
self\textquotedblright\ Faraday tensors generated by $A_{\nu }^{(ext)}$ and $A_{\nu }^{(self)}$, which carry the contributions due to the external
sources with respect to the charged particle and the particle EM
self-interaction. In particular, the 4-tensor $\overline{F}_{\mu \nu
}^{(ext)}(r(s))$ denotes the surface-average of the Faraday tensor
associated to the external EM field, to be identified with
\begin{equation}
\overline{F}_{\mu \nu }^{(ext)}\equiv \partial _{\mu }\overline{A}_{\nu
}^{(ext)}-\partial _{\nu }\overline{A}_{\mu }^{(ext)},
\label{EXTERNAL EM FIELD}
\end{equation}
with $\overline{A}_{\nu }^{(ext)}(r(s))$ only generated by external sources
with respect to the single-particle whose dynamics is described by Eq.(\ref{RR}). Similarly, $\overline{F}_{\mu k}^{\left( self\right) }$\ is the
surface-average of the Faraday tensor contribution carried by the EM self
4-potential. In the parameter-free representation this is given by
\begin{equation}
\overline{F}_{\mu k}^{\left( self\right) }(r,\left[ r\right] )=-4q\int_{1}^{2}\left[ dr_{\mu }^{\prime }\frac{\partial }{\partial r^{k}}\delta \left( \widetilde{R}^{\alpha }\widetilde{R}_{\alpha }-\sigma ^{2}\right) -dr_{k}^{\prime }\frac{\partial }{\partial r^{\mu }}\delta \left( \widetilde{R}^{\alpha }\widetilde{R}_{\alpha }-\sigma ^{2}\right) \right] . \label{COVARIANT FORM}
\end{equation}
As pointed out in Paper I, $\overline{F}_{\mu k}^{\left( self\right) }$ can
also be parametrized in terms of the particle proper time $s,$ by letting $r\equiv r\left( s\right) $ and $\left[ r\right] \equiv r\left( s^{\prime }\right) $ in the previous equation, which also implies $dr_{\mu }^{\prime }\equiv ds^{\prime }\frac{dr_{\mu }^{\prime }}{ds^{\prime }}$. This means
that the non-locality in Eq.(\ref{COVARIANT FORM}) can be interpreted as
non-locality in the particle proper time.
The remarkable feature of Eq.(\ref{COVARIANT FORM}) is that the RR
self-force (see the second term in the rhs of Eq.(\ref{RR})) contains
non-local effects only through the retarded particle 4-position and not
through the 4-velocity. This feature is fundamental for the subsequent fluid
treatment, since it permits the evaluation in the standard way of the
velocity moments, retaining the exact form of the RR self-interaction.
The system of Eqs.(\ref{RR}) and (\ref{4vel}) defines a delay-type ODE
problem of the form
\begin{equation}
\left\{
\begin{array}{c}
\frac{d\mathbf{y}}{ds}=\mathbf{X}_{H}\left( \mathbf{y},\left[ r\right]
\right) , \\
\mathbf{y}\left( s_{0}\right) =\mathbf{y}_{0}, \\
\mathbf{y}\left( s_{0}^{\prime }\right) =\mathbf{y}_{s_{0}^{\prime }},\ \ \
\forall s_{0}^{\prime }\in I_{s_{0},s_{0}-s_{ret}}
\end{array}
\right. \label{initial pro}
\end{equation}
with $s_{0}$ and $s_{ret}$ denoting respectively the initial particle proper
time and the causal retarded proper time (see Paper I), and $\mathbf{X}_{H}$
the Hamiltonian vector field
\begin{equation}
\mathbf{X}_{H}\left( \mathbf{y},\left[ r\right] \right) \equiv \left\{ \frac{\partial H_{eff}\left( r,P,\left[ r\right] \right) }{\partial P_{\mu }},-\frac{\partial H_{eff}\left( r,P,\left[ r\right] \right) }{\partial r^{\mu }}\right\} . \label{hamvect}
\end{equation}
Denoting by $\mathbf{y}\left( s\right) =\mathbf{\chi }\left( \mathbf{y}_{0},\left\{ \mathbf{y}_{s_{0}^{\prime }},\forall s_{0}^{\prime }\in I_{s_{0},s_{0}-s_{ret}}\right\} ,s-s_{0}\right) $ the formal solution of the problem (\ref{initial pro}), in the remainder we shall assume that the map
\begin{equation}
\mathbf{y}_{0}\rightarrow \mathbf{y}\left( s\right) \label{map}
\end{equation}
is a diffeomorphism of class $C^{k}$, with $k\geq 1$.
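The structure of the delay-type initial-value problem (\ref{initial pro}), in which the state must be prescribed on the whole history interval $I_{s_{0},s_{0}-s_{ret}}$ and the vector field is evaluated at both the present and the retarded state, can be illustrated numerically with a minimal "method of steps" integrator. The sketch below is purely illustrative and not part of the theory: the scalar vector field and the constant (inertial) history are hypothetical stand-ins for $\mathbf{X}_{H}$ and the prescribed initial data, not the RR force.

```python
import numpy as np

# Minimal "method of steps" integrator for a delay-type ODE
#   dy/ds = X(y(s), y(s - s_ret)),
# mirroring the structure of the initial-value problem above: the state
# must be prescribed on the whole history interval [s0 - s_ret, s0].
# X and the history function below are hypothetical stand-ins.

def solve_delay_ode(X, history, s0, s_end, s_ret, ds):
    """Explicit-Euler integration of dy/ds = X(y(s), y(s - s_ret)).

    history(s) gives the prescribed state for s in [s0 - s_ret, s0].
    Returns the grid s_grid and the solution array y_grid."""
    n_hist = int(round(s_ret / ds))          # steps spanning one delay
    n_fwd = int(round((s_end - s0) / ds))    # forward steps
    s_grid = s0 + ds * np.arange(-n_hist, n_fwd + 1)
    y_grid = np.empty((len(s_grid), np.size(history(s0))))
    # Fill the prescribed history segment [s0 - s_ret, s0].
    for i in range(n_hist + 1):
        y_grid[i] = history(s_grid[i])
    # March forward: the retarded state lies exactly n_hist steps back.
    for i in range(n_hist, len(s_grid) - 1):
        y_ret = y_grid[i - n_hist]
        y_grid[i + 1] = y_grid[i] + ds * X(y_grid[i], y_ret)
    return s_grid, y_grid

# Toy delay equation dy/ds = -y(s - 1) with inertial (constant) history:
# the exact solution is y = 1 - s on [0, 1] and y = s**2/2 - 2*s + 3/2 on [1, 2].
s, y = solve_delay_ode(lambda y, yr: -yr, lambda s: np.array([1.0]),
                       s0=0.0, s_end=2.0, s_ret=1.0, ds=0.001)
```

With an inertial history the solution remains continuous across $s_{0}$, while discontinuities appear only in successively higher derivatives at multiples of the delay, the typical smoothing behavior of delay equations.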
Based on these results, the Hamiltonian formulation is provided by the
following theorem.
\bigskip
\textbf{THM.4 - Non-local variational and effective Hamiltonian functions
for the non-rotating particle}
\emph{Given validity of THMs. 1-3, it follows that:}
\emph{T4}$_{1}$\emph{) The RR equation (\ref{RR}) for a non-rotating and
spherically-symmetric charged particle admits the non-local Hamiltonian
system }$\left\{ \mathbf{y}\equiv (r^{\mu }\mathbf{,}p_{\mu })\mathbf{,}H_{1}\right\} $\emph{. Here }$p_{\mu }$ \emph{and }$H_{1}\equiv H_{1}\left(
r,p,\left[ r\right] \right) $\emph{\ are respectively the canonical momentum
(\ref{canmom}) defined with respect to the variational Lagrangian }$L_{1}$ \emph{given in Eq.(\ref{EXTREMAL LAGRANGIAN}), and the
corresponding non-local variational Hamiltonian (\ref{canh}) defined as the
Legendre transformation of }$L_{1}$\emph{. In particular, the variational
non-local Hamiltonian (\ref{canh}) is identified with}
\begin{equation}
H_{1}\left( r,p,\left[ r\right] \right) \equiv \frac{1}{2m_{o}c}\left(
p_{\mu }-\frac{q}{c}A_{\mu }\right) \left( p^{\mu }-\frac{q}{c}A^{\mu
}\right) , \label{NON-LOCAL HAM}
\end{equation}
\emph{where }$A_{\mu }$\emph{\ is the total EM 4-potential}
\begin{equation}
A_{\mu }\left( r,\left[ r\right] \right) \equiv \overline{A}_{\mu }^{(ext)}\left( r\right) +\overline{A}_{\mu }^{(self)}\left( r,\left[ r\right] \right) , \label{atot1}
\end{equation}
\emph{and from Eq.(\ref{LAGRANGIAN-SELF-EM}) }$\overline{A}_{\mu }^{(self)}$\emph{\ is the functional}
\begin{equation}
\overline{A}_{\mu }^{(self)}\left( r,\left[ r\right] \right) \equiv 2q\int_{1}^{2}dr_{\mu }^{\prime }\delta (\widetilde{R}^{\alpha }\widetilde{R}_{\alpha }-\sigma ^{2}). \label{atot2}
\end{equation}
\emph{T4}$_{2}$\emph{) There exist} $P_{\mu }$ \emph{and} $H_{eff},$ \emph{defined respectively by Eqs.(\ref{pleff}) and (\ref{heff}), such that}
\begin{equation}
H_{eff}\left( r,P,\left[ r\right] \right) \equiv \frac{1}{2m_{o}c}\left( P_{\mu }-\frac{q}{c}A_{\left( eff\right) \mu }\right) \left( P^{\mu }-\frac{q}{c}A_{\left( eff\right) }^{\mu }\right) , \label{EFFECGIVE HAMILTONIAN}
\end{equation}
\emph{with }$A_{\left( eff\right) \mu }$\emph{\ the non-local effective EM 4-potential}
\begin{equation}
A_{\left( eff\right) \mu }\left( r,\left[ r\right] \right) \equiv \overline{A}_{\mu }^{(ext)}\left( r\right) +2\overline{A}_{\mu }^{(self)}\left( r,\left[ r\right] \right)
\end{equation}
\emph{and }$\overline{A}_{\mu }^{(self)}$ \emph{defined in Eq.(\ref{atot2}).}
\emph{T4}$_{3}$\emph{) The effective and variational Hamiltonian functions }$H_{eff}$\emph{\ and }$H_{1}$\emph{\ coincide when expressed in terms of the 4-velocity }$\frac{dr^{\mu }(s)}{ds}$\emph{.}
\emph{Proof}\ - The proof of T4$_{1}$ and T4$_{2}$ follows immediately by
applying THMs.1 and 2 with the variational Lagrangian $L_{1}$ given by Eq.(\ref{EXTREMAL LAGRANGIAN}). In particular, this yields
\begin{equation}
p_{\mu }=m_{o}c\frac{dr_{\mu }(s)}{ds}+\frac{q}{c}\left[ \overline{A}_{\mu
}^{(ext)}+\overline{A}_{\mu }^{(self)}\right] \label{p}
\end{equation}
and
\begin{equation}
P_{\mu }=m_{o}c\frac{dr_{\mu }(s)}{ds}+\frac{q}{c}\left[ \overline{A}_{\mu
}^{(ext)}+2\overline{A}_{\mu }^{(self)}\right] . \label{pp}
\end{equation}
The corresponding Legendre transformations then provide respectively Eq.(\ref{NON-LOCAL HAM}) and Eq.(\ref{EFFECGIVE HAMILTONIAN}). Finally, by direct
substitution of Eq.(\ref{p}) into Eq.(\ref{NON-LOCAL HAM}) and Eq.(\ref{pp})
into Eq.(\ref{EFFECGIVE HAMILTONIAN}), one obtains that
\begin{equation}
H_{eff}=H_{1}=\frac{m_{o}c}{2}\frac{dr_{\mu }(s)}{ds}\frac{dr^{\mu }(s)}{ds},
\end{equation}
which proves also the last statement.
\textbf{Q.E.D.}
\bigskip
We remark that the Hamilton equations in standard form expressed in terms of $H_{eff}$ and $P_{\mu }$ are differential equations of delay-type, as a
consequence of the non-local dependencies appearing in $H_{eff}$ which are
characteristic of the RR phenomenon. In this case, for the well-posedness of
the solution the initial conditions in the interval $I=\left[
s_{0}-s_{ret},s_{0}\right] $ must be defined, with $s_{0}$ the initial
proper time and $s_{ret}$ a suitable retarded time. However, if the
assumption of inertial motion in the proper time interval $I_{0}=\left[
-\infty ,s_{0}\right] $ holds, then the mapping
\begin{equation}
T_{s_{0},s}:\mathbf{y}_{0}\equiv \mathbf{y}\left( s_{0}\right) \rightarrow
\mathbf{y}\left( s\right) \equiv T_{s_{0},s}\mathbf{y}_{0}, \label{dinsis}
\end{equation}
with $\mathbf{y}=\left( r^{\mu },P_{\mu }\right) $, defines a classical
dynamical system (see Paper I), and this dynamical system is Hamiltonian.
\bigskip
\section{A Hamiltonian asymptotic approximation for the RR equation}
In this section we intend to carry out in detail a comparison of the present
approach for extended particles with the customary point-particle treatments
leading to the LAD and LL equations. For this purpose, asymptotic
approximations of the exact RR self-force (\ref{COVARIANT FORM}) are
investigated.
The issue has been partially discussed in Paper I. As pointed out therein,
an asymptotic approximation of the exact RR equation (\ref{RR}) can be
obtained in validity of the \emph{short delay-time ordering}, namely
requiring
\begin{equation}
0<\epsilon \equiv \left\vert \frac{s_{ret}}{s}\right\vert \ll 1,
\label{ordering -1}
\end{equation}
where $s_{ret}=s-s^{\prime }$, with $s$ and $s^{\prime }$ denoting
respectively the present and retarded particle proper times. This permits
two different possible strategies, respectively based on Taylor expansions
performed with respect to $s$ (\textit{present-time expansion}) or
s^{\prime }$ (\textit{retarded-time expansion}). In particular, adopting the
present-time expansion for the RR self-force (\ref{COVARIANT FORM}), the
delay-type ODE (\ref{RR}) can be reduced, in principle, to an infinite-order
differential equation. Instead, by truncating the Taylor expansion to first-order in $\epsilon $, ignoring mass-renormalization terms and taking the point-particle limit $\sigma \rightarrow 0$, the customary expression for the LAD equation is recovered (see THM.3 of Paper I).
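For comparison, it is worth recalling the form of the LAD self-force in the notation adopted here (4-velocity normalized as $u^{\alpha }u_{\alpha }=1$, with $s$ carrying the dimension of length), namely
\begin{equation}
g_{\mu }^{(LAD)}\left( r\left( s\right) \right) =\frac{2}{3}\frac{q^{2}}{c}\left[ \frac{d^{2}}{ds^{2}}u_{\mu }\left( s\right) -u_{\mu }(s)u^{k}(s)\frac{d^{2}}{ds^{2}}u_{k}\left( s\right) \right] ,
\end{equation}
where the superscript is introduced here only for the purpose of the comparison. In contrast with the retarded-time approximation derived below, all quantities in $g_{\mu }^{(LAD)}$ are local, being evaluated at the present proper time $s$.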
As remarked in the Introduction, the resulting asymptotic approximation
(given by the LAD equation) is non-variational and therefore
non-Hamiltonian. In addition, contrary to the exact RR equation obtained
here, the LAD equation, as well as the related LL approximation, both fail
in the transient time intervals occurring when the external EM field acting
on the particle is turned on and off. To elucidate this point, let us
consider the dynamics of a charged particle which is in inertial motion in
the past for all $s<s_{0}$ and from $s=s_{0}$ is subject to the action of an
external EM field. Then, by construction, it is immediate to show that in
the transient time interval $I_{0}=\left[ s_{0},s_{0}+s_{ret}\right] $ the
exact RR self-force (\ref{RR}) is manifestly identically zero. In fact, in
the case of inertial motion in the past (namely $u_{\mu }(s^{\prime
})=const. $) the RR self-force vanishes in such a time interval (see THM.1
in Paper I). In contrast, both the LAD and LL equations predict incorrectly
a non-vanishing RR self-force. The same kind of inconsistency (for the LAD
and LL equations) arises when the analogous transient time interval
corresponding to the turning-off of the external EM field is considered \cite{Dorigo2008a}.
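This behavior can be made explicit. For $s\in I_{0}=\left[ s_{0},s_{0}+s_{ret}\right] $ the retarded proper time satisfies $s^{\prime }\leq s_{0}$, where by assumption the motion is inertial, so that
\begin{equation}
u_{\mu }(s^{\prime })=const.\ \Rightarrow \ \frac{d}{ds^{\prime }}u_{\mu }(s^{\prime })=\frac{d^{2}}{ds^{\prime 2}}u_{\mu }(s^{\prime })=0,
\end{equation}
and every contribution to the self-force carried by the retarded 4-velocity and its derivatives vanishes identically. The LAD and LL forces are instead built from derivatives evaluated at the present time $s$, which are generally nonzero as soon as the external field accelerates the particle.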
Therefore, the issue arises whether an alternative asymptotic approximation
can be determined (for the exact RR equation) which simultaneously:
1) overcomes this deficiency, by taking into account consistently
relativistic finite delay-time effects characteristic of the RR phenomenon;
2) is variational and admits a standard Hamiltonian formulation.
In this section we propose a solution to this problem, by performing a
retarded-time expansion, which provides an alternative to the LAD and LL
equations.
\bigskip
\subsection{The Hamiltonian approximation}
For definiteness, let us assume that the external force acting on the
particle is non-vanishing only in a finite proper-time interval $I\equiv
\left[ s_{0},s_{1}\right] $. Then, in validity of the ordering (\ref{ordering -1}), we require that the external EM force is slowly varying in
the sense that, denoting $r^{\prime }\equiv r^{\mu }\left( s^{\prime
}\right) $ and $r\equiv r^{\mu }\left( s\right) $
\begin{eqnarray}
\overline{F}_{\mu \nu }^{(ext)}\left( r^{\prime }\right) -\overline{F}_{\mu \nu }^{(ext)}\left( r\right) &\sim &O\left( \epsilon \right) ,
\label{smooth1} \\
\left( \overline{F}_{\mu \nu }^{(ext)}\left( r^{\prime }\right) -\overline{F}_{\mu \nu }^{(ext)}\left( r\right) \right) _{,h} &\sim &O\left( \epsilon \right) , \\
\left( \overline{F}_{\mu \nu }^{(ext)}\left( r^{\prime }\right) -\overline{F}_{\mu \nu }^{(ext)}\left( r\right) \right) _{,hk} &\sim &O\left( \epsilon \right) . \label{smooth3}
\end{eqnarray}
Then, the retarded-time Hamiltonian approximation of the RR equation is
obtained by performing a Taylor expansion in a neighborhood of $s^{\prime }$. The result is summarized by the following theorem.
\bigskip
\textbf{THM.5 - First-order, short delay-time Hamiltonian approximation
(retarded-time expansion).}
\emph{Given validity of the asymptotic ordering (\ref{ordering -1}) and the
smoothness assumptions (\ref{smooth1})-(\ref{smooth3}) for the external EM
field, neglecting corrections of order }$\epsilon ^{n},$ \emph{with} $n\geq
1 $ \emph{(first-order approximation), the following results hold:}
\emph{T5}$_{1}$\emph{) The vector field}
\begin{equation}
G_{\mu }\equiv \frac{q}{c}\overline{F}_{\mu k}^{\left( self\right) }\left(
r\left( s\right) ,r\left( s^{\prime }\right) \right) \frac{dr^{k}(s)}{ds}
\end{equation}
\emph{appearing in Eq.(\ref{RR}) can be approximated in a neighborhood of }$s^{\prime }$ \emph{as}
\begin{equation}
g_{\mu }\left( r\left( s^{\prime }\right) \right) =\left\{ -m_{oEM}c\frac{d}{ds^{\prime }}u_{\mu }\left( s^{\prime }\right) +g_{\mu }^{\prime }\left( r\left( s^{\prime }\right) \right) \right\} , \label{asymp}
\end{equation}
\emph{\ to be referred to as retarded-time Hamiltonian approximation, in which the first term on the rhs identifies a retarded mass-correction term,} $m_{oEM}\equiv \frac{q^{2}}{c^{2}\sigma }$ \emph{denoting the leading-order EM mass. Finally, }$g_{\mu }^{\prime }$\emph{\ is the 4-vector}
\begin{equation}
g_{\mu }^{\prime }\left( r\left( s^{\prime }\right) \right) =-\frac{1}{3}\frac{q^{2}}{c}\left[ \frac{d^{2}}{ds^{\prime 2}}u_{\mu }\left( s^{\prime }\right) -u_{\mu }(s^{\prime })u^{k}(s^{\prime })\frac{d^{2}}{ds^{\prime 2}}u_{k}\left( s^{\prime }\right) \right] .
\end{equation}
\emph{T5}$_{2}$\emph{) The corresponding RR equation, obtained replacing }$G_{\mu }$ \emph{with the asymptotic approximation }$g_{\mu }$ \emph{(\ref{asymp}), is variational, Lagrangian and admits a standard Lagrangian form.
Let us denote with }$r_{0}^{\prime }\equiv r_{0}\left( s^{\prime }\right) $
\emph{the extremal particle world-line at the retarded proper time }$s^{\prime }$\emph{. Then, in this approximation the corresponding asymptotic variational Lagrangian and effective Lagrangian functions coincide. Both are defined in terms of the asymptotic approximation }$L_{C,asym}^{(self)}(r,r_{0}^{\prime }),$\emph{\ replacing }$L_{C}^{(self)}$\emph{. To leading-order in }$\epsilon $\emph{, this is found to be}
\begin{equation}
L_{C,asym}^{(self)}(r,r_{0}^{\prime })=g_{\mu }\left( r_{0}^{\prime }\right)
r^{\mu }.
\end{equation}
\emph{T5}$_{3}$\emph{) The asymptotic approximation given by Eq.(\ref{asymp}) is also Hamiltonian. The asymptotic variational and effective Hamiltonian functions coincide and are given by}
\begin{equation}
H_{1,asym}=p_{\mu }\frac{dr^{\mu }}{ds}-L_{1,asym}
\end{equation}
\emph{with}
\begin{equation}
L_{1,asym}(r,r_{0}^{\prime
})=L_{M}(r)+L_{C}^{(ext)}(r)+L_{C,asym}^{(self)}(r,r_{0}^{\prime }),
\end{equation}
\emph{and now}
\begin{equation}
p_{\mu }=\frac{\partial L_{1,asym}}{\partial \frac{dr_{\mu }(s)}{ds}}.
\end{equation}
\emph{Proof} - T5$_{1}$) The proof can be carried out starting from Eq.(\ref{RR}) and performing explicitly the Taylor expansion in a neighborhood of $s^{\prime }\equiv s-s_{ret}$. For a generic analytic function $f\left( s\right) $, this yields the power series of the form
\begin{equation}
f(s)=\sum\limits_{k=0}^{\infty }\frac{(s-s^{\prime })^{k}}{k!}\frac{d^{k}f(s^{\prime })}{ds^{\prime k}}. \label{Taylor-series2}
\end{equation}
In particular, for the 4-vectors $\frac{dr_{\mu }(s)}{ds}$ and $\widetilde{R}^{k}$ one obtains respectively the asymptotic approximations
\begin{equation}
\frac{dr_{\mu }(s)}{ds}\cong \frac{dr_{\mu }\left( s^{\prime }\right) }{ds^{\prime }}+(s-s^{\prime })\frac{d^{2}r_{\mu }\left( s^{\prime }\right) }{ds^{\prime 2}}+\frac{(s-s^{\prime })^{2}}{2}\frac{d^{3}r_{\mu }\left( s^{\prime }\right) }{ds^{\prime 3}}+O\left( \epsilon ^{3}\right)
\end{equation}
and
\begin{equation}
\widetilde{R}^{k}\cong (s-s^{\prime })\frac{dr^{k}\left( s^{\prime }\right) }{ds^{\prime }}+\frac{(s-s^{\prime })^{2}}{2}\frac{d}{ds^{\prime }}u^{k}\left( s^{\prime }\right) +\frac{(s-s^{\prime })^{3}}{6}\frac{d^{2}}{ds^{\prime 2}}u^{k}\left( s^{\prime }\right) +O\left( \epsilon ^{4}\right) ,
\end{equation}
while for the time delay $s-s^{\prime }\equiv s_{ret}$ the leading-order expression
\begin{equation}
s-s^{\prime }\cong \sigma +O\left( \epsilon ^{2}\right)
\end{equation}
holds. By substituting these expansions in Eq.(\ref{COVARIANT FORM}), the
asymptotic solution given by Eq.(\ref{asymp}) can be recovered.
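In particular, the leading-order estimate for the time delay follows from the support of the Dirac delta in Eq.(\ref{LAGRANGIAN-SELF-EM}): using the expansion of $\widetilde{R}^{k}$ and the normalization $u^{\alpha }u_{\alpha }=1$,
\begin{equation}
\widetilde{R}^{\alpha }\widetilde{R}_{\alpha }\cong (s-s^{\prime })^{2}u^{\alpha }u_{\alpha }=(s-s^{\prime })^{2},
\end{equation}
so that the condition $\widetilde{R}^{\alpha }\widetilde{R}_{\alpha }=\sigma ^{2}$ yields $s-s^{\prime }\cong \sigma $ to leading order.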
T5$_{2}$)-T5$_{3}$) The proof follows by first noting that $L_{C,asym}^{(self)}$ contributes to the Euler-Lagrange equations only through its local dependence on $r$. Then, in this approximation the canonical momentum becomes
\begin{equation}
p_{\mu }=m_{0}c\frac{dr_{\mu }(s)}{ds}+\frac{q}{c}\overline{A}_{\mu
}^{(ext)}(r)=P_{\mu },
\end{equation}
while the asymptotic Hamiltonian reduces to
\begin{equation}
H_{1,asym}\left( r,p,r_{0}^{\prime }\right) =\frac{1}{2m_{0}c}\left( p_{\mu
}-\frac{q}{c}\overline{A}_{\mu }^{(ext)}(r)\right) \left( p^{\mu }-\frac{q}{c
}\overline{A}^{(ext)\mu }(r)\right) +g_{\mu }\left( r_{0}^{\prime
}\right) r^{\mu }.
\end{equation}
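As a consistency check (added here as a sketch, using only the definitions
above), the first Hamiltonian equation generated by $H_{1,asym}$ reproduces
the canonical momentum relation, namely
\begin{equation}
\frac{dr^{\mu }}{ds}=\frac{\partial H_{1,asym}}{\partial p_{\mu }}=\frac{1}{
m_{0}c}\left( p^{\mu }-\frac{q}{c}\overline{A}^{(ext)\mu }(r)\right) ,
\end{equation}
which, once inverted, coincides with the expression for $p_{\mu }$ given
above.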
The corresponding Lagrangian and Hamiltonian equations manifestly coincide
with Eq.(\ref{RR}) once the approximation (\ref{asymp}) is invoked for the
vector field $G_{\mu }$.
\textbf{Q.E.D.}
\bigskip
\subsection{Discussion and comparisons with point-particle treatments}
The asymptotic Hamiltonian approximation, here pointed out for the first
time (see THM.5), preserves the basic physical properties of the exact RR
force (\ref{RR}). In fact, in both cases the RR force:
1) is non-local, depending on the past history of the finite-size charged
particle;
2) admits a variational formulation;
3) is both Lagrangian and Hamiltonian;
4) satisfies the Einstein Causality Principle and, when applicable, the
Newton Principle of Determinacy (see also Paper I);
5) describes correctly the transient time intervals in which the external
force is turned on and off (sudden force).
\bigskip
For these reasons, physical comparisons based on the retarded-time
Hamiltonian asymptotic approximation are meaningful. In particular, here we
remark that the present approach departs in several ways from
point-particle treatments based on the LAD and LL equations. More precisely:
1) The same type of asymptotic ordering is imposed, which is based on the
short delay-time ordering (\ref{ordering -1}). However, in contrast with the
LAD and LL equations, the expansion adopted in THM.5 and leading to the
retarded-time Hamiltonian approximation can \emph{only} be performed based
on the knowledge of the exact RR force for finite-size particles.
2) Unlike the LAD and LL equations, the asymptotic Hamiltonian approximation
carries the information of the past dynamical history of the charged
particle through the retarded time $s^{\prime }$. Therefore, the dynamical
equation written adopting the approximation (\ref{asymp}) is still a
delay-type second-order ODE. The construction of its general solution
becomes trivial in this case, since the self-force is considered as an
explicit source term evaluated at proper time $s^{\prime }$.
3) The asymptotic approximation provided by Eq.(\ref{asymp}) cannot be
regarded as a point-particle limit. In fact, the retarded mass-correction
term would diverge in this limit.
4) The exact RR equation satisfies identically by construction the kinematic
constraint $u_{\mu }u^{\mu }=1$. The same constraint is satisfied to
leading order in $\epsilon $ by both the retarded and present-time
asymptotic expansions (and hence also by the LAD equation).
5) The variational principle introduced in THM.5 is subject to the
constraint that the past history is considered prescribed in terms of the
extremal world-line. This requirement is consistent with the initial
conditions for the RR equation, which is a delay-type ODE depending only on
the past history of the particle. This requires that the world-line
trajectory is prescribed in the past, namely in the time interval $I=\left[
-\infty ,s_{0}\right] $. Since, however, the initial proper time $s_{0}$ is
arbitrary, it follows that $r\left( s\right) $ can be considered prescribed
also in the time interval $I^{\prime }=\left[ -\infty ,s^{\prime }\right] $.
In particular, if for all $s<s_{0}$ the motion is assumed to be inertial,
the initial-value problem associated to the RR equation written in terms of
the retarded asymptotic self-force (\ref{asymp}) is well-posed, in the sense
of the standard Newton Principle of Determinism, as discussed in Paper I
(see in particular THM.4 presented there and dealing with the existence and
uniqueness of solutions for the exact RR equation).
6) One might think that the same type of constrained variational principle,
of the kind adopted in THM.5, could be adopted also for the exact RR
equation. However, this belief is wrong. In fact, since the variational
functional (\ref{EXTREMAL LAGRANGIAN}) is symmetric with respect to the
local and non-local world-line trajectories, there is no distinction between
past and future. Since future cannot be prescribed, such a constrained
variational principle for the exact equation is forbidden. On the contrary,
the extremal RR equation (\ref{RR}) is obtained by imposing also the
Einstein Causality Principle, and therefore it depends only on the past
history.
7) Despite some formal similarities between the retarded-time Hamiltonian
approximation versus the corresponding LAD and LL equations, the latter
cannot be recovered even in the framework of some kind of constrained
variational principle. In fact, this would require considering as prescribed,
for example, second- or higher-order proper-time derivatives of the particle
position vector (namely the acceleration and its derivatives). This
viewpoint is manifestly unacceptable, because it would amount to constraining
the present state of the particle at proper time $s$.
8) The previous argument justifies, in turn, the introduction of the short
delay-time asymptotic approximation given in THM.5. This is performed
directly on the RR force, namely the 4-vector $G_{\mu }$ entering the RR
equation itself. In this way the variational character of the RR problem is
preserved. It follows that the corresponding variational functional as well
as the Lagrangian and Hamiltonian functions for the asymptotic RR equation
are constructed only \textquotedblleft a posteriori\textquotedblright .
9) Another advantage of the new representation (\ref{asymp}) with respect to
the customary LAD and LL equations is that it permits the approximate
treatment of the solution also in the transient\ time intervals after the
turning-on or the turning-off of the external EM field. In particular, in
contrast to the LAD and LL equations, it predicts a vanishing RR self-force
in the turning-on transient phase $I_{0}=\left[ s_{0},s_{0}+s_{ret}\right] $.
10) Finally, it should be remarked that the retarded asymptotic self-force
(\ref{asymp}) \emph{cannot} be trivially obtained from the corresponding
local asymptotic representation performed at proper time $s$ and leading to
the LAD equation by simply exchanging $s$ with $s^{\prime }$ (or by a
further Taylor expansion). Indeed, the relationship between the two can only
be established based on the exact form of the self-force.
\bigskip
\section{Collisionless relativistic kinetic theory for the EM RR effect -
Canonical formalism}
In this section we proceed with the construction of the relativistic
classical statistical mechanics (CSM) for a collisionless plasma with the
inclusion of the EM RR effect. In particular we shall prove that the
mathematical formalism introduced in the previous sections to deal with
symmetric non-local interactions allows one to obtain a convenient
formulation for the kinetic theory describing such a system and for the
corresponding fluid representation. The derivation is based on the property
of a symmetric non-local system represented by a finite-size charged
particle of being Hamiltonian with respect to $P_{\mu }$ and $H_{eff}$.
In view of the peculiar features of the non-local RR phenomenon and the
related delay-type differential Hamiltonian equations, it is instructive
here to adopt an axiomatic formulation of the CSM for relativistic systems
with the inclusion of such an effect. We shall assume that the latter are
represented by a system of classical finite-size charged particles subject
only to the action of a mean-field external EM force and a non-local
self-interaction. We intend to show that, using the Hamiltonian
representation in standard form given above, the explicit form of the
relativistic Vlasov kinetic equation can be obtained for the kinetic
distribution function describing the statistical dynamics of such a system.
Therefore, the problem is reduced to a Vlasov-Maxwell description for a
continuous distribution of relativistic charged particles.
For definiteness, let us consider the non-local Hamiltonian dynamical system
in standard form $\left\{ \mathbf{y},H_{eff}\right\} $ given above. This is
characterized by the superabundant state vector $\mathbf{y}=\left( r^{\mu
},P_{\mu }\right) $ spanning the extended 8-dimensional phase-space $
\Gamma $ and with essential state variables $\mathbf{y}_{1}\left( \mathbf{y}
\right) $ spanning the 6-dimensional reduced phase-space $\Gamma _{1}$.
Introducing the \textit{global proper time} $\widehat{s}$, $\Gamma
_{1}\left( \widehat{s}\right) $ is defined as
\begin{equation}
\Gamma _{1}\left( \widehat{s}\right) \equiv \left\{ \mathbf{y}:\mathbf{y}\in
\Gamma ,\ \left\vert u\right\vert =1,\ s\left( y\right) =\widehat{s},\
ds\left( y\right) =\sqrt{g_{\mu \nu }dr^{\mu }dr^{\nu }}\right\} ,
\end{equation}
where $\left\vert u\right\vert \equiv \sqrt{u^{\alpha }u_{\alpha }}$ and $
s\left( y\right) $ is the world-line proper time uniquely associated to any $
\mathbf{y}$. By assumption, $\Gamma _{1}\left( \widehat{s}\right) $ is an
invariant set, i.e., $\Gamma _{1}\left( \widehat{s}\right) =\Gamma _{1}$ for
any $\widehat{s}\in
\mathbb{R}
$. Next, let us consider the Hamiltonian flow $T_{s_{0},s}$ defined in Eq.(
\ref{dinsis}). By construction the dynamical system is autonomous, namely
the flow is of the form
\begin{equation}
T_{s_{0},s}\mathbf{y}_{0}\equiv \chi \left( \mathbf{y}_{0},s-s_{0}\right) .
\end{equation}
The existence of the dynamical system $T_{s_{0},s}$ for the state $\mathbf{y}
\left( s\right) $ has been proved in Paper I. This requires that in the
proper time interval $I_{0}=\left[ -\infty ,s_{0}\right] $ the motion of
each charged particle is inertial, namely the external EM field vanishes in
the same interval. As a result of Eq.(\ref{dinsis}), any point in the
phase-space $\Gamma $ spanned by $\mathbf{y}$ or $\mathbf{y}_{0}$ is
associated to a unique phase-space trajectory, namely such that $\mathbf{y}=
\mathbf{y}\left( s\right) $, for any $\mathbf{y}\in \Gamma $. Due to (\ref
{dinsis}) there exists necessarily $\mathbf{y}_{0}\equiv \mathbf{y}\left(
s_{0}\right) $ which is mapped into $\mathbf{y}\left( s\right) $. Vice versa,
for any $s\in
\mathbb{R}
$ there exists a unique $\mathbf{y}=\mathbf{y}\left( s\right) $. However, we
notice here that for the axiomatic formulation of the CSM for the RR problem
the assumption of existence of the dynamical system $T_{s_{0},s}$ is not a
necessary condition. In fact, it is immediate to prove that the minimal
requirement is actually provided only by the existence of the
diffeomorphism (\ref{map}) defined above.
Now, for a prescribed $\widehat{s}_{0}\in
\mathbb{R}
$ let us consider the set $B\left( \widehat{s}_{0}\right) \subseteq \Gamma
_{1}$, with $B\left( \widehat{s}_{0}\right) $ an ensemble of states $\mathbf{
y}_{0},$ each one prescribed at the initial proper time $s_{0}=\widehat{s}
_{0}$. Its image generated at any $s=\widehat{s}\in
\mathbb{R}
$ by the flow $T_{s_{0},s}$, for each trajectory, is
\begin{equation}
B\equiv B\left( s\right) \equiv T_{s_{0},s}B\left( s_{0}\right) ,
\end{equation}
where $s$ and $s_{0}$ denote now the \textit{global proper times} $\widehat{s
}$ and $\widehat{s}_{0}$.
We introduce the following axioms.
\emph{AXIOM \#1:\ Probability on }$K\left( \Gamma _{1}\right) $.
Let $K\left( \Gamma _{1}\right) $ be a family of subsets of $\Gamma _{1}$
which are $L$-measurable. We define the probability of $B\left( s\right) \in
K\left( \Gamma _{1}\right) $ as the function
\begin{equation}
P\left( B\right) :K\left( \Gamma _{1}\right) \rightarrow \left[ 0,1\right]
\end{equation}
such that it satisfies the constraints
\begin{eqnarray}
P\left( \Gamma _{1}\right) &=&1, \\
P\left( \varnothing \right) &=&0, \\
P\left( \cup _{i\in N}B_{i}\right) &=&\sum_{i=0}^{\infty }P\left(
B_{i}\right) ,
\end{eqnarray}
with $\left\{ B_{i}\in K\left( \Gamma _{1}\right) ,i\in N\right\} $ being an
arbitrary family of disjoint sets of $K\left( \Gamma _{1}\right) $.
\emph{AXIOM \#2:\ Probability density.}
For any $B\left( s\right) \in K\left( \Gamma _{1}\right) $ and for any state
$\mathbf{y\equiv }\left( r^{\mu },P_{\mu }\right) $ there exists a unique
probability density $\rho \left( \mathbf{y}\right) >0$ on $\Gamma _{1}$ such
that
\begin{equation}
P\left( B\left( s\right) \right) =\int_{\Gamma }d\mathbf{y}\rho \left(
\mathbf{y}\right) \delta \left( \left\vert u\right\vert -1\right) \delta
\left( s-s\left( y\right) \right) \delta _{B\left( s\right) }\left( \mathbf{y
}\right) , \label{intphasesp}
\end{equation}
where $d\mathbf{y}=dr^{\mu }dP_{\mu }$ is the canonical measure on $\Gamma $
and $\delta _{B\left( s\right) }\left( \mathbf{y}\right) $ is the
characteristic function of $B\left( s\right) $. Furthermore, $s\left(
y\right) $ is a particle world-line proper time, while $s\equiv s_{0}+\Delta
s$, with $\Delta s$ an \textit{invariant proper time interval} independent
of $s_{0}$. We notice that $s\left( y\right) $ can be equivalently
parametrized in terms of the observer's coordinate time $r^{0}$, namely
\begin{equation}
ds\left( y\right) \equiv dr^{0}\sqrt{g_{\mu \nu }\frac{dr^{\mu }}{dr^{0}}
\frac{dr^{\nu }}{dr^{0}}}.
\end{equation}
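For illustration (assuming here, for definiteness, the flat metric $g_{\mu
\nu }=\eta _{\mu \nu }=\mathrm{diag}\left( 1,-1,-1,-1\right) $ and $
r^{0}=ct$), the previous parametrization reduces to the familiar expression
\begin{equation}
ds=c\,dt\sqrt{1-\frac{v^{2}}{c^{2}}}=\frac{c\,dt}{\gamma },
\end{equation}
with $\gamma $ denoting the Lorentz factor of the particle.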
\emph{AXIOM \#3:\ Equiprobability.}
Then, the equiprobability condition requires that, for all $B\left(
s_{0}\right) $ and for all $s,s_{0}\in I\subseteq
\mathbb{R}
$
\begin{equation}
P\left( B\left( s\right) \right) =P\left( B\left( s_{0}\right) \right) .
\end{equation}
\bigskip
We remark that in the integral (\ref{intphasesp}) the two Dirac-delta
functions can be interpreted as physical realizability conditions, required
to reduce the dimension of the volume element $d\mathbf{y}$ defined on the
extended phase-space $\Gamma $.
We can now introduce the following theorem, concerning the validity of the
Liouville equation for $\rho \left( \mathbf{y}\right) $.
\bigskip
\textbf{THM.6 - Relativistic Liouville equation for }$\rho \left( \mathbf{y}
\right) $.
\emph{Given a Hamiltonian system }$\left\{ \mathbf{y},H_{eff}\right\} $
\emph{and imposing the validity of Axioms \#1-\#3, it follows that the
probability density }$\rho \left( \mathbf{y}\left( s\right) \right) $ \emph{
is a constant of motion, namely for any} $s,s_{0}\in
\mathbb{R}
$ \emph{(to be intended now as world-line proper times) and for any }$
\mathbf{y}_{0}\in \Gamma $
\begin{equation}
\rho \left( \mathbf{y}\left( s\right) \right) =\rho \left( \mathbf{y}
_{0}\right) ,
\end{equation}
\emph{to be referred to as the integral Liouville equation. This can also be
written equivalently as}
\begin{equation}
\frac{d}{ds}\rho \left( \mathbf{y}\left( s\right) \right) =0, \label{liouv}
\end{equation}
\emph{to be referred to as the differential Liouville equation. As a
consequence, introducing the kinetic distribution function (KDF) }$f\left(
\mathbf{y}\right) $
\begin{equation}
f\left( \mathbf{y}\right) \equiv \rho \left( \mathbf{y}\right) N,
\end{equation}
\emph{with }$N$\emph{\ being the total number of particles in the
configuration space of }$B\subseteq K\left( \Gamma \right) $\emph{, it
follows that also }$f\left( \mathbf{y}\right) $\emph{\ satisfies the
Liouville equation (\ref{liouv}).}
\emph{Proof}\ - We first notice that, from Axiom \#1, by changing the
integration variables we can write Eq.(\ref{intphasesp}) as
\begin{eqnarray}
P\left( B\left( s\right) \right) &=&\int_{\Gamma }d\mathbf{y}\rho \left(
\mathbf{y}\right) \delta \left( \left\vert u\right\vert -1\right) \delta
\left( s-s\left( y\right) \right) \delta _{B\left( s\right) }\left( \mathbf{y
}\right) = \notag \\
&=&\int_{\Gamma }d\mathbf{y}_{0}\left\vert \frac{\partial \mathbf{y}\left(
s\right) }{\partial \mathbf{y}_{0}}\right\vert \rho \left( \mathbf{y}\left(
s\right) \right) \delta \left( \left\vert u\right\vert -1\right) \delta
\left( s-s\left( y\right) \right) \delta _{B\left( s_{0}\right) }\left(
\mathbf{y}\left( s_{0}\right) \right) ,
\end{eqnarray}
with $\left\vert \frac{\partial \mathbf{y}\left( s\right) }{\partial \mathbf{
y}_{0}}\right\vert $ being the Jacobian of the variable transformation from $
\mathbf{y}\left( s\right) $ to $\mathbf{y}_{0}$. On the other hand, since
the system $\left\{ \mathbf{y},H_{eff}\right\} $ is Hamiltonian, it follows
identically that $\left\vert \frac{\partial \mathbf{y}\left( s\right) }{
\partial \mathbf{y}_{0}}\right\vert =1$. Hence, invoking Axiom \#2 we can
write
\begin{equation}
\int_{\Gamma }d\mathbf{y}_{0}\left[ \rho \left( \mathbf{y}\left( s\right)
\right) \delta \left( \left\vert u\right\vert -1\right) \delta \left(
s-s\left( y\right) \right) -\rho \left( \mathbf{y}_{0}\right) \delta \left(
\left\vert u_{0}\right\vert -1\right) \delta \left( s_{0}-s\left(
y_{0}\right) \right) \right] \delta _{B\left( s_{0}\right) }\left( \mathbf{y}
\left( s_{0}\right) \right) =0,
\end{equation}
from which it must be that
\begin{equation}
\rho \left( \mathbf{y}\left( s\right) \right) \delta \left( \left\vert
u\right\vert -1\right) \delta \left( s-s\left( y\right) \right) =\rho \left(
\mathbf{y}_{0}\right) \delta \left( \left\vert u_{0}\right\vert -1\right)
\delta \left( s_{0}-s\left( y_{0}\right) \right) . \label{Lio1}
\end{equation}
On the other hand, by construction it follows that
\begin{eqnarray}
\delta \left( \left\vert u\right\vert -1\right) &=&\frac{1}{\left\vert \frac{
d\left\vert u\right\vert }{d\left\vert u_{0}\right\vert }\right\vert }\delta
\left( \left\vert u_{0}\right\vert -1\right) =\delta \left( \left\vert
u_{0}\right\vert -1\right) , \\
\delta \left( s-s\left( y\right) \right) &=&\frac{1}{\left\vert \frac{ds}{
ds_{0}}\right\vert }\delta \left( s_{0}-s\left( y_{0}\right) \right) =\delta
\left( s_{0}-s\left( y_{0}\right) \right) .
\end{eqnarray}
In fact, by definition the 4-velocity is normalized to 1 at all proper
times, so that $\left\vert \frac{d\left\vert u\right\vert }{d\left\vert
u_{0}\right\vert }\right\vert =1$. Furthermore, $s\equiv s_{0}+\Delta s$,
with $\Delta s$ being independent of the initial value $s_{0}$, and hence $
\left\vert \frac{ds}{ds_{0}}\right\vert =1$ too.
Finally, because of these conclusions, from Eq.(\ref{Lio1}) it follows that
\begin{equation}
\rho \left( \mathbf{y}\left( s\right) \right) =\rho \left( \mathbf{y}
_{0}\right) ,
\end{equation}
which represents the Liouville equation in integral form. By differentiating
with respect to $s$ the equivalent differential representation follows at
once. An analogous equation holds manifestly also for the KDF $f\left(
\mathbf{y}\right) $.
\textbf{Q.E.D.}
\bigskip
We conclude by noting that, formally, the Liouville equation for non-local
Hamiltonian systems in standard form is analogous to that characterizing
local Hamiltonian systems. Such an equation can be viewed as a \textit{
Vlasov equation} for a relativistic collisionless plasma, in which each
particle is subject only to the action of a mean-field EM interaction,
generated respectively by the external and the self\ EM Faraday tensors. By
definition, in this treatment the latter do not include retarded binary EM
interactions. It follows that, in terms of the Lagrangian equation (\ref
{liouv}), the probability density $\rho \left( \mathbf{y}\left( s\right)
\right) $ is parametrized in terms of the single-particle phase-space
trajectory $\left\{ \mathbf{y}\left( s\right) ,s\in I\right\} $. Hence, it
advances in (proper) time $s$ by means of the canonical state $\mathbf{y}
\left( s\right) $ as determined by the Hamiltonian equations of motion (\ref
{initial pro}).
\bigskip
\subsection{Vlasov-Maxwell description}
To define a well-posed problem, the relativistic Vlasov equation (\ref{liouv}
) must be coupled to the Maxwell equations, which determine the total EM
field produced by all the relevant sources. In particular, in order to
determine the external Faraday tensor $F_{\mu \nu }^{(ext)}$, the
corresponding EM 4-potential $A_{\nu }^{(ext)}$ must be determined. In the
Lorentz gauge, this is prescribed by requiring it to be a solution of the
Maxwell equation
\begin{equation}
\square A^{(ext)\mu }=\frac{4\pi }{c}j^{\left( ext\right) \mu }(r),
\label{m2}
\end{equation}
where $j^{\left( ext\right) \mu }(r)$ is identified with the total current
density
\begin{equation}
j^{\left( ext\right) \mu }(r)\equiv q\int d^{4}u\delta \left( \left\vert
u\right\vert -1\right) u^{\mu }f\left( \mathbf{y}\right) +j^{\left(
coils\right) \mu }(r).
\end{equation}
Here, the first term is the Vlasov 4-current density, namely the velocity
moment of $f\left( \mathbf{y}\right) $ carrying the non-local phase-space
contributions which yield the collective field produced by the plasma. The
second term, instead, is produced by possible prescribed sources located
outside the plasma domain. Therefore, in the Vlasov-Maxwell description the
total EM 4-potential acting on a single particle must be considered as
represented by $A_{\nu }=A_{\nu }^{(ext)}+A_{\nu }^{(self)}$, where $A_{\nu
}^{(self)}$ is given by Eq.(\ref{intrepA}) and $A_{\nu }^{(ext)}$ is the
solution of Eq.(\ref{m2}).
Therefore, the dynamical evolution of the KDF along a single-particle
phase-space trajectory depends both explicitly, via $A_{\nu }^{(self)}$, and
implicitly, via the 4-current $j^{\left( ext\right) \mu }(r)$, on the whole
Faraday tensor $F_{\mu \nu }\equiv F_{\mu \nu }^{(ext)}+F_{\mu \nu
}^{(self)} $. In this way contributions which are non-local both in
configuration and phase-space are consistently included in the theory.
\bigskip
\section{Fluid moment equations}
We now proceed to compute explicitly the relativistic fluid moment equations
which follow from the Liouville equation. To this aim, the relativistic
Liouville equation is conveniently written as a PDE (Eulerian form)
\begin{equation}
u^{\mu }\frac{\partial f\left( \mathbf{y}\right) }{\partial r^{\mu }}+G^{\mu
}\left( \mathbf{y}\right) \frac{\partial f\left( \mathbf{y}\right) }{
\partial u_{\mu }}=0, \label{euler-kinm}
\end{equation}
where $G^{\mu }\left( \mathbf{y}\right) $ is defined by Eq.(\ref{RR}), or as
an ODE (Lagrangian form)
\begin{equation}
\frac{dr^{\mu }}{ds}\frac{\partial f\left( \mathbf{y}\left( s\right) \right)
}{\partial r^{\mu }}+\frac{du_{\mu }}{ds}\frac{\partial f\left( \mathbf{y}
\left( s\right) \right) }{\partial u_{\mu }}=0, \label{lagr-kinm}
\end{equation}
with $\mathbf{y}\left( s\right) $ being the phase-space trajectory of a
particle. Then, the relativistic fluid equations related to the Liouville
equation are defined as the following integrals over the momentum space
\begin{equation}
\int d^{4}u\delta \left( \left\vert u\right\vert -1\right) G\left[ u^{\mu }
\frac{\partial f\left( \mathbf{y}\right) }{\partial r^{\mu }}+G^{\mu }\left(
\mathbf{y}\right) \frac{\partial f\left( \mathbf{y}\right) }{\partial u_{\mu
}}\right] =0.
\end{equation}
Similarly, the corresponding fluid fields are defined as
\begin{equation}
\int d^{4}u\delta \left( \left\vert u\right\vert -1\right) Gf\left( \mathbf{y
}\right) ,
\end{equation}
with $G=1,u^{\mu },u^{\mu }u^{\nu },\ldots $, where $u^{\mu }$ is the
4-velocity. In particular, we shall denote
\begin{eqnarray}
n\left( r\right) &\equiv &\int d^{4}u\delta \left( \left\vert u\right\vert
-1\right) f\left( \mathbf{y}\right) , \label{den1} \\
N^{\mu }\left( r\right) &=&n\left( r\right) U^{\mu }\left( r\right) \equiv
\int d^{4}u\delta \left( \left\vert u\right\vert -1\right) u^{\mu }f\left(
\mathbf{y}\right) , \label{den2} \\
T^{\mu \nu }\left( r\right) &\equiv &\int d^{4}u\delta \left( \left\vert
u\right\vert -1\right) u^{\mu }u^{\nu }f\left( \mathbf{y}\right) ,
\label{den3}
\end{eqnarray}
to be referred to as \textit{the number density, the 4-flow and the
stress-energy tensor}.
It is immediate to prove that the corresponding moment equations are as
follows.
\bigskip
\emph{Continuity equation}
For $G=1$ the Liouville equation provides the continuity equation
\begin{equation}
\partial _{\mu }N^{\mu }\left( r\right) =0. \label{continuity}
\end{equation}
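For completeness, we sketch the step leading to Eq.(\ref{continuity}),
assuming that $f$ decays sufficiently fast at large $\left\vert u\right\vert
$ for boundary terms to vanish. Upon integration of Eq.(\ref{euler-kinm})
over the momentum space, the force term contributes
\begin{equation}
\int d^{4}u\delta \left( \left\vert u\right\vert -1\right) G^{\mu }\left(
\mathbf{y}\right) \frac{\partial f\left( \mathbf{y}\right) }{\partial u_{\mu
}}=-\int d^{4}u\delta \left( \left\vert u\right\vert -1\right) f\left(
\mathbf{y}\right) \frac{\partial G^{\mu }\left( \mathbf{y}\right) }{\partial
u_{\mu }}=0,
\end{equation}
where the integration by parts is performed on the mass shell (the force
preserving the normalization $u_{\mu }u^{\mu }=1$) and the last equality
follows from the phase-space volume conservation of the Hamiltonian flow.
The remaining term yields precisely $\partial _{\mu }N^{\mu }\left( r\right)
=0$.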
\emph{Energy-momentum equation}
For $G=u^{\nu }$ the Liouville equation provides the energy-momentum equation
\begin{equation}
\partial _{\mu }T^{\mu \nu }\left( r\right) =F_{\left( tot\right) }^{\nu \mu
}\left( r\right) N_{\mu }\left( r\right) , \label{momentum}
\end{equation}
where, from Eq.(\ref{RR}) we have that
\begin{equation}
F_{\left( tot\right) }^{\nu \mu }\left( r\right) \equiv \frac{q}{m_{0}c^{2}}
\left[ \overline{F}^{\left( ext\right) \nu \mu }+\overline{F}^{\left(
self\right) \nu \mu }\right]
\end{equation}
is the total EM force, with $\overline{F}^{\left( self\right) \nu \mu }$
containing the retarded non-local contributions arising from the EM RR
effect.
\bigskip
We remark the following properties.
1) As a consequence of the Hamiltonian formulation in standard form, the
fluid equations obtained from the kinetic equation with the inclusion of the
RR effect are formally the same as in the usual treatment for local systems.
2) The contribution of the RR effect to the fluid equations is contained
explicitly in the source term on the rhs of Eq.(\ref{momentum}), and also
implicitly in the definition of the fluid fields. In fact, by assumption,
the KDF is a function of the effective Hamiltonian state $\mathbf{y}\equiv
\left( r^{\mu },P_{\mu }\right) $, which depends on the retarded
self-potential. Hence, the fluid fields defined by Eqs.(\ref{den1})-(\ref
{den3}) must be interpreted as the fluid fields of the plasma which is
emitting self-radiation and is therefore subject to the RR effect.
\bigskip
\subsection{The implicit contribution of the RR self-force}
It is worth discussing the features of the theory in connection with the
implicit contribution of the RR effect contained in the definition of the
fluid fields. In particular, here we show that such a contribution can be made
explicit and an analytical asymptotic estimate of it can be given, provided
some suitable assumptions are imposed on the physical system. This concerns
the case in which the contribution of the self-potential is small in
comparison with the external EM potential in the KDF. In these
circumstances, the exact KDF can be Taylor expanded as follows
\begin{equation}
f\left( \mathbf{y}\right) \simeq f\left( \mathbf{y}_{nc}\right) +\left(
\mathbf{y-y}_{nc}\right) \left. \frac{\partial f\left( \mathbf{y}\right) }{
\partial \mathbf{y}}\right\vert _{\mathbf{y}=\mathbf{y}_{nc}}+...,
\label{series1}
\end{equation}
where $\mathbf{y}_{nc}\equiv \left( r^{\mu },p_{\mu }\right) $ is the state
which is canonical in the absence of the EM self-field. It is clear that, by
construction, only the canonical momenta are involved in this expansion,
since the configuration state is left unchanged by the presence of the
self-force. Therefore, from the form of the previous expansion it follows
that the first term of the series, namely $f\left( \mathbf{y}_{nc}\right) $,
does not contain any contribution from the RR self-field. Consider, for
simplicity, the Taylor series to first order. Then, the corresponding fluid
fields can be decomposed as follows
\begin{eqnarray}
n\left( r\right) &\simeq &n_{0}\left( r\right) +n_{1}\left( r\right) ,
\label{n1} \\
N^{\mu }\left( r\right) &\simeq &N_{0}^{\mu }\left( r\right) +N_{1}^{\mu
}\left( r\right) , \\
T^{\mu \nu }\left( r\right) &\simeq &T_{0}^{\mu \nu }\left( r\right)
+T_{1}^{\mu \nu }\left( r\right) , \label{t}
\end{eqnarray}
where
\begin{eqnarray}
n_{0}\left( r\right) &\equiv &\int d^{4}u\delta \left( \left\vert
u\right\vert -1\right) f\left( \mathbf{y}_{nc}\right) , \\
n_{1}\left( r\right) &\equiv &\int d^{4}u\delta \left( \left\vert
u\right\vert -1\right) \left( \mathbf{y-y}_{nc}\right) \left. \frac{\partial
f\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right\vert _{\mathbf{y}=
\mathbf{y}_{nc}}=\frac{2q}{c}\overline{A}_{\mu }^{(self)}\int d^{4}u\delta
\left( \left\vert u\right\vert -1\right) \left. \frac{\partial f\left(
\mathbf{y}\right) }{\partial P_{\mu }}\right\vert _{P_{\mu }=p_{\mu }},
\end{eqnarray}
and similar definitions hold for the other two fluid fields.
To illustrate the procedure, let us consider, for example, the case of a
relativistic Maxwellian distribution of the form \cite{degroot}
\begin{equation}
f_{M}\left( \mathbf{y}\right) \equiv \frac{1}{\left( 2\pi \hbar \right) ^{3}}
\exp \left[ \frac{\mu -P^{\mu }U_{\mu }}{T}\right] , \label{max}
\end{equation}
where $\mu $, $P^{\mu }$, $U_{\mu }$ and $T$ are respectively the chemical
potential, the canonical momentum, the fluid 4-velocity and the temperature.
Then, in terms of the previous expansion, we obtain for the density
\begin{eqnarray}
n_{0}\left( r\right) &\equiv &\frac{4\pi m^{2}cT}{\left( 2\pi \hbar \right)
^{3}}K_{2}\left( \frac{mc^{2}}{T}\right) \exp \left[ \frac{\mu }{T}-\frac{q}{
c}\frac{\overline{A}_{\mu }^{(ext)}U^{\mu }}{T}\right] , \\
n_{1}\left( r\right) &\equiv &-\frac{2q}{c}\frac{\overline{A}_{\mu
}^{(self)}U^{\mu }}{T}n_{0}\left( r\right) ,
\end{eqnarray}
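As a cross-check of the expression for $n_{0}\left( r\right) $, we note
that, when evaluated in the fluid rest frame, the velocity integral reduces
to the standard J\"{u}ttner normalization
\begin{equation}
4\pi \int_{0}^{\infty }du\,u^{2}\exp \left[ -\frac{mc^{2}}{T}\sqrt{1+u^{2}}
\right] =\frac{4\pi T}{mc^{2}}K_{2}\left( \frac{mc^{2}}{T}\right) ,
\end{equation}
which is the origin of the Bessel factor $K_{2}$ appearing in $n_{0}\left(
r\right) $.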
with $K_{2}\left( \frac{mc^{2}}{T}\right) $ being the modified Bessel
function of the second kind. As can be seen, the effect of the RR self-field
appears only in $n_{1}\left( r\right) $ through the integral over the
non-local dependencies contained in the potential $\overline{A}_{\mu
}^{(self)}$. It follows that for a Maxwellian KDF the 4-flow $N^{\mu }\left(
r\right) $ can be written as
\begin{equation}
N^{\mu }\left( r\right) \simeq \left[ n_{0}\left( r\right) +n_{1}\left(
r\right) \right] U^{\mu }\left( r\right) , \label{4flow_Max}
\end{equation}
while the expansion terms of the stress-energy tensor $T^{\mu \nu }\left(
r\right) $ are given by
\begin{eqnarray}
T_{0}^{\mu \nu }\left( r\right) &\equiv &\frac{1}{c^{2}}n_{0}eU^{\mu }U^{\nu
}-p_{0}\Delta ^{\mu \nu }, \label{t0} \\
T_{1}^{\mu \nu }\left( r\right) &\equiv &\frac{1}{c^{2}}n_{1}eU^{\mu }U^{\nu
}-p_{1}\Delta ^{\mu \nu }. \label{t1}
\end{eqnarray}
Here the notation is as in Ref.\cite{degroot}. Thus, $\Delta ^{\mu \nu }$ is
the projection operator $\Delta ^{\mu \nu }\equiv \eta ^{\mu \nu
}-c^{-2}U^{\mu }U^{\nu },$ $e$ is the energy per particle
\begin{equation}
e=mc^{2}\frac{K_{3}\left( \frac{mc^{2}}{T}\right) }{K_{2}\left( \frac{mc^{2}
}{T}\right) }-T,
\end{equation}
and from the definition of the pressure as $p=nT$ it follows that
\begin{eqnarray}
p_{0}\left( r\right) &=&n_{0}\left( r\right) T, \\
p_{1}\left( r\right) &=&n_{1}\left( r\right) T=-\frac{2q}{c}\overline{A}
_{\mu }^{(self)}U^{\mu }n_{0}\left( r\right) .
\end{eqnarray}
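As a simple consistency check, in the nonrelativistic limit $T\ll mc^{2}$
the large-argument expansion of the modified Bessel functions, $K_{\nu
}\left( z\right) \simeq \sqrt{\pi /2z}e^{-z}\left[ 1+\left( 4\nu
^{2}-1\right) /8z+...\right] $, yields for the energy per particle
\begin{equation}
e\simeq mc^{2}+\frac{3}{2}T,
\end{equation}
so that the rest energy plus the internal energy of a classical monatomic
ideal gas is recovered, consistently with the equation of state $p=nT$.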
Finally, let us consider how the fluid equations are modified by the
introduction of the series expansion (\ref{series1}). Substituting the
relations (\ref{n1})-(\ref{t}) into the moment equations, for the continuity
equation we get
\begin{equation}
\partial _{\mu }N_{0}^{\mu }\left( r\right) =-\partial _{\mu }N_{1}^{\mu
}\left( r\right) , \label{as1}
\end{equation}
and for the momentum equation
\begin{equation}
\partial _{\mu }T_{0}^{\mu \nu }=F_{\left( tot\right) }^{\nu \mu }N_{\mu
}-\partial _{\mu }T_{1}^{\mu \nu }. \label{as2}
\end{equation}
In this way, on the lhs we have isolated the terms of the \textquotedblleft
unperturbed fluid\textquotedblright , namely the physical observables
corresponding to a charged fluid in the absence of RR. On the other hand, the
asymptotic contributions of the RR effect have been isolated on the rhs,
which allows one to interpret them as source terms due to extra forces
acting on the unperturbed fluid. In particular, the presence of the RR acts
like a non-conservative collisional operator, if we interpret it as a sort
of retarded scattering of the fluid (and therefore, of the single particles
at the kinetic level) with itself.
\bigskip
\section{Lagrangian formulation of the fluid equations}
An important issue concerns the treatment of the non-local contributions
appearing in the fluid equations both in the definitions of the fluid fields
and in the source term in the momentum equation. This requires, in
particular, the explicit representation of the self-potential $\overline{A}
_{\mu }^{(self)}$ and the EM self-force $\overline{F}_{\mu k}^{\left(
self\right) }$ defined respectively in Eqs.(\ref{atot2}) and (\ref{COVARIANT
FORM}). In fact, in the previous sections these non-local contributions have
been written in a parameter-free representation (integral form), so that
they do not depend on the retarded particle velocity. This allowed us to
perform the velocity integrals in a straightforward way, only in terms of
local 4-velocities, in agreement with the formalism adopted for the
Hamiltonian formulation in standard form.
To treat these non-local terms it is first convenient to represent the fluid
moment equations in Lagrangian form, describing the dynamics of fluid
elements along their respective Lagrangian path (LP). By substituting the
definition (\ref{den2}) in Eq.(\ref{continuity}) we obtain the corresponding
Lagrangian form of the continuity equation, given by
\begin{equation}
\frac{D}{Ds}n+n\partial _{\mu }U^{\mu }=0, \label{cont-lag}
\end{equation}
where $\frac{D}{Ds}\equiv U^{\mu }\left( r\left( s\right) \right) \partial
_{\mu }$ is the convective Lagrangian derivative along the LP of the fluid
element parametrized in terms of the arc-length $s$, and $U^{\mu }\left(
r\left( s\right) \right) =\frac{dr^{\mu }\left( s\right) }{ds}$. Similarly,
writing the stress-energy tensor $T^{\mu \nu }\left( r\right) $ as $T^{\mu
\nu }\left( r\right) =nU^{\mu }U^{\nu }+P^{\mu \nu }\left( r\right) $, with
$P^{\mu \nu }\left( r\right) \equiv T^{\mu \nu }\left( r\right) -nU^{\mu
}U^{\nu }$, the energy-momentum equation (\ref{momentum}) can be represented
in Lagrangian form as follows
\begin{equation}
n\frac{D}{Ds}U^{\nu }=nF_{\left( tot\right) }^{\nu \mu }U_{\mu }-\partial
_{\mu }P^{\mu \nu }. \label{euler-lag}
\end{equation}
Analogous results can be given for the asymptotic equations (\ref{as1}) and
(\ref{as2}).
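As an elementary illustration, Eq.(\ref{cont-lag}) can be integrated
formally along a given LP, which yields
\begin{equation}
n\left( r\left( s\right) \right) =n\left( r\left( s_{0}\right) \right) \exp
\left[ -\int_{s_{0}}^{s}ds^{\prime }\partial _{\mu }U^{\mu }\left( r\left(
s^{\prime }\right) \right) \right] ,
\end{equation}
so that the proper density is simply transported along the LPs of a
divergence-free flow, for which $\partial _{\mu }U^{\mu }=0$.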
With the introduction of the LPs, the parametrization of the non-local
contributions can be easily reached in terms of the LP arc-length $s$.
Consider, for example, the self-potential $\overline{A}_{\mu }^{(self)}$.
This can be expressed as
\begin{equation}
\overline{A}_{\mu }^{(self)}\left( r,\left[ r\right] \right) \equiv
2q\int_{1}^{2}ds^{\prime }\frac{dr_{\mu }^{\prime }}{ds^{\prime }}\delta (
\widetilde{R}^{\mu }\widetilde{R}_{\mu }-\sigma ^{2}),
\end{equation}
where by definition now $\frac{dr_{\mu }^{\prime }}{ds^{\prime }}=U_{\mu
}\left( r\left( s^{\prime }\right) \right) $ is defined along a fluid
element LP. Then, by expressing the Dirac-delta function as
\begin{equation}
\delta (\widetilde{R}^{\mu }\widetilde{R}_{\mu }-\sigma ^{2})=\frac{1}{
\left\vert 2\widetilde{R}^{\alpha }U_{\alpha }\right\vert }\delta \left(
s^{\prime }-s+s_{ret}\right) ,
\end{equation}
it follows that $\overline{A}_{\mu }^{(self)}$ can be equivalently written
in the integrated form as
\begin{equation}
\overline{A}_{\mu }^{(self)}\left( r,\left[ r\right] \right) =q\left[ \frac{
U_{\mu }\left( r\left( s^{\prime }\right) \right) }{\left\vert \widetilde{R}
^{\alpha }U_{\alpha }\left( r\left( s^{\prime }\right) \right) \right\vert }
\right] _{s^{\prime }=s-s_{ret}}, \label{self_integral}
\end{equation}
with $\widetilde{R}^{\alpha }$ being the displacement vector defined along a
LP. In particular, in agreement with the Einstein Causality Principle, the
retarded time $s_{ret}=s-s^{\prime }$ is the positive root of the delay-time
equation
\begin{equation}
\widetilde{R}^{\mu }\widetilde{R}_{\mu }-\sigma ^{2}=0. \label{delay}
\end{equation}
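The delay-time equation (\ref{delay}) can also be solved numerically along a
prescribed LP. The following minimal sketch (an illustration only, not part
of the formalism above: it assumes a 1+1D hyperbolic worldline with unit
proper acceleration, metric signature $(+,-)$, and the function name
\texttt{delay\_root} is ours) finds the retarded root by bisection and
confirms the short delay-time estimate $s_{ret}\simeq \sigma $:

```python
import math

def delay_root(r, s, sigma, bracket=10.0, tol=1e-14):
    """Bisection solver (illustrative) for the delay-time equation
    R~^mu R~_mu - sigma^2 = 0, with R~ = r(s) - r(s') and signature (+,-)."""
    def f(sp):
        dt = r(s)[0] - r(sp)[0]
        dx = r(s)[1] - r(sp)[1]
        return dt * dt - dx * dx - sigma * sigma
    lo, hi = s - bracket, s   # f(lo) > 0 deep in the past, f(s) = -sigma^2 < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Uniformly accelerated worldline r(s) = (sinh s, cosh s):
traj = lambda s: (math.sinh(s), math.cosh(s))
sigma = 1e-2
sp = delay_root(traj, 1.0, sigma)
s_ret = 1.0 - sp              # numerically s_ret = sigma + O(sigma^3)
```

On this worldline the root is known in closed form,
$s_{ret}=\mathrm{arccosh}(1+\sigma ^{2}/2)$, which the bisection reproduces
to machine precision.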
An analogous derivation can be carried out also for the self-force $
\overline{F}_{\mu k}^{\left( self\right) }$, giving the following result
\begin{equation}
\overline{F}_{\mu k}^{\left( self\right) }\left( r,\left[ r\right] \right)
=-2q\left\{ \frac{1}{\left\vert \widetilde{R}^{\alpha }U_{\alpha }(s^{\prime
})\right\vert }\frac{D}{Ds^{\prime }}X_{\mu k}\left( r\left( s^{\prime
}\right) \right) \right\} _{s^{\prime }=s-s_{ret}}, \label{fself}
\end{equation}
where
\begin{equation}
X_{\mu k}\left( r\left( s^{\prime }\right) \right) \equiv \left[ \frac{
U_{\mu }(r\left( s^{\prime }\right) )\widetilde{R}_{k}-U_{k}(r\left(
s^{\prime }\right) )\widetilde{R}_{\mu }}{\widetilde{R}^{\alpha }U_{\alpha
}(r\left( s^{\prime }\right) )}\right] .
\end{equation}
Again, this expression must be understood as a parametrization defined along a
fluid element LP.
\bigskip
We conclude by commenting on the following remarkable aspects of the theory
presented here.
1) The fluid equations with the inclusion of the non-local effect related to
the EM RR have been derived in a closed analytical form in both Eulerian and
Lagrangian formulations. In particular, it follows that the fluid dynamics
of the non-local kinetic system is intrinsically non-local too.
2) Non-local contributions of the RR enter both explicitly and implicitly,
through the definitions of the fluid fields as velocity moments of the KDF.
3) From the point of view of the fluid description, it follows that the
natural setting for the treatment of the non-local fluid equations is given
by the Lagrangian formulation and the concept of LPs. This is a consequence
of the fact that the exact moment equations are of delay-type. In fact, in
order to properly deal with the non-local contributions of the RR the
parametrization of the retarded effects in terms of the arc-length of the
corresponding LPs is needed. It follows that the dynamics of a generic fluid
element along its LP is related to the EM RR effect produced at the retarded
time along the LP itself.
\bigskip
\section{Asymptotic approximation}
In the previous sections we derived an exact formulation for both kinetic
and fluid theories describing systems of relativistic charged particles
subject to the EM RR self-interaction. In particular, we have pointed out
that the kinetic and fluid equations are of delay-type, and therefore
intrinsically non-local, due to the characteristic feature of the RR effect
of being a non-local retarded effect. The retarded proper time is determined
by Eq.(\ref{delay}) in agreement with the causality principle. Notice that
this equation has formally the same expression for the single-particle or
the kinetic dynamics and for the fluid equations in Lagrangian form (see
also Paper I). By inspecting Eq.(\ref{delay}) it is easy to realize that the
order of magnitude of the delay-time is approximately $s_{ret}\sim \sigma /c$,
and therefore very small for classical elementary particles. The smallness
of the retarded time may represent a serious problem for the practical
implementation of the exact theory presented here. In fact, the retarded
time associated to the RR can be orders of magnitude smaller than any other
characteristic time for most relevant physical situations. The question
is of primary importance, for example, for the actual numerical integration
of the exact fluid equations.
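As a reference estimate, for an electron with $\sigma $ of the order of the
classical electron radius one has
\begin{equation}
s_{ret}\sim \frac{\sigma }{c}\sim \frac{e^{2}}{m_{e}c^{3}}\approx 10^{-23}\
\mathrm{s},
\end{equation}
which is many orders of magnitude below the characteristic time scales of
typical plasma phenomena.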
In view of these considerations, in this section we provide asymptotic
estimations of the non-local terms appearing in the moment equations, which
allow one to overcome the difficulty connected with the finite delay-time
intervals carried by the RR phenomenon. This requires introducing a
suitable asymptotic expansion of the exact non-local terms by means of
approximations in which the self-interaction contributions are all expressed
only through local quantities. The result has potential interest also in
relation to the use of Eulerian integration schemes for the fluid equations
with the inclusion of the RR effect.
Specifically, the present analysis requires developing an asymptotic
approximation which involves the treatment of the delay-time $s_{ret}$. This
is accomplished within the short delay-time ordering approximation given by
Eq.(\ref{ordering -1}). In the following we shall work adopting the
Lagrangian representation form for the fluid equations. To perform the
asymptotic expansion, we assume that both the external EM field acting on
each fluid element and the macroscopic fluid fields associated to the
kinetic system are smooth functions of the coordinate 4-position vector
$r^{\alpha }$, namely they are of class $C^{k}$, with $k\geq 2$. The result
of the asymptotic approximation for the terms associated to the RR
self-interaction is provided by the following theorem.
\bigskip
\textbf{THM.7 - First-order, short delay-time asymptotic approximation
(present-time expansion).}
\emph{Given validity of the asymptotic ordering (\ref{ordering -1}) and the
smoothness assumptions for the external EM and the fluid fields, neglecting
corrections of order }$\epsilon ^{n},$ \emph{with} $n\geq 1$ \emph{
(first-order approximation)}$,$\emph{\ it follows that:}
\emph{T7}$_{1}$\emph{) The retarded self-potential }$\overline{A}_{\mu
}^{(self)}$ \emph{defined in Eq.(\ref{self_integral}) can be expanded in a
neighborhood of }$s$\emph{\ as follows:}
\begin{equation}
\overline{A}_{\mu }^{(self)}=\left. \overline{A}_{\mu }^{(self)}\right\vert
_{s}\left[ 1+O(\epsilon )\right] , \label{asin1}
\end{equation}
\emph{where the present-time leading-order contribution }$\left. \overline{A}
_{\mu }^{(self)}\right\vert _{s}$ \emph{is given by}
\begin{equation}
\left. \overline{A}_{\mu }^{(self)}\right\vert _{s}=q\left[ \frac{1}{\sigma }
U_{\mu }\left( r\left( s\right) \right) -\frac{D}{Ds}U_{\mu }\left( r\left(
s\right) \right) \right] ,
\end{equation}
\emph{with }$\frac{D}{Ds}$ \emph{being the convective derivative along a
fluid element Lagrangian path.}
\emph{T7}$_{2}$\emph{) Concerning Eq.(\ref{euler-lag}), let us define the
vector field }$K_{\mu }$ \emph{as follows:}
\begin{equation}
K_{\mu }\equiv \frac{q}{m_{o}c^{2}}\overline{F}_{\mu \nu }^{\left(
self\right) }U^{\nu }, \label{gmu}
\end{equation}
\emph{with }$\overline{F}_{\mu \nu }^{\left( self\right) }$\emph{\ defined
in Eq.(\ref{fself}). Then, in a neighborhood of }$s$\emph{, }$K_{\mu }$\emph{
\ can be expanded as follows:}
\begin{equation}
K_{\mu }=\left. K_{\mu }\right\vert _{s}\left[ 1+O(\epsilon )\right] ,
\label{gs}
\end{equation}
\emph{where the present-time leading-order contribution }$\left. K_{\mu
}\right\vert _{s}$ \emph{is given by}
\begin{equation}
\left. K_{\mu }\right\vert _{s}=\left\{ -\frac{1}{\sigma }\frac{q^{2}}{
m_{o}c^{2}}\frac{D}{Ds}U_{\mu }\left( r\left( s\right) \right) +g_{\mu
}\right\} ,
\end{equation}
\emph{with }$g_{\mu }$\emph{\ denoting the 4-vector}
\begin{equation}
g_{\mu }=\frac{2}{3}\frac{q^{2}}{m_{o}c^{2}}\left[ \frac{D^{2}}{Ds^{2}}
U_{\mu }-U_{\mu }(s)U^{k}(s)\frac{D^{2}}{Ds^{2}}U_{k}\right] .
\end{equation}
\emph{Proof} - The proof of T7$_{1}$) and T7$_{2}$) can be reached
by introducing a Taylor expansion in terms of the retarded time $s^{\prime }$
for the relevant quantities appearing in Eqs.(\ref{self_integral}) and
(\ref{fself}). In particular, for the 4-velocity $U_{\mu }\left( r\left(
s^{\prime }\right) \right) $ and the displacement vector $\widetilde{R}^{k}$
we obtain respectively
\begin{equation}
U_{\mu }\left( r\left( s^{\prime }\right) \right) \cong U_{\mu }\left(
r\left( s\right) \right) -(s-s^{\prime })\frac{D}{Ds}U_{\mu }\left( r\left(
s\right) \right) +\frac{(s-s^{\prime })^{2}}{2}\frac{D^{2}}{Ds^{2}}U_{\mu
}\left( r\left( s\right) \right) +O\left( \epsilon ^{3}\right)
\end{equation}
and
\begin{equation}
\widetilde{R}^{k}\cong (s-s^{\prime })U^{k}-\frac{(s-s^{\prime })^{2}}{2}
\frac{D}{Ds}U^{k}+\frac{(s-s^{\prime })^{3}}{6}\frac{D^{2}}{Ds^{2}}
U^{k}+O\left( \epsilon ^{4}\right) ,
\end{equation}
while for the time delay $s-s^{\prime }\equiv s_{ret}$ we get
\begin{equation}
s-s^{\prime }\cong \sigma +O\left( \epsilon ^{2}\right) . \label{alfa}
\end{equation}
By substituting these expansions in Eqs.(\ref{self_integral}) and
(\ref{fself}), after straightforward calculations the asymptotic solutions
(\ref{asin1}) and (\ref{gs}) follow identically.
\textbf{Q.E.D.}
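The present-time expansion of THM.7 can also be checked numerically on a
model worldline. The sketch below (illustrative only, with $q=1$, 1+1D
hyperbolic motion of unit proper acceleration and signature $(+,-)$; on this
worldline $\widetilde{R}^{\alpha }U_{\alpha }(s^{\prime })=\sinh
(s-s^{\prime })$ in closed form) compares the exact integrated
self-potential with its leading-order local approximation $q\left[ U_{\mu
}/\sigma -DU_{\mu }/Ds\right] $:

```python
import math

def selfpot_exact(s, sigma):
    """Exact retarded self-potential q U_mu(s') / |R~.U(s')| for the
    worldline r(s) = (sinh s, cosh s); here R~.U(s') = sinh(s - s')."""
    s_ret = math.acosh(1.0 + 0.5 * sigma**2)  # root of the delay-time equation
    sp = s - s_ret
    denom = abs(math.sinh(s_ret))
    return (math.cosh(sp) / denom, math.sinh(sp) / denom)

def selfpot_leading(s, sigma):
    """Present-time leading order q [U_mu/sigma - (D/Ds) U_mu]."""
    return (math.cosh(s) / sigma - math.sinh(s),
            math.sinh(s) / sigma - math.cosh(s))

sigma, s = 1e-3, 0.5
a_ex = selfpot_exact(s, sigma)
a_lo = selfpot_leading(s, sigma)
# small relative deviation, consistent with the 1 + O(epsilon) estimate
```

Both components agree to a relative accuracy far better than $\epsilon $ in
this example, since on this worldline the first neglected correction is of
higher order.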
\bigskip
We notice that the asymptotic expansion of the self-potential illustrated in
THM.7 is required to reduce the non-local dependencies which are implicit in
the definition of the fluid fields through the KDF. On the other hand,
within the approximation obtained in THM.7 for the 4-vector $K_{\mu }$, the
RR equation (\ref{euler-lag}) reduces to a local third-order ordinary
differential equation. In particular, Eq.(\ref{fself}) in THM.7 represents
the analogue of the LAD equation for the single-particle dynamics, which
contains the first derivative of the particle 4-acceleration (see also
related discussion in Paper I). In view of this similarity, the asymptotic
solution (\ref{gs}) can be further simplified by adopting a second
reduction-step of the same kind as that which leads to the LL form of the
self-force for single charged particles \cite{LL}. This is obtained by
assuming that the RR effect is only a small correction to the motion of the
fluid. As a consequence, an iterative approximation can be adopted which
permits one to represent the self-force in terms of the instantaneous fluid
forces. The latter include both the external EM field and the pressure
forces. In particular, according to this method, to leading-order for the
fluid 4-acceleration we have
\begin{equation}
\frac{D}{Ds}U^{\nu }=F_{\left( ext\right) }^{\nu \mu }U_{\mu }-\frac{1}{n}
\partial _{\mu }P^{\mu \nu },
\end{equation}
where, for brevity, we have introduced the notation
\begin{equation}
F_{\left( ext\right) }^{\nu \mu }\equiv \frac{q}{m_{o}c^{2}}\overline{F}
^{\left( ext\right) \nu \mu }.
\end{equation}
The iteration gives
\begin{eqnarray}
\frac{D^{2}}{Ds^{2}}U^{\nu } &=&\partial _{l}F_{\left( ext\right) }^{\nu \mu
}U_{\mu }U^{l}+F_{\left( ext\right) }^{\nu \mu }\left( F_{\left( ext\right)
\mu l}U^{l}-\frac{1}{n}\partial _{l}P_{\mu }^{l}\right) + \notag \\
&&+\frac{1}{n}\partial _{\mu }P^{\mu \nu }U^{l}\partial _{l}\ln n-\frac{1}{n}
U^{l}\partial _{l}\partial _{\mu }P^{\mu \nu }. \label{LLfluid}
\end{eqnarray}
Substituting this expansion in Eq.(\ref{gs}) and invoking the antisymmetry
of the Faraday tensor provides for the first-order term $\left. K_{\mu
}\right\vert _{s}$ the following approximation:
\begin{equation}
\left. K_{\mu }\right\vert _{s}\simeq \frac{q^{2}}{m_{o}c^{2}}\left\{ -\frac{
1}{\sigma }\left[ \frac{q}{m_{o}c^{2}}\overline{F}_{\mu \nu }^{\left(
ext\right) }U^{\nu }-\frac{1}{n}\partial _{\nu }P_{\mu }^{\nu }\right] +
\frac{2q}{3m_{o}c^{2}}h_{\mu }^{\left( 1\right) }+\frac{2}{3}h_{\mu
}^{\left( 2\right) }\right\} , \label{kll}
\end{equation}
where the first term on the rhs represents the mass-renormalization
contribution, and $h_{\mu }^{\left( 1\right) }$\ denotes the 4-vector
\begin{equation}
h_{\mu }^{\left( 1\right) }=\partial _{l}\overline{F}_{\mu \nu }^{\left(
ext\right) }U^{\nu }U^{l}-\frac{q}{m_{o}c^{2}}\overline{F}_{\mu \nu
}^{\left( ext\right) }\overline{F}^{\left( ext\right) \nu l}U_{l}+\frac{q}{
m_{o}c^{2}}\left( \overline{F}_{kl}^{\left( ext\right) }U^{l}\right) \left(
\overline{F}^{\left( ext\right) k\nu }U_{\nu }\right) U_{\mu }, \label{h11}
\end{equation}
while $h_{\mu }^{\left( 2\right) }$ is given by
\begin{eqnarray}
h_{\mu }^{\left( 2\right) } &=&-\frac{q}{m_{o}c^{2}}\frac{1}{n}\overline{F}
_{\mu \beta }^{\left( ext\right) }\partial _{l}P^{l\beta }+\frac{1}{n}
\partial _{\nu }P_{\mu }^{\nu }U^{l}\partial _{l}\ln n-\frac{1}{n}
U^{l}\partial _{l}\partial _{\nu }P_{\mu }^{\nu }+ \label{h22} \\
&&+\frac{q}{m_{o}c^{2}}\frac{1}{n}U_{\mu }U^{k}\overline{F}_{k\beta
}^{\left( ext\right) }\partial _{l}P^{l\beta }-\frac{1}{n}U_{\mu
}U^{k}\partial _{\nu }P_{k}^{\nu }U^{l}\partial _{l}\ln n+\frac{1}{n}U_{\mu
}U^{k}U^{l}\partial _{l}\partial _{\nu }P_{k}^{\nu }. \notag
\end{eqnarray}
Eq.(\ref{kll}) represents the fluid analogue of the LL approximation of the
self-force holding for single particle dynamics, with the
mass-renormalization term retained. In particular here we notice that:
1) Eq.(\ref{kll}) provides a local approximation of the fluid self-force
carrying the contribution of the RR effect. In contrast to Eq.(\ref{gs}),
thanks to the iterative reduction procedure only second-order derivatives of
the position vector appear in this approximation.
2) For consistency, Eq.(\ref{kll}) must be evaluated adopting the asymptotic
expansion (\ref{asin1}) also for the evaluation of the self-potential
entering the definition of the fluid fields through the canonical momenta
$P_{\mu }$ in the KDF.
3) Moreover, consistent with the approximation in which the RR
self-potential is small with respect to the external EM potential, also the
asymptotic approximation (\ref{series1}) can be adopted, which allows one to
treat explicitly in an asymptotic way all the implicit RR contributions.
4) Finally, collecting together the analytical approximations provided by
Eqs.(\ref{series1}), (\ref{asin1}) and (\ref{kll}), the fluid equations are
reduced to a set of asymptotic local second-order PDEs. This provides a
convenient representation also for Eulerian implementation schemes of the
same equations.
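As a direct consistency check, in the pressure-free limit $P^{\mu \nu }=0$
the 4-vector $h_{\mu }^{\left( 2\right) }$ vanishes identically, and
Eq.(\ref{kll}) reduces to
\begin{equation}
\left. K_{\mu }\right\vert _{s}\simeq \frac{q^{2}}{m_{o}c^{2}}\left\{ -\frac{
1}{\sigma }\frac{q}{m_{o}c^{2}}\overline{F}_{\mu \nu }^{\left( ext\right)
}U^{\nu }+\frac{2q}{3m_{o}c^{2}}h_{\mu }^{\left( 1\right) }\right\} ,
\end{equation}
namely the mass-renormalization contribution plus the self-force evaluated
with the external field alone, thus recovering the structure of the
single-particle LL approximation.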
The detailed comparison of Eqs.(\ref{LLfluid})-(\ref{h22}) with the literature
is discussed in the next section.
\bigskip
\subsection{Retarded-time asymptotic expansion}
Despite the previous considerations, it is worth pointing out that, formally,
an analogous result to THM.7 can be given also for the fluid equations.
This is obtained by performing a Taylor expansion of the fluid RR force in
the retarded-time approximation. In this case, it is found that
Eq.(\ref{self_integral}) is approximated as
\begin{equation}
\overline{A}_{\mu }^{(self)}=\left. \overline{A}_{\mu }^{(self)}\right\vert
_{s^{\prime }}\left[ 1+O(\epsilon )\right] , \label{ret1}
\end{equation}
where the retarded-time leading-order contribution $\left. \overline{A}_{\mu
}^{(self)}\right\vert _{s^{\prime }}$ is simply given by
\begin{equation}
\left. \overline{A}_{\mu }^{(self)}\right\vert _{s^{\prime }}=\frac{q}{
\sigma }U_{\mu }\left( r\left( s^{\prime }\right) \right) ,
\end{equation}
while Eq.(\ref{fself}) for the self-force, written in terms of $K_{\mu }$
defined in Eq.(\ref{gmu}), becomes
\begin{equation}
K_{\mu }=\left. K_{\mu }\right\vert _{s^{\prime }}\left[ 1+O(\epsilon )
\right] ,
\end{equation}
where the retarded-time leading-order contribution $\left. K_{\mu
}\right\vert _{s^{\prime }}$ is now given by
\begin{equation}
\left. K_{\mu }\right\vert _{s^{\prime }}=\left\{ -\frac{q^{2}}{\sigma
m_{o}c^{2}}\frac{D}{Ds^{\prime }}U_{\mu }\left( r\left( s^{\prime }\right)
\right) +g_{\mu }^{\prime }\left( r\left( s^{\prime }\right) \right)
\right\} ,
\end{equation}
with $g_{\mu }^{\prime }$\ denoting here the 4-vector
\begin{equation}
g_{\mu }^{\prime }=-\frac{1}{3}\frac{q^{2}}{m_{o}c^{2}}\left[ \frac{D^{2}}{
Ds^{\prime 2}}U_{\mu }\left( r\left( s^{\prime }\right) \right) -U_{\mu
}\left( r\left(
s^{\prime }\right) \right) U^{k}\left( r\left( s^{\prime }\right) \right)
\frac{D^{2}}{Ds^{\prime 2}}U_{k}\left( r\left( s^{\prime }\right) \right)
\right] . \label{ret2}
\end{equation}
This alternative expansion has the distinctive advantage (with respect to
the present-time expansion) of retaining all the physical properties of the
exact fluid equations for the treatment of RR delay-time effects. This
alternative formulation is relevant for comparisons with the point-particle
treatment.
\section{Discussion and comparisons with literature}
In this section we analyze in detail the physical properties of the kinetic
and fluid theory developed for the EM RR problem, providing also a
comparison with the literature. This concerns, in particular, the recent
paper by Berezhiani et al. \cite{Ma2004}, where an analogous research
program is presented for the relativistic hydrodynamics with RR based on the
LL solution of the self-force.
\subsection{Kinetic theory}
Let us start by considering the kinetic theory. The solution here obtained
has the following key features:
1) The kinetic theory adopts the Hamiltonian formulation of the RR problem
here developed. The result is based on the exact analytical solution for the
EM self-potential of finite-size charged particles, obtained in Paper I and
Appendix A.
2) The kinetic theory is here developed for systems of charged particles
subject to an external mean-field EM interaction and the RR self-interaction
produced by the same particles. Due to the non-local property of the RR
interaction, the formulation of kinetic theory is non-trivial. For this
purpose, in contrast to previous literature, an axiomatic formulation of CSM
is adopted. Its key element is the introduction of a suitable definition for
the Lorentz-invariant probability-measure in the particle extended
phase-space. As a consequence, the corresponding Liouville-Vlasov kinetic
equation with the inclusion of the exact RR effect is achieved in
Hamiltonian form, namely in such a way to preserve the phase-space canonical
measure. For comparison, instead, previous literature approaches dealt with
measure non-preserving phase-space dynamics.
3) In particular, the kinetic theory has been developed within the canonical
formalism representing the KDF in terms of the canonical state $\mathbf{y}
\equiv \left( r^{\mu },P_{\mu }\right) $. For reference, in Appendix B the
connection with the corresponding non-canonical treatment is provided. This
in turn implies that non-local contributions associated to the
self-potential (\ref{atot2}) enter implicitly in the definition of the
corresponding fluid moments (\ref{den1})-(\ref{den3}). This is made possible
only within the framework of the present exact formulation, in which the
analytical solution for the self-potential is by construction non-divergent.
This feature departs from recent approaches where instead non-Hamiltonian
formulations were adopted, based on the LL point-like approximation of the
RR self-force. In such a case in fact, the explicit dependence of the KDF in
terms of the EM self-potential cannot be retained.
4) Both the RR equation for single-particle dynamics and the kinetic
equation for the KDF are of delay-type, reflecting the characteristic nature
of the RR phenomenon. This property is completely missing from the previous
literature on the subject, exclusively based on the LL local asymptotic
approximation.
\bigskip
\subsection{Fluid theory}
For what concerns the fluid treatment, we notice that:
1) Both the fluid fields and the fluid moment equations retain the standard
form (available in the absence of RR effects) and can be equivalently
represented in Eulerian or Lagrangian form. This follows from the exact
representation here adopted both for the RR self-potential and the RR
self-force. In both cases the only non-local dependencies are those
associated to the position 4-vector.
2) The exact fluid equations with the inclusion of the RR effect are
delay-type PDEs. Because of this feature, their natural representation
appears to be the Lagrangian form. In fact, the integration along the LPs
must be in principle performed taking into account the retarded RR
interaction.
3) From the exact theory presented here it follows that each fluid equation
of a given order does not depend on fluid fields of higher orders. For
example, the momentum equation contains only second-order tensor fields,
identified respectively with the plasma stress-energy tensor and the EM
Faraday tensor. This result contrasts with the treatment given in
Ref.\cite{Ma2004} where instead the asymptotic formulation based on the LL equation
leads to moment equations involving higher-order tensor fields (for
comparison, see also the related discussion in Appendix B).
4) If a kinetic closure is chosen, then the fluid moments appearing in the
fluid equations are all uniquely determined. In particular, the
stress-energy tensor is prescribed in terms of the KDF. This implies that
both implicit and explicit contributions of the RR effect appear in the
resulting equations, carried respectively by the fluid fields and the EM
self-force in the momentum equation. Remarkably, kinetic closure is achieved
prescribing solely the pressure contribution carried by the stress-energy
tensor. Instead, in the approach of Ref.\cite{Ma2004} the closure conditions
involve generally also the specification of higher-order moments of the KDF.
5) An important feature of the exact fluid equations here obtained is that
they can in principle be exactly implemented numerically adopting a
Lagrangian scheme.
6) A remarkable aspect of the present theory is that the relevant asymptotic
expansions are performed only \textquotedblleft a
posteriori\textquotedblright\ after integration over the velocity space.
This means that the approximations involved are introduced only on the
configuration space-variables (i.e., the fluid fields) and not on the
phase-space KDF. In particular, a convenient approximation is the one
obtained in the short delay-time ordering, which reduces the non-local
dependencies to local terms. As a consequence, the introduction of
higher-order moments is ruled out by construction.
\bigskip
\subsection{Comparison with point-particle treatments}
The relevant comparison here is represented by Ref.\cite{Ma2004}. Such an
approach is based on the adoption of the LL equation for the single-particle
dynamics for the construction of the relativistic Vlasov-Maxwell
description. The corresponding moment equations can be in principle adopted
for the construction of a closed set of fluid equations. This requires
however the specification of suitable closure-conditions. Let us briefly
point out the novel features of the current treatment for what concerns the
adoption of the finite-size particle model in the construction of the
kinetic and fluid descriptions. In detail:
1) Both in the kinetic and fluid treatments the RR force is taken into
account by means of a non-local interaction. This is an intrinsic feature of
the assumed finite extension of the charged particle. In the fluid
treatment, in particular, as shown above, the RR force can be parametrized
in terms of the past Lagrangian fluid velocity and position. This permits one
to treat consistently the causal delay-time effects due to the finite size of
the particles.
2) Within the validity of the asymptotic ordering given by Eq.(\ref{ordering -1}),
an asymptotic retarded-time Hamiltonian approximation of the RR force based
on a retarded-time expansion has been given for the fluid equations. This
approximation preserves the basic physical features of the solution based on
the exact form of the RR self-force.
3) If the present-time asymptotic expansion is performed on the exact fluid
moment equations, the resulting expression of the fluid RR force obtained
adopting the finite-size charge model appears different from that given in
Ref.\cite{Ma2004}.
These conclusions enable us to carry out a detailed comparison with the
literature, emphasizing the basic differences between kinetic and fluid
treatments based on finite-size and point particles.
A) Kinetic theory.
The kinetic equation adopted in Ref.\cite{Ma2004} is based on the LL
equation (see therein Eqs.7 and 8). This means that the RR force in this
approximation is non-conservative, non-variational and therefore
non-Hamiltonian. In addition the LL equation: 1) does not retain finite
delay-time effects characteristic of the RR phenomenon; 2) is not valid in
the case of strong EM fields, where the iterative reduction scheme on which
it is based may fail; 3) ignores mass-renormalization effects (which are
incompatible with the point-particle model). In contrast, the treatment of
the relativistic Vlasov kinetic equation obtained here (see the Eulerian and
Lagrangian equations (\ref{euler-kinm}) and (\ref{lagr-kinm})) is
qualitatively different. In fact, even if the resulting RR equation remains
a second-order ODE, it is conservative, variational, Hamiltonian and applies
for arbitrary external EM fields. Further remarkable aspects are related to
the adoption of the finite-size charge model, in which the charge and mass
distributions have the same support. As a consequence, in this case the self
4-potential is everywhere well-defined, contrary to the point particle
model. In addition, this is prescribed analytically (see Appendix A), a
feature which allows one to treat consistently the RR delay-time effects.
B) Fluid theory.
The fluid treatment here obtained is provided by the Eulerian
Eqs.(\ref{continuity})-(\ref{momentum}) or the equivalent Lagrangian
equations (\ref{cont-lag})-(\ref{euler-lag}). The latter, considered as
fluid equations,
are manifestly not closed. However, the Hamiltonian formulation achieved
here and holding for finite-size particles allows one to achieve a
physically consistent kinetic closure condition, by prescribing uniquely the
pressure tensor $P_{\mu \nu }$ in Eq.(\ref{euler-lag}). We stress that in
our treatment no higher-order moments need to be specified. In contrast, the
corresponding Euler equation reported in Ref.\cite{Ma2004} (see Eqs.11 and
12 therein) actually depends also on a third-order tensor moment, which must
be prescribed (see comments in Sec.IIIA of Ref.\cite{Ma2004}). Let us now
consider the asymptotic fluid treatments based on the present theory. These
can be achieved invoking either the present-time or the retarded-time
asymptotic expansions (see Section X). The first expansion is mostly
relevant for comparisons with Ref.\cite{Ma2004} (given in THM.7) and enables
one to achieve a local approximation of the delay-time effects carried by
the RR force. However, remarkably, the resulting asymptotic fluid equations
(\ref{LLfluid})-(\ref{h22}) remain qualitatively different from the
corresponding ones given in Ref.\cite{Ma2004}. In particular: 1) no
higher-order moments appear after performing the Taylor expansion and the
iteration scheme discussed after THM.7; 2) a non-vanishing mass-correction
contribution is now included (see first term on the rhs of Eq.(\ref{kll})).
Finally, we mention that the retarded-time asymptotic expansion given by
Eqs.(\ref{ret1})-(\ref{ret2}) provides a novel approximation which retains
basic properties of the exact solution. In particular: 1) it only applies
for finite-size particles; 2) it relies on the Hamiltonian formulation of
the RR problem and of the Vlasov-Maxwell treatment; 3) it permits one to retain
transient-time and delay-time effects; 4) it takes into account retarded
mass-correction effects; 5) in this approximation the natural fluid
description is Lagrangian.
\bigskip
\bigskip
\section{Conclusions}
In this paper, novel results have been obtained concerning the kinetic and
fluid descriptions of relativistic collisionless plasmas with the inclusion
of EM RR effects.
Relevant consequences of the variational form of the EM RR equation
previously achieved for classical finite-size charged particles have been
investigated. It has been shown that the non-local RR problem admits both
Lagrangian and Hamiltonian representations in standard form, defined
respectively in terms of effective Lagrangian and Hamiltonian functions. A
remarkable novel feature of the theory concerns the development of a
Hamiltonian retarded-time expansion of the RR force, which applies within the
validity of the short delay-time asymptotic ordering. On such a basis, the
axiomatic formulation of classical statistical mechanics for relativistic
collisionless plasmas with the inclusion of non-local RR effects has been
presented. As a major result, the kinetic theory for such a system has been
formulated in standard Hamiltonian form. The Liouville-Vlasov equation has
been proved to hold in the extended phase-space, subject to non-local RR
self-interactions. Remarkably, the non-local effects have been proved to
enter the relativistic kinetic equation only through the retarded particle
4-position. As a consequence, the corresponding fluid moment equations can
be determined in the standard way by integration over the space of canonical
momenta and cast both in Eulerian and Lagrangian forms. It has been pointed
out that the exact relativistic fluid equations are intrinsically of
delay-type and contain both implicit and explicit non-local contributions
associated to the RR effect. The issue concerning the problem of fluid
closure conditions has been discussed. In contrast with previous literature,
it is found that in the present approach the closure conditions remain the
standard ones, i.e., as in the absence of RR effects. Hence, the
specification of higher-order moments of the KDF, for a given moment
equation, is not required. Finally, appropriate approximations have been
obtained for the fluid equations by employing \textquotedblleft a
posteriori\textquotedblright\ the relevant asymptotic expansions applicable
in the short delay-time ordering. This allows one to reduce the exact
non-local equations either to a set of local PDEs or to retarded PDEs still
retaining finite delay-time effects.
The theory here developed has potential wide-ranging applications which
concern the study of relativistic astrophysical plasmas for which RR
emission processes are important. This involves, for example, plasmas in
accretion disks, relativistic jets and active galactic nuclei. Other
possible applications are also suggested for the case of laboratory plasmas
generated in the presence of pulsed-laser sources.
\bigskip
\section{Appendix A: Integral representation for $A_{\protect\mu }^{(self)}$
- Case of a non-rotating spherical-shell charged particle}
In this Appendix we determine explicitly the integral representation of
$A_{\mu }^{(self)}$ for a non-rotating finite-size charged particle described
by the model introduced in Paper I. We first remark that Eqs.(\ref{eee1a})
and (\ref{eee2a}) can be written for a spherically-symmetric charged
particle of radius $\sigma >0$ as
\begin{eqnarray}
\xi ^{\mu }\xi _{\mu } &=&-\sigma ^{2}, \label{eee1} \\
\xi _{\mu }u^{\mu }(s) &=&0. \label{eee2}
\end{eqnarray}
Eq.(\ref{eee1}) defines the boundary $\partial \Omega $ on which the charge
and mass are uniformly distributed, while Eq.(\ref{eee2}) represents the
constraint of rigidity for the finite-size particle. We can use the
information from Eq.(\ref{eee1}) to define the \emph{internal} and the
\emph{external} domains with respect to the mass and charge distributions. In
particular, in terms of the generic displacement 4-vector $X^{\mu }\in M^{4}$
defined as
\begin{equation}
X^{\mu }=r^{\mu }-r^{\mu }\left( s\right)
\end{equation}
and subject to the constraint
\begin{equation}
X^{\mu }u_{\mu }(s)=0,
\end{equation}
the following relations hold:
\begin{eqnarray}
X^{\mu }X_{\mu } &\leq &-\sigma ^{2}\emph{\ :external}\text{ }\emph{domain,}
\label{ext} \\
X^{\mu }X_{\mu } &>&-\sigma ^{2}\emph{\ :internal}\text{ }\emph{domain,}
\notag \\
X^{\mu }X_{\mu } &=&\xi ^{\mu }\xi _{\mu }=-\sigma ^{2}\emph{\ :boundary}
\text{ }\emph{location.} \notag
\end{eqnarray}
As proved in Ref.\cite{Cremaschini2011}, for the evaluation of the action
integral $S_{C}^{(self)}$ it is sufficient to know the solution of $A_{\mu
}^{(self)}$ in the external domain. In this domain the EM self 4-potential
generated by the non-rotating finite-size particle must necessarily coincide
with that of a point particle carrying the same total mass and charge. The
retarded 4-potential of a point charge represents a well-known result in the
literature \cite{Jak}. This can be easily obtained by means of the Green
function approach. In particular, introducing the retarded Green function of
a point particle $G(r-r^{\prime })$, the self-potential $A_{\mu }^{(self)}$
takes the form
\begin{equation}
A^{(self)\mu }(r)=\frac{4\pi }{c}\int d^{4}r^{\prime }G(r-r^{\prime })j^{\mu
}(r^{\prime }), \label{D-1bis}
\end{equation}
where $G(r-r^{\prime })$ is symmetric in both $r$ and $r^{\prime },$ is
non-vanishing only for $r^{0}>r^{\prime 0}$ and satisfies in this domain the
boundary-value problem
\begin{equation}
\left\{
\begin{array}{c}
\square G(r-r^{\prime })=\delta ^{(4)}\left( r-r^{\prime }\right) , \\
G(r-r^{\prime })=0
\end{array}
\right.
\end{equation}
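For completeness, we recall the explicit form of the retarded Green function solving this problem, with the convention that the source time $r^{\prime 0}$ precedes the observation time $r^{0}$ (see, e.g., Ref.\cite{Jak}):
\begin{equation}
G(r-r^{\prime })=\frac{1}{2\pi }\Theta \left( r^{0}-r^{\prime 0}\right)
\delta \left( \left( r-r^{\prime }\right) ^{\alpha }\left( r-r^{\prime
}\right) _{\alpha }\right) ,
\end{equation}
where $\Theta $ denotes the Heaviside step function. Inserting this expression in Eq.(\ref{D-1bis}) together with the point-particle 4-current reproduces the overall factor $2q$ appearing in the integral form of the self-potential given below.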
As a consequence, written in integral form, the self-potential becomes
\begin{equation}
A^{(self)\mu }(r,q)=2q\int_{1}^{2}dr^{\mu }\delta (\widehat{R}^{\alpha }
\widehat{R}_{\alpha }), \label{d2}
\end{equation}
with
\begin{equation}
\widehat{R}^{\alpha }\equiv r^{\alpha }-r^{\alpha }(s).
\end{equation}
This solution, derived for a point charged particle, also holds for the
rigid finite-size non-rotating spherical shell in the external domain
defined in Eq.(\ref{ext}). As can be seen, this coincides with the result in
Ref.\cite{Cremaschini2011}, where a complete covariant solution for the EM
self 4-potential $A_{\mu }^{(self)}$ holding in both internal and external
domains has been obtained by adopting a derivation based on the principle of
relativity and analogous to that outlined in Ref.\cite{LL} for the point
charge case. Notice that, contrary to the case of a point particle, the
self-potential (\ref{d2}) is \textit{well-defined} also on the support of
the charge, namely the ensemble on which the charge is distributed.
\section{Appendix B:\ Non-canonical representation}
In this appendix we present the equivalent representation of the kinetic
theory developed in the previous sections, adopting non-canonical variables.
For definiteness, let us introduce an arbitrary non-canonical phase-space
diffeomorphism from $\Gamma $ to $\Gamma _{\mathbf{w}}$, with $\Gamma _{
\mathbf{w}}$ denoting a transformed phase-space having the same dimension as
$\Gamma $:
\begin{equation}
\mathbf{y}\equiv \left( r^{\mu },P_{\mu }\right) \rightarrow \mathbf{w}\equiv
\mathbf{w}\left( \mathbf{y}\right) , \label{nctra}
\end{equation}
where, for example, $\mathbf{w}$ can be identified with the non-canonical
state $\mathbf{y}_{nc}\equiv \left( r^{\mu },p_{\mu }\right) $ defined in
Eq.(\ref{series1}) or with $\mathbf{y}_{u}\equiv \left( r^{\mu },u_{\mu
}\right) $. In the second case the transformation, following from Eq.(\ref
{pp}), is realized by
\begin{eqnarray}
r^{\mu } &=&r^{\mu }, \label{a1} \\
u_{\mu } &=&P_{\mu }-\frac{q}{c}\left[ \overline{A}_{\mu }^{(ext)}+
\overline{A}_{\mu }^{(self)}\right] . \label{a2}
\end{eqnarray}
The transformed RR equation in the variables $\mathbf{y}_{u}$ therefore
becomes
\begin{eqnarray}
\frac{dr^{\mu }}{ds} &=&u^{\mu }, \label{aa1} \\
\frac{du_{\mu }}{ds} &=&F_{\mu }, \label{aa2}
\end{eqnarray}
where $F_{\mu }=\frac{\partial p_{\mu }}{\partial r^{\nu }}u^{\nu }-\frac{
\partial u_{\mu }}{\partial P_{\nu }}\frac{\partial H_{eff}}{\partial r^{\nu
}}$. Denoting now by
\begin{equation}
f_{1}\left( \mathbf{w}\left( s\right) \right) =\left\vert \frac{\partial
\mathbf{y}\left( s\right) }{\partial \mathbf{w}\left( s\right) }\right\vert
f\left( \mathbf{y}\left( \mathbf{w}\left( s\right) \right) \right)
\label{app0}
\end{equation}
the KDF mapped onto the transformed phase-space $\Gamma _{\mathbf{w}}$ by
the KDF $f\left( \mathbf{y}\left( s\right) \right) $, the differential
Liouville-Vlasov equation (\ref{liouv}) requires
\begin{equation}
\frac{d}{ds}\left[ \left\vert \frac{\partial \mathbf{w}\left( s\right) }{
\partial \mathbf{w}_{0}}\right\vert f_{1}\left( \mathbf{w}\left( s\right)
\right) \right] =0, \label{app3}
\end{equation}
where $\mathbf{w}_{0}\equiv \mathbf{w}\left( s_{0}\right) $. At the same
time, Eq.(\ref{liouv}) also implies, thanks to the chain rule,
\begin{equation}
\frac{d}{ds}f\left( \mathbf{y}\left( \mathbf{w}\left( s\right) \right)
\right) =0,
\end{equation}
which for consistency delivers the well-known differential identity
\begin{equation}
\frac{d}{ds}\left[ \left\vert \frac{\partial \mathbf{y}\left( s\right) }{
\partial \mathbf{w}\left( s\right) }\right\vert \left\vert \frac{\partial
\mathbf{w}\left( s\right) }{\partial \mathbf{w}_{0}}\right\vert \right] =0.
\label{app3bis}
\end{equation}
From Eq.(\ref{app3}) it follows
\begin{equation}
\frac{d}{ds}f_{1}\left( \mathbf{w}\left( s\right) \right) +f_{1}\left(
\mathbf{w}\left( s\right) \right) \frac{d}{ds}\ln \left( \left\vert \frac{
\partial \mathbf{w}\left( s\right) }{\partial \mathbf{w}_{0}}\right\vert
\right) =0. \label{app4}
\end{equation}
This equation can be represented, for example, in terms of $\mathbf{w}\equiv
\mathbf{y}_{u}$. In this case, due to the chain rule,
\begin{equation}
\frac{d}{ds}f_{1}\left( \mathbf{w}\left( s\right) \right) =u^{\mu }\frac{
\partial f_{1}\left( \mathbf{y}_{u}\right) }{\partial r^{\mu }}+F_{\mu }
\frac{\partial f_{1}\left( \mathbf{y}_{u}\right) }{\partial u_{\mu }},
\end{equation}
while, thanks to the Liouville theorem,
\begin{equation}
\frac{d}{ds}\ln \left( \left\vert \frac{\partial \mathbf{w}\left( s\right) }{
\partial \mathbf{w}_{0}}\right\vert \right) =\frac{\partial F_{\mu }}{
\partial u_{\mu }}.
\end{equation}
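Combining the last two identities with Eq.(\ref{app4}) yields the explicit non-canonical form of the statistical equation:
\begin{equation}
u^{\mu }\frac{\partial f_{1}\left( \mathbf{y}_{u}\right) }{\partial r^{\mu }}
+F_{\mu }\frac{\partial f_{1}\left( \mathbf{y}_{u}\right) }{\partial u_{\mu }}
+f_{1}\left( \mathbf{y}_{u}\right) \frac{\partial F_{\mu }}{\partial u_{\mu }}=0,
\end{equation}
which shows that $f_{1}$ is not conserved along the phase-space trajectories unless the flow is incompressible, i.e., unless $\partial F_{\mu }/\partial u_{\mu }=0$.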
As an application of the result, it follows that, if the LL approximation is
introduced for the 4-vector $F_{\mu }$, namely Eqs.(\ref{aa1}) and (\ref{aa2})
are replaced with asymptotic equations of the form
\begin{eqnarray}
\frac{dr_{LL}^{\mu }}{ds} &=&u_{LL}^{\mu }, \label{ab1} \\
\frac{du_{LL}^{\mu }}{ds} &=&F_{LL}^{\mu }, \label{ab2}
\end{eqnarray}
where $F_{LL}^{\mu }$ is the total EM force in this approximation, then Eq.(
\ref{app4}) recovers the expression reported in Ref.\cite{Ma2004}. This
provides the connection with the exact canonical theory here developed. We
remark, however, that since the LL equation is only asymptotic, the mapping
between the canonical state $\mathbf{y}\equiv \left( r^{\mu },P_{\mu
}\right) $ and $\mathbf{y}_{LL}\equiv \left( r_{LL}^{\mu },u_{LL\mu }\right)
$ is also intrinsically asymptotic. Therefore, Eqs.(\ref{ab1}) and (\ref{ab2})
remain necessarily non-variational and non-canonical.
\bigskip
\bigskip
\section{Approach}
\label{sec:approach}
In this section, we describe our approach for discovering executable routine specifications from User Interaction (UI) logs.
We adhere to the RPM pipeline proposed by Leno et al.~\cite{lenobise20}, which we implemented in five macro steps (see Figure~\ref{fig:approach}):
i) \emph{preprocessing and normalization}; ii) \emph{segmentation}; iii) \emph{candidate routine identification}; iv) \emph{automatability assessment}; v) \emph{routines aggregation}.
\begin{figure*}[htb]
\centering
\includegraphics[scale=0.6]{Approach.pdf}
\caption{Outline of the proposed approach}
\label{fig:approach}
\end{figure*}
Our approach takes as input a UI log, which is a chronologically ordered sequence of UIs between a worker and computer-based applications.
In this paper, we assume that the applications used by the worker are either spreadsheet management applications or web browser applications.
A UI log is usually recorded during the execution of the worker's daily tasks using specialized logging tools,
for example, the \emph{Action Logger} tool~\cite{DBLP:conf/bpm/LenoPRDM19}.
An example of a UI log is provided in Table~\ref{tab:uiLog}.
Each row of Table~\ref{tab:uiLog} captures one UI (e.g., clicking a button or copying the content of a cell).
Each UI is characterized by a \emph{timestamp}, a \emph{type}, and a set of \emph{parameters}, or \emph{payload} (e.g., application, button's label and value of a field).
The payload of a UI is not standardized, and depends on the UI type and application.
Consequently, the UIs recorded in the same log may have different payloads.
For example, the payload of UIs performed within a spreadsheet contains information regarding the spreadsheet name and the location of the target cell (e.g., cell row and column).
In contrast, the payload of the UIs performed in a web browser contains information regarding the webpage URL, the name and identifier of the UI's target HTML element, and its value (if any); see Table~\ref{tab:uiLog}, rows 1 and 2.
\input{tables-complete-uiLog-compact}
\newpage
Our approach analyzes the log to identify and output a collection of \emph{executable routine specifications}.
Each routine specification is a pair ($c$, $\Lambda$),
where $c$ is a sequence of UIs, or a \emph{candidate routine}, and $\Lambda$ is a set of \emph{data transformation steps}.
Each \emph{data transformation step} is a triplet that specifies: i) variables from which the data was read, ii) variables to which the data was written,
and iii) a function capturing the data transformation (if any occurs).
Such routine specifications can be compiled into software bots that can be deployed on a tool like UiPath,\footnote{A commercial tool available at www.uipath.com} which would be able to automatically replicate the routine.
In the following, we describe step-by-step how we generate a collection of executable routine specifications from an input UI log.
\subsection{Preprocessing and Normalization}
\label{sec:preprocessing}
Before diving into the details of this step, we formally define the concepts of a \emph{user interaction} and \emph{user interaction log},
which we will refer to throughout this and the following sections.
\begin{definition}[\textbf{User interaction (UI)}]
A \emph{user interaction (UI)} is a tuple $u = (t, \tau, P_{\tau}, Z, \phi)$, where:
$t$ is a timestamp;
$\tau$ is a UI type;
$P_{\tau}$ is a set of parameters, or \emph{payload};
$Z$ is a set of parameter values; and
$\phi : P_{\tau} \rightarrow Z$ is a value assignment function.
\end{definition}
Table~\ref{tab:uiParam} shows UIs and their associated payloads recorded by the Action Logger tool~\cite{DBLP:conf/bpm/LenoPRDM19}.
The UIs are logically grouped, based on their type, into three groups:
\emph{navigation}; \emph{read}; and \emph{write} UIs.
We assume that every UI is an \emph{instantiation} of one of the UI types from Table~\ref{tab:uiParam},
with every parameter assigned with a specific value.
\begin{definition}[\textbf{User interaction log}]
A user interaction log $\Sigma$ is a sequence of UIs $\Sigma = \langle u_1, u_2, \dots, u_n \rangle$, ordered by their timestamps, i.e., $u_{i\mid t} < u_{j\mid t}$ for any $i,j$ such that $1 \leq i < j \leq n$.
\end{definition}
\input{tables-uiParameters}
Ideally, UIs recorded in a log should only relate to the execution of the task(s) of interest.
However, in practice, a log often also contains UIs that do not contribute to completing the recorded task(s).
We can consider such UIs to be \emph{noise}.
Examples of noise UIs include a worker browsing the web (e.g., social networking) while executing a task that does not require to do that, or a worker committing mistakes (e.g., filling a text field with an incorrect value or copying a wrong cell of a spreadsheet).
While we cannot detect the former kind of noise without a context-aware noise filter, we can identify the latter type of noise.
Given that noise in a log may negatively affect the segmentation step, we attempt to remove it.
Specifically, the filter we implemented removes UIs whose effects are overwritten by subsequent UIs, and certain navigation UIs that a software robot would not need to replicate.
To identify and remove such UIs, we rely on three search-and-replace rules defined as regular expressions that operate as follows.
\begin{itemize}
\item[1.] Remove UIs of type \emph{select cell}, \emph{select range}, \emph{select field} (e.g., Table~\ref{tab:uiLog}, rows 2, 4, 7);
\item[2.] Remove UIs of type \emph{copy} that are not eventually followed by UI of type \emph{paste} before another UI of type \emph{copy} occurs (e.g., Table~\ref{tab:uiLog}, row 42);
\item[3.] Remove UIs of type \emph{edit cell}, \emph{edit range}, and \emph{edit field} that are followed by another UI of the same type that targets the same cell or field and overwrites its content before a UI of type \emph{copy} occurs (e.g., Table~\ref{tab:uiLog}, row 22).
\end{itemize}
We note that, given an unsegmented log, the third rule cannot be applied straightforwardly, as removing the first UI of type \emph{edit} (considered redundant) may be an error if the second UI of type \emph{edit} belongs to a successive task execution.
Therefore, we postpone the application of the third rule after the segmentation step.
The filtering rules are applied recursively on the log until no more UIs are removed and the log is assumed to be free of \emph{detectable} noise.
Devising and applying more sophisticated noise filtering algorithms would probably benefit the approach presented in this study.
However, the design of such algorithms is outside the scope of this paper, and we leave it as possible future work.
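As an illustration, the first two filtering rules can be sketched as follows; the UI encoding and the type names used here are simplified placeholders, not the actual schema of the Action Logger tool:

```python
def filter_noise(log):
    """Apply filtering rules 1 and 2 to a list of (ui_type, payload)
    pairs. Rule 3 requires segment boundaries and is therefore
    deferred until after the segmentation step."""
    # Rule 1: drop selection UIs, which a software bot need not replicate.
    selects = {"select cell", "select range", "select field"}
    log = [ui for ui in log if ui[0] not in selects]
    kept = []
    for i, (ui_type, payload) in enumerate(log):
        if ui_type == "copy":
            # Rule 2: keep a copy only if a paste occurs before the next copy.
            pasted = False
            for later_type, _ in log[i + 1:]:
                if later_type == "copy":
                    break
                if later_type == "paste":
                    pasted = True
                    break
            if not pasted:
                continue  # copy whose content is never pasted: noise
        kept.append((ui_type, payload))
    return kept
```

In line with the approach, such rules would be re-applied until the log stops shrinking.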
After filtering the log, the vast majority of UIs are unique, as they differ in their payloads.
Note that even the UIs capturing the same action within the same task execution (or across different task executions) would appear different.
To discover each task execution recorded in the log, we need to detect all the UIs that, despite having different payloads, correspond to the same action within the same or different task execution(s).
Given a UI, its payload can be divided into \emph{data parameters} and \emph{context parameters}.
The former store the data values used during the execution of tasks, e.g., the value of text fields or copied content.
Consequently, \emph{data parameters} usually have different values in different task executions.
In contrast, the latter capture the context in which UIs were performed, e.g., the application and the location within the application.
Therefore, \emph{context parameters} of the same UI within a task are likely to have the same values across different task executions.
For example, the payload of a UI of type \emph{copy cell} has the following parameters (see also Table~\ref{tab:uiParam}):
\emph{workbook name} (the Excel file name);
\emph{worksheet name} (within the Excel file);
\emph{cell column} (i.e., the column of the cell in the worksheet that was selected for the UI);
\emph{cell row} (i.e., the row of the cell in the worksheet that was selected for the UI);
\emph{value} (i.e., current value of the cell selected for the UI);
\emph{copied content} (the content copied as the result of the UI).
Here, \emph{workbook name}, \emph{worksheet name}, \emph{cell column/row} are \emph{context parameters},
while \emph{copied content} and \emph{value} are \emph{data parameters}.
Different context parameters characterize different UI types.
For example, a UI of type \emph{click button} performed in a web browser has only these context parameters: \emph{URL}; \emph{name} (i.e., the label of the button); \emph{ID} (of the button, as an element in the HTML page); and \emph{type}.
Often, context parameters are determined by the type of UI.
To reduce the chance of possible automated misinterpretations, we allow the user to configure the context parameters of various UI types manually.
To segment an input UI log, we rely on the context parameters of the UIs.
We call a UI whose payload has been reduced to its context parameters a \emph{normalized UI}.
\begin{definition}[\textbf{Normalized UI}]\label{def:nui}
Given a UI $u = (t, \tau, P_{\tau}, Z, \phi)$, the UI $\bar{u} = (t, \tau, \bar{P_{\tau}}, \bar{Z}, \phi)$ is its normalized version, where $\bar{P_{\tau}} \subseteq P_{\tau}$ is the set of context parameters and $\bar{Z} \subseteq Z$ contains only the values of the parameters in $\bar{P_{\tau}}$.
\end{definition}
Two normalized UIs $u_1 = (t_1, \tau, \bar{P_{\tau}}, \bar{Z_1}, \phi_1)$ and $u_2 = (t_2, \tau, \bar{P_{\tau}}, \bar{Z_2}, \phi_2)$ are \emph{equivalent}, denoted by $u_1 = u_2$ iff $\forall p \in \bar{P_{\tau}} \Rightarrow \phi_1(p) = \phi_2(p)$.
A log in which all the UIs have been normalized is a \emph{normalized log}, and we refer to it with the notation $\bar{\Sigma} = \langle \bar{u_1}, \bar{u_2}, \dots, \bar{u_n} \rangle$.
Table~\ref{tab:uiLog} and Table~\ref{tab:norm-uilog} show, respectively, a fragment of a log and its normalized version.
Intuitively, in a normalized log, the chances that two executions of the same task yield the same sequence (or set) of normalized UIs are high, because normalized UIs retain only context parameters.
We leverage such a characteristic of the normalized log to identify its segments (i.e., start and end of each executed task), and then the routine(s) within the segments.
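For illustration, normalization can be sketched as below; the context-parameter table is a hypothetical (user-configurable) excerpt in the spirit of Table~\ref{tab:uiParam}:

```python
# Hypothetical per-type context parameters (user-configurable).
CONTEXT_PARAMS = {
    "copy cell": {"workbook name", "worksheet name", "cell column", "cell row"},
    "click button": {"URL", "name", "ID", "type"},
}

def normalize(ui_type, payload):
    """Reduce a UI payload to its context parameters (normalized UI);
    the timestamp is left out since equivalence ignores it."""
    keep = CONTEXT_PARAMS.get(ui_type, set(payload))
    return (ui_type, frozenset((p, v) for p, v in payload.items() if p in keep))
```

Two normalized UIs compare equal exactly when they have the same type and the same context-parameter values, which is the equivalence used to build the normalized log.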
\input{tables-complete-uiLogNorm-compact}
\subsection{Segmentation}
\label{sec:segmentation}
A log may capture long working sessions, where a worker performs multiple instances of one or more tasks.
The next step of our approach decomposes the log into \emph{segments} that identify the start and the end of each recorded task in the log.
Given a normalized log, we generate its control-flow graph (CFG).
A CFG is a graph where each vertex represents a different normalized UI, and each edge captures a directly-follows relation between the two normalized UIs represented by the source and the target vertices of the edge.
A CFG has an explicit source vertex representing the first normalized UI recorded in the log.
Given a log, the directly-follows relation on UIs is defined as follows.
\begin{definition}[\textbf{Directly-follows relation}]
Let $\bar{\Sigma} = \langle \bar{u}_1, \bar{u}_2, \dots, \bar{u}_n \rangle$ be a normalized log. Given two UIs, $\bar{u}_x, \bar{u}_y \in \bar{\Sigma}$, we say that $\bar{u}_y$ directly-follows $\bar{u}_x$, i.e., $\bar{u}_x \leadsto \bar{u}_y$, iff $\bar{u}_{x\mid t} < \bar{u}_{y\mid t} \wedge \nexists \bar{u}_z \in \bar{\Sigma} \mid \bar{u}_{x\mid t} < \bar{u}_{z\mid t} < \bar{u}_{y\mid t}$.
\end{definition}
\begin{definition}[\textbf{Control-Flow Graph (CFG)}]
Given a normalized log, $\bar{\Sigma} = \langle \bar{u_1}, \bar{u_2}, \dots, \bar{u_n} \rangle$, let $\bar{A}$ be the set of all the normalized UIs in $\bar{\Sigma}$. A Control-Flow Graph (CFG) is a tuple $G = (V, E, \hat{v}, \hat{e})$, where:
$V$ is the set of vertices of the graph, each vertex maps one UI in $\bar{A}$;
$E \subseteq V \times V$ is the set of edges of the graph, and each $(v_i, v_j) \in E$ represents a directly-follows relation between the UIs mapped by $v_i$ and $v_j$;
$\hat{v}$ is the graph \emph{entry vertex}, such that $\forall v \in V \nexists (v, \hat{v}) \in E \wedge \nexists (\hat{v}, v) \in E$;
while $\hat{e} = (\hat{v}, v_0)$ is the graph \emph{entry edge}, such that $v_0$ maps $\bar{u_1}$.
We note that $\hat{v} \notin V$, and $\hat{e} \notin E$, since they are artificial elements of the graph.
\end{definition}
It is likely that a CFG is cyclic, since a loop represents the start of a new execution of the task recorded in the log. Indeed, in an ideal scenario, once a task execution ends with a certain UI (a vertex in the CFG), the next UI (i.e., the first UI of the next task execution) should have already been mapped to a vertex of the CFG, and a loop will be generated.
In such a case, all the vertices in the loop represent the UIs performed during the execution of the task.
If several different tasks are recorded in sequence in the same log, we would observe several disjoint loops in the CFG, while if a task has repetitive subtasks, we would observe nested loops in the CFG.
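A minimal sketch of the CFG construction from a normalized log, with UIs represented by hashable identifiers and the entry vertex as an artificial element, follows:

```python
from collections import defaultdict

ENTRY = "<entry>"  # artificial entry vertex, not part of V

def build_cfg(normalized_log):
    """One vertex per distinct normalized UI, one edge per observed
    directly-follows pair, plus the entry edge to the first UI."""
    edges = defaultdict(set)
    edges[ENTRY].add(normalized_log[0])
    for u, v in zip(normalized_log, normalized_log[1:]):
        edges[u].add(v)  # repeated UIs close loops in the graph
    return dict(edges)
```

On a log where a task repeats, the edge from the last UI of one execution to the first UI of the next closes a loop, which is what the back-edge analysis exploits.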
\figurename~\ref{fig:cfg} shows the CFG generated from the log captured in Table~\ref{tab:norm-uilog}; note that, for simplicity, we collapsed some vertices as shown in Figure~\ref{fig:collapsing}.
\begin{figure}[htb]
\centering
\subfloat[Before\label{fig:original}]{
\includegraphics[scale = 0.95]{Subprocess.pdf}
}
\hspace{1cm}
\subfloat[After\label{fig:collapsed}]{
\includegraphics[scale = 0.85]{SubprocessCollapsed.pdf}
}
\caption{Collapsed vertices in Figure~\ref{fig:cfg}}
\label{fig:collapsing}
\end{figure}
\begin{figure}[htb]
\centering
\hspace*{-2cm}\includegraphics[scale = 0.8]{CFG.pdf}
\caption{Example of a Control-Flow Graph}
\label{fig:cfg}
\end{figure}
Once the CFG is generated, we turn our attention to identifying its back-edges (i.e., its loops). By identifying the CFG back-edges and their UIs, we extract the start and end UIs of the repeated task. These UIs are used to mark the boundaries between task executions. The back-edges of a CFG can be identified by analyzing the CFG Strongly Connected Components (SCCs). Given a graph, an SCC is a subgraph in which every pair of vertices is connected by a path whose vertices all belong to the subgraph.
\begin{definition}[\textbf{CFG Path}]
Given a CFG $G = (V, E, \hat{v}, \hat{e})$, a CFG path is a sequence of vertices $p_{v_1,v_k} = \langle v_1, \dots, v_k \rangle$ such that for each $i \in [1,k-1] \Rightarrow v_i \in V \cup \{ \hat{v} \} \wedge \exists (v_i, v_{i+1}) \in E \cup \{ \hat{e}\}$.
\end{definition}
\begin{definition}[\textbf{Strongly Connected Component (SCC)}]
Given a graph $G = (V, E, \hat{v}, \hat{e})$, a strongly connected component (SCC) of G is a pair $\delta = (\bar{V}, \bar{E})$, where $\bar{V} = \{ v_1, v_2, \dots, v_m \} \subseteq V$ and $\bar{E} = \{ e_1, e_2, \dots, e_k \} \subseteq E$ such that $\forall v_i, v_j \in \bar{V} \exists p_{v_i,v_j} \mid \forall v \in p \Rightarrow v \in \bar{V}$. Given an SCC $\delta = (\bar{V}, \bar{E})$, we say that $\delta$ is \emph{non-trivial} iff $\left| \bar{V} \right| > 1$. Given a graph $G$, $\Delta_G$ denotes the set of all the non-trivial SCCs in G.
\end{definition}
Algorithm~\ref{alg:beDetection} and Algorithm~\ref{alg:analyseSCC} describe how we identify the SCCs of the CFG. Given a CFG $G = (V,E,\hat{v},\hat{e})$, we first build its dominator tree $\Theta$ (Algorithm~\ref{alg:beDetection}, line~\ref{alg:domTree}), which captures domination relations between the vertices of the CFG. \figurename~\ref{fig:domTree} shows the dominator tree of the CFG in \figurename~\ref{fig:cfg}.
\begin{figure}[htb]
\centering
\includegraphics[scale = 0.9]{DominatorTreeNew.pdf}
\caption{Dominator tree}
\label{fig:domTree}
\end{figure}
Then, we discover the set of all non-trivial SCCs ($\Delta_G$) by applying Kosaraju's algorithm \cite{sharir1981strong} and removing the trivial SCCs (Algorithm~\ref{alg:beDetection}, line~\ref{alg:scc}). For each $\delta = (\bar{V}, \bar{E}) \in \Delta_G$, we discover its \emph{header} using the dominator tree (Algorithm~\ref{alg:analyseSCC}, line~\ref{alg:header}). The header of an SCC $\delta$ is a special vertex $\hat{h} \in \bar{V}$, such that $\forall p_{\hat{v},v} \mid v \in \bar{V} \Rightarrow \hat{h} \in p_{\hat{v},v}$, i.e., the \emph{header} $\hat{h}$ (a.k.a. the SCC entry) is the SCC vertex that dominates all the other SCC vertices. Once we have $\hat{h}$, we can identify the back-edges as $(v,\hat{h})$ with $v \in \bar{V}$ (line~\ref{alg:incoming}). Finally, the identified back-edges are stored and removed (lines~\ref{alg:backEdges} and~\ref{alg:edgesSub}) in order to look for nested SCCs and their back-edges by recursively executing Algorithm~\ref{alg:analyseSCC} (line~\ref{alg:recursion}), until no more SCCs and back-edges are found. However, if we detect an SCC that does not have a header vertex (formally, the SCC is irreducible), we cannot identify the SCC back-edges. In such a case, we collect via a depth-first search of the CFG the edges $(v_x, v_y) \in \bar{E}$ such that $v_y$ is topologically deeper than $v_x$; we call these edges \emph{loop-edges} of the SCC (line~\ref{alg:loops}). Then, out of all the loop-edges, we store (and remove from the SCC) the one having target and source connected by the longest \emph{simple path} entirely contained within the SCC (lines~\ref{alg:deepestEdge} to~\ref{alg:removeEdge}).
Given the CFG presented in \figurename~\ref{fig:cfg} and its corresponding dominator tree (see \figurename~\ref{fig:domTree}), we identify the SCC that consists of all the vertices except the \emph{entry vertex}. Then, by applying Algorithm~\ref{alg:analyseSCC}, we identify: the SCC header -- \emph{Click Button [New Record]}; and the only back-edge -- (\emph{Click Button [Submit]}, \emph{Click Button [New Record]}), which we save and remove from the SCC. After the removal of this back-edge, we identify the nested SCC that contains edits of \emph{Full Name}, \emph{Date}, and \emph{Phone} fields. Note that this second SCC does not have a header because it is irreducible, due to its multiple entries (\emph{Edit Field [Full Name]} and \emph{Edit Field [Date]}). However, by applying the depth-first search, we identify as candidate loop-edge for removal: (\emph{Edit Field [Phone]}, \emph{Edit Field [Full Name]}). After we remove this edge from the CFG, no SCCs are left, so Algorithm~\ref{alg:analyseSCC} terminates.
\input{algorithm1.tex}
\input{algorithm2.tex}
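A simplified sketch of back-edge detection is shown below: an edge whose target is still on the depth-first-search stack closes a loop. On reducible CFGs this agrees with the dominator-based procedure of Algorithms~\ref{alg:beDetection} and~\ref{alg:analyseSCC}; irreducible SCCs additionally need the loop-edge fallback described above, and the result may then depend on the visit order.

```python
def dfs_back_edges(cfg, entry):
    """Return edges (u, v) where v is on the current DFS stack,
    i.e., edges that close a loop reachable from the entry vertex."""
    back, on_stack, visited = set(), set(), set()

    def dfs(u):
        visited.add(u)
        on_stack.add(u)
        for v in cfg.get(u, ()):
            if v in on_stack:
                back.add((u, v))  # v dominates u on this DFS path
            elif v not in visited:
                dfs(v)
        on_stack.discard(u)

    dfs(entry)
    return back
```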
At this point, we collected all the back-edges of the CFG. Next, we use them to segment the log. We do so by applying Algorithm~\ref{alg:segIdentification}. First, we retrieve all the targets and sources of all the back-edges in the CFG and collect their corresponding UIs (lines~\ref{alg:targets4} and~\ref{alg:sources4}). Each UI mapped onto a back-edge target is an eligible segment starting point (from now on, \emph{segment-start UI}). A back-edge conceptually captures the end of a task execution, while its target represents the first UI of the next task execution. By applying the same reasoning, each UI mapped onto the source of a back-edge is an eligible segment ending point (hereinafter, \emph{segment-end UI}). Then, we sequentially scan all the UIs in the log (line~\ref{alg:uilogscan4}). When we encounter a segment-start UI (line~\ref{alg:segstart4}), and we are not already within a segment (see line~\ref{alg:notinsegment4}), we create a new segment ($s$, a list of UIs), we append the segment-start UI ($\bar{u}$), and we store it in order to match it with the correct segment-end UI (line~\ref{alg:startsegment41} to~\ref{alg:startsegment42}). Our strategy to detect segments in the log is driven by the following underlying assumption: a specific segment-end UI will be followed by the same segment-start UI so that we can match segment-end and segment-start UIs exploiting back-edge's sources and targets (respectively). If the UI is not a segment-start (line~\ref{alg:nostart4}), we check if we are within a segment (line~\ref{alg:insegment4}) and,
if not, we discard the UI, assuming it is noise since it fell between the previous segment-end UI and the next segment-start UI. Otherwise, we append the UI to the current segment, and we check if this UI is a segment-end matching the current segment-start UI (line~\ref{alg:startendmatching4}).
If that is the case, we reached the end of the segment, and we add it to the set of segments (line~\ref{alg:segmentcomplete4}); otherwise, we continue reading the segment.
\newpage
\input{algorithm3.tex}
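The scan of Algorithm~\ref{alg:segIdentification} can be sketched as follows: back-edge targets open segments, the matching back-edge source closes them, and UIs falling between segments are discarded as noise.

```python
def segment_log(normalized_log, back_edges):
    """Split a normalized log into segments delimited by the
    back-edges of its CFG (pairs of (source, target) UIs)."""
    starts = {target for _, target in back_edges}
    segments, current, current_start = [], None, None
    for u in normalized_log:
        if current is None:
            if u in starts:
                current, current_start = [u], u
            # otherwise: UI between segments, discarded as noise
        else:
            current.append(u)
            if (u, current_start) in back_edges:
                segments.append(current)  # segment-end matched its start
                current, current_start = None, None
    return segments
```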
Table~\ref{tab:segments} shows the segment-start and the segment-end UIs (highlighted in green and red, respectively), which delimit two segments within the normalized UI log of our running example (see also Table~\ref{tab:norm-uilog}).
\input{tables-complete-segmentation-compact}
\subsection{Candidate routine identification}
\label{sec:candidatesDiscovery}
Once the log has been segmented, we move to the identification of the candidate routines. The identification step is based on the CloFast sequence mining algorithm \cite{fumarola2016clofast}. To integrate CloFast in our approach, we have to define the structure of the sequential patterns we want to identify. In this paper, we define a \emph{sequential pattern} within a UI log as a sequence of normalized UIs always occurring in the same order in different segments, yet allowing gaps between the UIs belonging to the pattern. For example, if we consider the following three segments:
$\langle u_1, u_y, u_2, u_3 \rangle$,
$\langle u_1, u_2, u_x, u_3 \rangle$,
and $\langle u_1, u_x, u_2, u_3 \rangle$;
they all contain the same sequential pattern that is $\langle u_1, u_2, u_3 \rangle$.
Furthermore, we define the \emph{support} of a sequential pattern as the ratio between the number of segments containing the pattern and the total number of segments.
We refer to \emph{closed} patterns and \emph{frequent} patterns (relative to an input threshold) as they are known in the literature. Specifically, a frequent pattern is a pattern whose number of occurrences is at least equal to the threshold, while a closed pattern is a pattern that is not included in another pattern having exactly the same support. By applying CloFast to the log segments, we discover all the \emph{frequent closed} sequential patterns.
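The notions of pattern occurrence and support can be made concrete with a minimal sketch (an order-preserving subsequence check with gaps allowed):

```python
def contains(segment, pattern):
    """True iff `pattern` occurs in `segment` in order,
    possibly with gaps between consecutive pattern UIs."""
    it = iter(segment)
    return all(u in it for u in pattern)  # `in` advances the iterator

def support(pattern, segments):
    """Fraction of segments containing the pattern."""
    return sum(contains(s, pattern) for s in segments) / len(segments)
```

On the three example segments above, the pattern $\langle u_1, u_2, u_3 \rangle$ has support 1.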
Some of these patterns may be \emph{overlapping}, which (in our context) means that they share some UIs. An example of overlapping patterns is the following, given three segments:
$\langle u_1, u_y, u_2, u_3, u_x, u_4 \rangle$,
$\langle u_1, u_y, u_2, u_x, u_3, u_4 \rangle$,
and $\langle u_1, u_x, u_2, u_3, u_4 \rangle$;
$\langle u_1, u_2, u_3, u_4 \rangle$ and $\langle u_1, u_x, u_4 \rangle$ are sequential patterns, but they overlap due to the shared UIs: $u_1$ and $u_4$. In practice, each UI belongs to only one routine; therefore, we are interested in discovering only non-overlapping patterns. For this purpose, we implemented an optimization that we use on top of CloFast. Given the set of patterns discovered by CloFast, we rank them by a pattern quality criterion, and we select the best pattern (i.e., the top one in the ranking). We integrated four pattern quality criteria to select the candidate routines: pattern frequency, pattern length, pattern coverage, and pattern cohesion score~\cite{DBLP:conf/iui/DevL17}. Pattern frequency considers how many times the pattern was observed in different segments.
Pattern length considers the length of the patterns. Pattern coverage considers the percentage of the log that is covered by all the pattern occurrences.
Finally, pattern cohesion score considers the level of adjacency of the elements inside a pattern. It is calculated as the difference between the pattern length and the median number of gaps between its elements. In other words, cohesion prioritizes the patterns whose UIs appear consecutively without (or with few) gaps while taking into account also the pattern length.
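As an illustration, the cohesion score can be sketched as follows; the exact gap counting of~\cite{DBLP:conf/iui/DevL17} may differ, so this is a simplified reading in which the gaps of an occurrence are the span of the leftmost match minus the pattern length:

```python
from statistics import median

def leftmost_match(pattern, segment):
    """Leftmost positions at which the pattern UIs match in the segment."""
    positions, i = [], 0
    for j, ui in enumerate(segment):
        if i < len(pattern) and ui == pattern[i]:
            positions.append(j)
            i += 1
    return positions if i == len(pattern) else None

def cohesion(pattern, segments):
    """Pattern length minus the median number of gap UIs across occurrences."""
    gaps = []
    for segment in segments:
        positions = leftmost_match(pattern, segment)
        if positions is not None:
            gaps.append((positions[-1] - positions[0] + 1) - len(pattern))
    return len(pattern) - median(gaps)
```

A pattern whose UIs always appear consecutively keeps its full length as cohesion, while scattered occurrences lower the score.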
For the candidate routine that we identified as the best pattern for a given quality criterion, we collect and remove all its occurrences from the log. An occurrence of a candidate routine is called a \emph{routine instance}. Formally, a routine instance is a sequence of (non-normalized) UIs, e.g., $r = \langle u_1, u_2, u_3, u_4 \rangle$. After the removal of all the instances of the best candidate routine from the log,
we repeat this identification step until no more candidate routines are identified. At the completion of this step, we obtain a set of candidate routines, referred to as $\mathcal{C}_{\Sigma}$, such that, for each candidate routine $c_i \in \mathcal{C}_{\Sigma}$, we can retrieve the set of its routine instances, referred to as $\mathcal{R}_{c_{i}}$.
Considering our running example, with reference to Table~\ref{tab:segments}, assuming that the two routine instances that we identified in the previous step (by detecting their segment-start and segment-end UIs) frequently occur in the original log (a snapshot of which is captured in Table~\ref{tab:uiLog}), and choosing length as a selection criterion, at the end of this step, we would discover two candidate routines, each consisting of 15 normalized UIs (as shown in Table~\ref{tab:segments}). An example of a routine instance for each of the two candidate routines can be easily observed in the original log, Table~\ref{tab:uiLog} rows 1 to 24 and 25 to 49 (excluding the UIs filtered in the first step of our approach).
\subsection{Automatability assessment}
\label{sec:automatabilityAssessment}
The candidate routines in $\mathcal{C}_{\Sigma}$ (and their instances, $\mathcal{R}_{c_{i}}$) that we identified in the previous step represent behavior recorded in the log that frequently repeats itself, making them candidates for automation. However, the fact that a routine is frequently observed in a log is not a sufficient condition to guarantee its automatability. Let us consider the following example: a worker fills in and submits the same web-form 100 times, always with the same sequence of actions, but inputting manually-generated data (e.g., received over a phone call or copied from a hard-copy document). In such a scenario, although we would identify the filling and submission of the web-form as a candidate routine, we would not be able to automate it because we cannot automatically generate the data in input to the web-forms. On the other hand, if the data in input to the web-forms was copied from another digital document, for example a spreadsheet, we could probably automate the routine.
Considering such a context, the next step of our approach is to assess the degree of automatability of the discovered candidate routines. To do so, given a candidate routine $c_i \in \mathcal{C}_{\Sigma}$, we check whether all its UIs are deterministic. We consider a UI to be deterministic if a software robot can replicate its execution. This is possible when: i) the input data of a UI can be determined automatically; or ii) the input data of a UI can be provided as input by the user when deploying the software robot. According to such constraints, we can provide the following rules to check whether a UI is deterministic or not.
\begin{itemize}
\item[1.] UIs belonging to the \emph{navigation} group (see Table~\ref{tab:uiParam}) are always deterministic because they do not take any data as input; the exceptions are the \emph{select cell}, \emph{select field}, and \emph{select range} UIs, which are removed during the filtering of the log (as described in Section~\ref{sec:segmentation});
\item[2.] UIs belonging to the \emph{read} group are always deterministic because the only input they require is the source of the copied content (e.g., row and column of a cell), which is either constant or can be inputted by the user when deploying the software robot in UiPath;
\item[3.] UIs belonging to the \emph{write} group that are of type \emph{click} are always deterministic because they do not take any data as input, except the information identifying the element to be clicked, which is always constant for a given candidate routine (by construction);
\item[4.] UIs belonging to the \emph{write} group that are of type \emph{paste} are always deterministic because they always retrieve data from the same source (i.e., the system clipboard);
\item[5.] UIs belonging to the \emph{write} group that are of type \emph{edit} are the only ones that are not always deterministic. In fact, these UIs are deterministic only if it is possible to determine the updated value of the edited elements (e.g., the value of a cell in a spreadsheet or of a text field in the web browser after the UI is executed). Furthermore, it must also be possible to determine the target of the editing, although this is usually constant (if a web element) or can be inputted by the user when deploying the software robot in UiPath.
\end{itemize}
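The five rules above can be sketched as a single pass over the candidate routine (a simplification of the assessment algorithm; representing UI types as strings and delegating the rule-5 check to a caller-supplied function are both hypothetical choices):

```python
EDIT_TYPES = {"edit cell", "edit range", "edit field"}

def assess_automatability(candidate, check_edit):
    """Return a routine specification (candidate, transformation steps) if
    every UI is deterministic, otherwise None. Rules 1-4 reduce to a type
    check; rule 5 is delegated to check_edit, returning (deterministic, step)."""
    steps = []
    for ui in candidate:
        if ui["type"] in EDIT_TYPES:          # rule 5: edit UIs
            deterministic, step = check_edit(ui)
            if not deterministic:
                return None                   # one non-deterministic UI suffices
            steps.append(step)
        # rules 1-4: navigation, read, click, and paste UIs are deterministic
    return candidate, steps
```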
Algorithm~\ref{alg:automatabilityAssessment} shows how we check these five rules given as input a candidate routine $c_i$ and its routine instances $\mathcal{R}_{c_{i}}$, and how we compose the corresponding routine specification of the input $c_i$. The algorithm starts by initializing the set $E$ as a collection of \emph{edit} UI types (\emph{edit cell}, \emph{edit range}, \emph{edit field}). Then, it iterates over all the normalized UIs in the input $c_i$ by checking their types. If the type of a normalized UI $\bar{u}$ is not in $E$ (line~\ref{alg:checktype}), i.e., one of the rules 1 to 4 applies, we add it to the queue $D$, which stores all the deterministic UIs we identified. Otherwise, rule 5 applies. While rules 1 to 4 are simple checks on the UI types, the complexity of rule 5 required us to operationalize it through a separate algorithm, i.e., Algorithm~\ref{alg:checkeditUIs}, which is called within Algorithm~\ref{alg:automatabilityAssessment} (line~\ref{alg:calleditcheck}). Algorithm~\ref{alg:checkeditUIs} returns a pair $(d, \lambda)$, where $d$ is a \emph{boolean} (true if the input normalized UI is deterministic), and $\lambda$ is a \emph{data transformation step} required to automate $\bar{u}$ and therefore available only if $\bar{u}$ is deterministic. Once all the normalized UIs in the input $c_i$ have been checked, Algorithm~\ref{alg:automatabilityAssessment} outputs the \emph{routine specification} of $c_i$, as the pair ($c_i$, $\Lambda$), where $\Lambda$ is the set of all the \emph{data transformation steps} we collected by executing Algorithm~\ref{alg:checkeditUIs} (line~\ref{alg:calleditcheck}).
\input{algorithm4.tex}
\input{algorithm5.tex}
Before moving to the final step of our approach, we describe how Algorithm~\ref{alg:checkeditUIs} verifies whether an input (normalized) UI of type \emph{edit} ($\bar{u}$) is deterministic. In essence, Algorithm~\ref{alg:checkeditUIs} checks whether the value of the element edited by the execution of $\bar{u}$ can be deterministically computed from the UIs observed before $\bar{u}$ (in all the routine instances in $\mathcal{R}_{c_{i}}$).
To do so, the algorithm looks for a possible data transformation function to compute the value of the edited element from the payloads of the UIs observed before $\bar{u}$. If such a data transformation function exists, $\bar{u}$ is considered to be deterministic, and the algorithm returns the identified function in the form of a data transformation step (which also includes source(s) and target of the data transformation function).
In the following, we walk through Algorithm~\ref{alg:checkeditUIs}.
We start by assuming that the UI in input is not deterministic, and we try to prove the opposite. We initialize to false the boolean variable which we will output at the end of the algorithm (line~\ref{alg:setDeterminism}), and we create the necessary data structures (line~\ref{alg:setC} to~\ref{alg:data2}). Given the input candidate routine $c_i$ and the normalized UI $\bar{u}$, we extract the index of $\bar{u}$ within $c_i$ (line~\ref{alg:getPosition}). Then, for each routine instance $r \in \mathcal{R}_{c_{i}}$, we do what follows.
We get the instance of the normalized UI $\bar{u}$\footnote{We recall that a UI instance contains all the parameters, both context and data ones.} by retrieving the UI of index $n$ from $r$ (line~\ref{alg:getInstance}), and we store this UI ($u_1$) in the set $K$ (line~\ref{alg:addInstance}). We read the payload of $u_1$ to retrieve the target element ($t_1$, line~\ref{alg:getTarget}); $t_1$ can be the ID of a web browser element or the location of a cell in a spreadsheet. Also, we read the payload of $u_1$ to retrieve the value of the target element after the editing ($o$, line~\ref{alg:getOutput}). We initialize two queues, $S$ (which stands for \emph{sources}) and $I$ (which stands for \emph{inputs}). Queue $S$ stores the ID or location of the (source) element(s) that produced the data used by the \emph{edit} UI instance $u_1$, while queue $I$ stores the data that was used by the \emph{edit} UI instance $u_1$.
After this initialization, we iterate over all the UI instances preceding $u_1$ in $r$. Such an iteration goes backward from $u_1$ (position $n$ in $r$) to the first UI instance in $r$ (position 1) -- lines~\ref{alg:startMainIteration} to~\ref{alg:stopMainIteration} -- unless we identify another UI instance of type \emph{edit} performed on the same target element $t_1$ (see lines~\ref{alg:checkForEdit} to \ref{alg:sameEditTarget}). Within this iteration, we do the following.
We store all the preceding UI instances ($u_2$) into the set $\Pi$, alongside the routine instance they belong to (i.e., we store a pair $(r, u_2)$ in $\Pi$). For each encountered $u_2$ of type \emph{paste}, we check its target element and we compare it to the target element of $u_1$. If they are the same, we again traverse backward the routine instance from the \emph{paste} UI until we find a \emph{copy} UI $u_3$ (line~\ref{alg:startPasteCheck} to~\ref{alg:stopPasteCheck}).\footnote{Our filtering approach, described in Section~\ref{sec:segmentation}, guarantees that there exists a $u_3$ of type \emph{copy} preceding the \emph{paste} UI.} Then, we retrieve the target element of $u_3$ and we append it to queue $S$, and we add the copied value of $u_3$ to queue $I$ (lines~\ref{alg:addSource1} and \ref{alg:addInput1}).
For each encountered $u_2$ of type \emph{edit} (line~\ref{alg:checkForEdit}), we check its target element and we compare it to the target element of $u_1$. If they are the same (line~\ref{alg:sameEditTarget}), we push the \emph{target element} of $u_2$ to the front of queue $S$, and we push the \emph{data content} of the target element after the editing performed by $u_2$ to the front of the queue $I$ (line~\ref{alg:addSource2} and~\ref{alg:addInput2}). When we reach this point, we also stop the iteration over all the UI instances preceding $u_1$, because the value of the target element after performing $u_1$ can only depend on the last \emph{edit} UI performed on the same target element and on the \emph{paste} UIs occurring between $u_2$ and $u_1$.
Finally, before moving to the next routine instance (i.e., returning to line~\ref{alg:traverseRoutineInstances}), we store the input data and the output data observed in the current routine instance for the normalized UI $\bar{u}$ in the set $T$, which collects all the input and output data observed for \emph{all} the instances of $\bar{u}$ (see line~\ref{alg:addTransformationExample}).
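The backward scan described above can be sketched as follows. This is a simplification of the algorithm: UI instances are plain dictionaries with hypothetical 'type', 'target', and 'value' fields, and the guarantee that every paste is preceded by a copy comes from the filtering step:

```python
EDIT_TYPES = {"edit cell", "edit range", "edit field"}

def collect_sources(instance, n):
    """Scan backward from the edit UI at index n, gathering the source
    elements (S) and input values (I) that may have produced its value."""
    target = instance[n]["target"]
    sources, inputs = [], []
    for k in range(n - 1, -1, -1):
        u2 = instance[k]
        if u2["type"] == "paste" and u2["target"] == target:
            j = k - 1
            while instance[j]["type"] != "copy":  # guaranteed by the filtering
                j -= 1
            sources.append(instance[j]["target"])
            inputs.append(instance[j]["value"])
        elif u2["type"] in EDIT_TYPES and u2["target"] == target:
            sources.insert(0, u2["target"])   # push to the front of S
            inputs.insert(0, u2["value"])     # push to the front of I
            break  # an earlier edit on the same target ends the scan
    return sources, inputs
```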
After performing all the above steps for each routine instance $r \in \mathcal{R}_{c_{i}}$, and collecting all the required data to identify a possible data transformation function into the sets $T, K,$ and $\Pi$, we look for the data transformation function by leveraging two state-of-the-art tools:
Foofah~\cite{DBLP:conf/sigmod/JinACJ17} and TANE~\cite{DBLP:journals/cj/HuhtalaKPT99}. First, we try to identify the data transformation function using Foofah, then -- if Foofah fails -- we use TANE.
Foofah requires as input two series of data values, one referred to as \emph{input} and one referred to as \emph{output}. We generate the two series from the pairs $(I,O)$ that we collected in $T$, which capture examples of data transformations. From these examples, Foofah tries to synthesize an optimal data transformation function to convert input(s) to output.\footnote{For more details about Foofah refer to~\cite{DBLP:conf/sigmod/JinACJ17}.} We note that we run Foofah under the assumption that the output series is noise- and error-free, i.e., the analyzed data transformations are supposed to be correct.
However, Foofah suffers from two limitations: it is inefficient when the input and output series are large, and it cannot discover conditional data transformation functions (where different manipulations are applied depending on the input); hence, Foofah cannot deal with heterogeneous data.
To address these limitations, we group the data transformation examples into equivalence classes, where each class represents a different structural pattern of the input data. To create these equivalence classes, for each data sample in the input data series, we discover its symbolic representation describing its structural pattern by applying \emph{tokenization}. The tokenization that we apply replaces each maximal chained subsequence of symbols of the same type (either digits or letters) with a special token character ($\langle d \rangle+$ or $\langle a \rangle+$, resp.), and leaves any other symbol unaltered. For each equivalence class, we discover a data transformation function by providing to Foofah one randomly selected data transformation example from the equivalence class. The use of equivalence classes allows us to remove the heterogeneity of the input data and to facilitate the application of Foofah, which will operate only on a single data transformation example.
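The tokenization can be sketched with a single regular-expression pass (a single pass avoids re-tokenizing the letters inside the token characters themselves):

```python
import re

def tokenize(value):
    """Symbolic structural pattern of a value: maximal digit runs become
    '<d>+', maximal letter runs become '<a>+', other symbols are kept."""
    return re.sub(
        r"\d+|[A-Za-z]+",
        lambda m: "<d>+" if m.group()[0].isdigit() else "<a>+",
        value,
    )
```

Grouping the input data series by this symbolic representation then yields the equivalence classes, and one randomly selected example per class is handed to Foofah.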
If Foofah cannot identify a data transformation function (line~\ref{alg:syntTransDiscoveryEnd}), we turn to TANE, which can discover semantic data transformation functions (also known as \emph{functional dependencies}~\cite{DBLP:journals/cj/HuhtalaKPT99}). TANE requires as input a table where each row contains $n-1$ input data values and an output data value in column $n$ (this is conceptually similar to the input and output series required by Foofah). TANE analyzes each row of such a table to check if there exists any dependency between the values in the first $n-1$ columns and the value in column $n$.\footnote{For more details about TANE refer to~\cite{DBLP:journals/cj/HuhtalaKPT99}.} An example of a semantic data transformation function discovered by TANE would be: if the value of column $i$ is X, then the value of column $n$ is always Y.
In our context, the input table for TANE is a table where each row represents the output data observed in all the UIs preceding $\bar{u}$ in a routine instance, and the last element of the row is the output data of the $\bar{u}$ instance in that routine (i.e., the value of the element edited by the execution of $\bar{u}$ in that routine instance). To build such a table, we require as input all the instances of $\bar{u}$ (which we stored in the set $K$) as well as all the instances of any UI preceding $\bar{u}$ (which we stored in the set $\Pi$). If TANE identifies a semantic data transformation function (line~\ref{alg:depfound}), we set $\bar{u}$ as deterministic (through the boolean $d$), and we compose the data transformation step using the output of TANE (see lines~\ref{alg:tane1} to~\ref{alg:tane2}).
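The core dependency check can be illustrated as follows; TANE itself searches the lattice of column combinations far more efficiently, so this sketch only verifies whether a single given column determines the output column:

```python
def determines(table, i, n):
    """True if column i functionally determines column n, i.e., equal values
    in column i never co-occur with different values in column n."""
    seen = {}
    for row in table:
        if seen.setdefault(row[i], row[n]) != row[n]:
            return False
    return True
```

On a dependency table of the kind described above, such a check would succeed, for example, for a country-of-residence column against a student-type column.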
Table~\ref{table:dependencyTable} shows an example of the dependency table that we would build from the log captured in Table~\ref{tab:uiLog} (assuming that the full-length UI log contains nine instances of the routine shown in rows 1 to 24). Given Table~\ref{table:dependencyTable} as input, TANE would identify that the value of the last column (i.e., the type of student, domestic or international) can be deterministically generated by observing the value of column four (i.e., \emph{country of residence}).
\input{tables-dependencyExample.tex}
If TANE also fails to discover a data transformation function, it means that we are not able to automatically determine the value of the element edited by the execution of $\bar{u}$; consequently, we assume that $\bar{u}$ is not deterministic. Otherwise, we output the discovered data transformation step.
\begin{figure}[htb]
\centering
\hspace*{-1cm}
\includegraphics[scale = 0.7]{TransformationFunctions.pdf}
\caption{Transformation functions discovered from the running example}
\label{fig:tfunctions}
\end{figure}
\input{tables-transformationSteps}
Considering our running example, Figure~\ref{fig:tfunctions} shows the data transformation functions discovered by Foofah (t1 to t4) and by TANE (t5) when running Algorithm~\ref{alg:checkeditUIs} on a hypothetical extended version of the UI log in Table~\ref{tab:uiLog}, giving as input the routine shown in rows 1 to 24 (Table~\ref{tab:uiLog}) along with all its instances, and the \emph{edit} UIs at rows 6, 11, 16, 21, 23 (respectively, for identifying the data transformation functions t1 to t5). Each data transformation function shows how input data is turned into output data. Although some rules are intuitive to interpret (e.g., t1 and t5), others may appear slightly cryptic. We refer to the original Foofah~\cite{DBLP:conf/sigmod/JinACJ17} and TANE~\cite{DBLP:journals/cj/HuhtalaKPT99} studies for an extensive description of the set of rules that the two tools are capable of discovering.
Finally, the data transformation functions are integrated into the data transformation steps, which also include the instantiation of the input and the output of the function, as shown in Table~\ref{tab:transSteps}.
\subsection{Routines aggregation}
\label{sec:routinesAggregation}
When a routine can be performed by executing a set of UIs without following a strict order, we may observe multiple execution variants of the same routine in the log. For example, if a worker needs to copy the \emph{first name}, the \emph{family name}, and the \emph{phone number} of a set of customers from a spreadsheet to different web-forms, she may choose to copy the data of each customer in any order (e.g., \emph{first name}, \emph{phone number}, and \emph{family name}, or \emph{family name}, \emph{phone number}, \emph{first name}). In such a scenario, the UI log would record several different execution variants of the same routine. Routine execution variants do not bring any additional value; rather, they just generate redundancy within the log, leading to the discovery of different routine specifications that, once deployed as software robots, would execute the same routine. Considering these routine specifications as duplicates, this final step focuses on their removal.
To identify duplicate routine specifications, we start by generating for each routine discovered in the previous step its \emph{data transformation graph}.
\begin{definition}[\textbf{Data Transformation Graph}]
Given a routine specification ($c_i$, $\Lambda$), its \emph{data transformation graph} is a graph $G_\Lambda = (D_\Lambda , L_\Lambda)$, where:
$D_\Lambda $ is the set of vertices of the graph, and each vertex $d \in D_\Lambda$ maps one data transformation step $\lambda \in \Lambda$;
$L_\Lambda \subseteq D_\Lambda \times D_\Lambda$ is the set of edges of the graph, and each edge $(d_i, d_j) \in L_\Lambda$ represents a dependency between two data transformation steps capturing the fact that the target of the data transformation step mapped by $d_i$ is (one of) the source(s) of the data transformation step mapped by $d_j$.
\end{definition}
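Assuming each data transformation step is reduced to a (sources, target) pair, the graph can be built directly from the definition:

```python
def transformation_graph(steps):
    """Build the data transformation graph of a routine specification.
    steps: list of (sources, target) pairs; vertices are step indices,
    and an edge (i, j) exists when step i's target is a source of step j."""
    vertices = set(range(len(steps)))
    edges = {
        (i, j)
        for i, (_, target) in enumerate(steps)
        for j, (sources, _) in enumerate(steps)
        if i != j and target in sources
    }
    return vertices, edges
```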
\figurename~\ref{fig:transformationGraph} shows the data transformation graph of the routine we discovered in the previous step in our running example.
Data transformation graphs can be used to check whether two routine specifications are equivalent. Specifically, two routine specifications, ($c_i$, $\Lambda_1$) and ($c_j$, $\Lambda_2$), are equivalent if and only if the following two conditions hold: i) their data transformation graphs are the same, i.e., $D_{\Lambda_1}$ = $D_{\Lambda_2}$ and $L_{\Lambda_1}$ = $L_{\Lambda_2}$; ii) their candidate routines $c_i$ and $c_j$ contain the same set of UIs, and all the UIs of type \emph{click button} appear in the same order in both $c_i$ and $c_j$.
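Given the graphs, the equivalence test reduces to a few comparisons (a sketch: routine UIs are represented as strings and the graph as a (vertices, edges) pair, both simplifications of the paper's structures):

```python
def equivalent(spec1, spec2):
    """Check the two conditions: identical data transformation graphs, and
    the same UIs with all 'click button' UIs in the same relative order."""
    (uis1, graph1), (uis2, graph2) = spec1, spec2
    clicks = lambda uis: [u for u in uis if u.startswith("click button")]
    return (
        graph1 == graph2
        and sorted(uis1) == sorted(uis2)
        and clicks(uis1) == clicks(uis2)
    )
```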
\begin{figure}[htb]
\centering
\hspace*{-1cm}
\includegraphics[scale = 0.9]{TransformationGraph-Alt.pdf}
\caption{Data transformation graph example}
\label{fig:transformationGraph}
\end{figure}
By comparing each pair of routine specifications, we first create sets of equivalent routine specifications, and, for each set, we discard all the routine specifications but one. Ideally, we would like to retain the best routine specification of each set, however, we need to define what it means to be the \emph{best} one. We can select the best routine specification by relying on different quantitative metrics, such as frequency, length, or duration of the candidate routine of a routine specification. For example, we can choose frequency as a selection criterion and retain from each set the routine specification whose candidate routine is the most frequent in the UI log.
Intuitively, the most frequent candidate routine represents the common routine execution, so one may be tempted to use that criterion by default. However, the most frequent routine execution is not necessarily the optimal one. For example, length or duration could represent better selection criteria. Length prioritizes short candidate routines over long ones, assuming that a candidate routine should comprise as few steps as possible. Duration prioritizes execution times over the number of steps. The duration of a candidate routine can be estimated as the average execution time of its routine instances recorded in the UI log. Note, however, that the duration may not always be reliable: during the routine execution, the worker might perform activities that do not appear in the log or that are not relevant to the routine, thus involuntarily inflating the observed execution time. For this reason, we implemented a combination of length and frequency to select the best routine specification from each set. Precisely, we use length first and then compare the frequencies of the candidate routines having the same length.
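The length-first, frequency-second selection can be sketched in a single comparison; each routine specification is assumed to carry its candidate routine and the routine's frequency (field names hypothetical):

```python
def best_specification(equivalence_set):
    """Select, from a set of equivalent routine specifications, the one with
    the shortest candidate routine, breaking ties by highest frequency."""
    return min(
        equivalence_set,
        key=lambda spec: (len(spec["routine"]), -spec["frequency"]),
    )
```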
\section{Conclusion}
\label{sec:conclusion}
\medskip
This paper presented an approach to discover automatable routines from UI logs.
The approach starts by decomposing the UI log into segments corresponding to paths within the connected components of a control-flow graph derived from the log. These paths represent sequences of actions that are repeated multiple times within the event log, possibly with some variations.
Once the log is segmented, a noise-resilient sequential pattern mining technique is used to extract frequent patterns that correspond to the candidate routines.
Next, the candidate routines are assessed for their amenability to automation. For each routine, a corresponding executable specification is synthesized, which can be compiled into an RPA script.
Finally, the approach identifies semantically equivalent routines in order to produce a non-redundant set of automatable routines.
The approach has been implemented as an open-source tool, namely Robidium. This article reported on an evaluation of the fitness for purpose and computational efficiency of the proposed approach. The evaluation shows that the approach can rediscover routines injected into synthetic logs, and that it discovers relevant routines in real-life logs. For most logs, the execution time does not exceed one minute. The only exceptions were logs where we deliberately injected complex data transformations or where the routine instances overlap in the UI log.
The proposed approach makes a number of limiting assumptions.
First, the effectiveness of the approach is sensitive to noise, e.g.\ clicks that are not related to the routine itself or clicks resulting from user mistakes.
In our evaluation, we observed this phenomenon to varying degrees when dealing with real-life logs.
In practice, the approach can identify correct routines only if they are frequently observed in the log.
Recurring noise affects the accuracy of the results. To address this limitation, we will investigate the use of alternative segmentation and sequential pattern discovery techniques that incorporate noise tolerance mechanisms. Another avenue is to discover sequential patterns using the approach outlined in this article and then to filter out patterns that are \emph{chaotic} in the sense that their occurrence does not affect the probability of other patterns occurring subsequently nor vice-versa. This latter approach has been studied in the context of event log filtering for process mining in~\cite{DBLP:journals/jiis/TaxSA19}.
Second, the approach is designed for logs that capture consecutive routine executions. In practice, routine instances may sometimes overlap (cf. the {\sc S2} \ real-life log in the evaluation). A possible avenue to address this limitation is to search for overlapping frequent patterns directly in the unsegmented log, instead of first segmenting it and then finding patterns in the segmented log. This approach has been previously investigated in the context of so-called Local Process Mining (LPM), where the goal is to discover process models capturing frequently repeated (and possibly overlapping) behavior in an unsegmented sequence of events~\cite{LPM}.
When assessing the automatability of a routine, the proposed approach assumes that the values of the edited fields are entirely derived
from the (input) fields that are explicitly accessed (e.g., via copy operations) during the routine's execution.
Hence, it will fail to identify automatable user interactions in the case where a worker visually reads from a field
(without performing a \emph{copy} operation on it) and writes what they see into another field. An avenue for addressing this limitation is to complement the proposed method with optical character recognition techniques over screenshots taken during the UI log recording, so as to be able to detect that some of the outputs of a routine come from fields that have not been explicitly accessed via a copy-to-clipboard operation.
Furthermore, the proposed approach is unable to discover conditional behavior, where the transformation function for the target field depends on the value of another field. Consider, for example, a routine that involves copying delivery data.
If the delivery country is USA, then the month comes before the day (MM/DD/YYYY), otherwise the day comes before the month.
Here, the transformation function depends on a condition of the form ``country = USA'', which the proposed approach is unable to discover.
In a similar vein, the proposed approach is able to discover transformations that depend on the structural pattern of the value of the input field(s), but it fails to distinguish the patterns that, although having the same syntactical structure, have different semantics. Following the example above, our approach will put both date types into the same equivalence class. Addressing this limitation would require the development of more sophisticated data transformation discovery techniques, beyond the capabilities of Foofah.
Finally, the method to detect if two routines are semantically equivalent assumes that all button clicks in a UI are effectful, meaning that their presence and the order in which they occur affect the outcome of the routine.
In practice, some clicks may have no effect on the routine's outcome. For example, some clicks may simply serve to pop up a help box, while others may just serve to move from one page to another in a listing.
To address this limitation, we foresee extensions of the proposed method where the alphabet of the UI log is extended with a richer array of actions, and where the routine discovery approach can be configured via a language for the specification of action effects.
\medskip\noindent\textbf{Acknowledgments}. The authors thank Stanislav Deviatykh for his help in the prototype implementation. This research is supported by the Australian Research Council (DP180102839) and the European Research Council (project PIX).
\section{Evaluation}
\label{sec:evaluation}
\urldef{\footurla}\url{https://github.com/volodymyrLeno/RPM_Miner}
\urldef{\footurlb}\url{https://doi.org/10.6084/m9.figshare.12543587}
We implemented our approach as an open-source Java command-line application\footnote{Available at \footurla} and also embedded this in the open-source tool Robidium~\cite{LenoDPRDM20}. Using the command-line application, we conducted a series of experiments to analyze the applicability of our approach in real-life settings.
Specifically, we assessed to what extent our approach can rediscover routines that are known to be recorded in the input UI logs,
and analyzed whether our approach is able to correctly identify automatable and non-automatable user interactions within such routines.
Accordingly, we define the following research questions:
\begin{itemize}
\item \textbf{RQ1.} Does the approach discover candidate routines that are known to exist in a UI log?
\item \textbf{RQ2.} Does the approach discover automatable routines that are known to be present in a UI log?
\end{itemize}
\subsection{Datasets}
\label{sec:datasets}
To answer our research questions, we rely on a dataset of 13 logs. These logs can be divided into three subgroups: artificial logs, real-life logs recorded in a supervised environment, and real-life logs recorded in an unsupervised environment.\footnote{The real-life logs were recorded with the Action Logger tool~\cite{DBLP:conf/bpm/LenoPRDM19}. All the logs are available at \footurlb}
Table~\ref{table:datasets} shows the logs characteristics.
\input{tables-datasets.tex}
The artificial logs (CPN1--CPN9) were generated from Colored Petri Nets (CPNs) in \cite{bosco2019}.
The CPNs used have increasing complexity, from low (the net used to generate CPN1) to high (the net used for CPN9).
The underlying routines are characterized by a varying amount of non-deterministic user interactions injected.
They involve simple data transformations, mostly in the form of copy-pasting.
The logs generated were originally noise-free and segmented. We removed the segment identifiers to produce unsegmented logs.
The \emph{Student Records} ({\sc SR}) and \emph{Reimbursement} ({\sc RT}) logs record the simulation of real-life scenarios.
The {\sc SR} \ log simulates the task of transferring students' data from a spreadsheet to a Web form.
The {\sc RT} \ log simulates the task of filling reimbursement requests with data provided by a claimant.
Each log contains fifty recordings of the corresponding task executed by one of the authors, who followed strict guidelines on how to perform the task.
These logs contain little noise, which only accounts for user mistakes,
such as filling the form with an incorrect value and performing additional actions to fix the mistake.
For both logs, we know how the underlying task was executed, and we treat such information as ground truth when evaluating our approach.
While the routines captured in the logs are fully automatable, they include complex transformations to test the automatability assessment step of the approach.
Finally, the \emph{Scholarships} logs ({\sc S1} \ and {\sc S2}) were recorded by two employees of the University of Melbourne who performed the same task: processing scholarship applications for international and domestic students.
This task mainly consists of students' data manipulation with transfers between spreadsheets and Web pages.
Compared to the other logs used in our experiments, we have no a priori knowledge of how the task at hand is performed (no ground truth).
Also, when recording the logs, the University employees were not instructed to perform their task in a specific manner,
i.e.,\ they were left free to perform this task as they would normally do when not being recorded.
\subsection{Setup}
\label{sec:setup}
To measure the quality of the discovered candidate routines, we use the Jaccard Coefficient (JC),
which captures the level of similarity between discovered and ground truth routines.
JC does not penalize the order of the interactions in a routine, which follows from the assumption that a routine could be executed by performing some actions in a different order.
The JC between two routines is the ratio $\frac{n}{m}$,
where $n$ is the number of user interactions that are contained in both routines,
while $m$ is the total number of user interactions present in the two routines.
Given the set of discovered routines and the set of ground truth routines,
for each discovered routine, we compute its JC with all the ground truth routines and assign the maximum JC to the discovered routine as its quality score.
Finally, we assess the overall quality of the discovered routines as the average of the JC of each discovered routine. As the ground truth, we use the segments of the artificial logs and the guidelines given to the author who performed the tasks in {\sc SR} \ and {\sc RT}.
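The scoring procedure described above can be sketched as follows (a minimal Python illustration; routines are represented as collections of hashable user-interaction identifiers, and the function names are ours, not part of the implementation):

```python
def jaccard(routine_a, routine_b):
    """Jaccard coefficient between two routines: the number of shared
    user interactions over the total number of distinct interactions
    in the two routines. Order is ignored by design."""
    a, b = set(routine_a), set(routine_b)
    return len(a & b) / len(a | b)

def quality_scores(discovered, ground_truth):
    """Assign each discovered routine the maximum JC over all ground-truth
    routines, and return the average JC as the overall quality score."""
    scores = [max(jaccard(d, g) for g in ground_truth) for d in discovered]
    return scores, sum(scores) / len(scores)
```

For example, a discovered routine identical to a ground-truth routine scores 1.0, while one sharing a single interaction out of three distinct ones scores $1/3$.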
The JC alone is not enough to assess the quality of the discovered routines,
as this measure does not consider the routines we may have missed in the discovery.
Thus, we also measure the total coverage to quantify how much log behavior is captured by the discovered routines.
We would like to reach high coverage with as few routines as possible.
Thus, we prioritize long routines over short ones by measuring the average routine length alongside its coverage.
We assess the quality of the automatable routines discovery by measuring precision, recall and F-score.
For each discovered routine, we compute the corresponding confusion matrix, where \emph{true positives} (TP) are correctly identified automatable user interactions,
\emph{true negatives} (TN) are correctly identified non-automatable user interactions, \emph{false positives} (FP) are the user interactions that were wrongly marked as automatable,
and \emph{false negatives} (FN) are the user interactions that were wrongly marked as non-automatable.
From the constructed confusion matrix, we calculate precision, recall and F-score as follows:
\begin{equation}Precision = \frac{TP}{TP + FP},\end{equation}
\begin{equation}Recall = \frac{TP}{TP + FN},\end{equation}
\begin{equation}F{\text -}score = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall}.\end{equation}
We report the averages of these metrics for all the discovered routines in the log.
We also report the average ratio of automatable user interactions for the routines in the log.
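The per-routine metrics and their log-level averages can be computed as in the following sketch (function names are ours; each confusion matrix is a (TP, TN, FP, FN) tuple as defined above):

```python
def prf(tp, tn, fp, fn):
    """Precision, recall and F-score from one routine's confusion matrix.
    TN is part of the matrix but does not enter these three metrics."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

def log_averages(matrices):
    """Average precision, recall and F-score over all routines in a log."""
    per_routine = [prf(*m) for m in matrices]
    return [sum(vals) / len(per_routine) for vals in zip(*per_routine)]
```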
The results for the {\sc S1} \ and {\sc S2} \ logs were qualitatively assessed with the help of the University of Melbourne employees who performed the task. Specifically, we asked them to compare the rediscovered routines with the actions they performed while recording.
All experiments were conducted on a Windows 10 laptop with an Intel Core i5-5200U CPU 2.20 GHz and 16GB RAM,
using cohesion as a routine selection criterion with the minimum support threshold set to 0.1 and the minimum coverage threshold equal to 0.05.
\subsection{Results}
Table~\ref{table:candidatesIdentification} shows the quality of the discovered routine candidates. Although the synthetic logs only contain the user interactions that belong to routines, we achieved perfect coverage for three logs only, namely CPN1, CPN4 and CPN6.
This is because some execution patterns were observed very rarely.
Since the {\sc SR} \ and {\sc RT} \ logs contain noise, the coverage cannot be 1 in these two cases.
For six out of the eleven logs, the discovered routines match the ground truth.
Overall, the JC is very high, above 0.95 for all the logs except CPN5.
The underlying model of the CPN5 log consists of multiple branches, generating 36 different executions.
Since some execution patterns are not frequent enough, we discovered only partial routines.
For this log, we also achieved the lowest coverage (0.84).
For the {\sc RT} \ log we found two routines consisting of an identical set of actions.
These routines were not merged though, because they are characterized by different transformation functions.
\input{tables-results-candidatesIdentification.tex}
Table~\ref{table:automatableResults} shows the quality of the automatable routines discovery.
We correctly identified all the automatable and non-automatable user interactions for the CPN3, CPN6 and {\sc SR} \ logs.
The routines recorded in the CPN3 and {\sc SR} \ logs are fully automatable.
Although the {\sc RT} \ log contains automatable routines only, our approach failed to discover some of the underlying transformations
and, therefore, incorrectly marked some interactions as non-automatable.
Some of the user interactions of the synthetic logs were wrongly identified as automatable.
Although the data values of such interactions can be deterministically computed, the locations of the edited elements were completely random, as intended in the corresponding models.
Thus, in practice, such interactions are not automatable.
The routines discovered from the CPN5 log are characterized by the lowest number of automatable user interactions, and we achieved the lowest recall for this log (0.805).
Overall, F-score is high, above 0.85 for all the logs, except CPN7 and CPN8.
For these logs we also achieved the lowest recall, meaning that some interactions of the corresponding routines were wrongly identified as non-automatable.
Although some of the interactions are not deterministic in the CPN models used to generate the artificial logs, they are automatable in the context of the discovered routines.
For example, for the CPN9 log we discovered six routines that correspond to the different branches within the model.
For all the executions of a branch we use the same data values, and hence, the corresponding user interactions are automatable.
\input{tables-results-automatableRoutinesDiscovery.tex}
From the {\sc S1}\ log we discovered five fully automatable routines.
The first routine consists in manually adding graduate research student applications to the student record in the university's student management system.
The application is then assessed, and the student is notified of the outcome.
The second routine consists in lodging a ticket to verify possible duplicate applications.
When a new application is entered in the system and its data matches an existing application,
the new application is temporarily put on hold, and the employee fills in and lodges a ticket to investigate the duplicate.
The remaining three routines represent exceptional cases, where the employee either executed the first or the second routine in a different manner (i.e.,\ by altering the order of the actions or overlapping routines executions).
These routines were not identified as duplicates because they are characterized by different sequences of button clicks.
To assess the results, we showed the discovered routines to the employee of the University of Melbourne
who recorded the {\sc S1}\ log, and they confirmed that the discovered routines correctly capture their task executions.
Also, they confirmed that the last three routines are alternative executions of the first routine.\footnote{Detailed results at \footurlb}
While the results from the {\sc S1} \ log were positive,
our approach could not discover any correct routine from the {\sc S2} \ log.
By analyzing the results, we found out that the employee worked with multiple worksheets at the same time,
frequently switching between them for visualization purposes.
Such behavior recorded in the log negatively affects the construction of the CFG and its domination tree,
ultimately leading to the discovery of incorrect segments and routines.
Table~\ref{table:executionTimes} shows the execution time for each step of the approach.
As we can see, the automatability assessment is the most computationally heavy step:
it took the largest share of the total time for all the logs except the CPN5, {\sc S1}, and {\sc S2} \ logs.
While the execution time is still reasonably low for all the artificial logs, it substantially increases for the {\sc SR} \ and {\sc RT} \ logs.
In these two logs, the automatability assessment took 99 percent of the total computation time.
This is caused by the fact that the underlying transformations in these two logs were very complex, often involving regular expressions or long sequences of manipulations.
In contrast, all the transformations in the CPN1-CPN9 logs were simple copy-paste operations.
Overall, for the synthetic logs, the approach took no more than 42 seconds.
The aggregation step required the smallest amount of time.
For the CPN1 log, we discovered only one routine, and, therefore, we did not have to apply any aggregation.
For the {\sc S1} \ and {\sc S2} logs, the most time-consuming step was the segmentation.
The CFGs constructed for these logs were very complex, with a high number of loops.
This significantly increased the time to identify back-edges in such CFGs and, therefore, the total time of segmentation.
\input{tables-results-executionTime.tex}
\subsection{Threats to validity}
The reported evaluation has a number of threats to validity. First, a potential threat to internal validity is the fact that the context parameters (i.e.\ the attributes in the log that capture the notion of ``user interaction'') were manually selected. These context parameters are required as one of the inputs of the proposed method (in addition to the UI log). To mitigate this threat, the parameters were first selected by each of the two authors of the paper independently, then cross-checked to reach a mutual agreement, and then validated by the other authors based on their understanding of the event logs in question.
Another possible threat to internal validity is the limited exploration of the parameter values used to configure the approach. To ensure we do not miss any significantly important behavior in the logs, we used very low support and coverage thresholds, equal to 0.1 and 0.05, respectively.
A potential threat to external validity is the limited number of real-life logs used (four).
These logs focus on one type of task that can be automated via RPA, namely data transferring.
These logs, however, exhibit different characteristics in terms of the complexity of the captured processes and log size.
To mitigate this threat, we additionally performed a more extensive evaluation on a battery of artificial logs.
For two real-life logs, we had no information about the underlying processes.
Therefore we evaluated the results qualitatively with the workers responsible for their execution.
To ensure the full reproducibility of the results, we have released all the logs, both real-life and artificial, used in our experiments.
The only exceptions are the {\sc S1} \ and {\sc S2} \ logs as they contain sensitive information.
\section{Introduction}
\label{sec:intro}
Robotic Process Automation (RPA) allows organizations to improve their processes by automating repetitive sequences of interactions between a user and one or more software applications (a.k.a.\ routines).
Using this technology, it is possible to automate data entry, data transfer, and verification tasks, particularly when such tasks involve multiple applications.
To exploit this technology, organizations need to identify routines that are amenable to automation~\cite{leopold2018identifying}.
This can be achieved via interviews, walk-throughs, job shadowing, or by examining documented procedures~\cite{leopold2018identifying}. These approaches are not always cost-efficient in large organizations, as routines tend to be scattered across the process landscape.
To tackle this gap, several research studies have proposed techniques to analyze User Interaction (UI) logs in order to discover repetitive routines that are amenable to automation via RPA~\cite{jimenez2019method,bosco2019,gao2019automated,leno2020aaai,DBLP:conf/bpm/AgostinelliLMM20}. However, existing approaches in this space make various assumptions that limit their applicability.
First, all of the existing approaches for discovering frequent and/or automatable routines from UI logs assume that the UI log consists of a set of traces (segments) of a task that is presupposed to contain one or more routines.
In practice, however, UI logs are not segmented.
Instead, a recording of a working session consists of a single sequence of actions encompassing many instances of one or more routines, interspersed with other events that may not be part of any routine.
Second, most of the existing approaches~\cite{jimenez2019method,bosco2019,gao2019automated} discover frequent routines and/or automatable routines, but they do not produce an executable routine specification.
Third, existing approaches do not take into account the fact that the same routine may be performed differently (albeit equivalently) by different workers, or sometimes even by the same worker. In other words, existing approaches may produce redundant routines as output.
This article addresses these gaps by presenting an approach to discover automatable routines from unsegmented UI logs. The approach splits the unsegmented UI log into a set of segments, each representing a sequence of steps that appears frequently in the unsegmented UI log.
It then applies sequential pattern mining techniques to find candidate routines for automation and evaluates their automatability.
For each automatable routine, the approach synthesizes an executable routine specification,
which can be compiled into an RPA bot. This bot can then be executed by an RPA tool to replicate the underlying routine automatically.
The proposed approach has been implemented as an open-source prototype called Robidium~\cite{LenoDPRDM20}.
Using this implementation, we have evaluated the proposed approach on synthetic and real-life UI logs in terms of its execution times and its ability to accurately discover routines from a UI log.
This article is an extended and revised version of a conference paper~\cite{DBLP:conf/icpm/LenoADRMP20}.
The conference version focused on the discovery of frequently repeated routines from unsegmented UI logs (i.e.\ candidate routines).
This article extends this initial approach in two ways. First, this article presents an approach to post-process the identified candidate routines in order to assess their automatability and, in case a routine is fully automatable, to generate an executable routine specification. Second, this article proposes a method to identify semantically equivalent routines, so as to produce a non-redundant set of automatable routines.
This article provides a concrete realization of a high-level architecture for discovering automatable routines from UI logs, sketched in~\cite{lenobise20}. To this end, the article proposes concrete techniques to implement each of the building blocks in~\cite{lenobise20}, except for the UI log recording step, which is documented in~\cite{DBLP:conf/bpm/LenoPRDM19}.
The article is structured as follows. Section \ref{sec:related} provides an overview of related work. Section \ref{sec:approach} describes the approach, while Section \ref{sec:evaluation} reports the results of the evaluation. Finally, Section \ref{sec:conclusion} concludes the paper and discusses the directions for future work.
\section{Related work}
\label{sec:related}
The problem addressed by this article is denominated as Robotic Process Mining (RPM) in~\cite{lenobise20}. RPM is a family of methods to discover repetitive routines performed by employees during their daily work, and to turn such routines into software scripts that emulate their execution. The first step in an RPM pipeline is to record the interactions between one or more workers and one or more software applications~\cite{DBLP:conf/bpm/LenoPRDM19}. The recorded data is represented as a UI log -- a sequence of user interactions (herein called UIs), such as selecting a cell in a spreadsheet or editing a text field in a form. The UI log may be filtered to remove irrelevant UIs (e.g., misclicks). Next, it may be decomposed into segments (segmentation). The discovered segments are then scanned to identify routines that occur frequently across these segments. Finally, the resulting frequent routines (a.k.a.\ candidate routines) are analyzed in order to identify those that are automatable and to derive executable routine specifications.
In this section, we review previous research related to the three core research challenges of RPM identified in~\cite{lenobise20}: UI log segmentation, discovery of frequent (candidate) routines and discovery of automatable routines.
\subsection{UI Log Segmentation}
Given a UI log (i.e., a sequence of UIs), segmentation consists in identifying non-overlapping subsequences of UIs,
namely \emph{segments}, such that each subsequence represents the execution of a task performed by an employee from start to end.
In other words, segmentation searches for repetitive patterns in the UI log.
In an ideal scenario, we would observe only one unique pattern (the task execution) repeated a finite number of times.
However, in reality, this scenario is unlikely to materialize.
Instead, it is reasonable to assume that an employee performing the same task multiple times would make some mistakes or introduce variance in how the task is performed.
The problem of segmentation is similar to periodic pattern mining on time series.
While several studies addressed the latter problem over the past decades~\cite{cao2007discovery,zhu2017matrix},
most of them require information regarding the length of the pattern to discover or assume a natural period to be available (e.g., hour, day, week).
This makes the adaptation of such techniques to solve the problem of segmentation challenging unless periodicity and pattern length are known a priori.
Under the same class of problems, we find web session reconstruction, whose goal is to identify the beginning and the end of web navigation sessions in server log data (e.g., streams of clicks and web page navigation)~\cite{spiliopoulou2003framework}.
Methods for session reconstruction are usually based on heuristics that rely on structural organization of web sites or time intervals between events.
The former approach covers only the cases when all the user interactions are performed in the web applications,
while the latter approach assumes that users make breaks in-between two consecutive segments -- in our case, two routine instances.
Lastly, segmentation also relates to the problem of correlation of event logs for process mining.
In such logs, each event should normally include an identifier of a process instance (case identifier), a timestamp, an activity label, and possibly other attributes.
When the events in an event log do not contain explicit case identifiers, they are said to be uncorrelated.
Various methods have been proposed to extract correlated event logs from uncorrelated ones.
However, existing methods in this field either assume that a process model is given as input~\cite{DBLP:conf/caise/BayomieAE16} or that the underlying process is acyclic~\cite{DBLP:conf/bpm/FerreiraG09}.
Both of these assumptions are unrealistic in our setting: a process model is not available since we are precisely trying to identify the routines in the log, and a routine may contain repetition.
Recent work on UI log segmentation~\cite{DBLP:conf/icpm/Agostinelli20} proposes to use trace alignment between the logs and the corresponding interaction models to identify the segments. In practice, however, such interaction models are not available beforehand. In this article, we outline a segmentation approach that does not require any models as inputs nor does it require that the user specifies one or more explicit delimiters between segments (e.g.\ that the user specifies that a given symbol X represents the start and/or the end of a segment).
\subsection{Frequent Routine Discovery}
Dev and Liu~\cite{DBLP:conf/iui/DevL17} have noted that the problem of routine identification from (segmented) UI logs can be mapped to that of frequent pattern mining, a well-known problem in the field of data mining~\cite{han2007frequent}.
Indeed, the goal of routine identification is to identify repetitive (frequent) sequences of interactions, which can be represented as symbols.
In the literature, several algorithms are available to mine frequent patterns from sequences of symbols.
Depending on their output, we can distinguish two types of frequent pattern mining algorithms:
those that discover only exact patterns~\cite{lee2004efficient,ohlebusch2015alphabet} (hence vulnerable to noise),
and those that allow frequent patterns to have gaps within the sequence of symbols~\cite{wang2004bide,fumarola2016clofast} (hence noise-resilient).
Depending on their input, we can distinguish between algorithms that operate on a collection of sequences of symbols and those that discover frequent patterns from a single long sequence of symbols~\cite{ohlebusch2015alphabet}.
The former algorithms can be applied to segmented UI logs, while the latter can be applied directly to unsegmented ones.
However, techniques that identify patterns from a single sequence of symbols only scale up when identifying exact patterns.
While such approaches discover the frequently repeated routines, they do not analyze whether they are automatable.
In other words, these approaches focus on the discovery of the control-flow models instead of executable specifications.
The identification of frequent routines from sequences of actions is related to the problem of Automated Process Discovery (APD) \cite{DBLP:journals/tkde/AugustoCDRMMMS19},
which has been studied in the field of process mining.
Recent works~\cite{DBLP:conf/bpm/Geyer-Klingeberg18,jimenez2019method} show that RPA can benefit from process mining.
In particular, the work in \cite{jimenez2019method} proposes to apply traditional APD techniques to discover process models of routines captured in UI logs.
However, traditional APD techniques discover control-flow models, while, in the context of RPA,
we seek to discover executable specifications that capture the mapping between the outputs and the inputs of the actions performed during a routine.
\subsection{Discovery of Automatable Routines}
The discovery of automatable sequences of user interactions has been widely studied in the context of Web form and table auto-completion. For example, Excel's Flash Fill feature detects string patterns in the values of the cells in a spreadsheet and uses these patterns for auto-completion~\cite{DBLP:conf/popl/Gulwani11}. However, auto-completion techniques focus on identifying repetitions of keystrokes (sequences of characters). In this article, we look at routines that involve transferring data across fields in one or more applications as well as editing field values.
The discovery of data transfer routines that are amenable for RPA automation has been addressed in~\cite{bosco2019}. This latter paper proposes a technique to discover sequences of actions such that the inputs of each action in the sequence (except the first one) can be derived from the data observed in previous actions. However, this technique can only discover perfectly sequential routines,
and is hence not resilient to variability in the order of the actions,
whereas in reality, different users may perform the actions in a routine in a different order.
Another technique for routine identification~\cite{leopold2018identifying} attempts to identify candidate routines from textual documents -- an approach that is suitable for earlier stages of routine identification and could be used to determine which processes or tasks could be recorded and analyzed in order to identify routines.
In \cite{DBLP:conf/bpm/AgostinelliLMM20} the authors present an approach to automatically discover routines from UI logs and automate them in the form of scripts.
This approach, however, assumes that all the actions within a routine are automatable.
In practice, it is possible that some actions have to be performed manually and cannot be automated.
The approach presented in \cite{gao2019automated} aims at extracting rules from segmented UI logs that can be used to fill in forms automatically.
However, this approach only discovers branching conditions that specify whether a certain activity has to be performed or not (e.g., check the box of the form).
It focuses only on copy-paste operations and does not identify more complex manipulations.
In previous work~\cite{leno2020aaai}, we mapped the problem of discovering data transfer routines to the problem of discovering data transformations. In this article, we reuse this idea and extend it to tackle the problem of assessing if and to what extent a frequent (candidate) routine is automatable and, if so, producing an executable specification.
\section{Introduction}
Batch codes were originally proposed by Ishai \emph{et al.}~\cite{Ishai} for load balancing in distributed systems.
One particular class of batch codes that we are interested in is linear (computational) batch codes~\cite{Lipmaa, dimakis-batch, Zhang-Skachek, Vardy} where the data is viewed as elements of a finite field written as a vector, and it is encoded using a linear transformation of that vector.
Codes for private information retrieval (or PIR codes, in short) were proposed by Fazeli, Vardy and Yaakobi~\cite{Fazeli}. It was suggested therein to emulate standard private information retrieval protocols using a special layer (code) which maps between the requests
of the users and the data which is actually stored in the database.
Linear batch codes and PIR codes have many similarities with locally-repairable codes~\cite{dimakis-survey},
which are used for repair of lost data
in distributed data storage systems. The main difference, however, is that in locally-repairable codes, it is
coded symbols that are to be repaired, while in batch codes and PIR codes it is information symbols that are to be restored~\vit{\cite{skachek}}.
\section{Notation and Preliminaries}
\subsection{Batch and PIR Codes}
We denote by ${\mathbb N}$ the set of natural numbers. For $n \in {\mathbb N}$, define $[n] \triangleq \{1, 2, \cdots, n\}$.
The notation ${\mbox{\boldmath $I$}}$ is used for an identity matrix.
In this work, we consider only (primitive, multiset) batch codes as defined in~\cite{Vardy}.
\begin{definition}[\hspace{-0.1ex}\cite{Vardy}]
\label{batch}
An $(n,k, t)$ batch code ${\mathcal{C}}$ over a finite alphabet $\Sigma$ is defined by
an encoding mapping ${\mathsf{C}} \; : \; \Sigma^k \rightarrow \Sigma^n$, and a decoding mapping ${\mathsf{D}} \; : \; \Sigma^n \times [k]^t\rightarrow \Sigma^t$, such that
\begin{enumerate}
\item
For any ${\mbox{\boldmath $x$}} \in \Sigma^k$ and
$i_1, i_2, \cdots, i_t \in [k] \; $,
\[
{\mathsf{D}}\left({\mbox{\boldmath $y$}}={\mathsf{C}}({\mbox{\boldmath $x$}}), i_1, i_2, \cdots, i_t\right) = (x_{i_1}, x_{i_2}, \cdots, x_{i_t}). \; \]
\item
The symbols in the query $(x_{i_1}, x_{i_2}, \cdots, x_{i_t})$ can be reconstructed from
$t$ respective pairwise disjoint recovery sets of symbols of ${\mbox{\boldmath $y$}}$ (the symbol $x_{i_\ell}$ is reconstructed from the $\ell$-th recovery set for each $\ell$, $1 \le \ell \le t$).
\end{enumerate}
\label{def:batch}
\end{definition}
Let ${\mathbb{F}} = {\mathbb{F}}_q$ be a finite field with $q$ elements, where $q$ is a prime power,
and ${\mathcal{C}}$ be a linear $[n,k]$ code over ${\mathbb{F}}$. Denote the redundancy $\rho \triangleq n-k$.
For a \emph{linear batch code}, the encoding of ${\mathsf{C}}$ is given as a multiplication
by a $k \times n$ generator matrix ${\mbox{\boldmath $G$}}$ over ${\mathbb{F}}$ of an information vector ${\mbox{\boldmath $x$}} \in {\mathbb{F}}^k$,
\begin{equation}
{\mbox{\boldmath $y$}} = {\mbox{\boldmath $x$}} \cdot {\mbox{\boldmath $G$}} \; ;~~ {\mbox{\boldmath $y$}} \in {\mathbb{F}}^n .
\label{def:linear}
\end{equation}
A \emph{linear batch code} with the parameters $n$, $k$ and $t$ over ${\mathbb{F}}_q$, where $t$ is the number of queried symbols, is denoted as an \emph{$[n,k,t]_q$-batch code}.
\begin{definition}[\hspace{-0.1ex}\cite{Fazeli}]
\emph{Linear PIR codes} are defined similarly to linear primitive multiset batch codes, with the difference that the supported queries are of the form $(x_i, x_i, \cdots, x_i), \; i \in [k],$
(and not $(x_{i_1}, x_{i_2}, \cdots, x_{i_t}), \; i_1, i_2, \cdots, i_t \in [k]$ as in batch codes).
\end{definition}
In what follows we only consider linear PIR codes. \vit{For constructions of PIR codes see, for example,~\cite{Lin, Vajha}.}
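As a small illustrative toy example (ours, not taken from the cited constructions), the following LaTeX fragment exhibits a code that is both a PIR code and a batch code with $t = 2$:

```latex
For instance, the binary code with generator matrix
\[
{\mbox{\boldmath $G$}} =
  \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}
\]
encodes ${\mbox{\boldmath $x$}} = (x_1, x_2)$ as
${\mbox{\boldmath $y$}} = (x_1, x_2, x_1 + x_2)$.
The query $(x_1, x_1)$ can be served by the pairwise disjoint recovery sets
$\{y_1\}$ and $\{y_2, y_3\}$, since $x_1 = y_2 + y_3$ over ${\mathbb{F}}_2$;
the query $(x_2, x_2)$ is handled symmetrically.
Hence this code is a $[3,2,2]_2$ PIR code. It is, in fact, also a
$[3,2,2]_2$ batch code, as the mixed query $(x_1, x_2)$ is served by the
disjoint recovery sets $\{y_1\}$ and $\{y_2\}$.
```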
\subsection{Graphs and Hypergraphs}
Let $W^{(r)}$, $r \ge 2$, denote the set of all unordered $r$-tuples of distinct elements of the set $W$.
A \emph{graph} $G(V,E)$ consists of a finite set $V$, called the \emph{vertex set} and a finite set $E\subseteq V^{(2)}$ of pairs of vertices, called the \emph{edge set}. The graph $G(V,E)$ is \emph{bipartite} with \emph{bipartition} (or \emph{parts}) $(A,B)$ if $A\cup B=V$, $A\cap B = \varnothing$, and $|A\cap e|=1$ and $|B\cap e| = 1$ for every edge $e\in E$. We denote the bipartite graph with distinguished parts $A$ and $B$ as $G(A,B,E)$ where we call $A$ the \emph{left part} and $B$ the \emph{right part}. A \emph{$b$-cycle} in a graph $G(V,E)$ is a cyclic sequence of $b$ vertices and $b$ edges, alternatingly between vertices and edges, such that each edge consists precisely of the two vertices on each side of it in the sequence. A bipartite graph $G(A,B,E)$ is \emph{left-regular} if all \emph{left degrees} $\d (a)\triangleq |\{e\in E\, : \, a\in e\} |$, where $a\in A$, are equal.
A \emph{hypergraph} $\mathcal{G}(V,E)$ consists of a finite set $V$ of vertices and a finite collection $E$ of subsets of $V$, called (hyper)edges. The hypergraph is \emph{$r$-uniform}, or an \emph{$r$-graph}, if each hyperedge consists of the same number $r$ of vertices, that is, $E \subseteq V^{(r)}$. Thus, a graph can be viewed as $2$-uniform hypergraph.
A \emph{Berge cycle} in a hypergraph is a sequence $(e_1,v_1,e_2,v_2,\ldots,v_b,e_{b+1})$ where $e_1,e_2,\ldots,e_b$ are distinct hyperedges, $v_1,v_2,\ldots,v_b$ are distinct vertices, $v_{i-1}, v_i\in e_i$ for all $i$ (we have taken all indices modulo $b$ when defining the sequence) and $e_1=e_{b+1}$.
A hypergraph is \emph{Berge-disconnected} if its vertex set $V$ can be partitioned into two non-empty sets $V=V_1\cup V_2$ such that, for each hyperedge $e$, either $e\cap V_1=\varnothing$ or $e\cap V_2=\varnothing$; it is \emph{Berge-connected} if it is not disconnected. A hypergraph has \emph{Berge girth} equal $k$ if (a) it contains a Berge cycle with $k$ hyperedges; (b) it contains no Berge cycles with fewer than $k$ hyperedges. If a subset of vertices is allowed several (a finite number of) times as a hyperedge, we have a multihypergraph.
We note that a multi-$r$-graph for $r\geq 2$ with Berge girth at least 3 is necessarily a simple hypergraph, i.e.~no subset of vertices appears as an edge several times.
The following definition of the correspondence between bipartite graphs and (multi)hypergraphs will be instrumental.
\begin{definition}
\label{def:correspondence}
With a (multi)hypergraph $\mathcal{G}(V,E)$ one can associate the bipartite \emph{incidence graph} $G(E,V,I)$ with left part $E$ and right part $V$ where $\{e,v\}$ is an edge, i.e.~$\{e, v\}\in I$ in $G$ if and only if $v\in e$ in $\mathcal{G}$. By going backwards, given a bipartite graph $G(E,V,I)$ we construct a (multi)hypergraph $\mathcal{G}(V,E)$ by identifying each $e\in E$ with the set $\{v\in V\,|\,\{e,v\}\in I\}$.
\end{definition}
Therefore, multihypergraphs are in one-to-one correspondence with bipartite graphs.
A multihypergraph is Berge-connected if and only if its incidence graph is connected; there is a one-to-one correspondence between Berge cycles with $k$ hyperedges in the multihypergraph and cycles of length $2k$ in the incidence graph.
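This correspondence can be checked mechanically. The following Python sketch (illustrative only; the small hypergraph and the brute-force girth routine are our own, not taken from any construction in this paper) builds the incidence graph of a 3-graph whose shortest Berge cycle uses 3 hyperedges and verifies that the incidence graph has girth $2\cdot 3=6$.

```python
def incidence_graph(hyperedges):
    """Bipartite incidence graph: left nodes ('e', i) are hyperedges,
    right nodes ('v', x) are vertices; ('e', i) ~ ('v', x) iff x in e_i."""
    adj = {}
    for i, e in enumerate(hyperedges):
        adj[('e', i)] = set()
        for v in e:
            adj[('e', i)].add(('v', v))
            adj.setdefault(('v', v), set()).add(('e', i))
    return adj

def girth(adj):
    """Shortest cycle length, by BFS from every vertex."""
    best = float('inf')
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        queue = [root]
        while queue:
            u = queue.pop(0)
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    queue.append(w)
                elif parent[u] != w:
                    best = min(best, dist[u] + dist[w] + 1)
    return best

# A 3-graph whose shortest Berge cycle uses 3 hyperedges
# (consecutive hyperedges share exactly one vertex):
triangle = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]
print(girth(incidence_graph(triangle)))  # 6, i.e. twice the Berge girth
```

Two hyperedges sharing two vertices form a Berge 2-cycle, which shows up as a 4-cycle in the incidence graph.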
An $r$-graph $\mathcal{G}'(V',E')$ is a \emph{sub-$r$-graph} of an $r$-graph $\mathcal{G}(V,E)$ if $V'\subseteq V$ and $E'\subseteq \{e\in E \,|\, e\subseteq V'\}$. We say that the sub-$r$-graph is \emph{induced} by the vertex set \vit{$V'$, if in addition, we have} $E'= \{e\in E \,|\, e\subseteq V'\}$. Similarly we say that a subset of hyperedges $E'$ \emph{induces} the vertex set $\bigcup_{e\in E'} e$.
\subsection{Graph-based Batch and PIR Codes}
Let ${\mathcal{C}}$ be an $[n, k, t]_q$ batch (PIR) code defined by a systematic encoding matrix ${\mbox{\boldmath $G$}} = \left[ {\mbox{\boldmath $I$}} | {\mbox{\boldmath $A$}} \right]$.
Take ${\mbox{\boldmath $y$}} \in {\mathcal{C}}$. The symbols of ${\mbox{\boldmath $y$}}$ corresponding to the systematic part of ${\mbox{\boldmath $G$}}$ are called information
symbols, and the remaining symbols are called parity symbols. The following bipartite graph representation was proposed
in~\cite{dimakis-batch}: let $G(A,B,E)$ be a bipartite graph, where $A$ is the set of the information symbols,
$B$ is the set of the parity symbols, and
$E = \Big\{ \{ u, v \} : u \in A, v\in B, u \mbox{ participates in parity } v \Big\}$.
\vspace{-1ex}
\begin{theorem}(\hspace{-0.1ex}\cite[Theorem 1 and Lemma 2]{dimakis-batch})
\label{theorem1-dimakis}
Let ${\mathcal{C}}$ be an $[n, k]$ systematic code represented by the bipartite graph $G(A, B, E)$.
Assume that there exists an \emph{induced subgraph} $H(A, B', E')$ of $G$, that is, $B' \subseteq B$ and $E' = \{e\in E \, :\, |e\cap B'|=1 \}$, such that:
\begin{itemize}
\item[(i)] Each node in $A$ has degree at least $t-1$ in the bipartite graph $H$.
\item[(ii)] The graph $H$ has girth $\ge 8$ (respectively, $\ge 6$).
\end{itemize}
Then, ${\mathcal{C}}$ is an $[n, k, t]$ batch code (respectively, PIR code).
\end{theorem}
It follows from Theorem~\ref{theorem1-dimakis} that constructions of left-regular bipartite graphs without short cycles yield constructions of batch and PIR codes.
In what follows, we use this approach in order to construct \vit{batch and PIR codes with good parameters}.
Specifically, we use known constructions of good hypergraphs, which can be
mapped to bipartite graphs without short cycles, in order to construct good codes.
\section{Asynchronous batch codes}
In this section, we introduce a new special family of batch codes, termed \emph{asynchronous batch codes}.
Assume that ${\mathcal{C}}$ is a linear $[n, k, t]$ batch code as in Definition \ref{batch}, used for retrieving a batch of $t$ symbols $(x_{\ell_1}, x_{\ell_2}, \cdots, x_{\ell_t})$, $\ell_{i} \in [k]$, in parallel from a coded database that consists of $n$ servers, such that at most one symbol is retrieved from each server. Now assume that the response time of the servers varies between requests, so that some symbol $x_{\ell_j}$ (w.l.o.g.) can be retrieved faster than the other symbols. In the asynchronous retrieval mode, once $x_{\ell_j}$ has been retrieved, it is possible to
retrieve any other requested symbol $x_{\ell_{t+1}}$, $\ell_{t+1} \in [k]$, in parallel with the retrieval of $(x_{\ell_1}, x_{\ell_2}, \cdots, x_{\ell_{j-1}}, x_{\ell_{j+1}}, \cdots, x_{\ell_t})$, without reading more than one symbol from each server. In that way, asynchronous batch codes support (asynchronous) retrieval of $t$ symbols in parallel. We proceed with a formal definition.
\begin{definition}
An asynchronous (linear primitive multiset) $[n, k, t]$ batch code ${\mathcal{C}}$ is a (linear primitive multiset) batch code with the additional property that for any legal query $(x_{\ell_1}, x_{\ell_2}, \cdots, x_{\ell_t})$, for all $\ell_{i} \in [k]$,
it is always possible to replace $x_{\ell_j}$ by some $x_{\ell_{t+1}}$, $\ell_{t+1} \in [k]$, such that
$x_{\ell_{t+1}}$ is retrieved from the servers not used for retrieval of $x_{\ell_1}, x_{\ell_2}, \cdots, x_{\ell_{j-1}}, x_{\ell_{j+1}}, \cdots, x_{\ell_t}$, without reading more than one symbol from each server.
\end{definition}
\vit{
\begin{example}
Consider the systematic $[8,4,3]_2$ batch code ${\mathcal{C}}$ generated by the matrix
\[
{\mbox{\boldmath $G$}} \; = \; \left(
\begin{matrix}
1&0&0&0&1&0&1&0 \\
0&1&0&0&1&0&0&1 \\
0&0&1&0&0&1&1&0 \\
0&0&0&1&0&1&0&1
\end{matrix}
\right) \; .
\]
The query $(x_1,x_1,x_1)$ can be retrieved from the following disjoint sets of symbols:
$x_1 = y_1$, $x_1 = y_2 + y_5$, $x_1 = y_3 + y_7$.
Assume that the first queried symbol $x_1$ has already been retrieved (while the last two queries are still being served), and the new query $x_2$ has arrived. Then, we can use the recovery $x_2 = y_4 + y_8$ for the newcomer $x_2$, without affecting the recovery sets of the other two queries.
It can be shown that for any initial selection of the recovery sets, and for any finished query and new query,
there is always a way to select disjoint recovery sets. Therefore, ${\mathcal{C}}$ is an asynchronous $[8,4,3]_2$ batch code.
\end{example}
}
It is straightforward to see that any asynchronous $[n, k, t]$ batch code is an $[n, k, t]$ batch code. The converse, however, does not always hold.
\begin{example}
Consider batch codes, which are obtained by taking simplex codes as suggested in \cite{WKC2015}.
For example, ${\mathcal{C}}$, formed by the matrix
\[
{\mbox{\boldmath $G$}} = \left(
\begin{matrix}
1&0&0&1&1&0&1 \\
0&1&0&1&0&1&1 \\
0&0&1&0&1&1&1
\end{matrix}
\right)
\]
is a $[7,3,4]_2$ batch code. Assume that the query $(x_1, x_1, x_1, x_1)$ was submitted by the users. Then, one copy of $x_1$
is retrieved from $y_1$, and for each of the remaining three copies of $x_1$, at least two symbols of ${\mbox{\boldmath $y$}}$ have to be used.
Assume that the query that uses $y_1$ was served, but the remaining queries are still being served.
If the next query $x_2$ arrives, it is impossible to serve it without accessing at least one of the servers containing
$y_2, \ldots, y_7$ twice. Therefore, ${\mathcal{C}}$ is not an asynchronous $[7,3,4]_2$ batch code.
\end{example}
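The failure scenario in this example can also be verified exhaustively. The following Python sketch (illustrative; it enumerates all linear recovery sets over $\mathrm{GF}(2)$) confirms that every choice of four disjoint recovery sets for $(x_1,x_1,x_1,x_1)$ must use the singleton $\{y_1\}$, and that after this query is served, $x_2$ cannot be recovered from the remaining free server.

```python
from itertools import product

k, n = 3, 7
# Columns of G as bit-vectors over GF(2), x1 in the most significant bit.
cols = [0b100, 0b010, 0b001, 0b110, 0b101, 0b011, 0b111]  # y1..y7

def decoding_sets(target):
    """All coordinate subsets whose columns XOR to `target`."""
    out = []
    for mask in range(1, 1 << n):
        acc = 0
        for i in range(n):
            if mask >> i & 1:
                acc ^= cols[i]
        if acc == target:
            out.append(frozenset(i for i in range(n) if mask >> i & 1))
    return out

x1_sets = decoding_sets(0b100)
x2_sets = decoding_sets(0b010)

def disjoint(sets):
    seen = set()
    for s in sets:
        if seen & s:
            return False
        seen |= s
    return True

found = stuck = 0
for choice in product(x1_sets, repeat=4):   # serve (x1, x1, x1, x1)
    if not disjoint(choice):
        continue
    found += 1
    assert frozenset({0}) in choice         # the singleton {y1} is forced
    rest = [s for s in choice if s != frozenset({0})]
    # the query served from y1 finished; can a new request x2 be served?
    if not any(disjoint(rest + [s]) for s in x2_sets):
        stuck += 1
print(found > 0, stuck == found)  # True True
```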
It turns out that the conditions in Theorem~\ref{theorem1-dimakis} yield asynchronous batch codes. More formally:
\begin{theorem}
\label{thrm-asynchronous}
Let ${\mathcal{C}}$ be an $[n, k]$ systematic code represented by the bipartite graph $G(A, B, E)$.
Assume that there exists an \emph{induced subgraph} $H(A, B', E')$ of $G$, that is, $B' \subseteq B$ and $E' = \{e\in E \, :\, |e\cap B'|=1 \}$, such that:
\begin{itemize}
\item[(i)] Each node in $A$ has degree at least $t-1$ in the bipartite graph $H$.
\item[(ii)] The graph $H$ has girth at least 8.
\end{itemize}
Then, ${\mathcal{C}}$ is an {\bf asynchronous} $[n, k, t]$ batch code.
\end{theorem}
The omitted proof follows from \vit{\cite[Lemma 2]{dimakis-batch}}:
\begin{lemma}
\label{lemma3-dimakis}
Let ${\mathcal{C}}$ be an $[n, k]$ systematic code represented by the bipartite graph $G(A, B, E)$.
Assume that there exists an \emph{induced subgraph} $H(A, B', E')$ of $G$, that is, $B' \subseteq B$ and $E' = \{e\in E \, :\, |e\cap B'|=1 \}$, such that:
\begin{itemize}
\item[(i)] Each node in $A$ has degree at least $t-1$ in the bipartite graph $H$.
\item[(ii)] The graph $H$ has girth at least 8.
\end{itemize}
Then, each message symbol has at least $t$ disjoint repair groups (including the group formed by the information symbol itself). Moreover, for any $i,j \in [k]$, $i \neq j$, each of the disjoint repair groups for the message symbol $x_i$ has common symbols
with at most one of the disjoint repair groups for the message symbol $x_j$.
\end{lemma}
\begin{definition}
An asynchronous $[n,k,t]$ batch code, which can be represented as in Theorem~\ref{thrm-asynchronous}, is called a {\bf graph-based asynchronous} batch code.
\end{definition}
\section{Case $t=3$}
In this section, we start by considering the simple case $t=3$. We derive a tight lower bound on
the optimal redundancy of graph-based asynchronous batch codes.
\begin{theorem}
Let ${\mathcal{C}}$ be a graph-based asynchronous $[n,k,t \ge 3]$ batch code.
Then, its redundancy satisfies $\rho \ge 2\sqrt{k}$.
\end{theorem}
\begin{proof}
Let $\hat{G} = (A, B, \hat{E})$ be a bipartite graph that corresponds to the code ${\mathcal{C}}$.
Then, the girth of $\hat{G}$ is $\ge 8$, and $\d(a) \ge 2$ for $a \in A$.
Also, $k=|A|$, $n-k=|B|$, and $t \ge 3 $.
First, we delete edges of $\hat{G}$ such that after the deletion $\d(a)=2$ for all $a \in A$, and denote the new graph by $G$ (note that this changes the code). We construct a new (non-bipartite) graph, $G' = (V', E')$, from $G$, by following the correspondence in
Definition~\ref{def:correspondence}. Since the left degree of $G$ is 2, the result is indeed a graph (rather than a hypergraph).
Specifically, take $V' = B$. For each $u \in A$,
replace $u$ and two edges $\{u, v_1\}$ and $\{u, v_2\}$ incident with it by a new edge $e_u = \{ v_1, v_2 \}$.
The construction implies that there is a cycle of length $2\ell$ in $G$ if and only if there is a cycle of length $\ell$ in $G'$.
Thus, $G$ has girth $\ge 8$ if and only if $G'$ has girth $\ge 4$.
By Mantel's Theorem~\cite{mantel} (see also: Tur\'an's Theorem~\cite{turan}), this implies that the number of edges $|E'|$ satisfies $ |E'| \le |V'|^2/4 \; .$
Since $|A| = k$ and $|B| = n-k$, we obtain that $|V'| = n-k$ and $|E'| = k$. Therefore, the redundancy $\rho = n-k \ge 2\sqrt{k}$.
The redundancy of the original code is at least as large.
\end{proof}
This bound is in fact tight.
For example, consider a complete bipartite graph $G'$ with a vertex set $V' = A' \cup B'$, $A' \cap B' = \varnothing$, $|A'| = |B'|$. This graph has $|V'|^2/4$ edges in total, and girth 4. Moreover, this graph has the largest possible
number of edges for any girth-4 graph with $|V'|$ vertices, as seen by Mantel's Theorem~\cite{mantel}.
Next, we convert this graph into a bipartite graph $G$
by using the inverse of the above mapping. Namely, each edge is replaced
by a triple ``edge, vertex, edge''. We obtain that $G$ is a left-regular bipartite graph of left degree 2 with $|A| = |V'|^2/4$ and $|B| = |V'|$. The graph $G$ has girth 8 and hence it yields an asynchronous batch code having length $n = |V'|^2/4+ |V'|$, number of information symbols $k=|V'|^2/4$, redundancy $\rho = 2\sqrt{k}=|V'|$, and $t=3$.
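This construction can be instantiated directly. The following Python sketch (illustrative, with a brute-force girth routine of our own) subdivides each edge of the complete bipartite graph $K_{s,s}$ and checks the claimed parameters for $s=4$: left degree 2, girth 8, $k=s^2$ and $\rho=2s=2\sqrt{k}$.

```python
from math import isqrt

def girth(adj):
    """Shortest cycle length, by BFS from every vertex."""
    best = float('inf')
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        queue = [root]
        while queue:
            u = queue.pop(0)
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    queue.append(w)
                elif parent[u] != w:
                    best = min(best, dist[u] + dist[w] + 1)
    return best

s = 4                                    # |A'| = |B'| = s, so |V'| = 2s
# Parity nodes are the vertices of K_{s,s}; subdividing each of its s*s
# edges by a new left (information) node gives the bipartite graph G.
adj = {('p', side, i): set() for side in 'ab' for i in range(s)}
for i in range(s):
    for j in range(s):
        e = ('info', i, j)
        adj[e] = {('p', 'a', i), ('p', 'b', j)}
        adj[('p', 'a', i)].add(e)
        adj[('p', 'b', j)].add(e)

k, rho = s * s, 2 * s                    # information and parity symbols
print(k, rho, girth(adj))                # 16 8 8
assert rho == 2 * isqrt(k)               # the bound rho >= 2*sqrt(k) is met
```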
We remark that the lower bound $\rho \ge \sqrt{2k} + O(1)$ on the optimal redundancy of PIR codes (for $t\ge 3$)
was recently obtained by Rao and Vardy in~\cite{Rao-Vardy}, and their result implies the same lower bound on the redundancy
of linear batch codes. Moreover, they show that this bound is tight for PIR codes. In this section,
we showed a tighter lower bound $\rho \ge 2\sqrt{k}$ for graph-based asynchronous batch codes (for all $t \ge 3$), and presented an explicit construction of asynchronous batch codes for $t = 3$ that attain this bound. As we show in the sequel,
for graph-based asynchronous batch codes with $t > 3$ the lower bound on $\rho$ can be further tightened.
In the sequel, we also consider a modified code for general $t$.
\section{Hypergraph Theory}
In their 1973 papers, Brown, Erd\H os and S\'os~\cite{brown1}, \cite{brown2} posed the following extremal combinatorial problem on $r$-graphs.
\begin{problem}
Let $f^{(r)}(\eta;\kappa,s)$ denote the smallest $m$ such that every $r$-graph on $\eta$ vertices with $m$ edges contains at least one sub-$r$-graph on $\kappa$ vertices with $s$ edges. What are the bounds on $f^{(r)}(\eta;\kappa,s)$ for fixed $r$, $\kappa$ and $s$?
\end{problem}
Observe that $F^{(r)}(\eta;\kappa,s) \triangleq f^{(r)}(\eta;\kappa,s)-1$ is the maximum size (number of edges) of an $r$-graph on $\eta$ vertices \vit{with no set of $\kappa$ vertices containing} $s$ or more hyperedges. The resolution of the case $f^{(3)}(\eta;6,3)$, known as the $(6,3)$-problem, by Ruzsa and Szemer\'edi~\cite{ruzsa} is a classical result in extremal combinatorics. Erd\H os, Frankl and R\"odl~\cite{erdos} extended this result to any fixed $r$, also giving an easier construction for the lower bound, solving the so-called $(3r-3,3)$-problem. There are various later generalizations of~\cite{ruzsa} and~\cite{erdos}, see for example~\cite{alon} and the references therein, and the survey~\cite{furedi}.
In what follows, we show that finding the maximum number of hyperedges of a hypergraph with a given Berge girth is essentially a generalization of the $(6,3)$-problem for 3-graphs, called the $(\kappa r- \kappa, \kappa)$ problem for $r$-graphs in this terminology. Then, we apply the resolution of the $(3r-3,3)$ problem by~\cite{erdos} to batch codes.
\begin{theorem}\label{berge girth}
Let $B^{(r)}(\eta,\kappa)$ be the maximum number of hyperedges in an $r$-graph with $\eta$ vertices and Berge girth at least $\kappa+1$. Then $F^{(r)}(\eta;\kappa r- \kappa, \kappa) = B^{(r)}(\eta,\kappa)$.
\end{theorem}
We prove this theorem in Lemmas~\ref{help}--\ref{kappa_lemma}.
\begin{lemma}\label{help}
For a Berge-connected hypergraph $\mathcal{G}(V,E)$ with $|V|\geq 2$ we have:
\begin{enumerate}
\item $\sum_{e\in E} (|e|-1) \geq |V|-1.$
\item $\mathcal{G}(V,E)$ contains no Berge cycles (is a Berge tree) if and only if
$\sum_{e\in E} (|e|-1) = |V|-1$.
\item
$\mathcal{G}(V,E)$ contains exactly one Berge cycle if and only if $\sum_{e\in E} (|e|-1) = |V|.$
\end{enumerate}
\end{lemma}
Lemma \ref{help} can be proved using properties of the incidence graph; the proof is omitted.
\begin{definition}
\label{def:condition}
A hypergraph \emph{satisfies the $(\kappa r-\kappa,\kappa)$-condition} if no set of $\kappa r-\kappa$ of its vertices contains $\kappa$ or more hyperedges.
\end{definition}
\begin{comment} As we shall see, an $r$-graph satisfying the $(\kappa r-\kappa,\kappa)$-condition can be modified to additionally have Berge girth at least $\kappa+1$ while keeping the same number of hyperedges; an $r$-graph of Berge girth at least $\kappa+1$ already satisfies the $(\kappa r-\kappa,\kappa)$-condition.
\end{comment}
\begin{lemma}
\label{girth_lemma}
An $r$-graph of Berge girth at least $\kappa+1$ satisfies the $(\kappa r-\kappa,\kappa)$-condition.
\end{lemma}
\begin{proof}
Consider any $\kappa$ hyperedges of this graph. They induce no Berge cycle. For each of the Berge-connected components (maximal connected subhypergraphs) ${\mathcal{G}}'(V',E')$ of the hypergraph induced by these $\kappa$ hyperedges, we have $\sum_{e\in E'} (|e|-1) = |V'|-1$ by Condition 2 of Lemma~\ref{help}. Summing over the components, we obtain $\sum_{e} (|e|-1) = \kappa(r-1) = |V''|-c$ for the hypergraph ${\mathcal{G}}''(V'',E'')$ induced by these $\kappa$ hyperedges, where $c\geq 1$ is the number of Berge-connected components. Therefore, the number of vertices induced by these $\kappa$ hyperedges is $\kappa(r-1)+c > \kappa r-\kappa$. Thus, the hypergraph satisfies the $(\kappa r-\kappa,\kappa)$-condition.
\end{proof}
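The counting argument can be illustrated on a small example. The following Python sketch (the 3-graph and the girth routine are our own illustrations) checks that a 3-graph of Berge girth 4 satisfies the $(6,3)$-condition, i.e., any $\kappa=3$ of its hyperedges span more than $\kappa(r-1)=6$ vertices.

```python
from itertools import combinations

def girth(adj):
    """Shortest cycle length, by BFS from every vertex."""
    best = float('inf')
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        queue = [root]
        while queue:
            u = queue.pop(0)
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    queue.append(w)
                elif parent[u] != w:
                    best = min(best, dist[u] + dist[w] + 1)
    return best

def incidence(hyperedges):
    """Bipartite incidence graph of a hypergraph."""
    adj = {}
    for i, e in enumerate(hyperedges):
        adj[('e', i)] = set()
        for v in e:
            adj[('e', i)].add(('v', v))
            adj.setdefault(('v', v), set()).add(('e', i))
    return adj

# A 3-graph of Berge girth 4 (one Berge 4-cycle, no shorter Berge cycles).
H = [{1, 2, 3}, {4, 5, 6}, {1, 4, 7}, {2, 5, 8}]
r, kappa = 3, 3
assert girth(incidence(H)) == 2 * 4      # Berge girth 4 >= kappa + 1
# (kappa*r - kappa, kappa)-condition: any kappa hyperedges span
# more than kappa*(r-1) = 6 vertices.
for triple in combinations(H, kappa):
    assert len(set().union(*triple)) > kappa * (r - 1)
print('condition holds')
```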
\begin{lemma}
An $r$-graph that satisfies the $(\kappa r-\kappa,\kappa)$-condition can be changed (its hyperedges can be re-wired) so that it still has the same number of hyperedges, still satisfies the $(\kappa r-\kappa,\kappa)$-condition and has Berge girth at least $\kappa+1$.
\label{kappa_lemma}
\end{lemma}
\begin{proof}
If an $r$-graph satisfies the $(\kappa r-\kappa,\kappa)$-condition, then from Definition~\ref{def:condition} the total number of vertices used by any $\kappa$ hyperedges is at least $\kappa(r-1)+1$. Consider two cases.
Case 1: $\;$ If the hypergraph induced by these hyperedges is connected, then $\kappa(r-1)=\sum_{e} (|e|-1) \geq |V|-1 \geq \kappa(r-1)$, so equality holds and, by Condition 2 of Lemma~\ref{help}, it contains no Berge cycles. In that case, there is no cycle with $\le \kappa$ hyperedges.
Case 2: $\;$ If the hypergraph induced by these hyperedges is disconnected, consider a Berge-connected component which contains some short Berge cycles. This component has fewer than $\kappa$ hyperedges, since otherwise there are $\kappa$ of its connected hyperedges violating the $(\kappa r-\kappa,\kappa)$-condition.
Next, take a vertex $v$ in two adjacent hyperedges $e$ and $e'$ of a cycle of $\le \kappa$ hyperedges, and re-wire $e$ by deleting $v$ from it and adding into $e$ another vertex from outside the connected component.
This procedure strictly reduces the number of connected components with fewer than $\kappa$ hyperedges (an isolated vertex is a connected component by itself), therefore we can only repeat it a finite number of times, and eventually we will have no Berge cycles on $\kappa$ or fewer hyperedges (see Lemma~\ref{help}).
\end{proof}
\section{PIR codes from hypergraphs of girth $\ge 3$}
\begin{definition}
A $\tau$-$(\eta,r,\lambda)$ packing design is an $r$-graph on $\eta$ vertices (called points), whose hyperedges (called blocks) are such that each $\tau$-tuple of points is contained in at most $\lambda$ blocks.
\end{definition}
Consider an $r$-graph ${\mathcal{G}}(V,E)$, where $V$ is a point set and $E$ is a block set, $|V|=\eta$.
An $r$-graph ${\mathcal{G}}(V,E)$ of Berge girth at least 3 is equivalently a \emph{$2$-$(\eta,r,1)$ packing design}. The maximum size (number of blocks) $D(\eta,r)$ of a packing design is upper-bounded by the well-known improved 1st and 2nd Johnson bounds~\cite{johnson2}, see also~\cite{mills}.
It follows from Keevash's result on the existence of designs~\cite{keevash}, which uses pseudorandomness arguments, that for all large enough $\eta$, there is a packing design attaining either the improved 1st or 2nd Johnson bound (see also~\cite{horsley}, which refers to an earlier version of~\cite{keevash}).
\vit{An interesting special case is when} each pair of points is contained in a unique block. \vit{In that case we} obtain a \emph{Steiner 2-design}, also known as a combinatorial \emph{$2$-$(\eta,r,1)$ block design}, or an \emph{$(\eta,r,1)$-BIBD (balanced incomplete block design)}. Compared to packing designs, Steiner 2-designs are much rarer, as they are simultaneously covering designs. Fazeli, Vardy and Yaakobi~\cite{Fazeli} use Steiner 2-designs to construct PIR codes. The construction works verbatim if one starts with a packing design. Following Wilson~\cite{wilson1}-\cite{wilson3}, the authors of~\cite{Fazeli} obtain the asymptotic redundancy of PIR codes $\rho = \Theta(\sqrt{k})$. Wilson~\cite{wilson1}-\cite{wilson3}
is concerned only with the asymptotics, while concrete packing designs produce concrete PIR codes.
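The smallest nontrivial instance of such a design is the Fano plane. The following Python sketch (illustrative) verifies that it is a $2$-$(7,3,1)$ design, and hence a maximum packing design, with $\binom{7}{2}/\binom{3}{2}=7$ blocks; in the construction of~\cite{Fazeli}, as indicated above, the 7 points play the role of parity symbols and the 7 blocks yield information symbols.

```python
from itertools import combinations

# The Fano plane: the 7 lines of PG(2,2), a 2-(7,3,1) design.
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7},
          {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
eta, r = 7, 3
# Every pair of points lies in exactly one block (lambda = 1):
for pair in combinations(range(1, eta + 1), 2):
    assert sum(set(pair) <= b for b in blocks) == 1
# Block count of a Steiner 2-design: C(eta,2) / C(r,2)
assert len(blocks) == (eta * (eta - 1) // 2) // (r * (r - 1) // 2)
print(len(blocks))  # 7 blocks -> 7 information symbols on 7 parity symbols
```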
\begin{comment}
For very small values of $r$, the existing constructions giving packing designs are quite good. Generally,
in these cases such $\eta$ that Steiner 2-designs exist, are quite frequent.
For example, start with any number of parity symbols, and build a Steiner 2-design on a subset containing almost all parity symbols (while ignoring the other parity symbols; see, for example,~\cite{mills}).
By ignoring a small number $\eta-\eta'$ of parity symbols, we obtain an optimal graph-based PIR code with ${\eta'\choose 2} / {r\choose 2}$ information symbols and $\eta$ parity symbols.
\vit{
We obtain constructions of families of PIR codes with $k$ information symbols, whose redundancy $\rho = \rho(k)$ is close to a solution of the equation:
${\rho \choose 2} \Big/ {r \choose 2} = k $.
In particular, $\rho = \Theta(\sqrt{k})$.
}
\end{comment}
\section{Batch codes from hypergraphs of girth $\ge 4$}
Bounds and constructions for $r$-graphs $\mathcal{G}(V,E)$ on $\eta$ vertices of Berge girth at least 4
can be given via the $(3r-3,3)$-problem in the language of the $(6,3)$-problem, as seen from Theorem~\ref{berge girth}.
Bounds apply directly, while constructions may need to be modified slightly to eliminate short Berge cycles.
In~\cite{erdos}, Erd\H os, Frankl and R\"odl address precisely the $(3r-3,3)$-problem for $r \ge 3$. By modifying the construction of Behrend~\cite{behrend},
they construct $r$-graphs on $\eta$ vertices with the number of hyperedges asymptotically greater than $\eta^{2-\epsilon}$ for any
$\epsilon > 0$.
The construction produces hypergraphs of Berge girth at least 4. The authors prove an upper bound $o(\eta^2)$ on the maximum number of hyperedges, using an early version of Szemer\'edi's Regularity Lemma (see~\cite{komlos} and~\cite{skokan}).
\begin{comment}
We shall explain the construction of Erd\H os, Frankl and R\"odl to obtain large $r$-graphs of Berge girth at least 4, giving rise to primitive multiset linear batch codes, with the aim to make the construction more accessible.
First, Erd\H os, Frankl and R\"odl prove the following Lemma, with the proof closely related to Behrend's original construction of large 3AP-free sets.
\begin{lemma}
There exists a set of positive integers $A\subseteq \{1,2,\ldots, n\}$ not containing three terms of any arithmetic progression of length $r$ and such that $$|A|\geq \frac{n}{e^{c\log r \sqrt{\log n}}}$$ for some absolute constant $c>0$.
\end{lemma}
\begin{proof}
Omitted. Please see the articles of Erd\H os, Frankl and R\"odl~\cite{erdos} and Behrend~\cite{behrend} for more details.
\end{proof}
Erd\H os, Frankl and R\"odl~\cite{erdos} construct an $\lfloor n/r \rfloor$-by-$r$ rectangular grid of vertices, and lines of cardinality $r$, intersecting each column, are hyperedges. The set of `slopes' is restricted to a set $A$ satisfying the previous Lemma, so that the hypergraph will have Berge girth at least 4, see~\cite{erdos} for more details. However, the hypergraph would still have Berge girth at least 3 if we do not restrict the set of slopes, thus giving rise to a slightly better PIR code than the obtained batch code.
\end{comment}
By mapping the hypergraph $\mathcal{G}(V,E)$ constructed in~\cite{erdos} back onto $G(E,V,I)$,
and by using the notation for batch codes, we obtain a bipartite graph of girth at least $8$ with $(n-k)^{2-\epsilon}$ left vertices and $n-k$ right vertices. The corresponding graph-based asynchronous batch code has $k = (n-k)^{2-\epsilon}$, and so its redundancy is bounded from above by $\rho(k) = n - k =
O\left({k}^{1/(2-\epsilon)}\right)$ for any $\epsilon>0$, and for any fixed $t$.
We note that the upper bound in~\cite{erdos} similarly yields the lower bound
\begin{equation}
\lim_{k \rightarrow \infty} \frac{\rho(k)}{\sqrt{k}} = \infty
\label{eq:optimal_redundancy}
\end{equation}
for the optimal redundancy $\rho(k)$ of the graph-based asynchronous codes, for any fixed $t \ge 4$.
We compare these results with their counterparts in~\cite{Vardy}, where it was shown that
for any $t \ge 5$ the optimal redundancy of general (multiset primitive) linear batch codes behaves as $O(\sqrt{k} \log k)$,
while for $t \in \{3,4\}$ the corresponding redundancy is $O(\sqrt{k})$.
It is worth mentioning that for $t=4$ there is a gap between the optimal redundancy $O(\sqrt{k})$
of the codes studied in~\cite{Vardy} and the lower bound~(\ref{eq:optimal_redundancy}) for the graph-based asynchronous batch codes. It remains an open question what the exact asymptotics of the optimal redundancy of the graph-based asynchronous batch codes is for $t \ge 5$, and whether the lower bound~(\ref{eq:optimal_redundancy})
actually matches the upper bound $O(\sqrt{k} \log k)$ obtained in~\cite{Vardy},
or whether there is a gap between the optimal redundancies of these two families of codes.
\section{Introduction}
High-frequency quasi-periodic
oscillations (HF QPOs), whose frequencies are in the range of
100 to 450 Hz, have been observed in some black-hole binaries
and black-hole candidates.
One characteristic of these HF QPOs is that
they often appear in pairs and their frequencies change little
with time
\footnote
{
In the kHz QPOs of neutron-star X-ray binaries,
the frequencies (and their ratio) of the pair oscillations change
with time.
This is a definite difference between HF QPOs in black-hole
binaries and kHz QPOs in neutron-star binaries.
In our warp models,
kHz QPOs of neutron stars are interpreted as disk oscillations with
{\it vertical resonance} (see Kato 2005b), or as the case in which
warp has precession (Kato 2005a).
},
keeping the frequency ratio close to 3:2.
These sources are GRO J1655-40 (300, 450 Hz),
XTE J1550-564 (92, 184, 276 Hz) and GRS 1915+105 (41, 67, 113, 168 Hz)
(e.g., a review by McClintock and Remillard 2006).
The importance of the commensurability of pair QPO frequencies for understanding
the disk structure in the innermost
region was emphasized by Abramowicz and Klu{\' z}niak (2001) and
Klu{\' z}niak and Abramowicz (2001).
It is known that oscillations can be excited in a deformed disk
by resonant processes.
One well-known example is superhumps in tidally-deformed dwarf-novae
disks (Whitehurst 1988; Hirose, Osaki 1990; Lubow 1991).
Another example is a spiral pattern on ram-pressure
deformed galactic disks (Tosa 1994; Kato and Tosa 1994).
In black-hole X-ray binaries, similar types of resonant oscillations
should occur when the disks are deformed.
We think that one of the most probable deformations
of disks in the innermost region is a warp.
Based on this idea, we examined excitation of
disk oscillations on warped disks (Kato 2003b, 2004b),
and proposed a resonant excitation model of QPOs
(Kato 2004a,b, 2005a,b; Klu{\' z}niak et al. 2004).
In this warped-disk model, the high-frequency QPOs in black-hole
binaries are g-mode oscillations or
inertial-acoustic oscillations\footnote
{
In this paper inertial-acoustic oscillations represent
the fundamental p-mode oscillations, in which oscillations are nearly
horizontal and horizontal velocity has no node in the vertical direction.
In some recent papers by Kato, however, inertial-acoustic oscillations
are treated together with g-mode oscillations, since
in mathematical analyses of the present resonance problem
both of them can be treated together without making a distinction
(see Kato 2004b).
Hence, when we used the term g-mode oscillations, inertial-acoustic
oscillations were implicitly included there.
This was misleading.
In this paper, therefore, we explicitly mention
inertial-acoustic oscillations without including them in g-mode oscillations.
Here, it is noted that we use the following terminology for disk oscillations.
The modes in which the horizontal velocity has no node in the vertical
direction ($n=0$) are called inertial-acoustic oscillations (p-modes).
The modes with $n=1$ are g-modes and corrugation modes (c-modes).
The modes with $n\geq 2$ are g-modes and vertical p-modes}
or their combination, excited by a horizontal resonance.
If this resonance model is correct, it gives a way to estimate the spin
of the central source from observed pair frequencies of QPOs, if
the mass of the source is observationally known.
Recently, Shafee et al. (2006) evaluated the spins of two black-hole
sources (GRO J1655-40 and 4U 1543-47) whose masses are observationally
known, by fitting their spectra with model spectra derived from current
disk models.
Since one of the sources (i.e., GRO J1655-40) which they adopted
has 3:2 pair frequencies, we can independently
estimate the spin of the source.
The purpose of this paper is to present the frequency-spin
relation based on the warped-disk model and to estimate spins of
some black-hole X-ray binaries, including GRO J1655-40.
\section{Horizontal Resonances of G-Mode Oscillations and
Inertial-Acoustic Oscillations in Warped Disks}
Here, we outline the essence of our resonance model in warped disks
[see figure 1 in Kato (2004a) and the similar figures in his subsequent
papers].
Let us consider a wave specified by ($\omega$, $m$, $n$),
where $\omega$ is the frequency of the wave, $m$ is the wavenumber
in the azimuthal direction, and $n$ is a number specifying the node
number in the vertical direction (for details, see, for example,
Kato et al. 1998; Kato 2001).
A warp with no precession is described by (0, 1, 1),
since it is a kind of global, one-armed deformation.
Nonlinear interaction between the wave of ($\omega$, $m$, $n$) and the
warp (0, 1, 1) produces oscillations specified by ($\omega$, $m\pm 1$, $n\pm 1$),
which are called here the intermediate oscillations.
If the amplitude of the wave mode of ($\omega$, $m$, $n$) is fixed,
the disk experiences forced oscillations due to the intermediate
oscillations.
The disk then resonantly responds to the intermediate oscillations
at some particular radius where the dispersion relation for these intermediate
oscillations is satisfied.
At this radius energy exchange between the disk rotation
and the intermediate oscillations is realized.
After this resonant interaction, the intermediate oscillations feed back
to the original oscillation of ($\omega$, $m$, $n$) by again interacting
nonlinearly with the warp.
This nonlinear feedback process amplifies or damps the original
oscillation ($\omega$, $m$, $n$), since a resonant process is involved
in the feedback process (Kato 2003b, 2004b).
There are two kinds of resonances.
One is horizontal, and the other is vertical [see Kato 2004b for details].
Detailed examinations of resonant processes (Kato 2004b) show that
a resonance excites oscillations when
the oscillations are inertial-acoustic oscillations and/or g-mode oscillations
and the resonance is horizontal.
Hence, in what follows, we restrict our attention to this case.
First, we recall that g-mode oscillations and inertial-acoustic
oscillations with frequency $\omega$
and azimuthal wavenumber $m$ predominantly exist around the radius
specified by $(\omega-m\Omega)^2\sim \kappa^2$, for the following reason.
Here, $\Omega$ and $\kappa$ are Keplerian and (radial) epicyclic
frequencies, respectively.
In both cases of inertial-acoustic and g-mode waves, the group
velocity of these waves vanishes at the radius where
$(\omega-m\Omega)^2=\kappa^2$.\footnote
{
In geometrically thin disks, the local dispersion relation of
oscillations is given by
$$
[(\omega-m\Omega)^2-\kappa^2][(\omega-m\Omega)^2-n\Omega_\bot^2]
=c_{\rm s}^2k^2(\omega-m\Omega)^2,
\nonumber
$$
where $\Omega_\bot$, $c_{\rm s}$, and $k$ are, respectively,
the vertical epicyclic frequency, the acoustic speed, and the radial
wavenumber.
This dispersion relation gives the group velocity
($=\partial\omega/\partial k)$ as
$$
{\partial\omega\over\partial k}
=\pm c_{\rm s}{(\omega-m\Omega)^2[(\omega-m\Omega)^2-\kappa^2]^{1/2}
[(\omega-m\Omega)^2-n\Omega_\bot^2]^{1/2}
\over
(\omega-m\Omega)^4-n\kappa^2\Omega_\bot^2}. \nonumber
$$
}
That is, if we consider wave packets, they stay there for a long time
compared with other places.
Hence, we think that the waves exist mainly around the radius
specified by\footnote{
The places of $(\omega-m\Omega)^2=\kappa^2$ are also particular places
in the sense that they are boundaries between the propagation
and evanescent regions of waves.
In the case of the inertial-acoustic waves, the propagation region
is described by $(\omega-m\Omega)^2>\kappa^2$, and the region of
$(\omega-m\Omega)^2<\kappa^2$ is the evanescent region.
In the case of the g-mode oscillations, the situation is changed.
That is, $(\omega-m\Omega)^2>\kappa^2$ is the evanescent region
and $(\omega-m\Omega)^2<\kappa^2$ is the propagation region.
}
\begin{equation}
(\omega-m\Omega)^2=\kappa^2.
\label{1}
\end{equation}
The nonlinear interaction of the above oscillations with
a warp gives rise to intermediate oscillations
of ($\omega$, $m\pm 1$).
These intermediate oscillations have resonant interaction with the disk
at the radii where the dispersion relation of the intermediate
oscillations is satisfied (Kato 2003b, 2004b).
In the case of the horizontal resonances, these radii are
close to the radii specified by
\begin{equation}
[\omega-(m\pm 1)\Omega]^2=\kappa^2.
\label{2}
\end{equation}
It is important to note that the resonant radii are
independent of the vertical structure of the oscillations,
i.e., independent of $n$.
The resonant radii and the radii where the oscillations
predominantly exist must be the same for resonant interactions
to occur efficiently.
That is, equations (\ref{1}) and (\ref{2}) must be satisfied
simultaneously, which gives
\begin{equation}
\kappa={\Omega \over 2}.
\label{3}
\end{equation}
This is the condition determining the resonant radius.
From equation (\ref{1}), we then see that the frequencies of
resonant oscillations are $m\Omega\pm \kappa$ at the resonant radius.
The above argument does not depend on the metric.
That is, the above resonant condition is valid
even in the case of the Kerr metric, if
the angular velocity of the Keplerian rotation,
$\Omega$, and the epicyclic frequency, $\kappa$, in the
Kerr metric are adopted.
\section{Resonant Radius and Frequencies of Resonant Oscillations}
In the limit of a non-rotating central source
(i.e., the metric is the Schwarzschild one),
the condition, $\kappa=\Omega/2$, is realized at $4.0r_{\rm g}$,
which is just the radius where $\kappa$ becomes the maximum.
Here, $r_{\rm g}$ is the Schwarzschild radius defined by
$r_{\rm g}=2GM/c^2$, $M$ being the mass of the central source.
As the spin parameter $a_*$ increases, the resonant radius $r_{\rm c}$
decreases.
The $r_{\rm c}$ -- $a_*$ relation derived from the resonant condition, $\kappa=\Omega/2$, is shown in figure 1.
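The $r_{\rm c}$ -- $a_*$ relation of figure 1 can be reproduced numerically. The sketch below assumes the standard prograde Kerr expressions for $\Omega$ and $\kappa$ of circular orbits (not written out in the text) and solves $\kappa=\Omega/2$ by bisection; radii are in units of $GM/c^2$, so $x=8$ corresponds to $r_{\rm c}=4.0r_{\rm g}$:

```python
import math

def kappa2_over_Omega2(x, a):
    """(kappa/Omega)^2 for a prograde circular orbit in the Kerr metric,
    with x = r c^2/(G M) (standard expression; assumed, not from the text)."""
    return 1.0 - 6.0 / x + 8.0 * a / x**1.5 - 3.0 * a**2 / x**2

def resonant_radius(a, lo=3.0, hi=8.5):
    """Solve (kappa/Omega)^2 = 1/4, i.e. kappa = Omega/2, by bisection."""
    f = lambda x: kappa2_over_Omega2(x, a) - 0.25
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

xs = [resonant_radius(a) for a in (0.0, 0.2, 0.4, 0.6, 0.8)]
print(xs)  # x decreases with spin; x = 8 (i.e. r_c = 4 r_g) at a_* = 0
```

For $a_*=0$ the solution is exactly $x=8$ (that is, $4.0r_{\rm g}$), and the resonant radius decreases monotonically with spin, as in figure 1.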
Next, we calculate frequencies of inertial-acoustic and/or g-mode
oscillations which
have resonance at $\kappa=\Omega/2$.
As mentioned before, they are $m\Omega\pm \kappa$ at the
resonant radius.
They are a set of frequencies, since there are various $m$.
Among them, the most observable ones will be those with
small values of $m$.
The axially symmetric oscillations, $m=0$, however, will be less
observable by the very nature of symmetry.
Hence, the oscillations which will be most interesting in relation to
observed QPO frequencies are those with $m=1$ or $m=2$.
Considering this situation, we introduce, for convenience,
symbols given by
\begin{equation}
\omega_{\rm H}=(\Omega+\kappa)_{\rm c}, \quad
\omega_{\rm L}=(2\Omega-\kappa)_{\rm c}, \quad
\omega_{\rm LL}=(\Omega-\kappa)_{\rm c},
\label{4}
\end{equation}
where the subscript c denotes the values at the resonant radius,
$\kappa=\Omega/2$.
It is noted that $\omega_{\rm H}$ and $\omega_{\rm L}$ are equal,
i.e., $\omega_{\rm H}=\omega_{\rm L}$.
Outside the resonant radius (i.e., $r>r_{\rm c}$), $\Omega+\kappa$
is larger than
$2\Omega-\kappa$ since $\kappa>\Omega/2$ there.
Inside the resonant radius, $2\Omega-\kappa$ is larger than
$\Omega+\kappa$.
\subsection{Source State and QPO Frequencies}
The next problem to be examined is the relation between the frequencies of
the disk oscillations mentioned above and the observed QPO frequencies.
For simplicity, let us neglect effects of disk rotation (such as Doppler
boosting) and geometrical effects (such as gravitational bending of light rays
and occultation).
Then, no luminosity variation is observed in
geometrically thin, non-warped disks, even if rotating non-axisymmetric
oscillations are superposed on the disks.
This means that some careful consideration on geometrical states of disks
is necessary.
Observations show (Remillard 2005) that all high-frequency QPOs are
associated with the steep power-law state of sources.
They are not observed in the thermal state (i.e., the soft/high
state with no corona), nor in the hard state (i.e., hard/low state
with no thermal disk component).
It is noted that in the steep power-law state a compact hot torus
(corona) and a thermal disk coexist in the innermost region.
Observations further show that the QPOs are observed in the
high energy photons of the power-law
component, not in the soft photons of the thermal disk component.
This observational evidence suggests that a thermal disk is necessary
as a place where oscillations are generated, but the observed QPO
photons are those Comptonized in the hot compact corona (a hot torus).
If this picture is adopted, one-armed oscillations
are observed as time variations with twofold frequency, as
described below.
Let us consider one-armed disk oscillations propagating in the
azimuthal direction with angular frequency $\omega$.
The hot disk region associated with the disk oscillations
is assumed to be inside a torus.
Now, we consider the path of observed photons which are originally emitted
from the hot region of the disk as soft photons and are observed as
high energy photons by Comptonization in the torus.
The path length of the photons in the torus depends on the phase
relation between the hot region and the observer, as shown in figures
2 and 3.
In the phase shown in figure 2, the path length of photons in torus is
short.
(This phase is called hereafter phase 0.)
In the phase shown in figure 3, however, the path length within the
torus is long.
The latter occurs when the phase is close to 0.75 as well as 0.25.
In the phase 0.5 the path in the torus is shorter than that in the phase of figure 3,
but longer than that in the phase of figure 2 (phase 0).
Hence, observed Comptonized photon numbers will vary as shown in
figure 4.
That is, we have two peaks during one cycle of the oscillations.
Here, a brief comment is made on depths of the primary
minimum (phase 0) and secondary minimum (phase 0.5) in figure 4.
In the phase of the primary minimum, the path length of photons in the torus is short, but they pass through an inner hot and dense region of the torus (see figure 2).
This will increase the Comptonized photon flux, compared with that in the case in which photons
pass an outer cool and less dense region.
The phase of the secondary minimum (phase 0.5) corresponds to the latter
case.
This consideration suggests that the difference between the Comptonized photon fluxes in phases 0 and 0.5 is smaller than that simply estimated from
the difference of geometrical path lengths.
Figure 4 should be regarded as results in which the above effects are already taken into account.
Figure 4 shows that one-armed oscillations with frequency $\omega$ bring
about two time-varying components with $\omega$ and $2\omega$.
Let us now roughly estimate the amplitude ratio of the two components
from the light curve in figure 4.
The flux, $f(t)$, shown in figure 4 will be approximated by
\begin{equation}
f(t) =(1+A)-\cos(4\pi t)-A\,\cos(2\pi t),
\end{equation}
with $A$ moderately smaller than unity, where $t$ represents the phase of
the light curve (i.e., $t=0$ at phase 0 and $t=1$ at phase 1).
The amplitude of the $2\omega$-oscillation is normalized to unity and that of
the $\omega$-oscillation is $A$.
The total flux is normalized to become zero at phase 0.
Then, the maximum of the flux is realized near $t=1/4$ and $3/4$ as long as $A$ is
moderately smaller than unity, and is about $2+A$.
The flux at the secondary minimum ($t=0.5$) is $2A$.
That is, the flux ratio of the secondary minimum to the maximum is
roughly $2A/(2+A)$.
The case where the secondary minimum is as deep as the primary minimum,
i.e., $2A/(2+A)=0$, is realized if $A=0$.
That is, in this case we have only $2\omega$ oscillation, as expected.
If, for example, the flux ratio is 0.3, the amplitude ratio is found to be roughly
0.35.
That is, the amplitude of the oscillation with $\omega$ is smaller than that with $2\omega$ by about factor 3.
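This amplitude estimate can be reproduced directly from the model light curve: solving $2A/(2+A)=0.3$ gives $A\approx 0.35$, and evaluating $f(t)$ on a grid confirms that the ratio of the secondary minimum to the maximum is indeed close to 0.3. A minimal sketch (the flux ratio 0.3 is the illustrative value used above):

```python
import math

ratio = 0.3                       # assumed flux ratio, secondary minimum / maximum
A = 2.0 * ratio / (2.0 - ratio)   # invert 2A/(2+A) = ratio
f = lambda t: (1.0 + A) - math.cos(4.0 * math.pi * t) - A * math.cos(2.0 * math.pi * t)

grid = [i / 10000.0 for i in range(10001)]
fmax = max(f(t) for t in grid)    # maximum, realized near t = 1/4 and 3/4
print(A, f(0.5) / fmax)           # A ~ 0.35; ratio recovered from the light curve
```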
In the case in which the observer is close to edge-on, a light ray
leaving the torus toward the observer
may re-enter another part of the torus along its path.
Furthermore, Doppler effects on the light variation are not negligible.
Such high-inclination cases, however, will not be
the major cases in which QPOs are observed, since outer
parts of the disk will screen QPO photons from the observer.
In the case of two-armed oscillations, we can easily find that the main
frequency of observed luminosity variation is the same as that of the
oscillations.
These considerations suggest that the resonant oscillations with
frequency $\omega_{\rm LL}$ mainly give rise to QPOs whose frequencies are
$2\omega_{\rm LL}$, since they are one-armed oscillations, i.e., $m=1$.
Hence, we think that the observed main
frequencies of QPOs, i.e., the frequencies of the pair QPOs, are
$\omega_{\rm L}(=\omega_{\rm H}$) and $2\omega_{\rm LL}$.
Their frequency ratio
is just 3:2, i.e.,
\begin{equation}
\omega_{\rm L}(=\omega_{\rm H}) : 2\omega_{\rm LL}=3:2.
\end{equation}
In the present disk-oscillation model there is no reason
why oscillations with $\omega_{\rm LL}$ are not observed, although
their amplitude may be small.
We think that these oscillations are really observed in some sources.
In XTE J1550-564, three QPOs are observed whose frequencies are
276 Hz, 184 Hz, and 92 Hz.
Their frequency ratios are just 3:2:1, suggesting that $\omega_{\rm LL}$
has been observed.
Furthermore, in a black-hole X-ray transient XTE J1650-500, QPO
frequencies vary with time, but their frequencies are consistent with
being 1:2:3 harmonics (Homan et al. 2003), suggesting that
$\omega_{\rm LL}$ has been also observed in this source.
One may wonder why QPOs with frequency $2\omega_{\rm H}$ are not
observed. (It is noted that the oscillations with $\omega_{\rm H}$ are
one-armed.)
We think that they should be observed, but there has been no serious
attempt to detect such high-frequency QPOs, since
the frequency is higher than the Keplerian frequency in the innermost
region of disks.
\section{Estimate of Spin from Pair QPO Frequencies}
In the case in which the central source is non-rotating, the resonance
occurs at $4.0r_{\rm g}$, and $\omega_{\rm L}(=\omega_{\rm H})$ can
be easily expressed as
\begin{equation}
\omega_{\rm L}=2.14\times 10^3\biggl({M\over M_\odot}\biggr)^{-1}
\ {\rm Hz}.
\qquad (a_*=0)
\label{5}
\end{equation}
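Equation (\ref{5}) is easy to verify numerically: at the resonant radius $r_{\rm c}=4.0r_{\rm g}$ of a non-rotating source, $\omega_{\rm L}=2\Omega-\kappa=(3/2)\Omega$. A sketch, using standard SI values of $G$, $c$, and $M_\odot$ (assumed constants, not quoted in the text):

```python
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30  # SI values (assumed standard constants)
x = 8.0                                    # r_c = 4 r_g = 8 GM/c^2 for a_* = 0
Omega_c = c**3 / (G * Msun) / x**1.5       # Keplerian Omega at the resonant radius
nu_L = 1.5 * Omega_c / (2.0 * math.pi)     # omega_L = 2*Omega - kappa = (3/2)*Omega, in Hz
print(nu_L)  # close to 2.14e3 Hz for M = M_sun
```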
Masses of three sources (GRO J1655-40, XTE J1550-564, GRS 1915+105)
which display a pair of HF QPOs have been obtained from spectroscopic
observations.
Using the data, McClintock and Remillard (2005)
derived an interpolation formula giving a relation between observed
frequencies of HF QPOs and $M$, which is
\begin{equation}
3\nu_0 = 2.79\times 10^3\biggl({M\over M_\odot}\biggr)^{-1}
\ {\rm Hz},
\label{6}
\end{equation}
where $\nu_0$ is the fundamental frequency of the 3:2:1 sequence, and thus $3\nu_0$
corresponds to $\omega_{\rm L}$ in our model.
The frequency $\omega_{\rm L}$ for $a_*=0$ is smaller than $3\nu_0$,
suggesting that the central sources are certainly rotating.
The dependence of $\omega_{\rm L}$ on the spin parameter $a_*$ is
numerically obtained by substituting $r_{\rm c}$ obtained by solving
equation (\ref{3}) into the expression for $\omega_{\rm L}$ [equation
(\ref{4})].
The results are shown in figure 5.
For the three sources, where $M$ and $3\nu_0$ are known,
the spin parameter $a_*$ can be calculated, assuming that
the observed $3\nu_0$ is $\omega_{\rm L}(=\omega_{\rm H})$.
The results are shown in table 1 (see also table 3 of Kato 2004b).
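The table-1 values can be reproduced in a few lines: for a given mass, $a_*$ is increased until $\omega_{\rm L}$ at the resonant radius matches the observed $3\nu_0$. The sketch below (with the standard prograde Kerr forms of $\Omega$ and $\kappa$ assumed, as above) recovers $a_*\approx 0.31$ for GRO J1655-40 with $M=6.0M_\odot$ and $3\nu_0=450$ Hz:

```python
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30  # SI values (assumed standard constants)

def resonant_x(a):
    """Solve 1 - 6/x + 8a/x^1.5 - 3a^2/x^2 = 1/4 by bisection (x = r c^2/GM)."""
    f = lambda x: 1.0 - 6.0 / x + 8.0 * a / x**1.5 - 3.0 * a**2 / x**2 - 0.25
    lo, hi = 3.0, 8.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def nu_L(a, mass):
    """omega_L/(2 pi) = (3/2) Omega_c/(2 pi) at the resonance kappa = Omega/2."""
    x = resonant_x(a)
    Omega_c = c**3 / (G * mass) / (x**1.5 + a)  # Kerr Keplerian Omega (prograde)
    return 1.5 * Omega_c / (2.0 * math.pi)

def spin_for(nu_obs, mass):
    """Invert nu_L(a) = nu_obs; nu_L increases monotonically with a_*."""
    lo, hi = 0.0, 0.98
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if nu_L(mid, mass) < nu_obs else (lo, mid)
    return 0.5 * (lo + hi)

a_star = spin_for(450.0, 6.0 * Msun)
print(a_star)  # close to 0.31, matching table 1 for GRO J1655-40 at M = 6.0 Msun
```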
As shown in table 1, the value of the spin parameter $a_*$ derived
for GRO J1655-40 is $a_*=0.31$ -- $0.42$, which is somewhat smaller than
$a_*=0.65$ -- $0.75$ derived by Shafee et al. (2006) from a spectrum
fitting.
In the case of GRS 1915+105, the value of $a_*$ is negative if
$M\sim 10.0M_\odot$ is adopted.
This suggests that the mass is much larger than $10M_\odot$, closer
to $18.0M_\odot$.
\begin{table}
\caption{Estimated spin parameter $a_*$.}\label{tab:first}
\begin{center}
\begin{tabular}{cccc}
\hline\hline
Sources & $3\nu_0$(Hz) & $M/M_\odot$ & $a_*$ \\
\hline
GRS 1915+105 & 168 & 10.0 -- 18.0 & negative -- 0.44 \\
XTE J1550-564 & 276 & 8.4 -- 10.8 & 0.11 -- 0.42 \\
GRO J1655-40 & 450 & 6.0 -- 6.6 & 0.31 -- 0.42 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Propagation Regions}
It is worthwhile to note that the propagation region
of inertial-acoustic oscillations and that of g-mode ones
are different, even when their frequencies are the same.
They are inside or outside of the resonant radius, depending on the
modes.
The propagation regions of the inertial-acoustic oscillations with
frequency $\omega$ and azimuthal wavenumber $m$ are in the region
described by $(\omega-m\Omega)^2>\kappa^2$, which is
$\omega>m\Omega+\kappa$ or $\omega<m\Omega-\kappa$.
In the case of g-mode oscillations, the region is $(\omega-m\Omega)^2
<\kappa^2$, which is $m\Omega-\kappa < \omega <m\Omega+\kappa$.
To demonstrate these situations, we show in figure 6 the propagation
regions of inertial-acoustic oscillations and those of g-mode oscillations
whose frequencies are $\omega_{\rm H}$, $\omega_{\rm L}$,
and $\omega_{\rm LL}$.
As shown in figure 6 and mentioned above, the propagation regions of
inertial-acoustic
oscillations and those of g-mode oscillations are in the opposite sides
of the resonant radius, when their frequencies are the same.
In the propagation regions of the g-mode oscillations, there is
a corotation radius, i.e., the radius where $\omega-m\Omega=0$.
At the corotation radius the g-mode oscillations are damped (Kato 2003a,
Li et al. 2003).
The inertial-acoustic oscillations which propagate inward from the
resonant radius will be partially reflected back near the inner edge of
the disk, which may lead to quasi-trapped oscillations.
These considerations may suggest that the main contributor to
the HF QPO may be inertial-acoustic oscillations, rather than g-mode
oscillations.
\section{Discussion}
The basic idea of our model is that the high-frequency
QPOs are disk oscillations and that a deformation of the disk is the essential
cause of their excitation.
As the cause of disk deformation we consider warp.
This is because a warp will be one of the most conceivable deformations of
disks in the innermost region.
As mentioned before,
the QPOs are associated
with the steep power-law (SPL) state and are certainly not observed
in the thermal state, where the disk consists only of a thermal disk
component (Remillard 2005).
In the SPL state a compact corona and a thermal disk coexist.
We suppose that the triggers forming a compact high-temperature torus
in the innermost region will generally not be axisymmetric, since the disks are
highly turbulent, and will thus deform the disk as well as form a torus.
This will be one of the possible causes of the formation of a warped disk.
Let us denote the observed upper and the lower frequencies of the pair
QPOs by $\nu_{\rm u}$ and $\nu_{\rm l}$.
Then, as mentioned before, our present model predicts the presence
of QPOs with frequency of $2\nu_{\rm u}$.
Analysis of observational data to see whether QPOs of $2\nu_{\rm u}$
are present or not is a crucial check of the present model.
\bigskip
The author thanks the referees and S. Mineshige for valuable comments.
\bigskip
\leftskip=20pt
\parindent=-20pt
\par
{\bf References}
\par
Abramowicz, M. A., \& Klu{\' z}niak, W. 2001, A\&A, 374, L19 \par
Hirose, M., Osaki, Y. 1990, PASJ, 42, 135\par
Homan, J., Klein-Wolt, M., Rossi, S., Miller, J.M., Wijnands, R., Belloni,
T., van der Klis, M., Lewin, W.H.G. 2003, ApJ, 586, 1262 \par
Kato, S. 2001, PASJ, 53, 1\par
Kato, S. 2003a, PASJ, 55, 257 \par
Kato, S. 2003b, PASJ, 55, 801\par
Kato, S. 2004a, PASJ, 56, 559 \par
Kato, S. 2004b, PASJ, 56, 905\par
Kato, S. 2005a, PASJ, 57, L17 \par
Kato, S. 2005b, PASJ, 57, 699 \par
Kato, S., Fukue, J., \& Mineshige, S. 1998, Black-Hole Accretion Disks
(Kyoto: Kyoto University Press)\par
Kato, S., Tosa, M. 1994, PASJ, 46, 559 \par
Klu{\' z}niak, W., \& Abramowicz, M. 2001, Acta Phys. Pol. B32, 3605 \par
Klu{\' z}niak, W., Abramowicz, M. A., Kato, S., Lee, W. H., \& Stergioulas,
N. 2004, ApJ, 603, L89 \par
Li, L.-X., Goodman, J., Narayan, R. 2003, ApJ, 593, 980 \par
Lubow, S.H. 1991, ApJ, 381, 259\par
McClintock, J.E., Remillard, R.A. 2005, "Black Hole Binaries", in
Compact Stellar X-ray Sources, eds. W.H.G. Lewin and M. van der Klis,
Cambridge University Press, Cambridge, in press; astro-ph/0306213 \par
Remillard, R.A. 2005, Astron. Nachr. vol. 326; astro-ph/0510699 \par
Shafee, R., McClintock, J.E., Narayan, R., Davis, S.W., Li, L.-X.,
Remillard, R.A. 2006, ApJ, 636, L113; astro-ph/0508302\par
Tosa, M. 1994, ApJ, 426, L81 \par
Whitehurst, R. 1988, MNRAS 232, 35 \par
\bigskip\bigskip
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure1.eps}
\end{center}
\caption{
Resonant radius as a function of the spin parameter $a_*$.
The resonant radius is given by $\kappa=\Omega/2$.}
\label{fig:figure 1}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure2.eps}
\end{center}
\caption{
A schematic picture showing the light path from a hot region in the disk
to an observer in the phase in which the hot region is on the opposite
side of the central source from the observer.
The path within the torus is shown by the dashed line.
Note that the path length in the torus is short, so the observed QPO
photons are not numerous.
This phase is referred to as phase 0, and the phase in which the hot region of the disk
is between the central source and the observer is phase 0.5.}
\label{fig:figure 2}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure3.eps}
\end{center}
\caption{
A schematic picture showing a straight light path from the hot region in the disk
to an observer in a phase close to 3/4.
The part of the path within the torus is shown by the dashed line.
The path within the torus is longest in this phase, as well as in a phase close to
1/4, compared with other phases.
}
\label{fig:figure 3}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure4.eps}
\end{center}
\caption{
A schematic light curve during one revolution
of a one-armed oscillation pattern around the central source.
There are two peaks during one cycle of the oscillations, around phases
0.25 and 0.75.
}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure5.eps}
\end{center}
\caption{
The frequency $\omega_{\rm L}$ of the upper HF QPO as a
function of the spin parameter $a_*$.}
\label{fig:figure 5}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure6.eps}
\end{center}
\caption{
Propagation regions of the resonant oscillations whose frequencies are
$\omega_{\rm H}$, $\omega_{\rm L}$, and $\omega_{\rm LL}$.
For each frequency, the propagation region, shown by an arrow, depends on
whether the oscillations are
inertial-acoustic modes or g-modes.
The symbols attached to each arrow show the frequency and the mode.
For example, $\omega_{\rm H,p}$ denotes the inertial-acoustic
oscillations of resonant frequency $\omega_{\rm H}$.
In the case of g-mode oscillations, the subscript g is attached
instead of p.
To clarify the propagation regions,
curves representing the radial distributions of $\kappa$, $\Omega$,
$\Omega\pm \kappa$, $2\Omega$, and $2\Omega\pm \kappa$ are shown.
The vertical line shows the radius where the resonance occurs.
This figure is drawn for $M=10M_\odot$ and $a_*=0.2$.
}
\label{fig:figure 6}
\end{figure}
\end{document}
\section{Introduction}
Probabilistic data analysis---including Bayesian inference---has
transformed scientific research in the past decade. Many of the most
significant gains have come from numerical methods for approximate
inference, especially Markov chain Monte Carlo (MCMC). For example,
many problems in cosmology and astrophysics\footnote{The methods and
discussion in this document\ have general applicability, but we will
mostly present examples from astrophysics and cosmology, the fields
in which we have most experience} have directly benefited from MCMC
because the models are often expensive to compute, there are many free
parameters, and the observations are usually low in signal-to-noise.
Probabilistic data analysis procedures involve computing and using
either the posterior probability density function (PDF) for the
parameters of the model or the likelihood function. In some cases it
is sufficient to find the maximum of one of these, but it is often
necessary to understand the posterior PDF in detail. MCMC methods are
designed to sample from---and thereby provide sampling approximations
to---the posterior PDF efficiently even in parameter spaces with large
numbers of dimensions. This has proven useful in too many research
applications to list here but the results from the NASA Wilkinson
Microwave Anisotropy Probe (WMAP) cosmology mission provide a dramatic
example \citep[for example,][]{Dunkley:2005}.
Arguably the most important advantage of Bayesian data analysis is
that it is possible to \emph{marginalize} over nuisance parameters. A
nuisance parameter is one that is required in order to model the
process that generates the data, but is otherwise of little interest.
Marginalization is the process of integrating over all possible values of
the parameter and hence propagating the effects of uncertainty about
its value into the final result. Often we wish to marginalize over all
nuisance parameters in a model. The exact result of marginalization
is the marginalized probability function \pr{\ensuremath{\vector{\Theta}} | \ensuremath{\vector{D}}}
of the set (list or vector) of model parameters
\ensuremath{\vector{\Theta}}\ given the set of observations \ensuremath{\vector{D}}
\begin{equation}
\eqlabel{marginalization}
\pr{\ensuremath{\vector{\Theta}} | \ensuremath{\vector{D}}} = \int
\pr{ \ensuremath{\vector{\Theta}}, \ensuremath{\vector{\alpha}} | \ensuremath{\vector{D}}} \,
\mathrm{d} \ensuremath{\vector{\alpha}} \quad,
\end{equation}
where \ensuremath{\vector{\alpha}}\ is the set (list or vector) of nuisance
parameters. Because the nuisance parameter set \ensuremath{\vector{\alpha}}\ can be very large, this
integral is often extremely daunting. However, a
MCMC-generated sampling of values $(\ensuremath{\vector{\Theta}}_t,\ensuremath{\vector{\alpha}}_t)$ of the
model and nuisance parameters from the joint distribution $\pr{\ensuremath{\vector{\Theta}},
\ensuremath{\vector{\alpha}} | \ensuremath{\vector{D}}}$ automatically provides a sampling of values
$\ensuremath{\vector{\Theta}}_t$ from the marginalized PDF $\pr{\ensuremath{\vector{\Theta}} | \ensuremath{\vector{D}}}$.
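The practical content of this property can be seen in a toy example: drawing pairs $(\Theta,\alpha)$ from a strongly correlated joint Gaussian and simply discarding the $\alpha$ column leaves samples whose statistics match the analytic marginal of $\Theta$. A minimal sketch (direct sampling stands in for MCMC here; the particular joint distribution is an illustrative choice):

```python
import random

random.seed(2)
# Joint Gaussian: Theta ~ N(0, 1) and alpha = 0.8*Theta + 0.6*N(0, 1),
# so the pair is strongly correlated but the Theta marginal is N(0, 1).
pairs = []
for _ in range(50000):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    pairs.append((z1, 0.8 * z1 + 0.6 * z2))

theta = [p[0] for p in pairs]  # "marginalization": drop the alpha column
mean = sum(theta) / len(theta)
var = sum((t - mean) ** 2 for t in theta) / len(theta)
print(mean, var)  # close to 0 and 1, the analytic N(0, 1) marginal
```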
In addition to the problem of marginalization, in many problems of
interest the likelihood or the prior is the result of an expensive
simulation or computation. In this regime, MCMC sampling is very
valuable, but it is even \emph{more} valuable if the MCMC algorithm is
efficient, in the sense that it does not require many function
evaluations to generate a statistically independent sample from the
posterior PDF. The methods presented here are designed for efficiency.
Most uses of MCMC in the astrophysics literature are based on slight
modifications to the Metropolis-Hastings (M--H) method (introduced
below in \sect{algo}). Each step in a M--H chain is proposed using a
compact proposal distribution centered on the current position of the
chain (normally a multivariate Gaussian or something similar). Since
each term in the covariance matrix of this proposal distribution is an
unspecified parameter, this method has $N\,[N+1]/2$ tuning parameters
(where $N$ is the dimension of the parameter space). To make matters
worse, the performance of this sampler is very sensitive to these
tuning parameters and there is no fool-proof method for choosing the
values correctly. As a result, many heuristic methods have been
developed to attempt to determine the optimal parameters in a
data-driven way \citep[for
example,][]{Gregory:2005,Dunkley:2005,Widrow:2008}. Unfortunately,
these methods all require a lengthy ``burn-in'' phase where shorter
Markov chains are sampled and the results are used to tune the
hyperparameters. This extra cost is unacceptable when the likelihood
calls are computationally expensive.
The problem with traditional sampling methods can be visualized by looking
at the simple but highly anisotropic density
\begin{equation}
\eqlabel{anisotropic}
p(\mathbf{x}) \propto \exp \left (-\frac{(x_1-x_2)^2}{2\,\epsilon}
- \frac{(x_1+x_2)^2}{2} \right )
\end{equation}
which would be considered difficult (in the small-$\epsilon$ regime) for
standard MCMC algorithms. In principle, it is possible to tune the
hyperparameters of a M--H sampler to make this sampling converge quickly,
but if the dimension is large and calculating the density
is computationally expensive the tuning procedure becomes intractable.
Also, since the number of parameters scales as $\sim N^2$, this problem gets
much worse in higher dimensions.
\Eq{anisotropic} can, however, be transformed into the much easier problem of
sampling an isotropic density by an \emph{affine transformation} of the form
\begin{equation}
y_1 = \frac{x_1-x_2}{\sqrt{\epsilon}} \, ,
\hspace{1cm} y_2 = x_1 + x_2 \quad .
\end{equation}
This motivates affine invariance: an algorithm that is \emph{affine invariant}
performs equally well under all linear transformations; it will therefore be
insensitive to
covariances among parameters.
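The effect of this transformation can be verified numerically: drawing $(y_1, y_2)$ as independent unit normals and inverting the transformation yields samples with $\mathrm{Var}(x_1-x_2)=\epsilon$ and $\mathrm{Var}(x_1+x_2)=1$, i.e., samples from the anisotropic density \eq{anisotropic}. A minimal sketch (with an assumed $\epsilon=0.01$):

```python
import math
import random

random.seed(4)
eps = 0.01
xs = []
for _ in range(100000):
    y1, y2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)  # isotropic problem
    # Invert y1 = (x1 - x2)/sqrt(eps), y2 = x1 + x2:
    x1 = 0.5 * (math.sqrt(eps) * y1 + y2)
    x2 = 0.5 * (y2 - math.sqrt(eps) * y1)
    xs.append((x1, x2))

var_diff = sum((x1 - x2) ** 2 for x1, x2 in xs) / len(xs)
var_sum = sum((x1 + x2) ** 2 for x1, x2 in xs) / len(xs)
print(var_diff, var_sum)  # close to eps and 1, respectively
```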
Extending earlier work by \citet{Christen:2007},
\citet[][hereafter \citetalias{Goodman:2010}]{Goodman:2010} proposed an
affine invariant sampling
algorithm (\sect{algo}) with only two hyperparameters to be tuned for
performance. \citet{Hou:2011} were the first group to implement this
algorithm in astrophysics. The implementation presented here is
an independent effort that has already proved effective in several projects
\citep{sanders2013,reis2013,weisz2013,cieza2013,akeret2012,huppenkothen2012,
monnier2012,morton2012,crossfield2012,roskar2012,bovy2012b,brown2012,
brammer2012,bussmann2012,bovy2012a,lang2012,bovy2012,olofsson2012,dorman2012}.
In what follows, we summarize the
algorithm from \citetalias{Goodman:2010} and the implementation
decisions made in \project{\thisplain}. We also describe the small changes
that must be made to the algorithm to parallelize it. Finally, in the
Appendices, we outline the installation, usage and troubleshooting of
the package.
\section{The Algorithm}\sectlabel{algo}
A complete discussion of MCMC methods is beyond the scope of this document.
Instead, the interested reader is directed to a classic reference like
\citet{MacKay:2003} and we will summarize some key concepts below.
The general goal of MCMC algorithms is to draw $M$ samples
$\{ \ensuremath{\vector{\Theta}}_i \}$ from
the posterior probability density
\begin{equation}
\pr{\ensuremath{\vector{\Theta}}, \ensuremath{\vector{\alpha}} | \ensuremath{\vector{D}}} = \frac{1}{Z}\,\pr{\ensuremath{\vector{\Theta}}, \ensuremath{\vector{\alpha}}}
\, \pr{\ensuremath{\vector{D}} | \ensuremath{\vector{\Theta}}, \ensuremath{\vector{\alpha}}} \quad,
\end{equation}
where the prior distribution $\pr{\ensuremath{\vector{\Theta}}, \ensuremath{\vector{\alpha}}}$ and the likelihood
function $\pr{\ensuremath{\vector{D}}|\ensuremath{\vector{\Theta}},\ensuremath{\vector{\alpha}}}$ can be relatively easily (but not
necessarily quickly) computed for any particular value of
$(\ensuremath{\vector{\Theta}}_i, \ensuremath{\vector{\alpha}}_i)$. The normalization $Z=\pr{\ensuremath{\vector{D}}}$ is
independent of $\ensuremath{\vector{\Theta}}$ and $\ensuremath{\vector{\alpha}}$ once we have chosen the form of the
generative model. This means that it is possible
to sample from \pr{\ensuremath{\vector{\Theta}}, \ensuremath{\vector{\alpha}} | \ensuremath{\vector{D}}} without computing $Z$ ---
unless one would like to compare the validity of two different generative
models. This is important because $Z$ is generally very expensive to
compute.
Once the samples
produced by MCMC are available, the marginalized constraints on $\ensuremath{\vector{\Theta}}$
can be approximated by
the histogram of the samples projected into the parameter subspace spanned
by $\ensuremath{\vector{\Theta}}$. In particular, this implies that the
expectation value of a function of the model parameters $f(\ensuremath{\vector{\Theta}})$ is
\begin{equation}
\expect{f(\ensuremath{\vector{\Theta}})} = \int
\pr{\ensuremath{\vector{\Theta}}|\ensuremath{\vector{D}}}
\, f(\ensuremath{\vector{\Theta}}) \, \mathrm{d}\ensuremath{\vector{\Theta}}
\,\approx\, \frac{1}{M} \sum_{i=1} ^M f(\ensuremath{\vector{\Theta}}_i) \quad.
\end{equation}
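The approximation in the last equation is simply a sample average. A toy check, with samples drawn directly from an assumed known one-dimensional "posterior" $N(1, 2^2)$ and $f(\Theta)=\Theta^2$, whose exact expectation is $\mu^2+\sigma^2=5$:

```python
import random

random.seed(8)
samples = [random.gauss(1.0, 2.0) for _ in range(200000)]  # stand-in posterior draws
estimate = sum(t * t for t in samples) / len(samples)      # (1/M) sum_i f(Theta_i)
print(estimate)  # close to 5 = mu^2 + sigma^2
```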
Generating the samples $\ensuremath{\vector{\Theta}}_i$ is a non-trivial process unless
$\pr{\ensuremath{\vector{\Theta}}, \ensuremath{\vector{\alpha}}, \ensuremath{\vector{D}}}$ is a very specific analytic distribution
(for example, a Gaussian). MCMC is a procedure for generating a random walk
in the parameter space that, over time, draws a representative set
of samples from the distribution. Each point in a Markov chain
$\ensuremath{X} (t_i) = [\ensuremath{\vector{\Theta}}_i, \ensuremath{\vector{\alpha}}_i]$
depends only on the position of the previous step $\ensuremath{X} (t_{i-1})$.
\paragraph{The Metropolis-Hastings (M--H) Algorithm}
The simplest and most commonly used MCMC algorithm is the M--H method
\citep[\algo{mh};][]{MacKay:2003,Gregory:2005,Press:2007,Hogg:2010}.
The iterative procedure is as follows: (1) given a position
$X(t)$ sample a proposal position $Y$ from the transition distribution
$Q(Y; X(t))$, (2) accept this proposal with probability
\begin{equation}
\mathrm{min} \left( 1,\,
\frac{\pr{\vector{Y} | \ensuremath{\vector{D}}}}{\pr{\vector{X}(t) | \ensuremath{\vector{D}}}} \,
\frac{Q(X(t); Y)}{ Q(Y;X(t))} \right) \quad.
\end{equation}
The transition distribution $Q(Y; X(t))$ is an
easy-to-sample probability distribution for the proposal $Y$ given
a position $X(t)$.
A common parameterization of $Q(Y; X(t))$ is a multivariate Gaussian
distribution centered on $X(t)$ with a general covariance tensor that has
been tuned for performance.
It is worth emphasizing that if this step is accepted, then $X(t+1) = Y$;
otherwise, the new position is set to the previous one, $X(t+1) = X(t)$ (in other
words, the position $X(t)$ is \emph{repeated in the chain}).
The M--H algorithm converges (as $t \to \infty$) to a stationary set of
samples from the distribution but there are many algorithms with faster
convergence and varying levels of implementation difficulty.
Faster convergence is preferred because of the reduction of computational
cost due to the smaller number of likelihood computations necessary to obtain
the equivalent level of accuracy. The inverse convergence rate can be
measured by the autocorrelation function and more specifically, the integrated
autocorrelation time (see \sect{tests}). This quantity is an estimate of the
number of steps needed in the chain in order to draw independent samples from
the target density. A more efficient chain has a shorter
autocorrelation time.
\begin{algorithm}
\caption{The procedure for a single Metropolis-Hastings MCMC step.
\algolabel{mh}}
\begin{algorithmic}[1]
\STATE Draw a proposal $Y \sim Q (Y; X(t))$
\STATE $q \gets [\pr{\vector{Y}} \, Q(X(t); Y)]
/ [\pr{\vector{X}(t)} \, Q(Y;X(t))]$%
\hspace{1cm}{\footnotesize\it // This line is generally expensive}
\STATE $r \gets R \sim [0, 1]$
\IF{$r \le q$}
\STATE $\vector{X}(t+1) \gets \vector{Y}$
\ELSE
\STATE $\vector{X}(t+1) \gets \vector{X}(t)$
\ENDIF
\end{algorithmic}
\end{algorithm}
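A minimal implementation of this update is sketched below. An isotropic Gaussian proposal makes $Q$ symmetric, so the Hastings ratio reduces to $p(Y)/p(X)$; it is applied here to the anisotropic density \eq{anisotropic}. The step scale and $\epsilon$ are illustrative choices, not tuned values:

```python
import math
import random

random.seed(16)
eps = 0.5  # illustrative anisotropy parameter

def log_p(x):
    """Log of the anisotropic target density, equation (2) of the text."""
    return -((x[0] - x[1]) ** 2) / (2.0 * eps) - ((x[0] + x[1]) ** 2) / 2.0

def mh_step(x, scale=0.5):
    """One Metropolis-Hastings step with a symmetric Gaussian proposal."""
    y = [xi + random.gauss(0.0, scale) for xi in x]
    if math.log(random.random()) <= log_p(y) - log_p(x):
        return y, True   # accept the proposal
    return x, False      # reject: repeat the old position in the chain

x, accepts, chain = [0.0, 0.0], 0, []
for _ in range(5000):
    x, ok = mh_step(x)
    accepts += ok
    chain.append(x)

print(accepts / len(chain))  # acceptance fraction
```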
\paragraph{The stretch move}
\citetalias{Goodman:2010} proposed an affine-invariant ensemble sampling
algorithm informally called the ``stretch move.'' This algorithm
significantly outperforms standard M--H methods producing independent
samples with a much shorter autocorrelation time (see \sect{acor} for
a discussion of the autocorrelation time). For completeness and for
clarity of notation, we summarize the algorithm here and refer the interested
reader to the original paper for more details. This method involves
simultaneously evolving an ensemble of $K$ \emph{walkers}
$S = \{ \vector{X_k} \}$ where the proposal
distribution for one walker $k$ is based on the current positions of the
$K-1$ walkers in the \emph{complementary ensemble}
$S_{[k]} = \{ \vector{X_j}, \, \forall j \ne k \}$. Here, ``position''
refers to a vector in the $N$-dimensional, real-valued parameter space.
To update the position of a walker at position $\vector{X_k}$,
a walker $X_j$ is drawn randomly from the remaining walkers $S_{[k]}$
and a new position is proposed:
\begin{equation}
\eqlabel{proposal}
\vector{X_k} (t) \to \vector{Y} = \vector{X_j}
+ Z \, [\vector{X_k} (t) - \vector{X_j}]
\end{equation}
where $Z$ is a random variable drawn from a distribution $g(Z = z)$.
It is clear that if $g$ satisfies
\begin{equation}
g(z^{-1}) = z \, g(z),
\end{equation}
the proposal of \eq{proposal} is symmetric. In this case, the chain will
satisfy detailed balance if the proposal is accepted with probability
\begin{equation}
\eqlabel{acceptance}
q = \min \left( 1,\, Z^{N-1} \,
\frac{\pr{\vector{Y}}}{\pr{\vector{X_k} (t)}} \right) \quad,
\end{equation}
where $N$ is the dimension of the parameter space. This procedure is then
repeated for each walker in the ensemble \emph{in series} following the
procedure shown in \algo{goodman}.
\citetalias{Goodman:2010} advocate a particular form of $g(z)$, namely
\begin{equation}
\eqlabel{goodmanprop}
g(z) \propto \left \{ \begin{array}{ll}
\displaystyle\frac{1}{\sqrt{z}} & \mathrm{if}\, z\in
\left [ \displaystyle\frac{1}{a}, a \right ], \\
0 & \mathrm{otherwise}
\end{array} \right .
\end{equation}
where $a$ is an adjustable scale parameter that \citetalias{Goodman:2010} set
to 2.
\begin{algorithm}
\caption{A single stretch move update step from \citetalias{Goodman:2010}
\algolabel{goodman}}
\begin{algorithmic}[1]
\FOR{$k = 1, \ldots, K$}
\STATE Draw a walker $X_j$ at random from the complementary ensemble %
$S_{[k]}(t)$
\STATE $z \gets Z \sim g(z)$, \Eq{goodmanprop}
\STATE $\vector{Y} \gets \vector{X_j} %
+ z \, [ \vector{X_k} (t) - \vector{X_j}]$
\STATE $q \gets z^{N-1} \, p(Y)/p(X_k(t))$ \label{line:hard}%
\hspace{1cm}{\footnotesize\it // This line is generally expensive}
\STATE $r \gets R \sim [0, 1]$
\IF{$r \le q$, \eq{acceptance}}
\STATE $X_k(t+1) \gets Y$
\ELSE
\STATE $X_k(t+1) \gets X_k(t)$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
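A minimal Python sketch of one serial sweep of \algo{goodman}, assuming only that the user supplies a log-probability function. The one-liner for drawing $z \sim g(z)$ comes from inverting the CDF of \eq{goodmanprop} on $[1/a, a]$; all names here are illustrative.

```python
import numpy as np

def stretch_move_sweep(ensemble, log_p, rng, a=2.0):
    """One serial sweep of the stretch move over an ensemble whose
    rows are walkers; z ~ g(z) is drawn by inverse-CDF sampling of
    g(z) proportional to 1/sqrt(z) on [1/a, a]."""
    K, N = ensemble.shape
    for k in range(K):
        j = (k + 1 + rng.integers(K - 1)) % K           # uniform over complementary ensemble
        z = (1.0 + (a - 1.0) * rng.uniform()) ** 2 / a  # z ~ g(z)
        y = ensemble[j] + z * (ensemble[k] - ensemble[j])
        log_q = (N - 1) * np.log(z) + log_p(y) - log_p(ensemble[k])
        if np.log(rng.uniform()) <= log_q:              # acceptance rule
            ensemble[k] = y
    return ensemble

# Usage: sample a 2-D isotropic Gaussian with K = 40 walkers.
rng = np.random.default_rng(0)
log_p = lambda v: -0.5 * float(np.sum(v ** 2))
walkers = 0.1 * rng.standard_normal((40, 2))  # tight initial ball
samples = []
for step in range(600):
    walkers = stretch_move_sweep(walkers, log_p, rng)
    if step >= 100:                           # discard burn-in sweeps
        samples.append(walkers.copy())
samples = np.concatenate(samples)
```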
\paragraph{The parallel stretch move}
It is tempting to parallelize the stretch move algorithm by
simultaneously advancing each walker based on the state of the ensemble
instead of evolving the walkers in series. Unfortunately, this subtly
violates detailed balance. Instead, we must split the full ensemble
into two subsets
($\colorens{0} = \{ \vector{X_k}, \, \forall k = 1, \ldots, K/2 \}$ and
$\colorens{1} = \{ \vector{X_k}, \, \forall k = K/2+1, \ldots, K \}$) and
simultaneously update all the walkers in $\colorens{0}$
--- using the stretch move procedure from \algo{goodman} ---
based \emph{only} on the positions of the walkers in the other set
($\colorens{1}$). Then, using the new positions of $\colorens{0}$,
we can update $\colorens{1}$. In this case, the outcome is a valid step
for all of the walkers. The pseudocode for
this procedure is shown in \algo{parallel}. This code is similar to
\algo{goodman} but now the computationally expensive inner loop
(starting at line~\ref{line:parallelloop} in \algo{parallel}) can be run in
parallel.
The performance of this method --- quantified by the autocorrelation time ---
is comparable to the serial stretch move algorithm but the fact that one
can now take advantage of generic parallelization makes it
extremely powerful.
\begin{algorithm}
\caption{The parallel stretch move update step
\algolabel{parallel}}
\begin{algorithmic}[1]
\FOR{$i \in \{0, 1\}$}
\FOR{$k = 1, \ldots, K/2$} \label{line:parallelloop}
\STATE {\footnotesize\it // This loop can now be done in parallel %
for all $k$}
\STATE Draw a walker $\vector{X_j}$ at random from the complementary %
ensemble $\colorens{\sim i} (t)$
\STATE $\vector{X_k} \gets \colorens{i}_k$
\STATE $z \gets Z \sim g(z)$, \Eq{goodmanprop}
\STATE $\vector{Y} \gets \vector{X_j}
+ z \, [ \vector{X_k} (t) - \vector{X_j}]$
\STATE $q \gets z^{N-1} \, p(\vector{Y})/p(\vector{X}_k(t))$
\STATE $r \gets R \sim [0, 1]$
\IF{$r \le q$, \eq{acceptance}}
\STATE $\vector{X_k} (t+\frac{1}{2}) \gets \vector{Y}$
\ELSE
\STATE $\vector{X_k} (t+\frac{1}{2}) \gets \vector{X_k}(t)$
\ENDIF
\ENDFOR
\STATE $t \gets t+\frac{1}{2}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
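The half-ensemble structure of \algo{parallel} vectorizes naturally. In the sketch below (our own illustration, not the \project{\thisplain}\ implementation) the list comprehensions over `log_p` are exactly the loop that could be farmed out to multiple cores, since each proposal in a half depends only on the frozen other half.

```python
import numpy as np

def parallel_stretch_step(ensemble, log_p, rng, a=2.0):
    """One parallel stretch-move update: each half of the ensemble is
    updated using only the other half, so the log_p evaluations within
    a half are independent and could run in parallel."""
    K, N = ensemble.shape
    half = K // 2
    for this, other in [(slice(0, half), slice(half, K)),
                        (slice(half, K), slice(0, half))]:
        S = ensemble[this]                      # walkers being updated (a view)
        C = ensemble[other]                     # complementary half
        js = rng.integers(0, half, size=half)   # random partners X_j
        z = (1.0 + (a - 1.0) * rng.uniform(size=half)) ** 2 / a
        y = C[js] + z[:, None] * (S - C[js])    # vectorized proposals
        log_q = ((N - 1) * np.log(z)
                 + np.array([log_p(v) for v in y])  # the parallelizable loop
                 - np.array([log_p(v) for v in S]))
        accept = np.log(rng.uniform(size=half)) <= log_q
        S[accept] = y[accept]                   # writes through to `ensemble`
    return ensemble

# Usage: a few hundred steps on a 2-D Gaussian with K = 40 walkers.
rng = np.random.default_rng(1)
log_p = lambda v: -0.5 * float(np.sum(v ** 2))
walkers = rng.standard_normal((40, 2))
for _ in range(200):
    walkers = parallel_stretch_step(walkers, log_p, rng)
```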
\section{Tests} \sectlabel{tests}
Judging the convergence and performance of an algorithm is a
non-trivial problem and there is a huge associated literature
\citep[see, for example,][for a review]{Cowles:1996}. In astrophysics,
spectral methods have been used extensively \citep[for
example][]{Dunkley:2005}. Below, we advocate for one such method: the
autocorrelation time. The autocorrelation time is especially applicable
because it is an affine invariant measure of the performance.
First,
however, we should take note of another extremely important measurement:
the acceptance fraction \ensuremath{a_f}. This is the fraction of proposed steps that are
accepted. There appears to be no agreement on the optimal acceptance rate
but it is clear that both extrema are unacceptable. If $\ensuremath{a_f} \sim 0$, then
nearly all proposed steps are rejected, so
the chain will have very few independent samples and the sampling will not be
representative of the target density. Conversely, if $\ensuremath{a_f} \sim 1$ then nearly
all steps are accepted and the
chain is performing a random walk with no regard for the target density so
this will also not produce representative samples. As a rule of thumb, the
acceptance fraction should be between $0.2$ and $0.5$
\citep[for example,][]{Gelman:1996}. For the M--H algorithm,
these effects can generally be counterbalanced by decreasing (or increasing,
respectively) the eigenvalues of the proposal distribution covariance. For
the stretch move, the parameter $a$ effectively controls the step size so
it can be used to similar effect. In our tests, it has never been
necessary to use a value of $a$ other than $2$, but we make no guarantee that
this is the optimal value.
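The counterbalancing described above can be automated crudely: halve the proposal scale of a Gaussian-proposal M--H chain when $\ensuremath{a_f}$ is too low, double it when $\ensuremath{a_f}$ is too high. This heuristic is our own illustration of the idea, not part of any package.

```python
import numpy as np

def tune_proposal_scale(log_p, x0, rng, scale, target=(0.2, 0.5), n_steps=2000):
    """Adjust a Gaussian proposal scale until the measured acceptance
    fraction a_f falls in the rule-of-thumb target range."""
    x = np.array(x0, dtype=float)
    a_f = 0.0
    for _ in range(20):                  # a few tuning rounds
        accepted = 0
        for _ in range(n_steps):
            y = x + scale * rng.standard_normal(x.shape)
            if np.log(rng.uniform()) <= log_p(y) - log_p(x):
                x, accepted = y, accepted + 1
        a_f = accepted / n_steps
        if a_f < target[0]:
            scale *= 0.5                 # too many rejections: shrink steps
        elif a_f > target[1]:
            scale *= 2.0                 # too many acceptances: grow steps
        else:
            break
    return scale, a_f

# Usage: start with a far-too-small scale on a 1-D standard normal.
rng = np.random.default_rng(7)
scale, a_f = tune_proposal_scale(lambda v: -0.5 * float(np.sum(v ** 2)),
                                 [0.0], rng, scale=0.01)
```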
\paragraph{Autocorrelation time} \sectlabel{acor}
The autocorrelation time is a direct measure of the number of evaluations of
the posterior PDF required to produce independent samples of the target
density. \citetalias{Goodman:2010} show that the stretch-move algorithm
has a significantly shorter autocorrelation time on several non-trivial
densities. This means that fewer PDF computations are required---compared
to a M--H sampler---to produce the same number of independent samples.
The autocovariance function of a time series $\vector{X} (t)$ is
\begin{equation}
C_f (T) = \lim_{t \to \infty} \mathrm{cov}
\left [ f\left (\vector{X}(t+T) \right ),
f\left (\vector{X}(t) \right ) \right ].
\end{equation}
This measures the covariances between samples at a time lag $T$. The
value of $T$ where $C_f(T) \to 0$ measures the number of samples that
must be taken in order to ensure independence. In particular, the
relevant measure of sampler efficiency is the integrated autocorrelation
time
\begin{equation}
\tau_f = \sum_{T=-\infty} ^{\infty} \frac{C_f(T)}{C_f(0)}
= 1+2\sum_{T=1} ^{\infty} \frac{C_f(T)}{C_f(0)}.
\end{equation}
In practice, one can estimate $C_f (T)$ for a Markov chain of $M$ samples as
\begin{equation}
C_f (T) \approx \frac{1}{M-T} \sum_{m=1}^{M-T}
\left [ f(X(T+m)) - \expect{f} \right ] \,
\left [ f(X(m)) - \expect{f} \right ].
\end{equation}
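This estimator translates directly into code. The fixed summation window in the sketch below is a simplification of ours; production estimators such as \project{acor} choose the window adaptively.

```python
import numpy as np

def integrated_autocorr_time(x, window=50):
    """Estimate tau_f for a scalar chain by summing the normalized
    empirical autocovariance C_f(T)/C_f(0) up to a fixed lag window."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    c0 = np.mean((x - mu) ** 2)
    tau = 1.0
    for T in range(1, window):
        cT = np.mean((x[T:] - mu) * (x[:-T] - mu))  # C_f(T) estimator
        tau += 2.0 * cT / c0
    return tau

# Usage: independent samples give tau of about 1, while an AR(1)
# chain with coefficient phi has tau = (1 + phi) / (1 - phi).
rng = np.random.default_rng(3)
iid = rng.standard_normal(20000)
ar = np.empty(20000)
ar[0] = 0.0
for t in range(1, len(ar)):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()
```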
We advocate for the autocorrelation time as a measure of sampler
performance for two main reasons. First, it measures a quantity
that \emph{we are actually interested in} when sampling in practice.
The longer the autocorrelation time, the more samples that we must
generate to produce a representative sampling of the target
density. Second, the autocorrelation time is affine invariant. Therefore,
it is reasonable to measure the performance and diagnose the convergence
of the sampler on densities with different levels of anisotropy.
\project{\thisplain}\ can optionally calculate the autocorrelation time using the Python
module \project{acor}.\footnote{\url{http://github.com/dfm/acor}}
This module is a direct port of the original
algorithm \citepalias[described by][]{Goodman:2010} and implemented by those
authors in
C++.\footnote{\url{http://www.math.nyu.edu/faculty/goodman/software/acor}}
\section{Discussion \& Tips}\sectlabel{advice}
The goal of this project has been to make a sampler that is a useful
tool for a large class of data analysis problems---a ``hammer'' if you
will. If development of statistical and data-analysis understanding
is the key goal, a user who is new to MCMC benefits enormously by
writing her or his own Metropolis--Hastings code (\algo{mh}) from
scratch before downloading \project{\thisplain}. For typical problems, the
\project{\thisplain}\ package will perform better than any home-built M--H code (for
all the reasons given above), but the intuitions developed by writing
and tuning a self-built MCMC code cannot be replaced by reading this
document and running this pre-built package. That said, once those
intuitions are developed, it makes sense to switch to \project{\thisplain}\ or a
similarly well engineered piece of code for performance on large
problems.
Ensemble samplers like \project{\thisplain}\ require some thought for initialization.
One general approach is to start the walkers at a sampling of the
prior or spread out over a reasonable range in parameter space.
Another general approach is to start the walkers in a very tight
$N$-dimensional ball in parameter space around one point that is
expected to be close to the maximum probability point. The first is
more objective but, in practice, we find that the latter is much more
effective if there is any chance of walkers getting stuck in low
probability modes of a multi-modal probability landscape. The walkers
initialized in the small ball will expand out to fill the relevant
parts of parameter space in just a few autocorrelation times. A third
approach would be to start from a sampling of the prior, and go
through a ``burn-in'' phase in which the prior is transformed
continuously into the posterior by increasing the ``temperature.''
Discussion of this kind of annealing is beyond the scope of this
document.
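The first two initialization strategies above can be written down in a couple of lines; the array shapes, prior box, and initial guess below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 100, 3                      # 100 walkers in a 3-D parameter space

# (a) spread over a reasonable range, here a hypothetical box [-5, 5]^N
p0_prior = rng.uniform(-5.0, 5.0, size=(K, N))

# (b) tight Gaussian ball around a guess of the maximum-probability point
guess = np.array([1.0, -2.0, 0.5])
p0_ball = guess + 1e-4 * rng.standard_normal((K, N))
```

With strategy (b), the ball expands to fill the relevant parts of parameter space within a few autocorrelation times, as described above.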
It is our present view that autocorrelation time is the best indicator
of MCMC performance (the shorter the better), but there are several
proxies. The easiest and simplest indicator that things are going
well is the acceptance fraction; it should be in the 0.2 to 0.5 range
\citep[there are theorems about this for specific problems;
for example][]{Gelman:1996}. In principle,
if the acceptance fraction is too low, you can raise it by decreasing
the $a$ parameter; and if it is too high, you can reduce it by
increasing the $a$ parameter. However, in practice, we find that
$a=2$ is good in essentially all situations. That means that when
using \project{\thisplain}\ \emph{if the acceptance fraction is getting very low,
something is going very wrong}. Typically a low acceptance fraction
means that the posterior probability is multi-modal, with the modes
separated by wide, low probability ``valleys.'' In situations like
these, the best idea (though expensive of human time) is to split the
space into disjoint single-mode regions and sample each one
independently, combining the independently sampled regions
``properly'' (also expensive, and beyond the scope of this document)
at the end. In previous work, we have advocated clustering methods to
remove multiple modes \citep{Hou:2011}. These work well when the
different modes have \emph{very} different posterior probabilities.
Another proxy for short autocorrelation time is large expected or mean
squared jump distance (ESJD; \citealt{Pasarica:2010}). The higher the
ESJD the better; if walkers move (in the mean) a large distance per
chain step then the
autocorrelation time will tend to be shorter. The ESJD is not an
affine-invariant measure of performance, and it doesn't have a
trivial interpretation in terms of independent samples, so we prefer
the autocorrelation time in principle. In practice, however, because
the ESJD is a simple expectation value it can be more robustly
evaluated on short chains.
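Because the ESJD is a simple expectation value, computing it from a stored chain is a one-liner; here we assume the chain is stored as an array of shape (steps, walkers, dimensions).

```python
import numpy as np

def expected_squared_jump_distance(chain):
    """Mean squared jump distance of a chain with shape
    (steps, walkers, dim): the average over steps and walkers of
    the squared Euclidean displacement per step."""
    jumps = np.diff(chain, axis=0)             # per-step displacement vectors
    return np.mean(np.sum(jumps ** 2, axis=-1))
```

A chain that never moves has ESJD zero; a chain taking unit steps in every coordinate of a 2-D space has ESJD 2.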
With \project{\thisplain}\ you want (in general) to \emph{run with a
large number of walkers}, like hundreds. In principle, there is no
reason not to go large when it comes to walker number, until you hit
performance issues. Although each step takes twice as much compute
time if you double the number of walkers, it also returns to you twice
as many independent samples per autocorrelation time. So go large.
In particular, we have found that---in almost all cases of low
acceptance fraction---increasing the number of walkers improves the
acceptance fraction. The one disadvantage of having large numbers of
walkers is that the burn-in phase (from initial conditions to
reasonable sampling) can be slow; burn-in time is a few
autocorrelation times; the total run time for burn-in scales with the
number of walkers. These considerations, all taken together, suggest
using the smallest number of walkers for which the acceptance fraction
during burn-in is good, or the number of samples you want out at the
end (see below), whichever is \emph{greater}. A more ambitious
project would be to increase the number of walkers after burn-in; this
requires thought beyond the scope of this document; it can be
accomplished by burning in a set of small ensembles and then merging
them into a big ensemble for the final run.
One mistake many users of MCMC methods make is to take \emph{too many}
samples! If all you want your MCMC to do is produce one- or
two-dimensional error bars on two or three parameters, then you only
need dozens of independent samples. With ensemble sampling, you
get this from a \emph{single snapshot} or single timestep, provided
that you are using dozens of walkers (and we would recommend that you
use hundreds in most applications). The key point is that \emph{you
should run the sampler for a few (say 10) autocorrelation times.}
Once you have run that long, no matter how you initialized the
walkers, the set of walkers you obtain at the end should be an
independent set of samples from the distribution, of which you rarely
need many.
Another common mistake, of course, is to run the sampler for \emph{too
few} steps. You can identify that you haven't run for enough steps
in a couple of ways: If you plot the parameter values in the ensemble
as a function of step number, you will see large-scale variations over
the full run length if you have gone less than an autocorrelation
time. You will also see that if you try to measure the
autocorrelation time (with, say, \project{acor}), it will give you a time that
is always a significant fraction of your run time; it is only when the
correlation time is much shorter (say by a factor of 10) than your run
time that you are sure to have run long enough. The danger of both of
these methods---an unavoidable danger at present---is that you can
have a huge dynamic range in contributions to the autocorrelation
time; you might think it is 30 when in fact it is 30\,000, but you
don't ``see'' the 30\,000 in a run that is only 300 steps long. There
is not much you can do about this; it is generic when the posterior is
multi-modal: The autocorrelation time within each mode can be short but
the mode--mode migration time can be long. See above on low
acceptance ratio; in general when your acceptance ratio gets low your
autocorrelation time is very, very long.
There are some cases where \project{\thisplain}\ won't perform as well as some
more specialized sampling techniques. In particular, when the target density
is multi-modal, walkers can become ``stuck'' in different modes. When
this happens, the vector between walkers is no longer a good proposal
direction. In these cases, the acceptance fraction and autocorrelation
time can deteriorate quickly. While this is a fairly general problem, we
find that in many applications the effect isn't actually very important.
That being said, there are some problems where higher-end machinery
\citep[such as][Hou et al.\ forthcoming]{dnest} is necessary \citep[see, for
example,][]{brewer2012,vh2013}.
Another limitation to the stretch move and moves like it is
that they implicitly assume that the parameters can be assembled into
a vector-like object on which linear operations can be performed.
This is not (trivially) true for parameters that have non-trivial
constraints, like parameters that must be integer-valued or
equivalent, or parameters that are subject to deterministic non-linear
constraints. Sometimes these issues can be avoided by
reparameterization, but in some cases, samplers like \project{\thisplain}\ will not
be useful, or might require clever or interesting improvements. The
\project{\thisplain}\ package is open-source software; please push us changes!
\acknowledgments It is a pleasure to thank
Eric Agol (UWash),
Jo Bovy (IAS),
Brendon Brewer (Auckland),
Jacqueline Chen (MIT),
Alex Conley (Colorado),
Will Meierjurgen Farr (Northwestern),
Andrew Gelman (Columbia),
John Gizis (Delaware),
Fengji Hou (NYU),
Jennifer Piscionere (Vanderbilt),
Adrian Price-Whelan (Columbia),
Hans-Walter Rix (MPIA),
Jeremy Sanders (Cambridge),
Larry Widrow (Queen's), and
Joe Zuntz (Oxford)
for helpful contributions to the ideas and code presented here.
This project was partially supported by the NSF (grant AST-0908357), NASA
(grant NNX08AJ48G), and DOE (grant DE-FG02-88ER25053).
\project{\thisplain}\ makes use of the open-source Python \project{numpy}\ package.
\section{Introduction}
The Goal-Conditioned Reward Sparse (GCRS) task is a challenging reinforcement learning setting with extremely sparse rewards. In such a task, the goal is combined with the current state as the policy input, and the agent receives a positive reward only when the goal is achieved. In many cases, the GCRS task is also considered a Multi-Goal task, where the goal is not fixed and can be anywhere in the state space.
Therefore the policy has to learn a general solution that can be applied to a set of similar tasks.
For example, robotic object grasping is such a GCRS task: the target object could be anywhere on the table, the robot has to adjust its arm to reach the object and then grasp it.
The objective of learning such a policy is to find a feasible path from the current state to the goal~\cite{tamar2016value}. Similar tasks include the Multi-Goal benchmarks in robotics control~\cite{plappert2018multi}.
In previous works, reward shaping \cite{ng1999policy}, hierarchical reinforcement
learning~\cite{dietterich2000hierarchical,barto2003recent}, curriculum
learning~\cite{bengio2009curriculum}, and learning from demonstrations~\cite{schaal1997learning,atkeson1997robot,argall2009survey,hester2018deep,nair2018overcoming}
were proposed to tackle the challenges of learning through sparse rewards. These approaches provide manual guidance from different perspectives. Besides, Hindsight Experience Replay (HER)~\cite{kaelbling1993learning,NIPS2017_7090} was proposed
to relabel failed trajectories and assign hindsight credits as a complement to the primal sparse
rewards, which is still a kind of Temporal Difference learning and relies on reward values.
Recently the Policy Continuation with Hindsight Inverse Dynamics (PCHID)~\cite{sun2019policy} is proposed
to learn with hindsight experiences in a supervised learning manner, but the sample efficiency is still limited by the explicit curriculum setting.
In this work, we aim at improving the sample efficiency and stability of solving these GCRS tasks with an alternative approach based on supervised learning.
Specifically, by formulating the exploration in GCRS tasks as a random walk in the state space, solving the GCRS task is then equivalent to decreasing the first hitting time (FHT) in the random walk.
The main idea is to encourage the policy producing trajectories that have shorter FHTs. With such a self-imitation manner, the policy learns to reach more and more \emph{hindsight goals}~\cite{NIPS2017_7090} and becomes more knowledgeable to extrapolate its skills to solve the task.
Based on this formulation, we propose a new method for GCRS tasks that follows a self-imitation
learning approach and is independent of rewards. Our agent learns from its own success or
hindsight success, and extrapolates its knowledge to new situations, enabling the learning
process to be executed in a much more efficient supervised learning manner.
\begin{figure*}[t]
\centering
\begin{minipage}[htbp]{0.80\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/fig1.pdf}
\end{minipage}%
\caption{Illustration of Evolutionary Stochastic Policy Distillation. The behavior policy $\pi_B$ is composed of a deterministic policy $\pi_T$ and a stochastic term $\tilde{\eta}$ for exploration. We first generate a batch of trajectories with $\pi_B$, and then use a SELECT function to select the transitions that $\pi_B$ finishes with a shorter FHT than $\pi_T$ and store the corresponding HIDs in a buffer. Finally, we improve $\pi_T$ with supervised learning and use the updated policy to generate new samples.}
\label{fig0}
\end{figure*}
We summarize our contributions as follows:
\begin{enumerate}
\item By modeling GCRS tasks as random walks in the state space, we provide a novel Stochastic Differential Equation (SDE) formulation of policy learning and establish the connection between policy improvement and the reduction of FHT.
\item To reduce the FHT from the SDE perspective, we propose a new method called Evolutionary Stochastic Policy Distillation (ESPD), which combines the mechanism of Evolution Strategy and Policy Distillation, as a self-imitation learning approach for the GCRS tasks.
\item We demonstrate the proposed method on the Fetch robotics benchmark and show that our method can work in isolation to solve GCRS tasks with better sample efficiency than the previous methods HER and PCHID.
\end{enumerate}
\section{Preliminaries}
\noindent \textbf{Markov Decision Process.}
We consider a Markov Decision Process (MDP) denoted by a tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{R},\mathcal{T},\gamma, d_{s_0})$, where $\mathcal{S}$ is the continuous state space and $\mathcal{A}$ the continuous action space. $\mathcal{R}$ is the reward function; in this work we focus on the special case where the reward is binary, i.e., tasks with sparse rewards. The transition distribution is in general given by $\mathcal{P}(s_{t + 1}| s_t, a_t)$, but here we consider deterministic dynamics, which can be simplified as a mapping $\mathcal{T}:\mathcal{S}\times\mathcal{A}\to\mathcal{S}$. The discount factor is denoted by $\gamma \in [0, 1]$ and $d_{s_0}$ is the start state distribution.
Given a policy $\pi(a|s)$, let
$J(\pi) = \mathbb{E}_{\pi}[\sum_{t=0}^T \gamma^t r(s_t)]$ denote the discounted expected
return, and an optimal policy $\pi^* = \arg \max_\pi J(\pi)$ maximizes that return.
\noindent \textbf{Universal Value Function Approximator and Multi-Goal RL.}
The Universal Value Function Approximator (UVFA)~\cite{pmlr-v37-schaul15} extends the state space
of \textit{Deep Q-Networks (DQN)}~\cite{mnih2015human} to include goal state as part of
the input, which is useful in the setting where there are multiple goals to achieve.
Moreover, in the work of Schaul et al.~\cite{pmlr-v37-schaul15} it is shown that, in the UVFA setting, the learned policy
can be generalized to previously unseen state-goal pairs. Specifically, let $\mathcal{S}_{(S,G)} = \mathcal{S} \times \mathcal{G}$
denote the \textit{extended state space} of $\mathcal{M}$ where $\mathcal{G}$ is a finite goal space. Normally, a representation mapping $m(\cdot): \mathcal{S}\to\mathcal{G}$ is assumed to be known in such
multi-goal RL frameworks~\citep{plappert2018multi}.
Since the goal is fixed within an episode, the transition function $\mathcal{T}': \mathcal{S}_{(S,G)} \times \mathcal{A} \to \mathcal{S}_{(S,G)}$
on the extended state space can be induced from the original transition $\mathcal{T}$ as
$\mathcal{T}'((s, g), a) = (\mathcal{T}(s, a), g).$
Hence, in order to achieve the goal $g$, the agent
must reach a certain state $s_g$ such that $g = m(s_g)$.
\noindent \textbf{Hindsight Experience Replay.}
Learning with sparse rewards remains challenging because it is difficult to reach the reward through random exploration. Hindsight Experience Replay (HER)~\cite{NIPS2017_7090} proposes to relabel failed rollouts as successful ones to deal with the goal-oriented sparse reward problem. In each episode, the agent receives a reward when reaching either the original goal or a relabeled goal, by storing both the original transitions $((s_t,g),a_t,(s_{t+1},g),r)$ and the relabeled transitions $((s_t,g'),a_t,(s_{t+1},g'),r')$ in the replay buffer. Here $g'$ is a hindsight goal that is visited in the following steps, i.e., there exists $k$ such that $g' = m(s_{t+k})$, and $r'$ is the hindsight reward, which is $+1$ when the hindsight goal is achieved.
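The relabeling mechanic can be sketched in a few lines. The sketch below follows the ``future'' relabeling strategy under our own simplifying conventions (episodes as lists of $(s_t, a_t, s_{t+1})$ tuples, a caller-supplied state-to-goal mapping $m$); function and variable names are illustrative, not HER's reference implementation.

```python
import numpy as np

def her_relabel(episode, m, k_future=4, rng=None):
    """For each transition, emit up to k_future extra transitions whose
    goal is the mapped image of a state visited later in the episode;
    the hindsight reward r' is +1 when that goal is achieved at t+1."""
    rng = rng or np.random.default_rng()
    relabeled = []
    T = len(episode)
    for t, (s, a, s_next) in enumerate(episode):
        n_future = T - t - 1
        if n_future == 0:
            continue                      # no later states to relabel from
        picks = rng.choice(np.arange(t + 1, T),
                           size=min(k_future, n_future), replace=False)
        for idx in picks:
            g = m(episode[idx][0])        # hindsight goal from a later state
            r = 1.0 if np.allclose(m(s_next), g) else 0.0
            relabeled.append(((s, g), a, (s_next, g), r))
    return relabeled

# Usage: a toy 1-D episode where states and goals coincide (m = identity).
episode = [(0, 1, 1), (1, 1, 2), (2, 1, 3)]
out = her_relabel(episode, m=lambda s: s, rng=np.random.default_rng(0))
```

Note that whenever the picked state is the very next one, the relabeled transition is a hindsight success with $r' = 1$, which is what gives the otherwise-failed rollout a learning signal.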
\section{Related Work}
\noindent \textbf{Learning with Experts and Policy Distillation.}
Policy Distillation (PD) was proposed to extract the policy of a trained RL agent with
a smaller network to improve the efficiency as well as the final performance or
combine several task-specific agents together~\cite{rusu2015policy}. Latter extensions
proposed to improve the learning efficiency~\cite{schmitt2018kickstarting}, enhance
multi-task learning~\cite{teh2017distral,arora2018multi}. All of those methods start from a trained expert agent or human expert experience
that can solve a specific task~\cite{czarnecki2019distilling}. In comparison, our proposed method focuses on extracting knowledge from stochastic behaviors, and the distilled policy itself is a feasible policy for the primal task.
In PD, a deterministic teacher policy is provided and can be queried to generate enough data to teach the student policy. However, in ESPD all the teacher policies are unknown, as they are \textbf{stochastic variants} of the student policy. We regard those stochastic variants as coming from \textbf{unknown deterministic oracle policies} and distill each of them through its single corresponding trajectory. In ESPD, the teacher model is unknown and cannot be queried as in PD.
\noindent \textbf{Evolution Strategies and Parameter Noise.}
The Evolution Strategy (ES) was proposed by~\cite{salimans2017evolution} as an alternative
to standard RL approaches, where the prevailing temporal difference based value function
updates or policy gradient methods are replaced as perturbations on the parameter space
to resemble the evolution. Later on, \cite{campos2018importance} improved the efficiency
of ES by means of importance sampling. Besides, the method was also extended to be combined
with Novelty-Seeking to further improve the performance~\cite{conti2018improving}. Thereafter, \cite{plappert2017parameter} proposed to use parameter noise as an alternative to action-space noise injection for better exploration. They show that such a perturbation
on the parameter space can not only be used for ES methods, but can also be collected to improve sample efficiency by combining it with traditional RL methods.
While previous ES algorithms apply perturbations in the parameter space and keep the
best-performing variants, our approach implicitly executes policy evolution by distilling
better behaviors; it can therefore be regarded as an evolutionary method based on
action-space perturbation.
\noindent \textbf{Supervised and Self-Imitate Approaches in RL.}
Recently, several works have proposed using supervised learning to improve the stability and
efficiency of RL. \cite{zhang2019policy} proposed to utilize supervised learning to tackle the
problem of overly large gradients in policy gradient methods. To improve sample efficiency,
that work first designs a target distribution proposal and then uses supervised
learning to minimize the distance between the present policy and the target policy distribution.
The Upside-Down RL (UDRL) proposed by \cite{schmidhuber2019reinforcement} uses supervised
learning to map states and rewards into action distributions, and therefore acts as a normal
policy in RL. Their experiments show that the proposed UDRL method outperforms several
baseline methods~\cite{srivastava2019training}. In the work of~\cite{sun2019policy}, a curriculum learning
scheme is utilized to learn policies recursively.
The self-imitation idea relevant to ESPD is also discussed in the concurrent work of~\cite{ghosh2019learning}, but ESPD further uses the SELECT function to improve the quality of collected data for self-imitation learning.
\section{Method}
\subsection{Problem Formulation}
\begin{figure*}[t]
\centering
\begin{minipage}[htbp]{0.85\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/fig2.pdf}
\end{minipage}%
\caption{Illustration of the selection process: first we generate episodes with the stochastic behavior policy $\pi_B$, which is composed of the deterministic target policy $\pi_T$ and a noise term drawn from a Gaussian; then we check the superiority of the generated transitions over the target policy. If $\pi_T$ cannot find a shorter path for a transition generated by $\pi_B$, the select function returns True and the transition is stored for Stochastic Policy Distillation. Therefore, $\pi_T$ continuously evolves to solve more sub-tasks, \ie, transitions.}
\label{illu_fig}
\end{figure*}
In a given goal-oriented reinforcement learning task, we assume there exists an unknown
metric $\mathcal{D}(s_t, g)$ that represents the distance between the current state $s_t$
and the desired goal state $g$. For example $\mathcal{D}(s,g)$ is the Euclidean distance in
barrier-free navigation tasks; or the Manhattan distance in navigation tasks with obstacles.
A feasible solution of the task should be a policy $\pi(s_t,g)$ that outputs an action
$a_t$, such that the distance $\mathcal{D}(s_{t+1},g)\le\mathcal{D}(s_t,g)$ for deterministic
dynamics, or $\mathbb{E}[\mathcal{D}(s_{t+1},g)]\le\mathbb{E}[\mathcal{D}(s_t,g)]$
for stochastic dynamics.
We assume $\mathcal{D}(s,g)$ is continuous and differentiable on $s$, and
$\mathrm{d}s = -\xi\frac{\partial \mathcal{D}(s,g)}{\partial s} \mathrm{d}t$ is a feasible move, as it decreases the distance between $s$ and $g$ when $\xi$ is sufficiently small.
We further assume the state is a vector; the state transition $\Delta s_t = s_{t+1} - s_t$ is determined by both the dynamics $\phi_s(\cdot): \mathcal{S}\times\mathcal{A} \rightarrow \Delta \mathcal{S}$ and the action
$a_t = \pi(s_t,g)$ provided by the policy. We may then write a sufficient condition for
a feasible policy:
\begin{equation}
\phi_s(\pi(s,g)) = -\xi\frac{\partial \mathcal{D}(s,g)}{\partial s} \mathrm{d}t,
\end{equation}
we further assume $\phi_s^{-1}(\cdot)$ exists\footnote[2]{The assumption can be relaxed to the existence of a pseudo-inverse: $\phi_s^{-1}(\phi_s(a)) \in A'$, where $A'$ is a set s.t. $\forall a'\in A'$, $\phi_s(a') = \phi_s(a)$.}, \ie~$\forall a \in \mathcal{A}$,
$\phi_s^{-1}(\phi_s(a)) = a$. Hence, by parameterizing the policy
$\pi(s,g)$ with $\theta$, we have
\begin{equation}
\pi_\theta(s,g) = \phi_s^{-1}\left(-\xi\frac{\partial \mathcal{D}(s,g)}{\partial s}\mathrm{d}t\right),
\end{equation}
is a feasible policy, \ie, it tends to solve the GCRS task as it continuously minimizes the distance between the current state and the goal.
The above equation tells us that, in order to learn a well-performing policy, the policy should
learn two unknown functions: the inverse dynamics $\phi_s^{-1}(\cdot)$ and the derivative
of the distance metric $\mathcal{D}(s,g)$ with respect to the state $s$.
The work of~\citet{sun2019policy} proposed PCHID to use Hindsight Inverse Dynamics (HID)
as a practical policy learning method in such GCRS tasks. Specifically, in Inverse Dynamics,
a model parameterized by $\theta$ is optimized by minimizing the mean square error of predicted
action $\hat{a}_t$ and executed action $a_t$ between adjacent states $s_t$ and $s_{t+1}$, \ie
~$\theta = \mathop{\arg\min}_{\theta} (a_t - \hat{a}_t)^2 = \mathop{\arg\min}_{\theta} (a_t - \pi_{\theta}(s_t,s_{t+1}))^2$. The
HID revises the latter $s_{t+1}$ with its goal correspondence $g_{t+1} = m(s_{t+1})$, where the
mapping $m: {\mathcal{S}}\to \mathcal{G}$ is assumed to be known in normal GCRS task settings
and satisfies $\forall s\in{\mathcal{S}}$, $r(s,m(s)) = 1$. In the single-step transition setting, the learning objective of the policy is to learn HID by
\begin{equation}
\label{eq_hid}
\theta = \mathop{\arg\min}_{\theta} (a_t - \hat{a}_t)^2 = \mathop{\arg\min}_{\theta} (a_t - \pi_{\theta}(s_t,g_{t+1}))^2.
\end{equation}
Eq.\ref{eq_hid} shows that HID can be used to train a policy with supervised learning by minimizing the prediction error. However, to obtain a more capable policy that can solve harder cases,
training the policy only with 1-step HID is not enough. The work of PCHID then proposed to check the
optimality of multi-step transitions with a TEST function and learn multi-step HID recursively. Such an explicit curriculum learning strategy is not efficient as multi-step transitions can only be collected after the
convergence of previous sub-policies.
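For concreteness, the hindsight relabeling that turns a rollout into HID training tuples can be sketched as follows. The toy 1-D rollout, the identity goal mapping $m$, and the helper name \texttt{make\_hids} are our own illustrative assumptions, not part of the original PCHID implementation:

```python
def make_hids(states, actions, m=lambda s: s, horizon=4):
    """Relabel one rollout into Hindsight Inverse Dynamics tuples.
    For every step t and every lookahead k <= horizon, the achieved
    state s_{t+k} is mapped to a hindsight goal g' = m(s_{t+k}) and
    paired with the action a_t actually taken at s_t."""
    hids = []
    for t in range(len(actions)):
        for k in range(1, horizon + 1):
            if t + k < len(states):
                hids.append((states[t], m(states[t + k]), actions[t]))
    return hids

# A 5-step rollout on a 1-D chain; m is the identity mapping.
states = [0, 1, 2, 3, 4, 5]
actions = [1, 1, 1, 1, 1]
tuples = make_hids(states, actions, horizon=2)
print(len(tuples), tuples[0])   # 9 (0, 1, 1)
```

Each tuple $(s_t, g', a_t)$ can then be screened and, if kept, used as a supervised training pair.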
Here we interpret how PCHID works from the SDE perspective.
Practically, a policy is always parameterized as a neural network trained from scratch
to solve a given task. At the beginning, the policy is not yet goal-oriented enough to be a
feasible policy. With random initialization, the policy network will simply perform
random actions regardless of the state and goal taken as inputs. We use a coefficient
$\epsilon$ to model the goal awareness of the policy, e.g., $\epsilon=0$ denotes a purely random policy, and $\epsilon=1$ denotes a fully goal-aware policy. In order to collect diverse experiences and improve our target policy, we follow traditional RL approaches and assume a random noise term denoted by $N$
with coefficient $\sigma$ for exploration. Hence, the behavioral policy becomes:
\begin{equation}
\label{equation_4}
\pi_{\mathrm{behave}} =\pi_{\theta}(s,g) + \sigma N
= \epsilon \phi_s^{-1}\left(-\xi\frac{\partial \mathcal{D}(s,g)}{\partial s}\mathrm{d}t\right) + \sigma N.
\end{equation}
The behavioral policy above combines a deterministic term and a stochastic term; in
practice the noise can be implemented as Gaussian noise or OU-noise~\cite{lillicrap2015continuous}. Although we assume a deterministic
policy $\pi_{\theta}(s,g)$ here, the extension to stochastic policies is straightforward, e.g., the network can predict the mean and standard deviation of an action to form a Gaussian
policy family, and Mixture Density Networks~\cite{bishop1994mixture} can be used for more powerful policy representations.
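A minimal sketch of this behavioral policy, under the simplifying assumptions of a Euclidean distance $\mathcal{D}(s,g)=||g-s||_2$ and identity dynamics (so $\phi_s^{-1}$ is the identity); the coefficient values below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_policy(s, g, eps=0.8):
    # Deterministic part: epsilon-scaled negative gradient of
    # D(s, g) = ||g - s||_2 with respect to s, i.e. a step toward g.
    d = g - s
    return eps * d / (np.linalg.norm(d) + 1e-8)

def behavior_policy(s, g, sigma=0.3):
    # Stochastic part: Gaussian action-space exploration noise.
    return target_policy(s, g) + sigma * rng.normal(size=np.shape(s))

s, g = np.zeros(2), np.ones(2)
a = behavior_policy(s, g)
print(a.shape)   # (2,)
```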
With such a formulation, the PCHID can be regarded as a method that \emph{explicitly}
learns the inverse dynamics with HID, and progressively learns the metric $\mathcal{D}(s,g)$
with Policy Continuation (PC). In this work, we justify that the approach can be extended to a more
efficient synchronous setting that \emph{implicitly} learns the inverse dynamics $\phi_s^{-1}(\cdot)$
and the derivatives of distance metric $\mathcal{D}(\cdot,\cdot)$ with regard to state $s$ at the same time.
The key insight of our proposed method is \emph{minimizing the First Hitting Time~\cite{alili2005representations} of a drifted random walk} (Eq.\ref{equation_4}).
Concretely, the simplest case of Eq.\ref{equation_4} is navigating in the Euclidean space,
where the distance metric is $\mathcal{D}(s,g) = ||g-s||_2$ and the transition dynamics is the identity
mapping, \ie~$\phi_s(a) = a \in \mathcal{A}\subset \mathcal{S} $, and by applying Gaussian
noise on the action space, we have
\begin{equation}
\pi(s,g) = \mathrm{d}s = \epsilon\frac{g-s}{||g-s||_2} \mathrm{d}t + \sigma \mathrm{d}W_t,
\label{equation_norm_ou}
\end{equation}
which is a Stochastic Differential Equation (SDE). As our learning objective is to
increase the probability of reaching the goal within a finite time horizon, the problem can
be formulated as minimizing the First Hitting Time (FHT)
$\tau = \inf\{t>0: s_t = g \,|\, s_0 > g\}$, \ie~hitting
the goal in the state space. In practice, the goal state is always a region in the
state space~\cite{plappert2018multi}, and therefore the task is to cross the region as
soon as possible.
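The FHT view can be made concrete with a small Euler--Maruyama simulation of the 1-D version of Eq.\ref{equation_norm_ou}. This is a hypothetical sketch: the goal region, step size, and coefficient values are arbitrary choices of ours:

```python
import numpy as np

def mean_fht(eps, sigma, s0=0.0, goal=1.0, tol=0.05,
             dt=0.01, max_steps=5000, n_paths=500, seed=0):
    """Monte-Carlo estimate of the mean First Hitting Time of the region
    |s - g| <= tol under ds = eps * sign(g - s) dt + sigma dW (1-D case)."""
    rng = np.random.default_rng(seed)
    s = np.full(n_paths, s0)
    hit_t = np.full(n_paths, np.inf)
    alive = np.ones(n_paths, dtype=bool)
    for step in range(1, max_steps + 1):
        drift = eps * np.sign(goal - s[alive])
        s[alive] += drift * dt + sigma * np.sqrt(dt) * rng.normal(size=alive.sum())
        just_hit = alive & (np.abs(s - goal) <= tol)
        hit_t[just_hit] = step * dt
        alive &= ~just_hit
        if not alive.any():
            break
    return hit_t[np.isfinite(hit_t)].mean()

# A more goal-aware policy (larger drift eps) hits the goal region sooner.
print(mean_fht(eps=1.0, sigma=0.3), mean_fht(eps=0.2, sigma=0.3))
```

Increasing the drift coefficient (the goal awareness $\epsilon$) shortens the estimated FHT, which is exactly the quantity ESPD seeks to minimize.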
\subsection{Evolutionary Stochastic Policy Distillation}
As analyzed in the previous SDE formulation, the learning objective is to minimize the FHT, which provides the advantage of learning $k$-step transitions \textbf{simultaneously}. This advantage in principle cannot be exploited by PCHID, as its PC step requires an explicit curriculum to guarantee the optimality of sub-policies.
In this section, we propose a practical algorithm that combines evolutionary
strategy with policy distillation to minimize FHT. Specifically, ESPD maintains a target deterministic policy
$\pi_T$, parameterized as a policy network, and a behavioral stochastic policy $\pi_B$
\begin{equation}
\pi_B = \pi_T + \tilde{\eta}, \quad \tilde{\eta}\sim\mathcal{N}(0,\sigma^2),
\label{equation_noise}
\end{equation}
according to Eq.\ref{equation_4}, \ie~the behavior policy comes from adding Gaussian exploration noise to the target policy $\pi_T$, as in previous deterministic policy learning literature~\cite{lillicrap2015continuous,fujimoto2018addressing}. For the policy update step, ESPD uses the evolutionary idea of distilling the well-performing behavior policies, in terms of FHT, into the target policy, instead of applying the policy gradient or a zeroth-order approximation of the policy gradient~\cite{salimans2017evolution} to the target policy.
\begin{algorithm}[t]
\caption{ESPD}
\label{Algorithm1}
\begin{algorithmic}
\STATE \textbf{Require}
\STATE ~~1. a reward function $r(s,g) = 1$ if $g = m(s)$ else $0$
\STATE ~~2. a Horizon list $\mathcal{K} = [1,2,...,K]$
\STATE Initialize target policy $\pi_T(s,g)$, $\pi_B(s,g) = \pi_T(s,g) + \tilde{\eta}, \tilde{\eta}\sim\mathcal{N}(0,\sigma^2)$, replay buffer $\mathcal{B} = \{\}$
\FOR{episode $= 1,M$}
\STATE Generate $s_0$, $g$ by the environment
\FOR{$t = 1,T$}
\STATE Select an action by the behavior policy $a_t = \pi_B(s_t,g)$%
\STATE Execute the action $a_t$ and get the next state $s_{t+1}$
\ENDFOR
\FOR{$t = 1, T$}
\FOR{$k = 1, K$}
\STATE Calculate additional goal according to $s_{t+k}$ by $g' = m(s_{t+k})$
\IF{ SELECT($s_t,g'$) = True}
\STATE Store $(s_t,g',a_t)$ in $\mathcal{B}$
\ENDIF
\ENDFOR
\ENDFOR
\STATE Sample a minibatch $B$ from buffer $\mathcal{B}$
\STATE Optimize target policy $\pi_T(s_t,g')$ according to Eq.\ref{eq_sl}
\STATE Update behavior policy $\pi_B = \pi_T + \tilde{\eta}, \tilde{\eta}\sim\mathcal{N}(0,\sigma^2)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
Concretely, during training, $\pi_B$ first interacts with the environment and collects a batch of transition samples, permitting us to generate a batch of HIDs, regardless of their optimality. These HIDs contain a set of transition tuples $(s,g',a)$, where $g'$ denotes the hindsight goal, \ie, the starting point, the finally achieved goal, and the corresponding action are included in each of these transition tuples. From an oracle perspective, these HIDs can be regarded as generated by a series of \emph{unknown deterministic policies} instead of a known stochastic policy $\pi_B$, each providing an individual solution for the state-goal pair task $(s,g')$. Among these unknown oracle-policies, some are better than our current target policy $\pi_T$ in terms of FHT, which means they are able to solve a certain state-goal pair task in fewer steps, or they are able to solve some sub-tasks while $\pi_T$ is not. Although we are not able to access these well-performing oracle-policies directly, we can distill the useful knowledge from them to $\pi_T$ through their corresponding HIDs.
In practice, we use a SELECT function to distinguish the HIDs that outperform $\pi_T$ and store them in a buffer $\mathcal{B} = \{(s_i,g'_i,a_i)\}_{i=1,2,...}$. The SELECT function can be implemented in different ways: (1) reset the environment to a given previous state, which is always tractable in simulation~\cite{nair2018overcoming}; (2) use classifiers, dynamic models, or heuristics~\cite{sun2019policy}. In this work we adopt (1) and leave the usage of model-based SELECT functions to future work.
To implement (1), the SELECT function takes in an episode generated by $\pi_B$. Suppose the episode $(s_t,a_t,s_{t+1},a_{t+1},...,s_{t+k})$ is of length $k$; the SELECT function resets the environment to the starting state of this episode, $s_t$, and runs $\pi_T$ for up to $k$ steps, trying to reach the final achieved state $s_{t+k}$, \ie, at every step, the action $\pi_T(s, m(s_{t+k}))$ is performed. If $\pi_T$ is \textbf{NOT} able to reach $s_{t+k}$ within $k$ steps, the corresponding transition tuple $(s_t,m(s_{t+k}),a_t)$ will be collected in the buffer $\mathcal{B}$, and $\pi_T$ will learn from these tuples later. The procedure is illustrated in Fig.\ref{illu_fig}.
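A sketch of variant (1) of the SELECT function, using a toy resettable environment; the environment, the \texttt{reset\_to} interface, and both policies are our own illustrative assumptions:

```python
import numpy as np

class LineEnv:
    """Toy resettable environment: states are integers, actions in {-1,0,+1},
    deterministic transition s' = s + a. Stands in for a simulator that
    supports resetting to an arbitrary previous state."""
    def reset_to(self, state):
        self.s = state
        return self.s
    def step(self, action):
        self.s += int(np.clip(action, -1, 1))
        return self.s

def select(env, start, achieved_goal, k, policy):
    """Return True if the HID tuple should be stored, i.e. the current
    target policy fails to reach `achieved_goal` from `start` in k steps."""
    s = env.reset_to(start)
    for _ in range(k):
        s = env.step(policy(s, achieved_goal))
        if s == achieved_goal:
            return False  # pi_T already solves this sub-task: discard
    return True           # pi_T failed: keep the HID for distillation

random_policy = lambda s, g: 0          # never moves
greedy_policy = lambda s, g: np.sign(g - s)

env = LineEnv()
print(select(env, start=0, achieved_goal=3, k=3, policy=random_policy))  # True
print(select(env, start=0, achieved_goal=3, k=3, policy=greedy_policy))  # False
```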
Then, we apply Stochastic Policy Distillation (SPD) to distill the knowledge from the well-performing oracle-policies to $\pi_T$, so that $\pi_T$ may evolve to be more capable of
tackling the same sub-tasks. Specifically, we use supervised learning to minimize the difference between the action stored in the HID buffer and the action $\pi_T$ predicts. The SPD is conducted as
\begin{equation}
\label{eq_sl}
\pi_T = \mathop{\arg\min}_{\pi_T} \frac{1}{N}\sum_{i=1}^N(\pi_T(s_i,g'_i) - a_i)^2,
\end{equation}
where $(s_i, g'_i, a_i)$ are sampled from the HID buffer $\mathcal{B}$.
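Eq.\ref{eq_sl} is plain mean-squared-error regression and can be sketched with a linear policy on a toy identity-dynamics task; the synthetic buffer and all hyper-parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic HID buffer B of selected tuples (s, g', a); with identity
# dynamics the ground-truth action is a = g' - s (toy assumption).
S = rng.normal(size=(512, 2))
G = rng.normal(size=(512, 2))
A = G - S
X = np.concatenate([S, G], axis=1)        # policy input: (s, g')

W = np.zeros((4, 2))                      # linear target policy pi_T = X @ W
for _ in range(2000):                     # SPD: minibatch MSE regression
    idx = rng.integers(0, len(X), size=64)
    xb, ab = X[idx], A[idx]
    grad = 2 * xb.T @ (xb @ W - ab) / len(xb)
    W -= 0.05 * grad
mse = np.mean((X @ W - A) ** 2)
print(f"distillation loss: {mse:.2e}")
```

The learned weights recover the feasible policy $a = g' - s$, \ie~the inverse dynamics and the distance gradient are learned implicitly from the selected HIDs.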
From this point of view, the ESPD method
is composed of an evolution strategy and policy distillation: a stochastic behavior policy $\pi_B$ acts as a perturbation on the action space and produces diverse strategies (\emph{a population}), and we choose the well-performing strategies and distill their knowledge
into $\pi_T$ (a \emph{selection}). Fig.\ref{fig0} provides an illustration of the learning pipeline and Algorithm \ref{Algorithm1} presents the detailed learning procedure of ESPD.
\begin{figure}[t]
\begin{minipage}[htbp]{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{ICML20-RL/figures/Fetch.pdf}
\end{minipage}%
\caption{Three robotic manipulation environments. FetchPush-v1: push a block to a goal position; FetchSlide-v1: slide a puck to a goal position; FetchPickAndPlace-v1: lift a block into the air.}
\label{fig_env}
\end{figure}
\begin{figure*}[t]
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/Push_main.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/Slide_main.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/PAP_main.pdf}
\end{minipage}%
\caption{The test success rate comparison on the FetchPush-v1, FetchSlide-v1 and FetchPickAndPlace-v1 among our proposed method (ESPD), ESPD without the SELECT function (W/O Sel.), PCHID, HER and Evolution Strategies (ES).}
\label{main_results}
\end{figure*}
It is worth noting that the term \textit{Evolutionary} in ESPD refers to the
\textbf{selection} procedure. Moreover, although plenty of policy gradient methods leverage action-space noise in exploration~\cite{lillicrap2015continuous,schulman2017proximal,fujimoto2018addressing}, there is a clear gap between those perturbations and the evolutionary insight in our method:
ESPD regards the \textbf{stochastic variants} as coming from a series of unknown \textbf{oracle deterministic policies}, then uses a selection function as a filter for the well-performing unknown oracle policies, and finally distills those transitions into the present policy. In contrast, previous policy gradient methods do not contain such a selection step, but instead use \textbf{all} those variants to estimate the value function and improve the policy through the policy gradient accordingly.
\section{Experiments}
\subsection{Result on the Fetch Benchmarks}
\begin{figure*}[t]
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/Push_factor.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/Slide_factor.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/PAP_factor.pdf}
\end{minipage}%
\caption{The test success rate comparison on the FetchPush-v1, FetchSlide-v1 and FetchPickAndPlace-v1 with different scale of exploration factors.}
\label{exp_factor}
\end{figure*}
\begin{figure*}[t]
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/Push_horizon.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/Slide_horizon.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/PAP_horizon.pdf}
\end{minipage}%
\caption{The test success rate comparison on the FetchPush-v1, FetchSlide-v1 and FetchPickAndPlace-v1 with different scales of Horizon $K$.}
\label{horizon}
\end{figure*}
We demonstrate the proposed method on the Fetch Benchmarks. Specifically,
we evaluate our method on the FetchPush, FetchSlide and
FetchPickAndPlace environments, as shown in Fig.\ref{fig_env}. We compare our proposed method
with HER~\cite{NIPS2017_7090,plappert2018multi} as released in OpenAI
Baselines~\cite{baselines} and with Evolution Strategies~\cite{salimans2017evolution}, a counterpart of our method with parameter noise. We also include PCHID as a baseline; however, as PCHID~\cite{sun2019policy} can be regarded as a special case of ESPD obtained by gradually increasing the hyper-parameter Horizon in ESPD from $1$ to $K$, the performance of PCHID is upper-bounded by ESPD. Such a result can also be inferred from our ablation study on the Horizon $K$ in the next section, which shows that a smaller $K$ limits the performance and achieves worse learning efficiency than the default $K=8$ used in ESPD.
Fig.\ref{main_results} shows the comparison of the different approaches. For each environment, we conduct 5 experiments with different random seeds and plot the averaged learning curve. In Fig.\ref{main_results}, we also show the ablated experiment result obtained by turning off the SELECT step in ESPD, denoted as W/O Sel. Such an ablation study demonstrates the importance of the SELECT step. ESPD shows superior learning efficiency and learns to solve the task in fewer episodes in all three environments.
\subsection{Sensitivity to Hyper-Parameter}
\paragraph{Exploration Factor}
The exploration factor $\sigma$ controls the randomness of the behavior policy and therefore determines the diversity of the generated samples. A larger $\sigma$ encourages exploration by generating samples with large variance, while a smaller $\sigma$ generates samples with less variance but more bias. Hence we need to select a proper $\sigma$ to balance variance and bias.
Fig.\ref{exp_factor} shows our ablation study on the selection of different exploration factors. The results are generated with 5 different random seeds.
We find that in all environments the exploration factor $\sigma = 1$ provides sufficient exploration and relatively high learning efficiency.
\paragraph{Horizon $K$}
In our proposed method, the parameter of Horizon $K$ determines the maximal length of sample trajectories the policy can learn from.
Intuitively, a smaller $K$ decreases the learning efficiency, as the policy is limited by its short horizon, making it hard to plan for tasks that need more steps to solve.
On the other hand, a larger $K$ provides a better picture of the local as well as global geometry of the state space, and thus the agent may learn to solve more challenging tasks.
However, using a large $K$ introduces more interactions with the environment and needs more computation time.
Moreover, as the tasks normally do not need many steps to finish, when the Horizon gets too large, more noisy actions will be collected and considered as better solutions, impeding the learning performance.
Fig.\ref{horizon} shows our ablation studies on the selection of Horizon $K$. The results are generated with 5 different random seeds. We find that $K = 8$ provides satisfying results in all of the three environments.
It is worth noting that PCHID can be seen as a special case of ESPD obtained by gradually increasing the hyper-parameter Horizon in ESPD from 1 to $K$.
From this perspective, as shown in Fig.\ref{horizon}, where at the beginning of learning a smaller Horizon leads to poor performance, the performance of PCHID is upper-bounded by ESPD.
\section{Conclusion}
This work developed a practical algorithm that evolves to solve GCRS problems by distilling knowledge from a series of stochastic variants of a learned agent. The key insight behind our proposed method is based on the SDE formulation of GCRS tasks: such tasks can be solved by learning to reduce the First Hitting Time.
Based on this formulation, we proposed Evolutionary Stochastic Policy Distillation (ESPD), which is composed of an evolutionary action-space perturbation, a SELECT procedure that filters the collected data, and a policy distillation step that optimizes the policy. Our experiments on the OpenAI Fetch benchmarks show that the proposed method has much improved sample efficiency as well as stability compared with previous baseline methods, namely PCHID, Evolution Strategies, and Hindsight Experience Replay.
\end{document}
\section{Method}
In this work, we propose to handle constraints in the most straightforward way: an early termination is triggered whenever the learning policy violates the constraints.
Such an early termination has previously been used as a trick to improve the sample efficiency of solving regular MDPs~\cite{wang2019benchmarking}:
terminating bad trajectories accelerates the learning process, since the policy space to search is reduced and the time horizon is shortened. Moreover, we do not need to learn how to proceed after violations, as an \textit{ideal} policy should never break the constraints.
We first introduce two types of constraints in CMDPs in Section~\ref{method_classification}. We show that the early-termination trick used in locomotion tasks is indeed an intuitive approach for solving CMDPs with, what we call, loose constraints. In Section~\ref{sec_etmdp}, we define the ET-MDP as the foundation of our proposed method. The ET-MDP enables algorithms previously designed for MDPs to solve CMDP tasks. We discuss some practical issues in Section~\ref{sec_prac_issues} and introduce our algorithm to solve the ET-MDP efficiently in Section~\ref{sec_context_td3}.
\subsection{Constraint Types}
\label{method_classification}
To show the relationship between normal MDPs and CMDPs, as well as to better illustrate the inspiration that links CMDPs with early termination, we first unify the CMDP and MDP formulations in the loose-constrained case:
MDPs can be regarded as loose-constrained CMDPs when the constraints do not change their optimal solution. Thus those CMDPs can be solved by the same policy trained in their early-termination counterparts. We then extend a similar idea to the other case, where constraints are tight.
We start with the definition of the learning objective. If we denote a policy set satisfying the constraints $C$ as
\begin{equation}
\label{eq_constrained_class}
\Pi^c = \{\text{any policy } \pi: \sum_{t=1}^H c(s_t,\pi(s_t))\le C \},
\end{equation}
then the learning objective of Eqn.(\ref{cmdp}) becomes $\max_{\pi\in\Pi^c} \mathbb{E}_{\tau\sim\pi, \mathcal{T}}[\sum_{t=1}^H r_t]$. The two types of constraints differ in whether the optimal policy lies in the constrained policy class (Eqn.(\ref{eq_constrained_class})) or not.
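For intuition, membership in $\Pi^c$ can be checked by rolling out a policy and accumulating costs; the toy chain environment and both policies below are our own construction:

```python
def in_constrained_class(policy, dynamics, cost, s0, H, C):
    """Check the defining condition of the constrained policy class on a
    deterministic rollout: feasible iff the cumulative cost stays within C."""
    s, total_cost = s0, 0.0
    for _ in range(H):
        a = policy(s)
        total_cost += cost(s, a)
        s = dynamics(s, a)
    return total_cost <= C

# Toy chain: moving right costs 1 per step, staying costs 0.
dynamics = lambda s, a: s + a
cost = lambda s, a: abs(a)
always_right = lambda s: 1
stay = lambda s: 0
print(in_constrained_class(always_right, dynamics, cost, s0=0, H=10, C=5))  # False
print(in_constrained_class(stay, dynamics, cost, s0=0, H=10, C=5))          # True
```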
\paragraph{Loose Constraints}
In model-free RL, early termination is often used as a default environment setting to accelerate learning~\cite{duan2016benchmarking,wang2019benchmarking}. In such problems, early termination is usually applied when the agent reaches some undesired state, e.g., the center of mass gets lower than a certain threshold. We call such constraints loose, because the solution of the CMDP is the same as that of the MDP without constraints:
\begin{equation}
\pi^*=\arg\max_{\pi\in\Pi} \mathbb{E}_{\tau\sim\pi, \mathcal{T}}[\sum_{t=1}^H r_t] \in \Pi^c.
\end{equation}
In such CMDPs, considering the constraints or not does not change the final policy, as the optimal solution automatically learns not to violate the constraints. Such loose constraints have been shown to accelerate learning~\cite{pham2018constrained}.
\paragraph{Tight Constraints}
In other cases, such as navigation in a space with barriers or lava, the barriers or lava act as constraints and clearly change the optimal solution for navigating to the goal point, compared with an empty space where no constraint applies:
\begin{equation}
\pi^*=\arg\max_{\pi\in\Pi} \mathbb{E}_{\tau\sim\pi, \mathcal{T}}[\sum_{t=1}^H r_t] \notin \Pi^c.
\end{equation}
In such CMDPs, learning to solve the MDP without the constraints cannot yield a satisfying policy for the CMDP, as feasible behaviors of the agent must take the constraints into account.
Based on these insights, we investigate how to solve CMDPs through their early-terminated (ET) counterparts, namely the ET-MDPs. We show that the major challenge comes from the limited state visitation problem, which is illustrated in detail in Section~\ref{sec_context_td3}.
\subsection{Early Terminated MDP (ET-MDP)}
\label{sec_etmdp}
For any CMDP $(\mathcal{S}, \mathcal{A}, H, r, c, C, \mathcal{T})$, its ET-MDP is defined as a new unconstrained MDP $(\mathcal{S} \cup \{s_e\}, \mathcal{A}, H, r', \mathcal{T}')$, where $s_e$ is the absorbing state after termination. Generally speaking, the ET-MDP has history-dependent transition dynamics. To obtain a regular MDP, one can add an extra dimension to the state space that records the cumulative cost, denoted by $b_t = \sum_{\tau = 1}^{t} c_{\tau}$. Although $b_t$ takes values from a large set, the transition dynamics involving $b_t$ are known to the agent:
$\mathcal{T}'(s, b, a) = \mathcal{T}(s, a) \mathbbm{1}(b \leq C) + \mathbbm{1}(s = s_e, b > C)$. The reward function becomes $r'(s, b, a) = r(s, a)\mathbbm{1}(b \leq C) + r_{e}\mathbbm{1}(b > C)$ for some $r_e \in \mathbb{R}$. Since we are searching for a policy for the original CMDP, we still consider policies that are stationary with respect to $b$, $i.e.$, $\pi(s, b) \equiv \pi(s)$.
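This construction can be sketched as a thin wrapper around a CMDP step function (a hedged sketch of the definition above; the interface and the default value of the termination reward are illustrative, not from the paper's implementation):

```python
S_E = "absorbing"  # the absorbing state s_e

class ETMDP:
    """Early-terminated MDP: track the cumulative cost b_t and move
    to the absorbing state s_e with reward r_e once b_t exceeds C."""

    def __init__(self, cmdp_step, C, r_e=-100.0):
        self.cmdp_step, self.C, self.r_e = cmdp_step, C, r_e
        self.b = 0.0  # cumulative cost b_t

    def reset(self, s0):
        self.b = 0.0
        return s0

    def step(self, s, a):
        if s == S_E:  # absorbing: stay in s_e forever
            return S_E, self.r_e, True
        s2, r, c = self.cmdp_step(s, a)  # (next state, reward, cost)
        self.b += c
        if self.b > self.C:  # constraint violated: terminate early
            return S_E, self.r_e, True
        return s2, r, False
```

Setting `C = 0` recovers the binary setting, while a positive `C` gives the budget setting with $b_t$ tracked internally.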
\begin{proposition}
\label{prop_1}
For sufficiently small $r_e$, the optimal policy of the ET-MDP coincides with $\pi^*$ of the original CMDP. (The proof is given in the Appendix.)
\end{proposition}
Proposition 1 indicates that a CMDP can be solved through its ET-MDP counterpart as long as the termination reward $r_{e}$ is small enough, which is easy to implement in practice.
Intuitively, since an early-terminated episode is shorter than the original one, solving the ET-MDP should save samples. In the following, we quantify the benefit of solving a CMDP through its ET-MDP.
We now consider a special case, where $c(s, a) = \mathbbm{1}(\mathcal{T}(s, a) \in \mathcal{S}_c)$ for some $\mathcal{S}_c \subset \mathcal{S}$. Here a violation is caused by entering an invalid state, and we assume that $\mathcal{S}_c$ is an absorbing class. This is an important case in our experiments, as exploring the invalid space is unnecessary and early termination saves samples.
To quantify the benefit, we introduce a performance measure called regret: the difference between the total rewards of the optimal policy and the rewards received by the running algorithm $\mathcal{L}$:
$$
R_T(\mathcal{L}) = \sum_{k = 1}^{\lfloor T/H \rfloor} (V^c_{\pi^*} - V_{\pi_k}^c),
$$
where $V_{\pi}^c$ is the expected value function under policy $\pi$ and $\pi_k$ is the policy chosen for episode $k$. Regrets for deterministic MDPs can be lower and upper bounded.
\begin{theorem}[Theorem 3 in \cite{wen2013efficient}]
For any reinforcement learning algorithm $\mathcal{L}$ that takes as input a state space, an action space, and a horizon, there exists an MDP such that the regret $\sup_T R_T(\mathcal{L}) \geq 2H|\mathcal{S}||\mathcal{A}|$. Conversely, there exists an algorithm $\mathcal{L}$ such that for any MDP, the regret $\sup_T R_T(\mathcal{L}) \leq 2H|\mathcal{S}||\mathcal{A}|$.
\end{theorem}
The lower bound applies to CMDPs, since one can construct an MDP with an extremely loose constraint such that all policies are valid. The upper bound applies to our ET-MDP because, in this special case, $c(s, a)$ is either $0$ or $1$ and termination happens whenever a cost of $1$ is received; this makes it unnecessary to record the cumulative cost, so the ET-MDP is a regular MDP with state space $(\mathcal{S} \setminus \mathcal{S}_c) \cup \{s_e\}$.
\begin{corollary}
\label{cor:1}
There exists an algorithm $\mathcal{L}_{ET}$ for the ET-MDP such that for any algorithm $\mathcal{L}_c$ for the original CMDP, the ratio
$$
\frac{\sup_T R_T(\mathcal{L}_c)}{\sup_T R_T(\mathcal{L}_{ET})} \geq \frac{|\mathcal{S}|}{|\mathcal{S}| - |\mathcal{S}_c| + 1}.
$$
\end{corollary}
\begin{remark}[ET-MDPs reduce sample complexity]
\label{remark:2}
The above analysis ignores the fact that the ET-MDP does not have to run the full $H$ steps in each episode; when an algorithm is actually running, the dependence on $H$ therefore also decreases with the actual cutoffs.
\end{remark}
Corollary \ref{cor:1} shows that for tasks with a large invalid space, solving the ET-MDP is more efficient. It motivates applying similar methods to more complicated tasks, such as CMDPs for continuous control. When the termination trick is applied to loose-constrained tasks such as Walker2d, Hopper, and Humanoid in the MuJoCo Locomotion Suite~\cite{wang2019benchmarking}, we likewise avoid collecting samples from infeasible regions. We therefore investigate next whether solving the ET-MDP is a practically effective way to solve CMDPs.
\subsection{Practical Issues of ET-MDP}
\label{sec_prac_issues}
\subsubsection{Budget Tasks}
\label{sec_budget}
In general, there are two different empirical settings in CMDPs. The first is the case where there is a \textit{budget} of behavior costs: behaviors with some cost are not preferable but are permitted to some extent. Hence, to satisfy the constraints in Eqn.(\ref{cmdp}), the historical information of cumulative cost should be taken into account in every-step decision making. To achieve this, the primal state space $\mathcal{S}$ must be extended to $\mathcal{S}_{ext} = \mathcal{S}\oplus \mathcal{S}_{budget} \oplus \mathcal{S}_{time}$, where $\mathcal{S}_{budget}$ indicates the budget left in the episode and $\mathcal{S}_{time}$ provides information on the number of time steps left in the episode~\cite{pardo2018time}.
Previous works under this setting include the Safety-Gym~\cite{safety_gym_Ray2019} and the PointGather environment~\cite{achiam2017constrained}, where the budget is a fixed positive integer that indicates how many times the agent can reach a certain type of states.
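A minimal sketch of the extension $\mathcal{S}_{ext}$ (the normalization and field order are our own illustrative choices, not prescribed by the paper):

```python
def extend_state(s, cum_cost, t, C, H):
    """Append the remaining budget (the S_budget component) and the
    remaining time (the S_time component) to the raw state so that
    every-step decisions remain Markovian under a cost budget."""
    budget_left = (C - cum_cost) / max(C, 1)  # normalized S_budget
    time_left = (H - t) / H                   # normalized S_time
    return list(s) + [budget_left, time_left]
```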
\subsubsection{Binary Tasks}
On the other hand, there are cases where safety is considered so important that the constraints should never be broken at deployment time of a learned policy. We call this setting binary CMDPs, where binary refers to classifying a trajectory as either safe or unsafe. This is the relatively simple case: the constraints in Eqn.(\ref{cmdp}) simplify to $\sum_{t=0}^{\infty} c_t \le 0$, where $c_t = c > 0$ if the constraints are broken and $c_t = 0$ otherwise. Moreover, no extra effort is needed to keep the decision-making process Markovian.
For example, navigation in a space with lava belongs to this setting. Another example is the MuJoCo Locomotion Suite, where a hopper, walker, or humanoid simulator is required to move forward as fast as possible without ever falling down. In summary, this setting applies whenever there exists a solution that accomplishes the task without stepping into any positive-cost region.
\subsubsection{Empirical Tightened Approximation}
\label{sec_tightened_appx}
In this work, we propose to use a strict version of budget tasks: we show in experiments that treating a budget task as a binary task, i.e., not permitting the agent to reach any risky or costly region at all, is a practical approximation. Although there are environments where the CMDP cannot be solved if the budget is too small, we show in the next section that this approximation performs well empirically on most standard benchmarks. Analysis and examples of the cases where this implementation fails are provided in Appendix~\ref{counterexample}.
\begin{wrapfigure}{l}{8cm}
\centering
\includegraphics[width=0.5\columnwidth]{fig/State_Visit.pdf}
\caption{The difference in state visitation frequency between an MDP and an ET-MDP in a diagnostic 2-D navigation environment. \textbf{Left}: the environment; an agent starts from the central red point in each episode, and yellow lines denote lava, i.e., the danger zone. \textbf{Middle}: the state visitation frequency of a random agent in the MDP, which ignores the lava. \textbf{Right}: the state visitation frequency of a random agent in the ET-MDP with lava. The limited state visitation in the ET-MDP is the major challenge for existing RL algorithms.}
\label{fig_context_illu}
\vskip -0.2in
\end{wrapfigure}
\subsection{Solving ET-MDP with Context Models}
\label{sec_context_td3}
In the previous section, we have shown that CMDPs can be solved through their ET-MDP counterparts; the next step is to find suitable solvers for the ET-MDP task. Intuitively, solving an ET-MDP is similar to solving a normal MDP, as no constraints need to be taken into consideration, so any prevailing algorithm can be applied as an ET-MDP solver, such as TD3~\cite{fujimoto2018addressing}, SAC~\cite{haarnoja2018soft}, PPO~\cite{schulman2017proximal}, TRPO~\cite{schulman2015trust}, and Evolution Strategies (ES)~\cite{salimans2017evolution}.
However, an ET-MDP differs from normal MDPs in that there are potentially many terminal states $\mathcal{S}_{end}$; algorithms designed for normal MDP tasks easily get trapped in a limited set of states, leading to relatively low learning efficiency. A similar problem has been discussed and termed distribution shift in~\cite{agarwal2019theory}.
Figure~\ref{fig_context_illu} illustrates the difference in state visitation frequency between the normal MDP (middle) and the ET-MDP (right) under random exploration in a 2-D navigation environment, where the central red point denotes the starting point and the constraints are shown as yellow boundaries in the left panel. As all constraint-violating states lead to termination in the ET-MDP, the generalization ability of the learned policy becomes extremely important: learning algorithms that generalize better to previously unseen states will be more competent in such tasks.
To address this challenge, we propose to adopt the context models introduced in the Meta-RL literature~\cite{fakoor2019meta}.
While in previous work context models focus on generalization across a series of Meta-RL tasks, we apply them to a single ET-MDP task to tackle the limited state visitation problem. Context models in RL have been shown to learn generalizable representations between \textit{tasks} in meta-RL. We therefore regard solving the ET-MDP task (e.g., avoiding termination while collecting as much reward as possible) from different initial states as different tasks; the context models can then learn transferable representations over different \textit{initial states} and generalize learned policies to previously unseen states to avoid termination.
We use Gated Recurrent Units~\cite{cho2014learning} to model context variables as generalizable representations for solving ET-MDPs. We follow~\cite{fakoor2019meta} to build the context model on top of TD3~\cite{fujimoto2018addressing}. For training stability we use separate context networks, $i.e.$, $\mathcal{C}_{w_a}$ for the actor and $\mathcal{C}_{w_c}$ for the critic, such that both the actor $\pi$ and the critic $Q$ take an additional context variable as input:
\begin{align}
\pi = & \pi(s,z_a),\\
Q = & Q(s,a,z_c),
\end{align}
where $z_a = \mathcal{C}_{w_a}(\mathcal{Z}'_L)$, $z_c = \mathcal{C}_{w_c}(\mathcal{Z}'_L)$ and $\mathcal{Z}'_L$ is the previous $L$ step historical transitions: $\mathcal{Z}'_L=\{s_{t-L}, a_{t-L}, r_{t-L}, ..., s_{t-1},a_{t-1},r_{t-1}\}$. If $t-L\le0$, we use zero state $\boldsymbol{0}_s$, zero action $\boldsymbol{0}_a$ and zero reward $\boldsymbol{0}_r$ instead.
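The construction of $\mathcal{Z}'_L$, including the zero-padding for $t-L\le0$, can be sketched as follows (a pure-Python illustration; the tuple layout is our own choice):

```python
def history_window(transitions, t, L, zero_s, zero_a, zero_r=0.0):
    """Return Z'_L: the last L transitions (s, a, r) before step t,
    left-padded with zero state/action/reward when t - L <= 0."""
    window = []
    for i in range(t - L, t):
        if i < 0:  # before the start of the episode: pad with zeros
            window.append((zero_s, zero_a, zero_r))
        else:
            window.append(transitions[i])
    return window
```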
The context models ($\mathcal{C}_{w_a}, \mathcal{C}_{w_c}$) are optimized via the chain rule within the optimization of the actor and critic networks, with gradients
\begin{align}
\label{eq_upd_cwa}
&\Scale[0.9]{\nabla_{w_a} Q_{w_1}(s,a,z_c)|_{a=\pi_\theta(s,z_a)}\nabla_{z_a} \pi_\theta (s,z_a)|_{z_a = \mathcal{C}_{w_a}(\mathcal{Z}'_L)}\nabla_{w_a}\mathcal{C}_{w_a}(\mathcal{Z}'_L)},\\
&\nabla_{w_c} \textbf{TD}(Q(s,a,\mathcal{C}_{w_c}(\mathcal{Z}'_L)))
\end{align}
separately, where \textbf{TD} denotes the temporal difference error. Details of the proposed algorithm are provided in Algorithm~\ref{Algorithm2} in Appendix~\ref{detailed_algo}. In the next section we demonstrate the superiority of the proposed method in solving ET-MDPs.
\section{Conclusion}
We address safe exploration from the perspective of solving the Early-Terminated MDP (ET-MDP).
Different from previous CMDP formulations, where the constraints require ad-hoc algorithm design, we propose an equivalent formulation of these tasks as ET-MDPs, which leads to the same optimal value function and optimal policy as the original CMDP formulation.
To better exploit the potential benefit of solving CMDPs through their ET-MDP counterparts, we further introduce context models, which mitigate the limited state visitation problem in solving ET-MDPs and improve the empirical learning efficiency, in terms of better asymptotic return and lower cost, on various safe RL benchmarks.
\section{Introduction}
\label{submission}
While reinforcement learning (RL) achieves great success in solving challenging decision making problems, several critical issues need to be addressed before it can be adopted in real-world applications. RL safety, including safe optimization and safe exploration, is one of them. The learning paradigm of RL is composed of exploration and exploitation with experience from trial and error~\cite{sutton1998introduction}. RL agents therefore need to attempt a wide range of states and actions to better estimate their values, some of which are harmful and may lead to major damage.
To tackle the RL safety problem, \citet{altman1999constrained} defines the Constrained Markov Decision Processes (CMDPs), where the policy optimization of standard RL algorithms should be executed in a constraint-satisfied policy class.
Many deep RL approaches for CMDPs have been proposed since the rise of deep neural network function approximators. These works mainly focus on the optimization of CMDP tasks, $i.e.$, how to effectively convert a CMDP task into a solvable form. \citet{achiam2017constrained} extend the trust region methods~\cite{schulman2015trust} to CMDPs and guarantee the monotonicity of policy improvement; the Lagrangian methods, barrier (interior-point) methods from standard constrained optimization, and Lyapunov methods are extended to solve CMDPs based on their MDP counterparts in \citet{chow2017risk, taylor2020learning, liu2020ipo, cheng2019end, perkins2002lyapunov, chow2018lyapunov, sikchi2020lyapunov}; another approach is based on a safety critic, where an additional critic is learned beside the primal reward critic to predict the cost of possible behaviors~\cite{zhang2020cautious,bharadhwaj2020conservative,srinivasan2020learning}.
Almost all previous works on CMDPs are derived from on-policy methods, except for \citet{srinivasan2020learning}, which uses an off-policy reward critic. In contrast, our proposed safe-RL algorithm builds purely on off-policy algorithms, which are known to achieve better sample efficiency.
It is worth mentioning that although we mainly focus on off-policy methods in this work, in principle, the proposed framework can be also combined with other on-policy algorithms to solve safe-RL tasks.
In this work, we provide both theoretical analyses and empirical experiments to show that CMDPs can be efficiently solved through their early terminated counterparts, namely ET-MDPs, which terminate an episode whenever the constraints are violated. We first show under deterministic and tabular MDPs, early termination can filter out invalid episodes and improve sample complexity. We go further to explore if the same improvement holds for more complex cases in practice.
The challenge is that conventional RL algorithms like TD3~\cite{fujimoto2018addressing} are not directly suitable for ET-MDP tasks. The issue lies in the problem of \textbf{limited state visitation} during learning: intuitively, if we terminate a training episode whenever the agent violates the constraints, the agent's behavior is limited to a relatively small state space compared to exploration without the constraints, which limits learning efficiency~\cite{agarwal2019theory}. To solve the limited state visitation problem in ET-MDPs, we adopt the
idea of context models, previously introduced in the Meta-RL literature~\cite{fakoor2019meta,rakelly2019efficient} to improve the generality of policies across different training tasks. In our ET-MDP setting, a context variable is learned to improve the generality of the learned policy over different states, which enables our policy to behave safely over different states within a single task.
We evaluate our method on a range of CMDP environments, including both the deterministic and the stochastic environments with different types of constraints. These environments are the diagnostic 2D-Maze navigation task with different levels, stochastic navigation environments from the Safety-Gym~\cite{safety_gym_Ray2019}, the PointGather~\cite{achiam2017constrained}, and MuJoCo Locomotion environment with constraints~\cite{1606.01540}.
The proposed method shows a remarkably improved performance in terms of both learning efficiency and asymptotic performance under constraints.
\section{Experiments}
We evaluate our proposed method on a diverse set of environments, including (1) loose-constrained tasks (Hopper-Not-Fall, Walker-Not-Fall, Humanoid-Not-Fall), (2) static maze tasks with different levels, (3) stochastic navigation tasks (PointGoal1-v0, CarGoal1-v0), and (4) PointGather. The first two sets of environments are binary tasks, while the other two are budget tasks. Examples of the environments are shown in Figure~\ref{fig_envs}.
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/envs_etmdp.pdf}
\end{minipage}%
\caption{Examples of the tested environments: the first three figures show the diagnostic 2D-Nav tasks with different constraint levels; the next three show the budget tasks, where agents control a point or a car to collect reward without hitting cost regions too many times; the last three show loose-constrained tasks, where agents need to learn to move forward without falling.}
\label{fig_envs}
\end{center}
\vskip -0.2in
\end{figure}
We aim to validate the following claims in our experiments:
\begin{enumerate}
\item CMDPs can be solved by solving their ET-MDP counterparts with the context-based TD3. For tight-constrained CMDPs, early termination can help improve learning efficiency (Section~\ref{exp_tight}), and the tightened approximation in budget tasks achieves satisfying empirical performance.
\item While directly applying the standard RL algorithms like TD3 has the problem of limited state visitation, the context model mitigates the problem and improves the sample efficiency (Section~\ref{exp_state_visitation}).
\item Context-based TD3 can further improve the performance on loose-constrained tasks (Section~\ref{exp_loose}).
\end{enumerate}
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/PointGoal1_reward_NeurIPS_withSAC.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/CarGoal1_reward_NeurIPS.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/PointGather_reward_NeurIPS.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/CarGoal1_reward_cost_aware_NeurIPS.pdf}
\end{minipage}\\%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/PointGoal1_cost_NeurIPS_withSAC.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/CarGoal1_cost_NeurIPS.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/PointGather_cost_NeurIPS.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/CarGoal1_cost_cost_aware_NeurIPS.pdf}
\end{minipage}%
\caption{Results on the three budget tasks. The first three columns show the rewards and the costs of the different methods on the three environments respectively, while the last column compares learning with the extended state space against the tightened approximation, as discussed in Section~\ref{sec_tightened_appx}.}
\label{fig_main_budget}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Tight Constraints}
\label{exp_tight}
In this section, we evaluate different methods on environments where the constraints do change the optimal solution, i.e., tight-constrained problems as termed in the previous section.
\subsubsection{Binary Tasks}
\begin{wraptable}{l}{5cm}
\caption{Success rate of different methods on the diagnostic environment.}
\centering
\small
\begin{tabular}{lrr}
\toprule
Success Rate & Easy & Hard \\
\midrule
TD3 & 2/10 & 2/10 \\
CPO & 4/10 & 0/10 \\
PPO-Lag & 3/10 & 1/10 \\
Ours & \textbf{10/10} & \textbf{8/10}\\
\bottomrule
\end{tabular}
\vspace{-0.1in}
\end{wraptable}
We first experiment on a maze environment where an agent is asked to navigate to the goal point without stepping into the lava. The input to the agent is the coordinate of the current state, and actions are limited to $[-1,1]$. We generate four different levels of tasks. In all experiments the size of the maze is set to $16$, and the episode length is set to $32$, twice the side length. In each episode, the agent is initialized at the center of the maze. Stepping into the target position, located at the middle of the right edge, yields a $+30$ reward, and staying there continues to yield that reward; otherwise, a small penalty of $-0.1$ is applied at each time step.
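A minimal sketch of this diagnostic environment (the reward values follow the description above; the lava layout, the rounding of the continuous action onto the grid, and the class name are our own illustrative choices):

```python
class Maze2D:
    """2-D navigation: start at the center, reach the right-edge goal.
    Reaching (or staying on) the goal gives +30; every other step is
    penalized by -0.1; entering a lava cell incurs a cost of 1."""

    def __init__(self, size=16, horizon=32, lava=()):
        self.size, self.horizon, self.lava = size, horizon, set(lava)
        self.goal = (size // 2, size - 1)  # middle of the right edge

    def reset(self):
        self.t = 0
        self.pos = (self.size // 2, self.size // 2)  # center
        return self.pos

    def step(self, action):  # action is a pair in [-1, 1]^2
        dx = max(-1.0, min(1.0, action[0]))
        dy = max(-1.0, min(1.0, action[1]))
        x = min(self.size - 1, max(0, round(self.pos[0] + dx)))
        y = min(self.size - 1, max(0, round(self.pos[1] + dy)))
        self.pos, self.t = (x, y), self.t + 1
        cost = 1.0 if self.pos in self.lava else 0.0
        reward = 30.0 if self.pos == self.goal else -0.1
        done = self.t >= self.horizon
        return self.pos, reward, cost, done
```

Wrapping this environment with the ET-MDP construction of Section~\ref{sec_etmdp} then terminates an episode whenever a lava cell is entered.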
\begin{wrapfigure}{l}{5cm}
\centering
\begin{minipage}[htbp]{1.0\linewidth}
\vspace{0.in}
\centering
\includegraphics[width=1.0\linewidth]{fig/DangerZone.pdf}
\end{minipage}%
\caption{Experiment Results on the diagnostic 2D navigation environment.}
\label{fig_dangerzone}
\vspace{-0.4in}
\end{wrapfigure}
In these tasks, the constraints are binary, as the agent is not permitted to step into lava while navigating to the target point. Figure~\ref{fig_main_maze} shows the superior performance of ET-MDP with Context-TD3 in solving these constrained MDP tasks with binary constraints. Our method enables off-policy methods to be applied to constrained optimization with remarkable sample efficiency.
\subsubsection{Budget Tasks}
For the budget tasks, we experiment on PointGoal1-v0, CarGoal1-v0, and PointGather to show the performance of our proposed method. In PointGoal1-v0, a mass point navigates in a 2-D maze to collect reward while avoiding dangerous regions, which will lead to a $+1$ cost. The budget for the cost is set to be $+25$ in our experiments~\cite{safety_gym_Ray2019}. In CarGoal1-v0, a car replaces the mass point in the previous environment to attain the same objective and the threshold is increased to $+50$ as the task is more challenging~\cite{stooke2020responsive,bharadhwaj2020conservative}. In PointGather, a mass point is asked to collect apples while avoiding bombs which will lead to a $+1$ cost, and the cost budget is set to $0.1$~\cite{achiam2017constrained}, i.e., the agent is permitted to run into a bomb every ten games on average.
As shown in Section~\ref{sec_budget}, the previous cost information should be taken as an additional input for policies to satisfy the Markov property in these environments. Another approach is to leverage the tightened approximation in the ET-MDP, which converts the budget tasks into binary tasks: in all of these environments, the cost budget is set to $0$ and the episode is terminated whenever a cost is encountered.
Figure~\ref{fig_main_budget} shows our experiment results with the binary approximation. In all experiments, ET-MDP with Context TD3 is able to reach the best asymptotic performance in terms of both high reward and low cost.
We conduct ablation studies on the approximation discussed in Section~\ref{sec_tightened_appx} to evaluate its empirical performance. We experiment on the car navigation environment to compare the performance of the primal CMDP with the extended state space $\mathcal{S}_{ext} = \mathcal{S} \oplus \mathcal{S}_{budget}\oplus \mathcal{S}_{time}$ against the tightened approximation. The last column of Figure~\ref{fig_main_budget} shows the results: the tightened approximation leads to better constraint satisfaction, $i.e.$, lower cost, while achieving higher reward.
\subsection{Ablation study of context models under limited state visitation}
\label{exp_state_visitation}
We demonstrate the superiority of Context-TD3 over vanilla TD3 in the diagnostic environment, where the task is unquestionably an MDP, $i.e.$, the agent's decision should be made based only on its present state and does not depend on historical information. We hence claim that the improvement of Context-TD3 relies on better generalization ability rather than the memory mechanism proposed in previous works that also include recurrent networks in RL.
In this set of experiments, two different environments are generated to compare Context-TD3 and TD3. In the Random-Init environment, the initial position of the agent is uniformly distributed in the map while in the Fix-Init environment the initial position is fixed to the center of the map, which leads to a relatively limited state visitation.
Context-TD3 performs much better than vanilla TD3 in the Fix-Init environment, showing that the experiences collected in a limited region generalize better to unseen states when context models are introduced.
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Hopper_NeurIPS.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Walker_NeurIPS_NeurIPS.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Human_NeurIPS.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Init_NeurIPS.pdf}
\end{minipage}%
\caption{The first three figures show learning curves of TD3 and Context TD3 with/without early termination trick in three MuJoCo locomotion tasks; The last figure shows that context model can remarkably improve learning efficiency when the state visitation is limited.}
\label{fig_loose_constraints}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Loose Constraints}
\label{exp_loose}
We evaluate our method on the MuJoCo locomotion benchmarks, where early termination (ET) was previously applied as a default setting to benefit learning. This experiment shows that loose constraints such as \textit{center of mass higher than a certain threshold} are crucial for sample-efficient learning, as they greatly reduce the state space.
In this set of experiments, we remove the \textit{alive bonus} term from the reward during training; otherwise this term would always be the same (i.e., $1000$ for Hopper and Walker, $5000$ for Humanoid) and the rewards between environments with and without ET could not be compared fairly. In evaluation, the \textit{alive bonus} term is kept at its default setting so that the asymptotic performance can be compared to previous agents trained in the vanilla environments, giving a basic sense of what our policies have learned. The results are shown in Figure~\ref{fig_loose_constraints}: while both Context-TD3 and TD3 are able to learn locomotion skills when ET is applied, neither method succeeds when learning without ET. Moreover, Context-TD3 outperforms TD3 in terms of learning efficiency in all three environments.
\section{Related Work}
Learning RL policies under safety constraints~\citep{garcia2015comprehensive,amodei2016concrete,safety_gym_Ray2019} has become an important topic in the RL community due to safety concerns about RL in real-world applications. For example, \citet{richter2019open} applies RL to a simulated surgical robot, and \citet{kendall2019learning} implements an RL algorithm in an autonomous driving scenario. In those applications, the safety of the learned policy is critical and the policy should be optimized under safety constraints.
The common practice for this problem is to involve human interventions~\cite{saunders2018trial} or correction of the output action~\cite{dalal2018safe,van2020online} under uncertain conditions.
\citet{saunders2018trial} propose HIRL, a scheme for safe RL that requires extra manual effort to intervene when the agent produces actions that lead to catastrophic outcomes. \citet{dalal2018safe} equip the policy network with a safety layer that modulates the output action into an absolutely safe one. However, the linear layer is incapable of capturing the dynamics of complex environments, and it requires pre-training, which brings extra computation and risks of constraint violations.
\citet{achiam2017constrained} propose Constrained Policy Optimization (CPO), an analytical solution that solves CMDPs through trust region optimization.
The close relationship between CPO and the family of trust region methods~\cite{schulman2015trust} makes it difficult to implement and to extend to other existing RL algorithms.
Our context-based ETMDP approach, on the contrary, is highly flexible and can be implemented on top of various algorithms such as PPO~\cite{schulman2017proximal}, TRPO~\cite{schulman2015trust} and TD3~\cite{fujimoto2018addressing}.
Another straightforward approach to the soft constraint problem is the Lagrangian method~\cite{safety_gym_Ray2019}, which relaxes the hard-constrained optimization problem to an unconstrained one with an auxiliary penalty term.
An interesting result reported in~\cite{safety_gym_Ray2019} is that approximation errors prevent CPO from fully satisfying the constraint, whereas the simple Lagrangian method can find constraint-satisfying policies that attain nontrivial returns.
Note that while previous works have discussed the effectiveness of applying early termination~\cite{hamalainen2019visualizing} and absorbing states~\cite{geibel2005risk} in constrained RL to further improve the performance of their proposed methods~\cite{wachi2020safe}, we show that such an early-terminated MDP approach can be formulated in a more principled way. We also verify that it is capable of working in isolation to solve constrained RL tasks, given proper learning algorithms such as the one introduced in Section~\ref{sec_context_td3}.
\section{Preliminaries}
\label{sect:prel}
\textbf{Constrained Markov Decision Process.} The standard formulation of Constrained RL is the Constrained Markov Decision Process (CMDP), where an agent interacts with the environment under certain constraints. Here we consider the deterministic CMDP with a fixed horizon $H\in\mathbb{N}^+$ denoted by a tuple $(\mathcal{S},\mathcal{A}, H,r,c,C,\mathcal{T})$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action space; $r, c:\mathcal{S}\times\mathcal{A} \to \mathbb{R}$ denote the reward function and cost function;
$C\in \mathbb{R}^+$ is the upper bound on the permitted expected cumulative cost;
$\mathcal{T}: \mathcal{S} \times \mathcal{A} \mapsto \mathcal{S}$ denotes the transition function.
We use $\Pi$ to denote the stationary policy class, where $\Pi = \{\pi: \mathcal{S}\times\mathcal{A}\to[0,1],\sum_a\pi(a|s) = 1\}$.
An algorithm for CMDP is to find $\pi^*\in\Pi$ as the result of the following optimization problem,
\begin{equation}
\label{cmdp}
\begin{array}{ll}
\max_{\pi\in\Pi} \mathbb{E}_{\tau\sim\pi, \mathcal{T}}[\sum_{t=1}^H r_t],
\quad \text{s.t.} \quad \mathbb{E}_{\tau\sim\pi,\mathcal{T}}[\sum_{t=1}^H c_t] \le C,
\end{array}
\end{equation}
where the expectation is taken over the trajectory $\tau = (s_1, a_1, r_1, \dots, s_H, a_H, r_H)$ generated by policy $\pi$ under the environment $\mathcal{T}$.
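As a concrete illustration of the objective and constraint in Eqn.~(\ref{cmdp}), the following sketch rolls a policy through a toy deterministic environment and checks feasibility against the budget $C$; all function names here are illustrative assumptions, not part of any benchmark implementation.

```python
# Minimal sketch of the CMDP of Eqn. (1): roll out a policy in a toy
# deterministic environment and compare the cumulative cost against the
# budget C. All names here are illustrative, not the paper's implementation.

def rollout(policy, transition, reward, cost, s0, horizon):
    """Return (sum of rewards, sum of costs) for one deterministic episode."""
    s, total_r, total_c = s0, 0.0, 0.0
    for _ in range(horizon):
        a = policy(s)
        total_r += reward(s, a)
        total_c += cost(s, a)
        s = transition(s, a)
    return total_r, total_c

def is_feasible(policy, transition, reward, cost, s0, horizon, budget):
    """Check the CMDP constraint: cumulative cost must not exceed C."""
    _, total_c = rollout(policy, transition, reward, cost, s0, horizon)
    return total_c <= budget
```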
\textbf{Lagrangian Method.} The Lagrangian method relaxes the problem Eqn.(\ref{cmdp}) to an unconstrained optimization problem with a penalty term
\begin{equation}
\label{eq:lagrangian}
\pi^{*}=\max_{\pi\in\Pi}\min_{\lambda\ge0} \mathbb{E}_{\tau\sim\pi,\mathcal{T}}[\sum_{t=1}^H r_t-\lambda c_t] + \lambda C ,
\end{equation}
where $\lambda\ge0$ is known as the Lagrangian multiplier. Suppose the policy $\pi$ is parameterized by $\theta$, i.e., $\pi = \pi_\theta$; then the optimization over $\theta$ and $\lambda$ can be conducted iteratively through policy gradient ascent and stochastic gradient descent, respectively, according to Eqn.(\ref{eq:lagrangian}). \citet{chow2018lyapunov} points out that one possible defect of the Lagrangian method is the violation of constraints during training, which is addressed by our proposed method.
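The dual update implied by Eqn.~(\ref{eq:lagrangian}) can be sketched as a projected gradient step on $\lambda$; the step size and the estimator of the expected cumulative cost are illustrative assumptions.

```python
# Hedged sketch of the dual (lambda) update in the Lagrangian method of
# Eqn. (2): lambda is increased when the estimated cumulative cost J_c
# exceeds the budget C and decreased (but kept non-negative) otherwise.

def update_lambda(lam, estimated_cost, budget, step_size=0.1):
    """One projected gradient step on the dual variable lambda >= 0.

    The gradient of the Lagrangian w.r.t. lambda is (C - J_c), so gradient
    descent on lambda moves it by step_size * (J_c - C).
    """
    lam = lam + step_size * (estimated_cost - budget)
    return max(lam, 0.0)  # projection onto lambda >= 0
```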
\textbf{Constrained Policy Optimization.} \citet{achiam2017constrained} proposes the Constrained Policy Optimization (CPO), an analytical way to solve CMDP through trust region optimization. Specifically, CPO develops an approximation of Eqn.(\ref{cmdp}) by replacing the objective and constraints with surrogate functions~\citep{achiam2017constrained,schulman2015trust} and provides theoretical analysis on the worst case performance as well as constraint violation. In CPO, the policy is updated as:
\begin{equation}
\label{cpo}
\begin{array}{ll}
\pi_{k+1}=\arg\max_{\pi\in\Pi} \mathbb{E}[A_{r, 1}^{\pi_k}(s,a)],
\quad \text{s.t.} \quad \widetilde{J}_c(\pi_k)\le C, \quad \bar{D}_{KL}(\pi||\pi_k)\le \delta,
\end{array}
\end{equation}
wherein $\widetilde{J}_c(\pi_k) = \mathbb{E}_{\tau\sim\pi_k, \mathcal{T}}[\sum_{t=1}^H c_t] + \mathbb{E}_{s,a}[A_{c, 1}^{\pi_k}(s,a)]$, $k=0,1,...,K$. Here $A_{r, i}^{\pi_k}(s,a)$ and $A_{c, i}^{\pi_k}(s,a)$ denote the advantage functions of reward and cost at step $i$, respectively. CPO is closely connected to the $\theta$-projection approach of \citet{chow2018lyapunov}.
The close relationship between CPO and the family of trust region algorithms makes it difficult to implement and to extend to other existing RL algorithms. In contrast, our proposed approach is highly flexible and easy to implement on top of various algorithms.
\textbf{Context Models for Meta-RL.} Meta-RL aims to learn a good inductive bias of policy that can be quickly generalized to previously unseen tasks. In the meta-training phase, several tasks $\mathcal{D}_{train} = \{D_{(k)}\}_{k=1}^K$ are sampled from a task distribution $\mathcal{D}_{meta}$. In the meta-testing phase, $\mathcal{D}_{test} = \{D_{(k)}\}_{k=K+1}^N$ are sampled from the same task distribution.
Although the meta-optimization approaches~\cite{finn2017model,nichol2018first} have been successfully applied to various image classification tasks, their performance is relatively limited in RL tasks~\cite{fakoor2019meta}. Recent advances in context-based meta-RL~\cite{rakelly2019efficient} learn a latent representation of the task and construct a context model through recurrent networks~\cite{gers1999learning,cho2014learning}. In this work, we follow~\citet{fakoor2019meta} and use the simplest form of meta-training, i.e., the multi-task objective:
\begin{equation}
\hat{\theta}_{meta} = \arg \max_\theta \frac{1}{n} \sum_{k=1}^n \mathbb{E} [\ell^{(k)}(\theta)],
\end{equation}
where $\ell^{(k)}(\theta)$ denotes the objective evaluated on the $k$-th task $D_{(k)}$.
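The multi-task objective above amounts to averaging the per-task objectives over the sampled training tasks; a minimal sketch, with each task represented as a scalar-valued function of $\theta$:

```python
# Sketch of the multi-task meta-objective: the meta-parameter theta is
# scored by the average of the per-task objectives l^{(k)}(theta). Each
# task is an illustrative scalar function of theta.

def multi_task_objective(theta, task_objectives):
    """Average objective over the n sampled training tasks."""
    return sum(l(theta) for l in task_objectives) / len(task_objectives)
```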
\section{Proof for Proposition 1}
\label{prop_1_prf}
\begin{proof}
The value function of CMDP is defined on the feasible region $\Pi^c = \{\pi\in\Pi: \sum_{t=1}^H c(s_t,a_t)\le C, a_t=\pi(s_t), s_{t+1} = \mathcal{T}(s_t,a_t) \}$, where $\pi\in\Pi^{c}$ and
\begin{equation}
V_c^{\pi}(s) = \sum_{t=1}^{H} r(s_t,a_t), \text{ where } \quad a_t=\pi(s_t), s_{t+1} = \mathcal{T}(s_t,a_t)
\end{equation}
The learning objective is to find $\pi\in\Pi^{c}$ such that
\begin{equation}
V^*_c(s) = \max_{\pi\in \Pi^c} V_c^{\pi}(s).
\end{equation}
The value function for ET-MDP is defined similarly as normal MDP by
\begin{equation}
V_{ET}^{\pi}(s) = \sum_{t=1}^{H} r'(s_t, b_t, a_t), \text{ where } a_t=\pi(s_t), s_{t+1} = \mathcal{T}'(s_t, b_t,a_t), b_{t+1} = b_t + c_t.
\end{equation}
For any $\pi\in\Pi^c$, the trajectories are the same in the ET-MDP and its counterpart. We have
$
r'(s_t, b_t, a_t) = r(s_t, a_t)
$
for all $t \leq H$. Therefore, we have $V_c^{\pi} = V_{ET}^{\pi}$ for $\pi \in \Pi^c$.
The optimal value function of ET-MDP is defined over its optimal policy
\begin{equation}
V^*_{ET}(s) = \max \{ \max_{\pi\in \Pi^c} V_c^{\pi}(s), \max_{\pi \not\in \Pi^c} \sum_{t=1}^{h_{\pi}\le H} r(s_t,a_t)+r_e\},
\end{equation}
where $h_{\pi}$ is the step at which the constraint is violated.
Therefore, $V^*_{ET}(s) = V^*_c(s)$ for sufficiently small $r_e$, and the optimal state values are achieved by the same optimal policy $\pi^*\in\Pi^c$.
\end{proof}
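The ET-MDP construction used in the proof can be sketched as an episode loop that accumulates the cost in $b_t$ and terminates with the additional reward $r_e$ on the first constraint violation; the environment interface below is an assumption for illustration.

```python
# Illustrative ET-MDP episode: terminate as soon as the accumulated cost b
# exceeds the budget, adding the terminal reward r_e (assumed sufficiently
# small, i.e. very negative) at that point.

def et_mdp_episode(policy, transition, reward, cost, s0, horizon,
                   budget, r_e):
    """Run one episode of the early-terminated MDP; return total reward."""
    s, b, total_r = s0, 0.0, 0.0
    for _ in range(horizon):
        a = policy(s)
        total_r += reward(s, a)
        b += cost(s, a)
        if b > budget:           # constraint violated: terminate early
            return total_r + r_e
        s = transition(s, a)
    return total_r
```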
\section{Detailed Pseudo-Code of the Proposed Method}
\label{detailed_algo}
\begin{algorithm}[h]
\caption{Context TD3 in ET-MDP}
\label{Algorithm2}
\begin{algorithmic}[1]
\STATE Initialize critic networks $Q_{w_1}$, $Q_{w_2}$, actor network $\pi_{\theta}$
\STATE Initialize context models $\mathcal{C}_{w_a},\mathcal{C}_{w_c}$ for the actor and critic networks separately with recurrent networks.
\STATE Initialize target networks $w'_1 \leftarrow {w_1}$, $w'_2 \leftarrow {w_2}$, $\theta' \leftarrow {\theta}$
\STATE Initialize replay buffer $\mathcal{B} = \{\}$
\STATE Initialize a context queue $\mathcal{Z}_L$ with length $L$ by $\mathcal{Z}_L = [\boldsymbol{0}_s,\boldsymbol{0}_a,\boldsymbol{0}_r] \times L$, maintain a copy $\mathcal{Z}'_L \leftarrow \mathcal{Z}_L$
\FOR{$t = 1,2,...$}
\STATE Interact with environment and get transition tuple $(s,a,r,c,s')$, $r\leftarrow r+r_e$ if $c>0$.
\STATE Update the context queue $\mathcal{Z}_L$ by appending $(s,a,r)$, store $(s,a,r,s',\mathcal{Z}'_L,\mathcal{Z}_L)$ in $\mathcal{B}$, and update $\mathcal{Z}'_L\leftarrow\mathcal{Z}_L$
\STATE Sample a batch of transitions $\{(s,a,r,s',\mathcal{Z}'_L,\mathcal{Z}_L)\}$ from $\mathcal{B}$
\STATE Calculate context variable for actor and critic with $z_a = \mathcal{C}_{w_a}(\mathcal{Z}'_L), z_c = \mathcal{C}_{w_c}(\mathcal{Z}'_L)$, and context variable for calculating the next action and next value $z'_a = \mathcal{C}_{w_a}(\mathcal{Z}_L), z'_c = \mathcal{C}_{w_c}(\mathcal{Z}_L)$
\STATE Calculate perturbed next action by $\tilde{a}\leftarrow \pi_{\theta'}(s',z'_a) + \epsilon$, $\epsilon$ is sampled from a clipped Gaussian.
\STATE Calculate target critic value $y$ and update critic networks:\\
~~$y \leftarrow r + \gamma \min_{i=1,2}Q_{w'_i}(s',\tilde{a},z'_c)$\\
~~$w_i \leftarrow \arg\min_{w_i} \mathbf{MSE}(y,Q_{w_i}(s,a,z_c))$
\STATE Update $w_c$, the context model for critic through \\
~~$w_c \leftarrow \arg\min_{w_c} \mathbf{MSE}(y,Q_{w_i}(s,a,\mathcal{C}_{w_c}(\mathcal{Z}'_L)))$
\STATE Update $\theta$ by the deterministic policy gradient, with learning rate $\eta$:\\
~~$\theta\leftarrow \theta - \eta \nabla_a Q_{w_1}(s,a,z_c)|_{a=\pi_\theta(s,z_a)}\nabla_\theta \pi_\theta (s,z_a)$
\STATE Update $w_a$, the context model for actor according to Eqn.(\ref{eq_upd_cwa})
\STATE Update target networks, with $\tau \in (0,1)$: \\
~~$w'_i \leftarrow \tau {w_i} + (1-\tau)w'_i$; $\theta' \leftarrow \tau {\theta} + (1-\tau)\theta'$
\STATE Break this episode if $c>0$.
\ENDFOR
\end{algorithmic}
\end{algorithm}
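The bookkeeping of the context queue $\mathcal{Z}_L$ in Algorithm~2 (lines 5 and 8) can be sketched with a fixed-length FIFO; the zero-padding initialization follows the pseudo-code, while the dimensions and helper names are illustrative.

```python
# Fixed-length context queue Z_L from Algorithm 2: holds the last L
# transitions (s, a, r), initialized with zeros and updated FIFO-style
# after each environment step. A recurrent context model would consume
# this queue; here we only show the bookkeeping.

from collections import deque

def make_context_queue(length, state_dim, action_dim):
    zero = ([0.0] * state_dim, [0.0] * action_dim, 0.0)
    return deque([zero] * length, maxlen=length)

def push_transition(queue, s, a, r):
    """Append the newest transition; the oldest one falls out automatically."""
    queue.append((s, a, r))
    return queue
```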
\newpage
\section{Reproduction Checklist}
\paragraph{Network Structure}
Our implementation of Context TD3 is mainly based on the code of \citet{fujimoto2018addressing}. The hyper-parameters of TD3 are the same as the authors recommend in their paper. In our Context TD3, we also use 3-layer MLPs for both actor and critic networks (with 256 hidden units).
We find in our experiments that using separate context networks, trained through the gradients of the actor and critic respectively, benefits learning. Details of the ablation study on the network structure are provided in Appendix~\ref{appd_sep_struc}.
\paragraph{Value of {$r_e$}} In our analysis, the value of $r_e$ can be chosen as any sufficiently small number. However, selecting too small a value may lead to over-conservative behavior. In the experiments reported in the main text, we use $r_e=-10$ for the maze environments and $r_e=-1$ for the other environments. Ablation studies on the selection of $r_e$ are provided in Appendix~\ref{appd_r_e}.
\paragraph{Batch Size} In our experiments we follow~\citet{fujimoto2018addressing} in using a mini-batch size of $256$. In PPO, CPO and PPO-Lagrangian, we use a batch size of $1000$ and a mini-batch size of $256$ for the short-horizon games (e.g., Maze, PointGather, both with $T\le32$), so that there are around $1000$ episodes in training. For the long-horizon games where $T\sim 1000$, we collect $10$ trajectories for each episode for better training stability~\cite{schulman2015trust,schulman2017proximal}.
\paragraph{Hardware and Training Time}
We experiment on a server with 8 TITAN X GPUs and 32 Intel(R) E5-2640 CPUs. Experiments take from $0.5$ hours (the maze environment with 0.1M interactions) to $10$ hours (safety-gym with 1M interactions) to run. Training Context TD3 introduces higher computational expense, as additional context models need to be trained.
\section{Environments Details}
\label{appd_env_details}
\subsection{Maze Environment}
In the Maze environment, an agent is asked to navigate to the goal point without stepping into the lava. The input to the agent is the coordinate of its current state, and each action component is limited to $[-1,1]$. We generate four tasks of different difficulty levels. In all experiments the size of the maze is set to $16$, and the episode length is set to $32$, i.e., twice the side length. In each episode, the agent is initialized at the center of the maze. Stepping into the target position, located at the middle of the right edge, yields a $+30$ reward, and staying there continues to yield that reward. A small penalty of $-0.1$ is applied at every other timestep.
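The reward structure described above can be sketched as follows; the coordinates used in the accompanying check are illustrative, not the exact goal location of the benchmark.

```python
# Hedged sketch of the Maze reward: +30 for being at the goal (and for
# staying there), -0.1 per step otherwise. Grid coordinates and the lava
# layout are illustrative assumptions.

def maze_reward(position, goal, step_penalty=-0.1, goal_reward=30.0):
    """Reward for one timestep in the maze."""
    return goal_reward if position == goal else step_penalty
```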
\begin{figure}[h]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level88_env.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level79_env.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level610_env.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level511_env.pdf}
\end{minipage}%
\caption{The four environments with different constraint levels in our experiments. From left to right: Maze-Level-1, Maze-Level-2, Maze-Level-3, Maze-Level-4. The orange regions are lava, which the agent should not step into. In each game, the agent is initialized at the center of the map; the difficulty of finding a solution without violating the constraints increases from Level-1 to Level-4.}
\label{fig_maze_envs}
\end{center}
\vskip -0.2in
\end{figure}
\section{Additional Experiments}
\subsection{Sensitivity to Hyper-Parameter}
\subsubsection{Value of Historical Horizon}
We experiment on the maze environments to show how the proposed method works with different lengths of historical horizon in the context model. Results are shown in Figure~\ref{fig_hist_horizon}. \textbf{Context $1$} means we only include the past state, action, and reward in the computation of context variables, while \textbf{Context $7$} indicates the past $7$ transitions are leveraged in generating the context variables. We find the context model with historical horizon $3$ achieves good performance at all levels of the maze environments.
\begin{figure}[h]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level_8_abla_con.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level_79_abla.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level_610_abla.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level_511_abla.pdf}
\end{minipage}%
\caption{Ablation studies on the selection of different lengths of historical horizon. All corresponding costs are zero under our ET-MDP setting.}
\label{fig_hist_horizon}
\end{center}
\vskip -0.2in
\end{figure}
\subsubsection{Number of Hidden Units in GRUs}
We experiment on the selection of the number of hidden units used in the GRUs. We compare the results with $30$ hidden units (reported in the main text, denoted as \textbf{Context} in Figure~\ref{fig_hidden_units}) with the results with $120$ hidden units (denoted as \textbf{Context Large} in Figure~\ref{fig_hidden_units}). We find that using $30$ hidden units is enough to achieve improved performance while balancing the computational cost, and that using too many hidden units may reduce learning efficiency (in the Humanoid-Not-Fall-v0 environment).
\begin{figure}[h!]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Hopper_large.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Walker_large.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Human_large.pdf}
\end{minipage}%
\caption{Ablation studies on the number of hidden units used in GRU, and comparison on different selection of network structure: shared v.s. separated context model.}
\label{fig_hidden_units}
\end{center}
\vskip -0.2in
\end{figure}
\subsubsection{Value of $r_e$}
\label{appd_r_e}
We show experimental results on the selection of different values of the ending reward $r_e$ in this section. Figure~\ref{fig_r_e} shows the results on the CarGoal, PointGoal and PointGather environments. For both TD3 and Context TD3 working in the ET-MDP, smaller values of $r_e$ result in more conservative policies that attain lower cost and lower primal task reward.
\begin{figure}[h]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/CarGoal1_reward_ablation.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/PointGoal1_reward_ablation.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/PointGather_reward_ablation.pdf}
\end{minipage} \\%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/CarGoal1_cost_ablation.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/PointGoal1_cost_ablation.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/PointGather_cost_ablation.pdf}
\end{minipage}
\caption{Ablation studies on the selection of $r_e$, the value of the absorbing reward. The first row shows the episodic return curves of each method in different environments, while the second row shows the corresponding episodic costs. Using a smaller $r_e$ leads to more conservative behavior, i.e., slightly lower return and lower cost.}
\label{fig_r_e}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{More Environments}
\subsubsection{Other Benchmarks}
In this section we show experiments on various MuJoCo and DeepMind Control benchmarks to demonstrate the superiority of Context TD3 over vanilla TD3 in sample-efficient learning on MDP tasks. Figure~\ref{fig_more_exps} shows the results. In most environments, Context TD3 achieves better asymptotic performance while converging faster. We use the same historical horizon of $7$ in all experiments; a more elaborate hyper-parameter search may result in even better performance.
\begin{figure}[h!]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/ball_incup_catch.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/cartpole-balance_sparse.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/cartpole-balance.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/cartpole-swingup_sparse.pdf}
\end{minipage}\\
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/cartpole-swingup.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/cheetah-run.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/finger-turn_hard.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/hopper-hop.pdf}
\end{minipage} \\
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/hopper-stand.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/walker-walk.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/walker-run.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/walker-stand.pdf}
\end{minipage}\\
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/LunarLanderContinuous.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Hopper_env.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Walker2d_env.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.25\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Humanoid_env.pdf}
\end{minipage}%
\caption{Experiment results on the DeepMind Control Suite. Among the 16 benchmark environments we experimented on, context models outperform vanilla TD3 in 13, showing the superiority of context models in improving learning efficiency.}
\label{fig_more_exps}
\end{center}
\vskip -0.2in
\end{figure}
\newpage
\subsection{Model Structure}
\label{appd_sep_struc}
\subsubsection{GRU v.s. Transformer}
In this section we provide ablation studies on the choice of context models: we compare context models based on GRUs with those based on recent self-attention models~\cite{vaswani2017attention}. The results are shown in Figure~\ref{fig_tf}, where we find that leveraging the transformer models does not result in better performance.
\begin{figure}[h!]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Hopper_TF.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Walker2d_TF.pdf}
\end{minipage}%
\caption{Ablation studies on the selection of context models.}
\label{fig_tf}
\end{center}
\vskip -0.2in
\end{figure}
\subsubsection{Shared v.s. Separated Context Variables}
In the work of \citet{fakoor2019meta}, the context model is trained only through the learning of the critic networks. In contrast, in our experiments we find that training context models separately for the actor and critic results in better performance. \textbf{Context Shared} in Figure~\ref{fig_hidden_units} denotes the results when the context model is shared by the actor and critic, as recommended in the meta-RL literature~\citep{fakoor2019meta}.
\section{Detailed Learning Curves}
\begin{figure}[h!]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level_8.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level_79.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level_610.pdf}
\end{minipage}%
\begin{minipage}[htbp]{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/Maze_Level_511.pdf}
\end{minipage}%
\caption{Learning curves of the Maze environments. As the constraints are binary, any reward gained while the constraints are violated is not taken into consideration; therefore only episodic return curves are shown in the figures, i.e., all of these rewards are gained \textit{without breaking the constraints}.}
\label{fig_main_maze}
\end{center}
\vskip -0.2in
\end{figure}
\section{Counterexample where Tightened Approximation Will Fail}
\label{counterexample}
Although we find that using the tightened approximation in CMDP tasks with a budget constraint achieves satisfactory performance, there are counterexamples where applying such an approximation in the ET-MDP fails to solve the corresponding CMDP. Such a counterexample is not hard to construct; although we may not meet such a situation in practice, we expose the possibility in this section for future exploration. The basic idea is: when the task can only be solved by partially consuming the budget, the approximation will fail to find a proper solution. In the example shown in Figure~\ref{fig_counter}, the agent is initialized at the center of the map and the cost budget is $1$, permitting the agent to cross the constrained region once. The tightened approximation then does not lead to a good policy, as the agent can never learn how to behave outside the rectangular region. For such problems, the primal cost-aware design introduced in Section~\ref{sec_prac_issues} must be used.
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\begin{minipage}[htbp]{0.33\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{fig/CounterEx.pdf}
\end{minipage}%
\caption{Counterexample where the tightened approximation will fail.}
\label{fig_counter}
\end{center}
\vskip -0.2in
\end{figure}
\newpage
\section{Qualitative Results}
We also include demo videos in the supplemental materials, where the performance of agents trained with different algorithms on the PointGoal1-v0 and CarGoal1-v0 safe-navigation tasks is shown qualitatively.
\section{Introduction}
Quantum degenerate gases, such as Bose-Einstein condensates (BECs) \cite{bloch-review} or cold Fermi gases \cite{inguscio-fermi}, trapped in optical lattices, provide a flexible platform for investigating condensed matter models and quantum phase transitions \cite{bloch-qpt}.
It has been proposed to use these systems as
quantum simulators of solid state systems \cite{cirac-chains} and
for implementing quantum information processing (QIP)
\cite{brennen, jaksch, dorner}. Experiments on neutral atoms have
shown some of the ingredients needed for QIP: the preparation of a
Mott insulator state with just one particle per well, which is
used as the initial state of a quantum register \cite{bloch-qpt},
single-qubit rotation \cite{lee07}, and controlled motion of atoms
so as to effect entangling interactions \cite{mandelNature03,lee07}.
A general requirement of QIP is accurate control of a quantum
system. Often this includes control of degrees of freedom other
than the qubit or computational basis, for example the center of mass
motion of an ion or atom for which the spin (internal state) represents the qubit.
One approach to achieving such accurate control is adiabatic
manipulation of the relevant Hamiltonian. Unfortunately
adiabaticity limits the speed of operations. One way to overcome
this difficulty is to use optimal control methods
\cite{oc1,dorner}. Here we show that such techniques could improve
the speed and fidelity of transport of atoms in an optical
lattice.
Recent experiments \cite{anderliniSwap, mandelNature03,trotzky08a}
have shown that quantum gates could be implemented in controllable
optical potentials
by adjusting the overlap between atoms trapped in neighboring
sites of an optical lattice. High fidelity of this dynamic
process could be achieved by adiabatically changing the trapping
potential. This, however, limits the overall gate speed to be much
lower than the trapping frequency \cite{dorner,garcia-2003}. Here we
present a detailed numerical analysis of the transport process
used to effect a two-qubit quantum gate in \cite{anderliniSwap},
which is performed with the controllable double-well optical
potential described in \cite{anderlini-pra} and find that it gives
an accurate description of the evolution measured in the
experiment.
Then we apply optimal control theory
to the transport process of the atoms, both with and without
interactions, to show how to increase the speed of the gate. The
success of this method in this specific problem demonstrates the
promise of optimal control for coherent manipulation of a diverse
class of quantum systems.
\section{The experiment} \label{sec:model}
A two-qubit quantum gate with neutral atoms can be realized in
optical lattices through a controlled interaction-induced evolution of the wavefunction that depends on the states of the two atoms \cite{brennen,jaksch}.
Because atoms in their electronic ground states generally have short-range interactions, in order to use these contact interactions to produce entanglement, the atomic wavefunctions must be made to overlap.
Once the interaction has taken place for a fixed time, the two atoms
can be separated again thus finishing the gate. In this paper we
consider the control of such motion in a specific setup; however
our theory can be applied to more general systems.
\subsection{The Double-Well Lattice} \label{sec:doublewell}
Neutral $^{87}$Rb atoms are loaded into the sites of a 3D optical
lattice obtained by superimposing a 2D optical lattice of
double-wells \cite{anderlini-pra} in the horizontal plane and an
independent 1D optical lattice in the vertical direction. The
horizontal lattice has a unit cell that can be dynamically
transformed between single-well and double-well configurations.
The horizontal potential experienced by the atoms is
\cite{anderliniJPhysB}:
\begin{eqnarray}\label{eq:potential2D}
V(x,y)&=&- V_0\left[ \cos^2\left(\frac{\beta}{2}\right)(\cos^2k y+\cos^2k
x)+
\right . \nonumber\\
&+&\left . \sin^2\left(\frac{\beta}{2}\right)(\cos k y+ \cos(k
x-\theta))^2\right]
\end{eqnarray}
where $x$ and $y$ are the spatial coordinates,
$k=2\pi/\lambda$ is the laser wave-vector and $\lambda$ is the laser wavelength. The potential
\eqref{eq:potential2D} depends on three parameters:
\begin{enumerate}
\item[(i)] the strength $V_0$ of the potential wells; \item[(ii)]
the ratio $\tan\left(\frac{\beta}{2}\right)$ of vertical to horizontal
electric field components;
\item[(iii)] the phase shift $\theta$ between vertical and horizontal light components.
\end{enumerate}
The angle $\beta$ determines the height of the barrier between
adjacent double-well sites: by changing $\beta/\pi$ from 0 to
$0.5$ the potential changes from a symmetric double-well configuration, with a spacing of $\lambda/2$ ($\lambda/2$ lattice), to a single-well configuration, with a spacing of $\lambda$ ($\lambda$ lattice). By changing $\beta$ and $\theta$ together
one varies the energy offset (tilt) of a well with respect to the
neighboring one. The tilt of the double well is zero for $\theta
/\pi$ = $\pm 0.5$, while it is maximum (with a value depending on
$\beta$) for $\theta /\pi$ = 0 or $\pm\,1$.
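For reference, the potential of Eq.~(\ref{eq:potential2D}) can be evaluated numerically as follows; the functional form follows the equation in the text, while the parameter values used in the accompanying checks are illustrative only.

```python
# Numerical sketch of the double-well potential of Eq. (1). For beta = 0 the
# expression reduces to the symmetric lambda/2 lattice term; for beta = pi
# only the single-well (lambda lattice) term survives.

import math

def double_well_potential(x, y, V0, beta, theta, wavelength):
    k = 2.0 * math.pi / wavelength  # laser wave-vector
    in_plane = math.cos(beta / 2.0) ** 2 * (
        math.cos(k * y) ** 2 + math.cos(k * x) ** 2)
    out_of_plane = math.sin(beta / 2.0) ** 2 * (
        math.cos(k * y) + math.cos(k * x - theta)) ** 2
    return -V0 * (in_plane + out_of_plane)
```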
To effect a quantum gate, one varies the three parameters in time so as to move atoms
occupying adjacent wells into the same well, allowing
them to interact and finally returning them to their original
positions.
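A control sequence of this kind can be sketched, at its simplest, as a linear ramp of the three parameters $(V_0, \beta, \theta)$ between two configurations; the endpoints and the linear interpolation below are illustrative assumptions (the experimental ramps are more elaborate, and the optimal-control approach discussed later would replace them entirely).

```python
# Illustrative linear ramp of the lattice control parameters (V0, beta,
# theta) between an initial and a final configuration, sampled at n_steps
# points (endpoints included).

def ramp(initial, final, n_steps):
    """Linearly interpolate each parameter over n_steps points."""
    return [
        tuple(a + (b - a) * i / (n_steps - 1) for a, b in zip(initial, final))
        for i in range(n_steps)
    ]
```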
In Fig.~\ref{fig:potpsi} we show four snapshots of the cross-section
of the optical potential along the direction of the double wells
($x$), and of the single-particle wave functions of the two atoms
during a particular transport sequence. In the initial configuration
each atom is prepared in the ground state of separate wells so that
the properly symmetrized initial state is:
\begin{equation} \label{eq:psiin}
\ket{\Psi_{\rm in}} =\frac{1}{\sqrt 2}\left(
\ket{\psi_L}_1\ket{\psi_R}_2 + \ket{\psi_R}_1\ket{\psi_L}_2\right)
\end{equation}
where $\psi_L$ and $\psi_R$ are wavefunctions localized in the left
and right well, respectively, and 1 and 2 are the labels of the two
(indistinguishable) atoms (see Fig.~\ref{fig:potpsi}a). For quantum gate operation, we would also include the internal state of the atoms, but here we concentrate only on the external state (center-of-mass motion). $\psi_L$ and
$\psi_R$ are linear combinations of the lowest symmetric and
antisymmetric energy eigenfunctions of the single-particle
potential.
In Eq.~\eqref{eq:psiin} and throughout the text we use the convention that single- (two-) particle states are labeled with lower- (upper-) case Greek letters.
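The symmetrization in Eq.~\eqref{eq:psiin} can be illustrated in a small truncated basis: representing $\psi_L$ and $\psi_R$ as orthonormal coefficient vectors, the two-particle state is the normalized symmetric combination of the tensor products. The two-dimensional basis below is an illustrative truncation, not the actual band structure.

```python
# Sketch of the symmetrized initial state of Eq. (2), assuming psi_L and
# psi_R are orthonormal so that the 1/sqrt(2) prefactor normalizes the sum.

import math

def tensor(u, v):
    """Tensor (Kronecker) product of two coefficient vectors."""
    return [ui * vj for ui in u for vj in v]

def symmetrized_state(psi_l, psi_r):
    """(|L>|R> + |R>|L>) / sqrt(2) in the product basis."""
    prod_lr = tensor(psi_l, psi_r)
    prod_rl = tensor(psi_r, psi_l)
    return [(a + b) / math.sqrt(2.0) for a, b in zip(prod_lr, prod_rl)]
```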
\begin{figure}[htbp]
\centering
\includegraphics*[scale=0.4]{1}
\caption{(Color online) a) Initial configuration: the potential (solid lines)
is in the $\lambda/2$ configuration and the single-particle wave functions
are localized in the left (dashed) or right (dotted) well.
b)--c)
Intermediate snapshots obtained by
lowering the central barrier and tilting the potential. d) End of the
process: the particle initially in the right well ends in the ground state of the
single well, while the particle initially in the left well ends
in the first excited state. }
\label{fig:potpsi}
\end{figure}
The potential is changed by lowering the barrier and at the same
time lowering the right well with respect to the left one, Fig.
\ref{fig:potpsi}b)-c). The atom initially in the right well remains
in the ground state, evolving into the lowest state $\phi_0$ of the
final potential, while the atom initially in the left well evolves
into the first excited state $\phi_1$, Fig. \ref{fig:potpsi}d). When
the two atoms are in the same potential well they interact through
the usual contact interaction, which can
be used to generate the entangling operation needed to realize a two-qubit quantum gate \cite{jaksch,dorner}.
\subsection{Experimental Procedure}
The experimental characterization of the transport process is
accomplished by performing the potential transformation depicted in
Fig.~\ref{fig:potpsi} with atomic samples loaded either only in the
left sites or only in the right sites of the double wells
\cite{anderliniJPhysB}.
Briefly, Bose-Einstein condensates of $^{87}\textrm{Rb}$ atoms
with $4\cdot 10^3 \leq N_{\textrm{BEC}} \leq 2\cdot 10^4$ are
loaded in the sites of the $\lambda$-lattice with an exponential
ramp of 200 ms duration. This loading populates only the ground
band of the optical potential with mostly one atom per lattice
site \cite{SebbyPRL07}. Then the potential is transformed to the
$\lambda/2$-lattice in such a way that the atoms eventually occupy
either only the right sites or only the left sites of the double
wells \cite{anderliniJPhysB,lee07}. Starting from this initialized
state, we perform the transport process illustrated in Fig.
\ref{fig:potpsi}a)--d). At the end of the process we measure the
occupation of the lattice bands. To this purpose, we map the
quasi-momentum of atoms occupying different vibrational levels of
the optical potential onto real momenta lying within different
Brillouin zones \cite{kastberg95,greiner2001}. This is achieved by
switching off the optical potential in 500 $\mu$s and acquiring an
absorption image of the sample after a 13 ms time-of-flight. In
this way atoms occupying different vibrational levels appear
spatially separated, allowing us to measure the amount of
population in each vibrational state.
The comparison between these measurements and the theoretical model
requires an accurate determination of the evolution of the
parameters $V_0$, $\beta$ and $\theta$ characterizing the optical
lattice during the experimental sequences. The parameter $V_0$,
which corresponds to the depth of the potential when it is set in
the $\lambda/2$ configuration, is measured by pulsing the
$\lambda/2$-lattice and observing the resulting momentum
distribution in time of flight \cite{Ovchinnikov1998}. The
parameters $\beta$ and $\theta$, which determine the shape of the
double-well potential and are controlled using two electro-optic
modulators (EOMs), are determined from measurements of the
polarization of the laser beams after the EOMs as a function of
their respective control voltages \cite{anderlini-pra}.
We perform two series of experimental sequences in order to study
the properties of the atomic transport as a function of
the duration of the process and of the energy tilt between the two
potential wells during the merge. In a first series of
measurements the sequence involves converting
the lattice from the double-well to the single-well
configuration by changing $\beta$, i.e. rotating the polarization of
the input light with a linear ramp, while keeping constant the
light intensity and the setting of the electro-optic modulator
EOM$\theta$ that controls the phase shift $\theta$.
This sequence is repeated for different durations of the linear
ramp from $T = 0.01$ ms to 1.01 ms. In a second series of
measurements we consider the dependence of the transport on the
tilt of the double-well potential during the merge. We perform the
lattice transformation using always the same duration of $T = 0.5$
ms, the same intensity of the light beam and the same ramp for
changing the polarization angle $\beta$, while the setting of EOM$\theta$ is kept constant in
time during a sequence. We then repeat the sequence for different
settings of EOM$\theta$. The time dependences of
the three lattice parameters $V_0$, $\beta$ and $\theta$ for
measurements of the first series and of the second series are
shown in Fig. \ref{fig:pulse}{\sf a} and Fig. \ref{fig:pulse}{\sf
b}, respectively. Fig. \ref{fig:pulse}{\sf b} shows the evolution
of the parameter $\theta$ for two different settings of
EOM$\theta$. The potential parameters are determined using
our calibration of the lattice setup, taking into account effects
such as different losses on the optical elements for different
polarizations of the lattice beams and the dependence of the
optical potential on the local polarization of the light
\cite{lee07}. These effects are responsible for the change of the
potential depth $V_0$ and of the angle $\theta$ during the
sequence, even though neither the intensity of the
light nor the setting of EOM$\theta$ is actively changed.
\begin{figure}[htbp]
\centering
\includegraphics*[scale=0.5]{2}
\caption{Two possible sequences, {\sf a}
(left) and {\sf b} (right),
employed to shift the atoms from a double- to a single-well configuration.
For each sequence we show
the time-dependence of $V_0$, $\beta$, $\theta$ for a sequence duration $T$. For sequence {\sf b} we show the time dependence of $\theta$ for two settings of EOM$\theta$: $-0.42\; \pi$ (solid) and $-0.48\; \pi$ (dashed).}
\label{fig:pulse}
\end{figure}
\section{Theoretical model} \label{sec:nint}
Here we describe the theoretical methods that we implement for
investigating the dynamics in the system described above, starting
with the case of non-interacting particles. Then, we
consider the experimental realization of the merging of adjacent
lattice sites into a single site shown in Fig. \ref{fig:potpsi}
and we compare the results obtained in the experiment with the
expectations based on our theoretical model. This stage represents a useful
benchmark to evaluate the reliability of the numerical model as
well as for gaining insight into the details of the optical
potential experienced by the atoms. Finally, we present the technique for optimizing the transport sequence, and we
show how we can achieve a significantly higher fidelity at fixed operation time for the atomic motion than by using smooth sequences based on adiabatic
evolution.
\subsection{Theoretical Framework}
We consider the 1D problem restricted to the $x$ axis by
assuming that the optical potential can be separated along the
three spatial directions, allowing us to express the atomic
wavefunctions as a product of three independent terms.
We consider the harmonic approximation of the
potential in the $y$
and $z$ directions, having trap frequencies $\nu_y$ and $\nu_z$
respectively, that can be calculated as shown in
\cite{spielman06} and we assume that along $y$ and $z$ the
atoms always occupy the lowest vibrational state. This restriction
does not limit the study of dynamic processes involving
low-energy states of the double-well potential, since the potential can be
chosen to have non-degenerate vibration frequencies along all
three directions, with the lowest frequency always along $x$. We calculate the
eigenstates of the system along the $x$ direction by solving the
eigenvalue equation using the finite difference method
\cite{thomas}. For the time evolution we consider the integration
of the time dependent Schr\"{o}dinger equation using the
Crank-Nicolson method \cite{cn}. This method has the advantage of being unconditionally stable, and the error in the results scales quadratically with the space-time grid spacing on which the Schr\"{o}dinger equation is solved. The relative error of the data presented is always less than $10^{-3}$. In Appendix \ref{sec:numerical}
we present a more detailed description of our numerical methods.
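The Crank-Nicolson step can be sketched numerically. The following is an illustrative Python implementation, not the production code used for the simulations: it propagates a 1D wave function on a finite-difference grid in dimensionless units ($\hbar = m = 1$), with a harmonic potential standing in for the actual optical-lattice potential.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

# Illustrative 1D Crank-Nicolson propagator on a finite-difference grid
# (dimensionless units, hbar = m = 1; harmonic potential as a stand-in
# for the actual optical-lattice potential).
N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2

# Kinetic part: second-order central differences.
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
H = -0.5 * lap + diags(V)

dt = 1e-3
A = (identity(N) + 0.5j * dt * H).tocsc()   # implicit half-step
B = (identity(N) - 0.5j * dt * H).tocsc()   # explicit half-step
solver = splu(A)

# Ground state of the harmonic oscillator as the initial wave function.
psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(1000):                        # propagate for t = 1
    psi = solver.solve(B @ psi)

norm = np.sum(np.abs(psi)**2) * dx           # CN is unitary: norm stays 1
```

Because the Crank-Nicolson update is a Cayley transform of the Hamiltonian, the norm of the wave function is conserved at every step, which is the stability property mentioned above.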
\subsection{Comparison to experimental results}
In this section we present the theoretical analysis of the
transport processes described above and we discuss the agreement
between the model and the experimental measurements. We start by
considering the time evolution of the Hamiltonian spectrum during
the two sequences {\sf a} and {\sf b} shown in Fig.
\ref{fig:spectrum}.
\begin{figure}[htbp]
\includegraphics*[scale=0.5]{3}
\caption{Instantaneous spectrum of the 1D Hamiltonian for
sequences {\sf a} and {\sf b} for EOM$\theta$: $-0.42\; \pi$.}
\label{fig:spectrum}
\end{figure}
At time $t=0$ the spectrum consists of nearly degenerate, almost
equally spaced doublets of harmonic oscillator states, while at time $t=T$ the levels are similar
to those of a single harmonic oscillator\footnote{We have verified that
the results for the 1D spectrum are in good agreement with full
calculations in 2D (restricted to states with vibrational
excitation along the $x$ direction). Additional energy levels are
present in the 2D spectrum, associated with states with
vibrational excitation along $y$. However, those states can be
neglected for studying the dynamical process considered here since
their energy is always higher than the three lowest states of the
1D spectrum.}.
Fig.~\ref{fig:spectrum} shows the time evolution of the eigenvalues of the single-particle Hamiltonian.
The atoms initially prepared in the two local ground states
in the right and left wells ($\psi_R$ and $\psi_L$) evolve into the
instantaneous eigenstates ending in the ground and first excited
state of the final configuration, respectively. This approach
requires the sequence to be performed slowly with respect to the
timescale associated with the gaps between the relevant energy
levels. The optimal ``speed'' in the parameter space can be
calculated using the Landau-Zener theory for avoided level
crossings.
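As a concrete illustration of this estimate, the Landau-Zener diabatic transition probability $P = \exp(-2\pi a^2/\hbar v)$, with $2a$ the minimum gap and $v$ the sweep rate of the diabatic energy difference, can be evaluated for representative numbers. The gap and sweep values in the sketch below are assumptions chosen for illustration, not the calibrated lattice parameters.

```python
import numpy as np

h = 6.62607015e-34       # Planck constant (J s)
hbar = h / (2 * np.pi)

def lz_diabatic_probability(gap_hz, sweep_hz_per_s):
    """Landau-Zener probability P = exp(-2 pi a^2 / (hbar v)) of a
    diabatic transition, with half-gap a = h*gap_hz/2 and sweep rate
    v = h*sweep_hz_per_s of the diabatic energy difference."""
    a = h * gap_hz / 2.0
    v = h * sweep_hz_per_s
    return np.exp(-2.0 * np.pi * a**2 / (hbar * v))

# Illustrative numbers (assumed, not the calibrated lattice values):
# a 5 kHz minimum gap, swept through a 100 kHz range in 0.5 ms.
p = lz_diabatic_probability(5e3, 100e3 / 0.5e-3)
```

Slower sweeps or larger gaps suppress the diabatic probability exponentially, which sets the timescale over which the sequence must remain adiabatic.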
For gaining quantitative insight into the properties of the
transport we perform numerical simulations for the sequences used
in the experiments, also taking into account possible deviations
of the parameters from the experimental calibrations, and we
compare the results with the experimental measurements.
The relevant quantities for our analysis will be the
overlap $f^\alpha_n$ of the energy eigenstates $\phi_{n}$ of the
final potential with the evolved state $\psi_\alpha$ where $\alpha=L,R$ indicates the initial well occupation:
\begin{equation}
f_n^\alpha=\left|\bra{\phi_n} U(T)\ket{ \psi_{\alpha}} \right |^2
\end{equation}
where the operator $U(T)$ is the single-particle time-evolution operator from time $t=0$ to time $t=T$.
In the experiment $f^\alpha_n$ can be measured as
the population of each energy level at the end of the process.
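Numerically, the populations $f_n^\alpha$ are obtained by projecting the evolved state onto eigenstates of the final potential computed with the finite difference method. The sketch below works in dimensionless units with a harmonic stand-in for the merged well and a fabricated final state; it is an illustration of the projection step, not the actual simulation.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Sketch: populations f_n = |<phi_n|psi(T)>|^2 from projecting an evolved
# state onto eigenstates of the final potential, computed with a
# finite-difference eigensolver (dimensionless units, hbar = m = 1).
N, L = 400, 16.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V_final = 0.5 * x**2                 # harmonic stand-in for the merged well

# Tridiagonal finite-difference Hamiltonian H = -0.5 d^2/dx^2 + V.
diag = 1.0 / dx**2 + V_final
off = -0.5 / dx**2 * np.ones(N - 1)
energies, states = eigh_tridiagonal(diag, off, select='i',
                                    select_range=(0, 3))
states = states / np.sqrt(dx)        # normalize: sum |phi_n|^2 dx = 1

# Stand-in for the state at the end of the sequence: an equal
# superposition of the two lowest eigenstates.
psi_T = (states[:, 0] + states[:, 1]) / np.sqrt(2)

f = np.abs(states.T @ psi_T * dx)**2  # here f_0 = f_1 = 0.5, f_2 = f_3 = 0
```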
We now consider how the atoms evolve when the parameters
change according to sequence {\sf a} of Fig.~\ref{fig:pulse} as a
function of the total time $T$, focusing on atoms starting in
$\ket{\psi_L}$\footnote{Both in the experiments and in the
simulations the evolution of the atom initially in the right well,
i.e. in state $\ket{\psi_R}$, shows a weaker dependence on the
properties of the sequence and is less instructive. For instance, in
the simulations for $T=0.5$ ms the population in the ground state
$f_0^R$ is of order of $99 \%$ for a broad range of parameters.}.
\begin{figure}[htbp]
\hspace*{2.2in}\mbox{}\\\includegraphics*[scale=0.5]{4a}\\
\hspace*{2.2in}\mbox{}\\\includegraphics*[scale=0.5]{4b}\\
\hspace*{2.2in}\mbox{}\\\includegraphics*[scale=0.5]{4c}
\caption{Population of the first three
eigenstates of the Hamiltonian, ground (top),
first excited (middle), second excited (bottom), at the end of sequence {\sf a} as
a function of the sequence duration $T$.
The experimental data (symbols) are compared to the four sets of numerical data
(lines) obtained for $\theta/\pi = -0.454 +\Delta
\theta_a/\pi$, with $\Delta
\theta_a/\pi$ = 0 (dot-dash), -0.02 (dash), -0.03 (solid) and
-0.04 (dot), while -0.454 is the nominal value
of $\theta/\pi$ expected from the calibrations.
}
\label{fig:population1}
\end{figure}
In Fig.~\ref{fig:population1} we show the final population of the
ground $f_0^L$, first $f_1^L$ and second excited state $f_2^L$ measured in
the experiments and calculated for four values of $\theta$ which
differ from the one of Fig.~\ref{fig:pulse} {\sf a} by a constant
offset $\Delta \theta_a$ \footnote{We do not consider variations in
$V_0$ and $\beta$ due to the small dependence of the transport
process on those parameters within the range associated with the
accuracy of our calibrations.}. The results obtained by the model are
in reasonable agreement with the experimental observations; we find best
quantitative matching for $\Delta \theta_a/\pi$ = -0.03, for which
the rms deviation between model and experiment is reduced from 0.13 (at $\Delta \theta_a = 0$) to
0.08.
Now we consider the sequence {\sf b} of Fig.
\ref{fig:pulse}, performed for different ending values $\theta_b$ of
the parameter $\theta$ around $-0.47 \pi$. As shown in Fig.~\ref{fig:population2}, both the experiment and
the model show a strong dependence on $\theta_b$ for the transport
of the atom starting in the left site of the double well. The
transport into the first excited state has a maximum theoretical
fidelity of 0.95 for $\theta_b/\pi$ = -0.474. Less negative values
of $\theta_b$, i.e. increasing tilts, lead to a decrease of fidelity
due to the increase in the fraction of population ending in the
second excited state. Values of $\theta_b/\pi$ closer to -0.5, i.e.
more symmetric configurations of the double well, lead to decrease
of fidelity associated with larger fractions of population ending in
the ground state. Also in this case the experimental data and the
theoretical model are in satisfactory agreement. For these data the
deviation between theory and experiment is more sensitive to the
value of the phase shift $\theta_b$. We find best agreement by
shifting the value determined from the calibration by an offset
$\Delta \theta_b/\pi$ = -0.015, which reduces the rms deviation from
0.4 to 0.15. The axis for the experimental data in Fig.~\ref{fig:population2} has been corrected by the offset $\Delta \theta_b$.
\begin{figure}[t]
\centering
\includegraphics*[scale=0.45]{5}
\caption{
Population (overlap absolute squared) of the first three
eigenstates of the Hamiltonian at the end of sequence {\sf b}
as a function of $\theta_b$.
The duration of the sequence is fixed
to $T = 0.5$ ms. The experimental data (symbols) are in good agreement with
the numerical data (lines). In this graph the $x$ axis for the experimental data
has been shifted by an offset of -0.015 with respect to the initial
calibration.
}
\label{fig:population2}
\end{figure}
Thus, while showing the reliability of the model in describing the
dynamic process, the comparison between theoretical and experimental
results also allows one to refine the calibration of the parameters
characterizing the optical potential.
Finally we find that adding an offset of $\Delta \theta/\pi$ =
-0.016 to the $\theta$ calibration brings the data from both
sequences to a good agreement with the theory and reduces the rms
deviation from 0.19 to 0.11. This is three times larger than the uncertainty of the offset from our EOM calibration but is still consistent with measurements of the lattice tilt from \cite{SebbyPRL07}. This discrepancy might be explained by the birefringence in the vacuum cell windows, which is not accounted for in our model. Inclusion of this offset should improve both the predictive power
of the model and the experimental optimization of the collisional
gate based on the numerical technique described below.
\section{Optimized transport}
In this section we employ optimal control theory to obtain fast and
high-fidelity gates. Our aim is to find a temporal dependence of the
control parameters $V_0(t)$, $\beta(t)$, $\theta(t)$ that improves
the fidelity even for a shorter sequence duration, when the
adiabatic sequences presented above yield a poor fidelity. Quantum
optimal control techniques have been successfully employed in a
variety of fields: molecular dynamics \cite{oc1, oc2, oc3},
dynamics of ultracold atoms in optical lattices \cite{tannor-2002,vaucher,romero}, implementation of
quantum gates \cite{dorner,monta-oc}.
We use the Krotov algorithm \cite{krotov} as the optimization
procedure. The objective is to find the optimal shapes of the control
parameter sequences that maximize the overlap (fidelity) between the
evolved initial wave function and a target wave function. The
initial and target wave functions are fixed a priori.
The algorithm also works for more than
one particle. The method consists of iteratively modifying the shape
of the control parameters according to a ``steepest descent method''
in the space of functions (for more details see \cite{dorner}). The
method requires evolving each particle's wave function and an
auxiliary wave function backward and forward in time according to the
Schr\"odinger equations. In our simulations we use the
Crank-Nicolson scheme to realize this step as described in
Appendix~\ref{sec:numerical}.
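The forward/backward structure of the update can be illustrated on a toy model. The sketch below is a simple first-order gradient ascent (a GRAPE-like stand-in for the actual Krotov update, which we do not reproduce here) on a two-level system $H(t) = \epsilon(t)\sigma_z + \Omega\sigma_x$; all parameters are illustrative.

```python
import numpy as np

# Toy gradient-ascent pulse optimization on a two-level model, a
# GRAPE-like stand-in for the Krotov update: the state is propagated
# forward, an auxiliary costate backward, and the control eps(t) is
# updated from their overlap.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Omega, dt, n_steps = 0.5, 0.04, 50

def step(eps_k):
    # exact 2x2 propagator exp(-i H dt) via eigendecomposition
    w, v = np.linalg.eigh(eps_k * sz + Omega * sx)
    return v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T

psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)

def evolve(eps):
    # forward propagation, keeping all intermediate states
    psis = [psi0]
    for e in eps:
        psis.append(step(e) @ psis[-1])
    return psis

eps = np.linspace(-2, 2, n_steps)      # initial guess: linear sweep
f_init = np.abs(np.vdot(target, evolve(eps)[-1]))**2

lr = 0.2
for _ in range(200):
    psis = evolve(eps)
    c = np.vdot(target, psis[-1])
    chis = [c * target]                # auxiliary costate, propagated backward
    for e in reversed(eps):
        chis.insert(0, step(e).conj().T @ chis[0])
    # first-order gradient of the fidelity |c|^2 with respect to eps[k]
    grad = np.array([2 * np.real(np.vdot(chis[k + 1],
                                         -1j * dt * (sz @ psis[k + 1])))
                     for k in range(n_steps)])
    eps = eps + lr * grad

fidelity = np.abs(np.vdot(target, evolve(eps)[-1]))**2  # improved over f_init
```

The essential ingredients of the full algorithm appear in miniature: one forward evolution of the wave function, one backward evolution of the costate, and a pointwise control update built from their overlap at each time step.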
\subsection{Non-interacting case}
We optimize the gate for $T=0.15$ ms\footnote{We chose this time in
order to show the benefits of the optimization procedure for a
sequence duration which cannot provide a good fidelity with smooth
parameter ramps based on adiabatic evolution. While a shorter
duration could be chosen in principle, this choice can be easily
experimentally implemented with no major changes in the present
experimental apparatus, allowing future continuation of our studies
on this subject. Increasing the total time should improve the best fidelity.} choosing as a starting point for the optimization a sequence similar to Fig.~\ref{fig:pulse}{\sf b} where $\theta$ is for simplicity taken constant to the final value $\theta_b/\pi=-0.474$. Without optimization the fidelities for the atom
initially in the left and right well are $f^L_1 = 0.57$ and $f_0^R =
0.69$, respectively. The infidelities are shown in
Fig.~\ref{fig:infid-opt} as a function of the number of optimization
steps: the optimization algorithm is proven to yield a monotonic
increase in fidelity \cite{oc1}; however, it does not guarantee
reaching the 100\% value. The results for the two atoms give a fidelity
above $98.7 \%$.
\begin{figure}[htbp]
\centering
\includegraphics*[scale=0.3]{6}
\caption{Infidelities ($1-f^\alpha_n$) for the atom initially in the left
($\alpha=L, n=1$, squares) and in the right well ($\alpha=R, n=0$, plus) as a function of the optimization step.
}
\label{fig:infid-opt}
\end{figure}
The resulting optimized parameter sequences are shown in
Fig.~\ref{fig:pulse-opt} and compared to the original sequence without
optimization.
We find that the optimized sequence for the potential depth $V_0$
differs negligibly from the initial guess. In principle, the
algorithm could achieve still higher single-particle fidelities from
different starting points.
\begin{figure}
\centering
\includegraphics*[scale=0.5]{7}
\caption{Initial (dotted) and optimized waveforms (solid)
$\beta(t)$ and $\theta(t)$ as a function of time for a sequence
of $T=0.15$ ms.}
\label{fig:pulse-opt}
\end{figure}
In Fig.~\ref{fig:psi-evol} we show the square absolute value of the wave functions of
the two atoms as a function of time, the 1D potential time
dependence and the projections of the initially left-well state onto the lowest four instantaneous energy eigenstates $\ket{\phi_n(t)}$:
\begin{eqnarray}
p_n(t) = \left | \bra{\phi_n(t)} U(t)\ket{\psi_L} \right|^2.
\end{eqnarray}
Notice that $p_n(T)=f_n^L$. As can be easily seen,
the optimal time evolution is much less smooth than the adiabatic one
as it takes advantage of quantum interference between non-adiabatic excitation paths
to obtain better results.
\begin{figure*}[t!]
\hspace{-1cm}
\hspace{-1cm}
\includegraphics*[scale=0.17,angle=270]{8a}
\hspace{-1cm}
\includegraphics*[scale=0.17,angle=270]{8b}
\hspace{-1cm}
\includegraphics*[scale=0.17,angle=270]{8c}
\hspace{-0.5cm}
\includegraphics*[scale=0.28,angle=270, viewport=45 30 460 520]{8d}\\
\vspace{1 cm}
\hspace{-1cm}
\hspace{-1cm}
\includegraphics*[scale=0.17,angle=270]{8e}
\hspace{-1cm}
\includegraphics*[scale=0.17,angle=270]{8f}
\hspace{-1cm}
\includegraphics*[scale=0.17,angle=270]{8g}
\hspace{-0.5cm}
\includegraphics*[scale=0.28,angle=270, viewport=45 30 460 520]{8h}
\caption{(Color online) Comparison between the evolution of the atoms with and
without optimal control. Top (left to right): non optimized case, absolute square value of the wave functions
as a function of time (atoms initially in the left and right well respectively); 1D trapping potential
as a function of time; projections $p_n(t)$ at time $t$ of the state initially in the left well onto the instantaneous eigenstates $\ket{\phi_n(t)}$ with $n=0$ (blue solid), $n=1$ (red dashed), $n=2$ (green dotted), $n=3$ (magenta dot-dashed). Bottom: analogous plots for the optimized case.}
\label{fig:psi-evol}
\end{figure*}
\subsection{Interaction effects}\label{sec:interactions}
Up to now we have considered only the
single-particle evolution of the system, i.e. without including
any interaction between the particles. This approximation is valid
in our transport sequence as long as the two wave functions
in nearby wells do not overlap. When the two
particles overlap in the same well we must take into account
interactions, and we model them with an effective 1D contact
potential:
\begin{equation}
V_{\rm{int}}(|x_1-x_2|) = g_{1D}\delta(x_1-x_2)
\end{equation}
where $x_i$ are the coordinates of the two atoms and $g_{1D}$ is an
effective $1D$ coupling strength \cite{olshanii} expressed by
$g_{1D} = 2 a_s h \sqrt{\nu_y \nu_z}$, where $a_s$ is the scattering
length for $^{87}\mathrm{Rb}$ atoms and $h$ is the Planck constant.
The spectrum is modified by the interactions: the state with one atom in each well is lower by $\sim 3$ kHz than the doubly-occupied states.
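For orientation, $g_{1D}$ and the resulting on-site interaction shift can be evaluated with representative numbers. The trap frequencies in the sketch below are assumptions chosen for illustration, not the calibrated experimental values.

```python
import numpy as np

h = 6.62607015e-34          # Planck constant (J s)
hbar = h / (2 * np.pi)
m = 1.443e-25               # mass of 87Rb (kg)
a_s = 5.3e-9                # s-wave scattering length of 87Rb (m)

# Assumed (illustrative) trap frequencies, not the calibrated values:
nu_y = nu_z = 40e3          # transverse frequencies (Hz)
nu_x = 20e3                 # axial frequency (Hz)

g_1d = 2 * a_s * h * np.sqrt(nu_y * nu_z)            # J m

# On-site shift for two atoms in the axial ground state (Gaussian of
# width sigma): E_int = g_1d * integral |phi_0|^4 dx
#                     = g_1d / (sqrt(2 pi) sigma)
sigma = np.sqrt(hbar / (m * 2 * np.pi * nu_x))
shift_hz = g_1d / (np.sqrt(2 * np.pi) * sigma) / h
```

With these assumed frequencies the shift comes out at a few kHz, the same order as the value quoted above.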
As in the case without interactions we start with each atom
localized in a separate well.
Notice that we are considering wave-functions that are symmetric
under the exchange of the coordinates of the two particles.
\begin{figure}[htbp]
\centering
{\sf a}\hspace{-4mm}\includegraphics*[scale=0.17,angle=270]{9a}
{\sf b}\hspace{-4mm}\includegraphics*[scale=0.17,angle=270]{9b}\\
{\sf c}\hspace{-4mm}\includegraphics*[scale=0.17,angle=270]{9c}
{\sf d}\hspace{-4mm}\includegraphics*[scale=0.17,angle=270]{9d}
\caption{(Color online) Absolute square values of the relevant
symmetric wave-functions in the coordinates of the two atoms: {\sf a}
Initial wave function in the state $\ket{\tilde\Psi_{\rm in}}$
with one atom in the left well and one atom in the right well; {\sf b}
wave function of the target state $\ket{\tilde\Phi_{\rm tg}}$. {\sf c}
evolved wave-function using the non-optimized sequences of Fig.~\ref{fig:pulse}{\sf b}
giving a fidelity $F_{\rm int}=0.22$ (for $T=150\mu$s); {\sf d} evolved wave-function using the
optimized sequences of Fig.~\ref{fig:pulse-opt} giving a fidelity $F_{\rm int}=0.93$.}
\label{fig:wave-int}
\end{figure}
We consider the two-particle fidelity:
\begin{equation}
F_{\rm int}=\left|\bra{\tilde\Phi_{\mathrm{tg}}} U_{\textrm{int}} (T) \ket{\tilde\Psi_{\mathrm{in}}}\right|^2
\end{equation}
where $U_{\textrm{int}} (T) $ is the two-particle evolution
operator for the Hamiltonian of the two atoms, which includes
interactions. $ \ket{\tilde\Psi_{\mathrm{in}}}$ is an eigenstate of the
two-particle Hamiltonian at time $t=0$, corresponding in the limit
$g_{1D}\to 0$ to the symmetrized product of the
single-particle wavefunctions localized in each well (see
Eq.\eqref{eq:psiin}); the target state
$\ket{\tilde\Phi_{\mathrm{tg}}} $ is an eigenstate of the
two-particle Hamiltonian at time $t=T$ which, in the
limit of vanishing interactions, corresponds to the state
\begin{equation} \label{eq:psitg}
\ket{\Phi_{\mathrm{tg}}} =\frac{1}{\sqrt 2}\left(
\ket{\phi_0}_1\ket{\phi_1}_2 + \ket{\phi_1}_1\ket{\phi_0}_2\right)
\end{equation}
The square modulus of $\ket{\tilde \Psi_{\mathrm{in}}}$ and $
\ket{\tilde\Phi_{\mathrm{tg}}}$, in the two-atom coordinate
representation, are shown in Fig.~\ref{fig:wave-int}{\sf a-b}. In order to make a comparison between the interacting and non-interacting cases we define a two-particle
fidelity also in the non-interacting case:
\begin{equation}
F=\left|\bra{\Phi_{\mathrm{tg}}} U_1 (T)\otimes U_2(T) \ket{\Psi_{\mathrm{in}}}\right|^2
\end{equation}
where $U_1(T)$ and $U_2(T)$ are the single-particle evolution operators for the two atoms without interactions.
\begin{table}[htdp]
\begin{tabular}{|c|c|c|c|c|}
\hline
&$f_0^R$&$f_1^L$&$F$&$F_{\rm int}$\\
\hline
non optimized&0.69&0.57&0.22&0.22\\
\hline
transport optimized&0.99&0.99&0.98&0.93\\
\hline
interaction optimized&0.98&0.98&0.96&0.97\\
\hline
\end{tabular}
\caption{Results of our numerical simulations for three different sets of control parameters: the non optimized case Fig.~\ref{fig:pulse}{\sf b}; the transport optimized case Fig.~\ref{fig:pulse-opt} where the optimal control algorithm is used without taking into account interactions; the interaction optimized case where the optimal control algorithm is used taking into account interactions. The quantities shown are: the single-particle populations $f_0^R$ and $f_1^L$ calculated without interactions, the two-particle fidelities $F$ and $F_{\rm int}$ calculated without and with interactions.}
\label{tab:1}
\end{table}
In Table~\ref{tab:1} we summarize our results for $T=0.15$ ms obtained with three different sequences: first, the non optimized sequence Fig.~\ref{fig:pulse}{\sf b}; second, the transport optimized case Fig.~\ref{fig:pulse-opt} where we used the optimal control algorithm to optimize the single-particle populations not taking into account interactions; third, the interaction optimized case where we apply the optimal control algorithm
using as the initial guess the transport optimized sequence Fig.~\ref{fig:pulse-opt} and then optimizing including the interactions in the evolution.
The resulting
wave-functions for the non optimized and transport optimized sequences are compared in Fig.~\ref{fig:wave-int}{\sf c-d}.
Without optimal control the two-particle fidelity with and without interactions is $F=F_{\rm int}=0.22$, while with (non-interacting) optimization we obtain $F \simeq f_0^R \, f_1^L=0.98$ and $F_{\rm int}=0.93$. This shows that interactions slightly spoil
the efficiency of the transport process, as one might expect. Optimal control can subsequently be applied while including interactions in the optimization, producing a control sequence with a fidelity of
$F_{\rm int}=0.97$.
Another consideration is the experimental bandwidth available for feedback control. The optimized control waveforms Fig.~\ref{fig:pulse-opt} were obtained with no restriction on the frequency response of the control, and typically have frequency components on the order of a few times the lattice vibrational spacings (see Fig.~\ref{fig:band}), i.e. larger than the bandwidth of our control electronics. Clearly, using a filtered version of these waveforms will lead to lower control fidelity
and it will be important to increase the experimental bandwidth of the control electronics (currently about 50~kHz). In addition, it may be useful to develop an optimization sequence that includes the limited control bandwidth, although it is likely that frequencies on the order of the vibrational spacing will always be needed.
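The normalized spectra of Fig.~\ref{fig:band} can be obtained along the following lines. The waveform in this sketch is synthetic (a smooth envelope plus a single fast component), not the actual optimized sequence.

```python
import numpy as np

# Fourier-transform a sampled control waveform and normalize to the
# component at the fundamental frequency 1/T.  The waveform is synthetic:
# a smooth envelope plus a 40 kHz wiggle standing in for the fast
# features of the optimized sequence.
T = 0.15e-3                                    # sequence duration (s)
n = 1024
t = np.linspace(0, T, n, endpoint=False)
ramp = np.sin(np.pi * t / T)**2                # smooth 0 -> 1 -> 0 envelope
beta = ramp + 0.05 * np.sin(2 * np.pi * 40e3 * t)

spec = np.abs(np.fft.rfft(beta))
freqs = np.fft.rfftfreq(n, d=T / n)            # freqs[k] = k / T
spec_norm = spec / spec[1]                     # normalize to f = 1/T

# Strongest component above the fundamental sits at the wiggle frequency.
peak_freq = freqs[2 + np.argmax(spec_norm[2:])]   # = 40 kHz here
```

Comparing such spectra with the bandwidth of the control electronics (about 50 kHz here) shows directly which features of the optimized waveforms would be filtered out.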
\begin{figure}[htbp]
\includegraphics[scale=0.7 ]{10}\\
\caption{
The normalized Fourier transform magnitudes $|\tilde{\beta}(f)|$ (solid) and $|\tilde{\theta}(f)|$ (dashed) of the optimized control sequences $\beta(t)$ and $\theta(t)$ shown in Fig.~\ref{fig:pulse-opt}. The spectra are normalized to the value at the fundamental frequency $1/T=6.67$~kHz.
}
\label{fig:band}
\end{figure}
\section{Conclusions} \label{sec:conclusion}
We have presented a detailed, numerical analysis of the transport
process of neutral atoms in a time dependent optical lattice. We
show how to improve the fidelity of the transport process for $T=0.15$ ms from
$F_{\rm int}=0.22$, using simple adiabatic switching, to $F_{\rm
int}=0.97$, using optimal control theory. We expect better results for longer control times. We
analyze the effect of atom-atom interactions on the transport
process and we show that the optimal control parameter sequences
found in the non-interacting case still work when including
interaction. We obtained the same transformation as
in the adiabatic transport case, with a better fidelity and
in a time shorter by more than a factor of three, which represents
a significant improvement in terms of the number of
gates that can be performed before the system decoheres due to the
coupling to its environment. This technique can be easily adapted
to other similar transport processes and also extended to atoms in
different magnetic states, which can allow the implementation of a
fast, high-fidelity quantum gate in a real optical lattice setup
with the qubits encoded in the atomic internal states
\cite{anderliniSwap}. In the future, it would be interesting to study
the possibility of including the effect of errors in the
optimization procedure and thus investigate in more detail
the robustness and noise-resilience of the optimal control
technique.
\acknowledgments This work was supported by the European
Commission, through projects SCALA (FET/QIPC), EMALI (MRTN-CT-2006-035369) and QOQIP, by
the National Science Foundation through a grant for the Institute
for Theoretical Atomic, Molecular and Optical Physics at Harvard
University and Smithsonian Astrophysical Observatory, and by DTO. SM acknowledges
support from IST-EUROSQIP and the Quantum Information program
of ``Centro De Giorgi'' of Scuola Normale Superiore.
The computations have been performed on the HPC facility of the
Department of Physics, University of Trento. We thank J.
Sebby-Strabley for experimental support.
\section{Introduction}
In an empty universe the \textit{null geodesics} along which photons travel correspond directly to straight lines. However, in the presence of a non-uniform distribution of matter the null geodesics are perturbed \textit{via} gravitational interaction with the local matter over or under density \textit{i.e.} the photons are \textit{gravitationally lensed} \citep{[48], [2], [23], [24]}. As this gravitational interaction is sensitive only to the total matter distribution, and the overwhelming majority of matter is typically dark, gravitational lensing provides a natural probe of dark matter itself \citep{[25]}.
\par
Collections of associated photons emitted from a distant object travel along separate geodesics which are perturbed in different ways by the intervening matter distribution, \textit{e.g.} photons traveling closer to matter over densities will interact more strongly and therefore be perturbed more than those farther away. As such the geometry of a distant object is warped \citep{[1]} -- \textit{i.e.} colloquially the object is \textit{lensed}.
\par
Provided the propagating photons at no time come closer than one Einstein radius to the intervening matter over and under densities, the object is \textit{weakly lensed}. Weak gravitational lensing of distant galaxies manifests itself at first order as two quantities: the spin-0 convergence $\kappa$, which is a magnification, and the spin-2 shear $\gamma$, which is a perturbation to the galaxy ellipticity (third-flattening).
\par
Both the shear $\gamma$ and the convergence $\kappa$ have dominant intrinsic (\textit{i.e.} in the absence of lensing effects) underlying values which makes measurements of the lensing effect difficult. In fact, there is no \textit{a priori} way to know the intrinsic brightness of a galaxy (resulting in an inherent degeneracy -- the mass-sheet degeneracy) and so the convergence is not an observable quantity \citep{[48]}. However, as the intrinsic ellipticity distribution of galaxies has zero mean one can average to recover the shearing contribution, hence the shear is an observable quantity. As such, measurements of the shear field are taken and inverted to form estimators of the convergence. Typically this inverse problem is seriously ill-posed and so substantial uncertainty on the reconstructed convergence map is introduced.
\par
A wealth of information may be calculated directly from the shear field, often in the form of second-order statistics \citep{kilbinger2015}, such as the power spectrum \citep{[26],[56]}, though recently there is increasing interest in extracting non-Gaussian information from the convergence field, \textit{e.g.} peak statistics, Minkowski functionals, extreme value statistics \citep{[27], [28], [52], [54], [55]}.
\par
Primarily, the interest has arisen as higher-order statistics of the convergence field have been shown to provide complementary constraints on dark matter cosmological parameters which are typically poorly constrained by second-order statistics \citep{[50]}.
\par
However, to make principled statistical inferences from the convergence field, the inversion from $\gamma$ to $\kappa$ must be treated in a principled statistical manner \nobreak--\nobreak\hskip4pt something which until recently was missing from convergence reconstruction algorithms, which were either not framed in a statistical framework \citep[\textit{e.g.}][]{[29],[5],[6],[3],[15]} or made assumptions of Gaussianity \citep[\textit{e.g.}][]{[30],[26],[43]}. As the information of interest in higher-order convergence statistics is non-Gaussian, assumptions of Gaussianity in the reconstruction process severely degrade the quality of the cosmological information.
\par
Recently, a mass-mapping framework was developed \citep[see][]{[M1]} which addressed precisely this issue. This new sparse hierarchical Bayesian mass-mapping formalism can be rapidly computed, can be extended to big data, and provides a principled statistical framework for quantifying uncertainties on reconstructed convergence maps. Notably, it has been shown to accurately reconstruct very high dimensional Bayesian estimators many orders of magnitude faster than \textit{state-of-the-art} proximal MCMC algorithms \nobreak--\nobreak\hskip4pt it was specifically benchmarked against Px-MALA \citep{[51],Durmus2018} in \citet{[M2]}.
\par
In this paper, we propose two novel uncertainty quantification techniques, aimed at answering two frequently asked questions of the recovered convergence map. The first question asks where a feature of interest in the reconstructed convergence map could have been observed \nobreak--\nobreak\hskip4pt typically this has been addressed by bootstrapping, but we can now infer it directly in a Bayesian manner. The second question asks, given a magnitude threshold, what are the maximum and minimum numbers of peaks that could have been observed, within some well-defined confidence.
\par
The structure of this article is as follows. To begin, in section \ref{sec:WeakGravitationLensing} we provide a cursory introduction to weak lensing from a mathematical perspective, with emphasis on the weak lensing planar forward model in subsection \ref{sec:LensingForwardModel}. Following this we provide a brief overview of Bayesian inference and the previously developed \citep{[M1]} sparse hierarchical Bayesian mass-mapping algorithm in section \ref{sec:VERITAS}. An introduction to Bayesian credible regions, specifically the highest posterior density credible region, is provided in section \ref{sec:HPD_region}. In section \ref{sec:BayesianLocation} we develop a novel Bayesian inference approach to quantifying the uncertainty in reconstructed feature location, which we then showcase on illustrative N-body cluster simulation data in section \ref{sec:BayesLocationApplication}. We then introduce a novel Bayesian inference approach for recovering principled uncertainties on the aggregate peak count statistic in section \ref{sec:peak_uncertainties}, which we showcase on illustrative N-body large-scale structure (LSS) simulation data in section \ref{sec:PeakDemonstration}. Finally we draw conclusions in section \ref{sec:Conclusion}.
\section{Weak Gravitational Lensing} \label{sec:WeakGravitationLensing}
In this section we provide a brief introduction to weak gravitational lensing, with emphasis on how this effect manifests itself in observable quantities. For a detailed background review of the field see \citet{[1],[2]}. For a more mathematical background of the field, with emphasis on statistical methods, see \citet{[48], [23], [24]}. For a background specifically on peak statistics see \citet{[53]}.
\subsection{Mathematical Background} \label{sec:MathematicalBackground}
In a non-uniform distribution of matter the null geodesics along which photons travel are no longer straight lines; instead they are sensitive to the local matter distribution. When many photons propagate from a distant object to an observer, the local matter distribution adjusts the geometry of the object we observe \nobreak--\nobreak\hskip4pt the object is \textit{gravitationally lensed}.
\par
Provided the trajectory of the propagating photons at no time comes closer than one Einstein radius $\theta_E$ to the intervening matter overdensities, the lens equation,
\begin{equation}
\beta = \theta - \theta_E^2 \frac{\theta}{|\theta|^2},
\end{equation}
remains effectively non-singular and we are in the \textit{weak lensing} regime. Equivalently, one can define the weak lensing regime as that in which the convergence satisfies $\kappa \ll 1$ -- ensuring that the shear signal remains linear. Here the Einstein radius is given by,
\begin{equation}
\theta_E = \sqrt{\frac{4GM_{\text{lens}}}{c^2}\frac{f_K(r - r^{\prime})}{f_K(r)f_K(r^{\prime})}},
\end{equation}
where $G$ is the gravitational constant, $M_{\text{lens}}$ is the lensing mass, $c$ is the speed of light \textit{in vacuo} and $f_K(r)$ is the angular diameter distance defined as:
\begin{equation} \label{eq:angular_diameter_distance}
f_K(r)=\begin{cases}
\sin(r) \quad &\text{if}\quad K = 1, \\
r \quad &\text{if}\quad K = 0,\\
\sinh(r) \quad &\text{if}\quad K = -1,\\
\end{cases}
\end{equation}
where $r$ is the comoving distance and $K$ is the curvature of the universe, which has been observed to be consistent with $0$ by \citet{[47]}.
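This piecewise definition translates directly into code; the following minimal sketch is illustrative only (the function and argument names are our own, and $r$ is assumed to be expressed in curvature units):

```python
import numpy as np

def f_K(r, K):
    """Angular diameter distance, with the comoving distance r
    expressed in curvature units."""
    if K == 1:      # closed universe
        return np.sin(r)
    elif K == 0:    # flat universe (consistent with observations)
        return r
    elif K == -1:   # open universe
        return np.sinh(r)
    raise ValueError("curvature K must be one of {-1, 0, 1}")
```

For the observationally favoured flat case $K=0$ the function simply returns the comoving distance.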
\par
As galaxies are naturally sparsely distributed across the sky, most observations fall within the weak lensing regime. The weak gravitational lensing effect can be described by a lensing potential $\phi(r,\omega)$, which is the Newtonian potential $\Phi(r, \omega)$ integrated along the line of sight,
\begin{equation} \label{eq:lineofsightmass}
\phi(r,\omega) = \frac{2}{c^2} \int_0^r dr^{\prime} \frac{f_K(r - r^{\prime})}{f_K(r) f_K(r^{\prime})} \Phi(r^{\prime}, \omega),
\end{equation}
where $\omega = (\theta, \psi)$ are angular spherical co-ordinates. A further constraint is that the local Newtonian potential $\Phi(r,\omega)$ must satisfy the Poisson equation:
\begin{equation} \label{eq:Poisson}
\nabla^2 \Phi(r,\omega) = \frac{3 \Omega_M H_0^2}{2a(r)} \delta(r,\omega),
\end{equation}
for matter density parameter $\Omega_M$, Hubble constant $H_0$ and scale factor $a(r)$. Combined, equations (\ref{eq:lineofsightmass}) and (\ref{eq:Poisson}) allow one to make inferences of cosmological parameters from observations of the lensing potential $\phi(r,\omega)$.
\par
At linear order, gravitational lensing manifests itself as two quantities: the spin-0 convergence field $\kappa$ (magnification) and the spin-2 shear field $\gamma$ (perturbation to ellipticity). It can be shown that \citep{[1],[2]} these quantities can be related to the lensing potential $\phi(r,\omega)$ by,
\begin{align}
& \kappa(r,\omega) = \frac{1}{4}(\eth \bar{\eth} + \bar{\eth} \eth) \; \phi(r,\omega), \label{eq:kappatophi} \\
& \gamma(r,\omega) = \frac{1}{2} \eth \eth \; \phi(r,\omega), \label{eq:gammatophi}
\end{align}
where $\eth$ and $\bar{\eth}$ are the spin raising and lowering operators respectively and are in general defined to be:
\begin{align}
\eth \equiv -\sin^s\theta \Big ( \frac{\partial}{\partial \theta} + \frac{i}{\sin\theta} \frac{\partial}{\partial \psi} \Big ) \sin^{-s}\theta, \\
\bar{\eth} \equiv -\sin^{-s}\theta \Big ( \frac{\partial}{\partial \theta} - \frac{i}{\sin\theta} \frac{\partial}{\partial \psi} \Big ) \sin^{s}\theta.
\end{align}
\subsection{Lensing Planar Forward Model} \label{sec:LensingForwardModel}
Often second-order statistics \citep{kilbinger2015} related to the shear $\gamma$ are computed \citep[\textit{e.g.} the shear power spectrum as in][]{[56],[26]}, though increasingly there is interest in extracting weak lensing information from the convergence directly \nobreak--\nobreak\hskip4pt typically higher-order non-Gaussian information.
\par
Unfortunately, due to an inherent degeneracy the convergence is not an observable quantity \citep{[1],[2],[48]} \nobreak--\nobreak\hskip4pt this effect is colloquially referred to as the \textit{mass-sheet degeneracy}. However, as the intrinsic ellipticity distribution of galaxies has zero mean, averaging many ellipticity observations within a given pixel provides a good estimate of the shear field.
\par
In fact, there exists a further degeneracy between $\kappa$ and $\gamma$ such that the true observable is the \textit{reduced shear} $g$, but for the purposes of this paper we will assume $\gamma \approx g \ll 1$ \nobreak--\nobreak\hskip4pt see \citeauthor{[57]} pg.~153, \citet{[M1]}, or \citet{[3]} for details on how to account for the non-linear reduced shear.
\par
As both $\kappa$ and $\gamma$ are related to $\phi$, a relation between $\kappa$ and $\gamma$ can be formed. Therefore, typically measurements of the shear field are taken and inverted to form estimates of the underlying convergence field. For small fields of view the \textit{flat-sky approximation} can be made, which reduces the spin-raising $\eth$ and spin-lowering $\bar{\eth}$ operators to \citep{[4]}:
\begin{equation}
\eth \approx -(\partial_x + i \partial_y) \quad \mbox{and} \quad \bar{\eth} \approx -(\partial_x - i \partial_y).
\end{equation}
From equations (\ref{eq:kappatophi}) and (\ref{eq:gammatophi}) it is clear that the forward model in Fourier space between $\kappa$ and $\gamma$ is given by
\begin{equation} \label{eq:forwardmodel}
\hat{\gamma}(k_x,k_y) = \bm{\mathsf{D}}_{k_x,k_y} \hat{\kappa}(k_x, k_y),
\end{equation}
with the mapping operator being
\begin{equation} \label{eq:forward_model}
\bm{\mathsf{D}}_{k_x,k_y} = \frac{k_x^2-k_y^2+2ik_xk_y}{k_x^2+k_y^2},
\end{equation}
where we have dropped the spin subscripts for clarity. To recover an estimator of the convergence one must invert this forward model.
\par
The most naive inversion technique is that of Kaiser-Squires (KS) inversion \citep{[5]}, which is direct inversion in Fourier space, \textit{i.e.}
\begin{equation} \label{eq:Kaiser-Squires}
\hat{\kappa}^{\text{KS}} = \bm{\mathsf{D}}^{-1} \hat{\gamma},
\end{equation}
where we have again dropped function arguments and subscripts for clarity. In practice, this approach attempts to mitigate the effect of noise by convolving the recovered convergence estimate with a broad Gaussian smoothing kernel, which severely degrades the quality of non-Gaussian information. This poses a serious issue, as non-Gaussian information is precisely the information that is to be extracted from the convergence field; for higher-order convergence statistics the KS estimator is therefore patently sub-optimal.
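For concreteness, a minimal \texttt{numpy} sketch of the KS estimator of equation (\ref{eq:Kaiser-Squires}), using the flat-sky mapping of equation (\ref{eq:forward_model}), might read as follows; the grid and frequency conventions, and the zeroing of the unobservable $k=0$ mode (the mass-sheet degeneracy), are our own illustrative choices:

```python
import numpy as np

def kaiser_squires(gamma_map):
    """Naive Kaiser-Squires inversion: kappa = F^{-1} D^{-1} F gamma.

    gamma_map : complex (n, n) array of binned shear measurements.
    The unobservable k = 0 mode (mass-sheet degeneracy) is set to zero.
    """
    n = gamma_map.shape[0]
    kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    k2 = kx**2 + ky**2
    D = np.ones((n, n), dtype=complex)       # placeholder value at k = 0
    nz = k2 > 0
    D[nz] = (kx[nz]**2 - ky[nz]**2 + 2j * kx[nz] * ky[nz]) / k2[nz]
    kappa_hat = np.fft.fft2(gamma_map) / D   # division is safe: |D| = 1 off k = 0
    kappa_hat[0, 0] = 0.0                    # zero the degenerate mean mode
    return np.fft.ifft2(kappa_hat)
```

Since $|\bm{\mathsf{D}}| = 1$ for $k \neq 0$, the division itself is well conditioned; it is the noise and masking, not the inversion, that make the problem ill-posed.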
\par
Moreover, decomposition of spin fields on bounded manifolds is well known to be degenerate \citep{[4]} and so inversion of shear to convergence for masked fields is inherently ill-defined \nobreak--\nobreak\hskip4pt in particular the KS estimator is known to break down for non-trivial masking. In fact the lensing inverse problem is often seriously ill-posed, therefore motivating methods regularized by prior information.
\section{Sparse Bayesian Mass-mapping} \label{sec:VERITAS}
Many mass-mapping algorithms have been considered \citep[\textit{e.g.}][]{[29],[5],[6],[3],[15],[49],[22]}, however in the context of this paper we wish to conduct principled statistical analysis of the reconstructed convergence map, and so we opt for the sparse hierarchical Bayesian algorithm presented in \citet{[M1]} and benchmarked against MCMC algorithms in \citet{[M2]}.
\par
Recently a sparse hierarchical Bayesian framework for convergence reconstruction was presented \citep{[M1]} which is not limited to Gaussian priors \nobreak--\nobreak\hskip4pt in fact the prior need not even be differentiable. In this work we adopt this mass-mapping algorithm, which we briefly describe below.
\par
First, by Bayes' theorem the \textit{posterior distribution} of the convergence $\kappa$ reads
\begin{equation} \label{eq:bayes}
p(\kappa|\gamma) = \frac{p(\gamma|\kappa)p(\kappa)}{\int_{\mathbb{C}^N} p(\gamma|\kappa)p(\kappa)d\kappa},
\end{equation}
which shows how one should infer the \textit{posterior} $p(\kappa|\gamma)$ from the \textit{likelihood function} (data fidelity term) $p(\gamma|\kappa)$ and the \textit{prior} (regularization term) $p(\kappa)$ \citep[see \textit{e.g.}][for a clear introduction to Bayesian inference in a cosmological setting]{[46]}. In the scope of this paper we do not consider the \textit{Bayesian evidence} $\int_{\mathbb{C}^N} p(\gamma|\kappa)p(\kappa)d\kappa$ as it acts as a normalization term and so does not affect the recovered solution. Typically the Bayesian evidence is used for model comparison, which is an avenue of study in its own right.
\par
In a Bayesian inference problem one often wishes to find the solution $\kappa$ which maximizes the posterior given data and some model. From the monotonicity of the logarithm function, maximization of the posterior is equivalent to minimization of the \textit{log-posterior} such that
\begin{equation} \label{eq:log-posterior}
\argmaxT_{\kappa} \big \lbrace p(\kappa|\gamma) \big \rbrace \equiv \argminT_{\kappa} \big \lbrace -\log ( \; p(\kappa|\gamma) \;) \big \rbrace,
\end{equation}
where the convergence which maximizes the posterior is denoted $\kappa^{\text{map}}$ (MAP: \textit{maximum a posteriori}). Provided the posterior is log-concave, minimization of the log-posterior takes the form of a \textit{convex optimization} problem, which can be computed rapidly even in high dimensional settings.
\par
Let $\kappa \in \mathbb{C}^N$ be the discretized complex convergence field and $\gamma \in \mathbb{C}^M$ be the discretized complex shear field, where $M$ is the number of binned shear measurements and $N$ is the dimensionality of the recovered convergence field. Suppose we can define a \textit{measurement operator} $\bm{\Phi} \in \mathbb{C}^{M \times N}: \kappa \mapsto \gamma$ which maps $\kappa$ onto $\gamma$. On the plane, the measurement operator is given by \citep[\textit{e.g.}][]{[M1]}
\begin{equation} \label{eq:measurement_operator}
\bm{\Phi} = \bm{\mathsf{F}}^{-1} \bm{\mathsf{D}} \bm{\mathsf{F}},
\end{equation}
where $\bm{\mathsf{D}}$ is the planar forward model in Fourier space as defined in equation (\ref{eq:forward_model}), and $\bm{\mathsf{F}}$ $(\bm{\mathsf{F}}^{-1})$ is the forward (inverse) fast Fourier transform.
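As a minimal illustration (our own sketch, not the reference implementation of \citet{[M1]}), the measurement operator of equation (\ref{eq:measurement_operator}) and its adjoint -- the latter being required for gradient steps of the optimization discussed below -- can be written with fast Fourier transforms as:

```python
import numpy as np

def make_operators(n):
    """Build Phi = F^{-1} D F and its adjoint for an (n, n) grid.

    The k = 0 mode of D is set to zero, reflecting the unobservable
    mass-sheet mode.
    """
    kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    k2 = kx**2 + ky**2
    D = np.zeros((n, n), dtype=complex)
    nz = k2 > 0
    D[nz] = (kx[nz]**2 - ky[nz]**2 + 2j * kx[nz] * ky[nz]) / k2[nz]

    def phi(kappa):
        """Forward map: convergence -> shear."""
        return np.fft.ifft2(D * np.fft.fft2(kappa))

    def phi_adj(gamma):
        """Adjoint map: shear -> convergence space (conjugate multiplier)."""
        return np.fft.ifft2(np.conj(D) * np.fft.fft2(gamma))

    return phi, phi_adj
```

Because the Fourier multiplier $\bm{\mathsf{D}}$ is diagonal, the adjoint is simply multiplication by its complex conjugate between the same transform pair.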
\par
Now suppose that some noise $n$ is contaminating our measurements, then measurements are obtained through the \textit{measurement equation}
\begin{equation}
\gamma = \bm{\Phi} \kappa + n.
\end{equation}
Within this article we will consider the typical case in which the noise $n \sim \mathcal{N}(0,\sigma_n^2) \in \mathbb{C}^M$, \textit{i.e.} i.i.d. (independent and identically distributed) zero mean Gaussian noise of variance $\sigma^2_n$. In this setting the likelihood function is given by a multivariate Gaussian, which for diagonal covariance $\Sigma = \sigma_n^2 \mathbb{I}$ is given compactly as
\begin{equation}
p(\gamma|\kappa) \propto \exp \Bigg(\frac{-\norm{\bm{\Phi} \kappa - \gamma}_2^2}{2\sigma_n^2} \Bigg),
\end{equation}
which \citep[as in][]{[M1]} is regularized by a sparsity promoting Laplace-type $l_1$-norm wavelet prior
\begin{equation}
p(\kappa) \propto \exp \Big(-\mu \norm{\bm{\Psi}^{\dag}\kappa}_1 \Big),
\end{equation}
where $\mu \in \mathbb{R}_{+}$ is the jointly inferred MAP regularization parameter \citep{[16]} \nobreak--\nobreak\hskip4pt the derivation and implementation of which may be found in \citet{[M1]}. Sparsity promoting priors in wavelet dictionaries have explicitly been shown to be effective in the weak lensing setting \citep{[15], [6], [9], [M1]}.
\par
Using these terms, the minimization of the log-posterior becomes
\begin{equation} \label{eq:optimization}
\kappa^{\text{map}} = \argminT_{\kappa} \Bigg \lbrace \mu \norm{ \bm{\Psi}^{\dag}\kappa}_1 + \frac{\norm{\bm{\Phi} \kappa - \gamma}_2^2}{2\sigma_n^2} \Bigg \rbrace,
\end{equation}
which is then solved iteratively by implementing a proximal forward-backward splitting algorithm \citep[\textit{e.g.}][]{[40]}.
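A minimal sketch of such a forward-backward iteration follows; to keep it self-contained we substitute the identity for the wavelet dictionary $\bm{\Psi}$ (so the proximal operator of the $l_1$ term reduces to elementwise soft-thresholding), and the step size assumes $\|\bm{\Phi}\| \leq 1$ -- both simplifying assumptions of ours:

```python
import numpy as np

def soft(x, t):
    """Complex soft-thresholding: the proximal operator of t * ||.||_1."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-30)) * x, 0.0)

def forward_backward(gamma, phi, phi_adj, sigma_n, mu, n_iter=200):
    """Forward-backward splitting for
        argmin_k  mu * ||k||_1 + ||phi(k) - gamma||_2^2 / (2 sigma_n^2),
    with the identity standing in for the wavelet dictionary Psi.
    """
    kappa = phi_adj(gamma)                               # initial point
    step = sigma_n**2                                    # 1 / Lipschitz bound
    for _ in range(n_iter):
        grad = phi_adj(phi(kappa) - gamma) / sigma_n**2  # forward (gradient) step
        kappa = soft(kappa - step * grad, step * mu)     # backward (proximal) step
    return kappa
```

With a true wavelet dictionary the proximal step instead thresholds the wavelet coefficients $\bm{\Psi}^{\dag}\kappa$ before synthesis.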
\par
Note that one can choose any convex log-prior, \textit{e.g.} an $\ell_2$-norm prior, from which one essentially recovers Wiener filtering \citep[see][for alternate iterative Wiener filtering approaches]{Seljak2003,Horowitz2018}.
\begin{figure*}
\begin{center}
\begin{tikzpicture}[node distance = 2cm, auto]
\node [parameter_fixed, text width=14em] (-1) {Calculate MAP solution: $\kappa^{\text{map}}$ (\ref{eq:optimization})};
\node [below of=-1,node distance = 3cm](0) {\includegraphics[width=0.25\textwidth]{Bayesian_location/True_map.png}};
\node [below of=-1, node distance=1.6cm, text width=18em] (dummy) {};
\node [block, left of=dummy, node distance=5.2cm, text width=14em] (1a) {Remove feature $\mathcal{Z}$ by equation (\ref{eq:inpainting}).};
\node [block, right of=dummy, node distance=5.2cm, text width=14em] (1b) {Extract feature $\mathcal{Z} = \kappa^{\text{map}} \mathbb{I}_{\Omega_{\mathcal{Z}}}$};
\node [below of=1a,node distance = 3cm](2a) {\includegraphics[width=0.25\textwidth]{Bayesian_location/Inpainted_empty_map.png}};
\node [below of=1b,node distance = 3cm](2b) {\includegraphics[width=0.25\textwidth]{Bayesian_location/Feature.png}};
\node [block, below of=0, node distance=4.0cm, text width=14em] (3) {Insert: feature $\mathcal{Z}$ at $\bm{x}_t$, get surrogate $\kappa^{\text{sgt}}$};
\node [below of=3,node distance = 3cm](4) {\includegraphics[width=0.25\textwidth]{Bayesian_location/new_surrogate.png}};
\node [decision, below of=4, node distance=3.0cm, text width=14em] (5) {Is: $\kappa^{\text{sgt}} \in C_{\alpha}^{\prime}$ ? };
\node [iteration, below of=2b, node distance=7.0cm, text width=8em] (7) {Reject pixel: $\bm{x}_t$};
\node [iteration, below of=2a, node distance=7.0cm, text width=8em] (8) {Accept pixel: $\bm{x}_t$};
\node [iteration, below of=2a, node distance=4.0cm, text width=8em] (9) {$t \rightarrow t + 1$};
\node [iteration, below of=2b, node distance=4.0cm, text width=8em] (10) {$t \rightarrow t + 1$};
\path [line] (-1) -- (0);
\path [line] (0) -- (1a);
\path [line] (0) -- (1b);
\path [line] (1a) -- (2a);
\path [line] (1b) -- (2b);
\path [line] (2a) -- (3);
\path [line] (2b) -- (3);
\path [line] (3) -- (4);
\path [line] (4) -- (5);
\path [line,dashed] (5) -- node[below]{Yes}(8);
\path [line,dashed] (8) -- node[decision,left]{Select next nearest pixel.}(9);
\path [line,dashed] (9) -- (3);
\path [line,dashed] (10) -- (3);
\path [line,dashed] (7) -- node[decision,right]{Select next nearest pixel.}(10);
\path [line,dashed] (5) -- node[below]{No}(7);
\end{tikzpicture}
\caption{Schematic representation of the inverse nested iterations to determine the \textit{Bayesian location} (see section \ref{sec:BayesianLocation}). The Bayesian location is a positional uncertainty on a feature of interest $\mathcal{Z}$ within a recovered convergence field. Once a complete ring of pixels has been rejected the algorithm returns a binary map of accepted pixels, which we call the Bayesian location. Any pixel outside of this location is rejected at $100(1-\alpha)\%$ confidence. Alternatively, the probability isocontour bounding the set of acceptable pixels can be located by N-splitting circular bisection, as described in section \ref{sec:N-splitting} and Appendix \ref{sec:appendixa}.}
\label{fig:BayesianLocationSchema}
\end{center}
\end{figure*}
\begin{figure}
\centering
\large Bayesian Location of Bolshoi-3 Sub-halos \par
\includegraphics[width=0.45\textwidth, trim={0 0 0 1.6cm},clip]{Bayesian_location/Combined_Peaks.png}
\caption{Combined plot of the $99\%$ confidence Bayesian locations at SNR $= 12, 15, 17, 20$ dB. The outer rings represent the noisier position isocontours; as the data become cleaner the isocontour rings become smaller (the rings therefore represent isocontours at SNR $= 12, 15, 17, 20$ dB, from the outermost ring inwards respectively). N-splitting Circular Bisection (see section \ref{sec:N-splitting}) was used to efficiently compute each isocontour. For input SNRs below $\approx 10$ dB the smaller local features cannot be determined to be physical \textit{via} the initial hypothesis test, and so we truncate our analysis at SNR $= 12$ dB.}
\label{fig:Combined_peaks}
\end{figure}
\subsection{Bayesian credible regions} \label{sec:HPD_region}
In Bayesian analysis a posterior credible region $C_{\alpha} \subset \mathbb{C}^N$ at confidence $100(1-\alpha)\%$ is a set which satisfies:
\begin{equation} \label{eq:CredibleIntegral}
p(\kappa \in C_{\alpha}|\gamma) = \int_{\kappa \in \mathbb{C}^N} p(\kappa|\gamma)\mathbb{I}_{C_{\alpha}}d\kappa = 1 - \alpha,
\end{equation}
where $\mathbb{I}_{C_{\alpha}}$ is an indicator function defined such that,
\begin{equation} \label{eq:indicator}
\mathbb{I}_{C_{\alpha}}=\begin{cases}
1 \quad \text{if}\quad \kappa \in C_{\alpha}, \\
0 \quad \text{if}\quad \kappa \not\in C_{\alpha}.\\
\end{cases}
\end{equation}
\par
There are in general a large number of posterior regions (hyper-volumes) which satisfy this integral. The decision-theoretically optimal region \nobreak--\nobreak\hskip4pt in the sense of minimum enclosed volume \nobreak--\nobreak\hskip4pt is called the \textit{highest posterior density} (HPD) credible region \citep{[19]} and is defined to be:
\begin{equation}
C_{\alpha} := \lbrace \kappa : f(\kappa) + g(\kappa) \leq \epsilon_{\alpha} \rbrace,
\end{equation}
where $f(\kappa) = \mu \norm{\bm{\Psi}^{\dag} \kappa}_1$ is the (negative) log-prior term and $g(\kappa) = \norm{\bm{\Phi} \kappa - \gamma}_2^2 / 2\sigma_n^2$ is our data fidelity term (negative log-likelihood). Here $\epsilon_{\alpha}$ defines a level-set (\textit{i.e.} isocontour) of the log-posterior such that equation (\ref{eq:CredibleIntegral}) is satisfied. In practice the true HPD credible region is difficult to compute due to the high dimensional integral in equation (\ref{eq:CredibleIntegral}), motivating computationally efficient approximate techniques.
\par
Recently a conservative approximation of the HPD credible set was proposed by \citet{[10]} which exploits developments in probability concentration theory. The approximate HPD credible set $C^{\prime}_{\alpha}$ is given by:
\begin{equation}
C^{\prime}_{\alpha} := \lbrace \kappa : f(\kappa) + g(\kappa) \leq \epsilon^{\prime}_{\alpha} \rbrace,
\end{equation}
with approximate level-set threshold
\begin{equation}
\epsilon^{\prime}_{\alpha} = f(\kappa^{\text{map}}) + g(\kappa^{\text{map}}) + \tau_{\alpha} \sqrt{N} + N,
\end{equation}
where the bounding term is $\tau_{\alpha} = \sqrt{16 \log(3 / \alpha)}$, valid for confidence levels $\alpha \in \big ( 4\exp(-N/3) \;, 1 \big )$. The error of this approximation is bounded above by
\begin{equation}
0 \leq \epsilon^{\prime}_{\alpha} - \epsilon_{\alpha} \leq \eta_{\alpha} \sqrt{N} + N,
\end{equation}
where $\eta_{\alpha} = \sqrt{16 \log (3/\alpha)} + \sqrt{1/\alpha}$. This upper bound is typically conservative, meaning the approximate isocontour is at all times larger than the true isocontour (\textit{i.e.} this estimator will never produce an underestimate). In \citet{[M2]} the recovered local error bars were found to be $\approx 10$ to $15\%$ larger than those recovered by MCMC sampling of the true posterior \nobreak--\nobreak\hskip4pt yet could be computed $\mathcal{O}(10^6)$ times faster. A similar comparison was performed by \citet{[12]} in a radio interferometric setting.
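Evaluating the approximate threshold is computationally trivial; a minimal sketch (function names are our own) reads:

```python
import numpy as np

def hpd_threshold(obj_map, N, alpha=0.01):
    """Approximate HPD level-set threshold
        eps'_alpha = f(k_map) + g(k_map) + tau_alpha * sqrt(N) + N,
    valid for alpha in (4 exp(-N/3), 1).

    obj_map : value of f + g evaluated at the MAP estimate.
    N       : dimensionality of the reconstructed convergence field.
    """
    if not (4.0 * np.exp(-N / 3.0) < alpha < 1.0):
        raise ValueError("alpha lies outside the validity range")
    tau = np.sqrt(16.0 * np.log(3.0 / alpha))
    return obj_map + tau * np.sqrt(N) + N

def in_credible_set(obj_surrogate, eps_prime):
    """A surrogate is not rejected at 100(1-alpha)% confidence iff
    its objective f + g does not exceed the threshold."""
    return obj_surrogate <= eps_prime
```

Note that tightening the confidence level (smaller $\alpha$) enlarges $\tau_{\alpha}$ and hence the credible set, as expected.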
\par
The concept of approximate HPD credible regions is particularly useful as it allows us to explore high dimensional posteriors \nobreak--\nobreak\hskip4pt many orders of magnitude larger than \textit{state-of-the-art} MCMC techniques are currently able to accommodate \nobreak--\nobreak\hskip4pt in a computationally efficient manner.
\section{Bayesian peak locations} \label{sec:BayesianLocation}
Often one wishes to know the location of a feature of interest within the reconstructed convergence $\kappa^{\text{map}}$. Typically, this uncertainty is assessed \textit{via} bootstrapping of the recovered image for a large number of simulated noise fields \citep[as in \textit{e.g.}][]{[9]}.
\par
With the concept of approximate HPD credible regions in mind, we propose a novel Bayesian approach to quantifying uncertainty in the peak location which we will refer to as the \textit{`Bayesian location'}.
\par
In essence the Bayesian location is computed as follows. A feature of interest is removed from the recovered convergence map and then re-inserted at a new position to create a surrogate convergence map. If this surrogate map belongs to the approximate credible set then the position at which the feature was inserted cannot be rejected; if it does not, the position can be rejected. This process is repeated over a sample of the possible insertion positions, eventually providing an isocontour of acceptable positions. This isocontour, at a well-defined confidence level, is the Bayesian location.
\subsection{Bayesian Location}
Suppose we recover a (MAP) convergence field $\kappa^{\text{map}}$ \textit{via} optimization of the objective function defined in equation (\ref{eq:optimization}) which contains a feature of interest (\textit{e.g.} a large peak). Let us define the sub-set of pixels which contain this feature to be $\Omega_{\mathcal{Z}} \subset \Omega$, where $\Omega$ is the entire image domain.
\par
To begin with, extract the feature $\mathcal{Z} = \kappa^{\text{map}} \mathbb{I}_{\Omega_{\mathcal{Z}}}$, \textit{i.e.} a convergence field which contains only the feature of interest. Now we adopt the process of \textit{segmentation inpainting} \citep{[11],[12],[M1]} to create a convergence field realization without the feature of interest $\mathcal{Z}$ but with background signal replaced.
\par
Mathematically segmentation inpainting is represented by the iterations
\begin{equation} \label{eq:inpainting}
\kappa^{(t+1),\text{sgt}} = \kappa^{\text{map}} \mathbb{I}_{\Omega \setminus \Omega_{\mathcal{Z}} } + \Lambda \text{soft}_{\lambda}(\Lambda^{\dag} \kappa^{(t),\text{sgt}})\mathbb{I}_{\Omega_{\mathcal{Z}}},
\end{equation}
where $\Lambda$ is an appropriately selected dictionary \nobreak--\nobreak\hskip4pt for this purpose we simply use the Daubechies 8 (DB8) wavelet dictionary with 8 levels \nobreak--\nobreak\hskip4pt and $\lambda$ is the soft-thresholding parameter.
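A minimal sketch of the iterations of equation (\ref{eq:inpainting}) follows, with two illustrative substitutions of our own: the orthonormal FFT stands in for the DB8 wavelet dictionary $\Lambda$, and the soft-threshold $\lambda$ is decreased linearly over the iterations (a common schedule for thresholding-based inpainting):

```python
import numpy as np

def soft(x, t):
    """Complex soft-thresholding (proximal operator of t * ||.||_1)."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-30)) * x, 0.0)

def segmentation_inpaint(kappa_map, mask, lam_max=1.0, lam_min=0.0, n_iter=200):
    """Iteratively fill the feature region (mask == True) with sparsity-
    regularized background signal, keeping all other pixels fixed.

    Stand-in dictionary: the orthonormal 2D FFT instead of DB8 wavelets.
    """
    kappa = np.where(mask, 0.0, kappa_map)         # start with an empty hole
    for t in range(n_iter):
        lam = lam_max + (lam_min - lam_max) * t / max(n_iter - 1, 1)
        coeffs = np.fft.fft2(kappa, norm="ortho")  # analysis: Lambda^dag kappa
        filled = np.fft.ifft2(soft(coeffs, lam), norm="ortho")  # synthesis
        kappa = np.where(mask, filled, kappa_map)  # keep known pixels fixed
    return kappa
```

For signals that are sparse in the chosen dictionary, the hole is progressively filled with a background-consistent sparse reconstruction while the unmasked pixels remain untouched.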
\par
Following the wavelet inpainting, in order to separate the true feature from the background residual convergence, the signal which was inpainted into the region $\Omega_{\mathcal{Z}}$ is subtracted from the extracted feature $\mathcal{Z}$ \nobreak--\nobreak\hskip4pt effectively accounting for the residual background signal which would likely have been present even in the absence of the feature $\mathcal{Z}$. At this juncture the surrogate convergence $\kappa^{\text{sgt}}$ is hypothesis tested for physicality \citep{[12],[M1]}.
\par
If the feature is not found to be physical, the algorithm terminates at this point since, fundamentally, it is illogical to evaluate the uncertainty in the position of an object whose existence cannot be statistically determined.
\par
Now that we have successfully isolated $\mathcal{Z}$ we can insert it back into the surrogate field $\kappa^{\text{sgt}}$ at a perturbed position. It is then sufficient to check if
\begin{equation}
f(\kappa^{\text{sgt}\prime}) + g(\kappa^{\text{sgt}\prime}) \leq \epsilon_{\alpha}^{\prime},
\end{equation}
where $\kappa^{\text{sgt}\prime}$ represents the surrogate with the feature $\mathcal{Z}$ inserted at a perturbed location.
\par
If the inequality holds, then at $100(1-\alpha)\%$ confidence we cannot rule out that the feature could be found at this location. If the inequality does not hold, then $\mathcal{Z}$ in its observed form could not have been found at the new location, at $100(1-\alpha)\%$ confidence. The question then becomes: which perturbed positions are acceptable and which are not?
\par
With the above in mind, we propose a typical inverse nested iterative scheme to determine the pixel-space isocontour for a given feature in the reconstructed convergence field. Schematically this iterative process is outlined in Figure \ref{fig:BayesianLocationSchema}. Essentially, inverse nesting starts with a ring of pixels one pixel from the MAP peak location, moving the ring one pixel outwards after each iteration.
\subsection{N-splitting Circular Bisection} \label{sec:N-splitting}
Inverse nested iterations are sufficiently fast for low-dimensional reconstructions $(256 \times 256)$, however as the dimensionality of the reconstructed domain grows it becomes increasingly beneficial to adopt more advanced algorithms to compute the Bayesian location in an efficient manner.
\par
We propose a novel iterative algorithm for computing the pixel-space position isocontour which we call \textit{N-splitting Circular Bisection} (N-splitting), the full details of which can be found in appendix \ref{sec:appendixa}. A brief outline of N-splitting is given below.
\par
Suppose we wish to compute positions on the Bayesian location isocontour at \textit{equiangular intervals} $\Delta \Theta$ (defined clockwise about the peak location) relative to the $y$-axis. Then we require $n = 2 \pi / \Delta \Theta$ sampling angles which are trivially given by,
\begin{equation}
\Theta_i = i \Delta \Theta,
\end{equation}
where $i \in \lbrace 0, \dots, n-1 \rbrace$ indexes the sampling directions $\Theta_i$.
\par
Once $\Theta_i$ is defined for a single direction, the distance $d_{\alpha}^{i}$ along direction $\Theta_i$ at which the objective function saturates the level-set threshold $\epsilon_{\alpha}^{\prime}$ is found by bisection. Mathematically, this is formally defined to be,
\begin{align}
d_{\alpha}^{i} &= \minT_{d} \Big \lbrace \; d \in \mathbb{R}_{+} \; | \; f(\kappa_d^{\text{sgt}}) + g(\kappa_d^{\text{sgt}}) > \epsilon_{\alpha}^{\prime} \; \Big \rbrace, \\
\Gamma_i &= \Big \lbrace \big ( d \sin(\Theta_i), \; d \cos(\Theta_i) \big ) \; | \; d \in \mathbb{R}_{+} \Big \rbrace,
\end{align}
where $\Gamma_i$ is the set of positions on the ray of infinite extent along direction $\Theta_i$ sourced at the peak location, and $\kappa_d^{\text{sgt}}$ is the surrogate convergence map constructed by inserting the feature of interest $\mathcal{Z}$ at the perturbed location $\big ( d \sin(\Theta_i), \; d \cos(\Theta_i) \big )$.
\par
Once a representative set of positions on the location isocontour has been computed, the \textit{convex hull} is taken -- the convex hull is simply the smallest convex set which contains all samples of the location isocontour. The boundary of this closed convex set of acceptable pixels is taken as the Bayesian location.
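The per-direction bisection can be sketched as follows; here \texttt{objective} is a stand-in callable (our own naming) returning $f + g$ for the surrogate map with the feature displaced by $(dx, dy)$, and is assumed to grow monotonically along each ray:

```python
import numpy as np

def bisect_ray(objective, theta, eps_prime, d_max=64.0, tol=0.25):
    """Bisection for the saturation distance along direction theta
    (measured clockwise from the y-axis, as in the text)."""
    direction = np.array([np.sin(theta), np.cos(theta)])
    if objective(*(d_max * direction)) <= eps_prime:
        return d_max                      # never saturates within d_max
    lo, hi = 0.0, d_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if objective(*(mid * direction)) <= eps_prime:
            lo = mid                      # still inside the credible set
        else:
            hi = mid                      # objective exceeded eps'_alpha
    return hi

def isocontour_samples(objective, eps_prime, n_dirs=16, d_max=64.0):
    """Sample the Bayesian-location isocontour at n equiangular directions."""
    points = []
    for i in range(n_dirs):
        theta = 2.0 * np.pi * i / n_dirs  # Theta_i = i * dTheta
        d = bisect_ray(objective, theta, eps_prime, d_max)
        points.append((d * np.sin(theta), d * np.cos(theta)))
    return points
```

The returned sample points can then be passed to a standard convex-hull routine to produce the final Bayesian location.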
\section{Illustrative example of the Bayesian Location} \label{sec:BayesLocationApplication}
In this section we perform sparse Bayesian reconstructions of a large cluster extracted from the Bolshoi N-body simulation \citep{[20],[6]}, upon which we construct and assess the performance of Bayesian locations for each of the four largest sub-halos. In line with previous work of \citet{[M1]} and in the related article of \citet{[6]} we refer to this extracted cluster as Bolshoi-3.
\par
We grid the Bolshoi convergence field onto a discretized complex map of dimension $(1024 \times 1024)$ so as to demonstrate that the sparse Bayesian approach can construct Bayesian estimators efficiently even when the dimension of the problem is of $\mathcal{O}(10^6)$ or larger \nobreak--\nobreak\hskip4pt dimensions at which MCMC techniques can become highly computationally challenging.
\begin{figure*}
\centering
\includegraphics[width=\textwidth, trim={0 0 0 0.75cm},clip]{Bayesian_location/Peak_1_Nsplit_99Confidence.png}
\put(-520,50){\large \rotatebox[origin=c]{90}{Peak 1}}
\put(-480,115){\large True Peak}
\put(-380,115){\large SNR: 20}
\put(-280,115){\large SNR: 17}
\put(-180,115){\large SNR: 15}
\put(-80,115){\large SNR: 12} \\
\includegraphics[width=\textwidth, trim={0 0 0 0.75cm},clip]{Bayesian_location/Peak_2_Nsplit_99Confidence.png}
\put(-520,50){\large \rotatebox[origin=c]{90}{Peak 2}} \\
\includegraphics[width=\textwidth, trim={0 0 0 0.75cm},clip]{Bayesian_location/Peak_3_Nsplit_99Confidence.png}
\put(-520,50){\large \rotatebox[origin=c]{90}{Peak 3}} \\
\includegraphics[width=\textwidth, trim={0 0 0 0.75cm},clip]{Bayesian_location/Peak_4_Nsplit_99Confidence.png}
\put(-520,50){\large \rotatebox[origin=c]{90}{Peak 4}} \\
\caption{\textbf{Left to right:} Sparse Bayesian reconstructions of Bolshoi-3 peaks 1 to 4 (\textit{top to bottom} respectively) followed by Bayesian locations (see section \ref{sec:BayesianLocation}) at $99\%$ confidence for input SNR of 20.0 to 12.0 dB\nobreak--\nobreak\hskip4pt which are overlaid on the sparse Bayesian MAP recovered convergence maps $\kappa^{\text{map}}$ at the corresponding SNR level. As the input artificial shear becomes more contaminated with noise, the relative information content decreases, and so the inferred uncertainty of the reconstructed peak positions increases, as one would logically expect. Note the asymmetry in the $99\%$ isocontour, which motivates the N-splitting searching algorithm (see section \ref{sec:N-splitting} and Appendix A) rather than a naive circular inference (\textit{e.g.} finding the maximal $x$ and $y$ displacements and assuming a circular isocontour). Finally, observe that the $99\%$ isocontours for Peaks 3 and 4 are proportionally more tightly constrained than those for Peaks 1 and 2. This is due to the local information density typically being higher in more signal dense regions \nobreak--\nobreak\hskip4pt perturbations to pixels in more information dense regions are more tightly constrained and can therefore move less distance before saturating the approximate level-set threshold $\epsilon_{\alpha}^{\prime}$. This effect has been observed in the context of \textit{local credible intervals} as presented in \citet{[12]} and introduced to the weak lensing setting in \citet{[M2]}. }
\label{fig:Bolshoi_3_peaks}
\end{figure*}
\subsection{Methodology} \label{sec:methodology}
First, we construct a complex discretized set of artificial shear measurements $\tilde{\gamma} \in \mathbb{C}^M$ by,
\begin{equation}
\tilde{\gamma} = \Phi \kappa,
\end{equation}
where $\kappa$ is the input Bolshoi-3 convergence map. We then contaminate these mock measurements with noise $n$, which for simplicity we select to be i.i.d. Gaussian, $n \sim \mathcal{N}(0,\sigma_n^2)$, with zero mean and variance $\sigma_n^2$. The noise level is chosen such that the SNR of the noisy artificial shear maps can be varied; the standard deviation is therefore set to be,
\begin{equation}
\sigma_n = \sqrt{\frac{\norm{\bm{\Phi} \kappa}_2^2}{M}} \times 10^{-\frac{\text{SNR}}{20}}.
\end{equation}
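This noise level is straightforward to compute. A minimal sketch (with the hypothetical helper name \texttt{noise\_sigma}, and with the number of measurement components playing the role of the normalizing dimension) is:

```python
import numpy as np

def noise_sigma(phi_kappa, snr_db):
    """Noise standard deviation giving the requested SNR (in dB) relative
    to the clean forward-modelled shear phi_kappa (= Phi kappa)."""
    rms = np.sqrt(np.sum(np.abs(phi_kappa) ** 2) / phi_kappa.size)
    return rms * 10.0 ** (-snr_db / 20.0)

# For a clean signal of unit RMS, sigma_n is simply 10^(-SNR/20).
gamma_clean = np.ones(100)
print(noise_sigma(gamma_clean, 20.0))  # -> 0.1
```

Each 20 dB of SNR thus corresponds to one order of magnitude reduction in the noise standard deviation relative to the clean signal RMS.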
\par
The MAP convergence field $\kappa^{\text{map}}$ is recovered \textit{via} the sparse Bayesian mass-mapping algorithm using DB10 wavelets (10 levels), and the Bayesian location for the set of 4 peaks is constructed. For a detailed discussion of how noise levels in dB translate to physical quantities such as galaxy number density see \citet{[M1]}.
\subsection{Analysis and computational efficiency} \label{sec:analysis}
To demonstrate this uncertainty quantification technique we construct $99\%$ confidence Bayesian locations for the 4 largest sub-halos in the Bolshoi-3 cluster, for input SNRs in decibels (dB) $\in \lbrace 12, 15, 17, 20 \rbrace$.
\par
In Figures \ref{fig:Combined_peaks} and \ref{fig:Bolshoi_3_peaks} it is apparent that, as expected, the positional uncertainty isocontour at $99\%$ confidence is smaller for less noisy data, growing in proportion to the noise. In our analysis 32 N-splitting directions (pointings) were used, though as can be seen in Figures \ref{fig:Combined_peaks} and \ref{fig:Bolshoi_3_peaks} as few as 16 directions would easily have been sufficient given the smoothness of the isocontour.
\par
Computationally, reconstruction of the MAP convergence field and computation of the Bayesian location for the complete set of peaks took $\sim 5$ minutes on a standard 2016 MacBook Air. Notably, this is a conservative \citep{[10]} and tight \citep{[M2]} approximate Bayesian inference in an over $10^6$-dimensional space, performed on a personal laptop in minutes. Further to this, the sparse Bayesian algorithmic structure is easily parallelizable, and so this computational efficiency can be improved further.
\section{Aggregate uncertainty in Peak Counts} \label{sec:peak_uncertainties}
Building on the notion of an approximate HPD credible region presented in section \ref{sec:HPD_region} we now ask the question: given a reconstructed convergence field $\kappa^{\text{map}}$, and at a given SNR threshold $K$, what are the maximum and minimum peak counts at $100(1-\alpha)\%$ confidence?
\par
In this article we choose to define a \textit{peak} in $\kappa^{\text{map}}$ as a pixel $\kappa^{\text{map}}(\bm{x})$ which is larger than the 8 pixels which surround it \citep{[53]}. A point of the peak statistic is computed as follows: a threshold $K$ is taken on $\kappa^{\text{map}}$, and the \textit{peak count} (the number of peaks with intensity larger than $K$) is computed on the sub-set of pixels above the threshold.
\par
Formally we define the \textit{excursion set} $\Omega^{+} \subset \Omega$ as,
\begin{equation}
\Omega^{+} = \Big \lbrace \: \bm{x} \; | \; \kappa^{\text{map}}(\bm{x}) > K \: \Big \rbrace,
\end{equation}
where $\Omega$ is the complete set of recovered pixels. Define a further sub-set $\Pi \subset \Omega^{+}$ as the set of peaks in $\Omega^{+}$:
\begin{equation} \label{eq:excursion_peak_definition}
\Pi (\kappa^{\text{map}}) = \Big \lbrace \: \bm{x} \in \Omega^{+} \; | \; \kappa^{\text{map}}(\bm{x}) > \kappa^{\text{map}}(\bm{x}^{\prime}), \; \forall \; \bm{x}^{\prime} \in \mathcal{N}(\bm{x}) \: \Big \rbrace,
\end{equation}
where $\mathcal{N}(\bm{x})$ represents the set of immediately surrounding pixels.
\par
Note that this definition is not valid for pixels on the boundary of the field, and so these pixels are necessarily not considered. This not only excludes the outer edge of $\kappa^{\text{map}}$ but also any pixels surrounding masked regions (of which there are typically many).
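A minimal sketch of this peak definition, with boundary pixels excluded as described (the exclusion of pixels around masked regions is omitted for brevity, and function names are hypothetical), might read:

```python
import numpy as np

def peak_set(kappa, K=None):
    """Pixels strictly larger than all 8 surrounding pixels; boundary
    pixels are excluded. If a threshold K is supplied, only peaks above
    K (the excursion-set peaks) are kept."""
    peaks = []
    rows, cols = kappa.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            centre = kappa[i, j]
            if K is not None and centre <= K:
                continue
            if all(centre > kappa[i + di, j + dj]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0)):
                peaks.append((i, j))
    return peaks

# A single hypothetical over-density at the centre of a 5x5 field.
kappa = np.zeros((5, 5))
kappa[2, 2] = 1.0
print(peak_set(kappa))         # -> [(2, 2)]
print(peak_set(kappa, K=2.0))  # -> []
```

Note the strict inequality: a plateau of equal-valued pixels contains no peak under this definition.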
\par
Conceptually, we would then like to know at a given threshold $K$ what is the maximum and minimum number of peaks which could exist such that the \textit{surrogate solution} $\kappa^{\text{sgt}}$ still belongs to the approximate HPD credible set $C_{\alpha}^{\prime}$.
\par
Let $\eta_{\alpha}^{\text{max}}$ be the \textit{upper bound} on the number of peaks, and $\eta_{\alpha}^{\text{min}}$ be the \textit{lower bound} on the number of peaks, for a given threshold $K$, at $100(1-\alpha)\%$ confidence. Further let $\eta$ be the number of peaks calculated from the MAP solution $\kappa^{\text{map}}$ at threshold $K$. Formally these quantities are given by,
\begin{align}
\eta \; &\equiv \; |\Pi(\kappa^{\text{map}})|, \label{eq:peak_mean} \\
\eta_{\alpha}^{\text{max}} &\equiv \maxT_{\kappa^{\text{sgt}}} \Big \lbrace \: |\Pi(\kappa^{\text{sgt}})| \in \mathbb{R}_+ \: | \: f(\kappa^{\text{sgt}}) + g(\kappa^{\text{sgt}}) \leq \epsilon_{\alpha}^{\prime} \: \Big \rbrace, \label{eq:peak_upper} \\
\eta_{\alpha}^{\text{min}} &\equiv \minT_{\kappa^{\text{sgt}}} \Big \lbrace \: |\Pi(\kappa^{\text{sgt}})| \in \mathbb{R}_+ \: | \: f(\kappa^{\text{sgt}}) + g(\kappa^{\text{sgt}}) \leq \epsilon_{\alpha}^{\prime} \: \Big \rbrace, \label{eq:peak_lower}
\end{align}
where $|\Pi(\kappa)|$ is the \textit{cardinality of the peak set} of a convergence field $\kappa$.
\par
It is not at all obvious how to locate the extrema of the optimization problems given in equations (\ref{eq:peak_upper}) and (\ref{eq:peak_lower}), as they are inherently non-linear, non-convex problems. We can, however, propose a logical iterative approach to calculate well-motivated approximations to the upper and lower peak count limits $\eta_{\alpha}^{\text{max}}$ and $\eta_{\alpha}^{\text{min}}$.
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance = 2cm, auto]
\node [parameter_fixed, text width=14em] (0) {Initial surrogate: $\kappa^{\text{sgt}} = \kappa^{\text{map}}$};
\node [block, below of=0, node distance=1.6cm, text width=18em] (1) {Calculate excursion peak set: $\Pi(\kappa^{\text{sgt}})$};
\node [block, below of=1, node distance=1.6cm, text width=14em] (2) {Find lowest peak: ($\bm{x}$)};
\node [parameter_fixed, below of=2, node distance=1.6cm] (3) {Define aperture around peak: $\Omega_{\mathcal{A}}$};
\node [block, below of=3, node distance=1.6cm, text width=14em] (4) {Remove peak from excursion peak set: $\kappa^{\text{sgt}} = \mathcal{S}_{K, \Omega_{\mathcal{A}}} \big ( \kappa^{\text{sgt}} \big )$};
\node [decision, below of=4, node distance=1.6cm] (5) {In credible set?: $\kappa^{\text{sgt}} \in C_{\alpha}^{\prime}$ ? };
\node [iteration, right of=3, node distance=4.0cm, text width=8em] (6) {Repeat steps.};
\node [parameter_fixed, below of=5, node distance=1.6cm, text width=14em] (7) {Min number of peaks: $\eta_{\alpha}^{\text{min}} = |\Pi(\kappa^{\text{sgt}})|$};
\path [line] (0) -- (1);
\path [line] (1) -- (2);
\path [line] (2) -- (3);
\path [line] (3) -- (4);
\path [line] (4) -- (5);
\path [line,dashed] (5) -| node[near start] {Yes}(6);
\path [line,dashed] (6) |- (1);
\path [line,dashed] (5) -- node{No}(7);
\end{tikzpicture}
\caption{Schematic representation of the iteration steps in finding the Bayesian lower bound $\eta_{\alpha}^{\text{min}}$ at confidence $100(1-\alpha)\%$ of the peak count $|\Pi|$ for a given MAP reconstruction $\kappa^{\text{map}}$.}
\label{fig:BayesianLowerBound}
\end{center}
\end{figure}
\subsection{Approximate Bayesian Lower Bound on Peak Counts} \label{sec:lower_peak_bound}
It is perhaps conceptually more straightforward to minimize the cardinality of the peak set and so we will first describe this process.
\par
To calculate an approximate bound on $\eta_{\alpha}^{\text{min}}$ we begin by creating the initial peak set $\Pi$ from the recovered convergence $\kappa^{\text{map}}$. The peak in $\Pi(\kappa^{\text{map}})$ with lowest magnitude is located. The shortest distance $r_{\text{min}}$ from the pixel location $\bm{x}$ to a pixel $\bm{x}^{\prime}$ such that $\kappa^{\text{map}}(\bm{x}^{\prime}) = y$ (where $y$ is some magnitude at which the peak's influence is assumed to be sufficiently small) is computed in Euclidean space as $r_{\text{min}} = | \bm{x} - \bm{x}^{\prime}|$ \nobreak--\nobreak\hskip4pt within this paper we simply set $y=0$.
\par
Let us define the region of interest $\Omega_{\mathcal{A}} \subset \Omega$ to be a circular aperture centered on the peak pixel location $\bm{x}$ with radius $r_{\text{min}}$. Conceptually, this acts as a proxy for the area affected by a large over-density sourced at the location of the peak.
\par
Now, define a \textit{down-scaling kernel} $\mathcal{S}_{K, \Omega_{\mathcal{A}}} \in \mathbb{C}^{N \times N}$ which has the action of scaling the magnitude of the sub-set $\kappa^{\text{map}} \mathbb{I}_{\Omega_{\mathcal{A}}}$ of pixels belonging to the region of interest $\Omega_{\mathcal{A}}$ onto $[0,K]$. Application of the down-scaling operator returns a surrogate solution $\kappa^{\text{sgt}}$. Mathematically this is,
\begin{equation} \label{eq:scaling}
\kappa^{\text{sgt}} = \mathcal{S}_{K, \Omega_{\mathcal{A}}} \big ( \kappa^{\text{map}} \big ) = \kappa^{\text{map}} \mathbb{I}_{\Omega \setminus \Omega_{\mathcal{A}}} + \frac{K}{\max{ \big ( \kappa^{\text{map}} \mathbb{I}_{\Omega_{\mathcal{A}}} \big ) }} (\kappa^{\text{map}} \mathbb{I}_{\Omega_{\mathcal{A}}} ).
\end{equation}
\par
Application of $\mathcal{S}_{K, \Omega_{\mathcal{A}}}$ to an isolated region $\Omega_{\mathcal{A}}$ conserves the local topology of the field \nobreak--\nobreak\hskip4pt which is precisely what we want as it means we are making no assumptions about the halo profile around a peak. Removing a peak by application of $\mathcal{S}_{K, \Omega_{\mathcal{A}}}$ creates a surrogate solution $\kappa^{\text{sgt}}$ which is likely to minimize the increase in the objective function.
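The aperture construction and the down-scaling kernel of equation (\ref{eq:scaling}) can be sketched as follows (function names are hypothetical, and it is assumed that at least one pixel sits at or below the reference level $y$):

```python
import numpy as np

def downscale_peak(kappa_map, peak, K, y=0.0):
    """Down-scaling kernel S_{K, Omega_A}: rescale the circular aperture
    Omega_A around `peak` onto [0, K], leaving all other pixels untouched."""
    kappa = kappa_map.copy()
    xs, ys = np.indices(kappa.shape)
    dist = np.hypot(xs - peak[0], ys - peak[1])
    r_min = dist[kappa <= y].min()   # distance to nearest "background" pixel
    aperture = dist <= r_min         # Omega_A: circular aperture of radius r_min
    kappa[aperture] *= K / kappa[aperture].max()
    return kappa

# Hypothetical peak of height 4 with shoulders of height 2.
kappa = np.zeros((7, 7))
kappa[3, 3] = 4.0
kappa[3, 2] = kappa[3, 4] = kappa[2, 3] = kappa[4, 3] = 2.0
out = downscale_peak(kappa, (3, 3), K=1.0)
print(out[3, 3], out[3, 2])  # -> 1.0 0.5
```

The multiplicative rescaling maps the aperture maximum onto exactly $K$ while shrinking every other aperture pixel by the same factor, which is what preserves the local topology of the field.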
\par
As such, $\mathcal{S}_{K, \Omega_{\mathcal{A}}}$ is a good strategy for excluding peaks from $\Pi(\kappa^{\text{map}})$, as it will likely maximize the number of peaks which can be removed before the level-set threshold $\epsilon_{\alpha}^{\prime}$ is saturated. Thus, it will likely be near decision-theoretically optimal at minimizing equation (\ref{eq:peak_lower}).
\par
A schematic of the iterative process proposed to find the Bayesian lower bound on the peak statistic can be seen in Figure \ref{fig:BayesianLowerBound}. In words, the process is as follows. Within each iteration, the lowest intensity peak within the peak set is removed, forming a new surrogate convergence field $\kappa^{\text{sgt}}$, and the objective function is recalculated. If the objective function remains below the approximate level-set threshold $\epsilon_{\alpha}^{\prime}$, the lowest peak within $\kappa^{\text{sgt}}$ is then removed, and so on until the objective function rises above $\epsilon_{\alpha}^{\prime}$, at which point the iterations are terminated and the minimum number of peaks is recovered.
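The iterative process can be sketched as follows. The objective, peak-finding and down-scaling routines are passed in as hypothetical callables, and the count reported is that of the last surrogate still inside the credible set:

```python
def lower_peak_bound(kappa_map, K, peak_set, downscale, objective, eps_alpha):
    """Iteratively remove the lowest excursion-set peak until the surrogate
    would leave the approximate HPD credible set. Returns eta_min."""
    surrogate = kappa_map
    while True:
        peaks = peak_set(surrogate, K)
        if not peaks:
            return 0                      # clipping: no peaks left to remove
        lowest = min(peaks, key=lambda p: surrogate[p])
        candidate = downscale(surrogate, lowest, K)
        if objective(candidate) > eps_alpha:
            return len(peaks)             # last surrogate inside C'_alpha
        surrogate = candidate

# Toy 1-D stand-ins: three "peaks", objective = total mass removed so far.
kappa0 = (5.0, 3.0, 2.0)
ps = lambda s, K: [i for i, v in enumerate(s) if v > K]
ds = lambda s, i, K: tuple(K if j == i else v for j, v in enumerate(s))
obj = lambda s: sum(kappa0) - sum(s)
print(lower_peak_bound(kappa0, 1.0, ps, ds, obj, eps_alpha=3.5))  # -> 1
```

In the toy example the two lowest peaks can be removed before the threshold is saturated, so the lower bound on the peak count is 1.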
\subsection{Approximate Bayesian Upper Bound on Peak Counts} \label{sec:upper_peak_bound}
Now we invert our perspective in order to approximate the maximum number of peaks which could be observed at a given threshold $K$ at $100(1-\alpha)\%$ confidence. Here we will be considering the non-linear maximization problem constructed in equation (\ref{eq:peak_upper}).
\par
First, we introduce the notion of the \textit{inclusion set} $\Omega^{-}$, conjugate to $\Omega^{+}$ such that $\Omega^{-} \cup \Omega^{+} \equiv \Omega$ and $\Omega^{-} \cap \Omega^{+} = \varnothing$,
\begin{equation}
\Omega^{-} = \Big \lbrace \: \bm{x} \; | \; \kappa^{\text{map}}(\bm{x}) \leq K \: \Big \rbrace.
\end{equation}
With this in mind, we can now cast the maximization problem into a minimization problem analogous to that used before.
\par
We now wish to minimize the number of peaks that belong to the \textit{inclusion set} $\Omega^{-}$, which is by definition equivalent to maximizing the number of peaks which belong to the \textit{excursion set} $\Omega^{+}$.
\par
Analogously to section \ref{sec:lower_peak_bound}, to construct our approximate bound we calculate the further sub-set $\Pi^{-} \subset \Omega^{-}$, which is defined similarly to the relation in equation (\ref{eq:excursion_peak_definition}) such that,
\begin{equation} \label{eq:inclusion_peak_definition}
\Pi^{-}(\kappa^{\text{map}}) = \Big \lbrace \: \bm{x} \in \Omega^{-} \; | \; \kappa^{\text{map}}(\bm{x}) > \kappa^{\text{map}}(\bm{x}^{\prime}), \; \forall \; \bm{x}^{\prime} \in \mathcal{N}(\bm{x}) \: \Big \rbrace,
\end{equation}
\textit{i.e.} the sub-set of peaks below the threshold $K$.
\par
In contrast to section \ref{sec:lower_peak_bound}, we now locate the largest peak in $\Pi^{-}$. Suppose that this peak is located at $\bm{x} \in \Pi^{-}$; we now construct a circular aperture about $\bm{x}$ with radius $r_{\text{min}}$, as defined before. Let this circular aperture set of pixels be $\Omega_{\mathcal{A}} \subset \Omega$.
\par
Now we define an \textit{up-scaling kernel} $\mathcal{S}_{K, \Omega_{\mathcal{A}}}^{\dagger} \in \mathbb{C}^{N \times N}$ which has action,
\begin{equation}
\mathcal{S}_{K, \Omega_{\mathcal{A}}}^{\dagger} \big ( \kappa^{\text{map}} \big ) = \kappa^{\text{map}} \mathbb{I}_{\Omega \setminus \Omega_{\mathcal{A}}} + \frac{K + \Delta}{\max{ \big ( \kappa^{\text{map}} \mathbb{I}_{\Omega_{\mathcal{A}}} \big ) }} (\kappa^{\text{map}} \mathbb{I}_{\Omega_{\mathcal{A}}} )
\end{equation}
which differs from the down-scaling operator only in the numerator of the second term. Here $\Delta$ is an infinitesimal quantity added such that the re-scaled peak within $\Omega_{\mathcal{A}}$ falls infinitesimally above the threshold $K$ and is therefore counted as a peak. In practice we set $\Delta \sim 10^{-5}$ and find that adjusting this quantity by two orders of magnitude has a negligible effect on the recovered uncertainties.
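The up-scaling kernel admits an analogous sketch (the function name is hypothetical, and $r_{\text{min}}$ is assumed precomputed as before):

```python
import numpy as np

def upscale_peak(kappa_map, peak, K, r_min, delta=1e-5):
    """Up-scaling kernel S^dagger_{K, Omega_A}: rescale the circular
    aperture around `peak` so its maximum lands at K + delta, i.e. just
    above the threshold, so that the peak joins the excursion set."""
    kappa = kappa_map.copy()
    xs, ys = np.indices(kappa.shape)
    aperture = np.hypot(xs - peak[0], ys - peak[1]) <= r_min
    kappa[aperture] *= (K + delta) / kappa[aperture].max()
    return kappa

# A hypothetical sub-threshold peak of height 0.5 pulled just above K = 1.
kappa = np.zeros((5, 5))
kappa[2, 2], kappa[2, 1] = 0.5, 0.25
out = upscale_peak(kappa, (2, 2), K=1.0, r_min=1.5)
print(out[2, 2] > 1.0)  # -> True
```

As with the down-scaling kernel, the multiplicative rescaling preserves the local topology within the aperture while lifting its maximum just above $K$.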
\par
With these conceptual adjustments we then follow the same iterations discussed in section \ref{sec:lower_peak_bound} to find the approximate Bayesian upper bound on the peak count $\eta_{\alpha}^{\text{max}}$. Schematically this is given in Figure \ref{fig:BayesianUpperBound}.
\par
Finally we return the tuple $\big ( \eta_{\alpha}^{\text{min}}, \eta , \eta_{\alpha}^{\text{max}} \big )$ which is in the form $\big ($minimum, most likely, maximum$\big )$ at $100(1-\alpha)\%$ confidence.
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance = 2cm, auto]
\node [parameter_fixed, text width=14em] (0) {Initial surrogate: $\kappa^{\text{sgt}} = \kappa^{\text{map}}$};
\node [block, below of=0, node distance=1.6cm, text width=18em] (1) {Calculate inclusion peak set: $\Pi^{-}(\kappa^{\text{sgt}})$};
\node [block, below of=1, node distance=1.6cm, text width=14em] (2) {Find highest peak: ($\bm{x}$)};
\node [parameter_fixed, below of=2, node distance=1.6cm] (3) {Define aperture around peak: $\Omega_{\mathcal{A}}$};
\node [block, below of=3, node distance=1.6cm, text width=14em] (4) {Add peak to excursion peak set: $\kappa^{\text{sgt}} = \mathcal{S}_{K, \Omega_{\mathcal{A}}}^{\dagger} \big ( \kappa^{\text{sgt}} \big )$};
\node [decision, below of=4, node distance=1.6cm] (5) {In credible set?: $\kappa^{\text{sgt}} \in C_{\alpha}^{\prime}$ ? };
\node [iteration, right of=3, node distance=4.0cm, text width=8em] (6) {Repeat steps.};
\node [block, below of=5, node distance=1.6cm, text width=18em] (7) {Calculate excursion peak set: $\Pi(\kappa^{\text{sgt}})$};
\node [parameter_fixed, below of=7, node distance=1.6cm, text width=14em] (8) {Max number of peaks: $\eta_{\alpha}^{\text{max}} = |\Pi(\kappa^{\text{sgt}})|$};
\path [line] (0) -- (1);
\path [line] (1) -- (2);
\path [line] (2) -- (3);
\path [line] (3) -- (4);
\path [line] (4) -- (5);
\path [line,dashed] (5) -| node[near start] {Yes}(6);
\path [line,dashed] (6) |- (1);
\path [line,dashed] (5) -- node{No}(7);
\path [line] (7) -- (8);
\end{tikzpicture}
\caption{Schematic representation of the iteration steps in finding the Bayesian upper bound $\eta_{\alpha}^{\text{max}}$ at confidence $100(1-\alpha)\%$ of the peak count $|\Pi|$ for a given MAP reconstruction $\kappa^{\text{map}}$.}
\label{fig:BayesianUpperBound}
\end{center}
\end{figure}
\begin{figure}
\centering
\large Ground Truth Buzzard $2048 \times 2048$ Convergence $\kappa$\par
\includegraphics[width=0.5\textwidth]{Peak_statistics/Buzzard_2048_convergence_map.png}
\caption{Input $2048 \times 2048$ convergence map extracted from the Buzzard N-body simulation.}
\label{fig:Buzzard_input_2048}
\end{figure}
\begin{figure*}
\centering
\large Bayesian Uncertainty in $2048 \times 2048$ Buzzard Peak statistic: SNR = 30 dB\par
\includegraphics[width=\textwidth, trim={0 0 0 1.2cm},clip]{Peak_statistics/SNR30_Peak_uncertainties.png}
\caption{Cumulative peak statistic for a $2048 \times 2048$ planar convergence map extracted from the Buzzard V-1.6 simulation (see section \ref{sec:buzzard_data}) contaminated with i.i.d. Gaussian noise such that the discretized simulated shear measurements (see section \ref{sec:methodology}) are of SNR 30 dB. The \textit{purple} outer contours are the computed upper and lower bounds at $99\%$ confidence, with the inner \textit{red} contours representing the $68\%\ (\sim 1 \sigma)$ bounds, included to aid comparison to similar literature which typically quote $1 \sigma$ errors. Note that the information content drops for higher $\sigma$ thresholds as fewer peaks are present, leading to larger relative uncertainty as fewer samples are recovered. Further note that this example is computed in a highly idealized low-noise setting.}
\label{fig:SNR30_peak_application}
\end{figure*}
\subsection{Limitations of Re-scaling}
Suppose the SNR threshold $K$ is large enough that, during the iterations in the schematic of Figure \ref{fig:BayesianLowerBound}, the cardinality of the excursion peak set $| \Pi(\kappa^{\text{sgt}}) | \rightarrow 0$. In this situation, even though the approximate level-set threshold $\epsilon_{\alpha}^{\prime}$ is not saturated, the algorithm is forced to stop as there are simply no more peaks to exclude (push down). At this point the strategy for removing peaks becomes locally ill-defined. Effectively, this is a clipping artifact. To avoid this effect entirely, if $| \Pi(\kappa^{\text{sgt}}) | = 0$ at any point within the iterations at a given threshold, the lower bound $\eta_{\alpha}^{\text{min}}$ at threshold $K$ is set to $0$, \textit{i.e.} we are infinitely uncertain by construction.
\par
Analogously, consider the case when $K$ is small enough that, during the iterations in the schematic of Figure \ref{fig:BayesianUpperBound}, the cardinality of the inclusion peak set $| \Pi^{-}(\kappa^{\text{sgt}}) | \rightarrow 0$. In this situation there are simply no more peaks to include (pull up). Again we remove this clipping effect by setting $\eta_{\alpha}^{\text{max}}$ at threshold $K$ to $| \Pi(\kappa^{\text{sgt}}) |$.
\par
Typically these clipping effects only occur for very small ($K \leq 2$) or very large ($K \geq 8$) thresholds, and so a wealth of information can be extracted from the intervening scales. Low thresholds clip the upper limit $\eta_{\alpha}^{\text{max}}$ as the cardinality of the peak set drops to 0 quickly, but the objective function rises comparatively slowly, as this SNR range is statistically dominated by noise. High thresholds clip the lower limit $\eta_{\alpha}^{\text{min}}$ simply due to the inherently low count of peaks at high SNR thresholds.
\par
Further to this, the decision-theory approach adopted here for locating the maximal and minimal values of the cumulative peak statistic is based on several assumptions: removing lower peaks increases the objective function by less than removing larger peaks; the extent of a peak (dark matter over-density) is approximated by a circular aperture; and removal of a peak has little to no effect on locations in the image domain outside of this aperture. All three of these assumptions are very reasonable.
\par
Although further computational optimizations are not an immediate concern, since our approach is already highly computationally efficient, we acknowledge that this iterative approach for removing peaks can easily be formulated as a bisection-style problem, which is likely to drastically reduce the computation time further \nobreak--\nobreak\hskip4pt particularly for low thresholds, as it mitigates the number of trivial noise peak removal recalculations which are done in the brute force approach presented above. In future, should computational efficiency become of primary interest, this speed-up will be considered.
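A sketch of such a bisection formulation, assuming (as the brute-force iteration implicitly does) that the objective grows monotonically with the number of removed peaks, and abstracting the credible-set check into a hypothetical oracle:

```python
def min_peaks_by_bisection(n_peaks, still_in_credible_set):
    """Bisection over the number of removed peaks: find the largest n such
    that removing the n lowest peaks keeps the surrogate inside C'_alpha.
    `still_in_credible_set(n)` is a hypothetical monotone oracle.
    Requires O(log n) oracle calls instead of O(n) sequential removals."""
    lo, hi = 0, n_peaks          # removing 0 peaks is always feasible
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if still_in_credible_set(mid):
            lo = mid
        else:
            hi = mid - 1
    return n_peaks - lo          # eta_min: peaks that must remain

# Toy oracle: at most 7 of 10 peaks can be removed before leaving C'_alpha.
print(min_peaks_by_bisection(10, lambda n: n <= 7))  # -> 3
```

Each oracle call still requires one evaluation of the objective function, but the number of such evaluations falls logarithmically with the peak count, which is precisely where the savings for noise-dominated low thresholds arise.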
\section{Illustrative example of Peak Uncertainties} \label{sec:PeakDemonstration}
In this section we apply the sparse Bayesian mass-mapping pipeline to high resolution $(2048 \times 2048)$ convergence maps extracted from the Buzzard V-1.6 N-body\footnote{Obtained due to our affiliation with the LSST-DESC collaboration.} simulation, upon which we construct the cumulative peak statistic (the number of peaks above a threshold, as a function of the threshold). Additionally, we recover the $99\%$ approximate Bayesian constraints on the peak count at each threshold, from which we infer the $68\%$ constraint so as to aid the reader in comparison to typical $1 \sigma$ error-bars quoted in related literature.
\subsection{Simulated Data-sets} \label{sec:buzzard_data}
The Buzzard V-1.6 N-body simulation convergence catalog \citep{DeRose2018,wechsler2018} is extracted by full ray-tracing with the origin located at the box corner \nobreak--\nobreak\hskip4pt leading to quarter-sky coverage. For wide fields the \textit{flat sky approximation} breaks down \citep{[3]}, and so this quarter-sky coverage was reduced to smaller planar patches.
\par
The complete quarter sky convergence catalog was projected into a coarse HEALPix\footnote{http://healpix.sourceforge.net/documentation.php} \citep{Gorski2005} pixelisation ($N_{\text{side}} = 4$). Inside each pixel, we further tessellated the largest square region, which we then projected into a $2048 \times 2048$ grid. These gridded convergence maps formed our ground truth, discretized convergence fields.
\par
As HEALPix samples in such a way as to provide equal area pixels, and the Buzzard simulation galaxy density is fairly uniform, each extracted square region contained $\sim 2 \times 10^7$ galaxies leading to $\sim 5$ galaxies per pixel.
\par
Due to a comparatively low density of samples, Poisson noise is prevalent, and thus extracted planar regions were passed through a multi-scale Poisson denoising algorithm. This consisted of a forward Anscombe transform (in order to Gaussianise the Poisson noise), several TV-norm (total-variation) denoising optimizations of differing scale, followed by an inverse Anscombe transform \citep[as in][]{[M2],[6]}. A more involved treatment could be applied, but this approach is sufficient to demonstrate our peak reconstructions.
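The Anscombe step above is simple to state. A sketch of the forward transform and its direct algebraic inverse (unbiased exact inverses exist but are not needed here):

```python
import numpy as np

def anscombe(x):
    """Forward Anscombe transform: approximately Gaussianises Poisson
    noise (the transformed variance is close to 1)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Direct algebraic inverse of the forward transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Round trip on hypothetical counts.
counts = np.array([0.0, 1.0, 5.0, 20.0])
print(np.allclose(inverse_anscombe(anscombe(counts)), counts))  # -> True
```

The Gaussian denoising steps (here, the TV-norm optimizations) are applied between the forward and inverse transforms, where the noise is approximately stationary and Gaussian.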
\begin{figure*}
\centering
\large Bayesian Uncertainty in $2048 \times 2048$ Buzzard Peak statistic: SNR = 25 dB\par
\includegraphics[width=\textwidth, trim={0 0 0 1.2cm},clip]{Peak_statistics/SNR25_Peak_uncertainties.png}
\caption{Cumulative peak statistic for a $2048 \times 2048$ planar convergence map extracted from the Buzzard V-1.6 simulation (see section \ref{sec:buzzard_data}) contaminated with i.i.d. Gaussian noise such that the discretized simulated shear measurements (see section \ref{sec:methodology}) are of SNR 25 dB. The \textit{red} inner contours represent the upper and lower bounds at $68\%\ (\sim 1 \sigma)$ confidence, with the outer \textit{purple} contours representing the computed bounds at $99\%$ confidence.}
\label{fig:SNR25_peak_application}
\end{figure*}
\subsection{Application to Buzzard V-1.6}
We select at random one of many planar patches produced for the following application. Following the methodology presented in section \ref{sec:methodology} we generate an artificial shear catalog which we then contaminate with independent and identically distributed (i.i.d.) Gaussian noise such that the SNR of mock shear measurements is 30 dB -- \textit{i.e.} an idealized noise-level simply for illustrative purposes.
\par
The MAP convergence estimator $\kappa^{\text{map}}$ is recovered from these noisy mock shear measurements \textit{via} our sparse Bayesian mass-mapping framework. From $\kappa^{\text{map}}$ we then calculate $\sigma^2 = \langle ( \kappa^{\text{map}} )^2 \rangle$, which we use as a measure of the noise-level in the reconstructed convergence field. Implementing the uncertainty quantification technique presented in section \ref{sec:peak_uncertainties}, we then construct the cumulative peak statistic for SNR thresholds $K \in [2\sigma, 8\sigma)$ at increments of $0.25 \sigma$, with upper and lower $99\%$ approximate Bayesian confidence limits.
\par
Figure \ref{fig:SNR30_peak_application} displays the recovered cumulative peak statistic on both a linear and a logarithmic scale. Typically, similar figures in the literature quote $1\sigma$ error-bars, and so for comparison's sake we convert the Bayesian $99\%$ confidence limits into the $68\%$ confidence limits, which are comparable to $1\sigma$ constraints (in Figure \ref{fig:SNR30_peak_application} we provide both confidence limits for clarity).
\par
Complete reconstruction of the peak statistics for 24 threshold bins, each with approximate Bayesian upper and lower bounds, for a $2048 \times 2048$ resolution convergence map, with DB11 (11-level) wavelets, took $\sim 2$ hours on a 2016 MacBook Air. This is a non-trivial Bayesian inference in over $4\times 10^6$ dimensions, and so $2$ hours is a very reasonable computation time \nobreak--\nobreak\hskip4pt though further speedups are possible, \textit{e.g.} we can trivially parallelize the calculations for each threshold leading to an increase in computational efficiency by a factor of the number of thresholds taken (in our case $24$).
\par
Additionally, the computational bottleneck is for lower thresholds as many low-intensity peaks must be removed, and thus an adaptive scheme could be implemented as discussed previously to avoid unnecessary sampling of these thresholds. With the aforementioned speed-ups, computation of the complete peak statistic is likely to take $\mathcal{O} (\text{minutes})$ on a personal laptop.
\par
Following this initial analysis we reduce the SNR to investigate the effect of increased noise on shear measurements on the cumulative peak statistics within our Bayesian framework. We first decrease the SNR to 25 dB, as seen in Figure \ref{fig:SNR25_peak_application}. We then reduce the input SNR further to 20 dB, the corresponding results being plotted in Figure \ref{fig:SNR20_peak_application}. This higher noise level of 20 dB is still a very optimistic (somewhat unrealistic) estimate of what upcoming surveys may reach; however, in this paper we are primarily focused on demonstrating the methodology and leave detailed realistic simulations and forecasting for future work. A detailed description of how these noise levels in dB translate into observational constraints (\textit{e.g.} galaxy number density \textit{etc.}) can be found in \citet{[M1]}.
\begin{figure*}
\centering
\large Bayesian Uncertainty in $2048 \times 2048$ Buzzard Peak statistic: SNR = 20 dB\par
\includegraphics[width=\textwidth, trim={0 0 0 1.2cm},clip]{Peak_statistics/SNR20_Peak_uncertainties.png}
\caption{Cumulative peak statistic for a $2048 \times 2048$ planar convergence map extracted from the Buzzard V-1.6 simulation (see section \ref{sec:buzzard_data}) contaminated with i.i.d. Gaussian noise such that the discretized simulated shear measurements (see section \ref{sec:methodology}) are of SNR 20 dB. The \textit{red} inner contours represent the upper and lower bounds at $68\%\ (\sim 1 \sigma)$ confidence, with the outer \textit{purple} contours representing the computed bounds at $99\%$ confidence. The shaded \textit{blue} region indicates threshold values for which at $99\%$ confidence the data cannot rule out the possibility that no peaks exist above this threshold (note that in these regions the lower bound is technically $0$ and there still exists a well-defined upper bound, which is given). Comparing this plot to Figure \ref{fig:SNR30_peak_application} we see that as the noise level increases the $68\%$ and $99\%$ confidence isocontours expand (as one would expect) and that in all cases the MAP peak statistics do not disagree at $99\%$ confidence.}
\label{fig:SNR20_peak_application}
\end{figure*}
\subsection{Analysis of Peak statistic}
Figures \ref{fig:SNR30_peak_application}, \ref{fig:SNR25_peak_application} and \ref{fig:SNR20_peak_application} clearly show that as the noise level in the discretized complex shear field increases, the isocontours of the cumulative peak statistic at $99\%$ and $68\%$ confidence loosen noticeably. This, unsurprisingly, indicates that cleaner measurements are likely to give tighter constraints on cosmological parameters -- though it should be noted that increasing the number of data-points (\textit{i.e.} pixels) would have a similar effect to reducing the noise level per pixel.
\par
For an SNR of 20 dB (Figure \ref{fig:SNR20_peak_application}) the first feature of note is the shaded blue region which indicates that for high thresholds the lower bound on the number of peaks at $99\%$ confidence is consistent (and clipped) at 0 \nobreak--\nobreak\hskip4pt this is saying that at $99\%$ confidence the true number of peaks at a threshold in the blue shaded region could be 0. Note that in the blue region the Bayesian upper bound is still entirely valid, it is only the Bayesian lower bound which within our novel approach is no longer well defined.
\par
Clearly, the upper and lower bounds on the peak count statistic are dependent on the threshold one is considering and on the total area over which observations are made -- for wide-field surveys, more data is collected, which is likely to reduce the variance of the statistic. In a general sense, we summarize below the mean (over all considered thresholds $K$) order-of-magnitude percentage spread on the peak statistic for the considered input SNR levels.
\par
At input SNR of 20 dB, for thresholds $\in [2 \sigma, 6 \sigma)$ on a single $2048 \times 2048$ planar patch the upper and lower bounds exist and are of $\mathcal{O}(48\%)$ at $99\%$ confidence and of $\mathcal{O}(13\%)$ at $68\%$.
\par
At input SNR of 25 dB, for thresholds $\in [2 \sigma, 8 \sigma)$ on a single $2048 \times 2048$ planar patch the upper and lower bounds exist and are of $\mathcal{O}(25\%)$ at $99\%$ confidence and of $\mathcal{O}(7\%)$ at $68\%$.
\par
At input SNR of 30 dB, for thresholds $\in [2 \sigma, 8 \sigma)$ on a single $2048 \times 2048$ planar patch the upper and lower bounds exist and are of $\mathcal{O}(15\%)$ at $99\%$ confidence and of $\mathcal{O}(3\%)$ at $68\%$.
\par
These illustrative examples imply that for the Bayesian peak statistic to tightly constrain the cumulative peak statistic, comparatively larger and/or cleaner data-sets may be required -- or, of course, a more informative prior (though this must be well justified). However, to reduce the shot noise introduced \textit{via} intrinsic ellipticities, more galaxies must be observed within a given pixel.
\par
One way to achieve this is simply to increase the observed number density of galaxies; however, to do so one must observe fainter galaxies (for a fixed redshift), which inherently leads to more faint, distant galaxies being detected, resulting in galaxy blending. Hence, increasing the number density significantly above $\sim 30$ gals/arcmin$^2$ is typically quite difficult in practice.
\par
Alternatively, the pixelisation can be adjusted to ensure that the mean galaxy count per pixel is above a given threshold \nobreak--\nobreak\hskip4pt though for weak lensing the majority of non-Gaussian information is stored at fine scales, which require small pixels, and so using larger pixels to reduce the noise level is sub-optimal for information extraction.
\par
Within the definition of the up- and down-scaling kernels (see sections \ref{sec:lower_peak_bound} and \ref{sec:upper_peak_bound}) we define a circular aperture around a selected peak which we take to be the extent of the peak. These regions are roughly equivalent to super-pixel regions as described in \citet{[12]}. In previous work it was shown \citep{[M2]} that for local credible intervals (\textit{c.f.} pixel-level error bars) the typical error in the approximate HPD credible region is of $\mathcal{O}(12.5\%)$, and is conservative \nobreak--\nobreak\hskip4pt note that the quoted $25\%$ mean RMSE is split approximately equally between the upper and lower bounds, and therefore roughly corresponds to a mean error of $12.5\%$ on each. The bounds drawn on the peak statistic here are therefore likely to be $\sim 12.5\%$ less tight than the true Bayesian bounds \nobreak--\nobreak\hskip4pt which could be formed if one were to reconstruct the $4 \times 10^6$-dimensional posterior \textit{via} MCMC.
\par
In this paper (particularly the second half) we are primarily concerned with demonstrating how one may recover principled uncertainties on aggregate statistics of the convergence map -- such as, but not limited to, the peak statistics. Hence we do not provide detailed analysis of how these Bayesian uncertainties may affect cosmological constraints derived from such statistics -- this is saved for future work. However, it is worth mentioning that one could either leverage these uncertainties to define the data covariance in a Bayesian manner (as opposed to MC, which is fast but may not necessarily be fully principled, or MCMC, which is $\mathcal{O}(10^6)$ times slower than our MAP approach) before running a standard likelihood analysis, or perform a grid search in parameter space using these uncertainties, again as the data covariance. Correctly accounting for the uncertainties introduced during mass-mapping has been shown to be an important consideration for the future prospects of statistics such as this \citep{Lin2018peaks}.
\section{Conclusions} \label{sec:Conclusion}
Using the sparse Bayesian mass-mapping framework previously developed \citep{[M1],[M2]} we have presented two novel Bayesian uncertainty quantification techniques which can be performed directly on weak lensing convergence maps.
\par
The first of these techniques recovers the uncertainty in the location of a feature of interest within a reconstructed convergence map \nobreak--\nobreak\hskip4pt \textit{e.g.} a large peak \nobreak--\nobreak\hskip4pt at some well defined confidence. We call this locational uncertainty the \textit{`Bayesian location'}.
\par
Additionally, for computational efficiency we develop a novel sampling scheme of the position isocontour of a given feature which we call \textit{`N-splitting circular bisection'}. We find that sampling the position isocontour in this way could be many orders of magnitude faster in high dimensions than typical inverse nesting approaches.
\par
To evaluate this technique, we perform sparse Bayesian reconstructions of $1024 \times 1024$ convergence maps extracted from Bolshoi N-body simulation datasets upon which we compute the Bayesian location of the four largest sub-halos for a range of noise-levels.
\par
The second of these techniques quantifies the uncertainty in the cumulative peak statistic of a recovered convergence map. With this technique we can, for the first time, provide principled Bayesian lower and upper bounds on the number of observed peaks above a given signal-to-noise threshold, for a single observation, at well defined confidence.
\par
We extract $2048 \times 2048$ convergence maps from the Buzzard V-1.6 N-body simulation, upon which we calculate the cumulative peak statistic with Bayesian upper and lower bounds at $99\%$ for a range of input noise-levels.
We also provide the $68\%$ confidence bounds which we infer from the $99\%$ bounds to aid comparison to typical bootstrapping (MC) approaches.
\par
For upcoming wide-field surveys, convergence reconstruction will likely be done natively on the sphere (a single collective sample) to avoid projection effects, making bootstrapping approaches difficult and at worst infeasible, as they are only asymptotically exact.
\par
Bayesian approaches require only a single set of observations to make exact inferences, and so extend trivially to the more complex spherical setting. Moreover the novel uncertainty quantification techniques presented in this paper and those presented previously in \citet{[M1],[M2],[12]} can be rapidly computed and support algorithmic structure which can be highly parallelized, making them the ideal tools for principled analysis of convergence maps.
\section*{Acknowledgements} \label{sec:Acknowledgements}
Author contributions are summarised as follows.
MAP: conceptualisation, methodology, data curation, investigation, software, visualisation, writing - original draft;
JDM: conceptualisation, methodology, project administration, supervision, writing - review \& editing;
XC: methodology, investigation, writing - review \& editing;
TDK: methodology, supervision, writing - review \& editing.
This paper has undergone internal review in the LSST Dark Energy Science Collaboration. The internal reviewers were Chihway Chang, Tim Eifler, and François Lanusse.
MAP is supported by the Science and Technology Facilities Council (STFC). TDK is supported by a Royal Society University Research Fellowship (URF). This work was also supported by the Engineering and Physical Sciences Research Council (EPSRC) through grant EP/M0110891 and by the Leverhulme Trust.
The DESC acknowledges ongoing support from the Institut National de Physique Nucl\'eaire et de Physique des Particules in France; the Science \& Technology Facilities Council in the United Kingdom; and the Department of Energy, the National Science Foundation, and the LSST Corporation in the United States. DESC uses resources of the IN2P3 Computing Center (CC-IN2P3--Lyon/Villeurbanne - France) funded by the Centre National de la Recherche Scientifique; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S.\ Department of Energy under Contract No.\ DE-AC02-05CH11231; STFC DiRAC HPC Facilities, funded by UK BIS National E-infrastructure capital grants; and the UK particle physics grid, supported by the GridPP Collaboration. This work was performed in part under DOE Contract DE-AC02-76SF00515.
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Bayesian_location/N_splitting_schematic.png}
\caption{Representation of how the problem is broken up in N-splitting circular bisection. First the $\mathit{\textsf{n}}_i$ directions are specified (\textit{left}) at equiangular separations $\theta$ about the peak location (\textit{blue ball}). Bisection iterations are conducted as in equation (\ref{eq:circular_bisection}) along each of the directions, recovering a set of samples $\mathit{\textsf{d}}_i$ of the Bayesian location isocontour at $100(1-\alpha)\%$ confidence (\textit{right}). Provided a sufficient number of samples are taken, this boundary will fully represent the isocontour. We find typically $\approx 16$ samples are needed for $512 \times 512$ convergence reconstructions, though more or fewer may be needed depending on the resolution and application.}
\label{fig:n_splitting_schematic}
\end{figure*}
\input{peak_uncertainties.bbl}
\section{Method}
\label{method}
This section describes the algorithmic details of our method.
To set up notation, consider a standard $K$-layer neural network with parameters~$\theta_k$ for layer~$k$. The stacked parameter vector ${\bm{\theta}}=(\theta_1,\dots,\theta_K)$ specifies the entire model for a classification task with loss function~$l_m(x, y; {\bm{\theta}})$ on the test sample $(x,y)$. We call this the \emph{main task}, as indicated by the subscript of the loss function.
We assume to have training data $(x_1,y_1),\dots,(x_n, y_n)$ drawn i.i.d.~from a distribution~$P$. Standard empirical risk minimization solves the optimization problem:
\begin{equation} \label{optimize}
\min_{\bm{\theta}} \frac{1}{n}\sum_{i=1}^{n} l_m(x_i, y_i; {\bm{\theta}}).
\end{equation}
Our method requires a \emph{self-supervised auxiliary task} with loss function~$l_s(x)$.
In this paper, we choose the rotation prediction task \citep{gidaris2018unsupervised}, which has been demonstrated to be simple and effective at feature learning for convolutional neural networks.
The task simply rotates $x$ in the image plane by one of 0, 90, 180 and 270 degrees and has the model predict the angle of rotation as a four-way classification problem.
Other self-supervised tasks in \Cref{related} might also be used for our method.
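For concreteness, label generation for the rotation task can be sketched in a few lines; the helper name and the NumPy implementation below are ours and purely illustrative, not part of the method's specification:

```python
import numpy as np

def make_rotation_batch(x):
    """Return the four rotated copies of image x (H, W, C) together with
    their rotation labels: 0, 1, 2, 3 for 0, 90, 180, 270 degrees."""
    # np.rot90 rotates in the plane spanned by the first two axes.
    copies = np.stack([np.rot90(x, k=k, axes=(0, 1)) for k in range(4)])
    labels = np.arange(4)  # targets of the four-way classification problem
    return copies, labels

image = np.random.rand(32, 32, 3)
batch, labels = make_rotation_batch(image)
# batch: (4, 32, 32, 3); the model is trained to predict labels from batch.
```

The self-supervised labels are generated entirely from the unlabeled input, which is what makes the task usable at test time.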
The auxiliary task shares some of the model parameters
${\bm{\theta}}_e=(\theta_1,\dots,\theta_\kappa)$ up to a
certain~$\kappa\in\{1,\dots,K\}.$
We designate those $\kappa$ layers as a
\emph{shared feature extractor}.
The auxiliary task uses its own task-specific parameters
${\bm{\theta}}_s = (\theta'_{\kappa+1},\dots, \theta'_{K})$.
We call the unshared parameters ${\bm{\theta}}_s$ the \emph{self-supervised task branch}, and ${\bm{\theta}}_m=(\theta_{\kappa+1},\dots,\theta_K)$ the
\emph{main task branch}.
Pictorially, the joint architecture is a $Y$-structure with a shared bottom and two branches.
For our experiments, the self-supervised task branch has the same architecture as the main branch, except for the output dimensionality of the last layer due to the different number of classes in the two tasks.
Training is done in the fashion of multi-task learning \citep{caruana1997multitask}; the model is trained on both tasks on the same data drawn from~$P$.
Losses for both tasks are added together, and gradients are taken for the collection of all parameters. The joint training problem is therefore
\begin{equation}
\label{optimize_train}
\min_{{\bm{\theta}}_e,{\bm{\theta}}_m,{\bm{\theta}}_s}
\frac{1}{n}\sum_{i=1}^{n}
l_m(x_i, y_i; {\bm{\theta}}_m, {\bm{\theta}}_e)
+ l_s(x_i; {\bm{\theta}}_s, {\bm{\theta}}_e).
\end{equation}
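As a concrete illustration, one joint-training step can be sketched on a toy model with a linear shared extractor and square losses standing in for the paper's networks and classification losses; all shapes, labels and the learning rate here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 4
A = rng.standard_normal((h, d))   # shared feature extractor, theta_e
v = rng.standard_normal(h)        # main task branch, theta_m
w = rng.standard_normal(h)        # self-supervised branch, theta_s

def losses(x, y_m, y_s, A, v, w):
    z = A @ x                     # shared features (bottom of the Y)
    return 0.5 * (v @ z - y_m) ** 2, 0.5 * (w @ z - y_s) ** 2

def joint_step(x, y_m, y_s, A, v, w, lr=1e-3):
    """One gradient step on l_m + l_s; all parameter groups are updated."""
    z = A @ x
    r_m, r_s = v @ z - y_m, w @ z - y_s       # residuals of the two tasks
    A = A - lr * np.outer(r_m * v + r_s * w, x)
    v = v - lr * r_m * z
    w = w - lr * r_s * z
    return A, v, w

x, y_m, y_s = rng.standard_normal(d), 1.0, 0.0
before = sum(losses(x, y_m, y_s, A, v, w))
A, v, w = joint_step(x, y_m, y_s, A, v, w)
after = sum(losses(x, y_m, y_s, A, v, w))   # the summed loss decreases
```

Note how the shared extractor receives gradient contributions from both branches, while each head is updated only by its own loss.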
Now we describe the standard version of Test-Time Training on a single test sample $x$.
Simply put, Test-Time Training fine-tunes the shared feature extractor ${\bm{\theta}}_e$ by minimizing the auxiliary task loss on $x$.
This can be formulated as
\begin{equation}
\label{optimize_test}
\min_{{\bm{\theta}}_e}l_s(x; {\bm{\theta}}_s, {\bm{\theta}}_e).
\end{equation}
Denote by ${\bm{\theta}}_e^*$ the (approximate) minimizer of \autoref{optimize_test}.
The model then makes a prediction using the updated parameters
${\bm{\theta}}(x) = ({\bm{\theta}}_e^*, {\bm{\theta}}_m)$.
Empirically, the difference is negligible between minimizing \autoref{optimize_test} over ${\bm{\theta}}_e$ versus over both ${\bm{\theta}}_e$ and ${\bm{\theta}}_s$.
Theoretically, the difference exists only when optimization is done with more than one gradient step.
Test-Time Training naturally benefits from standard data augmentation techniques. On each test sample $x$, we perform the exact same set of random transformations as for data augmentation during training, to form a batch only containing these augmented copies of $x$ for Test-Time Training.
\paragraph{Online Test-Time Training.}
In the standard version of our method, the optimization problem in
\autoref{optimize_test} is always initialized with parameters
${\bm{\theta}} = ({\bm{\theta}}_e,{\bm{\theta}}_s)$
obtained by minimizing \autoref{optimize_train}.
After making a prediction on $x$, ${\bm{\theta}}_e^*$ is discarded.
Outside of the standard supervised learning setting,
when the test samples arrive online sequentially,
the online version solves the same optimization problem as in \autoref{optimize_test} to update the shared feature extractor ${\bm{\theta}}_e$.
However, on test sample $x_t$,
${\bm{\theta}}$ is instead initialized with ${\bm{\theta}}(x_{t-1})$ updated on the previous sample $x_{t-1}$.
This allows ${\bm{\theta}}(x_t)$ to take advantage of the distributional information available in $x_1, \dots,x_{t-1}$ as well as $x_t$.
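On the same illustrative toy model, the online variant differs only in carrying the updated extractor from one sample to the next rather than resetting it (stream length, labels and learning rate are again illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, h = 8, 4
A = rng.standard_normal((h, d))   # theta_e from training; never reset below
v = rng.standard_normal(h)        # theta_m, fixed
w = rng.standard_normal(h)        # theta_s, fixed
A0 = A.copy()                     # kept only to compare against later

stream = [rng.standard_normal(d) for _ in range(50)]
predictions = []
for x_t in stream:                # test samples arrive sequentially
    r_s = w @ (A @ x_t) - 0.0     # auxiliary residual on the new sample
    A = A - 1e-3 * np.outer(r_s * w, x_t)  # one gradient step; state kept
    predictions.append(v @ (A @ x_t))
# A has now accumulated information from the whole stream x_1, ..., x_50.
```

Because the state persists, the extractor adapts to the test distribution as a whole rather than to each sample in isolation.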
\section{Introduction}
\label{intro}
Supervised learning remains notoriously weak at generalization under distribution shifts.
Unless training and test data are drawn from the same distribution, even seemingly minor differences turn out to defeat state-of-the-art models \citep{recht2018cifar}.
Adversarial robustness and domain adaptation are but a few existing paradigms that try to \textit{anticipate} differences between the training and test distribution with either topological structure or data from the test distribution available during training.
We explore a new take on generalization that \textit{does not} anticipate the distribution shifts, but instead learns from them at test time.
We start from a simple observation.
The unlabeled test sample $x$ presented at test time gives us a hint about the distribution from which it was drawn.
We propose to take advantage of this hint on the test distribution by allowing the model parameters ${\bm{\theta}}$
to depend on the test sample $x$, but not its unknown label $y$.
The concept of a variable decision boundary ${\bm{\theta}}(x)$ is powerful in theory since it breaks away from the limitation of fixed model capacity (see additional discussion in \Cref{vdb}), but the design of a feedback mechanism from $x$ to ${\bm{\theta}}(x)$ raises new challenges in practice that we only begin to address here.
Our proposed test-time training method creates a self-supervised learning problem based on this single test sample $x$, updating ${\bm{\theta}}$ at test time before making a prediction.
Self-supervised learning uses an auxiliary task that automatically creates labels from unlabeled inputs.
In our experiments, we use the task of rotating each input image by a multiple of 90 degrees and predicting its angle~\citep{gidaris2018unsupervised}.
This approach can also be easily modified to work outside the standard supervised learning setting.
If several test samples arrive in a batch, we can use the entire batch for test-time training.
If samples arrive in an online stream, we obtain further improvements by keeping the state of the parameters.
After all, prediction is rarely a single event. The online version can be the natural mode of deployment under the additional assumption that test samples are produced by the same or smoothly changing distribution shifts.
We experimentally validate our method in the context of object recognition on several standard benchmarks. These include images with diverse types of corruption at various levels \citep{hendrycks2019benchmarking}, video frames of moving objects \citep{shankar2019systematic}, and a new test set of unknown shifts collected by \cite{recht2018cifar}.
Our algorithm makes substantial improvements under distribution shifts, while maintaining the same performance on the original distribution.
In our experiments, we compare with a strong baseline (labeled joint training) that uses both supervised and self-supervised learning at training-time, but keeps the model fixed at test time.
Recent work shows that \emph{training-time} self-supervision improves robustness \citep{hendrycks2019using}; our joint training baseline corresponds to an improved implementation of this work.
A comprehensive review of related work follows in \Cref{related}.
We complement the empirical results with theoretical investigations in \Cref{theory}, and establish an intuitive sufficient condition on a convex model of when Test-Time Training helps; this condition, roughly speaking, is to have correlated gradients between the loss functions of the two tasks.
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{}\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}
\blfootnote{Project website: {\scriptsize \url{https://test-time-training.github.io/}}.}
\newpage
\section{Discussion}
The idea of test-time training also makes sense for other tasks, such as segmentation and detection, and in other fields, such as speech recognition and natural language processing.
For machine learning practitioners with prior domain knowledge in their respective fields, their expertise can be leveraged to design better special-purpose self-supervised tasks for test-time training.
Researchers for general-purpose self-supervised tasks can also use test-time training as an evaluation benchmark, in addition to the currently prevalent benchmark of pre-training and fine-tuning.
More generally, we hope this paper can encourage researchers to abandon the self-imposed constraint of a fixed decision boundary for testing, or even the artificial division between training and testing altogether.
Our work is but a small step toward a new paradigm where much of the learning happens {\em after} a model is deployed.
\section{Theoretical Results}
\label{theory}
This section contains our preliminary study of when and why Test-Time Training is expected to work.
For convex models, we prove that positive gradient correlation between the loss functions leads to better performance on the main task after Test-Time Training.
Equipped with this insight, we then empirically
demonstrate that gradient correlation governs the success of Test-Time Training on the deep learning model discussed in \Cref{results}.
Before stating our main theoretical result, we first illustrate the general intuition with a toy model.
Consider a regression problem where $x\in\mathbb{R}^d$ denotes the input, $y_1\in\mathbb{R}$ denotes the label, and the objective is the square loss
$(\hat{y}-y_1)^2/2$ for a prediction $\hat{y}$.
Consider a two layer linear network parametrized by ${\bm{A}}\in\mathbb{R}^{h\times d}$ and ${\bm{v}} \in\mathbb{R}^h$ (where $h$
stands for the hidden dimension).
The prediction according to this model
is $\hat{y}={\bm{v}}^\top {\bm{A}} x$, and the main task loss is
\begin{equation}
l_m(x, y_1; {\bm{A}}, {\bm{v}}) = \frac{1}{2}\left(y_1 - {\bm{v}}^ \top {\bm{A}} x\right)^2.
\end{equation}
In addition, consider a self-supervised regression task that also uses the square loss and automatically generates a label $y_s$ for $x$.
Let the self-supervised head be parametrized by ${\bm{w}}\in\mathbb{R}^h$. Then the self-supervised task loss is
\begin{equation}
l_s(x, y_2; {\bm{A}}, {\bm{w}}) = \frac{1}{2}\left(y_2 - {\bm{w}}^\top{\bm{A}} x\right)^2.
\end{equation}
Now we apply Test-Time Training to update the shared feature extractor ${\bm{A}}$ by
one step of gradient descent on $l_s$, which we can compute with $y_2$ known.
This gives us
\begin{equation}
{\bm{A}}' \leftarrow {\bm{A}} - \eta\left( y_2-{\bm{w}}^\top{\bm{A}} x \right) \left(-{\bm{w}} x^\top\right),
\end{equation}
where ${\bm{A}}'$ is the updated matrix and $\eta$ is the learning rate.
If we set $\eta = \eta^*$ where
\begin{equation}
\label{magic_lr}
\eta^* = \frac{y_1 - {\bm{v}}^\top {\bm{A}} x}{\left( y_2 - {\bm{w}}^\top {\bm{A}} x \right)
{\bm{v}}^\top {\bm{w}} x^\top x},
\end{equation}
then with some simple algebra, it is easy to see that the main task loss
$l_m(x, y_1; {\bm{A}}', {\bm{v}}) = 0$.
Concretely, Test-Time Training drives the main task loss down to zero with a single gradient step for a carefully chosen learning rate.
In practice, this learning rate is unknown since it depends on the unknown $y_1$.
However, since our model is convex, as long as $\eta^*$ is positive, it suffices to set $\eta$ to be a small positive constant (see details in the appendix).
If $x \neq 0$, one sufficient condition for $\eta^*$ to be positive (when neither loss is zero) is to have
\begin{align}
\label{equal_sign}
\sign\left( y_1 - {\bm{v}}^\top {\bm{A}} x \right) = \sign\left( y_2 - {\bm{w}}^\top {\bm{A}} x \right)\\
\text{and}\quad
{\bm{v}}^\top {\bm{w}} > 0 \;.
\end{align}
For our toy model, both parts of the condition above have an intuitive interpretation.
The first part says that the mistakes should be correlated, in the sense
that predictions from both tasks are mistaken in the same direction. The second
part, ${\bm{v}}^\top {\bm{w}} > 0$, says that the decision boundaries on the feature
space should be correlated. In fact, these two parts hold if and only if
$\langle \nabla l_m({\bm{A}}), \nabla l_s({\bm{A}})\rangle > 0$
(see a simple proof of this fact in the appendix). To summarize, if the gradients have
positive correlation, Test-Time Training is guaranteed to reduce the main task loss.
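The algebra above is easy to check numerically. The sketch below (with illustrative dimensions and labels) instantiates the toy model at random, takes one gradient step on $l_s$ with the learning rate $\eta^*$ of \autoref{magic_lr}, and confirms that the main task loss vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 6, 3
A = rng.standard_normal((h, d))  # shared extractor
v = rng.standard_normal(h)       # main task head
w = rng.standard_normal(h)       # self-supervised head
x = rng.standard_normal(d)
y1, y2 = 1.5, -0.7               # main and self-supervised labels

r_m = y1 - v @ A @ x             # main task residual
r_s = y2 - w @ A @ x             # self-supervised residual
eta_star = r_m / (r_s * (v @ w) * (x @ x))

# One gradient step on l_s = (y2 - w^T A x)^2 / 2 with learning rate eta_star:
A_prime = A + eta_star * r_s * np.outer(w, x)
# v^T A' x = v^T A x + eta_star * r_s * (v^T w) (x^T x) = y1, so the loss is 0:
loss_after = 0.5 * (y1 - v @ A_prime @ x) ** 2
```

Generically $r_s \neq 0$ and ${\bm{v}}^\top {\bm{w}} \neq 0$, so $\eta^*$ is well defined, and the single step drives the main task loss to zero up to rounding.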
Our main theoretical result extends this to general smooth and convex loss functions.
\newpage
{\theorem \label{main_theorem}
Let $l_m(x, y; {\bm{\theta}})$ denote the main task loss on test instance $x, y$ with
parameters ${\bm{\theta}}$, and $l_s(x; {\bm{\theta}})$ the self-supervised task loss that only depends on $x$.
Assume that for all $x, y$,
$l_m(x, y; {\bm{\theta}})$ is differentiable, convex and $\beta$-smooth
in ${\bm{\theta}}$,
and both
$\norm{\nabla l_m(x, y; {\bm{\theta}})}, \norm{\nabla l_s(x; {\bm{\theta}})} \leq G$
for all ${\bm{\theta}}$.
With a fixed learning rate $\eta = \frac{{\epsilon}}{\beta G^2}$,
for every $x, y$ such that
\begin{align}
\langle \nabla l_m(x, y; {\bm{\theta}}), \nabla l_s(x; {\bm{\theta}}) \rangle > {\epsilon},
\end{align}
we have
\begin{align}
l_m(x, y; {\bm{\theta}})
> l_m(x, y; {\bm{\theta}}(x)),
\end{align}
where ${\bm{\theta}}(x) = {\bm{\theta}} - \eta \nabla l_s(x; {\bm{\theta}})$, i.e., Test-Time Training with one step of gradient descent.
}
The proof uses standard techniques in optimization, and is left for
the appendix. Theorem 1 reveals gradient correlation as a determining factor of the success of Test-Time Training in the smooth and convex case.
In \autoref{gradient_improve}, we empirically show that our insight also holds for non-convex loss functions,
on the deep learning model and across the diverse set of corruptions considered in \Cref{results}; stronger gradient correlation clearly indicates more performance improvement over the baseline.
\section{Empirical Results}
\label{results}
We experiment with both versions of our method (standard and online) on three kinds of benchmarks for distribution shifts, presented here in the order of visually low to high-level.
Our code is available at the project website.
\input{tex/results_cc}
\paragraph{Network details.}
Our architecture and hyper-parameters are consistent across all experiments. We use ResNets \citep{he2016identity}, which are constructed differently for CIFAR-10 \citep{krizhevsky2009learning} (26-layer) and ImageNet \citep{ILSVRC15} (18-layer). The CIFAR-10 dataset contains 50K images for training, and 10K images for testing. The ImageNet contains 1.2M images for training and the 50K validation images are used as the test set. ResNets on CIFAR-10 have three groups, each containing convolutional layers with the same number of channels and size of feature maps; our splitting point is the end of the second group. ResNets on ImageNet have four groups; our splitting point is the end of the third group.
We use Group Normalization (GN) instead of Batch Normalization (BN) in our architecture, since BN has been shown to be ineffective when training with small batches, for which the estimated batch statistics are not accurate \citep{ioffe2015batch}. This technicality hurts Test-Time Training since each batch only contains (augmented) copies of a single image. Different from BN, GN is not dependent on batch size and achieves similar results on our baselines. We report results with BN in \Cref{results_additional} of the appendix for completeness.
We directly compare our architecture to that of \citet{hendrycks2018using} in \autoref{reviewer_stuff}.
\input{tex/results_imagenet}
\paragraph{Optimization details.}
For joint training (\autoref{optimize_train}),
we use stochastic gradient descent with standard hyper-parameters as in \citep{huang2016deep, he2016deep}.
For Test-Time Training (\autoref{optimize_test}), we use stochastic gradient descent with the learning rate set to that of the last epoch during training, which is 0.001 in all our experiments. We set weight decay and momentum to zero during Test-Time Training, inspired by the practice in \citep{he2018rethinking, liu2018rethinking}. For the standard version of Test-Time Training, we take ten gradient steps, using batches independently generated from the same image.
For the online version of Test-Time Training, we take only one gradient step given each new image. We use random crop and random horizontal flip for data augmentation. See \Cref{computational} of the appendix for computational aspects of our method.
In all the tables and figures, \emph{object recognition task only} refers to the plain ResNet model (using GN, unless otherwise specified);
\emph{joint training}
refers to the model jointly trained on both the main task and the self-supervised task, fixed at test time; this has been proposed as the method in \citet{hendrycks2019using};
\emph{Test-Time Training (TTT)} refers to the standard version described in \autoref{method};
and \emph{online Test-Time Training (TTT-Online)} refers to the online version that does not discard ${\bm{\theta}}(x_t)$ for $x_t$ arriving sequentially from the same distribution.
Performance for TTT-Online is calculated as the average over the entire test set; we always shuffle the test set before TTT-Online to avoid ordering artifacts.
\subsection{Object Recognition on Corrupted Images}
\label{results_cc}
\citet{hendrycks2019benchmarking} propose to benchmark robustness of object recognition with 15 types of corruptions
from four broad categories: noise, blur, weather and digital. Each corruption type comes in five levels of severity, with level 5 the most severe (details and sample images in the appendix).
The corruptions are simulated to mimic real-world corruptions as much as possible on copies of the test set for both CIFAR-10 and ImageNet. The new test sets are named as CIFAR-10-C and ImageNet-C, respectively.
In the proposed benchmark, training should be done on the original training set, and the diversity of corruption types should make it difficult for any methods to work well across the board if it relies too much on corruption specific knowledge. For online Test-Time Training, we take the entire test set as a stream of incoming images, and update and test on each image in an online manner as it arrives.
\paragraph{CIFAR-10-C.}
Our results on the level 5 corruptions (most severe) are shown in \autoref{fig:cc}.
The results on levels 1-4 are shown in
\Cref{results_additional} in appendix.
Across all five levels and 15 corruption types, both standard and online versions of Test-Time Training improve over the object recognition task only baseline by a large margin.
The standard version always improves over joint training, and the online version often improves significantly ($>$10\%) over joint training and never hurts by more than 0.2\%.
Specifically, TTT-Online contributes $>$24\% on the three noise types and 38\% on pixelation.
For a learning problem with the seemingly unstable setup that abuses a single image, this kind of consistency is rather surprising.
The baseline ResNet-26 with object recognition task only has error 8.9\% on the original test set of CIFAR-10. The joint training baseline actually improves performance on the original to 8.1\%. More surprisingly, unlike many other methods that trade off original performance for robustness, Test-Time Training further improves on the original test set by 0.2\% consistently over multiple independent trials. This suggests that our method does not choose between specificity and generality.
\newpage
Separate from our method, it is interesting to note that joint training consistently improves over the single-task baseline, as discovered by \citet{hendrycks2019using}.
\citet{hendrycks2019benchmarking} have also experimented with various other training methods on this benchmark, and point to Adversarial Logit Pairing (ALP) \citep{kannan2018adversarial} as the most effective approach.
Results of this additional baseline on all levels of CIFAR-10-C are shown in the appendix, along with its implementation details. While surprisingly robust under some of the most severe corruptions (especially the three noise types), ALP incurs a much larger error (by a factor of two) on the original distribution and some corruptions (e.g. all levels of contrast and fog), and hurts performance significantly when the corruptions are not as severe (especially on levels 1-3);
this kind of tradeoff is to be expected for methods based on adversarial training.
\input{tex/camera_ready_results}
\paragraph{ImageNet-C.}
Our results on the level 5 corruptions (most severe) are shown in \autoref{fig:cc_imagenet}. We use accuracy instead of error for this dataset
because the baseline performance is very low for most corruptions.
The general trend is roughly the same as on CIFAR-10-C. The standard version of TTT always improves over the baseline and joint training, while the online version only hurts on the original by 0.1\% over the baseline, but significantly improves (by a factor of more than three) on many of the corruption types.
In the lower panel of \autoref{fig:cc_imagenet}, we visualize how the accuracy (averaged over a sliding window) of the online version changes as more images are tested.
Due to space constraints, we show this plot on the original test set, as well as every third corruption type, following the same order as in the original paper.
On the original test set, there is no visible trend in performance change after updating on the 50,000 samples.
With corruptions, accuracy has already risen significantly after 10,000 samples, but is still rising towards the end of the 50,000 samples, indicating room for additional improvements if more samples were available.
Judging from the plots, TTT-Online behaves as if we were training on the test set, without seeing a single label.
\paragraph{Comparison with unsupervised domain adaptation.}
\autoref{table:uda} empirically compares online Test-Time Training (TTT-Online) with unsupervised domain adaptation through self-supervision (UDA-SS)~\citep{sun2019uda}, which is similar to our method in spirit but is designed for the setting of unsupervised domain adaptation (\Cref{related} provides a survey of other related work in this setting).
Given labeled data from the training distribution and unlabeled data from the test distribution, UDA-SS hopes to find an invariant representation that extracts useful features for both distributions by learning to perform a self-supervised task, specifically rotation prediction, simultaneously on data from both. It then learns a labeling function on top of the invariant representation using the labeled data. In our experiments, the unlabeled data given to UDA-SS is the \emph{entire test set itself} without the labels.
Because TTT-Online can only learn from the unlabeled test samples that have already been evaluated on, it is given less information than UDA-SS at all times. In this sense, UDA-SS should be regarded as an oracle rather than a baseline.
Surprisingly, TTT-Online outperforms UDA-SS on 13 out of the 15 corruptions as well as the original distribution.
Our explanation is that UDA-SS has to find an invariant representation for both distributions, while TTT-Online only adapts the representation to be good for the current test distribution. That is, TTT-Online has the flexibility to forget the training distribution representation, which is no longer relevant. This suggests that in our setting, forgetting is not harmful and perhaps should even be taken advantage of.
\paragraph{Gradually changing distribution shifts.}
In our previous experiments, we have been evaluating the online version under the assumption that the test inputs $x_t$, for $t=1,\dots,n$, are all sampled from the same test distribution $Q$, which can be different from the training distribution $P$.
This assumption is indeed satisfied for i.i.d. samples from a shuffled test set.
But here we show that this assumption can in fact be relaxed to allow $x_t\sim Q_t$, where $Q_t$ is close to $Q_{t+1}$ (in the sense of distributional distance).
We call this the assumption of gradually changing distribution shifts. We perform experiments by simulating such distribution shifts on the three noise types of CIFAR-10-C. For each noise type, $x_t$ is corrupted with standard deviation $\sigma_t$, and $\sigma_1,...,\sigma_n$ interpolate between the standard deviation of level 1 and level 5. So $x_t$ is more severely corrupted as we evaluate further into the test set and $t$ grows larger.
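This severity schedule can be sketched as follows (an illustrative numpy sketch; the corruption here is additive Gaussian noise, and the standard-deviation endpoints are placeholders rather than the benchmark's actual constants):

```python
import numpy as np

def severity_schedule(n, sigma_level1, sigma_level5):
    """Linearly interpolate the noise std from level-1 to level-5 severity."""
    return np.linspace(sigma_level1, sigma_level5, n)

def corrupt_stream(images, sigma_level1=0.04, sigma_level5=0.26, seed=0):
    """Yield x_t ~ Q_t: sample t is corrupted with std sigma_t, so the
    test distribution drifts gradually as t grows."""
    rng = np.random.default_rng(seed)
    sigmas = severity_schedule(len(images), sigma_level1, sigma_level5)
    for x, sigma in zip(images, sigmas):
        # images are assumed to be floats in [0, 1]
        yield np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)
```

Each $Q_t$ differs from $Q_{t+1}$ only by a small change in $\sigma$, which is the sense in which consecutive test distributions are close.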
As shown in \autoref{fig:smooth},
TTT-Online still improves upon joint training (and our standard version) with this relaxed assumption, and even upon UDA-SS for the first two noise types.
\input{tex/cifar7_table}
\subsection{Object Recognition on Video Frames}
\label{ivc}
The Robust ImageNet Video Classification (VID-Robust) dataset was developed by \citet{shankar2019systematic} from the ImageNet Video detection dataset \citep{ILSVRC15}, to demonstrate how deep models for object recognition trained on ImageNet (still images) fail to adapt well to video frames. The VID-Robust dataset contains 1109 sets of video frames in 30 classes; each set is a short video clip of frames that are similar to an anchor frame. Our results are reported on the anchor frames. To map the 1000 ImageNet classes to the 30 VID-Robust classes, we use the max-conversion function in \citet{shankar2019systematic}.
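The max-conversion can be sketched as below (a simplified illustration; \texttt{wnid\_map}, which lists the ImageNet sub-classes of each VID-Robust class, is a stand-in for the actual mapping of \citet{shankar2019systematic}):

```python
import numpy as np

def max_convert(imagenet_scores, wnid_map):
    """Score each VID-Robust class by the max over its ImageNet sub-classes,
    then predict the argmax VID-Robust class."""
    vid_classes = sorted(wnid_map)
    vid_scores = np.array(
        [imagenet_scores[wnid_map[c]].max() for c in vid_classes])
    return vid_classes[int(vid_scores.argmax())]
```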
Without any modifications for videos, we apply our method to VID-Robust on top of the same ImageNet model as in the previous subsection.
Our classification accuracy is reported in \autoref{vidtable}.
In addition, we take the seven classes in VID-Robust that overlap with CIFAR-10, and re-scale those video frames to the size of CIFAR-10 images, as a new test set for the model trained on CIFAR-10 in the previous subsection. Again, we apply our method to this dataset without any modifications.
Our results are shown in \autoref{table.cifar7}, with a breakdown for each class.
Noticing that Test-Time Training does not improve on the airplane class, we inspect some airplane samples (\autoref{vid_samples}), and observe black margins on two sides of most images, which provide a trivial hint for rotation prediction.
In addition, given an image of airplanes in the sky, it is often impossible even for humans to tell if it is rotated.
This shows that our method requires the self-supervised task to be both well defined and non-trivial.
\subsection{CIFAR-10.1: Unknown Distribution Shifts}
\label{cifar10_new}
CIFAR-10.1 \citep{recht2018cifar} is a new test set of size 2000 modeled after CIFAR-10, with the exact same classes and image dimensionality, following the dataset creation process documented by the original CIFAR-10 paper as closely as possible.
The purpose is to investigate the distribution shifts present between the two test sets, and the effect on object recognition.
All models tested by the authors suffer a large performance drop on CIFAR-10.1 compared to CIFAR-10,
even though there is no human-noticeable difference between the two test sets, and human accuracy is the same on both.
This demonstrates how insidious and ubiquitous distribution shifts are, even when researchers strive to minimize them.
\begin{table}
\vspace{-1ex}
\begin{center}{
\begin{tabular}{c|c} \toprule
Method & Accuracy (\%) \\ \midrule
{\small Object recognition task only} & 62.7\\ \midrule
{\small Joint training {\scriptsize \citep{hendrycks2019using}}} & 63.5\\ \midrule
TTT (standard version) & 63.8\\ \midrule
TTT-Online & 64.3\\ \bottomrule
\end{tabular}
\caption{Test accuracy (\%) on VID-Robust dataset~\cite{shankar2019systematic}.
TTT and TTT-Online improve over the baselines.
}\label{vidtable}
}\end{center}
\end{table}
\input{tex/cifar10_new_table}
The distribution shifts from CIFAR-10 to CIFAR-10.1 pose an extremely difficult problem, and no prior work has been able to improve the performance of an existing model on this new test set,
probably because:
1) researchers cannot even identify the distribution shifts, let alone describe them mathematically;
2) the samples in CIFAR-10.1 are only revealed at test time; and even if they were revealed during training, the distribution shifts are too subtle, and the sample size is too small, for domain adaptation~\citep{recht2018cifar}.
On the original CIFAR-10 test set, the baseline with only object recognition has error 8.9\%,
and with joint training has 8.1\%;
compared to the first two rows of \autoref{cifar10_new_table},
both suffer the typical performance drop (by a factor of two).
TTT yields an improvement of 0.8\%
(relative improvement of 4.8\%) over joint training.
We recognize that this improvement is small relative to the performance drop, but see it as an encouraging first step for this very difficult problem.
\section{Proofs}
Here we prove the theoretical results in the main paper.
\subsection{The Toy Problem}
The following setting applies to the two lemmas; this is simply the setting of our toy problem, reproduced here for ease of reference.
\pagebreak
Consider a two layer linear network parametrized by ${\bm{A}}\in\mathbb{R}^{h\times d}$ (shared) and ${\bm{v}}, {\bm{w}}\in\mathbb{R}^h$ (fixed) for the two heads, respectively.
Denote $x\in\mathbb{R}^d$ the input and $y_1, y_2\in\mathbb{R}$ the labels for the two tasks, respectively.
For the main task loss
\begin{equation}
l_m({\bm{A}}; {\bm{v}}) = \frac{1}{2}\left(y_1 - {\bm{v}}^\top {\bm{A}} x\right)^2,
\end{equation}
and the self-supervised task loss
\begin{equation}
l_s({\bm{A}}; {\bm{w}}) = \frac{1}{2}\left(y_2 - {\bm{w}}^\top {\bm{A}} x\right)^2,
\end{equation}
Test-Time Training yields an updated matrix
\begin{equation}
{\bm{A}}' \leftarrow {\bm{A}} - \eta\left( y_2-{\bm{w}}^\top {\bm{A}} x \right) \left(-{\bm{w}} x^\top \right),
\end{equation}
where $\eta$ is the learning rate.
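The update above, and the guarantees of the two lemmas below, can be checked numerically on a tiny hand-picked instance (an illustrative sketch, not part of the experiments):

```python
import numpy as np

A = np.eye(2)                      # shared layer
v = np.array([1.0, 1.0])           # main-task head (fixed)
w = np.array([1.0, 2.0])           # self-supervised head (fixed)
x = np.array([1.0, 0.0])
y1, y2 = 3.0, 2.0

l_m = lambda M: 0.5 * (y1 - v @ M @ x) ** 2
grad_lm = (y1 - v @ A @ x) * (-np.outer(v, x))  # gradient of l_m w.r.t. A
grad_ls = (y2 - w @ A @ x) * (-np.outer(w, x))  # gradient of l_s w.r.t. A

# eta* from Lemma 1; with these values eta* = 2/3 > 0
eta_star = (y1 - v @ A @ x) / ((y2 - w @ A @ x) * (v @ w) * (x @ x))

# Lemma 2: eta* has the same sign as the gradient inner product
assert np.sign(eta_star) == np.sign(np.sum(grad_lm * grad_ls))

# Lemma 1: a small step on the self-supervised loss improves the main loss
A_new = A - 0.1 * grad_ls          # the Test-Time Training update
assert l_m(A_new) < l_m(A)
```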
{\lemma \label{lemma_lr}
Following the exposition of the main paper, let
\begin{equation}
\eta^* = \frac{(y_1 - {\bm{v}}^\top {\bm{A}} x)}{(y_2 - {\bm{w}}^\top {\bm{A}} x)\, {\bm{v}}^\top {\bm{w}}\, x^\top x}.
\end{equation}
Assume $\eta^* \in [\epsilon, \infty)$ for some $\epsilon > 0$.
Then for any $\eta \in (0, \epsilon]$,
we are guaranteed an improvement on the main loss, i.e. $l_m({\bm{A}}') < l_m({\bm{A}})$.
}
{\proof
From the exposition of the main paper, we know that
$$l_m({\bm{A}} - \eta^* \nabla l_s({\bm{A}})) = 0,$$
which can also be derived from simple algebra.
Then by convexity, we have
\begin{align}
l_m&\paren{{\bm{A}} - \eta \nabla l_s({\bm{A}})}\\
&= l_m\paren{\paren{1 - \frac{\eta}{\eta^*}} {\bm{A}}
+ \frac{\eta}{\eta^*}({\bm{A}} - \eta^* \nabla l_s({\bm{A}}))} \\
&\leq \paren{1 - \frac{\eta}{\eta^*}} l_m({\bm{A}}) + 0 \\
&< l_m({\bm{A}}),
\end{align}
where the last inequality uses $\eta/\eta^* > 0$ and $l_m({\bm{A}})>0$; the latter holds
because $\eta^* > 0$, so $y_1 - {\bm{v}}^\top {\bm{A}} x \neq 0$.}
{\lemma \label{lemma_sign}
Define
$\langle {\bm{U}}, {\bm{V}} \rangle
= \text{vec} \left({\bm{U}}\right)^\top \text{vec} \left({\bm{V}} \right)$
i.e. the Frobenius inner product, then
\begin{equation}
\sign \left(\eta^*\right) = \sign \left(\langle \nabla l_m({\bm{A}}), \nabla l_s({\bm{A}})\rangle\right).
\end{equation}
}
{\proof By simple algebra,
\begin{align*}
&\langle\nabla l_m({\bm{A}}), \nabla l_s({\bm{A}})\rangle \\
&=
\langle\left( y_1-{\bm{v}}^\top {\bm{A}} x \right)\left(-{\bm{v}} x^\top \right),
\left( y_2-{\bm{w}}^\top {\bm{A}} x \right)\left(-{\bm{w}} x^\top \right)\rangle\\
&= \left( y_1-{\bm{v}}^\top {\bm{A}} x \right)\left( y_2-{\bm{w}}^\top {\bm{A}} x \right)
\Tr \left( x {\bm{v}}^\top {\bm{w}} x^\top \right)\\
&= \left( y_1-{\bm{v}}^\top{\bm{A}} x \right)\left( y_2-{\bm{w}}^\top{\bm{A}} x \right)
{\bm{v}}^\top {\bm{w}} x^\top x,
\end{align*}
which has the same sign as $\eta^*$.}
\subsection{Proof of Theorem 1}
\label{main_theorem_proof}
For any $\eta$, by $\beta$-smoothness of $l_m$,
\begin{align*}
\label{proof_first_eq}
&l_m(x,y; {\bm{\theta}}(x))
= l_m(x,y; {\bm{\theta}} - \eta \nabla l_s(x; {\bm{\theta}}))\\
&\leq l_m(x,y; {\bm{\theta}})
- \eta \langle \nabla l_m(x, y; {\bm{\theta}}), \nabla l_s(x; {\bm{\theta}}) \rangle \\
&\qquad+ \frac{\eta^2\beta}{2} \norm{\nabla l_s(x; {\bm{\theta}})}^2.
\end{align*}
Denote
$$\eta^* = \frac{\langle \nabla l_m(x, y; {\bm{\theta}}), \nabla l_s(x; {\bm{\theta}}) \rangle}
{\beta \norm{\nabla l_s(x; {\bm{\theta}})}^2}.$$
Then, evaluated at $\eta = \eta^*$, the inequality above becomes
\begin{align}
&l_m(x,y; {\bm{\theta}} - \eta^* \nabla l_s(x; {\bm{\theta}}))\\
&\leq l_m(x,y; {\bm{\theta}})
- \frac{\langle \nabla l_m(x, y; {\bm{\theta}}), \nabla l_s(x; {\bm{\theta}}) \rangle^2}
{2\beta \norm{\nabla l_s(x; {\bm{\theta}})}^2}.
\end{align}
And by our assumptions on the gradient norm
and gradient inner product,
\begin{align}
\label{proof_second_eq}
l_m(x,y; {\bm{\theta}}) - l_m(x,y; {\bm{\theta}} - \eta^* \nabla l_s(x; {\bm{\theta}})) \geq\frac{{\epsilon}^2}{2\beta G^2}.
\end{align}
Because we cannot observe $\eta^*$ in practice, we instead use a fixed learning rate $\eta = \frac{{\epsilon}}{\beta G^2}$, as stated in Theorem 1.
Now we argue that this fixed learning rate still improves performance on the main task.
By our assumptions, $\eta^* \geq \frac{{\epsilon}}{\beta G^2}$,
so $\eta \in (0, \eta^*]$.
Denote ${\bm{g}} = \nabla l_s(x; {\bm{\theta}})$, then by convexity of $l_m$,
\begin{align}
&l_m(x,y; {\bm{\theta}}(x))
= l_m(x,y; {\bm{\theta}} - \eta {\bm{g}})\\
&= l_m\paren{x,y;
\paren{1 - \frac{\eta}{\eta^*}} {\bm{\theta}} +
\frac{\eta}{\eta^*} \paren{{\bm{\theta}} - \eta^* {\bm{g}}}} \\
&\leq \paren{1 - \frac{\eta}{\eta^*}} l_m(x,y; {\bm{\theta}})
+ \frac{\eta}{\eta^*} l_m(x,y; {\bm{\theta}} - \eta^* {\bm{g}}).
\end{align}
Combining with \autoref{proof_second_eq}, we have
\begin{align*}
l_m(x,y; {\bm{\theta}}(x))
&\leq \paren{1 - \frac{\eta}{\eta^*}} l_m(x,y; {\bm{\theta}}) \\
&\qquad
+ \frac{\eta}{\eta^*} \paren{l_m(x,y; {\bm{\theta}}) - \frac{{\epsilon}^2}{2\beta G^2}} \\
&= l_m(x,y; {\bm{\theta}}) - \frac{\eta}{\eta^*}\frac{{\epsilon}^2}{2\beta G^2}
\end{align*}
Since $\eta / \eta^* > 0$, we have shown that
\begin{align}
l_m(x,y; {\bm{\theta}}) - l_m(x,y; {\bm{\theta}}(x)) > 0.
\end{align}
\section{Related Work}
\label{related}
\paragraph{Learning on test instances.}
\citet{shocher2018zero} provide a key inspiration for our work by showing that image super-resolution could be learned at test time simply by trying to upsample a downsampled version of the input image. More recently, \citet{bau2019semantic} improve photo manipulation by adapting a pre-trained GAN to the statistics of the input image. One of the earlier examples of this idea comes from \citet{jain2011online}, who improve Viola-Jones face detection \citep{viola2001rapid} by bootstrapping the more difficult faces in an image from the more easily detected faces in that same image.
The online version of our algorithm is inspired by the work of \citet{mullapudi2018online}, which makes video segmentation more efficient by using a student model that learns online from a teacher model.
The idea of online updates has also been used in \citet{kalal2011tracking} for tracking and detection.
A recent work in echocardiography \citep{zhu2019neural} improves the deep learning model that tracks myocardial motion and cardiac blood flow with sequential updates.
Lastly, we share the philosophy of transductive learning \citep{vapnik2013nature, gammerman1998learning}, but have little in common with their classical algorithms;
recent work by \citet{nilesh} theoretically explores this for linear prediction, in the context of debiasing the LASSO estimator.
\paragraph{Self-supervised learning} studies how to create labels from the data, by designing various pretext tasks that can learn semantic information without human annotations,
such as context prediction \citep{doersch2015unsupervised}, solving jigsaw puzzles \citep{noroozi2016unsupervised}, colorization \citep{larsson2017colorproxy,zhang2016colorful}, noise prediction \citep{bojanowski2017unsupervised}, and feature clustering \citep{caron2018deep}.
Our paper uses rotation prediction \citep{gidaris2018unsupervised}.
\citet{asano2019surprising} show that self-supervised learning on only a single image, surprisingly, can produce low-level features that generalize well.
Closely related to our work, \citet{hendrycks2019using} propose that jointly training a main task and a self-supervised task (our joint training baseline in \Cref{results}) can improve robustness on the main task.
The same idea is used in few-shot learning \citep{su2019boosting}, domain generalization \citep{carlucci2019domain}, and unsupervised domain adaptation \citep{sun2019uda}.
\paragraph{Adversarial robustness}
studies the robust risk
$
R_{P, \Delta}({\bm{\theta}}) = \mathbb{E}_{x,y\sim P} \max_{\delta \in \Delta} l(x+\delta, y; ~{\bm{\theta}}),
$
where $l$ is some loss function, and $\Delta$ is the set of perturbations; $\Delta$ is often chosen as the $L_p$ ball, for $p\in \{1, 2, \infty\}$.
Many popular algorithms formulate and solve this as a robust optimization problem \citep{goodfellow2014explaining, madry2017towards, sinha2017certifying, raghunathan2018certified, wong2017provable, croce2018provable}, and the most well known technique is adversarial training.
Another line of work is based on randomized smoothing \citep{cohen2019certified, salman2019provably}, while some other approaches, such as input transformations
\citep{guo2017countering, song2017pixeldefend}, are shown to be less effective \citep{athalye2018obfuscated}.
There are two main problems with the approaches above.
First, all of them can be seen as \emph{smoothing} the decision boundary.
This establishes a theoretical tradeoff between accuracy and robustness \citep{tsipras2018robustness, zhang2019theoretically}, which we also observe empirically with our adversarial training baseline in \Cref{results}.
Intuitively, the more diverse $\Delta$ is, the less effective this \emph{one-boundary-fits-all} approach can be for a particular element of $\Delta$.
Second, adversarial methods rely heavily on the mathematical structure of $\Delta$, which might not accurately model perturbations in the real world.
Therefore, generalization remains hard outside of the $\Delta$ we know in advance or can mathematically model, especially for non-adversarial distribution shifts.
Empirically, \citet{kang2019transfer} shows that robustness for one $\Delta$ might not transfer to another, and training on the $L_\infty$ ball actually hurts robustness on the $L_1$ ball.
\paragraph{Non-adversarial robustness} studies the effect of corruptions, perturbations, out-of-distribution examples, and real-world distribution shifts
\citep{hendrycks2019improving, hendrycks2019using, hendrycks2018using, hendrycks2016baseline}.
\citet{geirhos2018generalisation} show that training on images corrupted by Gaussian noise makes deep learning models robust to this particular noise type, but does not improve performance on images corrupted by another noise type e.g. salt-and-pepper noise.
\paragraph{Unsupervised domain adaptation}
(a.k.a. transfer learning) studies the problem of distribution shifts, when an unlabeled dataset from the test distribution (target domain) is available at training time,
in addition to a labeled dataset from the training distribution (source domain)
\citep{chen2011co, gong2012geodesic, long2015learning, ganin2016domain, long2016unsupervised, tzeng2017adversarial,hoffman2017cycada,csurka2017domain, chen2018adversarial}.
The limitation of the problem setting, however, is that generalization might only be improved for this specific test distribution, which can be difficult to anticipate in advance.
Prior works try to anticipate broader test distributions by using multiple and evolving domains \citep{hoffman2018algorithms, hoffman2012discovering, hoffman2014continuous}.
Test-Time Training changes the setting of unsupervised domain adaptation so as not to anticipate any particular test distribution, while taking inspiration from its algorithms.
Our paper is a follow-up to \citet{sun2019uda}, which we explain and empirically compare with in \Cref{results}.
Our update rule can be viewed as performing \emph{one-sample unsupervised domain adaptation} on the fly, with the caveat that standard domain adaptation techniques might become ill-defined when there is only one sample from the target domain.
\paragraph{Domain generalization} studies the setting where a meta distribution generates multiple environment distributions, some of which are available during training (source), while others are used for testing (target)
\citep{li2018deep, shankar2018generalizing, muandet2013domain, balaji2018metareg, ghifary2015domain, motiian2017unified, li2017deeper, gan2016learning}.
With only a few environments, information on the meta distribution is often too scarce to be helpful,
and with many environments, we are back to the i.i.d. setting where each environment can be seen as a sample, and a strong baseline is to simply train on all the environments \citep{li2019episodic}.
The setting of domain generalization is limited by the inherent tradeoff between specificity and generality of a fixed decision boundary, and the fact that generalization is again elusive outside of the meta distribution i.e. the actual $P$ learned by the algorithm.
\paragraph{One (few)-shot learning} studies how to learn a new task or a new classification category using only one (or a few) sample(s), on top of a general representation that has been learned on diverse samples
\citep{snell2017prototypical, vinyals2016matching, fei2006one, ravi2016optimization, li2017meta, finn2017model, gidaris2018dynamic}.
Our update rule can be viewed as performing \emph{one-shot self-supervised learning} and can potentially be improved by progress in one-shot learning.
\paragraph{Continual learning} (a.k.a. learning without forgetting) studies the setting where a model is made to learn a sequence of tasks, and not forget about the earlier ones while training for the later ones \citep{li2017learning, lopez2017gradient, kirkpatrick2017overcoming, santoro2016meta}. In contrast, with Test-Time Training, we are not concerned about forgetting the past test samples since they have already been evaluated on; and if a past sample comes up by any chance, it would go through Test-Time Training again. In addition, the impact of forgetting the training set is minimal, because both tasks have already been jointly trained.
\paragraph{Online learning} (a.k.a. online optimization) is a well-studied area of learning theory
\citep{shalev2012online, hazan2016introduction}.
The basic setting repeats the following: receive $x_t$, predict $\hat{y}_t$, receive $y_t$ from a worst-case oracle, and learn.
Final performance is evaluated using the regret, which colloquially translates to how much worse the online learning algorithm performs in comparison to the best fixed model in hindsight.
In contrast, our setting never reveals any $y_t$ during testing even for the online version, so we do not need to invoke the concept of the worst-case oracle or the regret.
Also, due to the lack of feedback from the environment after predicting, our algorithm is motivated to learn (with self-supervision) before predicting $\hat{y}_t$ instead of after.
Note that some of the previously covered papers \cite{hoffman2014continuous, jain2011online, mullapudi2018online} use the term ``online learning'' outside of the learning theory setting, so the term can be overloaded.
\section{Informal Discussion on Our Variable Decision Boundary}
\label{vdb}
In the introduction, we claim that in traditional supervised learning ${\bm{\theta}}$ gives a fixed decision boundary, while our ${\bm{\theta}}$ gives a variable decision boundary. Here we informally discuss this claim.
Denote the input space $\mathcal{X}$ and output space $\mathcal{Y}$.
A decision boundary is simply a mapping $f: \mathcal{X} \rightarrow \mathcal{Y}$.
Let $\Theta$ be a model class, e.g. $\mathbb{R}^d$.
Now consider a family of parametrized functions $g_{\bm{\theta}}: \mathcal{X} \rightarrow \mathcal{Y}$, where ${\bm{\theta}}\in \Theta$.
In the context of deep learning, $g$ is the neural network architecture and ${\bm{\theta}}$ contains the parameters.
We say that $f$ is a fixed decision boundary w.r.t. $g$ and $\Theta$
if there exists ${\bm{\theta}}\in\Theta$ s.t. $f(x) = g_{\bm{\theta}}(x)$ for every $x\in\mathcal{X}$,
and a variable decision boundary if for every $x\in\mathcal{X}$, there exists ${\bm{\theta}}\in\Theta$ s.t. $f(x) = g_{\bm{\theta}}(x)$.
Note how selection of ${\bm{\theta}}$ can depend on $x$ for a variable decision boundary, and cannot for a fixed one. It is then trivial to verify that our claim is true under those definitions.
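As an informal illustration of the definitions (a toy sketch; the one-parameter family $g_\theta(x) = \theta x$ and the per-input rule for choosing $\theta$ are ours, not the actual architecture):

```python
import numpy as np

def g(x, theta):
    return theta * x                      # the parametrized family g_theta

def f_variable(x, theta0, lr=0.5):
    theta_x = theta0 + lr * x             # theta chosen as a function of x
    return g(x, theta_x)                  # = theta0*x + lr*x**2

# f_variable is quadratic in x, so no single fixed theta in this family
# reproduces it on every input:
xs = np.array([1.0, 2.0])
ys = np.array([f_variable(x, theta0=1.0) for x in xs])
theta_fit = ys[0] / xs[0]                 # the fixed theta matching x = 1
assert not np.isclose(g(xs[1], theta_fit), ys[1])
```

At every single input, however, some $\theta$ in the family matches $f$, which is exactly the definition of a variable decision boundary.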
A critical reader might say that with an arbitrarily large model class, can't every decision boundary be fixed?
Yes, but this is not the end of the story.
Let $d = \dim(\mathcal{X}) \times \dim(\mathcal{Y})$, and consider the enormous model class $\Theta' = \mathbb{R}^d$ which is capable of representing all possible mappings between $\mathcal{X}$ and $\mathcal{Y}$.
Let $g'_{{\bm{\theta}}'}$ simply be the mapping represented by ${\bm{\theta}}'\in\Theta'$.
A variable decision boundary w.r.t. $g$ and $\Theta$ then indeed must be a fixed decision boundary w.r.t. $g'$ and $\Theta'$, but we would like to note two things.
First, without any prior knowledge, generalization in $\Theta'$ is impossible with any finite amount of training data;
reasoning about $g'$ and $\Theta'$ is most likely not productive from an algorithmic point of view, and the concept of a variable decision boundary is to avoid such reasoning.
Second, selecting ${\bm{\theta}}$ based on $x$ for a variable decision boundary can be thought of as ``training'' on all points $x\in\mathcal{X}$; however, ``training'' only happens when necessary, for the $x$ that it actually encounters.
Altogether, the concept of a variable decision boundary is different from what can be described
by traditional learning theory.
A formal discussion is beyond the scope of this paper and might be of interest to future work.
\section{Computational Aspects of Our Method}
\label{computational}
At test time, our method is $2~\times$
\texttt{batch\_size}
$\times$
\texttt{number\_of\_iterations}
times slower than regular testing,
which only performs a single forward pass for each sample.
As the first work on Test-Time Training, this paper is more concerned with improving robustness than with computational efficiency, but here we provide two potential speedups that might be useful yet have not been thoroughly verified.
The first is to use the thresholding trick on $l_s$, introduced as a solution for the small batches problem in the method section. For the models considered in our experiments, roughly $80\%$ of the test instances fall below the threshold, so Test-Time Training can only be performed on the other $20\%$ without much effect on performance, because those $20\%$ contain most of the samples with wrong predictions.
The second is to reduce the \texttt{number\_of\_iterations} of test-time updates. For the online version, the \texttt{number\_of\_iterations} is already 1, so there is nothing to do. For the standard version, we have done some preliminary experiments setting \texttt{number\_of\_iterations} to 1 (instead of 10) and learning rate to 0.01 (instead of 0.001), and observing results almost as good as the standard hyper-parameter setting.
A more in-depth discussion of efficiency is left for future work, which might, during training, explicitly make the model amenable to fast updates.
\input{tex/proofs}
\pagebreak
\section{Additional Results on the Common Corruptions Dataset}
\label{results_additional}
For table aesthetics, we use the following abbreviations:
B for baseline, JT for joint training, TTT for Test-Time Training standard version, and TTT-Online for online Test-Time Training i.e. the online version.
We have abbreviated the names of the corruptions, in order: original test set, Gaussian noise, shot noise, impulse noise, defocus blur, glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic transformation, pixelation, and JPEG compression.
\subsection{Results Using Batch Normalization}
As discussed in the results section, Batch Normalization (BN) is ineffective for small batches, which are the inputs for Test-Time Training (both standard and online version) since there is only one sample available when forming each batch; therefore, our main results are based on a ResNet using Group Normalization (GN).
\autoref{figure:bn} and \autoref{table:bn} show results of our method on CIFAR-10-C level 5, with a ResNet using Batch Normalization (BN). These results are only meant to be a point of reference for the curious readers.
In the early stage of this project, we experimented with two potential solutions to the small batches problem with BN. The naive solution is to fix the BN layers during Test-Time Training, but this diminishes the performance gains since there are fewer shared parameters.
The better solution, adopted for the results below, is hard example mining: instead of updating on all inputs, we only update on inputs that incur a large self-supervised task loss $l_s$,
for which the potential improvements might outweigh the negative effects of inaccurate batch statistics.
Test-Time Training (standard version) is still very effective with BN.
In fact, some of the improvements are quite dramatic, such as on contrast (34\%), defocus blur (18\%) and Gaussian noise (22\% compared to joint training, and 16\% compared to the baseline). Performance on the original distribution is still almost the same; the original error with BN is in fact slightly lower than with GN, and takes half as many epochs to converge.
We did not experiment further with BN for two reasons: 1) the online version does not work with BN, because the problem with inaccurate batch statistics is exacerbated when training online for many (e.g. 10000) steps; 2) the baseline error for almost every corruption type is significantly higher with BN than with GN. Although unrelated to the main idea of our paper, we make the interesting note that \emph{GN significantly improves model robustness}.
\subsection{Additional Baseline: Adversarial Logit Pairing}
As discussed in the results section, \citet{hendrycks2019benchmarking} point to Adversarial Logit Pairing (ALP) \citep{kannan2018adversarial} as an effective method for improving model robustness to corruptions and perturbations, even though it was designed to defend against adversarial attacks.
We take ALP as an additional baseline on all benchmarks based on CIFAR-10 (using GN), following the training procedure in \citet{kannan2018adversarial} and their recommended hyper-parameters.
The implementation of the adversarial attack comes from the codebase of \citet{ding2019advertorch}.
We did not run ALP on ImageNet because the two papers we reference for this method, \citet{kannan2018adversarial} and \citet{hendrycks2019benchmarking}, did not run on ImageNet or make any claim or recommendation.
\subsection{Results on CIFAR-10-C and ImageNet-C, Level 5}
\autoref{table:full_c10c} and \autoref{table:imgnet} correspond to the bar plots in the results section. Two rows of \autoref{table:full_c10c} have been presented as \autoref{table:uda} in the main text.
\begin{figure*}
\centering
\includegraphics[width=0.6\textwidth]{plots/distorted}
\caption{
Sample images from the Common Corruptions Benchmark, taken from the original paper by \citet{hendrycks2019benchmarking}.
}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{plots/C10C_layer2_5_bn_expand_final}
\caption{
Test error (\%) on CIFAR-10-C, level 5, ResNet-26 with Batch Normalization.
\label{figure:bn}
}
\end{figure*}
\begin{table*}[ht]
\footnotesize
\begin{center}
{
\setlength\tabcolsep{3.5pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
& orig& gauss& shot& impul& defoc& glass& motn& zoom& snow& frost& fog& brit& contr& elast& pixel& jpeg\\
\hline
B & 7.9& 63.9& 58.8& 64.3& 46.3& 54.6& 41.6& 45.9& 31.9& 44.0& 37.5& 13.0& 69.2& 33.8& 61.4& 31.7\\
\hline
JT & 7.5& 70.7& 65.6& 67.2& 43.1& 55.4& 40.9& 42.7& 30.3& 44.5& 42.5& 12.7& 58.6& 30.7& 62.6& 31.9\\
\hline
TTT & 7.9& 47.9& 45.2& 54.8& 27.6& 50.4& 31.5& 30.9& 28.7& 34.3& 26.9& 12.6& 35.2& 30.6& 51.2& 31.3\\
\hline
\end{tabular}
}
\end{center}
\vspace{-2ex}
\caption{
Test error (\%) on CIFAR-10-C, level 5, ResNet-26 with Batch Normalization.
}
\label{table:bn}
\end{table*}
\subsection{Results on CIFAR-10-C, Levels 1-4}
The following bar plots and tables are on levels 1-4 of CIFAR-10-C.
The original distribution is the same for all levels, as are our results on the original distribution.
\subsection{Direct Comparison with \citet{hendrycks2019using}}
\label{reviewer_stuff}
The following comparison has been requested by an anonymous reviewer for our final version.
Our joint training baseline is based on \citet{hendrycks2019using}, but also incorporates some architectural changes (see below). We found these changes improved the robustness of our method, and felt that it was important to give the baseline the same benefit. Note that our joint training baseline overall performs better than that of \citet{hendrycks2019using}: comparing Table S2 to their Figure 3 (provided by the authors), our baseline has an average error of 22.8\% across all corruptions and levels, while their average error is 28.6\%.
Summary of architectural changes: 1) Group Normalization (GN) instead of Batch Normalization (BN). For completeness, the results with BN are provided in Table S1; cf. the GN results in Table S2: GN significantly improves robustness, with or without self-supervision. 2) We split after the second residual group, while they split after the third residual group, right before the linear layer. This consistently gives about a 0.5\% - 1\% improvement. 3) We use a ResNet-26, while they use a 40-2 Wide ResNet. Our baseline still performs better than their method even though our network is 4x smaller, due to the two tricks above.
\newpage
\begin{table*}[ht]
\footnotesize
\begin{center}
{
\setlength\tabcolsep{3.5pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
& orig& gauss& shot& impul& defoc& glass& motn& zoom& snow& frost& fog& brit& contr& elast& pixel& jpeg\\
\hline
B & 8.9& 50.5& 47.2& 56.1& 23.7& 51.7& 24.3& 26.3& 25.6& 34.4& 28.1& 13.5& 25.0& 27.4& 55.8& 29.8\\
\hline
JT & 8.1& 49.4& 45.3& 53.4& 24.2& 48.5& 24.8& 26.4& 25.0& 32.5& 27.5& 12.6& 25.3& 24.0& 51.6& 28.7\\
\hline
TTT & 7.9& 45.6& 41.8& 50.0& 21.8& 46.1& 23.0& 23.9& 23.9& 30.0& 25.1& 12.2& 23.9& 22.6& 47.2& 27.2\\
\hline
TTT-Online & 8.2& 25.8& 22.6& 30.6& 14.6& 34.4& 18.3& 17.1& 20.0& 18.0& 16.9& 11.2& 15.6& 21.6& 18.1& 21.2\\
\hline
UDA-SS & 9.0& 28.2& 26.5& 20.8& 15.6& 43.7& 24.5& 23.8& 25.0& 24.9& 17.2& 12.7& 11.6& 22.1& 20.3& 22.6\\
\hline
ALP & 16.5& 22.7& 22.9& 28.3& 25.0& 25.6& 27.4& 23.1& 25.2& 27.2& 64.8& 21.7& 73.6& 23.0& 20.2& 18.9\\
\hline
\end{tabular}
}
\end{center}
\vspace{-2ex}
\caption{
Test error (\%) on CIFAR-10-C, level 5, ResNet-26.
}
\label{table:full_c10c}
\end{table*}
\begin{table*}[ht]
\footnotesize
\begin{center}
{
\setlength\tabcolsep{3.5pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
& orig& gauss& shot& impul& defoc& glass& motn& zoom& snow& frost& fog& brit& contr& elast& pixel& jpeg\\
\hline
B & 68.9& 1.3& 2.0& 1.3& 7.5& 6.6& 11.8& 16.2& 15.7& 14.9& 15.3& 43.9& 9.7& 16.5& 15.3& 23.4\\
\hline
JT & 69.1& 2.1& 3.1& 2.1& 8.7& 6.7& 12.3& 16.0& 15.3& 15.8& 17.0& 45.3& 11.0& 18.4& 19.7& 22.9\\
\hline
TTT & 69.0& 3.1& 4.5& 3.5& 10.1& 6.8& 13.5& 18.5& 17.1& 17.9& 20.0& 47.0& 14.4& 20.9& 22.8& 25.3\\
\hline
TTT-Online & 68.8& 26.3& 28.6& 26.9& 23.7& 6.6& 28.7& 33.4& 35.6& 18.7& 47.6& 58.3& 35.3& 44.3& 47.8& 44.3\\
\hline
\end{tabular}
}
\end{center}
\vspace{-2ex}
\caption{
Test accuracy (\%) on ImageNet-C, level 5, ResNet-18.
}
\label{table:imgnet}
\end{table*}
\newpage
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{plots/C10C_layer2_4_gn_expand_final}
\vspace{-4ex}
\caption{
Test error (\%) on CIFAR-10-C, level 4.
See the results section for details.
}
\vspace{4ex}
\end{figure*}
\begin{table*}[ht]
\footnotesize
\begin{center}
{
\setlength\tabcolsep{3.5pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
& orig& gauss& shot& impul& defoc& glass& motn& zoom& snow& frost& fog& brit& contr& elast& pixel& jpeg\\
\hline
B & 8.9& 46.4& 39.2& 44.8& 15.3& 52.5& 19.1& 20.5& 21.3& 26.9& 13.3& 10.5& 13.7& 20.8& 35.3& 26.9\\
\hline
JT & 8.1& 45.0& 38.3& 42.2& 16.4& 50.2& 20.7& 20.5& 21.1& 25.4& 14.1& 10.0& 14.7& 19.0& 33.2& 25.1\\
\hline
TTT & 7.9& 41.5& 35.4& 39.8& 15.0& 47.8& 19.1& 18.4& 20.1& 24.0& 13.5& 10.0& 14.1& 17.7& 29.4& 24.5\\
\hline
TTT-Online & 8.2& 22.9& 20.0& 23.9& 11.2& 35.1& 15.6& 13.8& 18.6& 15.9& 12.3& 9.7& 11.9& 16.7& 13.6& 19.8\\
\hline
ALP & 16.5& 21.3& 20.5& 24.5& 20.7& 25.9& 23.7& 21.4& 24.2& 23.9& 42.2& 17.5& 53.7& 22.1& 19.1& 18.5\\
\hline
\end{tabular}
}
\end{center}
\vspace{-2ex}
\caption{
Test error (\%) on CIFAR-10-C, level 4, ResNet-26.
}
\end{table*}
\begin{figure*}[ht]
\vspace{8ex}
\centering
\includegraphics[width=1.0\textwidth]{plots/C10C_layer2_3_gn_expand_final}
\vspace{-4ex}
\caption{
Test error (\%) on CIFAR-10-C, level 3.
See the results section for details.
}
\end{figure*}
\begin{table*}[ht]
\vspace{4ex}
\footnotesize
\begin{center}
{
\setlength\tabcolsep{3.5pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
& orig& gauss& shot& impul& defoc& glass& motn& zoom& snow& frost& fog& brit& contr& elast& pixel& jpeg\\
\hline
B & 8.9& 42.2& 35.1& 30.7& 12.2& 41.7& 18.6& 17.5& 19.0& 25.3& 10.8& 9.7& 11.6& 15.3& 21.7& 24.6\\
\hline
JT & 8.1& 40.2& 34.4& 29.9& 12.2& 37.9& 20.8& 17.3& 18.4& 25.0& 11.4& 9.2& 12.0& 15.2& 20.8& 22.8\\
\hline
TTT & 7.9& 37.2& 31.6& 28.6& 11.5& 35.8& 19.1& 15.8& 17.8& 23.3& 11.0& 9.1& 11.6& 14.3& 18.9& 22.3\\
\hline
TTT-Online & 8.2& 21.3& 17.7& 17.9& 9.0& 23.4& 15.3& 12.5& 16.4& 15.8& 10.9& 9.0& 10.7& 12.8& 12.2& 18.7\\
\hline
ALP & 16.5& 20.0& 19.3& 20.5& 19.2& 21.2& 24.0& 20.5& 20.9& 24.2& 30.1& 16.6& 39.6& 20.9& 17.8& 18.0\\
\hline
\end{tabular}
}
\end{center}
\caption{
Test error (\%) on CIFAR-10-C, level 3, ResNet-26.
}
\end{table*}
\begin{figure*}[ht]
\vspace{-1ex}
\centering
\includegraphics[width=1.0\textwidth]{plots/C10C_layer2_2_gn_expand_final}
\vspace{-4ex}
\caption{
Test error (\%) on CIFAR-10-C, level 2.
See the results section for details.
}
\vspace{-1ex}
\end{figure*}
\begin{table*}[ht]
\footnotesize
\begin{center}
{
\setlength\tabcolsep{3.5pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
& orig& gauss& shot& impul& defoc& glass& motn& zoom& snow& frost& fog& brit& contr& elast& pixel& jpeg\\
\hline
B & 8.9& 31.7& 22.6& 24.3& 9.9& 42.6& 14.9& 14.7& 21.7& 18.4& 9.8& 9.1& 10.0& 13.1& 17.1& 22.4\\
\hline
JT & 8.1& 31.0& 22.6& 23.4& 9.1& 39.2& 16.4& 14.2& 21.2& 17.5& 9.4& 8.3& 10.6& 12.8& 15.9& 20.5\\
\hline
TTT & 7.9& 28.8& 20.7& 23.0& 9.0& 36.6& 15.4& 13.1& 20.2& 16.9& 9.2& 8.3& 10.2& 12.5& 14.8& 19.7\\
\hline
TTT-Online & 8.2& 16.8& 13.8& 15.5& 8.5& 23.4& 13.3& 11.5& 16.8& 12.7& 9.4& 8.4& 9.7& 12.4& 11.5& 17.0\\
\hline
ALP & 16.5& 18.0& 17.2& 19.0& 17.8& 20.7& 21.2& 19.3& 19.0& 20.1& 22.4& 16.3& 29.2& 20.3& 17.4& 17.8\\
\hline
\end{tabular}
}
\end{center}
\vspace{-2ex}
\caption{
Test error (\%) on CIFAR-10-C, level 2, ResNet-26.
}
\end{table*}
\begin{figure*}[ht]
\vspace{-1ex}
\centering
\includegraphics[width=1.0\textwidth]{plots/C10C_layer2_1_gn_expand_final}
\vspace{-4ex}
\caption{
Test error (\%) on CIFAR-10-C, level 1.
See the results section for details.
}
\vspace{-1ex}
\end{figure*}
\begin{table*}[ht]
\footnotesize
\begin{center}
{
\setlength\tabcolsep{3.5pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
& orig& gauss& shot& impul& defoc& glass& motn& zoom& snow& frost& fog& brit& contr& elast& pixel& jpeg\\
\hline
B & 8.9& 21.7& 17.1& 17.0& 9.0& 44.0& 12.1& 13.9& 14.3& 13.4& 9.2& 8.9& 9.0& 13.2& 12.0& 17.3\\
\hline
JT & 8.1& 20.4& 16.6& 16.9& 8.2& 40.5& 12.2& 13.0& 13.1& 12.3& 8.4& 8.1& 8.5& 12.9& 11.3& 15.9\\
\hline
TTT & 7.9& 19.1& 15.8& 16.5& 8.0& 37.9& 11.7& 12.2& 12.8& 11.9& 8.2& 8.0& 8.3& 12.6& 11.1& 15.5\\
\hline
TTT-Online & 8.2& 13.8& 11.9& 12.2& 8.5& 24.4& 10.5& 11.5& 12.4& 10.7& 8.5& 8.3& 8.6& 12.4& 10.7& 14.4\\
\hline
ALP & 17.0& 16.8& 17.6& 16.8& 20.9& 18.7& 19.0& 17.3& 17.5& 17.4& 16.1& 18.4& 20.4& 17.0& 17.2 & 17.5\\
\hline
\end{tabular}
}
\end{center}
\vspace{-2ex}
\caption{
Test error (\%) on CIFAR-10-C, level 1, ResNet-26.
}
\end{table*}
\begin{abstract}
We present a complete classification of complex plane algebraic curves, equipped with the induced Euclidean metric, up to global bilipschitz homeomorphism.
\end{abstract}
\section{Introduction}
One of the most natural questions in the investigation of a class of mathematical objects is the problem of classifying these objects. Here the classification problem is treated from the outer metric viewpoint: all subsets of the Euclidean space $\R^n$ are considered equipped with the induced Euclidean metric.
Our objects are complex algebraic plane curves, and we obtain a complete classification of them up to global bilipschitz homeomorphism.
In order to present our results precisely, let us introduce some definitions and notations.
\begin{definition}
Let $(M,d)$ and $(M',d')$ be two metric spaces.
A map $f:M\to M'$ is \textbf{Lipschitz} if there exists a real constant $c>0$ such that
$$ d'(f(x),f(y))\leq cd(x,y)\text{ for all }x,y\in M.$$
A Lipschitz map $f:M\to M'$ is called \textbf{bilipschitz} if its inverse exists and it is Lipschitz. We say that $M$ and $M'$ are \textbf{bilipschitz equivalent} if there exists a bilipschitz map $f:M\to M'$ between them.
The equivalence class of $M$ in this relation is called the \textbf{Lipschitz geometry} of $M$.
\end{definition}
In one of the recent works on Lipschitz geometry, Neumann and Pichon \cite{Anne} proved that two germs of plane complex curves are bilipschitz homeomorphic if and only if they have the same topological type; the meaning of topological type here is given in the following definition.
\begin{definition}
Let $(C_1,p_1)\subset (S_1,p_1)$ and $(C_2,p_2)\subset (S_2,p_2)$ be two germs of complex curves on smooth surfaces.
We say that $(C_1,p_1)$ and $(C_2,p_2)$ have the same \textbf{topological type} if there is a homeomorphism of germs $h \colon (S_1,p_1) \to (S_2,p_2)$ such that $h(C_1)=C_2$.
\end{definition}
Previous contributions on the problem of classification of germs of plane complex curves up to bilipschitz equivalence were made by Fernandes \cite{fernandes2003}, and Pham and Teissier \cite{pham:hal-00384928}.
Let us point out that the theorems of Pham and Teissier \cite{pham:hal-00384928}, Fernandes \cite{fernandes2003}, and Neumann and Pichon \cite{Anne} are local results.
On the other hand, aiming to scrutinize the global Lipschitz geometry of algebraic sets, Fernandes and Sampaio \cite{Fernandes2019} arrived at the notion of bilipschitz equivalence at infinity of subsets of Euclidean space.
\begin{definition}\label{bi-Lipschitz equivalent at infinity}
Let $X\subset \R^n$ and $Y\subset\R^m$ be two subsets.
We say that $X$ and $Y$ are {\bf bilipschitz equivalent at infinity} if there exist compact subsets $K\subset\R^n$ and $\widetilde K\subset \R^m$, and a bilipschitz map $\Phi \colon X\backslash K\rightarrow Y\backslash \widetilde K$.
The equivalence class of $X$ in this relation is called the \textbf{Lipschitz geometry at infinity} of $X$.
\end{definition}
One of the goals of this paper is to bring a complete bilipschitz classification of complex algebraic plane curves at infinity.
In order to present our result concerning that classification, we need to introduce more definitions and notations. We denote by $\P^2$ the projective plane.
Let $[x:y:z]\in\P^2$ denote the subspace spanned by $(x,y,z)$, and let $\iota: \C^2\hookrightarrow \P^2$ be the embedding given by $\iota(x,y)=[x:y:1]$.
The \textbf{line at infinity}, denoted by $L_{\infty}$, is the complement of $\iota( \mathbb{C}^2)$ in $\mathbb{P}^2 $.
\begin{definition}
Let $f\in \C[x,y]$ be a polynomial of degree $n$.
The \textbf{homogenization} of $f$ is the homogeneous polynomial $\widetilde{f}\in\C[x,y,z]$ defined by
$$\widetilde{f}(x,y,z)=z^nf\left(\frac{x}{z},\frac{y}{z}\right).$$
Let $C$ be a complex algebraic plane curve with equation $f(x,y)=0$.
The curve $\widetilde{C}=\{[x:y:z]\in \P^2:\widetilde{f}(x,y,z)=0\}$ in the projective plane is called the \textbf{homogenization} of $C$.
The \textbf{points at infinity} of $C$ are the elements of the intersection $\widetilde{C}\cap L_{\infty}$.
\end{definition}
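As a quick computational sketch of this definition (the dictionary encoding of polynomials, with keys $(i,j)$ for monomials $x^iy^j$, is our own illustration, not notation from the paper):

```python
def homogenize(f):
    """Homogenize a polynomial f given as {(i, j): coeff} for x^i y^j."""
    n = max(i + j for (i, j) in f)  # degree of f
    return {(i, j, n - i - j): c for (i, j), c in f.items()}

# f(x, y) = y^2 - x^3 - 1, a curve of degree 3
f = {(0, 2): 1, (3, 0): -1, (0, 0): -1}
F = homogenize(f)  # y^2 z - x^3 - z^3

# The points at infinity lie on z = 0, i.e. on the zero set of the
# terms of F carrying no factor of z.
at_infinity = {(i, j): c for (i, j, k), c in F.items() if k == 0}
```

Here the restriction of $\widetilde{f}$ to $z=0$ is $-x^3$, so this curve has the single point at infinity $[0:1:0]$.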
We prove that the Lipschitz geometry at infinity of a complex algebraic plane curve $C$ determines and is determined by the topological type of the germ of the curve $\widetilde{C}\cup L_{\infty}$ at each point at infinity of $C$.
Since the topological type of a germ of complex plane curve is usually presented in terms of dual resolution graphs, we also encode the Lipschitz geometry at infinity in a tree obtained as a quotient of dual resolution graphs, as follows.
\begin{theorem}\label{atinfinity}
Let $C$ and $C'$ be two complex algebraic plane curves.
The following are equivalent:
\begin{enumerate}
\item\label{it1} $C$ and $C'$ have the same Lipschitz geometry at infinity;
\item\label{it2} there is a bijection $\psi$ between the set of points at infinity of $C$ and the set of points at infinity of $C'$ such that, for each point at infinity $p$ of $C$, $(\widetilde{C}\cup L_\infty,p)$ has the same topological type as $(\widetilde{C}'\cup L_\infty,\psi(p))$;
\item\label{it3} there is an isomorphism between the Lipschitz tree at infinity of $C$ and $C'$ (see definition \ref{liptree}).
\end{enumerate}
\end{theorem}
Armed with the classification of the Lipschitz geometry of germs and of the Lipschitz geometry at infinity of complex algebraic plane curves we obtain our main result.
\begin{theorem}\label{global}
Let $C$ and $\Gamma$ be two complex plane algebraic curves with irreducible components $ C= \bigcup_{i\in I} C_i$ and $\Gamma=\bigcup_{j\in J} \Gamma_j$. The following are equivalent:
\begin{enumerate}
\item \label{iti}$C$ and $\Gamma$ have the same Lipschitz geometry;
\item \label{itii} there are bijections $\sigma: I\to J$ and $\varphi$ between the set of singular points of $\widetilde{C}\cup L_\infty$ and the set of singular points of $\widetilde{\Gamma}\cup L_\infty$ such that $p\in L_\infty$ if and only if $\varphi(p)\in L_\infty$, $(\widetilde{C}\cup L_\infty,p)$ has the same topological type as $(\widetilde{\Gamma}\cup L_\infty,\varphi(p))$, and each $(\widetilde{C}_i\cup L_\infty,p)$ has the same topological type as $(\widetilde{\Gamma}_{\sigma(i)}\cup L_\infty,\varphi(p))$;
\item \label{itiii} there is an isomorphism between the Lipschitz graph of $C$ and $\Gamma$ (see definition \ref{Lipgraph}).
\end{enumerate}
\end{theorem}
We organize the paper in the following way. In Section \ref{trees}, we present the definitions of the Eggers-Wall tree and the carousel tree. We also describe how one obtains the Eggers-Wall tree from the carousel tree.
Section \ref{lipimplies} is devoted to proving that the Lipschitz geometry at infinity of a complex plane algebraic curve determines the topological data in (\ref{it2}) of Theorem \ref{atinfinity}, i.e., we prove that (\ref{it1}) implies (\ref{it2}).
We also give the definition of the Lipschitz tree at infinity and explain the equivalence between (\ref{it2}) and (\ref{it3}).
In Section \ref{topsimplies}, we prove that (\ref{it2}) implies (\ref{it1}) of Theorem \ref{atinfinity}.
In the last section, we define the Lipschitz graph of a complex plane algebraic curve and prove Theorem \ref{global}.
\subsection*{Acknowledgments.}
I would like to thank Edson Sampaio, Lev Birbrair and Rodrigo Mendes for valuable discussions on the subject.
I am deeply indebted to Alexandre Fernandes and Anne Pichon for supervising me throughout this work, which is part of my PhD thesis.
This work has been partially supported by CAPES/COFECUB project 88887.143177/2017-00 - An\'alise Geom\'etrica e Teoria de Singularidade em Espa\c cos Estratificados, by the project Lipschitz geometry of singularities (LISA) of the Agence Nationale de la Recherche (project ANR-17-CE40-0023), and by Instituto Federal de Educa\c c\~ao, Ci\^encia e Tecnologia do Cear\'a (IFCE).
\section{Plane curve germs and their Eggers-Wall and carousel trees }\label{trees}
In this section we explain the basic notations and conventions used throughout the paper about reduced germs $C$ of complex curves on smooth surfaces.
Then we define the Eggers-Wall tree and the carousel tree of such a germ relative to a smooth branch contained in it.
The definition of the Eggers-Wall tree given in this paper is the same as the one presented in \cite{GarciaBarroso2019}.
Finally, we describe how one obtains the Eggers-Wall tree from the carousel tree. This process is also described in \cite{Anne}.
We recall some definitions and conventions about power series with positive rational exponents. Let $n$ be a positive integer; the ring $\C[[x^{1/n}]]$ consists of sequences $ (A_k)_{k\in \N}$ of elements of $\C$.
Let $\eta=(A_k)_{k\in \N}\in \C[[x^{1/n}]]$, we denote this element by
$$\eta=\sum_{k=0}^{\infty}A_kx^{k/n}.$$
The \textbf{exponents} of $\eta$ are the numbers $k/n$ such that $A_k\neq0$. We denote the set of exponents of $\eta$ by $\mathcal{E}(\eta)$. The \textbf{order} of $\eta\neq 0$, denoted by $\ord_x \eta$, is the smallest exponent of $\eta$. For technical reasons it is convenient to define the order of the zero series to be $+\infty$. The group of $n$-th roots of unity acts on $\C[[x^{1/n}]]$ by the rule
$$(\rho,\eta)\mapsto \eta(\rho\cdot x^{1/n}):=\sum_{k=0}^{\infty}A_k\rho^kx^{k/n},\text{ where $\rho$ is an $n$-th root of unity.}$$
Throughout this section, $\mathcal{S}$ denotes a complex manifold of dimension two.
We fix a point $O\in \mathcal{S}$.
All coordinate charts of this section are defined in a neighborhood of $O$; moreover, the point $O$ always has coordinates $(0,0)\in\C^2$. A curve germ in $(\mathcal{S},O)$ is the zero set of a non-constant holomorphic function germ from $(\mathcal{S},O)$ to $(\C,0)$. We denote by $(C,O)$ the germ of $C$ at $O$ and by $\mathcal{O}_O$ the ring of holomorphic function germs at $O$.
Any chart of $\mathcal{S}$ induces an isomorphism between $\mathcal{O}_O$ and $\C\{x,y\}$. Since $\C\{x,y\}$ is factorial, $\mathcal{O}_O$ is factorial. Let $C$ be a complex curve with equation $f=0$.
Then $f$ can be written as a product $g_1^{\alpha_1}\ldots g_k^{\alpha_k}$, with $g_1,\ldots, g_k$ irreducible and the $\alpha_j$'s positive integers.
The zero sets of the $g_j$'s are the \textbf{branches} of $C$. When $k=1$, we say that $C$ is \textbf{irreducible}.
The holomorphic function $f$ is \textbf{reduced} if each $\alpha_j=1$. We will always suppose all equations for curves are reduced.
The curve $C$ is said to be \textbf{smooth at} $O$ if there is a neighborhood $U$ of $O$ in $\mathcal{S}$ such that $C\cap U$ is a complex submanifold of $U$.
The next definitions of this section depend on the choice of a smooth curve $L$ at $O$.
In this section, we always choose a coordinate system $(x,y)$ such that $L=\{x=0\}$. Assume that a coordinate system $(x,y)$ is fixed.
Let $C$ be a curve on $\mathcal{S}$ and assume that $A$ is a branch of $C$ different from the curve $L$. Relative to the system $(x,y)$, the branch $A$ may be defined by a Weierstrass polynomial $f_A\in \C\{x\}[y]$, which is monic, and of degree $d_A$.
Note that the degree $d_A$ does not depend on the system of coordinates.
By the Newton-Puiseux Theorem, there exists a parametrization of $A$ of the form $\gamma_A(w)=(w^{d_A},\eta_A(w))$ where $\eta_A(w)=\sum_{k>0}a_kw^k\in \C\{w\}$.
Let $n$ be the product of the degrees of the Weierstrass polynomials of the branches of $C$ different from $L$.
We consider the formal power series $\sum_{k=0}^{\infty}A_kx^{k/n}\in \C[[x^{1/n}]]$ where
$$A_k=\begin{cases}
a_\frac{kd_A}{n}, & \text{if $n$ divides $kd_A$ }\\
0, & \text{otherwise.}
\end{cases}$$
We still denote by $\eta_A$ the formal power series $\sum_{k=0}^{\infty}A_kx^{k/n}$.
The \textbf{Newton-Puiseux roots relative to} $L$ of the branch $A$ are the formal power series $\eta_A(\rho\cdot x^{1/n})\in \C[[x^{1/n}]]$, for $\rho$ running through the $n$-th roots of 1.
Let $\rho\in\C$ be a primitive $n$-th root of unity; notice that there are only $d_A$ Newton-Puiseux roots relative to $L$ of the branch $A$, namely $$\eta_A(\rho\cdot x^{1/n}),\ldots, \eta_A(\rho^{d_A}\cdot x^{1/n}).$$
All the Newton-Puiseux roots relative to $L$ of the curve $A$ have the same exponents. Some of those exponents may be distinguished by looking at the differences of roots:
\begin{definition}
The \textbf{characteristic exponents relative to} $L$ of the curve $A$ are the $x$-orders $\ord_x (\eta_A-\eta'_A)$ of the differences between distinct Newton-Puiseux roots relative to $L$ of $A$.
\end{definition}
The characteristic exponents relative to $L$ of $A$ are the exponents of $\eta_A$ which, when written as a quotient of integers, need a denominator strictly bigger than the lowest common denominator of the previous exponents. That is, $\frac{l}{n}$ is a
characteristic exponent relative to $L$ of $A$ if and only if $N_l\frac{l}{n}\not\in\Z$, where $N_l=\min\{N\in \Z_{>0}\ ;\ \mathcal{E}(\eta_A)\cap[0,\frac{l}{n})\subset\frac{1}{N}\Z \}$.
By \cite[Proposition 3.10]{GarciaBarroso2019}
the characteristic exponents relative to $L$ do not depend on the coordinate system
$(x, y)$, but only on the branch $L$.
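The criterion above can be checked mechanically. The following sketch is our own illustration (the function name is ours), using Python's `fractions` module and the fact that the minimal $N$ in the definition of $N_l$ is the least common multiple of the denominators of the strictly smaller exponents:

```python
from fractions import Fraction
from math import lcm

def characteristic_exponents(exponents):
    """Characteristic exponents among the exponents of a Newton-Puiseux root:
    e is characteristic iff N*e is not an integer, where N is the smallest
    common denominator of the exponents strictly smaller than e."""
    result = []
    N = 1  # running common denominator of the exponents seen so far
    for e in sorted(exponents):
        if (N * e).denominator != 1:
            result.append(e)
        N = lcm(N, e.denominator)
    return result
```

For the branch $A$ of the example below, with exponents $3/2$ and $7/4$, both are characteristic; prepending a series that already starts with $x^{1/2}$ would make $3/2$ non-characteristic, since its denominator is no longer new.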
The \textbf{Newton-Puiseux roots relative} to $L$ of the curve $C$ are the Newton-Puiseux roots relative to $L$ of its branches different from $L$. Let us denote by $\mathcal{I}_C$ the set of
branches of $C$ which are different from $L$. Therefore, $C$ has $d_C:=\sum_{A\in \mathcal{I}_C} d_A$ Newton-Puiseux roots relative to $L$.
\begin{example}\label{examplelocal}
Let $L$ be the $y$-axis. Consider a plane curve $C$ whose branches $A$ and $B$ are parametrized by
$$\gamma_A(w)=(w^4,w^6+w^7),\, \gamma_B(w)=(w^2,w), $$
respectively. The Newton-Puiseux roots relative to $L$ of $A$ are
\begin{align*}
\eta_A(x^{1/8})& =x^{12/8}+x^{14/8},&
\eta_A(\rho x^{1/8})& =\rho^4x^{12/8}+\rho^6x^{14/8},\\
\eta_A(\rho^2 x^{1/8})&=x^{12/8}+\rho^4x^{14/8},&
\eta_A(\rho^3 x^{1/8})&=\rho^4x^{12/8}+\rho^2x^{14/8},
\end{align*}
where $\rho$ is a primitive 8-th root of unity. While the Newton-Puiseux roots relative to $L$ of $B$ are
\begin{align*}
\eta_B(x^{1/8})& =x^{4/8},&
\eta_B(\rho x^{1/8})& =\rho^4x^{4/8}.
\end{align*}
The characteristic exponents of $A$ relative to the $y$-axis are $3/2$ and $7/4$. The characteristic exponent of $B$ relative to the $y$-axis is $1/2$.
\end{example}
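The action of the roots of unity in Example \ref{examplelocal} can be verified numerically. The sparse-dictionary encoding of $\eta_A$ below is our own convention: a key $k$ stands for the exponent $k/8$.

```python
import cmath

def act(eta, rho):
    """Apply the substitution x^{1/n} -> rho * x^{1/n} to a series stored
    as {k: A_k}, where the key k encodes the exponent k/n."""
    return {k: a * rho ** k for k, a in eta.items()}

n = 8
rho = cmath.exp(2j * cmath.pi / n)   # a primitive 8th root of unity
eta_A = {12: 1, 14: 1}               # x^{12/8} + x^{14/8}

# eta_A(rho^2 x^{1/8}) = x^{12/8} + rho^4 x^{14/8}, and rho^4 = -1
root = act(eta_A, rho ** 2)
```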
We keep assuming that $A$ is a branch of $C$ different from $L$.
The Eggers-Wall tree of $A$ relative to $L$ is a geometrical way of encoding the set of characteristic exponents, as well as
the sequence of their successive common denominators:
\begin{definition}
The Eggers-Wall tree $\Theta_L (A)$ of the curve $A$ relative to $L$ is a
compact oriented segment endowed with the following supplementary structures:
\begin{itemize}
\item an increasing homeomorphism $\mathbf{e}_{L,A} : \Theta_L (A) \to [0,\infty]$, \textbf{the exponent function};
\item \textbf{marked points}, which are by definition the points whose values by the exponent function
are the characteristic exponents of $A$, as well as the smallest end of $\Theta_L (A)$,
labeled by $L$, and the greatest end, labeled by $A$.
\item an \textbf{index function} $\mathbf{i}_{L,A} : \Theta_L (A) \to \N$, which associates to each point $P \in \Theta_L (A)$ the smallest common denominator of the exponents
of a Newton-Puiseux root of $A$ which are strictly less than $\mathbf{e}_{L,A}(P)$.
\end{itemize}
\end{definition}
Let us consider now the case of a curve with several branches. In order to construct the Eggers-Wall tree in this case, one needs to know not only
the characteristic exponents of its branches, but also the \emph{exponent of coincidence}
of its pairs of branches:
\begin{definition}
If $A$ and $B$ are two distinct branches of $C$, then their \textbf{exponent of coincidence relative} to $L$ is defined by:
$$k_L (A, B) := \max\{\ord_x (\eta_A - \eta_B) \}, $$
where $\eta_A,\eta_B\in\C[[x^{1/n}]]$ vary among the Newton-Puiseux roots of $A$ and $B$, respectively.
\end{definition}
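For the curve of Example \ref{examplelocal}, the exponent of coincidence can be computed directly from the roots. The encoding of each root as an exponent-to-coefficient dictionary is our own; the coefficients below are read off from the roots listed in the example, with $\rho^2=i$ and $\rho^6=-i$.

```python
from fractions import Fraction as Fr

def ord_x(eta1, eta2):
    """Order of eta1 - eta2: the smallest exponent where the coefficients differ."""
    exps = set(eta1) | set(eta2)
    diff = [e for e in exps if abs(eta1.get(e, 0) - eta2.get(e, 0)) > 1e-9]
    return min(diff) if diff else None   # None stands for +infinity

# Newton-Puiseux roots of the branches A and B from the running example
roots_A = [
    {Fr(3, 2): 1,  Fr(7, 4): 1},
    {Fr(3, 2): -1, Fr(7, 4): -1j},
    {Fr(3, 2): 1,  Fr(7, 4): -1},
    {Fr(3, 2): -1, Fr(7, 4): 1j},
]
roots_B = [{Fr(1, 2): 1}, {Fr(1, 2): -1}]

# every difference eta_A - eta_B has a nonzero coefficient at x^{1/2}
k_L = max(ord_x(a, b) for a in roots_A for b in roots_B)
```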
\begin{definition}
Let $C$ be a germ of curve on $(\mathcal{S},O)$.
Let us denote by $\mathcal{I}_C$ the set of
branches of $C$ which are different from $L$. The Eggers-Wall tree $\Theta_L (C)$ of $C$
relative to $L$ is the rooted tree obtained as the quotient of the disjoint union of the individual Eggers-Wall trees $\Theta_L (A), A \in \mathcal{I}_C$, by the following equivalence relation. If $A, B \in\mathcal{I}_C$,
then we glue $\Theta_L (A)$ with $\Theta_L (B)$ along the initial segments $\mathbf{e}^{-1}_{L,A}([0, k_L (A, B)])$
and $\mathbf{e}^{-1}_{L,B}([0, k_L (A, B)])$ by:
$$\mathbf{e}^{-1}_{L,A}(\alpha) \sim \mathbf{e}^{-1}_{L,B}(\alpha),\text{ for all } \alpha \in [0, k_L (A, B)].$$
One endows $\Theta_L (C)$ with the \textbf{exponent function} $\mathbf{e}_L : \Theta_L (C) \to [0,\infty]$ and the index function
$\mathbf{i}_L : \Theta_L (C) \to \N$ induced by the initial exponent functions $\mathbf{e}_{L,A}$ and $\mathbf{i}_{L,A}$ respectively,
for $A$ varying among the irreducible components of $C$ different from $L$. The tree $\Theta_L (L)$ is the
trivial tree with vertex set a singleton whose element is labelled by $L$. If $L$ is an irreducible
component of $C$, then the marked point $L \in \Theta_L (L)$ is identified with the root of $\Theta_L (A)$ for
any $A \in \mathcal{I}_C$. The set of marked points of $\Theta_L (C)$ is the union of the set of marked points of the
Eggers-Wall tree of the branches of $C$ and of the set of ramification points of $\Theta_L (C)$.
\end{definition}
Again, the fact that the previous notions $\Theta_L (C), \mathbf{e}_L , \mathbf{i}_L$ depend only on $L$, and not on the coordinate system $(x, y)$, follows from \cite[Proposition 3.10]{GarciaBarroso2019}.
\begin{example}
Consider again the curve of Example \ref{examplelocal}. One has $k_L(A,B)=1/2$ and the Eggers-Wall tree of $C$ relative to $L$ is drawn in Figure \ref{fig:figure1}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw[thin ](0,0)--(0,4);
\draw[thin ](0,1)--(2,2.5);
\draw[fill ] (0,0)circle(2pt);
\draw[fill ] (0,1)circle(2pt);
\draw[fill ] (0,2)circle(2pt);
\draw[fill ] (0,3)circle(2pt);
\draw[fill ] (0,4)circle(2pt);
\draw[fill ] (2,2.5)circle(2pt);
\node(a)at(-0.3,0){$0$};
\node(a)at(-0.3,1){$\frac{1}{2}$};
\node(a)at(-0.3,2){$\frac{3}{2}$};
\node(a)at(-0.3,3){$\frac{7}{4}$};
\node(a)at(0.2,0.5){$1$};
\node(a)at(0.2,1.5){$1$};
\node(a)at(0.2,2.5){$2$};
\node(a)at(0.2,3.5){$4$};
\node(a)at(1,2){$2$};
\node(a)at(0.3,-0.3){$L$};
\node(a)at(0.3,4.3){$A$};
\node(a)at(2.3,2.5){$B$};
\end{tikzpicture}
\caption{Eggers-Wall tree of Example \ref{examplelocal}.}\label{fig:figure1}
\end{figure}
\end{example}
The carousel tree is a variant of the Eggers-Wall tree, but using all the Newton-Puiseux
roots of $C$, not only one root for each branch. The name was introduced in \cite{Anne} and it is inspired by the carousel geometrical model for the link of the curve $C$ described in \cite[Section 5.3]{Wall}.
\begin{definition}\label{carouseultree}
Let $C$ be a germ of curve on $\mathcal{S}$.
Let us denote by $[d_C]$ the set $\{1,\ldots,d_C\}$ and let $\eta_j,j\in[d_C]$ be the Newton-Puiseux roots relative to $L$ of $C$. Consider the map $\ord_x\colon
[d_C]\times[d_C]\to \Q\cup\{\infty\}$, $(j,k)\mapsto \ord_x(\eta_j-\eta_k)$.
The map $\ord_x$ has the property that $\ord_x(j,l) \geq \min \{\ord_x(j,k),\ord_x(k,l)\}$ for any triple $j,k,l$. So for any $q\in \Q\cup\{\infty\}$, the relation on the set $[d_C]$ given by $j\sim_q k\Leftrightarrow \ord_x(j,k)\ge q$ is an equivalence relation.
Name the elements of the set $\ord_x([d_C]\times[d_C])\cup\{0\}$ in ascending order: $0=q_0<q_1<\dots<q_r=\infty$.
For each $i=0,\dots,r$ let $G_{i,1},\dots,G_{i,\mu_i}$ be the equivalence classes for the relation $\sim_{q_i}$.
So $\mu_r=d_C$ and the sets $G_{r,j}$ are singletons while $\mu_0=\mu_1=1$ and $G_{0,1}=G_{1,1}=[d_C]$.
We form a tree with these equivalence classes $G_{i,j}$ as vertices and edges given by inclusion relations: there is an edge between $G_{i,j}$ and $G_{i+1,k}$ if $G_{i+1,k}\subseteq G_{i,j}$.
The vertex $G_{0,1}$ is the root of this tree and the singleton sets $G_{r,j}$ are the leaves.
We weight each vertex with its corresponding $q_i$. The \textbf{carousel tree relative} to $L$ is the tree obtained from this tree by suppressing valency 2 vertices: we remove each such vertex and amalgamate its two adjacent edges into one edge.
\end{definition}
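The equivalence classes $G_{i,j}$ can be computed mechanically for the curve of Example \ref{examplelocal}. The matrix of pairwise orders below is filled in by hand from the six roots of that example (indices 0-3 for the roots of $A$, 4-5 for the roots of $B$); the greedy row-by-row grouping is valid precisely because $\ord_x$ satisfies the ultrametric inequality stated in the definition. This sketch is our own illustration.

```python
from fractions import Fraction as Fr

INF = Fr(10**9)  # stands in for +infinity

def classes(order, q):
    """Equivalence classes of the relation j ~ k  <=>  order[j][k] >= q.
    The row-wise grouping works because order is an ultrametric."""
    d = len(order)
    groups, seen = [], set()
    for j in range(d):
        if j in seen:
            continue
        g = {k for k in range(d) if order[j][k] >= q}
        groups.append(g)
        seen |= g
    return groups

# pairwise orders ord_x(eta_j - eta_k) for the six roots of the example
O = [[INF] * 6 for _ in range(6)]
def set_ord(j, k, q): O[j][k] = O[k][j] = q
set_ord(0, 1, Fr(3, 2)); set_ord(0, 2, Fr(7, 4)); set_ord(0, 3, Fr(3, 2))
set_ord(1, 2, Fr(3, 2)); set_ord(1, 3, Fr(7, 4)); set_ord(2, 3, Fr(3, 2))
for j in range(4):
    set_ord(j, 4, Fr(1, 2)); set_ord(j, 5, Fr(1, 2))
set_ord(4, 5, Fr(1, 2))

# number of classes mu_i at each threshold q_i
sizes = [len(classes(O, q)) for q in (Fr(0), Fr(1, 2), Fr(3, 2), Fr(7, 4), INF)]
```

The output `[1, 1, 3, 4, 6]` matches the text: $\mu_0=\mu_1=1$, the classes at $\infty$ are singletons, and the intermediate counts agree with the carousel tree of Figure \ref{fig:figure2}.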
We will describe how one gets the Eggers-Wall tree from the carousel tree. This process is essentially the same process described in \cite[Lemma 3.1]{Anne}.
At any vertex $v$ of the carousel tree we have a weight $q_v$ which is one of the $q_i$'s.
Let $d_v$ be the denominator of the $q_v$ when $q_v$ is written as a quotient of coprime integers.
The process of obtaining the Eggers-Wall tree from the carousel tree is an induction process in $i$.
First, we label the edge between $G_{0,1}$ and $G_{1,1}$ by 1.
The subtrees cut off above $G_{1,1}$ consist of groups of $d_{G_{1,1}}$ isomorphic trees, with possibly one additional tree.
We label the edge connecting $G_{1,1}$ to this additional tree, if it exists, with $1$, and then delete all but one tree from each group of $d_{G_{1,1}}$ isomorphic trees.
Finally, we label the remaining edges containing $G_{1,1}$ with $\operatorname{lcm}\{d_{G_{1,1}},1\}$.
Inductively, let $v$ be a vertex with weight $q_i$. Let $v'$ be the adjacent vertex below $v$ along the path from $v$ up to the root vertex and let $l_{vv'}$ be the label of the edge between $v$ and $v'$.
The subtrees cut off above $v$ consist of groups of $\frac{\operatorname{lcm}\{d_{v},l_{vv'}\}}{l_{vv'}} $ isomorphic trees, with possibly one additional tree.
We label the edge connecting $v$ to this additional tree, if it exists, with $l_{vv'}$, and then delete all but one tree from each group of $\frac{\operatorname{lcm}\{d_{v},l_{vv'}\}}{l_{vv'}} $ isomorphic trees above $v$.
Finally, we label the remaining edges containing $v$ with $\operatorname{lcm}\{d_{v},l_{vv'}\}$.
The resulting tree, with the $q_v$ labels at the vertices and the extra labels on the edges, is easily recognized
as the Eggers-Wall tree of $C$ relative to $L$.
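The inductive step can be phrased as a small function (our own sketch): at a vertex $v$ with weight $q_v$ and incoming edge label $l$, the retained outgoing edges receive the label $\operatorname{lcm}\{d_v,l\}$, and the isomorphic subtrees above $v$ come in groups of $\operatorname{lcm}\{d_v,l\}/l$.

```python
from fractions import Fraction
from math import lcm

def step(q_v, l):
    """At a vertex with weight q_v (a Fraction) and incoming edge label l,
    return the label of the retained outgoing edges and the number of
    isomorphic subtrees in each group."""
    new_l = lcm(q_v.denominator, l)
    return new_l, new_l // l

# Along the path toward the branch A in the running example: the A-side
# subtree is the "additional tree" at the 1/2 vertex, so its edge keeps
# label 1, and we start at the 3/2 vertex with l = 1.
l = 1
l, _ = step(Fraction(3, 2), l)   # edge into the 7/4 vertex gets label 2
l, _ = step(Fraction(7, 4), l)   # edge toward A gets label 4
```

These are exactly the edge labels $2$ and $4$ appearing along the path from $L$ to $A$ in the Eggers-Wall tree of the example.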
\begin{example}
Figure \ref{fig:figure2} illustrates the above process for the Example \ref{examplelocal}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw[thin ](0,0)--(0,1);
\draw[fill ] (0,0)circle(2pt);
\draw[fill ] (0,1)circle(2pt);
\draw[thin ](0,1)--(1,4);
\draw[fill ] (1,4)circle(2pt);
\node(a)at(1,4.2){$\infty$};
\draw[thin ](0,1)--(2,4);
\draw[fill ] (2,4)circle(2pt);
\node(a)at(2,4.2){$\infty$};
\draw[thin ](0,1)--(-0.5,2);
\draw[thin ](-0.5,2)--(0,3);
\draw[thin ](-0.5,2)--(-1,3);
\draw[fill ] (-0.5,2)circle(2pt);
\draw[fill ] (0,3)circle(2pt);
\node(a)at(0.2,3){$\frac{7}{4}$};
\node(a)at(-1.2,3){$\frac{7}{4}$};
\node(a)at(-0.7,2){$\frac{3}{2}$};
\node(a)at(-0.2,1){$\frac{1}{2}$};
\node(a)at(-0.2,0){$0$};
\draw[thin ](-1.3,4)--(-1,3);
\draw[fill ] (-1.3,4)circle(2pt);
\draw[fill ] (-1,3)circle(2pt);
\node(a)at(-1.3,4.2){$\infty$};
\draw[thin ](-0.7,4)--(-1,3);
\draw[fill ] (-0.7,4)circle(2pt);
\node(a)at(-0.7,4.2){$\infty$};
\draw[thin ](-0.3,4)--(0,3);
\draw[fill ] (-0.3,4)circle(2pt);
\node(a)at(-0.3,4.2){$\infty$};
\draw[thin ](0.3,4)--(0,3);
\draw[fill ] (0.3,4)circle(2pt);
\node(a)at(0.3,4.2){$\infty$};
\draw[thick,>-stealth,->](1.6,2)--+(1.0,0);
\begin{scope}[xshift=4.3cm]
\draw[thin ](0,0)--(0,1);
\draw[fill ] (0,0)circle(2pt);
\draw[fill ] (0,1)circle(2pt);
\node(a)at(-0.2,0){$0$};
\draw[thin ](0,1)--(1,4);
\draw[fill ] (1,4)circle(2pt);
\node(a)at(1,4.2){$\infty$};
\draw[dashed ](0,1)--(2,4);
\draw[fill=gray, color=gray ] (2,4)circle(2pt);
\node(a)at(2,4.2){$\infty$};
\draw[thin ](0,1)--(-0.5,2);
\draw[dashed ](-0.5,2)--(0,3);
\draw[thin ](-0.5,2)--(-1,3);
\draw[fill ] (-0.5,2)circle(2pt);
\draw[fill=gray, color=gray ] (0,3)circle(2pt);
\node(a)at(0.2,3){$\frac{7}{4}$};
\node(a)at(-1.2,3){$\frac{7}{4}$};
\node(a)at(-0.7,2){$\frac{3}{2}$};
\node(a)at(-0.2,1){$\frac{1}{2}$};
\draw[thin ](-1.3,4)--(-1,3);
\draw[fill ] (-1.3,4)circle(2pt);
\draw[fill ] (-1,3)circle(2pt);
\node(a)at(-1.3,4.2){$\infty$};
\node(a)at(-1.3,3.5){$\mathbf 4$};
\draw[dashed ](-0.7,4)--(-1,3);
\draw[ fill=gray, color=gray ] (-0.7,4)circle(2pt);
\node(a)at(-0.7,4.2){$\infty$};
\draw[dashed ](-0.3,4)--(0,3);
\draw[fill=gray, color=gray ] (-0.3,4)circle(2pt);
\node(a)at(-0.3,4.2){$\infty$};
\draw[dashed ](0.3,4)--(0,3);
\draw[fill=gray, color=gray ] (0.3,4)circle(2pt);
\node(a)at(0.3,4.2){$\infty$};
\node(a)at(-1,2.5){$\mathbf 2$};
\node(a)at(0.9,3){$\mathbf 2$};
\node(a)at(-0.1,1.6){$\mathbf 1$};
\node(a)at(0.2,0.5){$\mathbf 1$};
\draw[thick,>-stealth,->](1.6,2)--+(1,0);
\end{scope}
\begin{scope}[xshift=8cm]
\draw[thin ](0,0)--(0,4);
\draw[thin ](0,1)--(2,2.5);
\draw[fill ] (0,0)circle(2pt);
\draw[fill ] (0,1)circle(2pt);
\draw[fill ] (0,2)circle(2pt);
\draw[fill ] (0,3)circle(2pt);
\draw[fill ] (0,4)circle(2pt);
\draw[fill ] (2,2.5)circle(2pt);
\node(a)at(-0.3,0){$0$};
\node(a)at(-0.3,1){$\frac{1}{2}$};
\node(a)at(-0.3,2){$\frac{3}{2}$};
\node(a)at(-0.3,3){$\frac{7}{4}$};
\node(a)at(0.2,0.5){$1$};
\node(a)at(0.2,1.5){$1$};
\node(a)at(0.2,2.5){$2$};
\node(a)at(0.2,3.5){$4$};
\node(a)at(1,2){$2$};
\node(a)at(0.3,-0.3){$L$};
\node(a)at(0.3,4.3){$A$};
\node(a)at(2.3,2.5){$B$};
\end{scope}
\end{tikzpicture}
\caption{From the carousel tree to the Eggers-Wall tree. }\label{fig:figure2}
\end{figure}
\end{example}
\section{Lipschitz geometry at infinity determines topological type}\label{lipimplies}
In this section, we define the Lipschitz tree at infinity of a complex algebraic plane curve. Then we prove the equivalence of (\ref{it2}) and (\ref{it3}) and that (\ref{it1}) implies (\ref{it2}) of Theorem \ref{atinfinity}.
To define the Lipschitz tree at infinity of a complex algebraic plane curve we recall the basic vocabulary of resolution of singularities. Let $(C,p)\subset (\mathcal{S},p)$ be a germ of a singular complex curve in a smooth surface $\mathcal{S}$.
We recall that the blowing up of $\mathcal{S}$ with centre $p$ produces a smooth surface $\mathcal{S}_1$, a holomorphic map $\pi_1:\mathcal{S}_1\to\mathcal{S}$ such that $\pi_1:\mathcal{S}_1\backslash\pi_1^{-1}(p)\to \mathcal{S}\backslash \{p\}$ is biholomorphic, the \textbf{exceptional curve} $E_1=\pi_1^{-1}(p)$, and the \textbf{strict transform} $C_1$ which is the topological closure $\overline{\pi_1^{-1}(C\backslash\{p\})}$.
The map $\pi_1$ is called the \textbf{blowing up} of $\mathcal{S}$ with centre $p$.
A \textbf{good minimal resolution} of $C$ is a map $\pi:\mathcal{S}_n\to\mathcal{S}$ which is the composite of a finite and minimal sequence of blowing ups $\pi_i:\mathcal{S}_i\to\mathcal{S}_{i-1}$ such that the strict transform $C_n=\overline{\pi^{-1}(C\backslash\{p\})}$ is smooth and meets the exceptional curves $\pi^{-1}(p)=E_1\cup E_2\cup\cdots\cup E_n$ transversely at regular points.
\begin{definition}\label{liptree}
Let $C$ be a complex algebraic plane curve, $p_1,\ldots,p_m$ its points at infinity and let $B_1^{(j)},\ldots,B_{k_j}^{(j)}$ be the branches of $(\widetilde{C},p_j)$.
A \textbf{good minimal resolution} of $(\widetilde{C}\cup L_\infty,p_1)$ produces a smooth surface $S_{(1)}$, a projection $\pi_{(1)}:S_{(1)}\to \P$, a sequence of exceptional curves $E_1^{(1)},\ldots,E_{r_1}^{(1)}$ and strict transform curves $\mathcal{B}_1^{(1)},\ldots, \mathcal{B}_{k_1}^{(1)}$ of the branches $B_1^{(1)},\ldots, B_{k_1}^{(1)}$ and the strict transform $\mathcal{L}_\infty$ of the line at infinity $L_\infty$.
Then, we resolve the strict transform $C^{(1)}=\pi^{-1}_{(1)}(\widetilde{C})$ at the singular point $\pi_{(1)}^{-1}(p_2)$.
We repeat this process for all points at infinity of $C$.
The \textbf{Lipschitz tree at infinity} of $C$ is a rooted tree with vertices $V^{(j)}_k$ corresponding to the curves $E^{(j)}_k$, each labeled with its self-intersection number, unlabeled arrow vertices $W^{(j)}_i$ corresponding to the branches $\mathcal{B}_i^{(j)}$, and a root corresponding to the strict transform $\mathcal{L}_\infty$ of the line at infinity.
We put an edge joining vertices if and only if the corresponding curves intersect each other.
\end{definition}
\begin{example}\label{paracusp}
The Lipschitz tree at infinity of the complex algebraic plane curve defined by $(y-x^2)(y^3-x)=0$ is drawn in Figure \ref{fig:Liptree}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.7cm,y=0.7cm]
\clip(-6,-1) rectangle (6,7);
\draw [line width=1pt] (0,0)-- (-1,2);
\draw [line width=1pt] (0,0)-- (1,2);
\draw [line width=1pt] (-1,2)-- (-2,4);
\draw [->,line width=1pt] (-1,2)-- (-3,3);
\draw [line width=1pt] (1,2)-- (2,4);
\draw [line width=1pt] (2,4)-- (3,6);
\draw [->,line width=1pt] (2,4)-- (4,5);
\draw [line width=1pt] (0,0) circle (4pt);
\begin{scriptsize}
\draw [fill=black] (0,0) circle (2pt);
\draw [fill=black] (-1,2) circle (2pt);
\draw[color=black] (-0.3,2) node {\normalsize $-1$};
\draw [fill=black] (1,2) circle (2pt);
\draw[color=black] (1.7,2) node {\normalsize $-3$};
\draw [fill=black] (-2,4) circle (2pt);
\draw[color=black] (-1.3,4) node {\normalsize $-2$};
\draw [fill=black] (2,4) circle (2pt);
\draw[color=black] (1.3,4) node {\normalsize $-1$};
\draw [fill=black] (3,6) circle (2pt);
\draw[color=black] (3.7,6) node {\normalsize $-2$};
\end{scriptsize}
\end{tikzpicture}
\caption{Lipschitz tree at infinity of Example \ref{paracusp}. }
\label{fig:Liptree}
\end{figure}
\end{example}
We point out that the Lipschitz tree at infinity of $C$ is obtained as the quotient of the disjoint union of the individual dual resolution graphs of minimal good resolutions of $(\widetilde{C}\cup L_\infty,p_i)$, obtained by identifying
all vertices corresponding to the strict transform of the line at infinity and taking the resulting vertex as the root.
We recall that by \cite[Theorem 8.1.7]{Wall}, the isomorphism class of the dual resolution graph of a minimal good resolution of a germ of a complex curve at a singular point determines, and is determined by, its topological type.
This explains the equivalence $(\ref{it2})\Leftrightarrow (\ref{it3})$ of Theorem \ref{atinfinity}.
For the implication $(\ref{it1})\Rightarrow (\ref{it2})$ of Theorem \ref{atinfinity}, we introduce the asymptotic notations of Bachmann-Landau, which are convenient for the study of Lipschitz geometry (see \cite{knuth76bigomicron} for a historical survey of these notations).
\begin{definition}
Let $f,g:(0,\infty)\to (0,\infty)$ be two functions.
We say that
\begin{enumerate}
\item \textbf{$f$ is big-Theta of $g$}, and we write $f(t)=\Theta(g(t))$, if there exist $R_0>0$ and a constant $c>0$ such that $\displaystyle\frac{1}{c}g(t)\leq f(t)\leq cg(t)$ for all $t>R_0$.
\item \textbf{$f$ is small-o of $g$}, and we write $f(t)=o(g(t))$, if $\displaystyle \limsup\limits_{t\rightarrow \infty}\frac{f(t)}{g(t)}=0$.
\end{enumerate}
\end{definition}
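As a quick illustration of these notations (our example; not used in the sequel):

```latex
% Illustration (ours): comparing polynomial growth rates.
\[
t^2 \;\le\; 3t^2+5t \;\le\; 8t^2 \quad\text{for all } t>1,
\qquad\text{so}\qquad 3t^2+5t=\Theta(t^2)\ \text{with } c=8,\ R_0=1,
\]
\[
\lim_{t\to\infty}\frac{t\log t}{t^2}
=\lim_{t\to\infty}\frac{\log t}{t}=0,
\qquad\text{so}\qquad t\log t = o(t^2).
\]
```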
Let $[a:b:0]$ be a point at infinity of a complex algebraic plane curve $C$.
The linear subspace spanned by $(a,b)$ in $\C^2$ is the \textbf{tangent line at infinity} to $C$ associated with $[a:b:0]$ (see \cite{Fernandes2019} and \cite{CTinfinity}).
\begin{example}\label{example}
Consider the polynomial $f(x,y)= y^2x-y$, and let $C_\lambda$ be the complex algebraic plane curve with equation $f(x,y)+\lambda=0$ for $\lambda\in\C$. One has $\widetilde{f}(x,y,z)=y^2x-yz^2+\lambda z^3$; the points at infinity of $C_\lambda$ are $[1:0:0]$ and $[0:1:0]$, and the tangent lines at infinity to $C_\lambda$ are the coordinate axes.
\end{example}
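The computation behind Example \ref{example} can be made explicit (our verification): the points at infinity are the zeros of $\widetilde{f}$ on the line $z=0$, which involve only the top-degree part of $f$:

```latex
% Verification (ours) for Example \ref{example}.
\[
\widetilde{f}(x,y,0)=y^2x=0
\;\Longrightarrow\;
[x:y:0]=[1:0:0]\ (\text{from } y=0)
\quad\text{or}\quad
[x:y:0]=[0:1:0]\ (\text{from } x=0),
\]
% and the associated tangent lines at infinity are the spans of
% (1,0) and (0,1), i.e. the coordinate axes.
```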
\begin{lemma}\label{semnome}
Let $C$ be a complex algebraic plane curve, and let $P:\C^2\to \C$ be a linear projection whose kernel does not contain any tangent line at infinity to $C$.
Then there exist a compact set $K$ and a constant $M>1$ such that for each $u,u' \in C\backslash K $, there is an arc $\tilde \alpha$ in $C\backslash K$ joining $u$ to a point $u''$ with $P(u'')=P(u')$ and
$$ d(u,u') \leq \len(\tilde \alpha) + d(u'',u') \leq M d(u,u').$$
\end{lemma}
\begin{proof}
After a linear change of coordinates if necessary, we may assume that $P$ is the projection on the first coordinate and that the $y$-axis is not a tangent line at infinity to $C$.
Let $[1:a_1:0],\ldots,[1:a_m:0]$ be the points at infinity of $C$. For each $i$, let $B_{i1},\ldots,B_{ik_i} $ be the branches of $(\widetilde{C},[1:a_i:0])$.
The open set $U=\{[x:y:z]\in \P^2:x\neq0\}$ contains all the points at infinity of $C$, so we can use the coordinate chart $\varphi:U\to\C^2$ defined by $\varphi([x:y:z])=(z/x,y/x)$ to obtain Newton-Puiseux parametrizations of the branches $\varphi(B_{ij})$.
Let $\epsilon>0$ be sufficiently small such that there exist Newton-Puiseux parametrizations $\gamma_{ij}:D_\epsilon\to\C^2$ of $\varphi(B_{ij})$ given by
$$\gamma_{ij}(w)=(w^{d_{ij}},a_i+v_{ij}(w)),$$
where $D_\epsilon$ is the open disk of radius $\epsilon$ centered at the origin and $v_{ij}\in\C\{w\}, v_{ij}(0)=0$.
Let $\Gamma_{ij}:D_\epsilon\backslash\{0\}\to\C^2$ be given by $$\Gamma_{ij}(w)=(\iota^{-1}\circ\varphi^{-1}\circ\gamma_{ij})(w)=\left(\frac{1}{w^{d_{ij}}},\frac{a_i+v_{ij}(w)}{w^{d_{ij}}}\right).$$
We will prove that the compact set $K=C\backslash \bigcup_{ij} \Gamma_{ij}(D_\epsilon\backslash\{0\})$ satisfies the desired conditions.
We claim that there exists a constant $c>0$ such that $C\backslash K$ is a subset of the cone $\{(x,y)\in \C^2;|y|\leq c|x|\}$. Moreover, $c$ may be chosen such that the tangent space of $C\backslash K$ at a point $p$, denoted by $T_pC$, is also a subset of the same cone.
The first part of this statement is easy to check. In particular, it follows that $P|_{\Gamma_{ij}(D_\epsilon\backslash\{0\})}$ is a covering map for all $i,j$.
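For completeness, here is a minimal sketch (ours) of that first part, using the parametrization above: for $(x,y)=\Gamma_{ij}(w)$ with $0<|w|<\epsilon$,

```latex
\[
|y| \;=\; \frac{|a_i+v_{ij}(w)|}{|w|^{d_{ij}}}
\;\le\; \Bigl(|a_i|+\sup_{0<|w|<\epsilon}|v_{ij}(w)|\Bigr)\frac{1}{|w|^{d_{ij}}}
\;=\; \Bigl(|a_i|+\sup_{0<|w|<\epsilon}|v_{ij}(w)|\Bigr)|x|,
\]
```

and taking the maximum of these constants over the finitely many pairs $(i,j)$ gives a cone containing $C\backslash K$.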
Differentiating $\Gamma_{ij}$ gives
$$\Gamma'_{ij}(w)=\left(-\frac{d_{ij}}{w^{d_{ij}+1}},\frac{wv'_{ij}(w)-d_{ij}v_{ij}(w)}{w^{d_{ij}+1}}-a_i\frac{d_{ij}}{w^{d_{ij}+1}}\right).$$
Thus the points $(x,y)\in T_{\Gamma_{ij}(w)}C$ satisfy $|y-a_ix|\leq\eta_{ij} |x|$, and hence $|y|\leq(\eta_{ij}+|a_i|) |x|$, where $\eta_{ij}=\sup\left|\frac{wv'_{ij}(w)-d_{ij}v_{ij}(w)}{d_{ij}}\right|$.
Now, putting $c=\max_{ij}\{\eta_{ij}+|a_i|\}$ we have
$$T_p C \subset \{(x,y)\in \C^2;|y|\leq c|x|\} \text{ for all } p\in C\backslash K,$$
as claimed.
Let $u,u' \in C\backslash K$ be arbitrary, let $i_0,j_0,i'_0,j'_0$ be such that $u\in \Gamma_{i_0j_0}(D_\epsilon\backslash\{0\})$ and $u'\in \Gamma_{i'_0j'_0}(D_\epsilon\backslash\{0\})$, and suppose that $1/\epsilon^{d_{i_0j_0}}\leq1/\epsilon^{d_{i'_0j'_0}}$.
Let $R=1/\epsilon^{d_{i_0j_0}}$ and choose a path $\alpha:[0,1]\to \C\backslash D_R$ such that $\alpha(0)=P(u),\alpha(1)=P(u')$ and $\len(\alpha)\leq\pi R |P(u)-P(u')|$.
Consider the lifting $\tilde \alpha$ of $\alpha$ by $P|_{ \Gamma_{i_0j_0}(D_\epsilon\backslash\{0\})}$ with origin $u$ and let $u''$ be its end.
We obviously have
$$d(u,u')\leq \len( \tilde\alpha) + d(u',u'')\,.$$
On the other hand, since $P$ is linear, $dP_p=P|_{T_pC}$.
Thus
\begin{equation*}
\frac{1}{\sqrt{1+c^2}}\leq||dP_p||\leq 1 \text{ for all } p\in C\backslash K.
\end{equation*}
In particular, $\len( \tilde\alpha) \leq \sqrt{1+c^2}\len(\alpha)\leq \pi R\sqrt{1+c^2}|P(u)-P(u')|$. Since $|P(u)-P(u')| \leq d(u,u')$, we obtain
$$ \len( \tilde\alpha) \leq \pi R\sqrt{1+c^2} d(u,u').$$
If we join the segment $[u,u']$ to $ \tilde\alpha$ at $u$, we have a curve from $u'$ to $u''$, so $d(u',u'') \leq (1+\pi R\sqrt{1+c^2}) d(u,u')$.
Finally,
$$ \len( \tilde\alpha) + d(u',u'') \leq (1+2\pi R\sqrt{1+c^2}) d(u,u'),$$
and the constant $M=1+2 \pi R\sqrt{1+c^2} $ satisfies the desired conditions.
\end{proof}
\begin{remark}\label{bilip}
In the above lemma, we prove that $P|_{C\backslash K}:C\backslash K\to \C\backslash P(K)$ is a covering map. Moreover, $P|_{C\backslash K}$ has derivative bounded above and below by positive constants. In particular, for a non-constant arc $\alpha$ the quotient $$\len(\tilde\alpha)/\len(\alpha )$$ is bounded above and below by positive constants.
\end{remark}
The proof technique of $(\ref{it1})\Rightarrow (\ref{it2})$ of Theorem \ref{atinfinity} is similar to the case of germs of complex curves in \cite{Anne}.
In particular, it is based on a so-called “bubble trick” argument.
\begin{proof}[Proof of $(\ref{it1})\Rightarrow (\ref{it2})$ of Theorem \ref{atinfinity}]
We first prove that the Lipschitz geometry at infinity gives us the number of points at infinity.
Let $f\in \C[x,y]$ be a polynomial without multiple factors that defines $C$. Let $n=\deg f$; then, by a linear change of coordinates if necessary, we can assume that the monomial $y^n$ has coefficient equal to $1$ in $f$.
The points at infinity of $C$ are the points $[x:y:0]\in\P^2$ satisfying $f_n(x,y)=0$, where $f_n$ denotes the homogeneous polynomial composed of the monomials in $f$ of degree $n$, so $[0:1:0]$ is not a point at infinity of $C$.
We claim that there exist a constant $c>0$ and an open Euclidean ball $B_{R_0}(0)$ of radius $R_0$ centered at the origin such that $|y|\leq c|x|$ for all $(x,y)\in C\backslash B_{R_0}(0)$.
Indeed, otherwise, there exists a sequence $\{z_k=(x_k,y_k)\}\subset C$ such that $\lim\limits_{k\to+\infty}\|z_k\|=+\infty$ and $|y_k|> k|x_k|$.
Thus, taking a subsequence, one can suppose that $\lim\limits_{k\to+\infty}\frac{y_k}{|y_k|}=y_0$ for some $y_0$ such that $|y_0|=1$.
Since $\frac{|x_k|}{|y_k|}< \frac{1}{k}$, $\lim\limits_{k\to+\infty}\frac{z_k}{\|z_k\|}=(0,y_0)$.
On the other hand,
\begin{eqnarray*}
0 = f(z_k) = f\left(\|z_k\| \frac{z_k}{\|z_k\|}\right)
=
\|z_k\|^n \sum_{i=0}^{n}\frac{1}{\|z_k\|^{n-i}}f_{i} \left(\frac{z_k}{\|z_k\|}\right),
\end{eqnarray*}
where $f_i$ denotes the homogeneous polynomial composed of the monomials in $f$ of degree $i$. Dividing by $\|z_k\|^n$, this implies that
\begin{eqnarray*}
0 = \sum_{i=0}^{n}\frac{1}{\|z_k\|^{n-i}}f_{i} \left(\frac{z_k}{\|z_k\|}\right).
\end{eqnarray*}
Letting $k \to \infty$ yields $f_n(0,y_0) = 0$, which implies that $[0:1:0]$ is a point at infinity of $C$, a contradiction.
Therefore, the claim is true.
Now, let $[1:a_j:0], j=1,\ldots,m\leq n$ be the points at infinity of $C$.
We define cones
$$V_j:=\{(x,y)\in\C^2:|y-a_jx|\leq\epsilon |x|\}$$
where $\epsilon>0$ is small enough that the cones are disjoint except at $0$.
Then increasing $R_0>0$, if necessary,
\[ C\backslash B_{R_0}(0)\subset \bigcup_{j=1}^{m}V_j.\]
Indeed, otherwise, there exists a sequence $\{z_k=(x_k,y_k)\}\subset C$ such that $\lim\limits_{k\to+\infty}\|z_k\|=+\infty$ and $|y_k-a_jx_k|> \epsilon|x_k|$ for all $j=1,\ldots,m$.
Again, since $\|z_k\|\to+\infty$ as $k\to\infty$, we have
$$\lim_{k\to\infty}f_n \left(\frac{z_k}{\|z_k\|}\right)=0.$$
On the other hand, writing $f_n(x,y)=\prod_{j=1}^m(y-a_jx)^{d_j}$, where $d_j$ is a positive integer such that $n=\sum_{1\leq j\leq m} d_j$, we have
$$\left\|f_n \left(\frac{z_k}{\|z_k\|}\right)\right\|=\frac{\prod_{j=1}^{m}|y_k-a_jx_k|^{d_j}}{\|z_k\|^n}\geq\left(\frac{\epsilon|x_k|}{\|z_k\|}\right)^n.$$
But, because of the first claim, we have
$$\frac{|x_k|}{\|z_k\|}=\frac{1}{\sqrt{1+\left|\frac{y_k}{x_k}\right|^2}}\geq \frac{1}{\sqrt{1+c^2}},$$
so that $\left\|f_n \left(\frac{z_k}{\|z_k\|}\right)\right\|\geq\left(\frac{\epsilon}{\sqrt{1+c^2}}\right)^n>0$ for all $k$, contradicting the limit above.
We denote by $\mathcal{C}_j$ the part of $C\backslash B_{R_0}(0)$ inside $V_j$. Now, let $K,K'\subset \C^2$ be compact sets such that there is a bilipschitz map $\Phi:C\backslash K\to C'\backslash K'$.
Let $[1:a'_j:0], j=1,\ldots,m'$ be the points at infinity of $C'$. We repeat the above arguments for $C'$, then increasing $R_0>0$, if necessary,
$$C'\backslash B_{R_0}(0)\subset \bigcup_{j=1}^{m'}V'_j,\:\text{ where }\:V'_j:=\{(x,y)\in\C^2: |y-a'_jx|\leq\epsilon|x|\}.$$
Likewise, denote by $\mathcal{C}'_j$ the set $(C'\backslash B_{R_0}(0))\cap V'_j$.
We have $\Phi(C\backslash B_R(0))\subset C'\backslash B_{h(R)}(0) $ with $h(R)=\Theta(R) $. Since $\dist(\mathcal{C}_j\backslash B_R(0),\mathcal{C}_k\backslash B_R(0))=\Theta(R)$ we have
$$\dist(\Phi(\mathcal{C}_j\backslash B_R(0)),\Phi(\mathcal{C}_k\backslash B_R(0)))=\Theta(R).$$
Notice that the sets $\mathcal{C}'_l$, $l=1,\ldots,m'$, have the following property: the distance between any two connected components of $\mathcal{C}'_l$ outside a ball of radius $h(R)$ around $0$ is $o(R)$.
Hence we cannot have $$\Phi(\mathcal{C}_j\backslash B_R(0))\subset \mathcal{C}_l'\backslash B_{h(R)}(0) \text{ and } \Phi(\mathcal{C}_k\backslash B_R(0))\subset \mathcal{C}_l'\backslash B_{h(R)}(0)$$
for $j\neq k$. Therefore $m\leq m'$, and using the inverse $\Phi^{-1}$ we get $m=m'$.
Now, we obtain the topological type of $\widetilde{C}\cup L_\infty$ at the points at infinity.
Without loss of generality, we can suppose that $[1:a_1:0]=[1:0:0]$ is a point at infinity for $C$.
We extract the characteristic and the coincidence exponents relative to $L_\infty$ of the curve $(\widetilde{C}\cup L_{\infty},[1:0:0])$
using the coordinate system and the induced Euclidean metric $d$ on $\mathcal{C}_1$.
Next, we prove that these data determine the topological type of $(\widetilde{C}\cup L_{\infty},[1:0:0])$.
Finally, we prove that these data can be obtained without using the chosen coordinate system, and even using the equivalent metric $d'$ induced by $\Phi$; for this we use the ``bubble trick''.
Let $U=\{[x:y:z]\in \P^2:x\neq0\}$ and consider the coordinate chart $\varphi:U\to\C^2$ defined by $\varphi([x:y:z])=(z/x,y/x)=(u,v)$.
In these local coordinates, $\varphi([1:0:0])$ is the origin and we have $\ord_v (\widetilde{f}\circ \varphi^{-1})(0,v)=d_1$.
Let $B_{1},\ldots,B_{k_1} $ be the branches of $(\varphi(\widetilde{C}\cap U),0)$.
Every branch of the curve $(\varphi(\widetilde{C}\cap U),0)$ has a Newton-Puiseux parametrization of the form
$$\gamma_{s}(w)=\left(w^{d_{1s}},\sum_{k> 0} a_{sk} w^k\right),$$
where $d_{1s}$ are positive integers such that $\sum_{s=1}^{k_1} d_{1s}=d_1$.
Then, increasing $R_0>0$ if necessary, the images of the maps
$$\displaystyle\Gamma_{s}(w)=(\iota^{-1}\circ\varphi^{-1}\circ\gamma_s)(w)=\left(\frac{1}{w^{d_{1s}}},\frac{1}{w^{d_{1s}}}\sum_{k> 0} a_{sk} w^{k}\right), s=1,\ldots, k_1$$
cover $\mathcal{C}_1$. Therefore, the lines $x=t$ for $t\in (R_0,\infty)$ intersect $\mathcal{C}_1$ in $d_1$ points
$p_1(t),\dots,p_{d_1}(t)$ which depend continuously on $t$.
Denote by $[d_1]$ the set $\{1,\ldots,d_1\}$.
For each $ j, k \in [d_1]$ with $j < k$, the distance $d(p_j(t),p_k(t))$ has the form $\Theta(t^{1-q(j,k)})$, where $q(j,k) = q(k,j)$ is either a characteristic Puiseux
exponent relative to $L_\infty$ for a branch of the plane curve $(\widetilde{C}\cup L_\infty,[1:0:0])$ or a coincidence exponent relative to $L_\infty$ between two branches of $(\widetilde{C}\cup L_\infty,[1:0:0])$.
For $j\in[d_1]$ define $q(j,j)=\infty$.
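To indicate where this exponent comes from, here is a sketch (ours) in the case of two points on the same branch: if $p_j(t)=\Gamma_s(w)$ and $p_k(t)=\Gamma_s(w')$ with $w^{d_{1s}}=(w')^{d_{1s}}=1/t$, and the series $\sum_{k>0}a_{sk}w^k$ evaluated at $w$ and $w'$ first differ at order $k_0$ (with no cancellation of the leading term), then

```latex
\[
d(p_j(t),p_k(t))
\;=\;\frac{1}{|w|^{d_{1s}}}\,\Bigl|\sum_{k\ge k_0} a_{sk}\bigl(w^k-(w')^k\bigr)\Bigr|
\;=\;\Theta\bigl(|w|^{\,k_0-d_{1s}}\bigr)
\;=\;\Theta\bigl(t^{\,1-k_0/d_{1s}}\bigr),
\]
```

so $q(j,k)=k_0/d_{1s}$; for points on two different branches, the analogous computation produces the coincidence exponent.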
\begin{lemma} \label{le:curve geometry}The map $q\colon
[d_1]\times[d_1]\to \Q\cup\{\infty\}$, $(j,k)\mapsto q(j,k)$, determines the topological type of $(\widetilde{C}\cup L_\infty,[1:0:0])$.
\end{lemma}
\begin{proof}
The topological type of $(\widetilde{C}\cup L_\infty,[1:0:0])$ is encoded by its Eggers-Wall tree relative to a smooth branch $\mathcal{L}$ transversal to $(\widetilde{C}\cup L_\infty,[1:0:0])$ (see Wall \cite[Proposition 4.3.9 and Theorem 5.5.9]{Wall}).
To prove the lemma we notice that the function $q$ is the same as the function $\ord_x$ of Definition \ref{carouseultree}. By the process described in Section \ref{trees}, one obtains the Eggers-Wall tree relative to $L_\infty$ of $(\widetilde{C}\cup L_\infty,[1:0:0])$. By applying the inversion theorem for Eggers-Wall tree \cite[Theorem 4.5]{GarciaBarroso2019} to $\Theta_{L_{\infty}}(\widetilde{C}\cup L_\infty\cup \mathcal{L},[1:0:0])$, one gets the Eggers-Wall tree $\Theta_{\mathcal{L}}(\widetilde{C}\cup L_\infty,[1:0:0])$.
\end{proof}
As already noted, this recovery of the topological type involved the chosen coordinate system and the metric $d$.
We must show that it can be recovered using $d'$ and without using the chosen coordinate system.
The points $p_1(t),\dots,p_{d_1}(t)$ that we used to find the numbers $q(j,k)$ were obtained by intersecting $\mathcal{C}_1$ with the line $x=t$.
The arc $t\in (R_0,\infty)\mapsto p_1(t)$ satisfies
\begin{equation}\label{1}
d(0,p_1(t))=\Theta(t).
\end{equation}
Moreover, the other points $p_2(t),\dots,p_{d_1}(t)$ are in the disk of radius $\eta t$ centered at $p_1(t)$ in the plane $x=t$.
Here, $\eta>0$ can be as small as we like, so long as $R_0$ is then chosen sufficiently big.
Instead of a disk of radius $\eta t$, we can use a ball $B(p_1(t),\eta t)$ of radius $\eta t$ centered at $p_1(t)$.
This ball $B(p_1(t),\eta t)$ intersects $\mathcal{C}_{1}$ in $d_1$ topological disks $D_1(\eta t),\dots,D_{d_1}(\eta t)$, named such that $p_l(t)\in D_l(\eta t),l=1,\ldots,d_1$ and thus $\dist(D_j(\eta t),D_k(\eta t))\leq d(p_j(t),p_k(t))$.
On the other hand, let $\widetilde{p}_l(t)\in D_l(\eta t), l=1,\ldots,d_1$ such that $$\dist(D_j(\eta t),D_k(\eta t))= d(\widetilde{p}_j(t),\widetilde{p}_k(t)).$$
Consider the projection $P\colon \C^2 \to \C$ given by $P(x,y)=x$ and let $\alpha_t $ be the segment in $\C$ joining $P(\widetilde{p}_j(t))$ to $P(\widetilde{p}_k(t))$ and let $\tilde{\alpha}_t$ be the lifting of $\alpha_t$ by the restriction
$P|_{C\backslash B_{R_0}(0)}$ with origin $ \widetilde{p}_k(t).$
Applying Lemma \ref{semnome} to $P$ with
$u=\widetilde{p}_k(t)$ and $u'=\widetilde{p}_j(t)$, we then obtain
$$d(\widetilde{p}_j(t),\widetilde{p}_k(t))\geq \frac{1}{M}(\len(\tilde{\alpha}_t)+d(\widetilde{p}_j(t),\tilde{\alpha}_t(1)))\geq \frac{1}{M}d(\widetilde{p}_j(t),\tilde{\alpha}_t(1)).$$
But $d(\widetilde{p}_j(t),\tilde{\alpha}_t(1))=\Theta(t^{1-q(j,k)})$ since $P(\widetilde{p}_j(t))=P(\tilde{\alpha}_t(1))$ and $|P(\widetilde{p}_j(t))|=\Theta(t)$.
We now replace the arc $p_1$ by any continuous arc on $\mathcal{C}_1$ satisfying (\ref{1}) and we still denote this new arc by $p_1$.
If $\eta$ is sufficiently small it is still true that $B_{\mathcal{C}_1}(p_1(t),\eta t):=\mathcal{C}_1\cap B(p_1(t),\eta t)$ consists of $d_1$ disks $D_1(\eta t),\dots,D_{d_1}(\eta t)$ with $\dist\bigl(D_j(\eta t),D_k(\eta t)\bigr)=\Theta(t^{1-q(j,k)})$.
So at this point, we have gotten rid of the dependence on the choice of coordinate system in discovering the topology, but not yet of the dependence on the metric $d$.
An $L$-bilipschitz change of the metric may make the components of $B_{\mathcal{C}_1}(p_1(t),\eta t)$ disintegrate into many pieces, so we can no longer simply use the distances between all pieces.
To resolve this, we consider $B_{\mathcal{C}_1}(p_1(t),\eta t/L)$ and $B_{\mathcal{C}_1}(p_1(t),\eta L t)$.
Note that
$$B_{\mathcal{C}_1}(p_1(t),\eta t/L)\subset B'_{\mathcal{C}_1}(p_1(t),\eta t) \subset B_{\mathcal{C}_1}(p_1(t),\eta Lt),
$$
where $B'$ means we are using the modified metric $d'$.
Denote by $D_j(\eta t/L)$ and $D_j(\eta Lt)$, $j=1,\dots, d_1$, the disks of $B_{\mathcal{C}_1}(p_1(t),\eta t/L)$ and $B_{\mathcal{C}_1}(p_1(t),\eta Lt)$, respectively, so that $D_j(\eta t/L)\subset D_j(\eta Lt)$ for $j=1,\ldots,d_1$. Thus $B'_{\mathcal{C}_1}(p_1(t),\eta t)$ has $d_1$ components such that each one contains at most one component of $B_{\mathcal{C}_1}(p_1(t),\eta t/L)$.
Therefore, exactly $d_1$ components of $B'_{\mathcal{C}_1}(p_1(t),\eta t)$ intersect $B_{\mathcal{C}_1}(p_1(t), \eta t/L)$.
Naming these components $D'_1(\eta t),\dots,D'_{d_1}(\eta t)$, such that $D_j(\eta t/L)\subset D'_j(\eta t) \subset D_j(\eta Lt),j=1,\ldots,d_1$, we still have $\dist(D'_j(\eta t),D'_k(\eta t))=\Theta(t^{1-q(j,k)})$ since
$$
\dist(D_j(\eta L t),D_k(\eta Lt))\leq \dist(D'_j(\eta t),D'_k(\eta t)) \leq \dist(D_j(\eta t/L),D_k(\eta t/L)).
$$
So the $q(j,k)$ are determined by the distance between $D_j'(\eta t), j=1,\ldots,d_1$.
Up to now, we have used the metric $d$ to select the components $D_j'(\eta t), j=1,\ldots,d_1$ of $B'_{\mathcal{C}_1}(p_1(t),\eta t)$.
To avoid using the metric $d$, consider $B'_{\mathcal{C}_1}(p_1(t), \eta t/L^2)$. We have
$$
B_{\mathcal{C}_1}(p_1(t), \eta t/L^3) \subset B_{\mathcal{C}_1}'(p_1(t), \eta t/L^2) \subset B_{\mathcal{C}_1}(p_1(t), \eta t/L)\subset D_1'(\eta t)\cup\dots\cup D_{d_1}'(\eta t).
$$
This implies that $B'_{\mathcal{C}_1}(p_1(t), \eta t/L^2)$ intersects only the components $D'_j(\eta t), j=1,\dots,d_1 $ of $B'_{\mathcal{C}_1}(p_1(t),\eta t)$.
So we can use only the metric $d'$ to select these components and we are done.
\end{proof}
\section{Topological type determines Lipschitz geometry at infinity}\label{topsimplies}
In this section, we prove that (\ref{it2}) implies (\ref{it1}) of Theorem \ref{atinfinity}. For this, we will construct a bilipschitz map between complex algebraic plane curves with the same data in (\ref{it2}).
\begin{proof}[Proof of the implication $(\ref{it2}) \Rightarrow (\ref{it1})$ of Theorem \ref{atinfinity}]
Let $C_1$ and $C_2$ be complex algebraic plane curves with the same data described by (\ref{it2}) of Theorem \ref{atinfinity}.
Choose $(x,y)$ coordinates in such a way that none of the curves have the point $[0:1:0]$ as a point at infinity.
Let $[1:a^l_1:0],\ldots,[1:a^l_{m_l}:0]$ be the points at infinity of $C_l$, $l=1,2$, ordered in such a way that $(\widetilde{C}_1\cup L_\infty,[1:a^1_i:0])$ has the same topological type as $(\widetilde{C}_2\cup L_\infty,[1:a^2_i:0])$. Then, by \cite[Theorem 5.5.9]{Wall} and \cite[Proposition 4.3.9]{Wall}, for any smooth branch $L_1$ (resp. $L_2$) through $[1:a_i^1:0]$ (resp. $[1:a_i^2:0]$) transversal to $(\widetilde{C}_1\cup L_{\infty},[1:a_i^1:0])$ (resp. $(\widetilde{C}_2\cup L_{\infty},[1:a_i^2:0])$), the Eggers-Wall trees $\Theta_{L_1}(\widetilde{C}_1\cup L_{\infty},[1:a_i^1:0])$ and $\Theta_{L_2}(\widetilde{C}_2\cup L_{\infty},[1:a_i^2:0])$ are isomorphic.
Then, we apply the inversion theorem for Eggers-Wall tree \cite[Theorem 4.5]{GarciaBarroso2019} to both and we get that $\Theta_{L_\infty}(\widetilde{C}_1,[1:a_i^1:0])$ and $\Theta_{L_\infty}(\widetilde{C}_2 ,[1:a_i^2:0])$ are isomorphic.
For each $i$, let $B^{l}_{i1},\ldots,B^{l}_{ik_i} $ be the branches of $(\widetilde{C}_l,[1:a^l_i:0])$, $l=1,2$, ordered in such a way that $(B_{ij}^1,[1:a^1_i:0])$ has the same topological type as $(B_{ij}^2,[1:a^2_i:0])$. From what has been said above, $B_{ij}^1$ and $B_{ij}^2$ have the same characteristic exponents relative to $L_\infty$ and $k_{L_\infty}(B_{ij}^1,B_{ij'}^1)=k_{L_\infty}(B_{ij}^2,B_{ij'}^2).$
The open set $U=\{[x:y:z]\in \P^2:x\neq0\}$ contains all the points at infinity of $C_l,l=1,2$.
We can use the coordinate chart $\varphi:U\to\C^2$ defined by $\varphi([x:y:z])=(z/x,y/x)$ to obtain a Newton-Puiseux parametrization of the branches $\varphi(B^l_{ij})$.
Let $D_{\epsilon_0}$ be the open disk of radius $\epsilon_0>0$ centered at the origin, with $\epsilon_0$ sufficiently small such that there exist Newton-Puiseux parametrizations $\gamma^l_{ij}:D_{\epsilon_0}\to\C^2$ of $\varphi(B^l_{ij})$ given by
$$\gamma^l_{ij}(w)=\left(w^{d_{ij}},a_i^l+\sum_{k>0}a^l_{ijk}w^k\right).$$
Let $\Gamma^{l}_{ij}:D_{\epsilon_0}\backslash\{0\}\to\C^2$ be given by
$$\Gamma_{ij}^l(w)=(\iota^{-1}\circ\varphi^{-1}\circ\gamma^l_{ij})(w)=\left(\frac{1}{w^{d_{ij}}},\frac{a_i^l+\sum_{k>0}a^l_{ijk}w^k}{w^{d_{ij}}}\right),l=1,2.$$
Consider the compact set $K^l_\epsilon=C_{l}\backslash \bigcup_{ij} \Gamma^l_{ij}(D_\epsilon\backslash\{0\}),l=1,2$.
We will prove that there exists $\epsilon>0$ such that the map
$$\begin{array}{rcl}
\Phi:C_1\backslash K^1_\epsilon&\longrightarrow&C_2\backslash K^2_\epsilon\\
\Gamma^1_{ij}(w)&\longmapsto&\Gamma^2_{ij}(w)
\end{array}$$
is bilipschitz.
\begin{claim}
Consider the projection $P\colon \C^2 \to \C$ given by $P(x,y)=x$.
In order to check that $\Phi$ is a Lipschitz map, it is enough to consider points in $C_1\backslash K^1_\epsilon$ with the same $x$-coordinate. That is, it suffices that there exist a constant $c>0$ such that
$$d\bigl(\Gamma^2_{ij}(w'),\Gamma^2_{i'j'}(w'')\bigr)\leq c d\bigl(\Gamma^1_{ij}(w'),\Gamma^1_{i'j'}(w'')\bigr),$$
for all $w',w''$ such that $P(\Gamma_{ij}^1(w'))=P(\Gamma_{i'j'}^1(w''))$.
\end{claim}
Indeed, let $\Gamma_{ij}^1(w)$ and $\Gamma_{i'j'}^1(w')$ be any two elements of $C_1\backslash K^1_\epsilon$ and suppose that $1/\epsilon^{d_{ij}}\leq 1/\epsilon^{d_{i'j'}}$.
Let $\alpha$ be a curve in $\C\backslash D_{1/\epsilon^{d_{ij}}}$ joining $P(\Gamma_{ij}^1(w))$ to $P(\Gamma_{i'j'}^1(w'))$ as in Lemma \ref{semnome}. Let $\tilde{\alpha}_1$ (resp.\ $\tilde{\alpha}_2$) be the lifting of $\alpha$ by the restriction
$P|_{\Gamma_{ij}^1(D_{\epsilon}\backslash \{0\})}$ (resp.\ $P|_{\Gamma_{ij}^2(D_{\epsilon}\backslash \{0\})}$) with origin $ \Gamma_{ij}^1(w)$ (resp.\
$\Gamma_{ij}^2(w)$).
Consider the unique $w'' \in D_{\epsilon}$ such that $\Gamma^1_{ij}(w'')$ is the end of $\tilde{\alpha}_1$. Notice that $P\circ\Gamma^1_{ij}=P\circ\Gamma^2_{ij}$ and by uniqueness of lifts $\tilde{\alpha}_2=\Gamma^2_{ij}\circ(\Gamma^1_{ij})^{-1}\circ \tilde{\alpha}_1 $ which implies that $\Gamma^2_{ij}(w'')$ is the end of $\tilde{\alpha}_2$.
We have
$$
d\bigl(\Gamma^2_{ij}(w),\Gamma^2_{i'j'}(w')\bigr) \leq \len(\tilde \alpha_2) +
d\bigl(\Gamma^2_{ij}(w''),\Gamma^2_{i'j'}(w')\bigr).$$
According to Remark \ref{bilip}, there are constants, say $c_1$ and $c_2$, such that
$ \len(\tilde \alpha_2) \leq c_1\len(\alpha)\leq c_1c_2 \len(\tilde \alpha_1)$. By hypothesis (the points $\Gamma^1_{ij}(w'')$ and $\Gamma^1_{i'j'}(w')$ have the same $x$-coordinate), there exists a constant $c>0$ such that
$$d\bigl(\Gamma^2_{ij}(w''),\Gamma^2_{i'j'}(w')\bigr)\leq c\, d\bigl(\Gamma^1_{ij}(w''),\Gamma^1_{i'j'}(w')\bigr).$$
Therefore, setting $C=\max\{c_1c_2,c\}$, we obtain
$$d\bigl(\Gamma^2_{ij}(w),\Gamma^2_{i'j'}(w')\bigr) \leq C\Bigl( \len(\tilde \alpha_1) +
d\bigl(\Gamma^1_{ij}(w''),\Gamma^1_{i'j'}(w')\bigr)\Bigr).$$
Applying Lemma \ref{semnome} to $C_1$ with
$u=\Gamma^1_{ij}(w)$ and $u'=\Gamma^1_{i'j'}(w')$, we then have
$$
d\bigl(\Gamma^2_{ij}(w),\Gamma^2_{i'j'}(w')\bigr) \leq C M
d\bigl(\Gamma^1_{ij}(w),\Gamma^1_{i'j'}(w')\bigr) .$$
This proves that $\Phi$ is Lipschitz, and the claim is established.
Now, let $B^1_{ij}$ and $B^1_{i'j'}$ be branches of $\widetilde{C}_1$
with $i\neq i'$. Let $s\in(0,1]\mapsto \Gamma^1_{ij}(ws^{1/d_{ij}})$ and $s\in(0,1]\mapsto \Gamma^1_{i'j'}(w's^{1/d_{i'j'}})$ be the two real arcs with $w^{d_{ij}}=(w')^{d_{i'j'}}$. Then we have
\begin{multline*}
d\bigl(\Gamma^1_{ij}(ws^{1/d_{ij}}), \Gamma^1_{i'j'}(w's^{1/d_{i'j'}})\bigr) = \frac{1}{s|w^{d_{ij}}|}
\bigg| a^1_{i}-a^1_{i'}
+ \sum_{k>0}a^1_{ijk}w^k{s^{k/d_{ij}}} \\ -\sum_{k>0}a^1_{i'j'k}(w')^k{s^{k/d_{i'j'}}} \bigg|
\end{multline*}
and
\begin{multline*}
d\bigl(\Phi(\Gamma^1_{ij}(ws^{1/d_{ij}})), \Phi(\Gamma^1_{i'j'}(w's^{1/d_{i'j'}}))\bigr) = \frac{1}{s|w^{d_{ij}}|}
\bigg| a^2_{i}-a^2_{i'}
+ \sum_{k>0}a^2_{ijk}w^k{s^{k/d_{ij}}} \\ -\sum_{k>0}a^2_{i'j'k}(w')^k{s^{k/d_{i'j'}}} \bigg|.
\end{multline*}
Hence the ratio
\begin{equation}\label{quotient1}
d\bigl(\Gamma^1_{ij}(ws^{1/d_{ij}}), \Gamma^1_{i'j'}(w's^{1/d_{i'j'}})\bigr)\, \Big/ \,
d\bigl(\Phi(\Gamma^1_{ij}(ws^{1/d_{ij}})), \Phi(\Gamma^1_{i'j'}(w's^{1/d_{i'j'}}))\bigr)
\end{equation}
tends to the non-zero constant $ \frac{|a^1_{i}-a^1_{i'}|}{|a^2_{i}-a^2_{i'}|}$ as $s$ tends to $0$, for every such pair $(w,w')$. So there
exists $\epsilon >0$ such that for each such $(w,w')$ with $|w| = 1$ and
each $s < \epsilon$, the quotient (\ref{quotient1}) belongs to $[1/c,c]$ for some constant $c>0$.
Now, consider a branch $B^1_{ij}$ of $\widetilde{C}_1$ and its corresponding branch $B^2_{ij}$ of $\widetilde{C}_2$.
Let $s\in(0,1]\mapsto \Gamma^1_{ij}(ws)$ and $s\in(0,1]\mapsto \Gamma^1_{ij}(w's)$ be the two real arcs with $w^{d_{ij}}=(w')^{d_{ij}}$.
Then we have
$$ d\bigl(\Gamma^1_{ij}(ws), \Gamma^1_{ij}(w's)\bigr) = \frac{1}{s^{d_{ij}}|w^{d_{ij}}|}
\bigg|
\sum_{k>0}a^1_{ijk}(w^k-(w')^k){s^{k}} \bigg|
$$
and
$$
d\bigl(\Phi(\Gamma^1_{ij}(ws)), \Phi(\Gamma^1_{ij}(w's))\bigr) = \frac{1}{s^{d_{ij}}|w^{d_{ij}}|}
\bigg|
\sum_{k>0}a^2_{ijk}(w^k-(w')^k){s^{k}} \bigg|.
$$
Let $k_0$ be the minimal element of $\{k ;a^1_{ijk}\neq 0 \text{ and } w^k \neq(w')^k \}$.
Then $k_0/d_{ij}$ is a characteristic exponent for $B^1_{ij}$ relative to $L_\infty$, so $a^1_{ijk_0}$ and $a^2_{ijk_0}$ are non-zero.
Hence the ratio
\begin{equation}\label{quotient2}
d\bigl(\Gamma^1_{ij}(ws), \Gamma^1_{ij}(w's)\bigr)\, \Big/ \,
d\bigl(\Phi(\Gamma^1_{ij}(ws)), \Phi(\Gamma^1_{ij}(w's))\bigr)
\end{equation}
tends to the non-zero constant $c_{ijk_0}= \frac{|a^1_{ijk_0}|}{|a^2_{ijk_0}|}$ as $s$ tends to $0$.
Notice that the integer $k_0$ depends on the pair of points $(w,w')$.
But $k_0/d_{ij}$ is a characteristic exponent relative to $L_\infty$ of $B^1_{ij}$.
Therefore there is a finite number of values for $k_0$ and $c_{ijk_0}$.
Moreover, the set of pairs $(w,w')$ such that $w\neq w'$ and $w^{d_{ij}}=(w')^{d_{ij}}$ consists of a disjoint union of $d_{ij}-1$ lines, say $L_l=\{(w,\exp(2\pi\sqrt{-1}\,l/d_{ij})w):w\in \C^*\}$, $l=1,\ldots, d_{ij}-1$.
Observe that for any $(w,w')\in L_l$ the quotient (\ref{quotient2}) tends, as $s\to0$, to a positive constant which does not depend on the pair $(w,w')$.
So there exists $\epsilon_1 >0$ such that for each such $(w,w')$ with $|w| = 1$ and each $s \leq \epsilon_1$, the quotient (\ref{quotient2}) belongs to $[1/c,c]$ for some constant $c>0$, as claimed.
For the case of branches $B_{ij}^1$ and $B_{ij'}^1$ with $j\neq j'$, the same arguments work, taking into account their coincidence exponent relative to $L_\infty$.
\end{proof}
\section{Lipschitz geometry of complex algebraic plane curves }
In this section, we present the complete classification of the Lipschitz geometry of complex algebraic plane curves. We define the Lipschitz graph of a complex algebraic plane curve, a combinatorial object that encodes its Lipschitz geometry.
Let $C$ be a complex algebraic plane curve.
A sequence of \textbf{good minimal resolutions} of $\widetilde{C}$ produces a smooth curve $\widetilde{\mathcal{C}}$. By \cite[Lemma 9.2.3]{brieskorn}, the connected components of $\widetilde{\mathcal{C}}$ correspond bijectively to the irreducible components of $C$.
\begin{definition}\label{Lipgraph}
Let $C$ be a complex algebraic plane curve with irreducible components $C_1,\ldots, C_n$.
A \textbf{good minimal resolution} of $\widetilde{C}\cup L_\infty$ produces exceptional curves $E_1,\ldots,E_{r}$, the strict transforms $\mathcal{C}_1,\ldots, \mathcal{C}_{n}$ of the curves $C_1,\ldots, C_{n}$, and the strict transform $\mathcal{L}_\infty$ of the line at infinity $L_\infty$.
The \textbf{Lipschitz graph of $C$} is a rooted graph with vertices $V_k$ corresponding to the curves $E_k$, each labeled with the self-intersection number of $E_k$, unlabeled vertices $W_i$ corresponding to the curves $\mathcal{C}_i$, and a root corresponding to $\mathcal{L}_\infty$.
We join two vertices by one edge for each intersection point of the corresponding curves.
\end{definition}
\begin{remark}\label{disunion}
Let $p_1,\ldots,p_m$ be the singular points of $\widetilde{C}\cup L_\infty$.
We point out that the Lipschitz graph of $C$ is obtained as a quotient of the disjoint union of the individual dual resolution graphs of minimal good resolutions of $(\widetilde{C}\cup L_\infty,p_i)$, by identifying,
for each $j=1,\ldots,n$, all vertices corresponding to branches of the irreducible component $C_j$.
Then it is clear that the Lipschitz graph of $C$ is determined by the topological types of the germs $(\widetilde{C}\cup L_\infty,p_i)$ and $(\widetilde{C}_j\cup L_\infty,p_i)$.
\end{remark}
\begin{example}\label{quotient}
Let $C$ be a complex algebraic plane curve defined by $x[y^2-x^2(1+x)]=0$. We have two singular points for $\widetilde{C}\cup L_\infty$, namely $[0:0:1]$ and $[0:1:0]$.
The dual resolution graph of a good minimal resolution at the singular point $[0:0:1]$ is shown in Figure \ref{fig:dualresolution}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]
\clip(-2,-1.8) rectangle (8,1);
\draw [color=blue, line width=1pt ] plot [smooth, tension=2] coordinates { (1,1) (0,0) (-1,0) (0,0) (1,-1)};
\draw [line width=1pt, color=green] (0,-1)-- (0,1);
\begin{scope}[xshift=3cm]
\draw [line width=1pt,color=black] (-1,0) -- (2,0);
\draw [line width=1pt,color=green] (0,-1) -- (0,1);
\draw [color=blue, line width=1pt ] plot [smooth, tension=1] coordinates { (0.5,1) (1,-1) (1.5,1)};
\end{scope}
\begin{scope}[xshift=7cm,yshift=-1cm]
\draw [->,line width=1pt,color=green] (0,0) -- (-1,2);
\draw [->,line width=1pt,color=blue] (0,0) -- (0,2);
\draw [->,line width=1pt,color=blue] (0,0) -- (1,2);
\begin{scriptsize}
\draw [fill=black] (0,0) circle (2pt);
\draw[color=black] (0,-0.5) node {\normalsize$-1$};
\end{scriptsize}
\end{scope}
\end{tikzpicture}
\caption{The blue arrow vertices correspond to the branches of $C_1: y^2-x^2(1+x)=0$ and the green one corresponds to the branch of $C_2: x=0$.}
\label{fig:dualresolution}
\end{figure}
The dual resolution graph of a good minimal resolution at the singular point $[0:1:0]$ is shown in Figure \ref{fig:dual}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.8cm,y=0.8cm]
\clip(-1,-1) rectangle (5,2);
\draw [line width=1pt] (0,0)-- (2,0);
\draw [line width=1pt] (2,0)-- (4,0);
\draw [->,line width=1pt,color=green] (0,0) -- (0,2);
\draw [->,line width=1pt,color=blue] (4,0) -- (3,2);
\draw [->,line width=1pt,color=red] (4,0) -- (5,2);
\begin{scriptsize}
\draw [fill=black] (0,0) circle (2pt);
\draw[color=black] (0,-0.4) node {\normalsize$-2$};
\draw [fill=black] (2,0) circle (2pt);
\draw[color=black] (2,-0.4) node {\normalsize$-2$};
\draw [fill=black] (4,0) circle (2pt);
\draw[color=black] (4,-0.4) node {\normalsize$-1$};
\end{scriptsize}
\end{tikzpicture}
\caption{The blue arrow vertex corresponds to the branch of $C_1: y^2-x^2(1+x)=0$ and the green one corresponds to the branch of $C_2: x=0$.
The red arrow vertex corresponds to the branch of the line at infinity.}
\label{fig:dual}
\end{figure}
Connecting these graphs as described in Remark \ref{disunion}, we obtain the Lipschitz graph of $C$ (see Fig. \ref{fig:lipgraph}).
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.7cm,y=0.7cm]
\draw [line width=1pt] (0,0)-- (2,0);
\draw [line width=1pt] (2,0)-- (4,0);
\draw [line width=1pt] (4,0)-- (6,0);
\draw [line width=1pt] (6,0)-- (6,2);
\draw [line width=1pt] (2,2)-- (2,0);
\draw [line width=1pt] (4,3)-- (2,2);
\draw [line width=1pt] (4,3)-- (6,2);
\draw [shift={(3.5,1.48)},line width=1pt] plot[domain=1.2529983209850046:2.807890477052243,variable=\t]({1*1.6001249951175691*cos(\t r)+0*1.6001249951175691*sin(\t r)},{0*1.6001249951175691*cos(\t r)+1*1.6001249951175691*sin(\t r)});
\begin{scriptsize}
\draw [fill=black] (0,0) circle (2pt);
\draw [color=black] (0,0) circle (4pt);
\draw [fill=black] (2,0) circle (2pt);
\draw[color=black] (2,-0.5) node {\normalsize $-1$};
\draw [fill=black] (4,0) circle (2pt);
\draw[color=black] (4,-0.5) node {\normalsize $-2$};
\draw [fill=black] (6,0) circle (2pt);
\draw[color=black] (6,-0.5) node {\normalsize$-2$};
\draw [fill=blue] (6,2) circle (2pt);
\draw [fill=blue] (2,2) circle (2pt);
\draw [fill=black] (4,3) circle (2pt);
\draw[color=black] (4,3.5) node {\normalsize$-1$};
\end{scriptsize}
\end{tikzpicture}
\caption{Lipschitz graph of Example \ref{quotient}. The vertices corresponding to irreducible components of the curve are distinguished from the other vertices by the fact that they are not labeled; to improve the readability of the graph we also draw such vertices in a distinct color.
}
\label{fig:lipgraph}
\end{figure}
\end{example}
We can also reverse the process: starting from the Lipschitz graph of a complex algebraic plane curve $C$, we recover the individual dual resolution graphs of minimal good resolutions of $(\widetilde{C}\cup L_\infty,p_i)$. Then by \cite[Theorem 8.1.7]{Wall} we extract the following data: the topological type of the germ of the curve $\widetilde{C}\cup L_\infty$ at each of its singular points.
\begin{example}
Suppose that Figure \ref{fig:given} is a Lipschitz graph of a complex algebraic plane curve $C$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm]
\draw [line width=1pt] (-3,0)-- (-1,0);
\draw [line width=1pt] (-1,0)-- (1,0);
\draw [line width=1pt] (1,0)-- (1,2);
\draw [line width=1pt] (1,0)-- (3,0);
\draw [line width=1pt] (3,0)-- (5,0);
\draw [line width=1pt] (-3,0)-- (-3,-2);
\draw [line width=1pt] (-3,0)-- (-3,2);
\draw [line width=1pt] (-3,2)-- (-1,2);
\draw [line width=1pt] (-1,2)-- (0,4);
\draw [line width=1pt] (0,4)-- (1,2);
\begin{scriptsize}
\draw [fill=black] (-3,0) circle (2pt);
\draw[color=black] (-3.5,-0.5) node {\normalsize $-1$};
\draw [fill=blue] (-1,0) circle (2pt);
\draw [fill=black] (1,0) circle (2pt);
\draw[color=black] (1,-0.5) node {\normalsize $-1$};
\draw [fill=black] (1,2) circle (2pt);
\draw [color=black] (1,2) circle (4pt);
\draw [fill=black] (3,0) circle (2pt);
\draw[color=black] (3,-0.5) node {\normalsize $-2$};
\draw [fill=black] (5,0) circle (2pt);
\draw[color=black] (5,-0.5) node {\normalsize$-2$};
\draw [fill=black] (-3,-2) circle (2pt);
\draw[color=black] (-3.6,-2) node {\normalsize$-3$};
\draw [fill=black] (-3,2) circle (2pt);
\draw[color=black] (-3.6,2) node {\normalsize$-2$};
\draw [fill=blue] (-1,2) circle (2pt);
\draw [fill=black] (0,4) circle (2pt);
\draw[color=black] (-0.6,4) node {\normalsize$-1$};
\end{scriptsize}
\end{tikzpicture}
\caption{A Lipschitz graph of a complex algebraic plane curve $C$.}
\label{fig:given}
\end{figure}
If we erase the vertices corresponding to the irreducible components of $\widetilde{C}\cup L_\infty$, we get three graphs with some dangling edges.
We put an arrow vertex at each dangling edge.
Before doing so, we color the vertices corresponding to the irreducible components, together with the edges connected to them, in order to discern which branches belong to which irreducible component (see Fig. \ref{fig:colorvertex}).
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm]
\draw [line width=1pt,color=blue] (-3,0)-- (-1,0);
\draw [line width=1pt,color=blue] (-1,0)-- (1,0);
\draw [line width=1pt,color=red] (1,0)-- (1,2);
\draw [line width=1pt] (1,0)-- (3,0);
\draw [line width=1pt] (3,0)-- (5,0);
\draw [line width=1pt] (-3,0)-- (-3,-2);
\draw [line width=1pt] (-3,0)-- (-3,2);
\draw [line width=1pt,color=green] (-3,2)-- (-1,2);
\draw [line width=1pt, color=green] (-1,2)-- (0,4);
\draw [line width=1pt,color=red] (0,4)-- (1,2);
\begin{scriptsize}
\draw [fill=black] (-3,0) circle (2pt);
\draw[color=black] (-3.5,-0.5) node {\normalsize $-1$};
\draw [fill=blue] (-1,0) circle (2pt);
\draw [fill=black] (1,0) circle (2pt);
\draw[color=black] (1,-0.5) node {\normalsize $-1$};
\draw [fill=green] (-1,2) circle (2pt);
\draw [fill=black] (3,0) circle (2pt);
\draw[color=black] (3,-0.5) node {\normalsize $-2$};
\draw [fill=black] (5,0) circle (2pt);
\draw[color=black] (5,-0.5) node {\normalsize$-2$};
\draw [fill=black] (-3,-2) circle (2pt);
\draw[color=black] (-3.6,-2) node {\normalsize$-3$};
\draw [fill=black] (-3,2) circle (2pt);
\draw[color=black] (-3.6,2) node {\normalsize$-2$};
\draw [fill=red] (1,2) circle (2pt);
\draw [color=red] (1,2) circle (4pt);
\draw [fill=black] (0,4) circle (2pt);
\draw[color=black] (-0.6,4) node {\normalsize$-1$};
\end{scriptsize}
\end{tikzpicture}
\caption{There are three irreducible components, say, $R$ (red) corresponding to the line at infinity, $G$ (green) and $B$ (blue) corresponding to the irreducible components of $C$.}
\label{fig:colorvertex}
\end{figure}
Now, we delete the vertices corresponding to the irreducible components and put arrow vertices at the dangling edges, with the same color as the corresponding edge (see Fig.\ref{fig:cut}).
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm]
\draw [->,line width=1pt,color=blue] (-3,0)-- (-1.2,0);
\draw [line width=1pt] (-3,0)-- (-3,-2);
\draw [line width=1pt] (-3,0)-- (-3,2);
\draw [->,line width=1pt, color=green] (-3,2)-- (-1.2,2);
\draw [<-,line width=1pt, color=blue] (-0.8,0)-- (1,0);
\draw [->,line width=1pt, color=red] (1,0)-- (1,1.8);
\draw [line width=1pt ] (1,0)-- (3,0);
\draw [line width=1pt] (3,0)-- (5,0);
\draw [<-,line width=1pt,color=green] (-0.9,2.2)-- (0,4);
\draw [->,line width=1pt,color=red] (0,4)-- (0.9,2.2);
\begin{scriptsize}
\draw [fill=black] (-3,0) circle (2pt);
\draw[color=black] (-3.5,-0.5) node {\normalsize $-1$};
\draw [fill=black] (1,0) circle (2pt);
\draw[color=black] (1,-0.5) node {\normalsize $-1$};
\draw [fill=black] (3,0) circle (2pt);
\draw[color=black] (3,-0.5) node {\normalsize $-2$};
\draw[color=black] (3,-1.3) node {\normalsize$\mathcal{G}_2$};
\draw [fill=black] (5,0) circle (2pt);
\draw[color=black] (5,-0.5) node {\normalsize$-2$};
\draw [fill=black] (-3,-2) circle (2pt);
\draw[color=black] (-3.6,-2) node {\normalsize$-3$};
\draw[color=black] (-3,-2.8) node {\normalsize$\mathcal{G}_1$};
\draw [fill=black] (-3,2) circle (2pt);
\draw[color=black] (-3.6,2) node {\normalsize$-2$};
\draw [fill=black] (0,4) circle (2pt);
\draw[color=black] (-0.6,4) node {\normalsize$-1$};
\draw[color=black] (0.7,4) node {\normalsize$\mathcal{G}_3$};
\end{scriptsize}
\end{tikzpicture}
\caption{There are three singular points of $\widetilde{C}\cup L_\infty$, say $p_1,p_2$ and $p_3$ with dual resolution graphs $\mathcal{G}_1, \mathcal{G}_2$ and $\mathcal{G}_3$, respectively.}
\label{fig:cut}
\end{figure}
The colors tell us the relation between branches and irreducible components.
This, together with the dual resolution graphs $\mathcal{G}_1, \mathcal{G}_2$ and $\mathcal{G}_3$ (see Fig. \ref{fig:cut}), is sufficient to determine the topological type of the germ of each irreducible component at each of its singular points.
To see this we repeatedly subject the graphs $\mathcal{G}_1, \mathcal{G}_2$ and $\mathcal{G}_3$ to a contraction operation, which corresponds to blowing down a curve.
We call a vertex in a dual resolution graph (not associated with a minimal resolution) \textbf{contractible} when it has label $-1$ and valency less than three.
Contraction of such a vertex consists of the following:
\begin{itemize}
\item if the vertex has valency 2: add one to the self-intersection numbers of its labeled adjacent vertices, remove the vertex, and amalgamate its two adjacent edges into one edge;
\item if the vertex has valency 1: add one to the self-intersection number of its labeled adjacent vertex, and remove the vertex together with its adjacent edge.
\end{itemize}
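In symbols (our shorthand, assuming the contractible vertex meets its neighbors transversally, as is the case after a good resolution), one valency-2 contraction step reads:

```latex
% One contraction step on a chain; a and b denote the labels of the two
% neighbors of the contractible (-1)-vertex (arrow vertices carry no label).
\[
\cdots - [a] - [-1] - [b] - \cdots
\;\longmapsto\;
\cdots - [a+1] - [b+1] - \cdots
\]
```

This mirrors the effect of blowing down a $(-1)$-curve, which raises by one the self-intersection number of each curve meeting it transversally.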
\begin{definition}
The \textbf{contraction process} of a dual resolution graph of a germ of a complex algebraic plane curve $(\Gamma,p)$ with respect to one of its irreducible components $(\Gamma',p)$ consists in removing the arrow vertices, and the edges connected to them, except the ones corresponding to the branches of $(\Gamma',p)$.
To the resulting graph one repeatedly applies all possible contractions. The non-contractible graph finally obtained is the dual resolution graph of $(\Gamma',p)$.
\end{definition}
Notice that the contraction process ends with only an arrow vertex if and only if $(\Gamma',p)$ is smooth.
For instance, to determine the dual resolution graph of $(B,p_2)$ one removes the red arrow vertex and its edge and applies three contractions (see Fig. \ref{fig:contraction}).
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.7cm,y=0.7cm]
\draw [line width=1pt] (0,0)-- (2,0);
\draw [line width=1pt] (2,0)-- (4,0);
\draw [->,line width=1pt,color=blue] (0,0) -- (0,2);
\draw [line width=1pt] (6,0)-- (10,0);
\draw [->,line width=1pt,color=blue] (6,0) -- (6,2);
\draw [->,line width=1pt,color=blue] (12,0) -- (12,2);
\draw [->,line width=1pt,color=blue] (13,1.7) -- (13,2.02);
\begin{scriptsize}
\draw [fill=black] (0,0) circle (2pt);
\draw[color=black] (0,-0.5) node {\normalsize$-1$};
\draw [fill=black] (2,0) circle (2pt);
\draw[color=black] (2,-0.5) node {\normalsize$-2$};
\draw [fill=black] (4,0) circle (2pt);
\draw[color=black] (4,-0.5) node {\normalsize$-2$};
\draw [fill=black] (6,0) circle (2pt);
\draw[color=black] (6,-0.5) node {\normalsize$-1$};
\draw [fill=black] (10,0) circle (2pt);
\draw[color=black] (10,-0.5) node {\normalsize $-2$};
\draw [fill=black] (12,0) circle (2pt);
\draw[color=black] (12,-0.5) node {\normalsize $-1$};
\end{scriptsize}
\end{tikzpicture}
\caption{Contraction process of the germ $(B,p_2)$.}
\label{fig:contraction}
\end{figure}
To determine the dual resolution graph of $(B,p_1)$ one removes the green edge and its arrow vertex: there are no contractible vertices (see Fig. \ref{fig:bluegraph}).
\begin{figure}[h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.8cm,y=0.8cm]
\draw [line width=1pt] (0,0)-- (2,0);
\draw [line width=1pt] (2,0)-- (4,0);
\draw [->,line width=1pt,color=blue] (2,0) -- (2,2);
\begin{scriptsize}
\draw [fill=black] (0,0) circle (2pt);
\draw[color=black] (-0.14,-0.5) node {\normalsize$-3$};
\draw [fill=black] (2,0) circle (2pt);
\draw[color=black] (2,-0.5) node {\normalsize$-1$};
\draw [fill=black] (4,0) circle (2pt);
\draw[color=black] (4,-0.5) node {\normalsize$-2$};
\end{scriptsize}
\end{tikzpicture}
\caption{Minimal resolution graph of the germ $(B,p_1)$.}
\label{fig:bluegraph}
\end{figure}
Thus, we extract from the Lipschitz graph the following data:
\begin{itemize}
\item the number of irreducible components;
\item the three singular points, say $p_1,p_2$ and $p_3$, with dual resolution graphs $\mathcal{G}_1, \mathcal{G}_2$ and $\mathcal{G}_3$, respectively.
By \cite[Theorem 8.1.7]{Wall}, this is equivalent to knowing the topological types of the germs $(\widetilde{C}\cup L_\infty,p_1), (\widetilde{C}\cup L_\infty,p_2)$ and $(\widetilde{C}\cup L_\infty,p_3)$;
\item the dual resolution graphs of the germs $(G,p_1),(B,p_1),(B,p_2)$ and $(G,p_3)$, obtained by the contraction process.
By \cite[Theorem 8.1.7]{Wall}, this is equivalent to knowing the topological types of these germs.
\end{itemize}
\end{example}
From the above example it is easy to see that the equivalence $(\ref{itii})\Leftrightarrow(\ref{itiii})$ of Theorem \ref{global} holds. Now, we deal with the equivalence between (\ref{iti}) and (\ref{itii}).
\begin{proof}[Proof of $(\ref{iti}) \Leftrightarrow (\ref{itii})$ of Theorem \ref{global}]
We start assuming that there exists a bilipschitz map $\phi: C\to \Gamma$.
By \cite[Theorem 1.1]{Anne}, for each singular point $p\in C$ the topological type of the germ $(C,p)$ is the same as the topological type of $(\Gamma,\phi(p))$.
By item (\ref{it2}) of Theorem \ref{atinfinity}, there is a bijection $\psi$ between the set of points at infinity of $C$ and the set of points at infinity of $\Gamma$ such that $(\widetilde{C}\cup L_\infty,p)$ has the same topological type as $(\widetilde{\Gamma}\cup L_\infty,\psi(p))$.
Restricting $\phi$ to smooth points of $C$ we get a homeomorphism between $C\backslash \Sigma(C)=\bigcup_{i\in I}C_i\backslash \Sigma(C)$ and $\Gamma\backslash \Sigma(\Gamma)=\bigcup_{j\in J}\Gamma_j\backslash \Sigma(\Gamma)$, where $\Sigma(C)$ and $\Sigma(\Gamma)$ denote the singular points of $C$ and $\Gamma$, respectively.
Since $C_i, \Gamma_j$ are irreducible and $\Sigma(C)$ and $\Sigma(\Gamma)$ are finite, $C_i\backslash\Sigma(C)$ and $\Gamma_j\backslash\Sigma(\Gamma)$ are connected.
Then the map $\sigma:I\to J$, defined by
$\sigma(i)=j\in J$ if and only if $\phi(C_i\backslash\Sigma(C))= \Gamma_j\backslash\Sigma(\Gamma)$, is a bijection.
We extend the map $\phi|_{C_i\backslash \Sigma(C)}$ to the topological closure of $C_i\backslash \Sigma(C)$ and obtain a bilipschitz map $\phi_i:C_i\to\Gamma_{\sigma(i)}$, $\phi_i=\phi|_{C_i}$.
Applying \cite[Theorem 1.1]{Anne} to $\phi_i:C_i\to\Gamma_{\sigma(i)}$, we obtain that for each singular point $p\in C_i$ the topological type of the germ $(C_i,p)$ is the same as the topological type of $(\Gamma_{\sigma(i)},\phi(p))$.
By item (\ref{it2}) of Theorem \ref{atinfinity}, there is a bijection $\psi_i$ between the set of points at infinity of $C_i$ and the set of points at infinity of $\Gamma_{\sigma(i)}$ such that $(\widetilde{C}_i\cup L_\infty,p)$ has the same topological type as $(\widetilde{\Gamma}_{\sigma(i)}\cup L_\infty,\psi_i(p))$.
Moreover, $\psi_i$ can be chosen to be the restriction of $\psi$ to the points at infinity of $C_i$.
Recall the chart $\iota:\C^2\to\P^2$ given by $\iota(x,y)=[x:y:1]$.
Then the bijection $\varphi:\Sigma(\widetilde{C}\cup L_\infty)\to\Sigma(\widetilde{\Gamma}\cup L_\infty)$ defined by
$$\varphi(p)=\begin{cases}
\psi(p)& \text{if $p\in L_\infty$},\\
\iota(\phi(\iota^{-1}(p)))& \text{otherwise},
\end{cases}$$
gives us the bijection of item (\ref{itii}) of Theorem \ref{global}.
Now we prove the converse, i.e., that (\ref{itii}) implies (\ref{iti}) of Theorem \ref{global}.
We can assume that $I=J=\{1,\ldots,m\}$ and $\sigma=\mathrm{id}$. Item (\ref{itii}) of Theorem \ref{global} implies that the curves $C_i$ and $\Gamma_i$ have the same number of points at infinity and of singular points for $i=0,\ldots,m$, where $C_0=C$ and $ \Gamma_0=\Gamma$.
Let $p_1,\ldots,p_{s}$ be the singular points of $\widetilde{C}$ and let $q_1,\ldots,q_{s}$ be the singular points of $\widetilde{\Gamma}$ which are not points at infinity of $C$ and $\Gamma$, respectively.
We order them in such a way that $(\widetilde{C}_i,p_l)$ has the same topological type as $(\widetilde{\Gamma}_i,q_l)$ for $l=1,\ldots,s$ and $i=0,1,\ldots, m$.
Similarly, let $p_{s+1},\ldots,p_{m}$ be the points at infinity of $C$ and let $q_{s+1},\ldots,q_{m}$ be the points at infinity of $\Gamma$, ordered in such a way that $(\widetilde{C}_i\cup L_\infty,p_l)$ has the same topological type as $(\widetilde{\Gamma}_i\cup L_\infty,q_l)$ for $l=s+1,\ldots,m$ and $i=0,1,\ldots,m.$
Let $B(p_l)\subset \P^2$ be a regular coordinate ball, that is, there exists a smooth coordinate ball $B'(p_l)\supseteq
\overline{B(p_l)}$.
Shrinking
$B'(p_l)$ if necessary, we can assume that $B'(p_l)\cap B'(p_j)=\emptyset$ for $l\neq j$ and we can apply
\cite[Theorem 1.1]{Anne}, i.e., for $l=1,\ldots,s$ there exists a bilipschitz map
$$\phi_l:C\cap \iota^{-1}(B'(p_l))\to \phi_l(C\cap \iota^{-1}(B'(p_l)))\subset\Gamma$$ which is biholomorphic except at $\iota^{-1}(p_l)$ and $\phi_l(\iota^{-1}(p_l))=\iota^{-1}(q_l)$.
Similarly, by Theorem \ref{atinfinity}, there exists a bilipschitz map $$\Phi: C\cap \Bigl(\bigcup_{l=s+1}^m\iota^{-1}(B'(p_l)) \Bigr)\to \Phi\Bigl(C\cap\Bigl(\bigcup_{l=s+1}^m\iota^{-1}(B'(p_l))\Bigr)\Bigr)\subset \Gamma$$
which is biholomorphic. Then the curve $C$ is almost covered by the domains of these bilipschitz maps.
The part that is missing lies inside $C\backslash \Bigl(\bigcup_{l=1}^m\iota^{-1}(B(p_l))\Bigr)$, which is a union of connected compact orientable surfaces $K_i$ with boundary. More precisely, let $K_i=C_i\backslash \Bigl(\bigcup_{l=1}^m\iota^{-1}(B(p_l))\Bigr)$.
Recall that connected compact orientable surfaces with boundary are classified up to diffeomorphism by the Euler characteristic and the number of connected components of the boundary; see, for instance, \cite[Chapter 9, Theorem 3.11]{hirsch1976differential}.
Let us calculate the Euler characteristic of $K_i$. Shrinking
$B'(p_l)$ if necessary, we may assume that $B'(p_l)$ intersects $\widetilde{C}_i$ if and only if $p_l$ is a singular point of $\widetilde{C}_i\cup L_\infty$. By the additivity of the Euler characteristic we have:
\begin{multline*}
\chi(\widetilde{C}_i)=\chi\Bigl(\Bigl(\widetilde{C}_i \backslash \bigcup_l B(p_l)\Bigr)\cup \Bigl(\bigcup_l \overline{B(p_l)}\cap \widetilde{C}_i\Bigr)\Bigr) \\ =\chi\Bigl(\widetilde{C}_i \backslash \bigcup_l B(p_l)\Bigr)+\chi\Bigl(\bigcup_l\overline{B(p_l)}\cap \widetilde{C}_i\Bigr)-\chi\Bigl(\Bigl(\widetilde{C}_i \backslash \bigcup_l B(p_l)\Bigr)\cap \Bigl(\bigcup_l \overline{B(p_l)}\cap \widetilde{C}_i)\Bigr),
\end{multline*}
Note that all spaces appearing in the above equation are compact and triangulable. By the Conical Structure Theorem and the additive property we know that
$$\chi\Bigl(\bigcup_l\overline{B(p_l)}\cap \widetilde{C}_i\Bigr)=\sum_l\chi\bigl(\overline{B(p_l)}\cap \widetilde{C}_i\bigr)=m_i,$$
where $m_i$ is the number of singular points of $\widetilde{C}_i\cup L_\infty$.
Shrinking each $B(p_l)$ if necessary, by \cite[Corollary 2.9]{milnor} we may assume that the boundary of $B(p_l)$ intersects $\widetilde{C}_i$ transversally.
Thus there are two possibilities: $\partial B(p_l)\cap \widetilde{C}_i=\emptyset$ or
$\partial B(p_l)\cap \widetilde{C}_i$ is a smooth compact 1-manifold.
In the latter case,
by the classification theorem for smooth 1-manifolds, each connected component of the intersection is diffeomorphic to $\S^1$. In both cases we have
$$\chi\Bigl(\Bigl(\widetilde{C}_i \backslash \bigcup_l B(p_l)\Bigr)\cap \Bigl(\bigcup_l \overline{B(p_l)}\cap \widetilde{C}_i\Bigr)\Bigr)=0.$$
On the other hand, \cite[Theorem 7.1.1]{Wall} gives us a formula for the Euler characteristic of a curve in terms of its degree and its singularities. More precisely,
$$\chi(\widetilde{C}_i)=3d_i-d_i^2 +\sum_{p\in \widetilde{C}_i}\mu_p(\widetilde{C}_i)$$
where $d_i$ denotes the degree of $\widetilde{C}_i
$ and $\mu_p(\widetilde{C}_i)$ denotes the Milnor number of $\widetilde{C}_i$ at $p$.
Recall that the Milnor number is an invariant of the topological type; see, for instance, \cite[Theorem 6.5.9]{Wall}. The degree of the curve, in turn, is an invariant of the topological types of the germs $(\widetilde{C}_i\cup L_\infty,p_l)$ since
\begin{equation*}
\deg \widetilde{C}_i=\sum_{p\in \widetilde{C}_i\cap L_\infty} (\widetilde{C}_i \cdot L_\infty)_p,
\end{equation*}
where $(\widetilde{C}_i\cdot L_\infty)_p$ denotes the intersection number of $\widetilde{C}_i$ and $L_\infty$ at $p$.
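As a sanity check (ours, not part of the argument), one can evaluate both formulas on the component $C_1: y^2-x^2(1+x)=0$ of Example \ref{quotient}: its projectivization $y^2z-x^2z-x^3=0$ is a nodal cubic, smooth at its single point at infinity $[0:1:0]$ and with one node at the origin, where $\mu=1$.

```latex
% Degree from the intersection with L_infty: in the chart y=1 the curve
% reads z(1-x^2)=x^3, i.e. z=x^3/(1-x^2), and L_infty is {z=0}:
\[
\deg \widetilde{C}_1=(\widetilde{C}_1\cdot L_\infty)_{[0:1:0]}
=\operatorname{ord}_x \frac{x^3}{1-x^2}=3.
\]
% Euler characteristic from the degree-singularity formula:
\[
\chi(\widetilde{C}_1)=3\cdot 3-3^2+\mu_{(0,0)}(\widetilde{C}_1)=0+1=1,
\]
% consistent with the normalization P^1 (Euler characteristic 2)
% having the two preimages of the node identified.
```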
Also, the invariance of the degree under the Lipschitz geometry at infinity of complex algebraic curves is given in \cite[Corollary 3.2]{invariancedegree}. Having said that, we have $\chi(\widetilde{C}_i)=\chi(\widetilde{\Gamma}_i)$ and, for $K_i$,
\begin{equation}\label{euler}
\chi(K_i)=\chi(\widetilde{C}_i \backslash \cup_l B(p_l))=\chi(\widetilde{C}_i)-m_i.
\end{equation}
Let $\mathcal{B}(q_l)=\iota\Bigl(\Phi\bigl(C\cap\iota^{-1}(B(p_l))\bigr)\Bigr)\cup\{q_l\} $ for $l=s+1,\ldots,m$
and $\mathcal{B}(q_l)=\iota\Bigl(\phi_l\bigl(C\cap \iota^{-1}(B(p_l))\bigr)\Bigr)$ for $l=1,\ldots,s$, and let $$\mathcal{K}_i=\Gamma_i\backslash \Bigl(\bigcup_{l=1}^{s}\phi_l\bigl(C\cap \iota^{-1}(B(p_l))\bigr)\cup \Phi\Bigl(C\cap\bigcup_{l=s+1}^m\iota^{-1}(B(p_l))\Bigr)\Bigr).$$
To calculate the Euler characteristic of $\mathcal{K}_i$ we notice that
$\mathcal{K}_i=\Gamma_i\backslash \Bigl(\bigcup_{l=1}^m\iota^{-1}(\mathcal{B}(q_l))\Bigr).$ By arguments similar to those above, one has
\begin{equation}\label{eulerG}
\chi(\mathcal{K}_i)=\chi\Bigl(\widetilde{\Gamma}_i \backslash \bigcup_l \mathcal{B}(q_l)\Bigr)=\chi(\widetilde{\Gamma}_i)-m_i.
\end{equation}
It follows from equations (\ref{euler}) and (\ref{eulerG}) that $K_i$ and $\mathcal{K}_i$ have the same Euler characteristic.
The map $f_i:\partial K_i\to \partial \mathcal{K}_i$ defined by the restrictions $$\phi_l|_{C_i\cap\iota^{-1}(\partial B(p_l))}\text{ for } l=1,\ldots,s \text{ and } \Phi|_{C_i\cap\iota^{-1}(\partial B(p_l))} \text{ for } l=s+1,\ldots, m$$ is a diffeomorphism.
Now, we use a slight generalization of the classification of smooth compact surfaces with boundary:
\begin{lemma}\label{classification}
Let $K_i$ and $\mathcal{K}_i$ be connected compact orientable smooth surfaces with boundary and let $f_i:\partial K_i\to \partial \mathcal{K}_i$ be an orientation-preserving diffeomorphism. Then $f_i$ extends to a diffeomorphism $F_i:K_i\to \mathcal{K}_i$ if and only if $K_i$ and $\mathcal{K}_i$ have the same Euler characteristic.
\end{lemma}
\begin{proof}
The boundaries $\partial K_i, \partial \mathcal{K}_i$ are smooth compact 1-manifolds and thus their connected components are diffeomorphic to $\S^1$. Since $f_i$ is a diffeomorphism between $\partial K_i$ and $ \partial \mathcal{K}_i$, they have the same number of connected components.
Let $g_i:K_i\to \mathcal{K}_i$ be the diffeomorphism given by the classification theorem for smooth compact surfaces \cite[Chapter 9, Theorem 3.11]{hirsch1976differential}.
Up to isotopy every orientation-preserving diffeomorphism of $\S^1$ is the identity \cite[Chapter 8, Theorem 3.3]{hirsch1976differential}; hence the restriction $g_i|_{\partial K_i}$ and $f_i$ are isotopic, say by $H_i:[0,1]\times \partial K_i\to \partial \mathcal{K}_i$, $H_i(0,\cdot)=f_i$, $H_i(1,\cdot)=g_i|_{\partial K_i}$.
The collar neighborhood theorem \cite[Theorem 9.26]{leesmooth} shows that $\partial K_i$ has a collar neighborhood $\mathcal{C}$ in $K_i$, which is
the image of a smooth embedding $E: [0,1)\times \partial K_i\to K_i$ satisfying $E(0,x)=x$
for all $x\in\partial K_i$. To simplify notation, we use this embedding to identify $\mathcal{C}$
with $[0,1)\times \partial K_i$ and denote a point in $\mathcal{C}$ as an ordered pair $(s, x)$ with $s\in[0,1)$ and $x \in\partial K_i$; thus $(s, x)\in \partial K_i$ if and only if $s = 0$. For any $a\in(0,1)$, let $\mathcal{C}(a)=\{(s,x)\in \mathcal{C}: 0\leq s<a\}$ and $K_i(a)=K_i\backslash \mathcal{C}(a)$.
Let $\gamma: [0,1]\to [0,1]$ be a smooth map that satisfies $\gamma(0)=0$ and $\gamma(s)=1$
for $\frac{1}{2}\leq s\leq 1$. Define $F_i:K_i\to \mathcal{K}_i$ by
$$F_i(p)=\begin{cases}
g_i(p),& \text{if $p\in \mathrm{Int}\, K_i\left(\frac{1}{2}\right)$,}\\
\bigl(s,H_i(\gamma(s),x)\bigr),& \text{if } p=(s,x)\in \mathcal{C}.
\end{cases}$$
These definitions both give the map $g_i$ on the set $\mathcal{C}\backslash \overline{\mathcal{C}\left(\frac{1}{2}\right)}$
where they overlap,
so $F_i$ is a diffeomorphism extending $f_i$.
\end{proof}
The map $F_i:K_i\to \mathcal{K}_i$ is a diffeomorphism between compact sets, so it is a bilipschitz map.
The maps $F_i, \Phi, \phi_l$ agree on the components of the boundary of $K_i$.
It follows that there exists a bilipschitz map $\Psi:C\to \Gamma$ such that $\Psi|_{K_i}=F_i,\: \Psi|_{\iota^{-1}(B(p_l))\cap C}=\phi_l$ and
$\Psi|_{C\cap\bigl(\bigcup_{l=s+1}^m\iota^{-1}(B(p_l))\bigl)}=\Phi$.
\end{proof}
\section{Introduction}
\begin{figure}[b]
\vspace{-5pt}
\noindent \footnotesize{\textbf{This preprint will appear in the Proceedings of the 19th IFAC Symposium in System Identification. Please cite:}}
\begin{lstlisting}
@inproceedings{ribeiro_occam_2021,
author={Ant\^onio H. Ribeiro and Johannes N. Hendriks and Adrian G. Wills and Thomas B. Sch\"on},
title={{B}eyond {O}ccam's {R}azor in {S}ystem {I}dentification: {D}ouble-{D}escent when {M}odeling {D}ynamics},
year={2021},
booktitle={{P}roceedings of the 19th {IFAC} {S}ymposium in {S}ystem {I}dentification ({SYSID})}
}
\end{lstlisting}
\vspace{-5pt}
\end{figure}
Traditionally, there is a trade-off when choosing the model complexity: the model must be rich enough to capture the dynamics in the data, but not so flexible that it learns spurious random effects. This corresponds to the classical U-shape performance curve that is typically considered when choosing model complexity (see Fig.~\ref{fig:double_descent_illustration}(a)).
\begin{figure}
\centering
\subfigure[U-shape performance]{\includegraphics[width=0.49\linewidth]{figures/ce8_ushape.png}}
\subfigure[double-descent performance]{\includegraphics[width=0.49\linewidth]{figures/ce8_doubledescent.png}}
\caption{\textbf{Double-descent in the CE8 benchmark.} We show the one-step-ahead prediction error for a nonlinear ARX model in the CE8 benchmark~\citep{wigren2017coupled}. In (a), we show the \textit{``classical''} regime and a U-shape performance curve. In (b), the vertical dashed line indicates the transition from the \textit{``classical''} regime to the interpolation regime: when the model capacity is large enough to perfectly fit the data (i.e. zero training error). While the U-shape model performance is observed in the \textit{``classical''} regime and the error is large at the interpolation threshold, increasing the capacity beyond this point leads to decreasing the error in the test set. The experiment is detailed in Section~\ref{sec:additional-examples}.}
\label{fig:double_descent_illustration}
\end{figure}
As we increase the model flexibility, it is possible to reach a point where the training error is zero. At this point, the model achieves low bias and large variance. That is, the model does not generalise well and will perform poorly on an unseen test dataset. However, recent work by~\citet{belkin_reconciling_2019} has shown that if we continue to increase the model complexity beyond this point, the model can eventually start to generalise well again. That is, the bias remains low and the variance starts to decrease again. Figure~\ref{fig:double_descent_illustration}(b) shows this behavior. This is known as the \textit{double-descent} curve and it subsumes the traditional U-shaped curve.
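The mechanism can be illustrated with a minimal numerical sketch (ours, for illustration only; it is not the experimental setup of Section~\ref{sec:motivation-examples}): fit random Fourier features by the minimum-norm least-squares solution and record the test error as the number of features $m$ crosses the interpolation threshold $m=n$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy static regression problem: n noisy samples of a smooth function.
n, n_test = 40, 200
x = rng.uniform(-1, 1, n)
x_test = rng.uniform(-1, 1, n_test)
f = lambda t: np.sin(2 * np.pi * t)
y = f(x) + 0.3 * rng.standard_normal(n)

def rff(t, W, b):
    # Random Fourier features: one cosine feature per column.
    return np.cos(np.outer(t, W) + b)

test_mse = {}
for m in [5, 20, n, 10 * n]:  # under-, near-, at-, and over-parameterized
    W = 3.0 * rng.standard_normal(m)
    b = rng.uniform(0, 2 * np.pi, m)
    Phi = rff(x, W, b)
    # Minimum-norm least-squares estimate; for m > n, pinv selects the
    # interpolating solution of smallest Euclidean norm.
    theta = np.linalg.pinv(Phi) @ y
    test_mse[m] = np.mean((rff(x_test, W, b) @ theta - f(x_test)) ** 2)
```

In runs of this kind the test error typically peaks near $m=n$ and decreases again for $m\gg n$, qualitatively reproducing the curve in Fig.~\ref{fig:double_descent_illustration}(b).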
This seemingly counterintuitive behavior has been explored in related fields. In this paper, we seek to demonstrate it on data from dynamic systems using Nonlinear ARX (Auto-Regressive with eXogenous inputs) models. We give examples of different models and regularization strategies that can lead to its observation.
\vspace{-10pt}
\subsubsection{\textbf{Related work and historical development}}
Deep neural networks have achieved state-of-the-art solutions for many tasks~\citep{lecun_deep_2015}. These models, however, often have millions or, even, billions of parameters \citep{tan_efficientnet_2019}, which seems at odds with basic system identification (and statistics) tenets and the parsimonious principle: \textit{``model structure should give enough flexibility to model the system dynamics but not more''}. Furthermore, while the number of parameters is not always a perfect measure of the capacity of the model, deep learning models have been shown to have enough capacity to fit a training set labeled at random~\citep{zhang_understanding_2017}. It has also been observed that these models seem to indefinitely display increased performance as the model size increases~\citep{tan_efficientnet_2019}.
\citet{belkin_reconciling_2019} reconciled this phenomenon with the more traditional bias-variance trade-off paradigm. There, model generalization is studied in the interpolation regime, i.e., for which the model has enough capacity to perfectly (or almost perfectly) fit the training data. Although the learned predictors obtained at the interpolation threshold typically have high risk, increasing the capacity beyond this point leads to decreasing risk, sometimes achieving better performance than in the ``classical'' regime.
The double-descent performance curve has been experimentally observed in diverse machine learning settings: \citet{belkin_reconciling_2019} show it for random Fourier features, random forest and shallow networks, while~\citet{nakkiran_deep_2020} show the same phenomenon for transformers and convolutional network models. In a different line of work, the phenomenon has been studied theoretically. \citet{hastie_surprises_2019, mei_generalization_2019} provide asymptotic guarantees for regression with random features and \citet{bartlett_benign_2020} provides finite sample generalization bounds.
\vspace{-10pt}
\subsubsection{\textbf{Contributions}}
Here, we study double-descent in the usual system identification setting where the data comes from the input and output of a dynamical system. We provide experimental evidence that it can indeed be observed in this scenario (cf. Fig.~\ref{fig:double_descent_illustration}) with experiments on both artificially generated and real-world data sets. We also discuss the mechanisms that can yield the observation of the phenomenon.
\vspace{-10pt}
\subsubsection{\textbf{Code availability}} The code for reproducing the experiments is available at:\\
{\href{https://github.com/antonior92/narx-double-descent}{\hspace{13pt}https://github.com/antonior92/narx-double-descent}}
\vspace{-6pt}
\section{Motivation example}
\label{sec:motivation-examples}
\begin{figure*}[t!]
\centering
\subfigure[one-step-ahead MSE]{\includegraphics[width=0.32\linewidth]{figures/chen_rff_onestepahead.png}}
\subfigure[free-run simulation MSE]{\includegraphics[width=0.32\linewidth]{figures/chen_rff_freerun.png}}
\subfigure[parameter norm]{\includegraphics[width=0.32\linewidth]{figures/chen_rff_norm.png}}
\caption{\textbf{Double-descent performance curve when modeling the nonlinear system \eqref{eq:chen_model}.} We display the performance of the minimum-norm least-squares solution for RFF models ($\gamma = 0.6$) on data generated from~\eqref{eq:chen_model}, with $\sigma_v=0.1$ and $\omega_c = 0.7$. In (a) and (b) we show the train and test MSE. In (c), we show the parameter norm of the corresponding solutions. We keep the number of samples in the training and test datasets constant ($T =400$ and $T' = 100$) and vary the number of parameters $m$ in a log-uniform grid in $[10^{-1}T, 10^3T]$, repeating the same experiment 10 times in each of the 100 points of the grid. The solid line is the median of the 10 experiments and the shaded region delimits the inter-quartile range (i.e., the range between the $25\%$ and $75\%$ percentiles). The dashed horizontal line gives the test performance of a linear ARX model (with the same delays) that is used as baseline.}
\label{fig:random_features_dd}
\end{figure*}
We start by presenting a simple example for which the double-descent performance curve can be observed.
\subsubsection{\textbf{Dataset}}
Consider the nonlinear system presented by \citet{chen_non-linear_1990}. Let $u_t\in \mathbb{R}$ and $y_t\in \mathbb{R}$ denote the input and output, respectively. The system output is given by the difference equation
\begin{equation}\label{eq:chen_model}
\begin{split}
y_t =& \left(0.8 - 0.5e^{-y^{2}_{t-1}}\right)y_{t-1} - \left(0.3+0.9e^{-y^{2}_{t-1}}\right)y_{t-2}\\ &+ u_{t-1}+0.2u_{t-2}+0.1u_{t-1}u_{t-2}+v_t,
\end{split}
\end{equation}
where $v_t \sim \mathcal{N}(0,\sigma_v^2)$ represents the process noise and $u_t$ is generated by applying a low-pass filter with cutoff frequency $\omega_c$ to a Gaussian white noise signal with unit variance. We generate $T$ samples for training the model and a hold-out test set of $T'$ samples to evaluate its performance on unseen data.
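For concreteness, the data-generating procedure can be sketched in Python as follows. This is an illustrative sketch: the text does not fix the order of the low-pass filter, so a hypothetical first-order IIR filter parametrized by $\omega_c$ is assumed here, and all function names are ours.

```python
import numpy as np

def simulate_chen(T, sigma_v=0.1, omega_c=0.7, seed=0):
    """Simulate the nonlinear system of Eq. (chen_model).

    Assumption: the low-pass input filter is a simple first-order IIR
    filter with smoothing parameter omega_c (the filter order is not
    specified in the text).
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(T)            # white noise with unit variance
    u = np.zeros(T)
    for t in range(1, T):                 # first-order low-pass filtering
        u[t] = (1 - omega_c) * u[t - 1] + omega_c * w[t]
    v = sigma_v * rng.standard_normal(T)  # process noise v_t ~ N(0, sigma_v^2)
    y = np.zeros(T)
    for t in range(2, T):
        e = np.exp(-y[t - 1] ** 2)
        y[t] = ((0.8 - 0.5 * e) * y[t - 1]
                - (0.3 + 0.9 * e) * y[t - 2]
                + u[t - 1] + 0.2 * u[t - 2]
                + 0.1 * u[t - 1] * u[t - 2] + v[t])
    return u, y
```

The first $T$ generated samples can then be used for training and a further $T'$ samples held out for testing.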
\subsubsection{\textbf{Model}} We use a nonlinear ARX model~\citep{ljung_system_1998} to identify the proposed system. Let us denote
\begin{equation}
\label{eq:specific_x}
x_t = (u_{t-1}, u_{t-2}, y_{t-1}, y_{t-2}).
\end{equation}
We consider a linear-in-the-parameter model for predicting the output from the observed past input/output values
\begin{equation}
\label{eq:nonlin-map}
\hat{y}_t = f(x_t) = \sum_{i=1}^m \theta_i \phi_i(x_t).
\end{equation}
Given the training sequence $\{(u_t, y_t), t=1, \cdots, T\}$, the model is estimated by finding the values $\theta_i$ that minimize
\begin{equation}
\label{eq:narx-estimation}
\frac{1}{T}\sum_{t = 1}^T\left\| y_t - \sum_{i=1}^m \theta_i \phi_i(x_t)\right\|^2.
\end{equation}
Or, equivalently, in matrix form, by finding the vector $\theta \in \mathbb{R}^m$ that minimizes
\begin{equation}
\label{eq:narx-estimation-matrix}
\frac{1}{T}\|y - \Phi \theta\|^2,
\end{equation}
where $\Phi \in\mathbb{R}^{T\times m}$ is the matrix containing $\phi_i(x_t)$ at position $(t, i)$ and $y \in \mathbb{R}^T$ is the vector of outputs. Indeed, finding the optimal parameter here is an ordinary least-squares problem and its analytical solution is
\begin{equation}
\label{eq:ls-sol}
\hat{\theta} = (\Phi^\top \Phi)^{+}\Phi^\top y,
\end{equation}
where $(\Phi^\top \Phi)^{+}$ denotes the Moore-Penrose pseudo-inverse of $\Phi^\top \Phi$. Next, we detail the choice of the nonlinear feature map used in this example.
\subsubsection{\textbf{Random Fourier features (RFF)}} We use the feature map introduced by
\citet{rahimi_random_2008} to approximate the reproducing kernel Hilbert space (RKHS) defined by the Gaussian kernel ${K(x, x') = \exp(-\gamma \|x - x'\|^2)}$. More precisely, the features are generated as
\begin{equation}
\phi_i(x) = \sqrt{\frac{2}{m}}\cos(w_i^\top x + b_i),
\end{equation}
where $w_i\in\mathbb{R}^n$ is a vector with each element sampled independently from $\mathcal{N}(0, 2\gamma)$ and $b_i\in\mathbb{R}$ is sampled from a uniform distribution $\mathcal{U}[0, 2\pi)$. Here,~$\gamma$ is a tunable hyper-parameter of the method.
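A minimal Python sketch of this feature map (function and variable names are ours; the draws follow the distributions stated above):

```python
import numpy as np

def rff_features(X, m, gamma, seed=0):
    """Random Fourier features approximating the Gaussian kernel
    K(x, x') = exp(-gamma * ||x - x'||^2)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2 * gamma), size=(n, m))  # w_i ~ N(0, 2*gamma)
    b = rng.uniform(0.0, 2 * np.pi, size=m)               # b_i ~ U[0, 2*pi)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)           # rows are phi(x_t)
```

For large $m$, the inner product of two feature vectors approximates the corresponding kernel value.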
\subsubsection{\textbf{Metrics}}
We use the \textit{mean squared error (MSE)} as performance metric,
\begin{equation}
{\rm MSE} = \frac{1}{T}\sum_{t = 1}^T\|\hat{y}_{t} - y_t\|^2.
\end{equation}
We refer to the \textbf{one-step-ahead MSE} when the one-step-ahead prediction $\hat{y}_{t}$ is used in the computation. By one-step-ahead we refer to predictions computed as in Eq.~\eqref{eq:nonlin-map}, with the observed past inputs and outputs being used to predict the current output. On the other hand, we refer to the \textbf{free-run simulation MSE} when the MSE is computed for the simulations $\hat{y}_{t}^{\text{free}}$, obtained by free-run simulating the model. That is, $\hat{y}_{t}^{\text{free}}$ is computed by the recursive formula:
\begin{equation*}
\hat{y}_{t}^{\rm free} =
\begin{cases}
y_{t}\text{ for }t = 1, \cdots, n_y,\\
f(u_{t}, \cdots, u_{t-n_u+1}, \hat{y}_{t-1}^{\rm free}, \cdots, \hat{y}_{t-n_y}^{\rm free})\text{ for }t> n_y,
\end{cases}
\end{equation*}
where the previously predicted outputs (rather than the observed ones) are used to compute the next step, and $n_u$ and $n_y$ denote the number of past inputs and outputs in the regressor (here, $n_u = n_y = 2$).
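In code, the free-run simulation loop can be sketched as follows (an illustrative sketch with our own names; the regressor ordering matches Eq.~\eqref{eq:specific_x}):

```python
import numpy as np

def free_run(f, u, y0, n_u=2, n_y=2):
    """Free-run simulation: feed the model's own predictions back.

    f maps the regressor (u_{t-1}, ..., u_{t-n_u}, yhat_{t-1}, ..., yhat_{t-n_y})
    to the next output; y0 supplies the first n_y measured outputs used to
    initialize the recursion.
    """
    T = len(u)
    yhat = np.zeros(T)
    yhat[:n_y] = y0[:n_y]
    for t in range(max(n_u, n_y), T):
        # Most-recent-first ordering, as in the regressor definition
        x = np.concatenate([u[t - n_u:t][::-1], yhat[t - n_y:t][::-1]])
        yhat[t] = f(x)
    return yhat
```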
\subsubsection{\textbf{Results}}
In Fig.~\ref{fig:random_features_dd}, we show the performance of the RFF models on the training and test datasets as a function of the proportion $m/T$, i.e. the number of parameters (features) divided by the total training sequence length. In Fig.~\ref{fig:random_features_dd}(a), we show the one-step-ahead MSE, which displays the U-shaped test performance curve followed by a second descent in the test performance, resulting from performance improvements as we increase the number of features beyond the interpolation threshold ($m/T = 1$). In Fig.~\ref{fig:random_features_dd}(b), we show the free-run simulation MSE and demonstrate that we can still observe the second descent in test performance in this scenario. It is interesting to note that while the model reaches a one-step-ahead training error close to zero for $m/T = 1$ (up to numerical approximation errors\footnote{For $m/T = 1$, the matrix $\Phi$ has a large condition number, which yields numerical errors when estimating the parameters. In this example, $\nicefrac{\sigma_{\max}}{\sigma_{\text{min}}} > 10^6$, where $\sigma_{\max}$ and $\sigma_{\text{min}}$ denote the maximum and minimum singular values of $\Phi$. The error is then accumulated through the recurrence in the free-run simulation of the estimated system.}), the free-run simulation MSE on the training data approaches zero only for larger values of $m/T$. Fig.~\ref{fig:random_features_dd}(c) displays the parameter norm $\|\theta\|_2$ as a function of the proportion $m/T$, showing that it peaks at the interpolation threshold and then monotonically decreases. This is something that we will explore later. With this, we finish the presentation of our initial example. Next, we will further explore different mechanisms and settings that give rise to double-descent performance curves.
\section{Linear-in-the-parameters models}
\label{sec:linear-in-the-parameters}
In this section, we further investigate the phenomenon in the linear-in-the-parameters setting. We first study options for selecting one solution (over many) in the overparametrized regime and then present additional examples with different datasets and features.
\subsection{Selecting the solution in the interpolation regime}
\label{sec:selecting-the-solution}
In the case where the number of features is larger than the number of measurements, i.e. $m>T$, there are multiple possible solutions for~\eqref{eq:narx-estimation-matrix}. We discuss different choices next.
\subsubsection{\textbf{The minimum-norm solution}} One natural option in the overparametrized case is to use the minimum $\ell_2$-norm solution to the problem. That is, the solution
\begin{equation}
\label{eq:min-norm-solution}
\hat{\theta} = \text{arg}\min_\theta \|\theta\|_2 \quad \text{subject to}\quad\Phi\theta = y.
\end{equation}
In the motivation example, we used $\hat{\theta} = (\Phi^\top \Phi)^{+}\Phi^\top y$, which is equivalent to~\eqref{eq:min-norm-solution} in the overparametrized case (i.e. $m>T$) due to the use of the Moore-Penrose pseudo-inverse, which yields the minimum norm solution by definition.
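This equivalence is easy to check numerically. In the sketch below (our own toy data), \texttt{numpy}'s pseudo-inverse returns the interpolating solution of minimum norm; perturbing it along a null-space direction of $\Phi$ yields another interpolating solution with strictly larger norm:

```python
import numpy as np

rng = np.random.default_rng(0)
T, m = 50, 200                       # overparametrized: more features than samples
Phi = rng.standard_normal((T, m))
y = rng.standard_normal(T)

theta = np.linalg.pinv(Phi) @ y      # equals (Phi^T Phi)^+ Phi^T y

# theta interpolates the training data ...
assert np.allclose(Phi @ theta, y)
# ... and any other interpolating solution has a larger norm:
null_dir = np.linalg.svd(Phi)[2][-1]   # a direction in the null space of Phi
other = theta + 0.1 * null_dir         # still satisfies Phi @ other ~ y
assert np.linalg.norm(other) > np.linalg.norm(theta)
```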
Fig.~\ref{fig:random_features_dd}(c) displays the parameter norm $\|\theta\|_2$ as a function of the proportion $m/T$, showing that it peaks at the interpolation threshold and monotonically decreases after it. The intuition behind this behavior is that at the interpolation threshold there is a unique solution and usually this solution has a large norm, but as we increase the problem dimension ($m$), the space of possible solutions increases and it becomes possible to find solutions with smaller norm. Thus, an interpretation of the second descent in the performance curve is that increasing the number of parameters yields solutions with smaller parameter norm (cf. Fig.~\ref{fig:random_features_dd}), which is a type of inductive bias resulting in a decreasing variance error after the interpolation threshold. This argument was presented, for instance, by~\citet{belkin_reconciling_2019}.
\vspace{-5pt}
\subsubsection{\textbf{The effect of regularization}}
\begin{figure}
\vspace{-10pt}
\centering
\includegraphics[width=0.78\linewidth]{figures/chen_rff_vanishing_ridge.png}
\caption{\textbf{Ridge regression with vanishing values of $\lambda$.} The figure displays the one-step-ahead MSE in the test set for RFF models in the nonlinear system~\eqref{eq:chen_model}. The performance curve labeled ``min-norm'' is obtained by solving problem~\eqref{eq:min-norm-solution} and all the other curves correspond to the solution given in~\eqref{eq:ridge_regression} with progressively smaller values of $\lambda$. The experimental setting is the same as that used for Fig.~\ref{fig:random_features_dd}. }
\label{fig:ridge}
\end{figure}
It is common to add a regularization term to the least-squares problem~\eqref{eq:narx-estimation-matrix}, penalizing the $\ell_2$ parameter norm. This results in the so-called ridge regression problem, which has a unique solution (even in the interpolation regime) given by
\begin{equation}
\label{eq:ridge_regression}
\hat{\theta}_\lambda = \text{arg}\min_\theta\left(\frac{1}{T}\|y - \Phi \theta\|^2 + \lambda \|\theta\|^2\right).
\end{equation}
Fig.~\ref{fig:ridge} displays the performance curve as a function of $m/T$ for different values of $\lambda$. It illustrates that the double-descent becomes more pronounced for smaller values of $\lambda$. Indeed, with $\hat{\theta}_\text{min-norm}$ defined as in~\eqref{eq:min-norm-solution}, it is possible to prove that $\lim_{\lambda\rightarrow 0^+} \hat{\theta}_\lambda = \hat{\theta}_\text{min-norm}$, see~\citet{hastie_surprises_2019}.
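The limit can also be illustrated numerically (a sketch on toy data; the ridge solution below minimizes $\frac{1}{T}\|y-\Phi\theta\|^2 + \lambda\|\theta\|^2$ in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
T, m = 40, 160                       # overparametrized regime
Phi = rng.standard_normal((T, m))
y = rng.standard_normal(T)

def ridge(lam):
    # Closed-form minimizer of (1/T)||y - Phi theta||^2 + lam ||theta||^2
    return np.linalg.solve(Phi.T @ Phi + T * lam * np.eye(m), Phi.T @ y)

theta_min_norm = np.linalg.pinv(Phi) @ y
for lam in [1e-2, 1e-4, 1e-8]:
    gap = np.linalg.norm(ridge(lam) - theta_min_norm)
    print(f"lambda = {lam:g}: distance to min-norm solution = {gap:.2e}")
```

The printed distances shrink as $\lambda$ decreases, in line with the limit above.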
\subsubsection{\textbf{Ensembles}}
\begin{figure}
\vspace{-10pt}
\centering
\subfigure[one-step-ahead MSE]{\includegraphics[width=0.49\linewidth]{figures/chen_rff_ensembles_onestepahead.png}}
\subfigure[free-run simulation MSE]{\includegraphics[width=0.49\linewidth]{figures/chen_rff_ensembles_freerun.png}}
\caption{\textbf{Ensembles after the interpolation threshold}. The figure displays the train and test MSE. After the interpolation threshold we use Eq.~\eqref{eq:ensemble-linear-in-param} for an ensemble with $B=1000$ different solutions. The experimental settings are the same as those for Fig.~\ref{fig:random_features_dd}. }
\label{fig:random_features_dd_ensembles}
\end{figure}
Another mechanism for choosing solutions beyond the interpolation threshold that also yields increased performance as the model class is enlarged is the use of ensembles. Here we give one example of an ensemble that is linear-in-the-parameters and in Section~\ref{sec:ensembles-and-rf} we give an example that is not.
For $m>T$, assume that we select a subset of indices $\mathcal{S}_{b}\subset \{1, \cdots, m\}$, where the cardinality of this set is $|\mathcal{S}_{b}| = T$. Let ${\rm S}_b\in \mathbb{R}^{m \times T}$ be the selection matrix obtained by selecting the columns of the identity matrix $I_m$ corresponding to the indices in $\mathcal{S}_{b}$. The matrix $\Phi{\rm S}_b$ is a square matrix in $\mathbb{R}^{T \times T}$ and we can (uniquely) find the parameters $\hat{\theta}_b \in \mathbb{R}^T$ by solving the linear system $(\Phi{\rm S}_b) \hat{\theta}_b = y$. In this case, ${\rm S}_b\hat{\theta}_b$ is one solution of the overparametrized least-squares problem defined in~\eqref{eq:narx-estimation-matrix}. Assume that we repeat the same procedure $B$ times for different selection matrices and get the average solution
\begin{equation}
\label{eq:ensemble-linear-in-param}
\hat{\theta}^{\rm ens} = \frac{1}{B}\sum_{b=1}^B {\rm S}_b\hat{\theta}_b.
\end{equation}
It is easy to verify that this is still a solution to~\eqref{eq:narx-estimation-matrix}. In Fig.~\ref{fig:random_features_dd_ensembles} we show that this procedure is an alternative mechanism that also yields a second descent in performance after the interpolation point. For numerical stability, rather than solving the linear system $(\Phi{\rm S}_b) \hat{\theta}_b = y$ we solve a ridge regression problem with a very small value of $\lambda$. For instance, in Fig.~\ref{fig:random_features_dd_ensembles}, we use $\lambda = 10^{-7}$.
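The construction can be sketched as follows (our own implementation sketch; as in the text, a tiny ridge term stabilizes each square subsystem, and each subsystem solution is scattered back into the full coordinate vector before averaging):

```python
import numpy as np

def ensemble_solution(Phi, y, B=100, lam=1e-7, seed=0):
    """Average of B interpolating solutions built from random square subsystems.

    For each b, T of the m > T columns are selected at random, the resulting
    T x T system is solved (with a tiny ridge term for numerical stability),
    and the solution is scattered back into R^m before averaging.
    """
    rng = np.random.default_rng(seed)
    T, m = Phi.shape
    theta = np.zeros(m)
    for _ in range(B):
        idx = rng.choice(m, size=T, replace=False)     # the index set S_b
        Phi_b = Phi[:, idx]                            # a T x T submatrix
        theta_b = np.linalg.solve(Phi_b.T @ Phi_b + lam * np.eye(T),
                                  Phi_b.T @ y)
        theta[idx] += theta_b / B                      # scatter and average
    return theta
```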
Our focus here is just to present ensembles as an alternative mechanism for observing the double-descent performance curve after the interpolation threshold. Hence, in Fig.~\ref{fig:random_features_dd_ensembles} ensembles are used only after the interpolation point. Nonetheless, we would like to highlight that ensemble models can boost the performance even before the interpolation threshold. We refer the reader to \citet{lejeune_implicit_2020} for an in-depth analysis of ensembles of ordinary least squares in the underparametrized regime.
\subsection{Additional examples}
\label{sec:additional-examples}
\begin{figure}
\centering
\vspace{-10pt}
\includegraphics[width=0.78\linewidth]{figures/chen_rbfnet_onestepahead.png}
\caption{\textbf{RBF networks.} The one-step-ahead MSE in the training and test set for RBF network models ($\gamma = 0.25$ and $\eta=5$) in the nonlinear system from \citet{chen_non-linear_1990}. The data was generated using Eq.~\eqref{eq:chen_model}, with $\sigma_v=0.1$, $T =400$ and $T' = 100$. After the interpolation threshold, we use the ensemble strategy for choosing the solution (as in Fig.~\ref{fig:random_features_dd_ensembles}), with $B=2000$ and $\lambda = 10^{-14}$. }
\label{fig:random_features_dd_rbfnet}
\end{figure}
Here we give alternative linear-in-the-parameter scenarios where we have experimentally observed a double-descent performance curve. We have considered RFF in all the examples so far. Next, we present an alternative definition of nonlinear feature maps which result in similar behavior.
\subsubsection{\textbf{Radial basis function {\bf (RBF)} network}} For RBF networks, given the centers $c_i\in\mathbb{R}^n$, the features are generated as
\begin{equation}
\phi_i(x) = \exp(-\gamma \|x - c_i\|).
\end{equation}
This class of functions is a universal approximator on compact subsets~\citep{park_universal_1991}; when the centers $c_i$ are treated as free optimization parameters, training becomes a non-convex optimization problem. Here, however, we choose the centers at random, sampling them from $\mathcal{N}(0, \eta I_n)$. This yields a hypothesis class that fits in Eq.~\eqref{eq:nonlin-map} and can be solved using ordinary least squares. In Fig.~\ref{fig:random_features_dd_rbfnet}, we provide the performance curve of RBF networks when modeling the nonlinear system described in~\eqref{eq:chen_model}.
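A sketch of the corresponding feature map (names are ours; centers are drawn as described above):

```python
import numpy as np

def rbf_features(X, m, gamma=0.25, eta=5.0, seed=0):
    """RBF-network features with random centers c_i ~ N(0, eta * I_n)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    C = rng.normal(0.0, np.sqrt(eta), size=(m, n))       # random centers
    # phi_i(x) = exp(-gamma * ||x - c_i||); note the (unsquared) norm
    dists = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return np.exp(-gamma * dists)
```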
We also tested the phenomenon using a real-world dataset, described next.
\subsubsection{\textbf{Coupled electrical (CE8) drives benchmark}}
We also observe the phenomenon in the dataset collected from the operation of coupled electric drives~\citep{wigren2017coupled}. The system consists of two electric motors that drive a pulley using a flexible belt. The pulley is held by a spring and the angular speed is measured by a pulse counter. The system to be identified takes as input the control signal sent to both motors (which are the same) and should predict as output the angular speed. The pulse counter is insensitive to the sign of the angular velocity, which creates an ambiguity in the measurements and makes the problem harder.
We use two sequences of 10 seconds to develop the model. The sequences were collected from the above system operating with inputs uniformly distributed in amplitude. The first 60\% of the measurements are used for training and the remaining 40\%, for testing. In Fig.~\ref{fig:double_descent_illustration}, we display the training and test MSE for RFF models ($\gamma = 0.2$). After the interpolation threshold, we use the ensemble strategy, with $B=2000$ and $\lambda = 10^{-14}$, for choosing the solution.
\section{General nonlinear models}
Let us now consider nonlinear ARX models that cannot be formulated as linear-in-the-parameters problems. As in Section~\ref{sec:motivation-examples}, let $x_t$ be defined as a concatenation of past input and outputs. Given a training set, the problem can be formulated as choosing the function $f$ in the hypothesis class $\mathcal{F}$ that (exactly or approximately) minimizes
\begin{equation}
\label{eq:narx-estimation-nonlinear-in-param}
V = \frac{1}{T}\sum_{t = 1}^T\|f(x_t) - y_t\|^2.
\end{equation}
Assume that $\omega \in \mathbb{R}_+$ is a parameter that controls
the size of the hypothesis class---i.e. $\mathcal{F}_{\omega_1} \subset \mathcal{F}_{\omega_2}$ if $\omega_1 < \omega_2$. Let $\omega_t$ denote the threshold after which it is possible to select $f$ that yields zero training error (i.e., the model perfectly fits the training data). We refer to the double-descent phenomenon as the situation for which the test error decreases with $\omega$ in the interpolation regime $\omega\in (\omega_t, \infty)$.
\subsection{Random forests}
\label{sec:ensembles-and-rf}
\begin{figure}
\centering
\vspace{-10pt}
\subfigure[one-step-ahead MSE]{\includegraphics[width=0.49\linewidth]{figures/chen_randomforest_onestepahead.png}}
\subfigure[free-run simulation MSE]{\includegraphics[width=0.49\linewidth]{figures/chen_randomforest_freerun.png}}
\caption{\textbf{Double-descent for Random Forest models.} The data was generated as in Eq.~\eqref{eq:chen_model}, with $\sigma_v=0.1$. In (a) and (b) we show the train and test MSE. We keep the number of samples of the training and test datasets constant ($T =3000$ and $T' = 100$) and vary the number of leaves $m$ in a log-uniform grid in the interval $[10^{-1}T, 10^{2}T]$.}
\label{fig:random_forest_dd}
\end{figure}
We can generalize the idea of an ensemble (from Section~\ref{sec:selecting-the-solution}) to this more general scenario. Before the interpolation threshold, the function $f$ which minimizes \eqref{eq:narx-estimation-nonlinear-in-param} is selected from the hypothesis class $\mathcal{F}$. When $\omega > \omega_t$ there might be multiple possible solutions, so we pick $B$ different solutions $f_b$ from the hypothesis class, and use the average of their predictions,
\begin{equation}
\label{eq:bagging}
f(x) = \frac{1}{B} \sum_{b = 1}^B f_b(x).
\end{equation}
The random forest~\citep{breiman_random_2001} is a popular ensemble method which we will study here. For this model class, $f_b$ is a decision tree. That is, a rooted tree structure is associated with $f_b$ and the output is computed by traveling from the tree root node to one of the leaves (which is associated with a given output). At each node of the tree, a given $x_i$ is used as a decision variable to decide which child node to navigate to. The tree structure, the decision variables and the decision stumps are obtained (i.e., the model is trained) by approximately minimizing \eqref{eq:narx-estimation-nonlinear-in-param} using a greedy algorithm.
The number of leaves of a decision tree provides a natural way to parameterize the capacity of the model. A tree with $m$ leaves corresponds to a piece-wise function consisting of $m$ constant functions and, as such, it can interpolate $m$ data points. To increase the capacity of the model beyond the interpolation threshold, ensembles (averages) of multiple decision trees are used, i.e. Eq.~\eqref{eq:bagging}. Here, the different models $f_b$ are obtained by presenting the data to the (suboptimal) greedy optimization algorithm in a different order. We do not use bootstrap resampling (as is traditionally done). This way, each $f_b$ perfectly fits the training dataset after the interpolation threshold, in agreement with the setup we are interested in.
\subsubsection{\textbf{Results}} In Fig.~\ref{fig:random_forest_dd}, we show the performance as a function of the proportion between the total number of leaves in the random forest and the number of training samples. The figure shows that increasing the model capacity beyond the interpolation threshold yields continuous improvements in performance.
\subsection{A note on neural networks}
\label{sec:nn}
In the introduction, we described how the success of deep neural networks was an important reason for digging deeper into the properties of overparametrized models. In this section, we briefly describe some connections between the examples and ideas we presented in this paper and the study of neural networks. We appeal to a recent line of work by~\citet{jacot_neural_2018, chizat_lazy_2019}, which derives approximate models for neural networks whose analytical solutions are easier to study.
Deep neural networks can be understood as black-box parametrized functions $f_\theta(\cdot)$ that are nonlinear in the parameters and for which~\eqref{eq:narx-estimation-nonlinear-in-param} yields a non-convex problem. Let $\theta \in \mathbb{R}^m$ be the neural network parameters. Following \citet{chizat_lazy_2019}, we assume that the number of parameters is very large and that training the neural network moves each of them just by a small amount w.r.t. its initialization $\theta_0$. It thus makes sense to linearize the model around $\theta_0$, which yields
\begin{equation}
f(x; \theta) \approx f(x; \theta_0) + \nabla f(x; \theta_0)^\top\tilde{\theta},
\end{equation}
where $\tilde\theta = \theta - \theta_0$. Hence, denoting $\phi_i(x) = [\nabla_\theta f(x; \theta_0)]_i$ and assuming $f(x; \theta_0) \approx 0$, we return to the setup of Section~\ref{sec:linear-in-the-parameters}.
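A toy numerical illustration of this linearization (our own sketch, with a hypothetical one-hidden-layer network $f(x;\theta) = a^\top \tanh(Wx)/\sqrt{h}$ and $\theta = (W, a)$): the gradient with respect to $\theta$ at the initialization defines a feature map, and fitting a model that is linear in these features brings us back to the linear-in-the-parameters setting.

```python
import numpy as np

rng = np.random.default_rng(0)
h, n = 512, 4                          # wide hidden layer, regressor dimension

W0 = rng.standard_normal((h, n))       # random initialization theta_0 = (W0, a0)
a0 = rng.standard_normal(h)

def grad_features(x):
    """phi(x) = nabla_theta f(x; theta_0) for f(x; theta) = a^T tanh(W x)/sqrt(h)."""
    z = np.tanh(W0 @ x)
    d_a = z / np.sqrt(h)                                              # df/da
    d_W = ((a0 / np.sqrt(h)) * (1.0 - z ** 2))[:, None] * x[None, :]  # df/dW
    return np.concatenate([d_a, d_W.ravel()])

# A model linear in these features can interpolate T < dim(theta) points:
X = rng.standard_normal((50, n))
y = rng.standard_normal(50)
Phi = np.stack([grad_features(x) for x in X])   # 50 x (h + h*n) feature matrix
theta = np.linalg.pinv(Phi) @ y                 # minimum-norm interpolating fit
```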
Another interesting connection, which will be explored next, is the relation between the solution found by the gradient descent algorithm and the minimum-norm solution of least-squares problems. The gradient descent algorithm and its variations are popular choices for estimating deep neural network parameters. In the vanilla version, this algorithm iteratively refines the parameter $\theta$ by moving in the opposite direction of the gradient of the cost function $V$,
\begin{equation}
\label{eq:gradient-descent}
\theta^{i+1} = \theta^i - \gamma \nabla_{\theta} V (\theta^i),
\end{equation}
where $\gamma$ is the \textit{learning rate} that controls the step size, $\theta^i$ denotes the parameter estimate at the $i^{\text{th}}$ iteration, and $\nabla_{\theta} V (\theta^i)$ denotes the gradient of $V$ evaluated at $\theta^i$.
The use of the minimum-norm solution yields, in the setting from Section~\ref{sec:linear-in-the-parameters}, a second descent in the performance curve in the overparametrized region. The next theorem establishes that if we initialize $\theta^0$ in the row space of $\Phi$, then gradient descent finds the minimum-norm solution of the least-squares problem. This is another interesting connection between the setup studied here and the standard deep neural network setup.
\begin{thm}
Let $V(\theta) = \frac{1}{2}\|\Phi \theta - y\|^2$, for $\Phi \in \mathbb{R}^{T \times m}$ a matrix with full row rank. Let $\theta^i$ be the $i^{\text{th}}$ step of the gradient-descent algorithm (defined in Eq.~\eqref{eq:gradient-descent}), initialized with $\theta^0$ in the row space of $\Phi$. Then, if $\hat{\theta}$ denotes the minimum-norm minimizer of $V(\theta)$, there exists a $\tilde{\gamma}>0$ such that for all $\gamma\in(0, \tilde{\gamma})$, $\theta^i \rightarrow \hat{\theta}$ as $i \rightarrow \infty$.
\end{thm}
\begin{pf}
The above result follows from analytically computing the gradient, plugging it into Eq.~\eqref{eq:gradient-descent}, using the singular value decomposition of $\Phi$, and then taking the limit. \citet[Proposition 1]{hastie_surprises_2019} and
\citet{de_azevedo_does_2020} provide complete proofs of the statement.
\end{pf}
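The theorem can be checked numerically in a few lines (an illustrative sketch on toy data; $\theta^0 = 0$ trivially lies in the row space of $\Phi$, and the step size is chosen below $2/\sigma_{\max}^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
T, m = 20, 80                               # full row rank (almost surely)
Phi = rng.standard_normal((T, m))
y = rng.standard_normal(T)

theta = np.zeros(m)                         # theta^0 = 0 is in the row space of Phi
gamma = 1.0 / np.linalg.norm(Phi, 2) ** 2   # step size below 2 / sigma_max^2
for _ in range(20000):
    theta -= gamma * Phi.T @ (Phi @ theta - y)  # gradient of (1/2)||Phi theta - y||^2

theta_min_norm = np.linalg.pinv(Phi) @ y
assert np.linalg.norm(theta - theta_min_norm) < 1e-6
```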
While the connections presented here are not exact, they give some insight into the generalization properties of neural network models. Indeed, double-descent performance curves have been experimentally observed in~\citet{belkin_reconciling_2019} for shallow neural networks and in~\citet{nakkiran_deep_2020} for transformers and convolutional neural networks.
\section{Conclusion and Future Work}
In this paper, we have presented the double-descent phenomenon in a system identification framework, giving experimental evidence that it holds for nonlinear ARX models. We have also discussed the mechanisms that lead to it.
It is well-known within the system identification community that the assumptions needed to guarantee that nonlinear ARX estimates are consistent are rather strict: only white process noise can be present. Studying double-descent for ARMAX, output error, and other types of models that can handle more general noise types~\citep{ljung_system_1998} is a natural and interesting future direction. Furthermore, the assumption of independent regressors used in \citep{hastie_surprises_2019, mei_generalization_2019, bartlett_benign_2020} and other theoretical analyses of the double-descent curve does not hold in the setup we presented here. Hence, extending the available theoretical results to this setup is also an interesting direction.
\label{sec:introduction}
Operator algebras play a major role in modern functional analysis and
mathematical physics, particularly algebras with an ample supply
of projections. Such algebras
display a rich interplay between their algebraic structure, the order-theoretic
structure of their projections, the group-theoretic structure of their
unitaries, and their various topological structures. It is therefore
natural to wonder to what extent one of these aspects determines the
others. We will consider algebras for whom operator topologies play
a minor role, and focus on the
other facets; specifically, we work with AW*-algebras, which include all von Neumann algebras.
Such algebras are not completely determined by the
group-theoretic structure of their unitaries: for example, $U(A) \cong
U(A\op)$, but $A \not\cong A\op$ in general~\cite{connes:factor}.
Adding the order-theoretic structure of their projections does not
suffice to reconstruct the algebra either: again, $\Proj(A)
\cong \Proj(A\op)$.
Closely related to projections is the structure of the normal part
$N(A)$ of $A$ as a
\emph{piecewise\footnote{We prefer the terminology `piecewise algebra'
over the traditional `partial algebra', because of the unfortunate
conjunction `partial complete Boolean algebra'.} algebra} (see~\cite{vdbergheunen:colim}). Roughly, these are
algebras where one can only add or multiply commuting elements.
But adding this structure is still not enough to determine the
algebra, since $N(A)$ and $N(A\op)$ are isomorphic as piecewise algebras.
It follows from our main result that taking into account one final
ingredient does suffice to completely determine the algebra structure, namely the
action by conjugation of the unitaries on the projections. Thus we
answer the following preserver problem.
\begin{corollary*}
Let $A$ and $B$ be AW*-algebras.
If $f \colon N(A) \to N(B)$ is an isomorphism of piecewise algebras, that
restricts to isomorphisms $\Proj(A) \cong \Proj(B)$ and
$U(A) \cong U(B)$, and satisfies
$f(upu^*)=f(u)f(p)f(u)^*$, then $A \cong B$.
\end{corollary*}
There is considerable overkill in the previous corollary.
For one thing, we could have stated the assumption on piecewise
algebras in terms that a priori contain less
information, such as, for example, the partial orders of commutative
subalgebras of $A$ and $B$ (see~\cite{hamhalter:jordan}), or various
notions built on those.
We will prove that any isomorphism $\Proj(A) \to \Proj(B)$ extends
uniquely to an isomorphism $N(A) \to N(B)$ of piecewise algebras, which
could therefore have been omitted from the assumptions altogether.
This puts the following two driving questions
on an equal footing.
\begin{itemize}
\item What extra data make projections a complete
invariant of AW*-algebras?
\item What extra data on piecewise AW*-algebras enable
extension to total ones?
\end{itemize}
Moreover, it suffices to consider the subgroup of the
unitaries generated by so-called symmetries (see~\cite[Chapter~6]{alfsenshultz:statespaces}).
Finally, the projection lattice injects into the symmetry group by $p
\mapsto 1-2p$, and so the projection lattice acts on itself in a
certain sense. We can package up the remaining data in an \emph{active
lattice}, which therefore completely determines the AW*-algebra
structure. The precise definition can be found in
Section~\ref{sec:activelattices}, but let us emphasize here that it is
expressed exclusively in terms of the projection lattice and the
symmetry group that it generates. (For related ideas, see also~\cite{mayet:orthosymmetric}, which only came to our attention when the current work was already in press.)
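Note that $1-2p$ is indeed a symmetry, that is, a self-adjoint unitary, whenever $p$ is a projection ($p^* = p = p^2$):
\[
  (1-2p)^* = 1 - 2p^* = 1-2p,
  \qquad
  (1-2p)^2 = 1 - 4p + 4p^2 = 1,
\]
and the map $p \mapsto 1-2p$ is injective, since $p = \tfrac{1}{2}\bigl(1 - (1-2p)\bigr)$.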
In fact, we will be (quite) a bit more general, and work with
arbitrary morphisms instead of just isomorphisms: we define a functor
from the category of AW*-algebras to that of active lattices, and prove it
to be full and faithful.
This then implements our main result, which makes precise the titular claim that
an AW*-algebra is completely determined by its active lattice.
\begin{theorem*}
The category of AW*-algebras is equivalent to a full subcategory of the category of active lattices.
\end{theorem*}
This is summarized in the following commuting diagram of
functors. Solid arrows represent functors that are faithful but not
full, whereas the dashed functor we construct is both
full and faithful.
\[\xymatrix@R-2ex@C+2ex{
& \AWstar \ar_-{\Proj}[dl] \ar^-{\Sym}[dr] \ar@{..>}[d] \\
\cOML & \AL \ar[l] \ar[r] & \Group
}\]
In particular, this result incorporates all von Neumann algebras, as W*-algebras and normal $*$-homomorphisms form a full subcategory of AW*-algebras.
\subsubsection*{Motivation}
Our main motivation is to generalize the duality of commutative
C*-algebras and their Gelfand spectra to the noncommutative case.
Many proposals for noncommutative spectra have been studied. One of them concerns
\emph{quantales}~\cite{mulvey:andthen}, which are based on projection lattices
in the case of AW*-algebras. However, there are rigorous
obstructions to various categories being in duality with that of
C*-algebras, including that of quantales~\cite{reyes:obstructing,vdbergheunen:localicnogo}.
These obstructions suggest that
a good notion of spectrum can instead be based on piecewise structures~\cite{heunenlandsmanspitterswolters:gelfand,hamhalter:jordan}.
Our active lattices come very close to quantales, but circumvent the
obstruction afflicting them. Whereas a quantale is a monoid that
is also a lattice, an active lattice can be regarded as a
monoid that is generated by a lattice.
Stone duality between Stonean spaces and complete Boolean
algebras (see Section~\ref{sec:duality}) allows one to consider
$\Proj(A)$ as a substitute for the Gelfand spectrum in case $A$ is
a commutative AW*-algebra. So our results can also be regarded as a successful
extension of this ``substitute spectrum'' to noncommutative AW*-algebras.
This goal explains why we go to the nontrivial trouble of taking morphisms seriously, dealing with arbitrary morphisms with different domains and codomains rather than just focusing on isomorphisms.
The theorem above succeeds in extending a combination of Gelfand's and Stone's representation theorems noncommutatively for the case of AW*-algebras. Because of the relation to complete Boolean algebras sketched above, active lattices could be regarded as ``noncommutative Boolean algebras'', providing progress toward a category of ``noncommutative sets''. This is an important step closer to the ``noncommutative topological spaces'' that C*-algebras represent than the ``noncommutative measure spaces'' of von Neumann algebras. This explains why we take pains to avoid measure-theoretic arguments and work with AW*-algebras instead of von Neumann algebras.
Our results can also be regarded as a novel answer to the Mackey--Gleason problem,
which has been studied in great detail for von Neumann
algebras. This type of problem asks what properties of a function
between projection lattices ensure that it extends to a linear
function between operator algebras, or more generally, what properties
of a function between operator algebras that is only piecewise linear
make it linear~\cite{buncewright:mackeygleason}.
As mentioned, we generalize many constructions from von Neumann algebras to AW*-algebras, as the latter are the natural home for our arguments. In particular, we will not rely on Gleason's theorem to extend piecewise linearity to linearity~\cite{buncewright:jordan}, but directly generalize results due to Dye~\cite{dye:projections} instead.
In addition, our main results hold perfectly well for algebras with $\mathrm{I}_2$
summands, which are exceptions to many classic theorems, including the Mackey--Gleason problem. Thus our main results answer this problem by approaching it in a substantially different and worthwhile way.
\subsubsection*{Structure of the paper}
The article proceeds as follows. Section~\ref{sec:duality} recalls
AW*-algebras, complete Boolean algebras, and their piecewise
versions. It then proves that the two resulting categories of piecewise structures
are equivalent. Section~\ref{sec:activelattices} introduces
active lattices after discussing the ingredients of projection
lattices and symmetry groups. It also constructs the functor taking an
AW*-algebra to its active lattice. Section~\ref{sec:fullness} is
devoted to proving that this functor is full.
\section{(Piecewise) AW*-algebras and complete Boolean algebras}
\label{sec:duality}
After reviewing commutative AW*-algebras and their equivalence to
complete Boolean algebras, this section extends the equivalence to
piecewise AW*-algebras and piecewise complete Boolean algebras,
positively answering~\cite[Remark~3]{vdbergheunen:colim}.
\subsection*{AW*-algebras and complete Boolean algebras}
Kaplansky introduced AW*-algebras as an abstract generalization of von
Neumann algebras~\cite{kaplansky:awstar, berberian}. Their main characteristic is
that they are, to a great extent, algebraically determined by their projections,
\ie\ self-adjoint idempotents.
$*$-ring $A$ by $\Proj(A)$. This set is partially ordered by the relation
$p \leq q \iff p = pq\, (= qp)$.
\begin{definition}\label{def:awstar}
An \emph{AW*-algebra} is a C*-algebra $A$ that satisfies the
following left-right symmetric and equivalent conditions:
\begin{enumerate}[(a)]
\item the right annihilator of any subset is generated as
right ideal by a projection;
\item the right annihilator of any element $a \in A$ is
generated by a projection, and $\Proj(A)$ forms a
complete lattice;
\item the right annihilator of any element $a \in A$ is generated
by a projection, and every orthogonal family in
$\Proj(A)$ has a supremum;
\item any maximal commutative subalgebra $C$ is the closed linear
span of $\Proj(C)$, and every orthogonal family in $\Proj(A)$ has
a supremum.
\end{enumerate}
A \emph{morphism of AW*-algebras} is a $*$-homomorphism that preserves
suprema of projections. We write $\AWstar$ for the category of
AW*-algebras and their morphisms.
\end{definition}
For a subset $S$ of an AW*-algebra $A$, write $\Rann(S)$ for
the (unique) projection of Definition~\ref{def:awstar}(a): $\Rann(S)$ is
the largest projection $p$ satisfying $xp = 0$ for every $x \in S$, and is also the
(unique) projection such that $xy = 0$ for all $x \in S$ if and
only if $\Rann(S)y = y$. This is the \emph{right annihilating projection}
of $S$. With a slight abuse of notation, we write $\Rann(a)$ in place
of $\Rann(\{a\})$ for a single element $a \in A$. The projection
$\RP(a) = 1 - \Rann(a)$ is the \emph{right supporting projection}
of $a$. It is the least projection satisfying $a \RP(a) = a$.
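For a concrete illustration, take $A = M_2(\mathbb{C})$ and let $a = e_{11}$ be the matrix unit with a single nonzero entry in the upper left corner. Then $ay = 0$ precisely when the first row of $y$ vanishes, so the right annihilator of $a$ is the right ideal $e_{22}A$; hence
\[
\Rann(a) = e_{22} = 1 - e_{11}, \qquad \RP(a) = e_{11},
\]
and indeed $e_{11}$ is the least projection $p$ satisfying $ap = a$.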
Given the equivalent conditions defining AW*-algebras, there are
several possible choices for morphisms of the category \AWstar.
Fortunately, the most obvious conditions one might impose on
a $*$-homomorphism are also equivalent, as the following lemma shows.
Recall that a set of projections is called directed when every pair
of its elements has an upper bound within the set.
\begin{lemma}\label{lem:awstarmorphisms}
For a $*$-homomorphism $f \colon A \to B$ between AW*-algebras, the
following conditions are equivalent:
\begin{enumerate}[\quad (a)]
\item $f$ preserves right annihilating projections of arbitrary subsets;
\item $f$ preserves suprema of arbitrary families of projections;
\item $f$ preserves suprema of orthogonal families of projections;
\item $f$ preserves suprema of directed families of projections.
\end{enumerate}
If $f$ satisfies these equivalent conditions, then the kernel of $f$
is generated by a central projection and $f$ preserves $\RP$.
\end{lemma}
\begin{proof}
For a morphism $f$ satisfying (c), the last sentence of the lemma
follows from~\cite[Exercise~23.8]{berberian}. That (b)
implies (a) follows from the fact that such $f$ preserves $\RP$ as
well as the following equation for any $S \subseteq A$,
\[
\Rann(S) = \bigwedge_{x \in S} \Rann(x) = 1 - \bigvee_{x \in S} \RP(x)
\]
(see also~\cite[Proposition~4.2]{berberian}). Conversely,
assume (a), and let $\{p_i\} \subseteq \Proj(A)$. We will prove
that
\[
\bigvee \{p_i\} = 1 - \Rann(\{p_i\}),
\]
from which (a) $\Rightarrow$ (b) will follow. Writing $p =
1 - \Rann(\{p_i\})$, every $p_i \perp \Rann(\{p_i\})$, which gives $p_i \leq p$
for all $i$. And if all $p_i \leq q$ for some $q \in \Proj(A)$, then
$p_i(1-q) = 0$ for all $i$, which means that $1-q \leq \Rann(\{p_i\}) = 1-p$ and thus $p \leq q$.
Hence $p = \bigvee_i p_i$ as desired.
Clearly (b) $\Rightarrow$ (d). To see (d) $\Rightarrow$ (c), let
$\mathcal{P}$ be an orthogonal family of projections. Setting $q_S =
\bigvee S$ for every finite subset $S \subseteq \mathcal{P}$ gives a
directed family of projections with the same supremum as
$\mathcal{P}$. Because each $S$ is orthogonal and finite, we have
$f(q_S) = f(\sum S) = \sum f(S) = \bigvee f(S)$. And because $f$ is
assumed to preserve directed suprema, $f(\bigvee \mathcal{P}) = f(\bigvee_S q_S) =
\bigvee_S f(q_S) = \bigvee_S f(S) = \bigvee f(\mathcal{P})$.
Finally, because any
$*$-homomorphism $f \colon A \to B$ between AW*-algebras
restricts to a lattice homomorphism $\Proj(A) \to
\Proj(B)$ (see~\cite[Proposition~5.7]{berberian}),
Lemma~\ref{lem:orthocomplete} below provides a direct proof of (c)
$\Rightarrow$ (b).
\end{proof}
Observe that the proof of the previous lemma establishes more than was
promised: it shows that direct sums provide finite products in the category
\AWstar.
The initial object is the AW*-algebra $\C$, and the terminal object is the
zero algebra.
Observe also that the above lemma holds true if $f$ is only assumed
to be a $*$-ring homomorphism. This will be useful later in
Section~\ref{sec:fullness}.
Let $\Wstar$ denote the category of W*-algebras
(\ie\ abstract von Neumann algebras) and
normal $*$-homomorphisms. Then $\Wstar$ is a full subcategory of
$\AWstar$. (The objects of $\Wstar$ are objects of $\AWstar$
by~\cite[Proposition~4.9]{berberian}, and the subcategory can
be shown to be full, for instance, by composing a $*$-homomorphism $A \to B$
with all normal linear functionals on $B$ and
using~\cite[Corollary~III.3.11]{takesaki:operatoralgebras1}. See
also~\cite[Lecture~11]{lurie:neumann}.)
In particular, the lemma above provides equivalent conditions for
a $*$-homomorphism between von Neumann algebras to be
normal.
If an AW*-algebra is commutative, its projections form a
\emph{complete Boolean algebra}: a distributive lattice in which
every subset has a least upper bound, and in which every element has a
complement. In fact, we now detail an equivalence between the
categories of commutative AW*-algebras and complete Boolean algebras.
First recall Stone duality~\cite[Corollary~II.4.4]{johnstone:stonespaces},
which gives a dual equivalence between Boolean algebras and Stone
spaces, \ie\ totally disconnected compact Hausdorff spaces. If the Boolean
algebra is complete, the corresponding Stone space is in fact a Stonean
space, \ie\ extremally disconnected, meaning that the closure of every open
set is (cl)open. We write $\CBoolean$ for the category of complete
Boolean algebras and homomorphisms of Boolean algebras that preserve
arbitrary suprema.
On the topological side, we write \cat{Stonean} for
the category of Stonean spaces and open continuous functions. With
this choice of morphisms, Stone duality restricts to a dual
equivalence between $\CBoolean$ and
$\cat{Stonean}$. See~\cite[Section~6]{bezhanishvili:devries}.
Similarly, recall that Gelfand duality gives a dual equivalence
between commutative C*-algebras and compact Hausdorff spaces. If the
C*-algebra is an AW*-algebra, then the compact Hausdorff space is in
fact a Stonean space~\cite[Theorem~7.1]{berberian}.
If we write \cat{cAWstar} for the full subcategory of
\cat{AWstar} consisting of commutative AW*-algebras, then Gelfand
duality restricts to a dual equivalence between \cat{cAWstar} and
\cat{Stonean}.
Hence we have the following equivalences.
\begin{equation}\label{eq:commutativeduality}
\xymatrix@C+4ex{
\cat{cAWstar} \ar@{}|-{\simeq}[r] \ar@<1ex>^-{\Spec}[r]
& \cat{Stonean}\op \ar@{}|-{\simeq}[r] \ar@<1ex>^-{\Clopen}[r] \ar@<1ex>^-{\Cont}[l]
& \CBoolean \ar@<1ex>^-{\Stone}[l]
}
\end{equation}
Explicitly, $\Spec$ sends a commutative AW*-algebra to its set of characters
furnished with the Gelfand topology, and $\Clopen$ sends a Stonean space to its Boolean algebra of clopen subsets,
so the composite $\Clopen \circ \Spec$ is naturally isomorphic to the
functor $\Proj$. We write $\Func$ for the composite $\Cont \circ
\Stone$. Explicitly, $\Stone = \CBoolean(-,2)$ and $\Cont = C(-,\mathbb{C})$.
Thus $\Proj$ and $\Func$ form an equivalence between
commutative AW*-algebras and complete Boolean algebras.
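For example, take $A = \mathbb{C}^n$. Its Gelfand spectrum is the discrete $n$-point space, every subset of which is clopen, so $\Clopen(\Spec(A))$ is the power set of an $n$-element set; this matches $\Proj(\mathbb{C}^n) = \{0,1\}^n$. In the other direction, the complete Boolean homomorphisms $\{0,1\}^n \to 2$ are the evaluations at the $n$ atoms, so $\Stone$ returns the $n$-point space and $\Cont$ recovers $\mathbb{C}^n$.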
\subsection*{Piecewise structures}
Piecewise algebras are sets of which only certain pieces carry
algebraic structure, but in a coherent way. Before we can extend the
equivalence above to a piecewise setting, we spell out the
appropriate definitions. Definition~\ref{def:awstar}(c) leads to a
specialization of the definition of a piecewise C*-algebra, that we
recall first~\cite{vdbergheunen:colim}.
\begin{definition}
\label{def:pawstar}
A \emph{piecewise C*-algebra} consists of a set $A$ with:
\begin{itemize}
\item a reflexive and symmetric binary
(\emph{commeasurability}) relation $\commeas \subseteq A \times A$;
\item elements $0,1 \in A$;
\item a (total) involution $* \colon A \to A$;
\item a (total) function $\cdot \colon \mathbb{C} \times A \to A$;
\item a (total) function $\|\! - \!\| \colon A \to \mathbb{R}$;
\item (partial) binary operations $+, \cdot \colon \commeas \to A$;
\end{itemize}
such that every set $S \subseteq A$ of pairwise commeasurable
elements is contained in a set $T \subseteq A$ of pairwise
commeasurable elements that forms a commutative C*-algebra under the
above operations.
A \emph{piecewise AW*-algebra} is a piecewise C*-algebra $A$ with
\begin{itemize}
\item a (total) function $\RP \colon A \to \Proj(A)$;
\item a (partial) operation $\bigvee \colon \{ X \subseteq \Proj(A) \mid X
\times X \subseteq \commeas \} \to \Proj(A)$;
\end{itemize}
such that every set $S \subseteq A$ of pairwise commeasurable
elements is contained in a set $T \subseteq A$ of pairwise
commeasurable elements that forms a commutative AW*-algebra under
the above operations.
A \emph{morphism of piecewise AW*-algebras} is a (total) function $f
\colon A \to B$ such that:
\begin{itemize}
\item $f(a) \commeas f(b)$ for commeasurable $a,b \in A$;
\item $f(ab)=f(a)f(b)$ for commeasurable $a,b \in A$;
\item $f(a+b)=f(a)+f(b)$ for commeasurable $a,b \in A$;
\item $f(za) = zf(a)$ for $z \in \mathbb{C}$ and $a \in A$;
\item $f(a)^* = f(a^*)$ for $a \in A$;
\item $f(\bigvee_i p_i) = \bigvee_i f(p_i)$ for pairwise
commeasurable projections $\{p_i\}$.
\end{itemize}
By Lemma~\ref{lem:awstarmorphisms}, such a morphism automatically
satisfies $f(\RP(a)) = \RP(f(a))$. Also, it follows from the last
condition that $f(1)=1$.
Piecewise AW*-algebras and their morphisms organize themselves into
a category denoted by $\cat{PAWstar}$.
\end{definition}
The prime example of a piecewise AW*-algebra is the set $N(A)$ of
normal elements of an AW*-algebra $A$, where commeasurability is given
by commutativity. Hence one can regard piecewise AW*-algebras as
AW*-algebras of which the algebraic structure between noncommuting
elements is forgotten.
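For instance, commeasurability in $N(M_n(\mathbb{C}))$ has a familiar description: two normal matrices commute precisely when they are simultaneously unitarily diagonalizable, and the maximal commutative subalgebras of $M_n(\mathbb{C})$ are exactly the conjugates $uDu^*$ of the diagonal subalgebra $D$ by unitaries $u$. The piecewise structure on $N(M_n(\mathbb{C}))$ thus glues these diagonal pictures together while forgetting how noncommuting normal matrices add and multiply.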
\begin{lemma}\label{lem:normalfunctor}
The assignment sending an AW*-algebra $A$ to its set of normal elements
$N(A)$ defines a functor $N \colon \AWstar \to \PAWstar$.
\end{lemma}
\begin{proof}
Let $A$ be an AW*-algebra. The natural piecewise algebra structure on $N(A)$
is a piecewise $C^*$-algebra by~\cite[Proposition~3]{vdbergheunen:colim}.
It is a piecewise AW*-algebra under the inherited $\RP$
and supremum operations, because every pairwise commuting subset of
$N(A)$ is contained in a maximal commutative subalgebra of $A$, that
is an AW*-subalgebra by Definition~\ref{def:awstar}(d), and must
itself necessarily be contained in $N(A)$.
Functoriality of $N$ is easy to check.
\end{proof}
The next lemma observes that the structures
$\bigvee$ and $\RP$ in Definition~\ref{def:pawstar} are
in fact properties. (Nonetheless morphisms in $\PAWstar$ have
to preserve $\bigvee$.) Thus we may say that a certain piecewise C*-algebra
``is a piecewise AW*-algebra'' without ambiguity about the
AW*-operations.
We call a projection $p$ of a piecewise C*-algebra a \emph{least upper
commeasurable bound} of a commeasurable set $S$ of projections when $p$
majorizes $S$, $p \commeas a$ for any $a$ that makes $S \cup \{a\}$
commeasurable, and whenever a projection $q$ is
commeasurable with $S$ and majorizes it, then $q$ is commeasurable with $p$
as well and $p \leq q$.
\begin{lemma}
Let $A$ be a piecewise C*-algebra. There is at most one choice of
operations $\bigvee$ and $\RP$ as in Definition~\ref{def:pawstar}
making $A$ a piecewise AW*-algebra.
\end{lemma}
\begin{proof}
First, we claim that in any piecewise AW*-algebra, $\bigvee$ is
characterized as giving the least upper commeasurable bound.
Since these notions rely only on the underlying
piecewise C*-algebra structure, $\bigvee$ is then unique. The claim
derives from
Definition~\ref{def:pawstar} as follows. Since $\bigvee$ makes $A$
into a piecewise AW*-algebra, there exists a commutative AW*-algebra
$T \subseteq A$ whose suprema are given by $\bigvee$, containing
$S$. Hence $T$ contains $\bigvee S$, making $S \cup
\{ \bigvee S \}$ commeasurable, and $\bigvee S$ majorizes $S$. If $q$ is
commeasurable with $S$ and majorizes it, there exists an AW*-algebra
$T$ containing $S \cup \{q\}$. In particular, it is closed under suprema
of projections, which are given by $\bigvee$. Thus it contains
$\bigvee S$, which is therefore commeasurable with $q$ and
$\bigvee S \leq q$.
Finally, $\RP(a) = \bigwedge \{ p \in \Proj(A)
\mid ap=a \}$ equals $\bigvee \{ q \in \Proj(A) \mid \forall p \in
\Proj(A) \colon ap=a \Rightarrow q \leq p\}$.
\end{proof}
The next two results give convenient ways to recognize
piecewise AW*-algebras among piecewise C*-algebras.
The first shows that a piecewise AW*-algebra is a piecewise
C*-algebra that is ``covered'' by sufficiently many AW*-algebras;
recall that an AW*-algebra $A$ is an \emph{AW*-subalgebra} of an
AW*-algebra $B$ when the inclusion $A \hookrightarrow B$ is a morphism
in $\AWstar$.
The second is a characterization analogous to Kaplansky's
original definition of AW*-algebras as
C*-algebras with extra properties, Definition~\ref{def:awstar}(d).
\begin{lemma}\label{RPandsup}
A piecewise C*-algebra $A$ is a piecewise AW*-algebra when:
\begin{itemize}
\item any commeasurable subset $S$ is contained in a
commeasurable subset $T(S)$ that is an AW*-algebra, such that:
\item if $S \subseteq S'$ are commeasurable subsets,
$T(S)$ is an AW*-subalgebra of $T(S')$.
\end{itemize}
\end{lemma}
\begin{proof}
Define functions $\RP$ and $\bigvee$ by calculating $\RP(a)$ as in $T(\{a\})$,
and calculating $\bigvee X$ as in $T(X)$. By~\cite[Proposition~3.8]{berberian},
then $\RP(a)$ is the same when calculated in any $T(S)$ with
$a \in S$, because $T(\{a\})$ is an AW*-subalgebra of $T(S)$.
Similarly, $\bigvee{X}$ is the same in any $T(S)$ with $X \subseteq
S$~\cite[Proposition~4.8]{berberian}. Therefore $\RP$ and $\bigvee$
make $A$ into a piecewise AW*-algebra.
\end{proof}
\begin{proposition}
A piecewise C*-algebra $A$ is a piecewise AW*-algebra when both:
\begin{itemize}
\item commeasurable sets of projections have least upper
commeasurable bounds;
\item maximal commeasurable subalgebras are closed linear
spans of projections.
\end{itemize}
\end{proposition}
\begin{proof}
The first assumption defines a function $\bigvee$.
If $S$ is a commeasurable subset, Zorn's lemma provides
a maximal commeasurable set $M \supseteq S$. By definition of
piecewise C*-algebra, $M$ is contained in a commeasurable
C*-algebra. Hence maximality guarantees that $M$ is a commutative
C*-algebra under the operations of $A$. But now the second assumption
together with $\bigvee$ make $M$ into an
AW*-algebra~\cite[Exercise~7.1]{berberian}. Taking
$S=\{a\}$, we can define $\RP(a)$ as the unique right supporting
projection in $M$.
The functions $\bigvee$ and $\RP$ (uniquely) make $A$ into a piecewise
AW*-algebra.
\end{proof}
There is a similar definition of piecewise
complete Boolean algebras that specializes the definition of piecewise
Boolean algebras~\cite{vdbergheunen:colim}.
\begin{definition}
A \emph{piecewise complete Boolean algebra} consists of a set $B$ with
\begin{itemize}
\item a reflexive and symmetric binary
(\emph{commeasurability}) relation $\commeas \subseteq B \times B$;
\item a (total) unary operation $\lnot \colon B \to B$;
\item a (partial) operation $\bigvee \colon \{ X \subseteq B \mid
X \times X \subseteq \commeas \} \to B$;
\end{itemize}
such that every set $S \subseteq B$ of pairwise commeasurable
elements is contained in a pairwise commeasurable set $T \subseteq
B$ that forms a complete Boolean algebra under
the above operations.
(Notice that these data uniquely determine elements $0=\bigvee
\emptyset$ and $1=\neg 0$, and (partial) operations $x \vee y = \bigvee
\{x,y\}$ and $x \wedge y =
\neg(\neg x \vee \neg y)$.)
A \emph{morphism of piecewise complete Boolean algebras} is a
(total) function that preserves commeasurability and all the
algebraic structure, whenever defined. We write $\PCBoolean$ for the
resulting category.
\end{definition}
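The projection lattice of $M_2(\mathbb{C})$, with commeasurability given by commutativity, illustrates the definition: its elements are $0$, $1$, and the rank-one projections, and two distinct rank-one projections commute if and only if each is the orthocomplement of the other. Each maximal commeasurable subset $\{0, p, 1-p, 1\}$ is a four-element Boolean algebra, while $\Proj(M_2(\mathbb{C}))$ as a whole is not a Boolean algebra.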
\subsection*{A piecewise equivalence}
The functor $\Proj \colon \AWstar \to \CBoolean$ extends to
a functor $\PAWstar \to
\PCBoolean$~\cite[Lemma~3]{vdbergheunen:colim}. We aim to prove
that the latter functor is also (part of) an equivalence.
By~\cite[Theorem~3]{vdbergheunen:colim}, any piecewise complete Boolean
algebra $B$ can be seen as (a colimit of) a functor $\cC(B) \to
\CBoolean$, where $\cC(B)$ is the diagram of (commeasurable)
complete Boolean subalgebras of $B$ and inclusions. Similarly, by the
AW*-variation of~\cite[Theorem~7]{vdbergheunen:colim}, any
piecewise AW*-algebra $A$ can be seen as a functor $\cC(A) \to
\cAWstar$, where $\cC(A)$ is the diagram of (commeasurable)
commutative AW*-subalgebras of $A$ and inclusions. Hence
postcomposition with $\Func$ should turn a
piecewise complete Boolean algebra into a piecewise AW*-algebra. Below we
explicitly compute the ensuing colimit to get a functor $F \colon
\PCBoolean \to \PAWstar$. Even though it is unclear how
general coequalizers are computed in either category, the fact that
$\cC(B)$ is a diagram of monomorphisms makes the constructions manageable.
\begin{lemma}\label{monics}
The monomorphisms in $\AWstar$, $\cAWstar$, and $\CBoolean$
are precisely the injective morphisms.
\end{lemma}
\begin{proof}
Let $f \colon A \rightarrowtail B$ be a monomorphism in
$\AWstar$ or $\cAWstar$. We first show that $\Proj(f) \colon \Proj(A)
\rightarrowtail \Proj(B)$ is injective. Suppose that $f(p)=f(q)$ for
$p,q \in \Proj(A)$. Define $g,h \colon \mathbb{C}^2 \to A$ by
$g(1,0)=p$ and $h(1,0)=q$. Then $(f \circ g)(x,y) =
xf(p)+yf(p)^\perp = (f \circ h)(x,y)$, so $g=h$ and hence $p=q$. In
particular, $f$ cannot map a nonzero projection of $A$ to 0 in
$B$. Thus $\ker(f)=0$ by Lemma~\ref{lem:awstarmorphisms}, and $f$ is
injective. Conversely, injective morphisms are trivially monic.
Monomorphisms $f \colon P \rightarrowtail Q$ in $\CBoolean$
factor as
\[
P \cong \Proj(\Func(P)) \rightarrowtail \Proj(\Func(Q)) \cong Q.
\]
Now, isomorphisms in $\CBoolean$ are bijective, and
by the above, the middle arrow $\Proj(\Func(f))$ is injective,
making $f$ itself injective.
\end{proof}
We are ready to define the object part of a functor $F \colon
\PCBoolean \to \PAWstar$.
\begin{definition}\label{defFB}
Let $B$ be a piecewise complete Boolean algebra. Define $F(B)$ to be
the following collection of data.
\begin{itemize}
\item The carrier set $A$ is $(\coprod_{C \in \cC(B)}
\Func(C)) /\sim$, where $\sim$ is the smallest equivalence relation
satisfying $f \sim g$ for $f \in \Func(C)$ and $g \in \Func(D)$
when $C \subseteq D$ and $g=\Func(C \hookrightarrow D)(f)$.
\item Two equivalence classes $\rho$ and $\sigma$ in $A$ are
commeasurable if and only if there exist $C \in \cC(B)$ and $f,g \in
\Func(C)$ such that $f \in \rho$ and $g \in \sigma$.
\item Notice that $z \cdot 1_C \sim z \cdot 1_D$ for $C \subseteq D$
in $\cC(B)$, and any $z \in \mathbb{C}$. Also, $\{0,1\}$ is the
minimal element of $\cC(B)$. Hence $z \cdot 1_C \sim z \cdot 1_D$
for any $C,D \in \cC(B)$ by transitivity.
In particular, $[0_{\Func(\{0,1\})}]=[0_C]$ defines an element $0
\in A$ independently of $C$, and $1 \in A$ is defined by
$[1_{\Func(\{0,1\})}]=[1_C]$ for any $C \in \cC(B)$. Likewise,
$z \cdot [f] = [z \cdot f]$ is well-defined for $z \in
\mathbb{C}$.
\item Similarly, $[f]^* = [f^*]$ gives a well-defined operation $* \colon A \to A$.
\item If $\rho$ and $\sigma$ are two commeasurable elements of $A$,
then by definition there are $C \in \cC(B)$ and $f,g \in \Func(C)$
with $f \in \rho$ and $g \in \sigma$. Setting $\rho + \sigma =
[f+g]$ and $\rho \cdot \sigma = [f \cdot g]$ gives well-defined
operations $+,\cdot \colon \commeas \to A$.
\item If $C \subseteq D$, then $\Func(C \hookrightarrow D) \colon \Func(C) \to
\Func(D)$ is an injective $*$-homomorphism by Lemma~\ref{monics},
and hence preserves norm~\cite[Theorems~4.1.8,~4.1.9]{kadisonringrose}.
So $\|[f]\| = \|f\|$ gives a well-defined operation $A \to
\mathbb{R}$.
\end{itemize}
\end{definition}
\begin{proposition}
The data $F(B)$ defined above form a piecewise AW*-algebra.
\end{proposition}
\begin{proof}
For $\rho$ in $A$, define
$
C_\rho = \bigcap \{ C \in \cC(B) \mid \rho \cap \Func(C) \neq
\emptyset \}.
$
Because $\cC(B)$ is closed under arbitrary intersections, $C_\rho \in
\cC(B)$. If $\rho$ and $\sigma$ in $A$ are commeasurable, then by
definition there are $C \in \cC(B)$ and $f,g \in \Func(C)$ with
$f \in \rho$ and $g \in \sigma$, so
$C_\rho \subseteq C \supseteq C_\sigma$. But that implies that any
element of $C_\rho$ is commeasurable in $B$ with any element of $C_\sigma$.
Let $S \subseteq A$ be pairwise commeasurable. Then $\hat{S} =
\bigcup_{\rho \in S} C_\rho \subseteq B$ is pairwise
commeasurable by the last paragraph. Hence there exists a set $\hat{T} \subseteq B$ that
contains $\hat{S}$, is pairwise commeasurable, and forms a complete
Boolean algebra under the operations from $B$. Therefore
$T=\{ [f] \mid f \in \Func(\hat{T}) \} \subseteq A$ contains $S$, is
commeasurable, and forms a commutative AW*-algebra under the
operations from $A$.
Hence $A$ is a piecewise C*-algebra. Moreover, if $S \subseteq S'$,
then $\hat{S} \subseteq \hat{S'}$, and $\hat{T} \subseteq
\hat{T'}$ are both complete Boolean subalgebras of $B$ under the
same operation $\bigvee$, namely that of $B$. Hence $T$ is an
AW*-subalgebra of $T'$, so that $A$ is in fact a piecewise
AW*-algebra by Lemma~\ref{RPandsup}.
\end{proof}
\begin{lemma}\label{Fcolim}
If $B \in \PCBoolean$, then $F(B)$ is a colimit of the diagram
$\Func(C)$ with $C$ ranging over $\cC(B)$. Therefore $F$ is
functorial $\PCBoolean \to \PAWstar$.
\end{lemma}
\begin{proof}
Clearly there exists a cocone of morphisms $\Func(C) \to A$ for
each $C \in \cC(B)$, given by $f \mapsto [f]$.
If $k_C \colon \Func(C) \to A'$ is another cocone, the unique
mediating map $m \colon A \to A'$ is given by $m([f])=k_C(f)$ when
$f \in \Func(C)$.
Let $g \colon B_1 \to B_2$ be a morphism of $\PCBoolean$.
Because $F(B_1)$ is a colimit of $\{\Func(C) \mid C \in \cC(B_1)\}$,
to define a morphism $F(g) \colon F(B_1) \to F(B_2)$, it suffices to
specify morphisms $\Func(C) \to F(B_2)$ in $\PAWstar$ for each
$C \in \cC(B_1)$. But $g$ preserves commeasurability, so its restriction to
$C$ is a morphism in $\CBoolean$ and we can just take
$F(g)\big|_{\Func(C)} = \Func(g\big|_C)$. This assignment is automatically functorial.
Moreover, it is well-defined, even though colimits are only unique
up to isomorphism, because Definition~\ref{defFB} fixed one specific
colimit.
\end{proof}
\begin{theorem}\label{thm:piecewiseequivalence}
The functors $F$ and $\Proj$ form an equivalence between the categories
$\PAWstar$ and $\PCBoolean$.
\end{theorem}
\begin{proof}
For a piecewise AW*-algebra $A$ we have
\begin{align*}
F(\Proj(A))
& \cong \colim_{C \in \cC(\Proj(A))} \Func(C) \\
& \cong \colim_{C \in \cC(A)} \Func(\Proj(C)) \\
& \cong \colim_{C \in \cC(A)} C
\cong A.
\end{align*}
by Lemma~\ref{Fcolim}, \cite[Proposition~6]{vdbergheunen:colim}, and
\cite[Theorem~7]{vdbergheunen:colim}.
Each of the above isomorphisms is readily seen to be natural in $A$.
Next we establish an isomorphism $\Proj(F(B)) \cong B$.
Let $\rho \in \Proj(F(B)) \subseteq F(B)$. If $\rho = [f]$ for $f \in
\Func(C)$ and $C \in \cC(B)$, then $f \in \Proj(\Func(C))$. So $\eta_C(f) \in C
\subseteq B$, where $\eta$ is the unit of the equivalence formed by
$\Proj$ and $\Func$. In fact, by naturality of $\eta$, if $C \subseteq D$
for another $D \in \cC(B)$ with the inclusion denoted by $i \colon C
\hookrightarrow D$, the following diagram commutes.
\[\xymatrix@R-2ex{
& \Proj(\Func(C)) \ar^-{\eta_C}[r] \ar_-{\Proj(\Func(i))}[d]
& C \ar@{^{(}->}^-{i}[d] \\
& \Proj(\Func(D)) \ar_-{\eta_D}[r]
& D
}\]
So if $g \sim f$ because $g=\Func(C \hookrightarrow D)(f)$, then
$\eta_C(f)=\eta_D(g)$. That is, $\eta_C(f)$ is independent of the
chosen representative $f$ of $\rho$. Thus we have a map $\Proj(F(B))
\to B$ that is a morphism of piecewise complete Boolean algebras,
because $\eta$ is a morphism of complete Boolean algebras.
Conversely, for $b \in B$, consider the commeasurable subalgebra
$\generated{B}{b}$ of $B$ generated by $b$.
Then $\eta^{-1}_{\generated{B}{b}}(b)$ is an
element of $\Proj(\Func(\generated{B}{b}))$. Thus
$b \mapsto [\eta^{-1}_{\generated{B}{b}}(b)]$ is a function $B \to
\Proj(F(B))$, that is easily seen to be inverse to the function
$\Proj(F(B)) \to B$ above. Thus we have an isomorphism $B \cong
\Proj(F(B))$ of piecewise complete Boolean algebras. Unfolding
definitions shows that this isomorphism is natural in $B$.
\end{proof}
As a consequence, the functor $\Proj$ preserves general coequalizers.
\section{The category of active lattices}
\label{sec:activelattices}
This section equips the piecewise AW*-algebra structure of
AW*-algebras $A$ with enough extra data to recover their full algebra
structure, which will be done in the next section. The required
structure consists of three ingredients: a lattice structure on
$\Proj(A)$, a group structure on the so-called symmetry subgroup of
the unitaries $U(A)$, and an action of the latter on the former. We
will discuss each in turn.
\subsection*{The projection lattice}
We start with some axioms satisfied by lattices of projections of AW*-algebras.
\begin{definition}
An \emph{orthocomplementation} on a lattice $P$ is an
order-reversing involution $p \mapsto p^{\perp}$ satisfying
$p \vee p^{\perp} = 1$ and $p \wedge p^{\perp} = 0$ (\ie, $p^{\perp}$
is a \emph{complement} of $p$). We say $p$ and $q$
are \emph{orthogonal} when $p \leq q^\perp$.
An orthocomplemented lattice is said to be
\emph{orthomodular} when
$p \vee (p^{\perp} \wedge q) = q$ for all $p \leq q$.
Complete orthomodular lattices form a category $\cOML$
whose morphisms are functions that preserve the
orthocomplementation as well as arbitrary suprema.
\end{definition}
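The motivating example is the lattice of closed subspaces of a Hilbert space $H$, ordered by inclusion, with orthocomplementation $K \mapsto K^\perp$: it is complete, and for closed subspaces $K \subseteq L$ orthomodularity reads
\[
K \vee (K^{\perp} \wedge L) = K \oplus (L \cap K^{\perp}) = L.
\]
For $\dim H \geq 2$ this lattice is not distributive, so orthomodularity is a genuine weakening of the Boolean axioms.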
The condition of being an object of $\cOML$
can be tested on orthogonal subsets, and the same is
nearly true for morphisms.
\begin{lemma}\label{lem:orthocomplete}
An orthomodular lattice $P$ is complete if and only if every
orthogonal subset of $P$ has a least upper bound. If $P$ and $Q$
are complete orthomodular lattices, a function $f \colon P \to Q$ is
a morphism of $\cOML$ if and only if it preserves orthocomplements,
binary joins, and suprema of orthogonal sets.
\end{lemma}
\begin{proof}
The first statement is~\cite[Corollary~1]{holland:orthocomplete}.
Let $f \colon P \to Q$ be as in the second statement, and let
$\{p_i\}$ be any subset of $P$. Because $f$ preserves finite joins,
it preserves order, and so $\bigvee f(p_i) \leq f(\bigvee p_i)$; we
prove the reverse comparison.
Let $\{e_\alpha\}$ be a maximal orthogonal set of nonzero elements of $P$
with $f(e_\alpha) \leq \bigvee f(p_i)$, and set $e = \bigvee e_\alpha$.
By hypothesis, $f(e) = \bigvee f(e_\alpha) \leq \bigvee f(p_i)$.
Thus it suffices to show that each $p_i \leq e$, for then $\bigvee p_i \leq e$
and $f(\bigvee p_i) \leq f(e) \leq \bigvee f(p_i)$ as desired. Assume for
contradiction that some $p_j \nleq e$. Then $e' = (p_j \vee e) \wedge e^\perp$
is a nonzero element of $P$ orthogonal to $e$ and hence orthogonal
to each $e_\alpha$. Furthermore
\[
f(e') \leq f(p_j \vee e) = f(p_j) \vee f(e) \leq \bigvee f(p_i),
\]
since $e' \leq p_j \vee e$. But this contradicts the maximality of $\{e_\alpha\}$.
\end{proof}
The axioms defining AW*-algebras and their morphisms are such that the
operation of passing to projection lattices defines a functor $\Proj
\colon \AWstar \to \cOML$.
Complete orthomodular lattices are tightly linked to piecewise
complete Boolean algebras (rather than the more general
orthocomplemented lattices).
Indeed, any complete orthomodular lattice $P$ canonically is a
piecewise complete Boolean algebra, as follows. Define a
commeasurability relation
$\odot$ on $P$ by the following equivalent
conditions, for any $p, q \in P$:
\begin{enumerate}[\quad (i)]
\item there is a Boolean subalgebra of $P$ that contains both
$p$ and $q$;
\item there exist pairwise orthogonal $p', q', r \in P$ with
$p = p' \vee r$ and $q = q' \vee r$;
\item $p \wedge (p \wedge q)^\perp$ is orthogonal to $q$;
\item $q \wedge (p \wedge q)^\perp$ is orthogonal to $p$;
\item the \emph{commutator}
$(p \vee q) \wedge (p \vee q^\perp) \wedge (p^\perp \vee q)
\wedge (p^\perp \vee q^\perp)$ of $p$ and $q$ is zero.
\end{enumerate}
For the equivalence of (i)--(iv) we refer
to~\cite[Lemma~6.7]{varadarajan:geometry1}; for the equivalence of
(i) and (v) see~\cite{marsden:commutator}.
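For example, in a Boolean algebra any two elements $p$ and $q$ are
commeasurable: condition (ii) is witnessed by
\[
p' = p \wedge q^\perp, \qquad q' = q \wedge p^\perp, \qquad r = p \wedge q,
\]
which are pairwise orthogonal and satisfy $p = p' \vee r$ and
$q = q' \vee r$ by distributivity.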
\begin{lemma}\label{lem:piecewiseorthomodular}
The assignment $P \mapsto (P,\odot)$ is a functor
$\cOML \to \PCBoolean$.
\end{lemma}
\begin{proof}
Given a complete orthomodular lattice $P$ and the commeasurability
relation $\odot$ above, it follows from~\cite[Lemma~6.10]{varadarajan:geometry1}
that the supremum operation of $P$ restricts to a partial operation
$\bigvee \colon \{X \subseteq P \mid X \times X \subseteq \odot\} \to P$,
making $(P,\odot)$ a piecewise complete Boolean algebra.
As for morphisms: any morphism of $\cOML$ preserves order,
orthocomplements, and suprema, so by characterization (ii) it preserves
commeasurability, and hence restricts to a morphism of $\PCBoolean$.
\end{proof}
Composing this forgetful functor with the equivalence $\PCBoolean \to
\PAWstar$ of Theorem~\ref{thm:piecewiseequivalence} gives a
canonical functor $\cOML \to \PAWstar$.
Below, we will extend the structure of the piecewise complete
Boolean algebra $\Proj(A)$ to that of a complete orthomodular lattice,
where $A$ is a piecewise AW*-algebra. As a converse to the above
lemma, we now show that this is a property rather than structure.
For any piecewise Boolean algebra $B$, let $\leq$ be the union of the
partial orders on each commeasurable subalgebra $C$ of $B$.
When this relation is transitive, it is a partial order, which we call
the induced partial order. In that case we call $B$ \emph{transitive}.
If every pair of (not necessarily commeasurable) elements of $B$
has a least upper bound with respect to $\leq$, we say that $B$
is \emph{joined}.
Similarly, we call a piecewise AW*-algebra $A$ transitive or joined
when $\Proj(A)$ is respectively transitive or joined.
\begin{proposition}
The following categories are equivalent:
\begin{enumerate}[\quad (a)]
\item the category $\cOML$ of complete orthomodular lattices;
\item the subcategory of $\PCBoolean$ whose objects are transitive and joined
and whose morphisms preserve binary joins;
\item the subcategory of $\PAWstar$ whose objects are transitive and joined
and whose morphisms preserve binary joins of projections.
\end{enumerate}
\end{proposition}
\begin{proof}
The piecewise complete Boolean algebras that are in the image of the
functor $\cOML \to \PCBoolean$ from
Lemma~\ref{lem:piecewiseorthomodular} are by definition transitive and joined.
Next, we define a functor $G$ in the opposite direction. Let $B$ be a transitive,
joined piecewise complete Boolean algebra and $\leq$ its induced partial
order. By construction of $\leq$, it restricts to the given partial order on
each commeasurable subalgebra of
$B$. Furthermore, it is straightforward to verify that if $X
\subseteq B$ is commeasurable then $\bigvee X$ is the least upper
bound of $X$ with respect to $\leq$. Kalmbach's bundle
lemma~\cite[1.4.22]{kalmbach:orthomodularlattices} now applies to
show that $\leq$ and $\neg$ induce the structure of an orthomodular
lattice on $B$. Because orthogonal subsets are commeasurable, and
$B$ has suprema of such subsets, it in fact has suprema of
arbitrary subsets by Lemma~\ref{lem:orthocomplete}.
This makes $B$ into a complete orthomodular lattice, and we can define
$G(B)=(B,\leq)$. Setting $G(f)=f$ for $\PCBoolean$-morphisms
that preserve binary joins gives a well-defined
functor, thanks to Lemma~\ref{lem:orthocomplete}.
It is straightforward to see that these two functors form an isomorphism
of categories.
The equivalence of~(b) and~(c) follows from
Theorem~\ref{thm:piecewiseequivalence}.
\end{proof}
\begin{remark}
For an AW*-algebra $A$, recall that $\cC(A)$ is the set of commutative
AW*-subalgebras, ordered by inclusion. It carries the same
information as the projection lattice
$\Proj(A)$~\cite[Theorem~2.5]{heunen:cstarsubalgebras}. Therefore, everything
that follows can equivalently be expressed in terms of $\cC(A)$
instead of $\Proj(A)$.
\end{remark}
\subsection*{The symmetry group}
If $A$ is a piecewise AW*-algebra, we let $U(A)$ denote the set of
unitary elements of $A$, \ie\ the set of all elements $u \in A$ such
that $u u^* = 1$ (recall that $u \commeas u^*$ for all $u \in A$).
This set carries the structure of a \emph{piecewise group},
\ie\ one can multiply commeasurable elements, the multiplication has a
unit (that is commeasurable with any element), and there is a total
function giving inverses, such that every commeasurable subset
generates a commutative subgroup.
A \emph{piecewise subgroup} is a subset that is a piecewise group in
its own right under the inherited operations (and commeasurability relation).
Every group is a piecewise
group, and conversely, we will be extending the structure of the
piecewise group $U(A)$ to that of a group. Piecewise groups form a
category $\PGroup$ with the evident morphisms.
\begin{definition}
A \emph{symmetry} in an AW*-algebra $A$ is a self-adjoint unitary
element; these are precisely the elements of the form $p^\perp - p =
1-2p$ for some $p \in \Proj(A)$. Let $U(A)$ denote the group of unitary
elements of $A$, and define $\Sym(A)$ to be the subgroup of $U(A)$
generated by the symmetries of $A$. (Notice that if $A$ is not
commutative then $\Sym(A)$ contains elements that are not
symmetries.)
\end{definition}
Before moving on to actions of groups on lattices, we consider
how large the symmetry group $\Sym(A)$ can become. We will see that this
depends on the type: $\Sym(A)$ is (significantly) smaller than $U(A)$ for type
$\mathrm{I}_n$ algebras, and just as large as $U(A)$ for other AW*-algebras.
If $A$ is an AW*-algebra of type $\mathrm{I}_1$, i.e.\ if $A$ is
commutative, then $\Sym(A)$ is as small as possible, namely in bijection
with $\Proj(A)$, as the following example shows.
\begin{example}\label{ex:sym:comm}
If $A$ is a commutative AW*-algebra, then the product of symmetries
is again a symmetry, and so the sets $\Sym(A)$ and $\Proj(A)$ are
bijective. In fact, $(1-2p)(1-2q) = 1-2((p+q-pq)-pq) = 1-2((p \vee
q)-(p \wedge q)) = 1-2(p \Delta q)$, where $\Delta$ is the symmetric
difference operation. Thus $\Sym(A)$ is the additive group of the
Boolean ring structure associated to the Boolean algebra $\Proj(A)$.
\end{example}
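Concretely, for $C = \C^2$ we have $\Proj(C) = \{0,1\}^2$ and
$\Sym(C) = \{\pm 1\}^2$, and the computation above exhibits
$\Sym(C) \cong (\mathbb{Z}/2\mathbb{Z})^2$ as the additive group of the
four-element Boolean ring.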
For AW*-algebras of type $\mathrm{I}_n$ for $n \geq
2$, we will use the fact that traces and determinants are
well-defined for matrices over commutative rings.
Recall that any AW*-algebra of type $\mathrm{I}_n$ takes the form
$\M_n(C)$ for a commutative AW*-algebra
$C$~\cite[Proposition~18.2]{berberian}.
We will use roman letters $a,b,p,\ldots$ for elements of a matrix
algebra $\M_n(B)$ and greek letters $\alpha,\beta,\pi,\ldots$ for
elements of $B$ when both are needed.
\begin{lemma}\label{lem:sym:typeone}
Let $A=\M_n(C)$ for $n\geq 2$ and a commutative AW*-algebra $C$.
\begin{enumerate}[\quad (a)]
\item If $b,c \in C$ satisfy $0 \leq c =b^2 \leq 1$ and $b^*=b$,
then there exists $u \in \Sym(C)$ with $b=uc_0$, where $c_0$ is the unique
positive square root of $c$ in $C$.
\item If $u \in U(A)$ has $\det(u)=1$, then $u=(1-2p)(1-2q)$ for some $p,q \in \Proj(A)$.
\item If $u \in U(A)$ has $\det(u) =1-2\pi$ for $\pi \in \Proj(C)$,
then $u$ can be written as $u=(1-2p)(1-2q)(1-2r)$ for some $p,q,r \in \Proj(A)$.
\item $\Sym(A)$ is the normal subgroup $\{ u \in U(A) \mid \det(u)^2 =
1 \} = \det^{-1}(\Sym(C))$.
\end{enumerate}
\end{lemma}
\begin{proof}
For part (a), observe that the Gelfand spectrum $X$ of $C$ is
extremally disconnected. So $\inter(b^{-1}(-\infty,0])$ is a clopen
set, as is its complement $\cl(b^{-1}(0,\infty))$.
So the function $u \colon X \to \C$ defined by
\[
u(x) =
\begin{cases}
-1 & \mbox{if $x \in \inter(b^{-1}(-\infty,0])$,} \\
\phantom{-}1 & \mbox{if $x \in \cl(b^{-1}(0,\infty))$,}
\end{cases}
\]
is continuous. It is clearly a self-adjoint unitary.
If $x \in \inter(b^{-1}(-\infty,0])$, then $b(x) \leq 0$ and $u(x)=-1$, so
$b(x)=u(x)c_0(x)$.
If $x \in \cl(b^{-1}(0,\infty))$, then $b(x) \geq 0$ and $u(x)=1$, so
$b(x)=u(x)c_0(x)$.
In either case $b=uc_0$.
For part (b) we generalize the argument
of~\cite[page~87]{dye:projections} from matrices with entries in $\C$ to entries in $C$.
Let $u \in U(A)$ have determinant 1. Then $u$ is unitarily equivalent to a
diagonal matrix $\diag(\zeta_1,\ldots,\zeta_n)$ with diagonal
entries $\zeta_i \in
U(C)$ satisfying $\prod \zeta_i=1$ \cite{deckardpearcy:diagonal}. Such
a matrix can be written as $\prod_{i=1}^{n-1}
\diag(\zeta_{1,i},\ldots,\zeta_{n,i})$, where $\zeta_{i,i} =
\prod_{k=1}^i \zeta_k$, $\zeta_{i+1,i}=\zeta_{i,i}^*$, and
$\zeta_{k,i}=1$ otherwise. Therefore, we may assume that
$u=\diag(\zeta,\zeta^*,1,\ldots,1)$ for fixed $\zeta \in U(C)$.
Since each of these factors differs from the identity matrix only in a
single 2-by-2 block, we may in fact assume that $n=2$ and
$u=\diag(\zeta,\zeta^*)$ for fixed $\zeta \in U(C)$.
We may write $\zeta = \alpha + i\beta$ where
$\alpha,\beta \in C$ are self-adjoint and satisfy $\alpha^2 + \beta^2 = 1$.
For each positive $\varphi \in C$, the element $1+\varphi^2$ is invertible in
$C$, so we can define
\[
p_\varphi = \frac{1}{1+\varphi^2} \begin{pmatrix} 1 & \varphi \\ \varphi &
\varphi^2 \end{pmatrix}.
\]
Each $p_\varphi$ is easily seen to be a projection in $A$, so
$v_\varphi=(1-2p_\varphi)(1-2p_0)$ defines an element of
$\Sym(A)$. Computing
\[
v_\varphi = \frac{1}{1+\varphi^2} \begin{pmatrix} 1-\varphi^2 &
-2\varphi \\ 2\varphi & 1-\varphi^2 \end{pmatrix}
\]
shows that $\det(v_\varphi)=1$ and $\tr(v_\varphi) = 2 \cdot
\frac{1-\varphi^2}{1+\varphi^2}$. Now, the function $\varphi \mapsto
\frac{1-\varphi^2}{1+\varphi^2}$ is a composite of an
order-automorphism $\varphi \mapsto \varphi^2$ of the positive cone
of $C$ with the Cayley transform $\varphi \mapsto
\frac{1-\varphi}{1+\varphi}$, which maps
the positive cone of $C$ order-anti-isomorphically onto the interval
$\{ \gamma \in C \mid -1 < \gamma \leq 1 \}$.
Hence $\tr(v_\varphi)$ assumes all values in the interval
$\{ \gamma \in C \mid -2 < \gamma \leq 2 \}$ as $\varphi$ ranges
over the positive cone of $C$, and it also attains the value $-2$ if we
admit $\varphi = \infty$ by interpreting $p_\infty = \left(\begin{smallmatrix}
0 & 0 \\ 0 & 1 \end{smallmatrix}\right)$. Diagonalizing $v_\varphi$ to $\diag(\xi,\xi^*)$
with $\xi \in U(C)$, we can therefore make $\tr(v_\varphi) =
\xi + \xi^* = 2 \Re(\xi)$ assume every self-adjoint value in the interval
$\{ \gamma \in C \mid -2 \leq \gamma \leq 2 \}$ by varying $\varphi$.
In particular, for $\zeta = \alpha + i \beta$ as above, there exist positive $\varphi
\in C$ and $\beta_0 = \sqrt{1 - \alpha^2}$ such that $\zeta_0 = \alpha + i\beta_0 \in
U(C)$ and $\diag(\zeta_0, \zeta_0^*)$
is unitarily equivalent to $v_\varphi$. Part (a) gives
$\sigma \in \Sym(C)$ with $\beta = \sigma\beta_0$. The $\R$-linear map
$\theta$ fixing self-adjoint elements and sending $i$ to $i\sigma$
defines a $*$-ring automorphism of $C$.
Thus $\M_n(\theta)$ is a $*$-ring automorphism of $A$, and
$\M_n(\theta)(v_{\varphi})$ is unitarily equivalent to
$\M_n(\theta)(\diag(\zeta_0, \zeta_0^*)) = \diag(\theta(\zeta_0),
\theta(\zeta_0)^*) = \diag(\zeta, \zeta^*)$.
Because $v_{\varphi}$ is a product of two
symmetries, the same is true for $\diag(\zeta, \zeta^*)$.
For part (c), suppose $\det(u)=1-2\pi$. Set $r = \diag(\pi,0) \in
\Proj(A)$. Notice that $1-2r = \diag(1-2\pi, 1)$ has determinant
$1-2\pi$. Then $u \cdot (1-2r)$ has determinant~1, so by part~(b) there
exist $p, q \in \Proj(A)$ such that $u(1-2r) =
(1-2p)(1-2q)$. Multiplying on the right by $1-2r$, which is its own
inverse, gives the desired representation of $u$.
Finally, part (d) follows from the observation
$\Sym(C) = \{ 1-2\pi \mid \pi \in \Proj(C) \}$ and part (c), as follows. Because
its generators $1-2p$ square to the identity, and the determinant is
multiplicative, $\Sym(A) \subseteq \{ u \in U(A) \mid \det(u)^2=1
\}$. Next, if $\det(u)^2=1$, then $\det(u)$ is a symmetry
in $C$, and hence of the form $1-2\pi$ for some $\pi \in
\Proj(C)$, so that $\{u \in U(A) \mid \det(u)^2=1 \} \subseteq
\det^{-1}(\Sym(C))$. Finally, part (c) implies
$\det^{-1}(\Sym(C)) \subseteq \Sym(A)$.
\end{proof}
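In the classical case $C = \C$, part (b) of the preceding proof reduces
to the familiar fact that a plane rotation is a product of two
reflections: substituting $\varphi = \tan(\theta/2)$ for
$-\pi < \theta < \pi$ gives
\[
v_\varphi = (1-2p_\varphi)(1-2p_0) =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
\]
and every $u \in U(\M_2(\C))$ with $\det(u)=1$ is unitarily equivalent
to $\diag(e^{i\theta}, e^{-i\theta})$, hence to such a $v_\varphi$.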
For AW*-algebras of infinite type, it is known that
every unitary is a product of four
symmetries~\cite{thakarebaliga:symmetries}, and therefore the symmetry
group is the full unitary group.
That leaves AW*-algebras of type $\mathrm{II}_1$. For W*-factors of
this type, it is known that $\Sym(A)=U(A)$~\cite{broise:unitaries}.
If $\Sym(A)$ is closed in $U(A)$, it follows
from~\cite[Theorem~2]{kadison:unitary}, which holds for AW*-algebras,
that $\Sym(A)=U(A)$. The general question of whether $\Sym(A) = U(A)$
for AW*-algebras $A$ of type $\mathrm{II}_1$ remains open.
\subsection*{Active lattices}
The final piece of structure we will need to be able to recover the
full algebra structure of an AW*-algebra is an action of the
symmetry group.
\begin{definition}
An \emph{action} of a group $G$ on a piecewise AW*-algebra $A$ is a
group homomorphism from $G$ to the group of isomorphisms $A \to A$
in $\PAWstar$.
Similarly, an action of a group $G$ on a complete orthomodular lattice $P$ is a
group homomorphism from $G$ to the group of isomorphisms $P \to P$
in $\cOML$. Explicitly, we can consider a function $G \times P
\stackrel{\cdot}{\to} P$ satisfying:
\begin{itemize}
\item $1 \cdot p = p$ for all $p \in P$;
\item $u \cdot (v \cdot p) = (uv) \cdot p$ for all $p \in P$ and $u,v \in G$;
\item $u \cdot (-) \colon P \to P$ is a morphism of $\cOML$ for each $u \in G$.
\end{itemize}
Alternatively, we can speak about a group homomorphism $\alpha \colon G \to \Aut(P)$.
If the object being acted upon needs to be emphasized, we will speak
of a \emph{piecewise algebra action} or an \emph{orthomodular action}, respectively.
\end{definition}
If $A$ is an AW*-algebra, then its unitary group $U(A)$ acts on its
projection lattice $\Proj(A)$ by (left) conjugation: if $p$ is a projection
and $u$ is a unitary, then $upu^*$ is again a projection. Moreover,
because conjugation with a unitary is an automorphism of AW*-algebras,
$u(-)u^* \colon \Proj(A) \to \Proj(A)$ is a morphism of complete
orthomodular lattices for each $u \in U(A)$.
The group $\Sym(A)$ acts on $\Proj(A)$ by restricting the action of $U(A)$.
This motivates the following definition.
\begin{definition}
The category $\EAWstar$ of \emph{extended piecewise AW*-algebras} is
defined as follows. Objects are 4-tuples $(A,P,G,\cdot)$ consisting of:
\begin{itemize}
\item a piecewise AW*-algebra $A$;
\item an object $P$ of $\cOML$ that maps to $\Proj(A)$ under the
forgetful functor $\cOML \to \PCBoolean$;
\item a group $G$, that maps to a piecewise subgroup of $U(A)$ under the forgetful functor
$\Group \to \PGroup$, and that (contains and) is
generated as a group by the elements $1-2p$ for all $p \in \Proj(A)$;
\item an action of $G$ on $A$, which restricts to (left) conjugation on $G
\subseteq A$, that is, $g \cdot h = ghg^{-1}$ for $g \in G$ and $h \in
G \subseteq A$.
\end{itemize}
A morphism $f \colon (A,P,G,\cdot) \to (A',P',G',\cdot')$ is a function $f \colon A \to A'$
such that:
\begin{itemize}
\item $f$ is a morphism of piecewise AW*-algebras;
\item $f$ restricts to a morphism $P \to P'$ of complete orthomodular lattices;
\item $f$ restricts to a group homomorphism $G \to G'$;
\item the equivariance condition $f(u \cdot
a) = f(u) \cdot' f(a)$ holds for $u \in G$ and $a \in A$.
\end{itemize}
\end{definition}
In fact, using the equivalence $F \colon \PCBoolean \to \PAWstar$ of
Theorem~\ref{thm:piecewiseequivalence}, we can
whittle the data down further. In particular, if a group $G$ has an
orthomodular action on $P$, there is an induced piecewise algebra
action on $F(P)$ as follows (applying Lemma~\ref{lem:piecewiseorthomodular}
and Theorem~\ref{thm:piecewiseequivalence}):
\[
G \to \Aut_{\cOML}(P) \subseteq
\Aut_{\PCBoolean}(P) \cong \Aut_{\PAWstar}(F(P)).
\]
Hence we can reformulate purely in terms of orthomodular
lattices and groups.
\begin{definition}\label{def:activelattive}
An \emph{active lattice} is a 3-tuple $(P,G,\cdot)$ consisting of:
\begin{itemize}
\item a complete orthomodular lattice $P$;
\item a group $G$, that maps to a piecewise subgroup of $U(F(P))$
under the forgetful functor $\Group \to \PGroup$, and
that (contains and) is generated as a group by the elements $1-2p$
for all $p \in \Proj(F(P)) \cong P$;
\item an orthomodular action of $G$ on $P$ such that the induced
piecewise algebra action of $G$ on $F(P)$ restricts to (left)
conjugation on $G \subseteq F(P)$.
\end{itemize}
A \emph{morphism of active lattices} $(P,G,\cdot) \to
(P',G',\cdot')$ is a morphism $f \colon P \to P'$ of complete
orthomodular lattices such that:
\begin{itemize}
\item $Ff$ restricts to a group homomorphism $G \to G'$;
\item equivariance $f(u \cdot
p) = Ff(u) \cdot' f(p)$ holds for all $u \in G$ and $p \in P$.
\end{itemize}
Active lattices and their morphisms form a category $\AL$.
\end{definition}
\begin{proposition}\label{prop:activelatticesequivalence}
The categories $\EAWstar$ and $\AL$ are equivalent.
\end{proposition}
\begin{proof}
We use the unit $\eta_P \colon P \to \Proj(F(P))$ and counit
$\varepsilon_A \colon F(\Proj(A)) \to A$ isomorphisms of the
equivalence of Theorem~\ref{thm:piecewiseequivalence}
to define appropriate functors.
Define $G \colon \EAWstar \to \AL$ by
$G(A,P,G,\alpha)=(P,U(\varepsilon_A^{-1})(G), \alpha \circ
U(\varepsilon_A))$ and $G(f)=f$. This is well-defined: if $G$ is a
piecewise subgroup of $U(A)$, then $U(\varepsilon_A^{-1})(G)$ is a
piecewise subgroup of $U(F(P))$, and precomposing the action
$\alpha \colon G \to \Aut(P)$ with $U(\varepsilon_A)$ turns it into
an action of $U(\varepsilon_A^{-1})(G)$ on $P$. The equivariance
condition on morphisms also follows directly.
In the reverse direction, define $H \colon \AL \to \EAWstar$ on
objects by setting
\[
H(P,G,\alpha) = (F(P), \eta_P(P), G, \Aut(\eta_P^{-1}) \circ \alpha)
\]
and on morphisms by $H(f)=F(f)$. This is
well-defined: the structure of $P$ as a
complete orthomodular lattice transfers via $\eta_P$ to
$\eta_P(P)=\Proj(F(P))$, and postcomposing the action $\alpha \colon
G \to \Aut(P)$ with $\Aut(\eta_P^{-1})$ turns it into an action of $G$ on
$\Proj(F(P))$. The equivariance condition on morphisms also follows
directly.
Now $\eta_P$ implements a (natural) isomorphism $G \circ H
(P,G,\cdot) \cong (P,G,\cdot)$, and $\varepsilon_A$ implements a
(natural) isomorphism $H \circ G(A,P,G,\cdot) \cong
(A,P,G,\cdot)$. Hence $G$ and $H$ form an equivalence.
\end{proof}
\subsection*{The functor}
We can now define a functor from AW*-algebras to active lattices, and
prove that it is faithful. In Section~\ref{sec:fullness} we will prove that
it is also full. The next proposition tacitly identifies a piecewise AW*-algebra $A$
with $F(\Proj(A))$, as justified by Theorem~\ref{thm:piecewiseequivalence}.
\begin{proposition}\label{prop:thefunctor}
There is a functor $\ActiveProj \colon \AWstar \to \AL$ acting as
\[
\ActiveProj(A) = (\Proj(A), \Sym(A), \cdot),
\]
on objects, where $u \cdot p = upu^*$. It sends a morphism $A \to B$ to its restriction $\Proj(A) \to
\Proj(B)$.
\end{proposition}
\begin{proof}
Follows directly from the definitions.
\end{proof}
Via Proposition~\ref{prop:activelatticesequivalence}, we also
write $\ActiveProj$ for the functor $\AWstar \to \EAWstar$.
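For example, $\ActiveProj(\C) = (\{0,1\}, \{\pm 1\}, \cdot)$ with the
trivial action; more generally, for any commutative AW*-algebra $A$ the
conjugation action of $\Sym(A)$ on $\Proj(A)$ is trivial, since
$upu^* = puu^* = p$.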
\begin{lemma}\label{lem:faithful}
The functor $\ActiveProj$ is faithful.
\end{lemma}
\begin{proof}
If $\ActiveProj(f)=\ActiveProj(f')$, the continuous linear
functions $f,f' \colon A \to B$ coincide on $\Proj(A)$. But $A$
is the closed linear span of $\Proj(A)$, so $f=f'$.
\end{proof}
The reader might think that Definition~\ref{def:activelattive} could
be reduced further still by considering just complete orthomodular
lattices acted upon by groups generated by them, and letting morphisms
be equivariant pairs of group homomorphisms and morphisms of complete
orthomodular lattices. The following example shows that one cannot
ignore piecewise algebra structure this easily and hope to have a full
and faithful functor out of $\AWstar$.
\begin{example}
Consider $\ActiveProj(\M_2(\C)) = (\Proj(\M_2(\C)), \Sym(\M_2(\C)),
\cdot)$. Define a morphism of complete orthomodular lattices $f
\colon \Proj(\M_2(\C)) \to \Proj(\M_2(\C))$ by $f(0)=0$, $f(1)=1$,
and $f(p)=p^\perp$ for $p \neq 0,1$. Recall from
Lemma~\ref{lem:sym:typeone} that $\Sym(\M_2(\C)) = \{ u \in U_2(\C)
\mid \det(u)=\pm 1 \}$. Define a group homomorphism $g
\colon \Sym(\M_2(\C)) \to \Sym(\M_2(\C))$ by $g(u) = \det(u) u$.
Write $j$ for the injection $\Proj(\M_2(\C)) \to \Sym(\M_2(\C))$
given by $j(p) = 1-2p$. For $p=0,1$ one easily checks that
$j(f(p))=g(j(p))$, and for $p \neq 0,1$:
\[
j(f(p)) = j(p^\perp) = p - p^\perp = \det(p^\perp-p) \cdot
(p^\perp-p) = g(p^\perp-p) = g(j(p)).
\]
Finally, for $u \in \Sym(\M_2(\C))$ and $p \neq 0,1$:
\[
g(u) f(p) g(u)^* = |\det(u)|^2 up^\perp u^* = 1-upu^* =
f(upu^*),
\]
and for $p=0,1$ this formula is also easily seen to hold. Hence $f$
and $g$ satisfy the equivariance condition.
But if there is a linear map $h \colon \M_2(\C) \to \M_2(\C)$ that
restricts to $f$ on $\Proj(\M_2(\C))$ and to $g$ on $\Sym(\M_2(\C))$,
then for $\zeta \in U(\C) \backslash \{ \pm 1 \}$, $p \in \Proj(\M_2(\C))
\backslash\{0,1\}$, and $u=\zeta p+\zeta^*p^\perp \in
\Sym(\M_2(\C))$, we would have
\[
u = g(u) = h(u) = h(\zeta p+\zeta^*p^\perp) = \zeta
f(p)+\zeta^*f(p^\perp) = \zeta p^\perp +
\zeta^*p = u^*,
\]
contradicting $\zeta\neq\pm 1$. Therefore it cannot be the case that
$h$ restricts to $f$.
\end{example}
In the commutative case, the functor $\ActiveProj$ has nice properties.
\begin{example}
There is a functor $\CBoolean \to \AL$ that maps a complete Boolean
algebra $B$ to the active lattice $(B,B_{\text{add}},\cdot)$. Here,
we identify $B$ with $\Proj(F(B))$ using
Theorem~\ref{thm:piecewiseequivalence}, and
$B_{\text{add}}$ is the additive group of $B$ qua Boolean ring,
which acts trivially on the Boolean algebra $B$ itself. This functor
is full and faithful. Moreover, it factors through the functor
$\ActiveProj$. If we restrict to the full subcategory $\cAL$ of
$\AL$ consisting of the objects $(P,G,\cdot)$ for which $P$ is a
complete Boolean algebra, then it follows from
Example~\ref{ex:sym:comm} that the functor $\ActiveProj$
becomes an equivalence of categories. This makes the left triangle
in the following diagram commute. The other faces obviously commute.
\[\xymatrix@C-8ex@R+1ex{
& \cAL \ar@{^{(}->}[rrr] &&& \AL \ar[dl] \\
\CBoolean \ar@{<-}[ur]^-{\cong} \ar@{^{(}->}|(.475){\hole}[rrr] &&& \cOML \\
&& \cAWstar \ar[uul]_(.2){\ActiveProj} \ar[ull]^-{\Proj}_-{\simeq} \ar@{^{(}->}[rrr]
&&& \AWstar \ar[ull]^-{\Proj} \ar[uul]_-{\ActiveProj}
}\]
\end{example}
\section{Recovering total algebras from piecewise algebras}
\label{sec:fullness}
This section proves that the functor $\ActiveProj$ of
Proposition~\ref{prop:thefunctor} is full. The proof distinguishes two
cases. First, we adapt a theorem of Dye to deal with algebras without
type $\mathrm{I}_2$ summands. Subsequently we deal with algebras of
type $\mathrm{I}_2$ directly.
\subsection*{Algebras without $\mathrm{I}_2$ summand and a theorem of Dye}
To facilitate the proof of Theorem~\ref{thm:dye} below, we give a
sequence of preparatory lemmas. Several of these are adapted from
Dye's results in~\cite[Section~3]{dye:projections}. Let $A$ be an
AW*-algebra. Any matrix ring $\M_n(A)$ is an AW*-algebra;
see~\cite[Section~62]{berberian}. If $x$ is a row vector in $A^n$
one of whose entries is a projection, then there is a projection in
$\M_n(A)$ whose range is the submodule $Ax \subseteq A^n$ according
to~\cite[Lemma~2]{dye:projections}.
We shall refer to these projections in $\M_n(A)$ as \emph{vector projections.}
In particular, given two distinct indices $1\leq i,j \leq n$ and an
element $\alpha \in A$, there is a projection as above where the
vector $x$ is taken to have $1$ in the $i$th entry, $\alpha$ in the
$j$th entry, and all other entries zero. We denote the corresponding
projection in $\M_n(A)$ by $p_{ij}(\alpha)$. For instance, when $n =
2$, $i = 1$, and $j = 2$, we have
\[
p_{12}(\alpha) = \begin{pmatrix}
(1+\alpha \alpha^*)^{-1} & (1 + \alpha \alpha^*)^{-1}\alpha \\
\alpha^* (1 + \alpha \alpha^*)^{-1} & \alpha^* (1 + \alpha \alpha^*)^{-1} \alpha
\end{pmatrix}.
\]
For larger $n$, we follow the convention to only write down the nonzero
2-by-2 parts of such $n$-by-$n$ matrices.
Notice that if $p_{ij}(\alpha) = p_{ij}(\beta)$ for some $\alpha,\beta \in A$,
then $\alpha = \beta$.
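As a sanity check, writing $\delta = (1+\alpha\alpha^*)^{-1}$ (which
commutes with $\alpha\alpha^*$) one can verify directly that
$p_{12}(\alpha)$ is a projection: it is visibly self-adjoint, and for
instance the $(1,1)$-entry of its square is
\[
\delta^2 + (\delta\alpha)(\alpha^*\delta) = \delta(1+\alpha\alpha^*)\delta = \delta,
\]
the other entries being similar.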
\begin{lemma}\label{lem:generatingprojections}
Let $A$ be an AW*-algebra.
\begin{enumerate}[(a)]
\item Sublattices of $\Proj(\M_n(A))$ containing all
$p_{ij}(\alpha)$ contain all vector projections.
\item Any projection in $\M_n(A)$ is the supremum of (orthogonal)
vector projections.
\end{enumerate}
Hence the $p_{ij}(\alpha)$ generate $\Proj(\M_n(A))$ as a
complete orthomodular lattice.
\end{lemma}
\begin{proof}
Part~(a) is proven as in~\cite[Lemma~7]{dye:projections}.
For~(b), first note that the proof of~\cite[Lemma~7]{dye:projections} illustrates that
every nonzero element of $\Proj(\M_n(A))$ contains a nonzero vector projection.
Fix $p \in \Proj(\M_n(A))$. Zorn's lemma gives a maximal set $S$ of orthogonal
nonzero vector projections below $p$. We claim that $p$ equals
$p_0=\bigvee S$. Otherwise $p_0 < p$, so that there would be a nonzero
vector projection $q \leq p - p_0$.
Because $p - p_0 \leq p$, transitivity gives $q \leq p$. Combined with $q \leq p - p_0$,
this implies $q$ is orthogonal to $p_0$. It follows that $q$ is
orthogonal to every element of $S$, so $S \sqcup \{q\}$ is an orthogonal set
of projections below $p$, contradicting maximality.
\end{proof}
We denote by $e_{ij} \in \M_n(A)$ the matrix unit whose $i,j$-entry is
1 and every other entry is zero. Note that $e_{ii} = p_{ij}(0)$ for
any $j \neq i$.
For a projection $p$ in an AW*-algebra, we denote by $s_p = 1 - 2p$ the
corresponding symmetry.
\begin{lemma}\label{lem:unitaryimage}
Let $A$ and $B$ be AW*-algebras. If $f \colon \Proj(\M_n(A)) \to
\Proj(\M_n(B))$ is a function satisfying $f(e_{ii})=e_{ii}$ and
$f(s_p q s_p) = s_{f(p)} f(q) s_{f(p)}$, then for each $i,j$
and each $\zeta \in U(A)$ there is a unique $\xi \in U(B)$ with $f(p_{ij}(\zeta)) = p_{ij}(\xi)$.
\end{lemma}
\begin{proof}
Notice that for $\zeta \in U(A)$, we have (in ``2-by-2 shorthand'')
\[
p_{ij}(\zeta) = \frac{1}{2}
\begin{pmatrix}
1 & \zeta \\
\zeta^* & 1
\end{pmatrix}.
\]
It is easy to see that
conjugation by $1-2p_{ij}(\zeta)$ swaps $e_{ii}$ and $e_{jj}$ while
leaving the remaining diagonal matrix units fixed. Conversely, if $p
\in \Proj(\M_n(A))$ is such that conjugation by $1-2p$ leaves
$e_{kk}$ fixed for $k \neq i,j$, then it
must equal the identity everywhere except in rows and columns $i$
and $j$. Hence we can write $p=\left(\begin{smallmatrix} \alpha & \beta \\
\beta^* & \gamma \end{smallmatrix}\right)$ in ``2-by-2 shorthand''. If
$e_{ii} = (1-2p) e_{jj} (1-2p)$, then $\alpha=\frac{1}{2}$ and
$\beta^*\beta=\frac{1}{4}$, and it follows from $p=p^2$ that
$\gamma=\frac{1}{2}$ and $\beta\beta^*=\frac{1}{4}$. Hence
the projections of the form $p_{ij}(\zeta)$ with $\zeta$ unitary are
precisely those projections $p$ for which
conjugation with $1-2p$ swaps $e_{ii}$ and
$e_{jj}$ while leaving the other $e_{kk}$ fixed.
Now, because of the assumptions that $f$ sends diagonal matrix
units to diagonal matrix units, and is equivariant, the same
statement is true about $f(p_{ij}(\zeta))$. Hence there is some
unitary $\xi \in U(B)$ such that $f(p_{ij}(\zeta)) =
p_{ij}(\xi)$; uniqueness follows because $p_{ij}(\xi) = p_{ij}(\xi')$
implies $\xi = \xi'$.
\end{proof}
Recall that a $\C$-linear function $f \colon A \to B$ between
$C^*$-algebras that preserves the involution is a \emph{Jordan
$*$-homomorphism} if it preserves the Jordan product $a \circ b =
\frac{1}{2}(ab+ba)$; this is readily seen to be equivalent to the
property that $f$ preserves the square of every element.
\begin{lemma}\label{lem:jordan}
Given a $*$-ring homomorphism $A \to B$ between
C*-algebras, there is a unique Jordan $*$-homomorphism
$A \to B$ that equals it on
self-adjoint elements.
\end{lemma}
\begin{proof}
Let $f \colon A \to B$ be a $*$-ring homomorphism.
As it preserves $1$ it is $\mathbb{Q}$-linear, and it
follows from preserving positivity that it is in fact $\R$-linear.
Define complementary projections $q_- = \frac{1}{2}(1+if(i))$ and
$q_+ = \frac{1}{2}(1-if(i))$ in $B$. Setting
\begin{align*}
f_- & \colon A \to q_- B q_- & f_-(a) & =
\tfrac{1}{2}(f(a) + if(ia)) \\
f_+ & \colon A \to q_+ B q_+ & f_+(a) & =
\tfrac{1}{2}(f(a) - if(ia))
\end{align*}
gives $*$-ring homomorphisms, where $f_-$ is $\mathbb{C}$-anti-linear and
$f_+$ is $\mathbb{C}$-linear. Clearly $f=f_+ +f_-$.
Now define $g \colon A \to B$ by
\[
g(a) = f_+(a) + (f_-(a))^*.
\]
This $\C$-linear function preserves the involution
and agrees with $f$ on self-adjoint elements. It is easy to verify
that it preserves the operation of squaring because the images of
$f_+$ and $f_-$ are orthogonal in $B$.
Uniqueness is straightforward.
\end{proof}
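A degenerate but instructive instance of this construction is complex
conjugation $f(a) = a^*$ on $A = B = \C$: here $f(i)=-i$, so $q_-=1$
and $q_+=0$, whence $f_-=f$ and $f_+=0$, and the resulting Jordan
$*$-homomorphism $g(a) = f_-(a)^* = a$ is the identity, which indeed
agrees with $f$ on the self-adjoint (that is, real) elements.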
The following lemma records some results of
Dye~\cite{dye:projections} about properties of the ``coordinate
assignment'' from Lemma~\ref{lem:unitaryimage}.
Basically, it expresses algebraic operations on the coordinates in
lattice-theoretic terms. The subsequent lemma will use these properties to
establish a $*$-ring homomorphism, following~\cite[Lemmas~6 and~8]{dye:projections}.
Recall that a \emph{lattice polynomial} is an expression
combining a finite number of variables using $\wedge$ and $\vee$;
these are preserved by morphisms in $\cOML$.
\begin{lemma}\label{lem:latticepolynomials}
There exist lattice polynomials $P$, $Q$, and $R$ such that, for any
elements $\alpha,\beta,\gamma$ of a C*-algebra $A$ with $\gamma$
invertible, any integer $n \geq 3$, and any distinct indices
$1 \leq i,j,k \leq n$, the following hold:
\begin{enumerate}[\quad (a)]
\item $p_{ij}(\alpha+\beta) = P\big( p_{ij}(\alpha), p_{ij}(\beta), p_{ik}(\gamma), e_{ii},
e_{jj}, e_{kk} \big)$;
\item $p_{ij}(-\alpha\beta) = Q\big(p_{ik}(\alpha), p_{kj}(\beta), e_{ii}, e_{jj}\big)$;
\item $p_{ij}(-\alpha^*) = R\big (p_{ji}(\alpha), e_{ii}, e_{jj} \big)$.
\end{enumerate}
\end{lemma}
\begin{proof}
See~\cite[Lemma~5]{dye:projections},
\cite[Lemma~4]{dye:projections},
and~\cite[Lemma~3(i)]{dye:projections}, respectively.
\end{proof}
\begin{lemma}\label{lem:coordinatefunction}
Let $f \colon \Proj(\M_n(A)) \to \Proj(\M_n(B))$ be a morphism of
$\cOML$ for AW*-algebras $A,B$, and $n\geq 3$. Suppose
$f(e_{ii})=e_{ii}$ for all $i$, and that for any distinct $i,j$ and
any $\zeta \in U(A)$ there is $\xi \in U(B)$ with $f(p_{ij}(\zeta)) = p_{ij}(\xi)$.
Then there is a diagonal $W \in U(\M_n(B))$ such that:
\begin{enumerate}[\quad (a)]
\item there is a function $\varphi \colon U(A) \to U(B)$ satisfying
the ``coordinate condition''
\begin{equation*}
f(p_{ij}(\alpha)) = W^*p_{ij}(\varphi(\alpha))W
\end{equation*}
for all $\alpha \in U(A)$ and distinct indices $i,j$;
\item $\varphi$ extends to a $*$-ring homomorphism $A \to B$
satisfying the coordinate condition for all $\alpha \in A$ and
distinct $i,j$.
\end{enumerate}
\end{lemma}
\begin{proof}
Abbreviate the coordinate condition as ($*$).
By hypothesis, for all indices $j > 1$ there exist $\beta_j \in U(B)$
such that $f(p_{1j}(1)) = p_{1j}(\beta_j)$. Define
$W = \diag(1,\beta_2, \dots, \beta_n) \in U(\M_n(B))$. Then
$p_{1j}(\beta_j) = W^* p_{1j}(1) W$ for all $j$. Notice
that conjugation by a diagonal unitary fixes all $e_{ii}$,
and leaves the set $\{p_{ij}(\alpha)\}$ invariant as $\alpha$
ranges over $U(A)$ (respectively, over $A$). Thus, replacing
$f$ with the morphism $p \mapsto Wf(p)W^*$, we may assume
that $f(p_{1j}(1)) = p_{1j}(1)$ for all $j > 1$, and prove
that ($*$) holds in both~(a) and~(b) with $W = 1$.
Towards (a), define $\varphi \colon U(A) \to U(B)$ by the condition
$f(p_{12}(\alpha)) = p_{12}(\varphi(\alpha))$.
If $f(p_{1j}(\alpha)) = p_{1j}(\varphi(\alpha))$ for some
$\alpha \in U(A)$ and some $j > 1$, then for any $i > 1$ distinct from $j$ it follows
by applying Lemma~\ref{lem:latticepolynomials} that
\begin{align*}
f(p_{ij}(\alpha))
& = f\big(Q\big( p_{i1}(-1), p_{1j}(\alpha), e_{ii}, e_{jj}\big)\big) \\
& = f\big(Q\big( R\big( p_{1i}(1), e_{ii}, e_{11} \big),
p_{1j}(\alpha), e_{ii}, e_{jj} \big)\big) \\
& = Q\big( R\big( p_{1i}(1), e_{ii}, e_{11} \big),
p_{1j}(\varphi(\alpha)), e_{ii}, e_{jj} \big)
= p_{ij}(\varphi(\alpha)).
\end{align*}
In particular, because ($*$) is known to hold in case $i = 1$ and $\alpha = 1$,
this shows that ($*$) in fact holds for $\alpha = 1$ and any distinct
$i,j > 1$ (and, of course, when $i = 1$ and $j = 2$).
Now, since ($*$) holds for $\alpha=1$ and the index pair $(j,2)$, and
since it holds by assumption for $i=1$, $j=2$, and all $\alpha \in U(A)$, for
$j > 2$ we find:
\begin{align*}
f(p_{1j}(\alpha))
& = f\big( Q\big( p_{12}(\alpha), R\big( p_{j2}(1), e_{22}, e_{jj}
\big), e_{11}, e_{jj} \big) \big) \\
& = Q\big( p_{12}(\varphi(\alpha)), R\big( p_{j2}(1), e_{22},
e_{jj} \big), e_{11}, e_{jj} \big)
= p_{1j}(\varphi(\alpha)).
\end{align*}
Thus the above shows that ($*$) holds for all $\alpha \in U(A)$
and any $j \geq 2$.
For the remaining case where $i > 1$ and $j = 1$, simply note
that
\begin{align*}
f(p_{i1}(\alpha)) &= f(R\big (p_{1i}(-\alpha^*), e_{ii}, e_{11} \big)) \\
&= R\big (p_{1i}(\varphi(-\alpha^*)), e_{ii}, e_{11} \big) = p_{i1}(\varphi(\alpha)).
\end{align*}
To prove part (b), we start by defining a function $\psi \colon A \to B$.
Write $\alpha = \alpha_1 + i \alpha_2$ where each $\alpha_k$ is self-adjoint.
Set
$\zeta_k=\frac{\alpha_k}{2n} + i\sqrt{1-(\frac{\alpha_k}{2n})^2}$,
where $n$ is an integer satisfying $\|\alpha_k\| \leq 2n$ for $k = 1,2$.
Then $\zeta_k \in U(A)$ satisfy $\zeta_k+\zeta_k^*=\frac{\alpha_k}{n}$.
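As a quick sanity check (spelled out here for convenience), all elements involved lie in the commutative C*-subalgebra generated by $\alpha_k$, and $\|\alpha_k/2n\| \leq 1$ makes the square root well defined, so
\[
\zeta_k \zeta_k^* = \zeta_k^* \zeta_k
= \Big(\tfrac{\alpha_k}{2n}\Big)^2 + \Big(1-\big(\tfrac{\alpha_k}{2n}\big)^2\Big) = 1,
\qquad
\zeta_k + \zeta_k^* = 2\cdot\tfrac{\alpha_k}{2n} = \tfrac{\alpha_k}{n}.
\]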
Now, an application of Lemma~\ref{lem:latticepolynomials}(a) with $\gamma=1$
shows $f(p_{ij}(\lambda_1+\lambda_2))=p_{ij}(\mu_1+\mu_2)$ if
$f(p_{ij}(\lambda_l))=p_{ij}(\mu_l)$, and similarly for sums with more terms.
Therefore, in particular,
\begin{align*}
f(p_{ij}(\alpha/n)) &= f(p_{ij}(\zeta_1+\zeta_1^*+i\zeta_2+i\zeta_2^*)) \\
&= p_{ij}(\varphi(\zeta_1)+\varphi(\zeta_1^*)+\varphi(i\zeta_2)+\varphi(i\zeta_2^*)).
\end{align*}
Taking $\beta \in B$ to be $n$ times the argument of $p_{ij}$ in the previous
line, and applying the same additivity observation to the $n$-fold sum
$\alpha = \alpha/n + \cdots + \alpha/n$, we obtain $f(p_{ij}(\alpha))=p_{ij}(\beta)$.
Setting $\psi(\alpha)=\beta$
yields $f(p_{ij}(\alpha))=p_{ij}(\psi(\alpha))$ for all $\alpha \in A$.
It follows that $p_{ij}(\psi(\alpha))=p_{ij}(\varphi(\alpha))$ for
unitary $\alpha$, whence $\psi$ extends $\varphi$.
Next we prove that $\psi$ is a $*$-ring homomorphism. First apply
Lemma~\ref{lem:latticepolynomials}(a) with $\gamma=1$ and use part (a)
to deduce
\begin{align*}
p_{ij}\big(\psi(\alpha)+\psi(\beta)\big)
& = P\big( p_{ij}(\psi(\alpha)), p_{ij}(\psi(\beta)), p_{ik}(\psi(1)), e_{ii}, e_{jj}, e_{kk} \big)\\
& = P\big( f(p_{ij}(\alpha)), f(p_{ij}(\beta)), f(p_{ik}(1)), f(e_{ii}), f(e_{jj}), f(e_{kk}) \big)\\
& = f\big(P\big(p_{ij}(\alpha), p_{ij}(\beta), p_{ik}(1), e_{ii}, e_{jj}, e_{kk}\big)\big) \\
& = f(p_{ij}(\alpha+\beta)) = p_{ij}\big(\psi(\alpha+\beta)\big),
\end{align*}
and conclude that $\psi$ is additive. Hence also
$\psi(0)=\psi(0+0)-\psi(0)=0$.
It similarly follows from Lemma~\ref{lem:latticepolynomials}(b)
that $\psi$ is multiplicative.
Finally, Lemma~\ref{lem:latticepolynomials}(c) shows that $\psi$
preserves the involution.
\end{proof}
The assumption that each $\zeta \in U(A)$ admits $\xi \in U(B)$
such that $f(p_{ij}(\zeta)) = p_{ij}(\xi)$ is slightly stronger than necessary
and is only used to shorten the proof above. With more work,
one may simply assume that this is the case when $i = 1$ and $j = 2$.
We are now ready to prove an AW*-analogue of Dye's
theorem~\cite[Theorem~1]{dye:projections}.
\begin{theorem}\label{thm:dye}
Let $A$ be an AW*-algebra without type $\mathrm{I}_2$ summands,
and $B$ any AW*-algebra. A $\cOML$-morphism $f
\colon \Proj(A) \to \Proj(B)$ extends to a Jordan $*$-homomorphism
$A \to B$ if and only if $f(s_p q s_p) = s_{f(p)} f(q) s_{f(p)}$.
\end{theorem}
\begin{proof}
The ``only if'' direction follows because the expression to be
preserved can be written in terms of Jordan operations:
\begin{align*}
s_p q s_p = (1-2p) q (1-2p)
&= q - 2 (pq+qp) + 4 pqp \\
&= q - 2 (p \circ q) + 4 (pqp + p^{\perp} q p^{\perp}) \circ p \\
&= q - 2 (p \circ q) + 4 (p+q-1)^2 \circ p.
\end{align*}
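For convenience, we note that the last two steps can be checked directly: expanding $(p+q-1)^2 = 1 - p - q + pq + qp$ and using $p\,p^{\perp} = 0 = p^{\perp} p$, one finds
\[
(pqp + p^{\perp} q p^{\perp}) \circ p \;=\; pqp \;=\; (p+q-1)^2 \circ p,
\]
so both expressions reduce to $q - 2(pq+qp) + 4pqp = s_p q s_p$.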
For the converse we first reduce the problem to the case
$A=\M_n(D)$ for $n \geq 3$ and an AW*-algebra $D$. Indeed,
\cite[Theorem~15.3]{berberian} and~\cite[Theorem~18.4]{berberian}
provide unique orthogonal central projections
$p_1,p_2,\ldots,p_\infty$ with sum $1$ such that $p_nA$ is of
type~$\mathrm{I}_n$ for $n < \infty$ and $p_\infty A$ has no finite
type~$\mathrm{I}$ summands. Then $A$ is the C*-sum of
$p_iA$~\cite[Proposition~10.2]{berberian},
and it suffices to consider one summand at a
time because Jordan $*$-homomorphisms are closed under direct sums.
By~\cite[Exercise~19.2]{berberian}, $p_\infty A \cong \M_3(D)$
for some AW*-algebra $D$. For each finite $n$,
by~\cite[Proposition~18.2]{berberian}, $p_nA \cong \M_n(C)$ for some
commutative AW*-algebra $C$.
By assumption $p_2=0$, leaving us with the commutative AW*-algebra
$p_1A$. But this case is taken care of by the
duality~\eqref{eq:commutativeduality}, since morphisms in $\cAWstar$
are in particular Jordan $*$-homomorphisms.
Thus we may replace $A$ with $\M_n(A)$ for $n \geq 3$.
Next, we make another reduction (replicating the proof
of~\cite[Theorem~1]{dye:projections}) to show that we may also
replace $B$ with $\M_n(B)$. Any two distinct diagonal matrix units
$e_{ii}$ in $\M_n(A)$ have a common complement, so the same is true
for their images under
$f$. By~\cite[Theorem~6.6]{kaplansky:awstar} this means
that their images $f(e_{ii})$ are equivalent
projections. These $n$ orthogonal equivalent projections sum
to $1$, so by~\cite[Proposition~16.1]{berberian} they form the diagonal
projections of a set of $n$-by-$n$ matrix units in $B$. Thus we may
replace $B$ by $\M_n(B)$ and assume that $f(e_{ii}) = e_{ii}$.
So we are assuming that $A$ and $B$ are AW*-algebras with an
$\cOML$-morphism $f \colon \Proj(\M_n(A)) \to \Proj(\M_n(B))$ for
$n \geq 3$.
Combining Lemmas~\ref{lem:unitaryimage}
and~\ref{lem:coordinatefunction} produces a $*$-ring homomorphism
$\varphi \colon A \to B$ and a diagonal $W \in U(\M_n(B))$ such that
$f(p_{ij}(\alpha))=W^* p_{ij}(\varphi(\alpha)) W$ for all $\alpha
\in A$ and all distinct $i,j$.
It follows from the definition of $p_{ij}$ that
$f(p_{ij}(\alpha))=W^* \big(\M_n\varphi(p_{ij}(\alpha))\big) W$ for all
$i,j$ and $\alpha \in A$.
Next we show that $\varphi$ preserves suprema of projections, using an
auxiliary function $\pi \mapsto p_{12}(\pi) \wedge e_{22}$.
It is a well-defined morphism $j_A \colon \Proj(A) \to \Proj(\M_n(A))$ of
complete orthomodular lattices that is injective. Hence
\begin{align*}
j_B(\bigvee_i \varphi(\pi_i))
& = \bigvee_i p_{12}(\varphi(\pi_i)) \wedge e_{22}
= \bigvee_i W f(j_A(\pi_i)) W^* \\
& = W f(j_A(\bigvee_i \pi_i)) W^*
= p_{12}(\varphi(\bigvee_i \pi_i)) \wedge e_{22}
= j_B(\varphi(\bigvee_i \pi_i)),
\end{align*}
and so $\bigvee_i \varphi(\pi_i) = \varphi(\bigvee_i \pi_i)$ by
injectivity of $j_B$.
Consequently, the $*$-ring homomorphism $\M_n(\varphi) \colon
\M_n(A) \to \M_n(B)$ also preserves suprema of projections
by~\cite[Theorem~8.2 and
Remark~8.3]{heunenreyes:diagonal}.
Hence so does its conjugation with $W$.
Now Lemma~\ref{lem:generatingprojections} guarantees that
$W^* \M_n(\varphi) W$ equals $f$ on all of $\Proj(\M_n(A))$.
The proof is concluded by an application of Lemma~\ref{lem:jordan}.
\end{proof}
\begin{remark}
It remains an open question whether every morphism of complete orthomodular
lattices $\Proj(A) \to \Proj(B)$ extends to a Jordan $*$-homomorphism
$A \to B$ when $A$ and $B$ are AW*-algebras and $A$ has no type
$\mathrm{I}_2$ summands. This is known to be the case when $A$ and $B$
are W*-algebras~\cite[Corollary~1]{buncewright:jordan}. Our proof
suffices to answer this question for AW*-algebras if
Lemma~\ref{lem:unitaryimage} holds more generally without the
equivariance assumption.
The analogous generalization of Lemma~\ref{lem:unitaryimage} is known
to hold over a von Neumann regular ring $R$, i.e.\ a ring such that every
$a \in R$ admits $b \in R$ with $a=aba$. In this setting, denote by
$q_{ij}(\alpha)$ the idempotent in $\M_n(R)$ whose row range is the
submodule of $R^n$ generated by the row vector with $i$th entry $1$ and
$j$th entry $\alpha$. Then the $q_{ij}(\alpha)$ for invertible $\alpha$
are characterised in lattice-theoretic terms as those projections $p$ that
complement both $e_{ii}$ and $e_{jj}$, i.e.\ $p \wedge e_{ii}=0
=p \wedge e_{jj}$ and $p \vee e_{ii} = e_{ii}+e_{jj} = p\vee e_{jj}$
(see Part~II, Chapter~III, Lemma~3.4 of~\cite{vonneumann:continuous}).
Unfortunately, this characterisation does not hold for a general AW*-algebra $A$.
To see the difficulty, let $\alpha \in A$ be neither a left nor a right zerodivisor, but also not invertible.
Considering $A^2$ as a left $\M_2(A)$-module, $p=p_{21}(\alpha)$ is a projection
with range $A \begin{pmatrix} \alpha & 1 \end{pmatrix}$. Since $\alpha$ is not a left
zerodivisor, $A \begin{pmatrix} \alpha & 1 \end{pmatrix} \cap A \begin{pmatrix} 0 & 1 \end{pmatrix} = 0$,
whence $\range(p \wedge e_{22})=\range(p) \cap \range(e_{22})=0$, so
$p \wedge e_{22}=0$. Similarly $p \wedge e_{11}=0$. Furthermore, $p^\perp$ has range
$A \begin{pmatrix} 1 & -\alpha^* \end{pmatrix}$, which has zero intersection with
$A \begin{pmatrix} 1 & 0 \end{pmatrix}$ because $\alpha$ is not a right zerodivisor,
so that $p^\perp \wedge e_{11}=0$, which $(-)^\perp$ sends to $p \vee e_{22} = 1$.
Also $p \vee e_{11}=1$, so $p$ complements both $e_{11}$ and $e_{22}$.
However, because $\alpha$ is not invertible, $p$ cannot be of the form
$p_{12}(\beta)$ for any $\beta$~\cite[Lemma~3(ii)]{dye:projections}.
\end{remark}
\begin{lemma}\label{lem:fullness:typeone}
Let $A$ and $B$ be AW*-algebras. If $f \colon \ActiveProj(A) \to
\ActiveProj(B)$ is a morphism in $\EAWstar$ such that $f$ extends to a
continuous $\C$-linear function $g \colon A \to B$, then
$g$ is a morphism of AW*-algebras satisfying $\ActiveProj(g)=f$.
\end{lemma}
\begin{proof}
We first show that $g(a)g(b) = g(ab)$ for all $a, b \in A$.
Because the functions $A \times A \to A$ on each side of the
equation above are continuous and bilinear, and
because $A$ is the closed linear span of its projections,
it suffices to consider the case where $a$ and $b$ are
projections.
Now, for $p,q \in \Proj(A)$,
\begin{align*}
1-2g(p)-2g(q)+4g(pq)
& = g(1-2p-2q+4pq) \\
& = f((1-2p)(1-2q)) \\
& = f(1-2p) f(1-2q) \\
& = (1-2f(p))(1-2f(q)) \\
& = 1-2g(p)-2g(q)+4g(p)g(q),
\end{align*}
and therefore $g(pq)=g(p)g(q)$ as desired. The above equations used,
respectively: linearity of $g$; $g$ extends $f$; $f$ is a group
homomorphism on $\Sym(A)$; $f$ is a piecewise algebra morphism; $g$
extends $f$.
So $g$ is an algebra homomorphism, and it is readily seen to be a
$*$-homomorphism using linearity and the fact that it equals $f$ on
normal elements. Because $f$ preserves suprema of
projections and $g$ extends it, we see that $g$ is a morphism in
\AWstar, which obviously satisfies $\ActiveProj(g) = f$.
\end{proof}
\begin{corollary}\label{cor:fullness:typeone}
Let $A$ and $B$ be AW*-algebras, and $f \colon
\ActiveProj(A) \to \ActiveProj(B)$ a morphism of $\EAWstar$. If $A$
has no type $\mathrm{I}_2$ summand, then $f$ is in the
image of $\ActiveProj$.
\end{corollary}
\begin{proof}
Theorem~\ref{thm:dye} extends $f \colon \Proj(A) \to \Proj(B)$
to a Jordan $*$-homomorphism $g \colon A \to B$, which is
continuous~\cite[Page~439]{stoermer:jordan}. Because $A$ is the
closed linear span of $\Proj(A)$,
in fact $f$ and $g$ coincide as functions $N(A) \to N(B)$.
Hence the result follows from Lemma~\ref{lem:fullness:typeone}.
\end{proof}
\subsection*{Type $\mathrm{I}_2$ algebras}
Next we focus on algebras of type $\mathrm{I}_2$. As in
Lemma~\ref{lem:sym:typeone}, we will use the fact that determinants
and traces are well-defined for matrices with entries in a commutative ring.
\begin{proposition}\label{prop:fullness:typeonetwo}
Let $A$ and $B$ be AW*-algebras, and $f \colon \ActiveProj(A) \to
\ActiveProj(B)$ a morphism of $\EAWstar$. If $A$ is type
$\mathrm{I}_2$, then $f$ is in the image of
$\ActiveProj$.
\end{proposition}
\begin{proof}
Let $C$ be a commutative AW*-algebra with
$A=\M_2(C)$; this exists by~\cite[Proposition~18.2]{berberian}.
The algebra $C$ is embedded in $A=\M_2(C)$ by $\gamma \mapsto \diag(\gamma,\gamma)$.
Fix $p = e_{11} \in \Proj(A)$ and $u = e_{12} + e_{21} \in \Sym(A)$.
Since $upu=p^{\perp}$, we deduce
\begin{align*}
f(u) f(p) f(u) &= f(p)^{\perp}, \\
f(u) f(p) &= f(p)^{\perp} f(u), \\
f(p) f(u) &= f(u) f(p)^{\perp}.
\end{align*}
It follows that $e'_{11} = f(p)$, $e'_{12} = f(p) f(u)$, $e'_{21} = f(u) f(p)$,
and $e'_{22} = f(p)^{\perp}$ form a self-adjoint set of 2-by-2
matrix units in $B$ (see~\cite[Page~429]{kadisonringrose}).
The image $D = f(C) \subseteq B$ is a commutative $*$-subalgebra
centralizing all $e'_{ij}$. Letting $V \subseteq B$ be the $D$-span
of the $e'_{ij}$, it follows that $V$ is a $*$-subalgebra of $B$ isomorphic to $\M_2(D)$.
Define a $C$-linear function $g \colon A \to V \subseteq B$ by $g(e_{ij}) = e'_{ij}$
and $g(\gamma)=f(\gamma)$ for $\gamma \in C$; it is a $*$-homomorphism.
Next we will prove that $g$ equals $f$ on all $q \in \Proj(A)$. Notice
that $\det(q)$ is a projection in $C$.
Using properties of the adjugate matrix~\cite[XIII.4.16]{Lang} we find
$\det(q) 1_A = \adj(q) q = \adj(q) q^2 = \det(q) q$, and so
$\det(q) 1_A \leq q$ in $\Proj(A)$.
Because $\adj \colon \M_2(C) \to \M_2(C)$
is $C$-linear, the projection $q' = q - \det(q)1_A$ has
determinant
\[
\det(q')1_A = \adj(q') q' = (\adj(q) - \det(q)1_A)(q - \det(q)1_A) = 0.
\]
So without loss of generality we may assume $\det(q) = 0$.
In this case one can compute that $\tau = \tr(q)$ is a projection of $C$.
As any projection in $A$ with trace $\tau$ and determinant
zero, $q$ can be written (in standard matrix units $e_{ij}$) in the form
\[
q=\frac{1}{2}
\begin{pmatrix}
\tau + \alpha & \zeta \beta \\
\zeta^* \beta & \tau - \alpha
\end{pmatrix}
\]
where $\alpha, \beta \in C$ are self-adjoint, satisfy $\alpha^2 + \beta^2 = \tau$, and
$\zeta \in C$ is a partial isometry with $\zeta \zeta^* \beta = \beta$.
Replacing $\zeta$ with $\zeta + (1- \zeta \zeta^*)$ if necessary,
we may in fact assume $\zeta \in U(C)$.
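That this matrix is indeed a projection is a routine verification, which we record for convenience: it is self-adjoint because $C$ is commutative and $\alpha,\beta$ are self-adjoint, and using $\tau\alpha=\alpha$, $\tau\beta=\beta$, $\zeta\zeta^*\beta=\beta$, and $\alpha^2+\beta^2=\tau$,
\[
q^2 = \frac{1}{4}
\begin{pmatrix}
(\tau+\alpha)^2 + \zeta\zeta^*\beta^2 & 2\tau\,\zeta\beta \\
2\tau\,\zeta^*\beta & \zeta^*\zeta\beta^2 + (\tau-\alpha)^2
\end{pmatrix}
= \frac{1}{2}
\begin{pmatrix}
\tau+\alpha & \zeta\beta \\
\zeta^*\beta & \tau-\alpha
\end{pmatrix}
= q.
\]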
Because the algebra $C$ has square
roots~\cite[Corollary~2.3]{deckardpearcy:diagonal},
there exists $\xi \in
U(C)$ such that $i\xi^2 = \zeta$.
From $\alpha^2 + \beta^2 = \tau$ one deduces that $\tau$ supports
$\alpha$ and $\beta$,
so $\tau^{\perp}$ annihilates $\alpha$
and $\beta$. Then
\begin{align}
1 - 2q &=
\begin{pmatrix}
\tau^{\perp}-\alpha & -\zeta \beta \\
-\zeta^* \beta & \tau^{\perp}+\alpha
\end{pmatrix} \tag{$*$} \\
&=
\begin{pmatrix}
\tau - \tau^{\perp} & 0 \\
0 & 1
\end{pmatrix}
\begin{pmatrix}
-\tau^{\perp}-\alpha & -i \xi^2 \beta \\
i(\xi^*)^2 \beta & \tau^{\perp}+\alpha
\end{pmatrix} \notag \\
&=
\begin{pmatrix}
\tau - \tau^{\perp} & 0 \\
0 & 1
\end{pmatrix}
\begin{pmatrix}
-\xi & 0 \\
0 & \xi^*
\end{pmatrix}
\begin{pmatrix}
\tau^{\perp} + \alpha & i\beta \\
i\beta & \tau^{\perp} + \alpha
\end{pmatrix}
\begin{pmatrix}
\xi^* & 0 \\
0 & \xi
\end{pmatrix} \notag \\ \notag
&= \big( (\tau - \tau^{\perp})p + p^{\perp} \big)
\big( -\xi p + \xi^* p^{\perp} \big)
\big( (\tau^{\perp}+\alpha)1 + i\beta u \big)
\big( \xi^* p + \xi p^{\perp} \big).
\end{align}
The four factors in the right hand side are elements of $\Sym(A)$ by
Lemma~\ref{lem:sym:typeone}(d). Because $f$ is multiplicative
when restricted to $\Sym(\M_2(C))$ and piecewise linear, applying $f$ to~($*$)
gives
\begin{align*}
1-2f(q)
& = f\big( (\tau - \tau^{\perp})p + p^{\perp} \big)
f\big( -\xi p + \xi^* p^\perp \big)
f\big( (\tau^{\perp}+\alpha)1+i\beta u \big)
f\big( \xi^* p + \xi p^\perp \big) \\
& = (\tau^{\perp}-\alpha) f(p)
- \zeta \beta f(p)f(u)
- \zeta^* \beta f(u)f(p)
+ (\tau^{\perp}+\alpha) f(p)^\perp \\
& = (\tau^{\perp}-\alpha) g(e_{11})
- \zeta \beta g(e_{12})
- \zeta^* \beta g(e_{21})
+ (\tau^{\perp}+\alpha) g(e_{22}) \\
& = g(1-2q)
= 1 - 2g(q),
\end{align*}
whence $f(q) = g(q)$.
Finally, because $*$-homomorphisms are continuous, an application
of Lemma~\ref{lem:fullness:typeone} finishes the proof.
\end{proof}
\subsection*{Fullness of the functor and some open problems}
We summarize by showing that $\ActiveProj
\colon \AWstar \to \AL$ is a full functor.
\begin{theorem}\label{thm:fullness}
If $A$ and $B$ are AW*-algebras, and $f \colon \ActiveProj(A) \to
\ActiveProj(B)$ is a morphism in $\AL$, then $f=\ActiveProj(g)$ for
some $g \colon A \to B$ in $\AWstar$.
\end{theorem}
\begin{proof}
Proposition~\ref{prop:activelatticesequivalence} allows us to replace
$\AL$ by $\EAWstar$. Like any AW*-algebra, $A$ is a direct sum $A = p_1A \oplus p_2A$
for central projections $p_1$ and $p_2 = 1-p_1$, where $p_1 A$ is
a type $\mathrm{I}_2$ AW*-algebra and $p_2 A$ is an AW*-algebra
without type $\mathrm{I}_2$ summands~\cite[Section~15]{berberian}.
Because $p_i$ are central in $A$, the symmetries $1-2p_i$ are central in $\Sym(A)$.
So the projections $q_i = f(p_i)$ are central in $B$, as the symmetries $1-2q_i$ are central in $\Sym(B)$ because $f$ is a morphism in $\AL$.
Thus $f$ restricts to two morphisms $f_i
\colon \ActiveProj(p_i A) \to \ActiveProj(q_i B)$ of $\EAWstar$.
Proposition~\ref{prop:fullness:typeonetwo} provides a
morphism $g_1 \colon p_1 A \to q_1 B$ of $\AWstar$ with $\ActiveProj(g_1)=f_1$, and
Corollary~\ref{cor:fullness:typeone} provides a morphism $g_2 \colon
p_2 A \to q_2 B$ of $\AWstar$ with $\ActiveProj(g_2)=f_2$. Their sum
$g \colon A \to B$, defined by $g(a) = g_1(p_1 a) + g_2(p_2 a)$, is a
morphism of $\AWstar$ satisfying $\ActiveProj(g)=f$.
\end{proof}
\begin{corollary}
$\AWstar$ is equivalent to a full subcategory of $\AL$.
\end{corollary}
\begin{proof}
Follows directly from Lemma~\ref{lem:faithful} and Theorem~\ref{thm:fullness}.
\end{proof}
This corollary immediately presents the problem of characterizing those
active lattices in the essential image of $\ActiveProj$. That is, for which
active lattices $(P,G,\cdot)$ does there exist an AW*-algebra $A$
such that $(P,G,\cdot) \cong \ActiveProj(A)$ as active lattices?
This is a \emph{coordinatization problem,} reminiscent of von Neumann's
coordinatization of continuous geometries by continuous regular
rings~\cite{vonneumann:continuous}. The authors are currently
unaware of any active lattices that are not in the essential image
of $\ActiveProj$. A solution to this problem should provide deeper
insight into how exactly the active lattice $\ActiveProj(A)$ ``encodes''
the ring structure of an AW*-algebra $A$.
We incorporated the symmetry group into $\ActiveProj(A)$ to
circumvent the problem that the product $pq$ of two projections in an
AW*-algebra $A$ is only a projection if $p$ and $q$ commute.
Another way to bypass this shortcoming would be to consider the submonoid
$P(A) \subseteq A$ generated by $\Proj(A)$.
The involution of $A$ restricts to $P(A)$, and this makes $P(A)$ into a
\emph{Baer $*$-semigroup} in the sense of Foulis~\cite{foulis:baer}
(that is, a $*$-semigroup in which the right annihilator of any subset is generated
as a right ideal by a projection).
The assignment $A \mapsto P(A)$ is a functor from $\AWstar$ to the category of
Baer $*$-semigroups with morphisms given by $*$-homomorphisms that preserve
annihilating projections.
This functor is faithful for the same reason given in the proof of Lemma~\ref{lem:faithful}.
Theorem~\ref{thm:fullness} now suggests the natural question: is this functor also full?
In conclusion, our results also suggest the following natural question for
general C*-algebras: can one reconstruct a C*-algebra $A$ from the
piecewise C*-algebra $N(A)$, the unitary group $U(A)$, and the action
by conjugation of the latter on the former?
The following proposition shows that this comes down to a Mackey--Gleason type
problem once again.
\begin{proposition}
Let $A,B$ be C*-algebras, and $f \colon N(A) \to N(B)$
a morphism of piecewise C*-algebras that restricts to a group homomorphism
$U(A) \to U(B)$. Then $f$ extends to a $*$-homomorphism $A \to B$
if and only if it is additive on self-adjoints.
\end{proposition}
\begin{proof}
One direction is obvious. For the other, assume that $f$ is
additive on self-adjoints. Since any $a \in A$ can be written as
$a=a_1+a_2$ for self-adjoint $a_1=\tfrac{1}{2}(a+a^*)$ and
$a_2=\tfrac{1}{2i}(a-a^*)$, if $f$ extends to a linear function $f
\colon A \to B$, then it does so uniquely, by
$f(a)=f(a_1)+if(a_2)$. First notice that this is well-defined and
coincides with the given values for $a \in N(A)$, since in that case $a_1
\commeas a_2$.
As $(a+b)_1 = a_1+b_1$ and $(a+b)_2 = a_2+b_2$,
the assumption makes the extension $f \colon A \to B$
additive. Next, for $z \in \mathbb{C}$, say $z=x+iy$ for real $x$
and $y$, compute
\begin{align*}
f(za) & = f(xa_1 - ya_2) + if(xa_2 + ya_1), \\
zf(a) & = f(xa_1) - f(ya_2) + if(xa_2) + if(ya_1).
\end{align*}
So the assumption in fact makes the extension $f \colon A \to B$ a
$\mathbb{C}$-linear function. It also clearly satisfies
$f(a^*)=f(a)^*$ and $f(1)=1$.
Finally, any self-adjoint $a \in A$ with $\|a\| \leq 1$
can be written as $a=\tfrac{1}{2}a_++\tfrac{1}{2}a_-$ for unitaries
$a_\pm = a \pm i\sqrt{1-a^*a}$ that commute with each
other and $a$. Therefore any element of $A$ is a linear combination
of four unitaries. Because $f$ restricts to a homomorphism $U(A) \to
U(B)$, it therefore preserves multiplication on all of $A$.
\end{proof}
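The decomposition into unitaries used in the last paragraph can be verified directly; we note it here for convenience. For self-adjoint $a$ of norm at most $1$ (which one can always arrange by rescaling) we have $a^*a = a^2$, the element $1-a^2$ is positive, all elements below commute, and
\[
a_\pm^* a_\pm = \big(a \mp i\sqrt{1-a^2}\,\big)\big(a \pm i\sqrt{1-a^2}\,\big)
= a^2 + (1-a^2) = 1,
\qquad
\tfrac{1}{2}a_+ + \tfrac{1}{2}a_- = a,
\]
and similarly $a_\pm a_\pm^* = 1$.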
One could take into account the action by conjugation of $U(A)$ on
$N(A)$, but it is not clear at all how additionally assuming that
$f(uau^*)=f(u)f(a)f(u)^*$ should guarantee that $f$ is additive on
self-adjoints.
\bibliographystyle{amsplain}
\section{Introduction}
The main purpose of the present article is the construction of geometric objects which share a large class of algebraic, analytic, geometric, and topological invariants. Our main tool is a refinement of a construct that dates back to Gassman which has been utilized by Perlis \cite{Perlis}, Sunada \cite{sunada}, and others. Given a commutative ring $R$ with identity, a group $G$, and a pair of subgroups $P_1,P_2 \leq G$, we say that $P_1,P_2$ are \textbf{$R$--equivalent} if $R[G/P_1]$ and $R[G/P_2]$ are isomorphic as left $R[G]$--modules. When $G$ is finite and $P_1,P_2$ are $\mathbf{Q}$--equivalent, the associated triple $(G,P_1,P_2)$ is called a \textbf{Gassman triple}. For general $R$, $(G,P_1,P_2)$ is called an \textbf{$R$--Gassman triple}.
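To make the definition concrete, here is a small computational sanity check (our illustration, not taken from the text): for $G = \mathrm{GL}(3,\mathbf{F}_2)$, the stabilizer $P_1$ of a point and the stabilizer $P_2$ of a plane of the Fano plane are non-conjugate subgroups of index $7$ that form a Gassman triple. Over $\mathbf{Q}$, equivalence of $P_1,P_2$ amounts to $|c \cap P_1| = |c \cap P_2|$ for every conjugacy class $c$ of $G$, which the script below verifies by brute force.

```python
import itertools
from collections import Counter

def mul(A, B):
    # 3x3 matrix product over F_2
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % 2
                       for j in range(3)) for i in range(3))

def det(A):
    # determinant over F_2 by cofactor expansion
    (a, b, c), (d, e, f), (g, h, i) = A
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % 2

# G = GL(3, F_2), realized as tuples of rows; |G| = 168
G = [M for M in (tuple(tuple(v[3 * i:3 * i + 3]) for i in range(3))
                 for v in itertools.product((0, 1), repeat=9))
     if det(M) == 1]
I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
inv = {g: next(h for h in G if mul(g, h) == I3) for g in G}

def act(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(3)) % 2 for i in range(3))

plane = [v for v in itertools.product((0, 1), repeat=3) if v[2] == 0]
P1 = frozenset(A for A in G if act(A, (1, 0, 0)) == (1, 0, 0))   # point stabilizer
P2 = frozenset(A for A in G if all(act(A, v)[2] == 0 for v in plane))  # plane stabilizer

def class_key(A):
    # canonical (lexicographically least) member of the conjugacy class of A
    return min(mul(mul(g, A), inv[g]) for g in G)

# Class-wise intersection counts agree, i.e. (G, P1, P2) is a Gassman triple ...
assert Counter(map(class_key, P1)) == Counter(map(class_key, P2))
# ... yet P1 and P2 are not conjugate in G.
assert all(frozenset(mul(mul(g, p), inv[g]) for p in P1) != P2 for g in G)
```

This is exactly the triple behind Perlis's arithmetically equivalent degree--$7$ number fields.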
Scott \cite{scott} found non-conjugate $\mathbf{Z}$--equivalent subgroups $P_1,P_2$ of $\PSL(2,\mathbf{F}_{29})$. The subgroups $P_1,P_2$ are isomorphic to $\mathrm{Alt}(5)$ and are conjugate in $\PGL(2,\mathbf{F}_{29})$. D. Prasad \cite{prasad} recently employed this $\mathbf{Z}$--Gassman triple $(\PSL(2,\mathbf{F}_{29}),P_1,P_2)$ to construct non--isometric Riemann surfaces with isomorphic Jacobian varieties viewed only as unpolarized abelian varieties. He also constructed a pair of non--isomorphic finite extensions of $\mathbf{Q}$ with isomorphic idele class groups and adele rings. In particular, these finite extensions are arithmetically equivalent (i.e.~have the same Dedekind zeta functions). Recently, the third author with B.~Linowitz and N.~Miller \cite{LMM} used non--isomorphic fields with isomorphic adele rings to construct isomorphisms between various Galois cohomology sets that arise in the study of $K$--forms of semisimple Lie groups. One instance of this was the construction of an isomorphism between the Brauer groups of the fields which was compatible with the restriction and co-restriction maps. The bijections between the other Galois cohomology sets were also compatible with the restriction and co-restriction maps.
\subsection{Differential geometric examples}
Our results split across algebraic and differential geometry. We state our differential geometric results first. Before doing so, we require some additional notation and terminology. Given a closed, Riemannian manifold $M$ with associated Laplace--Beltrami operator $\triangle_M$, the operator $\triangle_M$ acts on the space of $L^2$ functions or $L^2$ $k$--forms $\Omega^k(M)$. We denote the associated eigenvalue spectrum for the operator $\triangle_M$ acting on $\Omega^k(M)$ by $\mathcal{E}_k(M)$. In the case of $k=0$, we denote the eigenvalue spectrum by $\mathcal{E}(M)$ and refer to this as the \textbf{eigenvalue spectrum}. The spectrum $\mathcal{E}(M)$ is a well studied analytic invariant of the Riemannian manifold $M$ and is known to determine the dimension, volume, and total scalar curvature. A related geometric invariant is the \textbf{primitive geodesic length spectrum} $\mathcal{L}_p(M)$ of $M$. Assuming for simplicity that $M$ is negatively curved, each free homotopy class of closed curves on $M$ has a unique geodesic representative. We define $\mathcal{L}_p(M)$ to be the set of lengths (with multiplicity) of each geodesic representative in each free homotopy class. We denote by $H^k(M,\mathbf{Z})$ the \textbf{$k$th singular cohomology group} of $M$ with trivial $\mathbf{Z}$--coefficients. Given a finite cover $M' \to M$, we have induced homomorphisms $\Res\colon H^k(M,\mathbf{Z}) \to H^k(M',\mathbf{Z})$ and $\Cor\colon H^k(M',\mathbf{Z}) \to H^k(M,\mathbf{Z})$. For a pair of finite covers $M_1,M_2 \to M$, we say that a morphism $\psi_k\colon H^k(M_1,\mathbf{Z}) \to H^k(M_2,\mathbf{Z})$ is \textbf{compatible} if the diagram
\begin{equation}\label{Eq:Natural}
\begin{tikzcd}
& H^k(M,\mathbf{Z}) \arrow[rd,"\Res", bend left = 30] \arrow[ld,"\Res"', bend right = 30] & \\ H^k(M_1,\mathbf{Z}) \arrow[rr,"\psi_k"', bend right = 45] \arrow[ru,"\Cor"', bend right = 30] & & H^k(M_2,\mathbf{Z}) \arrow[lu,"\Cor", bend left = 30]&
\end{tikzcd}
\end{equation}
commutes. Finally, $M$ is called \textbf{large} if there exists a finite index subgroup $\Gamma_0 \leq \pi_1(M)$ and a surjective homomorphism of $\Gamma_0$ to a non-abelian free group. We now state our first result and refer the reader to \S 2 for a brief review of real/complex hyperbolic manifolds and the definition of non-arithmetic manifolds.
\begin{thm}\label{Sec:Intro:T1}
Let $M$ be a closed hyperbolic $n$--manifold that is large and non-arithmetic. Then for each $j \in \mathbf{N}$, there exist pairwise non-isometric, finite Riemannian covers $M_1,\dots,M_j$ of $M$ such that the following holds:
\begin{itemize}
\item[(1)]
$\mathcal{E}_k(M_i) = \mathcal{E}_k(M_{i'})$ for all $k$ and all $i,i'$.
\item[(2)]
$\mathcal{L}_p(M_i) = \mathcal{L}_p(M_{i'})$ for all $i,i'$.
\item[(3)]
There exist compatible isomorphisms $\psi_k\colon H^k(M_i,\mathbf{Z}) \to H^k(M_{i'}, \mathbf{Z})$ for all $k$ and for all $i,i'$.
\end{itemize}
\end{thm}
When $n \geq 3$, it follows from Mostow--Prasad rigidity (see \cite{Mostow}, \cite{GPra}) that $\pi_1(M_i),\pi_1(M_{i'})$ are non-isomorphic for $i \ne i'$. When manifolds $M_i,M_{i'}$ satisfy (1), they are referred to as \textbf{strongly isospectral}. When only $\mathcal{E}(M_i) = \mathcal{E}(M_{i'})$, the pair is said to be \textbf{isospectral}. Similarly, when (2) holds, the pair is said to be \textbf{length isospectral}. We note that for every $n \geq 2$, by work of Gromov--Piatetski-Shapiro \cite{GPS}, there are infinitely many commensurability classes of examples for which Theorem \ref{Sec:Intro:T1} can be applied. Moreover, being non-arithmetic or large are both commensurability invariants.
\begin{rem}
The compatible isomorphism in singular cohomology with trivial $\mathbf{Z}$--coefficients is a special case of a more general result that relates the cohomology of manifolds $M_1,M_2$ that arise from this refined Gassman/Sunada construction; see Lemma \ref{Sec:Coho:L4} which also answers Question 2 in \cite{prasad}. In particular, there is a large class of coefficients for which compatible isomorphisms exist and the coefficients need not be trivial.
\end{rem}
That one can construct non-isometric manifolds that satisfy (1) and (2) has been known since \cite{sunada}; see also \cite{LMNR} for a variant on \cite{sunada}. Additionally, it was known that when two manifolds arise from this construction, besides satisfying (1) and (2), they have $H^k(M_i,\mathbf{Q}) \cong H^k(M_{i'},\mathbf{Q})$. However, it need not be the case that the pair have isomorphic integral cohomology. Bartel--Page \cite{BartelPage1} (see also \cite{BartelPage}) found examples of pairs arising from a Sunada construction which do not have isomorphic cohomology groups with coefficients in $\mathbf{F}_p$. Specifically, given a finite set of primes $S$, there exist strongly isospectral closed hyperbolic $3$--manifolds with non-isomorphic $\mathbf{F}_p$--cohomology for every $p \in S$ and isomorphic $\mathbf{F}_p$--cohomology for every $p \notin S$ (see \cite[Thm 1.2]{BartelPage1}). Also, Lauret--Miatello--Rossetti \cite{LMR} prove that strongly isospectral pairs need not have isomorphic cohomology rings.
By work of Agol \cite[Thm 9.2]{Agol}, every closed hyperbolic $3$--manifold is large; that hyperbolic surfaces are large is well known. As a result, we obtain the following corollary of Theorem \ref{Sec:Intro:T1}.
\begin{cor}\label{Sec:Intro:C1}
Let $M$ be a closed, non-arithmetic real hyperbolic $2$-- or $3$--manifold. Then for each $j \in \mathbf{N}$, there exist pairwise non-isometric, finite Riemannian covers $M_1,\dots,M_j$ of $M$ such that the following holds:
\begin{itemize}
\item[(1)]
$\mathcal{E}_k(M_i) = \mathcal{E}_k(M_{i'})$ for all $k$ and all $i,i'$.
\item[(2)]
$\mathcal{L}_p(M_i) = \mathcal{L}_p(M_{i'})$ for all $i,i'$.
\item[(3)]
There exist compatible isomorphisms $\psi_k\colon H^k(M_i,\mathbf{Z}) \to H^k(M_{i'}, \mathbf{Z})$ for all $k$ and for all $i,i'$.
\end{itemize}
\end{cor}
When $n=2$, Corollary \ref{Sec:Intro:C1} is a small generalization of \cite{prasad}. In this setting, Borel \cite{borel} proved that there are only finite many non-isometric arithmetic hyperbolic surfaces of area at most $A$ for any $A>0$. In particular, for each genus $g \geq 2$, there are only finitely many points in $\mathcal{M}_g$, the moduli space of hyperbolic structures on $\Sigma_g$, that correspond to arithmetic hyperbolic structures. However, $\mathcal{M}_g$ has real dimension $6g-6$ and so we see that a typical hyperbolic structure on $\Sigma_g$ is non-arithmetic. A closed hyperbolic $3$--manifold is typically non-arithmetic as well. For each positive real number $V >0$, Borel \cite{borel} proved that there are only finitely many non-isometric arithmetic hyperbolic $3$--manifolds of volume at most $V$. However, it follows by work of Thurston that when $V$ is sufficiently large, there exist infinitely many closed hyperbolic $3$--manifolds of volume at most $V$. For instance, if $M_0$ is the complement of the figure-eight knot, for all but finitely many Dehn surgeries on $\partial M_0$, the resulting closed $3$--manifold will admit a complete hyperbolic structure by Thurston's Dehn Surgery theorem (see \cite{Thurston}). The figure-eight knot complement also admits a complete hyperbolic structure on its interior and the volumes of the closed hyperbolic manifolds obtained by Dehn surgery on $M_0$ are strictly smaller than $\mathrm{Vol}(M_0)$. Consequently, only finitely many of these closed hyperbolic $3$--manifolds can be arithmetic by Borel's finiteness theorem. For $n \geq 4$, the number of non-isometric, complete, finite volume hyperbolic $n$--manifolds of volume at most $V$ is finite by Wang \cite{Wang}. In this case, we can count the number of non-isometric complete, finite volume hyperbolic $n$--manifolds of volume at most $V$. 
Restricting to only the arithmetic or non-arithmetic manifolds, we obtain two counting functions and it is known that these functions have the same growth type (see \cite{GL} and the references therein for more on this topic).
Returning to the main topic of this subsection, we end with another family of examples.
\begin{cor}\label{Sec:Intro:C2}
Let $M$ be a closed complex hyperbolic $2$--manifold that is non-arithmetic and large. Then for each $j \in \mathbf{N}$, there exist pairwise non-isometric, finite Riemannian covers $M_1,\dots,M_j$ of $M$ such that:
\begin{itemize}
\item[(1)]
$\mathcal{E}_k(M_i) = \mathcal{E}_k(M_{i'})$ for all $k$ and all $i,i'$.
\item[(2)]
$\mathcal{L}_p(M_i) = \mathcal{L}_p(M_{i'})$ for all $i,i'$.
\item[(3)]
There exist compatible isomorphisms $\psi_k\colon H^k(M_i,\mathbf{Z}) \to H^k(M_{i'},\mathbf{Z})$ for all $k$ and for all $i,i'$.
\end{itemize}
\end{cor}
By work of Deligne--Mostow \cite{DM}, there are commensurability classes of complex hyperbolic 2--manifolds for which Corollary \ref{Sec:Intro:C2} can be applied. At present, there are only finitely many known commensurability classes of non-arithmetic complex hyperbolic 2--manifolds; see \cite{DPP} for more on this topic.
\subsection{Algebro-geometric results and examples}\label{Sec:AGresults}
We now describe some results that relate various algebro-geometric invariants for pairs of smooth projective varieties that are constructed via $R$--equivalence for certain rings $R$. The examples from Corollary \ref{Sec:Intro:C2} provide non-trivial examples of such pairs for $R=\mathbf{Z}$. Large families of examples of non-isomorphic smooth projective varieties for $R=\mathbf{Q}$ were constructed in \cite[Thm 1.1]{McR}. These examples arise in all possible dimensions and the universal cover of these examples can be taken to be any irreducible, non-compact Hermitian symmetric space.
For a field $K \subseteq \mathbf{C}$, we denote the category of smooth projective varieties over $K$ by $\Var_K$. The set of complex points $X(\mathbf{C})$ of a variety $X\in \Var_K$ can be regarded as a complex manifold. These spaces carry the usual topological invariants such as the (topological) fundamental group or singular cohomology. However, the singular cohomology of an algebraic variety $X\in \Var_K$ is endowed with more structure than just an abelian group. Hodge theory provides the singular cohomology groups with a canonical decomposition $H^i(X,\mathbf{Z})\otimes \mathbf{C}= \bigoplus_{p+q=i}H^{pq}$ such that $\overline{H^{pq}} = H^{qp}$ (see \cite{voisin} for instance). Such a decomposition is referred to as a \textbf{Hodge structure}. The subspace $H^{pq}$ can be defined as the space of de Rham cohomology classes represented by closed complex valued differential forms of type $(p,q)$. If $X$ is equipped with a K\"ahler metric, then $H^{pq}$ is isomorphic to the space of harmonic $(p,q)$--forms. For $k$ odd, the Hodge structure can be used to construct a complex structure on the real torus $H^k(X(\mathbf{C}),\mathbf{R})/H^k(X(\mathbf{C}),\mathbf{Z})$ which turns it into a complex torus called the \textbf{Griffiths intermediate Jacobian}. When $k=1$ or $k=2\dim_\mathbf{C}(X)-1$, these tori are in fact abelian varieties called the \textbf{Picard} and \textbf{Albanese varieties} of $X$. Setting $\mathbf{Q}_p$ to be the field of $p$--adic numbers and $\mathbf{Z}_p$ to be the ring of $p$--adic integers, via the comparison isomorphisms with \'etale cohomology (see \cite{milne}), we have
\begin{align*}
H^k(X(\mathbf{C}),\mathbf{Z}_p) &\cong H_{et}^k(X_{\overline{K}},\mathbf{Z}_p) :=\varprojlim_n H_{et}^k(X_{\overline{K}},\mathbf{Z}/(p^n)), \\
H^k(X(\mathbf{C}),\mathbf{Q}_p) &\cong H_{et}^k(X_{\overline{K}},\mathbf{Q}_p) := H_{et}^k(X_{\overline{K}},\mathbf{Z}_p)\otimes \mathbf{Q}_p,
\end{align*}
where $\overline{K}$ denotes the algebraic closure of $K$ and $X_{\overline{K}}= X\times_{\spec K} \spec{\overline{K}}$. The \'etale cohomology groups carry natural $\Gal(\overline{K}/K)$--actions which encode important arithmetic information about $X$. When $K$ is a number field, these Galois modules determine the Hasse--Weil zeta function (see \cite{serre}). At a more basic level, the fundamental homological invariant of a variety is its motive; see \S \ref{Sec:motives} for more details. Rational cohomology with its Hodge structure or $\mathbf{Q}_p$--\'etale cohomology with its Galois action depend only on the motive.
\begin{thm}\label{thm:cohXi}
Suppose $(G,P_1,P_2)$ is an $R$--Gassman triple, $p\colon X\to Y$ is a Galois \'etale cover with $X,Y \in \Var_K$ and with Galois group $G$, and $X_i= X/P_i$ for $i=1,2$.
\begin{enumerate}
\item[(1)]
If $K=\mathbf{C}$, then there is an $R$--module isomorphism of singular cohomology groups $H^i(X_1(\mathbf{C}),R)\cong H^i(X_2(\mathbf{C}),R)$. If $R=\mathbf{Z}$ (resp.~$\mathbf{Q}$), then the isomorphism respects the canonical integral (resp.~rational) Hodge structures. In particular, the intermediate Jacobians of $X_i$ are isomorphic (resp.~isogenous).
\item [(2)]
If $R=\mathbf{Z}_p$ (resp. $\mathbf{Q}_p$) and $\overline{K}$ is the algebraic closure of $K$, then there is a $\Gal(\overline{K}/K)$--equivariant isomorphism of \'etale cohomology $H_{et}^*(X_{1,\overline{K}},\mathbf{Z}_p)\cong H_{et}^*(X_{2,\overline{K}},\mathbf{Z}_p)$ (resp. $H_{et}^*(X_{1,\overline{K}},\mathbf{Q}_p)\cong H_{et}^*(X_{2,\overline{K}},\mathbf{Q}_p)$).
\item[(3)]
If $R=\mathbf{Q}$, then the effective Chow motives $M(X_i)$ of $X_i$ are isomorphic.
\end{enumerate}
\end{thm}
\begin{rem}
The last statement of case (1), when $\dim X_i=1$, is due to Prasad \cite{prasad}. In case (2) and $R=\mathbf{Q}_p$, this result is due to Prasad--Rajan \cite{pr}, who also observed that this implies that the Hasse--Weil zeta functions agree when $K$ is a number field. Note that case (3) actually implies the previous two statements when $\mathbf{Q} \subseteq R$.
\end{rem}
Combining this theorem with (the proof of) Corollary \ref{Sec:Intro:C2} yields:
\begin{thm}\label{prop:ZequivVar}
Fix an embedding $\overline{\mathbf{Q}} \subset \mathbf{C}$. Then for every $j \in \mathbf{N}$, there exist smooth projective surfaces $X_1,\ldots, X_j$ defined over $\overline{\mathbf{Q}}$ such that
\begin{enumerate}
\item[(1)] $H^k(X_i(\mathbf{C}),\mathbf{Z})\cong H^k(X_{i'}(\mathbf{C}),\mathbf{Z})$ as Hodge structures for all $k$ and all $i,i'$.
\item[(2)] $H_{et}^k(X_i,\mathbf{Z}_p)\cong H_{et}^k(X_{i'},\mathbf{Z}_p)$ as Galois modules for all $k$ and all $i,i'$.
\item[(3)] The motives $M(X_i)\cong M(X_{i'})$ for all $i,i'$.
\item[(4)] The topological fundamental groups of $X_i(\mathbf{C})$ are pairwise non-isomorphic.
\end{enumerate}
\end{thm}
In \S \ref{Sec:motives}, we will use Theorem \ref{prop:ZequivVar} to show the non-injectivity of the map from the Grothendieck group of varieties over $\overline{\mathbf{Q}}$ to the Grothendieck group of the category of effective Chow motives (see Theorem \ref{Sec:Motive:NonInject}). The differences $[X_i]-[X_{i'}]$ will give nonzero elements in the kernel.
\paragraph{\textbf{Acknowledgments.}} The authors would like to thank Nick Miller, Deepam Patel, Alan Reid, and Matthew Stover for conversations on the topics in this paper. We would also like to thank the referee for helpful comments that helped improve the clarity of the article. The first author was partially supported by an NSF grant. The third author was partially supported by NSF grant DMS-1408458.
\section{Preliminaries}
Real and complex hyperbolic $n$--space are examples of symmetric spaces of non-compact type. We refer the reader to \cite{Goldman} and \cite{Ratcliffe} for a thorough introduction to these spaces. The isometry group $\Isom(\mathbf{H}_\mathbf{R}^n)$ of real hyperbolic $n$--space $\mathbf{H}_\mathbf{R}^n$ is isogenous to the subgroup $\SO(n,1)$ of $\SL(n+1,\mathbf{R})$ that preserves the bilinear form
\[ B_{n,1}(x,y) = -x_{n+1}y_{n+1} + \sum_{j=1}^n x_jy_j. \]
Given a discrete subgroup $\Gamma \leq \Isom(\mathbf{H}_\mathbf{R}^n)$, the quotient space $\mathbf{H}_\mathbf{R}^n/\Gamma$ is a real hyperbolic $n$--orbifold. When $\Gamma$ is torsion free (i.e. contains no non-trivial elements of finite order), the quotient space is a complete, real hyperbolic $n$--manifold. We say that $\Gamma$ is a \textbf{lattice} if $\mathbf{H}_\mathbf{R}^n/\Gamma$ has finite volume. If $\mathbf{H}_\mathbf{R}^n/\Gamma$ is also compact, we say that $\Gamma$ is \textbf{cocompact}. Conversely, given a complete, finite volume real hyperbolic $n$--manifold $M$, via the action of $\pi_1(M)$ on the universal cover $\mathbf{H}_\mathbf{R}^n$, we obtain an injective homomorphism $\pi_1(M) \to \Isom(\mathbf{H}_\mathbf{R}^n)$. The image under this representation is a lattice. We note that because this representation depends on the choice of a lift $\widetilde{p} \in \mathbf{H}_\mathbf{R}^n$ of the base point $p \in M$, this representation is unique only up to conjugation in $\Isom(\mathbf{H}_\mathbf{R}^n)$.
The isometry group $\Isom(\mathbf{H}_\mathbf{C}^n)$ of complex hyperbolic $n$--space $\mathbf{H}_\mathbf{C}^n$ is isogenous to the subgroup $\SU(n,1)$ of $\SL(n+1,\mathbf{C})$ that preserves the hermitian form
\[ H_{n,1}(w,z) = -w_{n+1}\overline{z_{n+1}} + \sum_{j=1}^n w_j\overline{z_j}. \]
Complex hyperbolic $n$--manifolds and orbifolds are constructed similarly to those in the real hyperbolic setting but taking discrete subgroups of $\Isom(\mathbf{H}_\mathbf{C}^n)$. One important difference between real and complex hyperbolic $n$--manifolds that will be relevant is the existence of complex projective structures. First, a complex hyperbolic $n$--manifold is a complex manifold of real dimension $2n$. Due to an exceptional isogeny between $\SL(2,\mathbf{R})$ and $\SU(1,1)$, real hyperbolic $2$--manifolds coincide with complex hyperbolic $1$--manifolds. In particular, real hyperbolic $2$--manifolds come with a natural complex structure. For all $n>2$, real hyperbolic $n$--manifolds are not naturally complex. When $\Gamma$ is a torsion free cocompact lattice in $\SU(n,1)$, the associated complex hyperbolic $n$--manifold is a non-singular, complex projective algebraic variety.
Taking $\mathrm{G}$ to be either $\Isom(\mathbf{H}_\mathbf{R}^n)$ or $\Isom(\mathbf{H}_\mathbf{C}^n)$, given a pair of subgroups $\Gamma_1,\Gamma_2 \leq \mathrm{G}$, we say that $\Gamma_1,\Gamma_2$ are \textbf{commensurable} if $\Gamma_1 \cap \Gamma_2$ is a finite index subgroup of both $\Gamma_1,\Gamma_2$. We define the commensurator of $\Gamma$ in $\mathrm{G}$ to be the subgroup
\[ \Comm(\Gamma) = \set{g \in \mathrm{G}~:~g^{-1}\Gamma g, \Gamma \textrm{ are commensurable}}. \]
One sees that $\Gamma \leq \Comm(\Gamma)$. It follows from work of Margulis \cite[Thm 1, pp.~2]{Margulis} that either $[\Comm(\Gamma):\Gamma] < \infty$ or $\Comm(\Gamma)$ is dense in $\mathrm{G}$ in the analytic topology. When $[\Comm(\Gamma):\Gamma]$ is finite, we say that $\Gamma$ is \textbf{non-arithmetic} and when $\Comm(\Gamma) \leq \mathrm{G}$ is dense, we say that $\Gamma$ is \textbf{arithmetic}.
We end this section with a short remark concerning the cohomology/homology of $\Gamma$ and its associated real or complex hyperbolic manifold. When $\Gamma$ is discrete and torsion free, the associated manifold $M = \mathbf{H}_\mathbf{R}^n/\Gamma$ or $\mathbf{H}_\mathbf{C}^n/\Gamma$ is a $K(\Gamma,1)$--space for $\Gamma$ since $\mathbf{H}_\mathbf{R}^n,\mathbf{H}_\mathbf{C}^n$ are contractible. As a result, we can establish the cohomology isomorphisms for the spaces by establishing them for the cohomology of the associated lattices.
\section{Isomorphisms in group cohomology}
In this section, we record some basic results that relate the group cohomology of $\mathbf{Z}$--equivalent subgroups of finite and infinite groups. We refer the reader to \cite{Brown} for a more complete treatment of group cohomology.
Given a group $G$ and a subgroup $P \leq G$, we denote the restriction functor by $\Res^G_P$. Restriction has left and right adjoints given by the induction and co-induction functors $\Ind_P^G,\CoInd_P^G$. Explicitly, for a $\mathbf{Z}[P]$--module $A$, the underlying modules are $\Ind_P^G(A) =\mathbf{Z}[G] \otimes_{\mathbf{Z}[P]} A$ and $\CoInd_P^G(A) =\Hom_{\mathbf{Z}[P]}(\mathbf{Z}[G],A)$, with respective $G$ actions given by the $\mathbf{Z}$--linear extensions of $g\cdot (x\otimes a)=gx\otimes a$ and $(g\cdot \phi)(x)= \phi(xg)$.
We start with a pair of well known results.
\begin{lemma}\label{Sec:Coho:L1}
If $G$ is a group and $P \leq G$ is a finite index subgroup, then for every $\mathbf{Z}[P]$--module $A$, the $\mathbf{Z}[G]$--modules $\Ind_P^G(A)$ and $\CoInd_P^G(A)$ are isomorphic.
\end{lemma}
\begin{lemma}\label{Sec:Coho:L2}
If $G$ is a group, $P \leq G$ is of finite index, and $A$ is a $\mathbf{Z}[G]$--module, then $\CoInd_P^G(\Res_P^G(A))\cong A\otimes_{\mathbf{Z}} \mathbf{Z}[G/P]$.
\end{lemma}
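For concreteness, the isomorphism in Lemma \ref{Sec:Coho:L2} can be written down explicitly. The map below is a standard sketch; that it is well defined (independent of the choice of coset representatives) and $G$--equivariant follows from a direct check using the $\mathbf{Z}[P]$--linearity of $\varphi$:
\[ \CoInd_P^G(\Res_P^G(A)) \longrightarrow A\otimes_{\mathbf{Z}} \mathbf{Z}[G/P], \qquad \varphi \longmapsto \sum_{gP \in G/P} g\cdot \varphi(g^{-1})\otimes gP. \]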
We note that $P_{1},P_{2} \leq G$ are $\mathbf{Z}$--equivalent if and only if $\CoInd_{P_1}^G(\Res_{P_1}^G(1))$, $\CoInd_{P_2}^G(\Res_{P_2}^G(1))$ are isomorphic as $\mathbf{Z}[G]$--modules. Given a $\mathbf{Z}[G]$--module $A$, we say that a morphism $\psi_k\colon H^k(P_1,\Res_{P_1}^G(A)) \to H^k(P_2,\Res_{P_2}^G(A))$ is \textbf{compatible} if the diagram
\begin{equation}\label{Eq:Natural2}
\begin{tikzcd}
& H^k(G,A) \arrow[rd,"\Res", bend left = 30] \arrow[ld,"\Res"', bend right = 30] & \\ H^k(P_1,\Res_{P_1}^G(A)) \arrow[rr,"\psi_k"', bend right = 45] \arrow[ru, "\CoInd"', bend right = 30]& & H^k(P_2,\Res_{P_2}^G(A)) \arrow[lu,"\CoInd", bend left = 30]
\end{tikzcd}
\end{equation}
commutes.
\begin{lemma} \label{Sec:Coho:L3}
Let $G$ be a finite group and $P_{1}, P_{2} \leq G$ be $\mathbf{Z}$--equivalent subgroups. Then for any $\mathbf{Z}[G]$--module $A$ and any nonnegative integer $k$, there is a compatible isomorphism $H^{k}(P_{1},\Res_{P_1}^G(A))\to H^{k}(P_{2},\Res_{P_2}^G(A))$.
\end{lemma}
\begin{proof}
By Shapiro's lemma (see \cite[III.8]{Brown}), we have $H^{k}(P_{i},\Res_{P_i}^G(A))=H^{k}(G,\CoInd_{P_i}^G(\Res_{P_i}^G(A)))$. By Lemma \ref{Sec:Coho:L2}, the coefficients for the latter cohomology groups are $A \otimes_{\mathbf{Z}} \mathbf{Z}[G/P_i]$, viewed as $\mathbf{Z}[G]$--modules. Since $P_{1}$ and $P_{2}$ are $\mathbf{Z}$--equivalent, these coefficient modules are $\mathbf{Z}[G]$--isomorphic. Thus, the right hand side of the equality above is actually independent of $i$, providing the isomorphism as claimed. Compatibility follows from the naturality of the isomorphism in Shapiro's lemma. Specifically, upon choosing an isomorphism of the $\mathbf{Z}[G]$--modules $\mathbf{Z}[G/P_1]$ and $\mathbf{Z}[G/P_2]$, isomorphisms in cohomology groups
\[ H^k(P_1,\Res_{P_1}^G(A)) \to H^{k}(G,\CoInd_{P_1}^G(\Res_{P_1}^G(A))) \to H^{k}(G,\CoInd_{P_2}^G(\Res_{P_2}^G(A))) \to H^k(P_2,\Res_{P_2}^G(A)) \]
are induced by isomorphisms of coefficients.
\end{proof}
We now deduce a few corollaries of the above. First, we observe that if $\psi\colon \Gamma \to G$ is a surjective homomorphism and $\Gamma_i = \psi^{-1}(P_i)$ for $\mathbf{Z}$--equivalent subgroups $P_1,P_2 \leq G$, then $\Gamma_1,\Gamma_2 \leq \Gamma$ are also $\mathbf{Z}$--equivalent subgroups. In particular, via the argument used for Lemma \ref{Sec:Coho:L3}, we obtain the following lemma.
\begin{lemma}\label{Sec:Coho:L4}
Let $\psi\colon \Gamma \to G$ be a surjective homomorphism, $P_1,P_2 \leq G$ be $\mathbf{Z}$--equivalent subgroups, and $\Gamma_i = \psi^{-1}(P_i)$. Then for any $\mathbf{Z}[\Gamma]$--module $A$ and any nonnegative integer $k$, there is a compatible isomorphism $H^k(\Gamma_1,\Res_{\Gamma_1}^\Gamma(A)) \to H^k(\Gamma_2,\Res_{\Gamma_2}^\Gamma(A))$.
\end{lemma}
One case of Lemma \ref{Sec:Coho:L4} of particular interest is when $A$ is a trivial $\mathbf{Z}[\Gamma]$--module (i.e. the $\Gamma$--action is trivial).
\begin{cor}\label{Sec:Coho:C1}
Let $\psi\colon \Gamma \to G$ be a surjective homomorphism, $P_1,P_2 \leq G$ be $\mathbf{Z}$--equivalent subgroups, and $\Gamma_i = \psi^{-1}(P_i)$. Then for any trivial $\mathbf{Z}[\Gamma]$--module $A$ and any nonnegative integer $k$, there is a compatible isomorphism $H^k(\Gamma_1,A) \to H^k(\Gamma_2,A)$.
\end{cor}
We note that one deficiency of Lemma \ref{Sec:Coho:L4} is the requirement that our initial module $A$ be a $\mathbf{Z}[\Gamma]$--module. This prevents us from obtaining a bijection between the $\mathbf{Z}[\Gamma_1]$--modules and $\mathbf{Z}[\Gamma_2]$--modules in a way that induces compatible isomorphisms in group cohomology.
\section{Proof of Theorem \ref{Sec:Intro:T1}}
In this section, we prove Theorem \ref{Sec:Intro:T1}, Corollary \ref{Sec:Intro:C1}, and Corollary \ref{Sec:Intro:C2}.
\subsection{Algebraic construction}
Throughout this section, for each $r \in \mathbf{N}$, we will denote the free group of rank $r$ by $F_r$. The main goal of this section is the following construction of arbitrarily large families of finite index subgroups of certain lattices that are pairwise non-isomorphic and pairwise $\mathbf{Z}$--equivalent.
\begin{prop}\label{TechConProp}
Let $\mathrm{G}$ be a simple Lie group that is not isogenous to $\SL(2,\mathbf{R})$ and let $\Gamma \leq \mathrm{G}$ be a lattice that is large and non-arithmetic. Then for each $j\in \mathbf{N}$, there exist finite index subgroups $\Delta_1,\dots,\Delta_j \leq \Gamma$ such that
\begin{itemize}
\item[(a)]
The subgroups $\Delta_i$ are pairwise non-isomorphic.
\item[(b)]
The subgroups $\Delta_i$ are pairwise $\mathbf{Z}$--equivalent.
\end{itemize}
\end{prop}
We note that Proposition \ref{TechConProp} holds when $\mathrm{G}$ is isogenous to $\SL(2,\mathbf{R})$ but with (a) changed to the following:
\begin{itemize}
\item[(a')]
The subgroups $\Delta_i$ are pairwise non-conjugate in $\mathrm{G}$.
\end{itemize}
This subsection is devoted to the proof of Proposition \ref{TechConProp}. We start with a basic lemma on the size of the set $\mathrm{Hom}_{\mathrm{sur}}(F_r,Q)$ of surjective homomorphisms from a free group $F_r$ to a finite group $Q$.
\begin{lemma}\label{HomSizeLemma}
If $Q$ is a finite group that is minimally generated by $r_Q$ elements, then $\abs{\mathrm{Hom}_{\mathrm{sur}}(F_r,Q)} \geq \abs{Q}^{r-r_Q}$ for all $r \geq r_Q$.
\end{lemma}
\begin{proof}
Given $r \geq r_Q$, let $X_r = \set{x_1,\dots,x_r}$ and let $F_r = F(X_r)$ be the free group generated by $X_r$. We can view $F_{r_Q} \leq F_r$ by $F_{r_Q} = \innp{x_1,\dots,x_{r_Q}}$. Fixing $\varphi \in \mathrm{Hom}_{\textrm{sur}}(F_{r_Q},Q)$, for each $q_{r_Q+1},\dots,q_r \in Q$, we define $\Phi\colon F_r \to Q$ to be the unique homomorphism induced by the function $f\colon X_r \to Q$ given by
\[ f(x_j) = \begin{cases} \varphi(x_j), & j \leq r_Q, \\ q_j, & j > r_Q. \end{cases} \]
Since $\varphi$ is surjective, the homomorphisms $\Phi$ are surjective and distinct for distinct (as ordered tuples) choices of $q_{r_Q+1},\dots,q_r$. Hence $\abs{\mathrm{Hom}_{\textrm{sur}}(F_r,Q)} \geq \abs{Q}^{r-r_Q}$.
\end{proof}
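The counting in Lemma \ref{HomSizeLemma} can be sanity-checked by brute force for a small group. The snippet below is an illustrative sketch (not part of the argument) using $Q = S_3$, which has $r_Q = 2$, and $r = 3$: a homomorphism $F_r \to Q$ is an arbitrary $r$--tuple of elements of $Q$, and it is surjective exactly when the tuple generates $Q$.

```python
from itertools import product

# S_3 realized as permutations of {0,1,2}, encoded as tuples (i -> tup[i]).
S3 = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]

def compose(s, t):
    # (s * t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

def generated(gens):
    # Subgroup of S_3 generated by gens: close up under composition.
    elems = {(0, 1, 2)} | set(gens)
    frontier = list(elems)
    while frontier:
        g = frontier.pop()
        for h in list(elems):
            for k in (compose(g, h), compose(h, g)):
                if k not in elems:
                    elems.add(k)
                    frontier.append(k)
    return elems

# A homomorphism F_r -> Q is an arbitrary r-tuple of elements of Q;
# it is surjective iff the tuple generates Q.
r, r_Q = 3, 2  # S_3 is minimally generated by 2 elements
surjections = sum(1 for tup in product(S3, repeat=r) if len(generated(tup)) == 6)
print(surjections, len(S3) ** (r - r_Q))  # 168 and 6
```

Here $168 \geq 6^{3-2}$, comfortably above the bound; the bound is far from sharp since the proof only varies the images of the last $r - r_Q$ generators.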
We also require the following result of P.~Hall \cite{Hall}.
\begin{thm}\label{HallThm}
Let $Q$ be a non-abelian finite simple group and $\Gamma$ be a finitely generated group. If $\varphi_1,\dots,\varphi_m \in \mathrm{Hom}_{\mathrm{sur}}(\Gamma,Q)$ and $\varphi_i \ne \theta \circ \varphi_j$ for all $\theta \in \Aut(Q)$ and all $i \ne j$, then $\varphi_1 \times \dots \times \varphi_m\colon \Gamma \to Q^m$ is surjective.
\end{thm}
With all of the requisite material assembled, we now prove Proposition \ref{TechConProp}.
\begin{proof}[Proof of Proposition \ref{TechConProp}]
We begin by setting $\mathcal{X}_r(Q) \overset{\text{def}}{=} \mathrm{Hom}_{\textrm{sur}}(F_r,Q)/\Aut(Q)$ where the action of $\Aut(Q)$ on $\mathrm{Hom}_{\textrm{sur}}(F_r,Q)$ is given by post-composition. By Lemma \ref{HomSizeLemma}, we see that $\beta_{r,Q} = \abs{\mathcal{X}_r(Q)} \geq \alpha_Q^{-1}\abs{Q}^{r-r_Q}$ where $\alpha_Q = \abs{\Aut(Q)}$. For each equivalence class $x$ in $\mathcal{X}_r(Q)$, we fix a representative $\varphi_x \in \mathrm{Hom}_{\textrm{sur}}(F_r,Q)$. By Theorem \ref{HallThm}, we have a surjective homomorphism $\Phi_r\colon F_r \to Q^{\beta_{r,Q}}$ given by $\Phi_r = \prod_{x \in \mathcal{X}_r(Q)} \varphi_x$. Fixing $Q = \PSL(2,\mathbf{F}_{29})$ and setting $P_1,P_2 \leq Q$ to be the $\mathbf{Z}$--equivalent subgroups given by Scott \cite{scott}, for each $m \in \mathbf{N}$ and $z = (z_i) \in \set{1,2}^m$, we define $P_z \leq Q^m$ to be the subgroup $P_z \overset{\text{def}}{=} \prod_{i=1}^m P_{z_i}$. It follows that for any distinct $z,z' \in \set{1,2}^m$, the subgroups $P_z,P_{z'}$ are $\mathbf{Z}$--equivalent and non-conjugate in $Q^m$. In particular, $Q^m$ has $2^m$ pairwise non-conjugate, pairwise $\mathbf{Z}$--equivalent subgroups.
Now, given a large, non-arithmetic lattice $\Gamma \leq \mathrm{G}$ and $j \in \mathbf{N}$, we must find finite index subgroups $\Delta_1,\dots,\Delta_j \leq \Gamma$ that are pairwise non-isomorphic and pairwise $\mathbf{Z}$--equivalent. Since $\Gamma$ is non-arithmetic, combining Mostow--Prasad (see \cite{Mostow}, \cite{GPra}) and Margulis \cite[Thm 1, p. 2]{Margulis}, there exists a constant $C_\Gamma \in \mathbf{N}$ such that if $\Delta \leq \Gamma$ is a finite index subgroup, there are at most $C_\Gamma$ non-conjugate subgroups of $\Gamma$ that are isomorphic to $\Delta$ as an abstract group. Explicitly, $C_\Gamma = [\Comm(\Gamma):\Gamma]$ and so when $\Lambda \leq \Gamma$ is a finite index subgroup, we have $C_\Lambda = C_\Gamma [\Gamma:\Lambda]$. As $\Gamma$ is also large, there exists a finite index subgroup $\Gamma_2 \leq \Gamma$ and a surjective homomorphism $\psi\colon \Gamma_2 \to F_2$. Given any $r \geq 3$, there exists a subgroup $F_r \leq F_2$ of index $r-1$ such that $F_r$ is a free group of rank $r$. To see this, we first note that we have a surjective homomorphism $F_2 \to \mathbf{Z}$ given by sending $a \mapsto 1$ and $b \mapsto 0$, where $\set{a,b}$ is a free basis for $F_2$. We compose this surjection with the surjective homomorphism $\mathbf{Z} \to \mathbf{Z}/(r-1)\mathbf{Z}$ given by reduction modulo $r-1$. The kernel of the composite homomorphism $F_2 \to \mathbf{Z} \to \mathbf{Z}/(r-1)\mathbf{Z}$ has index $r-1$ in $F_2$. It follows by the Nielsen--Schreier theorem (see \cite[Thm 2.10]{MKS} for instance) that this subgroup of $F_2$ is free and of rank $r$. Setting $\Gamma_r = \psi^{-1}(F_r)$, we see that there exist subgroups $\Gamma_r \leq \Gamma_2 \leq \Gamma$ and surjective homomorphisms $\psi_r\colon \Gamma_r \to F_r$ with $[\Gamma_2:\Gamma_r] = r-1$. Now, for the given $j \in \mathbf{N}$, we select $r$ such that $2^{\beta_{r,Q}} \geq j(r-1)C_{\Gamma_2}$.
Note that this can be done since $\beta_{r,Q} \geq \alpha_Q^{-1}\abs{Q}^{r-2}$ grows exponentially as a function of $r$ whereas $(r-1)C_{\Gamma_2}$ only grows linearly as a function of $r$. By selection of $\Gamma_r$ and $r$, we have the surjective homomorphism
\[ \begin{tikzcd} \Gamma_r \arrow[twoheadrightarrow, rr,"\psi_r"', bend right = 45] \arrow[rrrr,twoheadrightarrow,"\mu_r", bend left = 45] & & F_r \arrow[rr, "\Phi_r"', twoheadrightarrow,bend right = 45] & & Q^{\beta_{r,Q}}. \end{tikzcd} \]
For each $z \in \set{1,2}^{\beta_{r,Q}}$, we define $\Delta_z = \mu_r^{-1}(P_z)$ and note that the subgroups $\Delta_z$ are pairwise non-conjugate in $\Gamma_r$ and are pairwise $\mathbf{Z}$--equivalent. There are $2^{\beta_{r,Q}}$ such subgroups, and for each $z$, at most $C_{\Gamma_r}$ subgroups from this list can be abstractly isomorphic to $\Delta_z$. As $C_{\Gamma_r} = (r-1)C_{\Gamma_2}$ and $2^{\beta_{r,Q}} \geq j(r-1)C_{\Gamma_2}$, there is a subset of these subgroups of size at least $j$ that are pairwise non-isomorphic.
\end{proof}
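The numerical bookkeeping in the last steps of the proof can be sketched in code. The constants fed to \texttt{minimal\_r} below are hypothetical placeholders, not the actual values for $\PSL(2,\mathbf{F}_{29})$; the point is only that the exponential lower bound on $2^{\beta_{r,Q}}$ eventually dominates the linear quantity $j(r-1)C_{\Gamma_2}$, and that the Nielsen--Schreier formula gives the index-$(r-1)$ subgroup $F_r \leq F_2$ of rank $r$.

```python
import math

def schreier_rank(ambient_rank, index):
    # Nielsen--Schreier: an index-i subgroup of the free group F_n
    # is free of rank 1 + i*(n - 1).
    return 1 + index * (ambient_rank - 1)

def minimal_r(j, Q_order, alpha_Q, r_Q, C):
    # Smallest r with 2^beta >= j*(r-1)*C, using the lower bound
    # beta >= |Q|^(r - r_Q)/alpha_Q; compare exponents via log2 so the
    # doubly exponential left-hand side never needs to be evaluated.
    r = r_Q + 1
    while (Q_order ** (r - r_Q)) / alpha_Q < math.log2(j * (r - 1) * C):
        r += 1
    return r

# The index-(r-1) subgroup F_r <= F_2 from the proof has rank r:
print(schreier_rank(2, 4))  # 5

# Toy parameters (hypothetical, for illustration only): the exponential
# bound overtakes the linear one almost immediately.
print(minimal_r(j=10, Q_order=60, alpha_Q=120, r_Q=2, C=7))  # 4
```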
\subsection{Completing the proof of Theorem \ref{Sec:Intro:T1}}
To prove Theorem \ref{Sec:Intro:T1} from Proposition \ref{TechConProp}, a few more words are required. As noted in the introduction, by work of Agol \cite[Thm 9.2]{Agol}, every closed hyperbolic $3$--manifold is large. In higher dimensions, using the construction of Gromov--Piatetski-Shapiro \cite{GPS}, there exist infinitely many commensurability classes of complete, finite volume hyperbolic $n$--manifolds that are both non-arithmetic and large. That there exist infinitely many commensurability classes of closed or complete, finite volume non-arithmetic hyperbolic $n$--manifolds for every $n$ follows directly from \cite{GPS}. That these examples are also large is well known. For the readers' sake, we briefly recall the construction of these manifolds with largeness in mind. First, we start with a pair of compact hyperbolic $n$--manifolds $M_1,M_2$ with connected, totally geodesic boundaries that are isometric. Gluing $M_1,M_2$ along the common boundaries $\partial M_1 = \partial M_2 = N$ produces a closed hyperbolic $n$--manifold $M$. By construction, $\pi_1(M) = \pi_1(M_1) \ast_{\pi_1(N)} \pi_1(M_2)$ and is large (see \cite[Thm 3.2]{Lub}). Lastly, using the construction of Deligne--Mostow \cite{DM}, there exist complete, finite volume complex hyperbolic 2--manifolds that are both non-arithmetic and large. As with the construction of \cite{GPS}, Deligne--Mostow do not explicitly state that the non-arithmetic lattices they construct are large. That some of these lattices are large follows from the fact that they have surjective homomorphisms to hyperbolic triangle groups; see \cite[Thm 3.1]{Der}, \cite{Kap}, and \cite[Thm 3.1]{Toledo}.
We can apply Proposition \ref{TechConProp} to any manifold $M$ in the above classes. We have opted to only write out the case when $M$ is a closed hyperbolic $n$--manifold as the complex hyperbolic setting is logically identical. Given $j \in \mathbf{N}$, $n \geq 3$, and a closed hyperbolic $n$--manifold $M$ which is non-arithmetic and large, we can apply Proposition \ref{TechConProp} with $\Gamma = \pi_1(M)$. We obtain $j$ pairwise non-isomorphic, finite index subgroups $\Delta_1,\dots,\Delta_j$ that are pairwise $\mathbf{Z}$--equivalent. By Corollary \ref{Sec:Coho:C1}, for any abelian group $A$ endowed with a trivial $\mathbf{Z}[\Gamma]$--module structure, we obtain compatible isomorphisms between the cohomology groups $H^k(\Delta_i,A)$ and $H^k(\Delta_{i'},A)$ for all $k$ and all $i,i'$. Since $M$ is aspherical, $M$ is a $K(\Gamma,1)$ for $\Gamma$. Setting $M_i$ to be the associated finite covers corresponding to $\Delta_i$, we see that $M_i$ is a $K(\Delta_i,1)$ for all $i$. In particular, we have that $H^k(M_i,A)$ and $H^k(\Delta_i,A)$ are compatibly isomorphic; the compatibility of the isomorphisms between $H^k(\Delta_i,A)$ and $H^k(\Delta_{i'},A)$ produces compatible isomorphisms between the cohomology groups $H^k(M_i,A)$ and $H^k(M_{i'},A)$. As the groups $\Delta_i,\Delta_{i'}$ are not isomorphic, by Mostow--Prasad rigidity (see \cite{Mostow}, \cite{GPra}) the manifolds $M_i,M_{i'}$ are not isometric. Taking $A = \mathbf{Z}$ produces (3) of Theorem \ref{Sec:Intro:T1}. The proof of Theorem \ref{Sec:Intro:T1} is completed by noting that $\mathbf{Z}$--equivalence implies $\mathbf{Q}$--equivalence and $\mathbf{Q}$--equivalence implies that the manifolds $M_i,M_{i'}$ satisfy (1) and (2) by \cite{sunada}.
\section{Proof of parts (1) and (2) of Theorem \ref{thm:cohXi} }
Given a commutative ring $R$, a finite group $G$, and a finite $G$--set $X$, let $R[X]=\Hom_{sets}(X,R)$. This defines a contravariant functor from finite $G$--sets to $R[G]$--modules; if $p\colon X\to Y$ is a $G$--map, let $p^*\colon R[Y]\to R[X]$ denote the corresponding homomorphism. When $p$ is onto, we have a homomorphism $p_*\colon R[X]\to R[Y]$ defined by $(p_*\phi)(y)=\sum_{x\in p^{-1}(y)} \phi(x)$. If $X=G$ and $Y=G/P$, then $(p_{*}\circ p^*)(\phi) = \abs{P}\phi$. It follows that $\mathbf{Q}[G/P]$ is a direct summand of $\mathbf{Q}[G]$, which can be identified with the left ideal $\mathbf{Q}[G]e_P$, where $e_P= \frac{1}{\abs{P}} \sum_{g\in P} g$ is the corresponding idempotent. It is convenient to normalize $p_*, p^*$ as follows. Instead of $p_*$, use the inclusion $\iota_P\colon \mathbf{Q}[G]e_P\to \mathbf{Q}[G]$, and replace $p^*$ by the projection $p_P(x)= xe_P$. Given $\mathbf{Q}$--equivalent subgroups $P_1,P_2\leq G$, set $e_i = e_{P_i}$, $\iota_i = \iota_{P_i}$, and $p_i = p_{P_i}$ for $i=1,2$. Since the $\mathbf{Q}[G/P_i]$ are summands of $\mathbf{Q}[G]$ as $\mathbf{Q}[G]$--modules, it follows that a $\mathbf{Q}[G]$--module isomorphism $f'\colon \mathbf{Q}[G/P_1]\to \mathbf{Q}[G/P_2]$ can be extended to a $\mathbf{Q}[G]$--module isomorphism $f\colon \mathbf{Q}[G]\to \mathbf{Q}[G]$ such that the diagram
\[ \begin{tikzcd} \mathbf{Q}[G]\arrow[dd, "f"', bend right =15] \arrow[rr,"p_1", bend left = 30] & & \mathbf{Q}[G/P_1]\arrow[dd, "f'", bend left =15] \arrow[ll, "\iota_1"', bend left = 30] \\ & & \\ \mathbf{Q}[G]\arrow[rr, "p_2"', bend left =30] & & \mathbf{Q}[G/P_2] \ar[ll, "\iota_2", bend left =30] \end{tikzcd} \]
commutes. By Skolem--Noether, the extension $f$ is necessarily right multiplication by an invertible element, which we will also denote by $f\in (\mathbf{Q}[G])^\times$. The commutativity implies that
\begin{equation}\label{eq:1}
e_2 = f^{-1} e_1 f.
\end{equation}
We record this fact.
\begin{lemma}\label{lemma:1}
$(G,P_1,P_2)$ is a Gassman triple if and only if there exists $f\in (\mathbf{Q}[G])^\times$ such that \eqref{eq:1} holds.
\end{lemma}
The converse above is clear. If $f\in G$, then \eqref{eq:1} says that the $P_i$ are conjugate. Thus the Gassman condition is a weakening of conjugacy. Note that there are plenty of invertible elements of $\mathbf{Q}[G]$ which do not come from $G$. To see this, observe that by Artin--Wedderburn, $\overline{\mathbf{Q}}[G]$ is a product of matrix algebras. An element $f\in \mathbf{Q}[G]$ is invertible if and only if the components of $f\otimes \overline{\mathbf{Q}}$ are invertible as matrices.
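The identity \eqref{eq:1} can be sanity-checked computationally. Genuinely non-inner instances require the large groups where Gassman triples live, so the snippet below (an illustrative sketch only) verifies the conjugate case $f = g \in G$ in the group algebra $\mathbf{Q}[S_3]$, together with the idempotency of $e_P$.

```python
from fractions import Fraction
from itertools import product

# S_3 as permutation tuples; a group-algebra element of Q[S_3] is a
# dict mapping group elements to rational coefficients.
def compose(s, t):
    return tuple(s[t[i]] for i in range(3))

def inverse(s):
    inv = [0, 0, 0]
    for i, v in enumerate(s):
        inv[v] = i
    return tuple(inv)

def alg_mult(a, b):
    # Multiplication in the group algebra Q[S_3].
    out = {}
    for (g, cg), (h, ch) in product(a.items(), b.items()):
        k = compose(g, h)
        out[k] = out.get(k, Fraction(0)) + cg * ch
    return {k: v for k, v in out.items() if v != 0}

def idempotent(P):
    # e_P = (1/|P|) * sum of the elements of P.
    return {g: Fraction(1, len(P)) for g in P}

P1 = [(0, 1, 2), (1, 0, 2)]  # the order-2 subgroup generated by (0 1)
g = (0, 2, 1)                # the transposition (1 2)
P2 = [compose(inverse(g), compose(p, g)) for p in P1]  # g^{-1} P1 g

e1, e2 = idempotent(P1), idempotent(P2)
g_alg, ginv_alg = {g: Fraction(1)}, {inverse(g): Fraction(1)}

print(alg_mult(e1, e1) == e1)                         # True: e_P is idempotent
print(alg_mult(alg_mult(ginv_alg, e1), g_alg) == e2)  # True: e_2 = g^{-1} e_1 g
```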
We now prove the first two parts of Theorem \ref{thm:cohXi}. The remaining part will be proved in the next section. Recall that we are given $Y \in \Var_K$, where $\mathrm{char}(K) = 0$, $p\colon X \to Y$ is a Galois \'etale cover with Galois group $G$ and $X_i = X/P_i$, where $(G,P_1,P_2)$ is an $R$--Gassman triple. These fit into a diagram
\begin{equation}\label{eq:diamond}
\begin{tikzcd}
& X\arrow[ld, "p_{X,1}"', bend right = 20, twoheadrightarrow,] \arrow[rd, twoheadrightarrow, "p_{X,2}", bend left = 20] \arrow[dd, "p", twoheadrightarrow] & \\ X_1 \arrow[rd, "p_{1,Y}"', bend right = 20, twoheadrightarrow] & & X_2\arrow[ld, "p_{2,Y}", bend left = 20, twoheadrightarrow] \\ & Y &
\end{tikzcd}
\end{equation}
Our goal is to show that the cohomology groups $H^k(X_1,R)$ and $H^k(X_2,R)$ are isomorphic as Hodge structures or Galois modules.
We start with a rather simple proof of part (1) for rational coefficients.
\begin{proof}[First proof of Theorem \ref{thm:cohXi} (1) when $R=\mathbf{Q}$.]
In this case, pullback $p_{X,i}^*$ gives an isomorphism of vector spaces $H^k(X_i(\mathbf{C}),\mathbf{Q}) \cong H^k(X(\mathbf{C}),\mathbf{Q})^{P_i}$, with inverse given by the normalized transfer $\frac{1}{|P_i|}(p_{X,i})_*$. Fixing a K\"ahler metric on $Y$, we endow the manifolds $X_1,X_2$ and $X$ with the pullback of this K\"ahler metric. The rational Hodge structures on these spaces are given by the standard Hodge-de Rham isomorphism between $H^k(X_i(\mathbf{C}),\mathbf{Q})\otimes \mathbf{C}$ and the space of harmonic $k$--forms on $X_i$ in tandem with the decomposition of the latter into $(p,q)$--parts. As this data is compatible under pullback, we see that $H^k(X_i(\mathbf{C}),\mathbf{Q}) \cong H^k(X(\mathbf{C}),\mathbf{Q})^{P_i}$ as Hodge structures. Applying Lemma \ref{lemma:1}, we deduce $H^k(X(\mathbf{C}),\mathbf{Q})^{P_1}\cong H^k(X(\mathbf{C}),\mathbf{Q})^{P_2}$ as Hodge structures.
\end{proof}
The above strategy fails for integer coefficients because we cannot identify $H^k(X_i,\mathbf{Z})$ with $H^k(X,\mathbf{Z})^{P_i}$. So instead, we push the coefficients down to $Y$.
\begin{proof}[Proof of Theorem \ref{thm:cohXi} (1) and (2)]
Suppose that $K=\mathbf{C}$. By covering space theory, $p\colon X\to Y$ corresponds to a surjective homomorphism $\rho\colon \pi_1(Y(\mathbf{C}))\to G$. Through $\rho$, any $R[G]$--module gives rise to a local system of $R$--modules on $Y$. The local systems corresponding to the $R[G]$--modules $R[G/P_i]$ are precisely the sheaves $(p_{i,Y})_*(R)$. It follows that $(p_{1,Y})_*(R)\cong (p_{2,Y})_*(R)$. Hence $H^k(Y(\mathbf{C}),(p_{1,Y})_*(R))\cong H^k(Y(\mathbf{C}),(p_{2,Y})_*(R))$. Since the maps $p_{i,Y}$ are finite-sheeted covers, the Leray spectral sequences collapse to give isomorphisms
\begin{equation}\label{eq:HkXi}
H^k(X_i(\mathbf{C}),R)\cong H^k(Y(\mathbf{C}),(p_{i,Y})_*(R)).
\end{equation}
Now suppose that $R=\mathbf{Z}$ or $\mathbf{Q}$. Using the language of variations of Hodge structure (see \cite[\S 1-2]{zucker} for the relevant facts), the argument goes as follows. The local systems $(p_{i,Y})_*(R)$ can be regarded as variations of Hodge structures of type $(0,0)$ in a natural way. Consequently, the cohomology groups carry Hodge structures, and the isomorphisms \eqref{eq:HkXi} are compatible with these. In more explicit terms, if $V_i$ denotes the unitary flat bundle associated to $(p_{i,Y})_*(R)\otimes \mathbf{C}$, the Hodge structures result from the lattices $H^k(Y(\mathbf{C}),(p_{i,Y})_*(R))$ together with the isomorphisms of $H^k(Y(\mathbf{C}),(p_{i,Y})_*(R))\otimes \mathbf{C}$ to the spaces of $V_i$--valued harmonic $k$--forms, plus the $(p,q)$ decompositions of the latter. This proves (1).
The proof of (2) is formally identical, except that one works with the corresponding \'etale notions \cite{deligne, milne}. Let us assume that $R=\mathbf{Z}_p$ as
the argument for $\mathbf{Q}_p$ is the same. \'Etale covers of $Y$ are classified by open subgroups of the \'etale fundamental group $\pi_1^{et}(Y)$, which is an extension of $\Gal(\overline{K}/K)$ by the profinite completion of $\pi_1(Y(\mathbf{C}))$; this depends on the choice of a base point. In particular, $X$ corresponds to a surjective continuous homomorphism $\rho\colon \pi^{et}_1(Y)\to G$. The local systems (more precisely lisse sheaves, see \cite[Rapport]{deligne}) $(p_{i,Y})_*(\mathbf{Z}_p)$ correspond to the representations of the \'etale fundamental group $\pi^{et}_1(Y)\to \mathbf{Z}_p[G/P_i]$ defined as above, and these are isomorphic. The cohomology of these sheaves comes with canonical Galois actions, and we have isomorphisms $H_{et}^k(X_{i,\overline{K}},\mathbf{Z}_p)\cong H_{et}^k(Y_{\overline{K}},(p_{i,Y})_*(\mathbf{Z}_p))$ compatible with Galois actions. (This is discussed in \cite[Rapport, \S1.2-1.4]{deligne} when $K$ is a finite field, but the same reasoning applies here.) This proves (2).
\end{proof}
\begin{rem}
If the varieties in \eqref{eq:diamond} are replaced by $\mathbf{Z}$--equivalent manifolds $X_1$ and $X_2$, the same argument as above shows that
$H^k(X_1,\mathbf{Z})\cong H^k(X_2,\mathbf{Z})$. This answers Question 2 in Prasad \cite{prasad}. For aspherical manifolds, this also follows from
Corollary \ref{Sec:Coho:C1}.
\end{rem}
\section{Motives}\label{Sec:motives}
An additive category $C$ is called \textbf{pseudo-abelian} if every idempotent (i.e.~$p^2=p$) morphism $p\colon V\to V$ has a kernel and $V\cong \ker (p)\oplus \ker(1-p)$. The image of an idempotent $p$ also exists, and is given by $p(V)= \ker(1-p)$. Fixing a pseudo-abelian $\mathbf{Q}$--linear category $C$ and object $V$ on which a finite group $G$ acts by automorphisms, we have a homomorphism $\mathbf{Q}[G]\to \End_C(V)$ of algebras. Given a subgroup $P\leq G$, we define $V^P\subset V$ to be the image of the idempotent $e_P= \frac{1}{\abs{P}} \sum_{g\in P} g$.
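As a quick sanity check of the averaging element, the following toy computation (not from the paper; the group $S_3$ and subgroup $P$ generated by a transposition are chosen purely for illustration) verifies that $e_P=\frac{1}{\abs{P}}\sum_{g\in P}g$ really is idempotent in the group algebra $\mathbf{Q}[G]$:

```python
from fractions import Fraction

# Permutations of {0,1,2} stored as tuples of images.
def compose(g, h):
    # (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(len(h)))

def alg_mul(x, y):
    # Multiply two group-algebra elements given as {permutation: coefficient}.
    out = {}
    for g, cg in x.items():
        for h, ch in y.items():
            k = compose(g, h)
            out[k] = out.get(k, Fraction(0)) + cg * ch
    return {k: v for k, v in out.items() if v != 0}

identity = (0, 1, 2)
swap = (1, 0, 2)                      # the transposition exchanging 0 and 1
P = [identity, swap]                  # subgroup of order 2 in S_3
e_P = {g: Fraction(1, len(P)) for g in P}

assert alg_mul(e_P, e_P) == e_P       # e_P is idempotent: e_P^2 = e_P
```

The same dictionary representation works for any finite permutation group, so the check extends directly to larger subgroups $P\leq G$.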
\begin{lemma}\label{lemma:VPi}
If $V$ is as above and $P_1, P_2 \leq G$ are $\mathbf{Q}$--equivalent subgroups, then $V^{P_1}\cong V^{P_2}$.
\end{lemma}
\begin{proof}
Let $f\in \mathbf{Q}[G]$ be as in Lemma \ref{lemma:1}; then $f\colon V^{P_2} \to V^{P_1}$ is an isomorphism.
\end{proof}
Let $\Var_K$ denote the category of smooth projective varieties over a field $K$ and $\CH^*(X)$ denote the Chow ring of cycles modulo rational equivalence tensored with $\mathbf{Q}$ (see \cite{fulton} for instance). We can form the category $\Cor_K$ of (degree $0$) correspondences: the objects are those of $\Var_K$, and $\Cor(X,Y) = \CH^d(X\times Y)$, where $d=\dim X$ (more details can be found in \cite{fulton, kleiman, scholl}). The category of effective Chow motives $\Mot^{eff}_K$ is the pseudo-abelian completion of the previous category. More concretely, an object of $\Mot^{eff}_K$ is given by a pair $(X,e)$,
where $e\in \Cor(X,X)$ is an idempotent. Morphisms are given by
\[ \Hom_{\Mot^{eff}}((X,e), (X',e'))= \frac{\{ f\in \Cor(X,X')\mid f\circ e= e'\circ f\}}{\{f\mid f\circ e=e'\circ f=0\}}. \]
Set $M(X) = (X,\mathrm{id})$, which is the \textbf{motive associated to $X$}, and $(X,e)= e(M(X))$. Suppose that a finite group $G$ acts on $X\in \Var_K$. Then we can embed $\mathbf{Q}[G]\subset \Cor(X,X)$ by sending $g$ to the graph of the corresponding automorphism of $X$.
\begin{lemma}
Suppose that $Y\in \Var_K$ and $p\colon X\to Y$ is a Galois \'etale cover with Galois group $G$. Then $[Y]\cong (X,e_G)$, where $e_G=\frac{1}{|G|}\sum_{g\in G} g$.
\end{lemma}
\begin{proof}
The graph of $p$ defines an element of $\Hom_{\Mot^{eff}}((X,e_G), Y)$ that we must show is an isomorphism. By Manin's identity principle \cite{scholl}, it is enough to check that $\CH^*((X,e_G)\otimes Z)\to \CH^*(Y\otimes Z)$ is an isomorphism for every $Z\in \Var_K$. This map is $\CH^*(X\otimes Z)^G\to \CH^*(Y\otimes Z)$, which is an isomorphism by \cite[1.7.6]{fulton}.
\end{proof}
\begin{cor}\label{Sec:Motive:C1}
$M(Y)\cong e_G(M(X))=M(X)^G$.
\end{cor}
The next result will complete the proof of Theorem \ref{thm:cohXi}. Recall, we are given a $\mathbf{Q}$--Gassman triple $(G,P_1,P_2)$, a $G$--\'etale cover $X\to Y$ with $X_i = X/P_i$ and $Y \in \Var_K$ where $\mathrm{char}(K) = 0$.
\begin{prop}\label{prop:motivesX1X2}
$M(X_1)\cong M(X_2)$ in $\Mot^{eff}_K$.
\end{prop}
\begin{proof}
By Lemma \ref{lemma:VPi} and Corollary \ref{Sec:Motive:C1}, we have $M(X_1) \cong M(X)^{P_1}\cong M(X)^{P_2}\cong M(X_2)$.
\end{proof}
The category of motives $\Mot_K$ is obtained by inverting the so-called Lefschetz object in $\Mot_K^{eff}$, cf.\ \cite{kleiman} (in \cite{andre, scholl}, $\Mot_K$ is constructed from $\Cor_K$ in one step).
\begin{cor}
The motives of $X_1$ and $X_2$ in $\Mot_K$ are isomorphic.
\end{cor}
\begin{rmk}
Since $H^*(X(\mathbf{C}),\mathbf{Q})$ and $H_{et}^*(X,\mathbf{Q}_p)$ depend on the underlying motives, we recover Theorem \ref{thm:cohXi} (1) and (2) for these coefficients, although the previous arguments were more direct.
\end{rmk}
We now prove Theorem \ref{prop:ZequivVar}. Recall that this says that there are arbitrarily large collections of projective surfaces over $\overline{\mathbf{Q}}$ with distinct fundamental groups (with respect to a fixed embedding $\overline{\mathbf{Q}}\subset \mathbf{C}$) but isomorphic motives.
\begin{proof}[Proof of Theorem \ref{prop:ZequivVar}]
From Proposition \ref{TechConProp}, we deduce that there are $j$ pairwise non-isomorphic $\mathbf{Z}$--equivalent cocompact torsion-free lattices $\Delta_i\leq \SU(2,1)$. These act on $\mathbf{H}_\mathbf{C}^2$, which can be identified with the complex $2$--ball $B\subset \mathbf{C}^2$. Setting $X_i = B/\Delta_i$, we note that these spaces are projective algebraic by Kodaira's embedding theorem \cite[pp 219--220]{wells}. Each $X_i$ is also rigid by Calabi--Vesentini \cite{CalabiVesentini} and hence defined over $\overline{\mathbf{Q}}$. By construction $\pi_1(X_i)=\Delta_i\ncong \Delta_{i'}=\pi_1(X_{i'})$ when $i\not= i'$. The remaining properties follow from Theorem \ref{thm:cohXi}.
\end{proof}
Let $K_0(\Var_K)$ denote the Grothendieck ring of $K$--varieties. When $\mathrm{char}(K)=0$, a nice presentation was given by Bittner \cite{bittner}: The generators are isomorphism classes $[X]$ of smooth projective varieties, and $[\Bl_ZX] - [E]= [X]-[Z]$ holds whenever $\Bl_ZX$ is the blow up of $X$ along a smooth subvariety $Z\subset X$ with exceptional divisor $E$. Using this presentation together with the formulas in \cite[pp 77--78]{kleiman}, we get a surjective ring homomorphism
\[ \chi_m^{eff}\colon K_0(\Var_{\overline{\mathbf{Q}}})\longrightarrow K_0(\Mot_{\overline{\mathbf{Q}}}^{eff}) \]
sending $[X]\mapsto [M(X)]$. This can be thought of as the motivic Euler characteristic. It is natural to ask whether this is an isomorphism; in some form this question goes back to Grothendieck \cite[p 174]{gs}. The right side is a $\mathbf{Q}$--algebra because $\Mot_{\overline{\mathbf{Q}}}^{eff}$ is $\mathbf{Q}$--linear. Therefore we have an induced homomorphism
\[ \chi_m^{eff}\otimes \mathbf{Q}\colon K_0(\Var_{\overline{\mathbf{Q}}})\otimes \mathbf{Q}\longrightarrow K_0(\Mot_{\overline{\mathbf{Q}}}^{eff}). \]
\begin{thm}\label{Sec:Motive:NonInject}
The homomorphism $\chi_m^{eff}\otimes \mathbf{Q}$ is not injective.
\end{thm}
Before proving Theorem \ref{Sec:Motive:NonInject}, we require the following lemma.
Let $\Grp$ be the set of isomorphism classes of finitely generated groups. This becomes a commutative
monoid under the operation $[G_1][G_2]= [G_1\times G_2]$.
Let $\mathbf{Q}[\Grp]$ denote the monoid algebra associated to $\Grp$.
\begin{lemma}\label{Sec:Motive:L-end}
There is a ring homomorphism $K_0(\Var_{\overline{\mathbf{Q}}})\otimes \mathbf{Q}\to \mathbf{Q}[\Grp]$ which sends $[X]\mapsto [\pi_1(X(\mathbf{C}))]$.
\end{lemma}
\begin{proof}
Two varieties $X_1,X_2$ are stably birational if $X_1\times \PP^n$ is birational to $X_2\times \PP^m$ for some $n,m$. Let $\SB_{\overline{\mathbf{Q}}}$ denote the set of stable birational classes of smooth projective varieties defined over $\overline{\mathbf{Q}}$. Products of varieties make this into a commutative monoid. By a theorem of Larsen--Lunts \cite{larslunts}, there exists a homomorphism $\lambda\colon K_0(\Var_{\overline{\mathbf{Q}}})\otimes \mathbf{Q}\to \mathbf{Q}[\SB_{\overline{\mathbf{Q}}}]$ sending $[X]$ to the stable birational class of $X$. Since stably birationally equivalent smooth projective varieties have isomorphic fundamental groups, $X\mapsto \pi_1(X(\mathbf{C}))$ induces a homomorphism of monoids $\SB_{\overline{\mathbf{Q}}}\to \Grp$ and hence of rings $\mathbf{Q}[\SB_{\overline{\mathbf{Q}}}]\to \mathbf{Q}[\Grp]$.
Compose this with $\lambda$ to get the desired homomorphism.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Sec:Motive:NonInject}]
Taking $X_i$ as in Theorem \ref{prop:ZequivVar}, we see that $[X_1]-[X_2]$ lies in the kernel and is non-zero by Lemma \ref{Sec:Motive:L-end}.
\end{proof}
\begin{cor}
The composition $\chi_m\otimes \mathbf{Q}\colon K_0(\Var_{\overline{\mathbf{Q}}})\otimes \mathbf{Q}\to K_0(\Mot_{\overline{\mathbf{Q}}})$ is also not injective.
\end{cor}
This statement can also be deduced from work of Borisov \cite{borisov}, who
shows that the Lefschetz class $\mathbb{L} = [\PP^1]-[\mathrm{pt}]\in K_0(\Var_{\overline{\mathbf{Q}}})$ is a zero divisor. Elements annihilated by $\mathbb{L}$ must lie
in the kernel of $\chi_m$ because $\chi_m(\mathbb{L})$ is invertible.
\section{Introduction}
Flavour physics, i.e.\ the study of weak processes which change quark flavour, could give access to potential new physics beyond the Standard Model. These flavour changes are described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix \cite{Cabibbo:1963yz,Kobayashi:1973fv}. Heavy-light semileptonic processes, like $D \rightarrow \pi \ell \nu$,
give access to its elements ($|V_{cd}|$ in the process mentioned). Currently, lattice QCD results are in agreement with the Standard Model prediction of a unitary CKM matrix, but a further reduction in error could potentially lead to new physics. Another motivation for the study of heavy-light semileptonic processes involving $B$ mesons are the R-ratios
\begin{align}
R(D^{(*)}) = \frac{\mathcal{B}(B \rightarrow D^{(*)} \tau \nu_\tau)}{\mathcal{B}(B \rightarrow D^{(*)} \ell \nu_\ell)}
\, , \qquad
\ell = e,\mu
\, ,
\end{align}
where currently a tension \cite{Amhis:2019ckw} in lepton flavour universality between experiment and theory is observed, giving rise to the need for a clear first-principles determination of these ratios.
The main observables computed on the lattice for the study of heavy-light semileptonic processes are the three-point functions
\begin{align}
C_3(\Delta T = t_\mathrm{snk} - t_\mathrm{src},t) = \langle \Gamma_\mathrm{snk} D_{q_\mathrm{f}}^{-1}(t_\mathrm{snk},t) \Gamma_\mathrm{op} D_{q_\mathrm{i}}^{-1}(t,t_\mathrm{src}) \Gamma_\mathrm{src} D_{q_\mathrm{spec}}^{-1}(t_\mathrm{src},t_\mathrm{snk}) \rangle
\, ,
\end{align}
shown diagrammatically in Figure \ref{figure:hl-semi}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.34\textwidth]{hl-semi.pdf}
\caption{Diagram of the tree-level three-point function of semileptonic decays. For $D \rightarrow \pi \ell \nu$, $q_\mathrm{spec}=l$, $q_\mathrm{i}=h$, $q_\mathrm{f}=l$, where $h$ denotes a heavy quark and $l$ denotes a light quark.}
\label{figure:hl-semi}
\end{figure}
They suffer from a bad signal-to-noise ratio, rendering their computation a challenging task. Advanced numerical methods and algorithms are needed to tackle the computation of heavy-light semileptonic form factors effectively. We present a feasibility study which uses distillation with LapH smearing \cite{Peardon:2009gh,Morningstar:2011ka} to estimate the relevant correlation functions and compare it to sequential propagator inversion, as used in RBC/UKQCD's heavy-light semi-leptonic form factor calculations using the relativistic heavy quark action \cite{Flynn:2019any}. Both approaches use the highly optimised lattice QCD code Grid \cite{Boyle:2016lbp}, together with Hadrons \cite{Portelli:Hadrons}.
\section{Computation of three-point functions}
The dominating contributions to the weak decay in these three-point functions $C_3$ (Figure \ref{figure:hl-semi}) are short-distance, so we can treat this operator as point-like. Experimentally, the region around $q^2=0$ is known most precisely. In our lattice QCD calculation, $q^2 = (E_D-E_\pi)^2 - (\mathbf{p}_D - \mathbf{p}_\pi)^2$, with the energies $E_D,E_\pi$ and momenta $\mathbf{p}_D ,\mathbf{p}_\pi$ of the $D$ meson and pion, is larger than zero for small momenta. By choosing different momenta for the two particles, we can map out the $q^2$ region and approach or extrapolate towards $q^2=0$; because of the much smaller rest mass of the pion, it is beneficial to keep $\mathbf{p}_D=\mathbf{0}$ and to vary $\mathbf{p}_\pi$.
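The kinematics can be sketched numerically. The snippet below (illustrative only: it uses the continuum dispersion relation $E=\sqrt{m^2+\mathbf{p}^2}$ rather than the lattice one, and rough lattice-unit masses) scans $q^2$ over the pion momentum frames with $\mathbf{p}_D=\mathbf{0}$:

```python
import numpy as np

L = 24                      # spatial extent in lattice sites
am_D, am_pi = 0.9966, 0.18  # rough lattice-unit masses, for illustration

def energy(am, n_sq):
    # continuum dispersion relation in lattice units, p = 2*pi*n/L
    ap_sq = (2 * np.pi / L) ** 2 * n_sq
    return np.sqrt(am ** 2 + ap_sq)

n_sq = np.arange(0, 5)                       # pion momentum frames n^2 = 0..4
aE_pi = energy(am_pi, n_sq)
q_sq = (am_D - aE_pi) ** 2 - (2 * np.pi / L) ** 2 * n_sq

# q^2 is positive at p_pi = 0 and decreases as |p_pi| grows, so increasing
# the pion momentum moves the kinematics towards q^2 = 0.
assert q_sq[0] > 0
assert np.all(np.diff(q_sq) < 0)
```

With these placeholder masses, $q^2$ changes sign between $\mathbf{n}_\pi^2=3$ and $4$, illustrating why varying $\mathbf{p}_\pi$ alone suffices to approach $q^2=0$.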
A straightforward way to compute $C_3$ is to compute $D^{-1}_{q_\mathrm{spec}}$ first and then to sequentially invert on this propagator at $t_\mathrm{snk}$ with a $q_\mathrm{f}$ quark. This sequential inversion needs to be done for each $\Gamma_\mathrm{snk}, \mathbf{p}_\mathrm{snk}, \Delta T$. A sketch of this technique is shown in the left panel of Figure \ref{figure:hl-semi-z2}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{hl-semi-z2.pdf} \vline \vspace{5mm} \includegraphics[width=0.35\textwidth]{hl-laph.pdf}
\caption{\textbf{Left panel:} Diagrammatic visualisation of the standard strategy to compute the three-point functions using a sequential solve on the $q_\mathrm{spec}$ propagator for each $\Gamma_\mathrm{snk}, \mathbf{p}_\mathrm{snk}, \Delta T$. \textbf{Right panel:} Diagrammatic sketch of $C_3$ evaluated using distillation. All operators $\Gamma$ as well as momenta $\mathbf{p}$ at all three positions as well as the distance $\Delta T$ can be chosen during the final (cheap) contraction.}
\label{figure:hl-semi-z2}
\end{figure}
The choice of $\Delta T$ has to be a compromise: the signal suffers from a bad signal-to-noise ratio for large $\Delta T$, but for smaller $\Delta T$ we cannot isolate the ground state; a suitable quark smearing technique might help to obtain a good ground-state signal already for a smaller $\Delta T$.
One further difficulty is to obtain the overlap factor between a momentum-carrying meson and the vacuum, for which we need two-point functions with non-zero momentum built from a $Z_2$-wall source $\mathcal{Z}_2(x)$. To achieve this we create phased momentum sources
\begin{align}
\mathcal{Z}_2^\mathbf{p}(x) = \mathcal{Z}_2(x) \times e^{i\mathbf{p}\cdot\mathbf{x}}
\, .
\end{align}
A propagator computed from a phased source $\mathcal{Z}_2^\mathbf{p}(x)$ combined with a second propagator from the non-phased source $\mathcal{Z}_2(x)$ then gives the desired correlation function.
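A minimal sketch of this phasing on a toy one-dimensional lattice (real $\pm1$ noise is assumed here purely for illustration) shows that the phase factor preserves the unit magnitude of the $Z_2$ noise while injecting a net momentum:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 24
x = np.arange(L)
z2 = rng.choice([-1.0, 1.0], size=L)     # Z2 wall source on one time slice
n = 1
p = 2 * np.pi * n / L
z2_p = z2 * np.exp(1j * p * x)           # phased momentum source Z2^p(x)

assert np.allclose(np.abs(z2_p), 1.0)    # phasing preserves |Z2(x)| = 1
# combining the phased and unphased sources leaves a pure momentum phase:
assert np.isclose((z2_p * z2).sum(), np.exp(1j * p * x).sum())
```

The second assertion uses $\mathcal{Z}_2(x)^2=1$, which is why the product of a phased and an unphased propagator yields the momentum-projected correlation function.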
The idea of this work is to compare this traditional method to estimating $C_3$ using Distillation with LapH smearing \cite{Peardon:2009gh,Morningstar:2011ka}. This technique is based on a hermitian smearing matrix $S=\sum_{k=1}^{N_\mathrm{vec}} V_{k} V_{k}^\dagger$ constructed from $N_\mathrm{vec}$ low modes $V_{k}(x,t)$ of the 3D lattice Laplacian. Sources are created by applying dilution projectors \cite{Wilcox:1999ab,Foley:2005ac} onto these low modes, leading to the definitions
\begin{align}
\varrho^{[d]} = V P^{[d]} \eta\, , \phantom{aaa}
\phi^{[d]} = D^{-1} \varrho^{[d]}\, , \phantom{aaa}
\tau^{[d]} = V^\dagger \phi^{[d]}\, , \phantom{aaa}
\varphi^{[d]} = V \tau^{[d]}
\, ,
\end{align}
with random noise vectors $\eta$\footnote{We use the notation developed for stochastic distillation throughout this paper, but we employ exact distillation, which is recovered by using full dilution and setting the noise vectors to $\eta=1$.}, dilution projectors $P^{[d]}$, the diluted LapH source vectors $\varrho^{[d]}$ and sink vectors $\varphi^{[d]}$, the unsmeared sinks $\phi^{[d]}$ \cite{Mastropas:2014fsa}, which can be used to define local currents, and the (stochastic) perambulators $\tau^{[d]}$ which are non-lattice sized (and therefore cheap to store) objects.
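The key algebraic property of the LapH construction is that $S=VV^\dagger$ built from orthonormal low modes is a Hermitian projector. The following toy check (a random Hermitian matrix stands in for the 3D lattice Laplacian; sizes are arbitrary) verifies this:

```python
import numpy as np

rng = np.random.default_rng(1)
N, N_vec = 32, 6
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
lap = A + A.conj().T                    # Hermitian stand-in "Laplacian"
evals, evecs = np.linalg.eigh(lap)      # eigh returns orthonormal eigenvectors
V = evecs[:, :N_vec]                    # keep the N_vec lowest modes
S = V @ V.conj().T                      # LapH smearing matrix

assert np.allclose(S, S.conj().T)       # S is Hermitian
assert np.allclose(S @ S, S)            # S is a projector: S^2 = S
assert np.allclose(S @ V, V)            # S acts as the identity on the modes
```

The projector property is what makes repeated smearing idempotent and lets the perambulators $\tau^{[d]}=V^\dagger\phi^{[d]}$ capture the full smeared-to-smeared propagation in a small matrix.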
Using these definitions, the three-point function can be evaluated in the LapH framework by inserting the LapH-smearing matrices at the source and the sink, but not at the current insertion, which we require to be local. This can be straightforwardly evaluated to
\begin{align}
\nonumber C_3 &= \langle \phi_{t_\mathrm{src}}^\dagger(t) \Gamma_\mathrm{op} \bar{\phi}_{t_\mathrm{snk}}(t) \varrho^\dagger(t_\mathrm{snk}) \Gamma_\mathrm{snk} \varphi_{t_\mathrm{src}}(t_\mathrm{snk}) \varrho^\dagger(t_\mathrm{src}) \Gamma_\mathrm{src} \bar{\varrho}(t_\mathrm{src}) \rangle \\
& = M_{\Gamma_\mathrm{op}}(\bar{\phi}_{t_\mathrm{snk}},\phi_{t_\mathrm{src}},t) M_{\Gamma_\mathrm{snk}}(\varphi_{t_\mathrm{src}},\varrho,t_\mathrm{snk}) M_{\Gamma_\mathrm{src}}(\bar{\varrho},\varrho,t_\mathrm{src})^*
\, ,
\label{eqn:3pt-dist}
\end{align}
\vspace{-1cm}
\begin{align}
M^{[d_1,d_2]}_\Gamma(\varphi,\varrho,t,\mathbf{p}) = \sum_{\mathbf{x}} e^{-i\mathbf{p} \cdot \mathbf{x}} \varphi^{[d_1]*}(t) \Gamma \varrho^{[d_2]}(t)
\, ,
\label{eqn:mf}
\end{align}
where $ \bar{\varrho} = \gamma_5 \varrho , \bar{\varphi} = \gamma_5 \varphi, \bar{\phi} = \gamma_5 \phi$, which arise when using $\gamma_5$ hermiticity to evaluate the $q_\mathrm{i}$ propagator. Diagrammatically, Equation \ref{eqn:3pt-dist} is shown in the right panel of Figure \ref{figure:hl-semi-z2}.
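The $\gamma_5$ hermiticity invoked here can be checked in a small toy example (a $4\times4$ matrix constructed to satisfy $\gamma_5 D \gamma_5 = D^\dagger$; this is an illustration, not a lattice Dirac operator): the same relation then automatically holds for the propagator $D^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(2)
g5 = np.diag([1.0, 1.0, -1.0, -1.0])
R = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = R @ R.conj().T + np.eye(4)      # Hermitian and positive definite
D = g5 @ H                          # then g5 D g5 = H g5 = D^dagger

assert np.allclose(g5 @ D @ g5, D.conj().T)      # gamma_5 hermiticity of D
Dinv = np.linalg.inv(D)
assert np.allclose(g5 @ Dinv @ g5, Dinv.conj().T)  # ...and of the propagator
```

This is the identity that allows the $q_\mathrm{i}$ propagator to be expressed through the barred quantities $\bar{\varrho}, \bar{\varphi}, \bar{\phi}$ above.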
\section{Pseudoscalar-axial diagonalisation}
We have computed all combinations of the two-point function with pseudoscalar/axial ($\gamma_5, \gamma_0 \gamma_5$) operators at source and sink and expect them to behave as
\begin{align}
C_{ij}(t) = \begin{pmatrix} C_{PP} & C_{PA} \\ C_{AP} & C_{AA} \end{pmatrix}
= \begin{pmatrix} \langle O_P O_P^\dag \rangle & \langle O_P O_A^\dag \rangle \\ \langle O_A O_P^\dag \rangle & \langle O_A O_A^\dag \rangle \end{pmatrix}(t) = \sum_{i=0}^1 \begin{pmatrix} A_{P_i}^2 & A_{A_i} A_{P_i} \\ A_{A_i} A_{P_i} & A_{A_i}^2 \end{pmatrix} e^{- E_i t}
\end{align}
where we have averaged the forward- and backward-propagating contribution and allow for the ground state and one excited state to be present\footnote{For the fit, we neglect the backward-propagating contribution, which only plays a role close to the centre of the lattice from which we stay away.}. We can perform a six-parameter fit ($A_{P_0}$, $A_{A_0}$, $E_0$, $A_{P_1}$, $A_{A_1}$, $E_1$) to this equation to extract matrix elements and energy levels. Furthermore, we define new operators $O'_P = \cos \theta\, O_P$ and $O'_A = \sin \theta\, O_A$, with a tunable parameter $\theta$, leading to the correlation function
\begin{align}
C (\theta) &= \langle O' (\theta) O'^\dag (\theta) \rangle
= \cos^2 \theta C_{PP} + \cos \theta \sin \theta (C_{PA} + C_{AP}) + \sin^2 \theta C_{AA}
\, .
\label{eqn:c-theta}
\end{align}
By varying $\theta$ we can define several ``diagonalised'' operators, some of which may reach their plateaus earlier than the correlation functions directly computed from the lattice, due to a cancellation of axial and pseudoscalar excited-state contributions. Because $C (\theta)$ is built from correlation functions with both a $\sinh$ and a $\cosh$ dependence on the mass, we restrict ourselves to $t\ll T/2$ when studying them.
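The cancellation mechanism can be demonstrated with a synthetic two-state model (all amplitudes and energies below are invented for illustration): at the angle where the rotated excited-state overlap $\cos\theta\, A_{P_1}+\sin\theta\, A_{A_1}$ vanishes, the effective energy of $C(\theta)$ is exactly $E_0$ at all times.

```python
import numpy as np

A_P = np.array([1.0, 0.7])   # pseudoscalar overlaps [ground, excited]
A_A = np.array([0.8, 0.9])   # axial overlaps        [ground, excited]
E = np.array([0.5, 1.2])     # energies in lattice units
t = np.arange(1, 12)

def C(theta):
    # equivalent to cos^2 C_PP + cos*sin (C_PA + C_AP) + sin^2 C_AA
    amp = np.cos(theta) * A_P + np.sin(theta) * A_A   # per-state overlap
    return (amp[:, None] ** 2 * np.exp(-E[:, None] * t)).sum(axis=0)

theta_star = np.arctan2(-A_P[1], A_A[1])   # zero of the excited-state overlap
m_eff = np.log(C(theta_star)[:-1] / C(theta_star)[1:])
assert np.allclose(m_eff, E[0])            # pure ground state: m_eff = E_0
```

In practice the optimal $\theta$ is found by scanning, since the overlaps are not known beforehand, but the toy model shows why a suitable rotation can produce an earlier plateau.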
\section{Discussion of the data}
We use a $64 \times 24^3$ RBC-UKQCD $(2+1)$-flavour domain-wall fermion gauge-field ensemble \cite{Allton:2008pn} with a pion mass of $m_\pi=329$ MeV and a lattice spacing of $a=0.11$ fm. We simulate $D \rightarrow \pi$ decay with a heavy mass $am_h=0.58$ and the same action as in \cite{Boyle:2018knm}. The current level of statistics for distillation is 5 configurations with 16 solves per flavour on each, averaged over the estimators with exchanged $q_\mathrm{i}$ and $q_\mathrm{f}$. We computed $4$ different $\Delta T \in \{12,16,20,24\}$. A comparison of the effective energy plateaux of the three-point functions for different $\Delta T$ is shown in Figure \ref{figure:compare-delta}; in the top panels for the distillation data and in the bottom panels for the $Z_2$-wall source data.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.91\textwidth]{g0_pp_delta_comparison.pdf}
\includegraphics[width=0.91\textwidth]{z2_pp_delta_comparison.pdf}
\caption{Comparison of the effective energies of the three-point functions for different values of $\Delta T$, showing the five (pion-)momentum frames $0\leq\mathbf{n}_\pi^2\leq4$ (where $\mathbf{p} = \frac{2 \pi}{L} \mathbf{n}$) and current insertion $\Gamma_\mathrm{op}=\gamma_0$ and $\Gamma_\mathrm{src} = \Gamma_\mathrm{snk} = \gamma_5$. The horizontal axis has been scaled to have the initial state ($D({\bf p}_D)$) at 0 and the final state ($\pi({\bf p}_\pi$)) at 1. The grey bands indicate the expected plateau obtained from $E_D - E_\pi(\mathbf{p}_\pi)$, which is computed using the lattice dispersion relation and from $am_D = 0.99656(95)$\cite{Boyle:2018knm}. The top panels show the distillation data and the bottom panels show the $Z_2$-wall data.}
\label{figure:compare-delta}
\end{figure}
Figure \ref{figure:pa-diag} shows that by choosing $\theta=-79 \degree$ an earlier onset of the plateau can be achieved for the case of $Z_2$-wall sources. We cannot observe such an effect for the distillation data with the current level of statistics, possibly due to already suppressed excited states as a result of the LapH smearing.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{pa-diag.pdf}
\caption{Effective energies of two-point correlation functions $C (\theta)$ defined in Eq.~\eqref{eqn:c-theta}. $\theta=0 \degree$ is proportional to $C_{PP}$ and $\theta=90 \degree$ is proportional to $C_{AA}$.}
\label{figure:pa-diag}
\end{figure}
For the distillation data, we also show three-point correlation functions where both the mesons at source and sink have a momentum $\mathbf{p} \neq \mathbf{0}$. They can be assembled cheaply by re-using already computed objects. We show our data for selected channels in Figure \ref{figure:diff-mom}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.91\textwidth]{g0_16_comparison.pdf}
\caption{Effective energies of three-point correlation functions with non-zero momenta both at source and sink, only for $\Delta T = 16$, $\Gamma_\mathrm{src} = \Gamma_\mathrm{snk} = \gamma_5$ and $\Gamma_\mathrm{op}=\gamma_0$. The individual panel descriptions denote the 3-momenta involved, which obey $\mathbf{p}_\pi+\mathbf{p}_\mathrm{op}+\mathbf{p}_D =\mathbf{0}$. All momenta combinations are averaged over all possible lattice rotations. The grey bands indicate the expected plateau obtained from $E_D(\mathbf{p}_D) - E_\pi(\mathbf{p}_\pi)$, which is computed using the lattice dispersion relation and from $am_D = 0.99656(95)$\cite{Boyle:2018knm}.}
\label{figure:diff-mom}
\end{figure}
Data extracted from these channels can be used to obtain additional energy levels and to map out the $q^2$ region between the values obtained for $\mathbf{p}_D=\mathbf{0}$. The quality of the effective energy plateaux varies considerably between channels, and most are noisy compared with the plateaux of Figure \ref{figure:compare-delta}. In the $Z_2$-wall approach, the computation of these correlation functions would be costly and therefore impractical.
\section{Comparison of cost and statistics}
The computational cost of these calculations is summarised in Table \ref{tab:cost-small}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
& Distillation & $Z_2$ seq. \\
\hline
\#Inv / conf / $t_\mathrm{src}$ & $N_{vec} \times 4$ & $N_{\Delta T} \times N_{\mathbf{p}} \times N_{\Gamma_\mathrm{snk}}$ \\
\hline
total \#Inv & 12800 & 1008 \\
\hline
\end{tabular}
\caption{Comparison of the number of inversions in the two approaches employed. This is just a rough measure for the computational cost, as the cost per inversion differs significantly for different quark flavours.}
\label{tab:cost-small}
\end{table}
The difference in the number of inversions for this setup amounts to a factor of about 13. When comparing the effective energies in all frames for the raw pseudoscalar and axial correlation functions, the error reduction from distillation is a factor of $\approx 3$--$6$, i.e.\ neither method yields better statistical properties for the same computational cost\footnote{The reason why we get larger effective statistics using distillation is that we can compute all momentum rotations very cheaply and average over them, whereas in the $Z_2$-wall approach we compute just a single momentum direction per momentum frame.}. Different choices of $\Gamma_\mathrm{src}$ and $\Gamma_\mathrm{snk}$ do not come at additional cost for distillation; this enables the pseudoscalar-axial diagonalisation and also gives access to vector states. The inherent smearing when using distillation leads to slightly flatter plateaux, but a similar effect can be achieved by employing our suggested pseudoscalar-axial diagonalisation.
Distillation could be more cost-effective in a more ambitious setup, like $q_\mathrm{spec}=l,s$, $q_{f}=l,s,h$, $q_{i}=$ multiple heavy-quark masses, which would allow us to study the processes $D \rightarrow \pi$, $D \rightarrow K$, $D_s \rightarrow K$, $D_{(s)} \rightarrow D^{'}_{(s)}$. While the inversion cost for the distillation run would only increase linearly with the new $s$ and $h$ quarks, the number of inversions in the $Z_2$-wall approach would increase with the number of combinations of $q_\mathrm{spec},q_\mathrm{f}$ quarks due to the sequential inversions. When using distillation, one would also need additional storage for the fields needed to assemble correlation functions. On our $24^3$ ensemble, the meson fields amount to $25$ TB per configuration. They can be deleted after all the contractions are completed. On a physical-pion-mass ensemble with a larger physical volume $V$, the storage needed will scale with $O(V^2)$, as $N_\mathrm{vec}$ would need to scale with $O(V)$ if the smearing radius is to be kept the same. Stochastic distillation, however, would lessen this effect by stochastically sampling the sources in the Laplacian-eigenvector space, so that the scaling of the storage space needed would be mild.
\section{Conclusions and Outlook}
We have computed two-point and three-point correlation functions to study heavy-light semileptonic decays using two different approaches: once using $Z_2$-wall sources and sequential solves, and once using distillation, and we have compared the cost of the two approaches\footnote{We note that there are further viable approaches, such as Coulomb-gauge-fixed methods like the gauge-fixed wall approach, which are not part of this study.}. We find that both approaches have clear advantages, but that neither is better in all circumstances:
Distillation could be useful for a very large project studying many semileptonic decay channels. The method is very effective, but the cost, both for the computation and for the intermediate storage needed, cannot be neglected. This stems in particular from the local current insertion in the three-point functions. This problem would grow if we were to run on the new RBC-UKQCD domain-wall ensemble at physical pion mass \cite{Blum:2014tka}, even though the use of stochastic distillation would lessen it to some degree. Distillation might also be the tool of choice if the plan of the study is to obtain results for many momentum transfers $q^2$.
The traditional approach using $Z_2$-wall sources with sequential solves however is better suited for a smaller-scale project, where only a few decay channels are considered. It does not perform worse than distillation in this case, but is a lot easier to set up and does not need the large amount of intermediate storage for the meson fields.
\textbf{Acknowledgements}
The authors thank the members of the RBC and UKQCD Collaborations for
helpful discussions and suggestions. This work used the DiRAC Extreme Scaling service at the University of Edinburgh, operated by the Edinburgh Parallel Computing Centre on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC grants ST/R00238X/1 and ST/S002537/1 and STFC DiRAC Operations grant ST/R001006/1. DiRAC is part of the National e-Infrastructure. F.E. and A.P. are supported in part by UK STFC grant ST/P000630/1. F.E. and A.P. also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 757646 \& A.P. additionally by grant agreement 813942. J.T.T. is thankful for support by the Independent Research Fund Denmark, Research Project 1, grant number 8021-00122B. P.B. received support from the Royal Society Wolfson Research Merit award WM/60035.
\bibliographystyle{JHEP}
\section{Introduction}
An electromagnetic cloak is a device that minimizes or even nullifies the effects of scattering from various objects by rendering them electromagnetically invisible to a detector. One well-known cloaking technique is the so-called ``transformation-optics'' method \cite{Leonhardt_Science, Pendry_Science, Nature_review, Jopt_review} which uses strongly inhomogeneous and highly anisotropic materials to perform complex transformations for the incident wave. Another cloaking technique is based on the cancellation of the scattering effects of a magnetodielectric object with a suitable metamaterial or plasmonic cladding \cite{Alu_transparency, Alu_review}, which neutralizes the polarization current of the primary scatterer.
Unfortunately, the actual construction of a cloaking device is very difficult with both the aforementioned approaches. Suggestions to overcome such a drawback include, e.g., the use of transmission-line networks or other waveguiding structures instead of fictitious materials with exotic properties. \cite{Alitalo_review2} With transmission-line networks, the object to be electromagnetically hidden is strongly limited in size and geometry, \cite{TLcloak_Alitalo_APL2009, TLcloak_Alitalo_MOTL2009} but cloaking of impenetrable (e.g., perfectly metallic) objects has been shown to be possible with a set of conical conducting plates placed around a cylindrical cloaked region. \cite{VariousCloakingTechniques} This metal-plate cloaking has been practically realized in microwaves and numerically proven to be functional for a wide band of optical frequencies.\cite{MPcloak_Tretyakov_PRL2009} However, the underlying theory and the physical principles that govern the related phenomena have not been yet thoroughly understood.
In this work, we introduce a new, simple, nonmagnetic cloak. The idea originates from an analytical model of the previously reported metal-plate cloak operating in the visible range. \cite{MPcloak_Tretyakov_PRL2009} The model is a simple mathematical construction comprised of multiple concentric cylinders of different dielectric permittivities. The permittivities are chosen by considering the tapered conical lines, under the related plane-wave excitation, as a series connection of capacitors. The analytically evaluated response of the described model agrees remarkably well with the corresponding results obtained from full-wave simulations of the actual device for various values of the input parameters.
Since the aforementioned simple model reproduces the behavior of the real-world metal-plate cloaking structure so successfully, we adopt an even simpler concept to cloak an impenetrable, perfectly electrically conducting (PEC) volume. This is a clearly more challenging case, since a PEC volume can conceal practically anything inside it without interacting with the environment, which is not the case for penetrable objects. We introduce uniform or layered dielectric claddings which are optimized by sweeping the relative permittivity values of these covers. The scattering reduction achieved by the proposed devices is quite high, given their simple structure. Moreover, the demonstrated cloaking bandwidths can be considered wide, a property attributed to the fact that the materials required in the claddings are simple dielectrics with relative permittivities larger than unity. It should be noted that most other cloaking methods for PEC objects discussed in the literature require magnetic material properties.
\section{Generic model of concentric cylinders}
The proposed model to mimic the physical mechanism of wave propagation in metal-plate devices is fairly simple; its configuration is shown in Fig.~\ref{ModelGeometry}, where the used cylindrical coordinate system $(\rho, \phi, z)$ is also defined. There are $U$ infinitely long cylindrical layers, each assigned an integer index $u=1,\cdots,U$, occupying the region $r_{u-1}<\rho<r_u$ and filled with a magnetically inert dielectric material of relative permittivity $\epsilon_{r,u}$. Note that $r_0=b$ and $r_U=a$, while the background medium is vacuum (with intrinsic electromagnetic parameters $\epsilon_0, \mu_0$) and is taken as region 0 ($\rho>b$) of our configuration, with $\epsilon_{r,0}=1$. The internal cylinder, which corresponds to region $(U+1)$ ($\rho<a$), can be either penetrable with relative dielectric constant $\epsilon_{r,(U+1)}$ or PEC. The assumed harmonic time dependence is of the form $e^{+j2\pi f t}$ ($f$ is the operating frequency) and is suppressed throughout the analysis.
\begin{figure}[ht]
\centering \epsfig{file=Fig1.eps, width=0.49\textwidth}
\caption{The geometry of the generic model comprised of several concentric cylindrical layers around the cloaked object. The cloaked object is situated in region $(U+1)$.}
\label{ModelGeometry}
\end{figure}
We consider a plane-wave excitation with electric field \cite{PlaneWaveExpansion}:
\begin{eqnarray}
{\bf E}_{0,inc}={\bf z}~ e^{-jk_0\rho\cos\phi}={\bf z} \sum_{n=-\infty}^{+\infty}j^{-n}J_n(k_0\rho)e^{jn\phi},
\label{eq:IncidentField}
\end{eqnarray}
where $k_0=2\pi f\sqrt{\epsilon_0\mu_0}$ is the free-space wavenumber and $J_n$ is the Bessel function of order $n$. The total field in each layer $u=0,\cdots,(U+1)$ possesses only a $z$ component $E_{u}$, whose expression is given by:
\begin{eqnarray}
E_u=\sum_{n=-\infty}^{+\infty}\left[A_{n,u}J_n(k_u\rho)+B_{n,u}H_n(k_u\rho)\right]e^{jn\phi},
\label{eq:TotalFields}
\end{eqnarray}
where $k_u=k_0\sqrt{\epsilon_{r,u}}$, $H_n$ is the Hankel function of the second kind of order $n$, and $A_{n,u}, B_{n,u}$ are sequences of unknown complex coefficients. After imposing the necessary boundary conditions at $\rho=r_u, u=0,\cdots, (U-1)$, one obtains the following relations connecting the coefficients of two adjacent layers:
\begin{eqnarray}
\left[\begin{array}{l} A_{n,u}\\B_{n,u} \end{array}\right]=
{\bf T}_{n,u}\cdot
\left[\begin{array}{l} A_{n,(u+1)}\\B_{n,(u+1)} \end{array}\right],
\label{eq:ElectromagneticBC}
\end{eqnarray}
for integer $n$. The explicit form of the transfer matrix \cite{TMatrices1, TMatrices2} ${\bf T}_{n,u}$ is shown in the two-column equation (\ref{eq:TransferMatrix}).
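As a numerical sanity check (an illustrative sketch added here, not part of the original analysis; the truncation order $N$ and the sample point are arbitrary choices), the cylindrical-harmonic expansion of the incident plane wave in (\ref{eq:IncidentField}) can be compared against the exponential directly:

```python
import numpy as np
from scipy.special import jv

def plane_wave_series(k0, rho, phi, N=30):
    """Truncated form of Eq. (1): sum over n of j^{-n} J_n(k0 rho) e^{j n phi}."""
    n = np.arange(-N, N + 1)
    return np.sum(1j ** (-n) * jv(n, k0 * rho) * np.exp(1j * n * phi))

k0, rho, phi = 2 * np.pi, 0.8, 0.6          # illustrative values
exact = np.exp(-1j * k0 * rho * np.cos(phi))
approx = plane_wave_series(k0, rho, phi)
```

The series converges very rapidly once $|n|$ exceeds $k_0\rho$, so a modest truncation already reaches machine precision.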
\begin{figure*}[t]
\hrulefill
\setcounter{equation}{3}
\begin{equation}
{\bf T}_{n,u}=\left[\begin{array}{cc} \frac{J'_n(k_{u+1}r_u)H_n(k_ur_u)k_{u+1}-H'_n(k_ur_u)J_n(k_{u+1}r_u)k_u}{J'_n(k_ur_u)H_n(k_ur_u)k_u-H'_n(k_ur_u)J_n(k_ur_u)k_u} &
\frac{H'_n(k_{u+1}r_u)H_n(k_ur_u)k_{u+1}-H'_n(k_ur_u)H_n(k_{u+1}r_u)k_u}{J'_n(k_ur_u)H_n(k_ur_u)k_u-H'_n(k_ur_u)J_n(k_ur_u)k_u} \\
\frac{J'_n(k_ur_u)J_n(k_{u+1}r_u)k_u-J'_n(k_{u+1}r_u)J_n(k_ur_u)k_{u+1}}{J'_n(k_ur_u)H_n(k_ur_u)k_u-H'_n(k_ur_u)J_n(k_ur_u)k_u} &
\frac{J'_n(k_ur_u)H_n(k_{u+1}r_u)k_u-H'_n(k_{u+1}r_u)J_n(k_ur_u)k_{u+1}}{J'_n(k_ur_u)H_n(k_ur_u)k_u-H'_n(k_ur_u)J_n(k_ur_u)k_u}
\end{array}\right].
\label{eq:TransferMatrix}
\end{equation}
\hrulefill
\end{figure*}
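A direct transcription of (\ref{eq:TransferMatrix}) into code might look as follows (a sketch with assumed helper names, not the authors' implementation). Two built-in consistency checks: when $k_{u+1}=k_u$ the interface disappears and $\mathbf{T}_{n,u}$ must reduce to the identity matrix, and swapping the two media at the same interface must give the inverse matrix:

```python
import numpy as np
from scipy.special import jv, hankel2, jvp, h2vp

def transfer_matrix(n, k_u, k_up1, r_u):
    """Transfer matrix T_{n,u} of Eq. (4), relating the coefficients of
    layer (u+1) to those of layer u across the interface rho = r_u."""
    den = (jvp(n, k_u * r_u) * hankel2(n, k_u * r_u)
           - h2vp(n, k_u * r_u) * jv(n, k_u * r_u)) * k_u
    t11 = (jvp(n, k_up1 * r_u) * hankel2(n, k_u * r_u) * k_up1
           - h2vp(n, k_u * r_u) * jv(n, k_up1 * r_u) * k_u) / den
    t12 = (h2vp(n, k_up1 * r_u) * hankel2(n, k_u * r_u) * k_up1
           - h2vp(n, k_u * r_u) * hankel2(n, k_up1 * r_u) * k_u) / den
    t21 = (jvp(n, k_u * r_u) * jv(n, k_up1 * r_u) * k_u
           - jvp(n, k_up1 * r_u) * jv(n, k_u * r_u) * k_up1) / den
    t22 = (jvp(n, k_u * r_u) * hankel2(n, k_up1 * r_u) * k_u
           - h2vp(n, k_up1 * r_u) * jv(n, k_u * r_u) * k_up1) / den
    return np.array([[t11, t12], [t21, t22]])
```

Cascading as in (\ref{eq:SuccessiveApplication}) is then simply a chain of $2\times 2$ matrix products of such matrices.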
By successive application of (\ref{eq:ElectromagneticBC}) for $u=0,\cdots, (U-1)$, one arrives at:
\begin{eqnarray}
\left[\begin{array}{l} A_{n,0}\\B_{n,0} \end{array}\right]
={\bf T}_{n,0}\cdot{\bf T}_{n,1}\cdots{\bf T}_{n,(U-1)}\cdot\left[\begin{array}{l} A_{n,U}\\B_{n,U} \end{array}\right] \nonumber \\
=\left[\begin{array}{cc} M_{11}(n) & M_{12}(n)\\ M_{21}(n) & M_{22}(n) \end{array}\right]\cdot
\left[\begin{array}{l} A_{n,U}\\B_{n,U} \end{array}\right].
\label{eq:SuccessiveApplication}
\end{eqnarray}
Depending on the filling material of the core region $(U+1)$, namely a dielectric (diel.) with permittivity $\epsilon_{r,(U+1)}$ or a PEC, the following expression for the coefficients of region $U$ (boundary conditions at $\rho=r_U=a$) is formulated:
\begin{eqnarray}
\left[\begin{array}{l} A_{n,U}\\B_{n,U} \end{array}\right]
=\left\{
\begin{array}{ll} {\bf T}_{n,U}\cdot\left[\begin{array}{c} A_{n,(U+1)}\\B_{n,(U+1)} \end{array}\right] & ,{\rm diel}. \\
\left[\begin{array}{c} 1\\ -\frac{J_n(k_Ur_U)}{H_n(k_Ur_U)} \end{array}\right]A_{n,U} & ,{\rm PEC}
\end{array}.
\right.
\label{eq:InternalBoundary}
\end{eqnarray}
By inspection of (\ref{eq:IncidentField}), it is directly obtained that $A_{n,0}=j^{-n}$, while the physical requirement of a bounded field inside the cloaked region translates into $B_{n,(U+1)}=0$. Therefore, the field scattered by the device into the vacuum background can be readily derived through the related coefficient $B_{n,0}$ by combining (\ref{eq:SuccessiveApplication}) and (\ref{eq:InternalBoundary}) in each case:
\begin{eqnarray}
B_{n,0}
=\left\{
\begin{array}{ll} j^{-n}\frac{M_{21}(n)\left[{\bf T}_{n,U}\right]_{11}+M_{22}(n)\left[{\bf T}_{n,U}\right]_{21}}
{M_{11}(n)\left[{\bf T}_{n,U}\right]_{11}+M_{12}(n)\left[{\bf T}_{n,U}\right]_{21}}
& , {\rm diel}. \\
j^{-n}\frac{M_{21}(n)-M_{22}(n)\frac{J_n(k_Ur_U)}{H_n(k_Ur_U)}}
{M_{11}(n)-M_{12}(n)\frac{J_n(k_Ur_U)}{H_n(k_Ur_U)}}
& , {\rm PEC}
\end{array}.
\right.
\label{eq:ScatteringCoefficient}
\end{eqnarray}
The notation $\left[{\bf D}\right]_{vw}$ is used for the $(v,w)$ element of the matrix ${\bf D}$.
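As an illustrative limiting case (our addition): with no cladding layers the matrix product in (\ref{eq:SuccessiveApplication}) is empty, $\mathbf{M}$ is the identity, and the PEC branch of (\ref{eq:ScatteringCoefficient}) collapses to the textbook coefficients of a bare PEC cylinder, $B_{n,0}=-j^{-n}J_n(k_0a)/H_n(k_0a)$. A sketch that also checks the vanishing of the total tangential field on the conductor:

```python
import numpy as np
from scipy.special import jv, hankel2

def pec_cylinder_coeffs(k0, a, N=20):
    """Scattering coefficients B_{n,0} of a bare PEC cylinder of radius a:
    the no-layer limit of Eq. (6), where M reduces to the identity matrix."""
    n = np.arange(-N, N + 1)
    return -(1j ** (-n)) * jv(n, k0 * a) / hankel2(n, k0 * a)

k0, a, N = 2 * np.pi, 0.1, 20               # radius a = lambda0 / 10
n = np.arange(-N, N + 1)
B = pec_cylinder_coeffs(k0, a, N)
# the total tangential E-field on the PEC surface must vanish harmonic by harmonic
surface_field = (1j ** (-n)) * jv(n, k0 * a) + B * hankel2(n, k0 * a)
```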
\begin{figure}[ht]
\centering \epsfig{file=Fig2.eps, width=0.3\textwidth}
\caption{(Color online) The physical configuration of a single conical metal-plate cell. One half (cut along the $xz$-plane) of a single cell is shown.}
\label{ConicalComponent}
\end{figure}
\section{Optical cloak made of conical plates}
In this section, we test how accurately the model analyzed above describes the operation of an actual device. It has been shown that the so-called metal-plate cloak configuration, already constructed for radio frequencies, can work in the optical band as well. \cite{MPcloak_Tretyakov_PRL2009} The optical device is comprised of periodically stacked, solid, conical, silver plates (of outer diameter 2$b$) positioned around the cylindrical cloaked region (of diameter 2$a$) with a small air gap (of thickness $g$) in between. One period of this structure is shown in Fig.~\ref{ConicalComponent}; it forms a (hollow) waveguide, with height decreasing linearly from $H$ (at $\rho=b$) to $h$ (at $\rho=a$), that guides the $z$-polarized fields around the cloaked object. In a specific case \cite{MPcloak_Tretyakov_PRL2009}, the plates are made of silver, the (optimal) operating frequency is $f_0\cong 590$ THz, and the physical dimensions are: $b=113$ nm, $a=50$ nm, $g=15$ nm, $H=13$ nm and $h=2.5$ nm. It should be stressed that the cloaked region is filled with silver too; the frequency-dependent permittivity of this material, $\epsilon_{r,silver}=\epsilon_{r,silver}(f)$, is well known. \cite{SilverPermittivity} Note that the permittivity of silver is negative close to $f=f_0$, namely $\Re[\epsilon_{r,silver}(f_0)]\cong-10$.
\begin{figure}[ht]
\centering \epsfig{file=Fig3.eps, width=0.49\textwidth}
\caption{The side view of a single conical metal-plate cell.}
\label{SideView}
\end{figure}
With reference to the model analyzed above, we make use of the formulas corresponding to a dielectric core since region $(U+1)$ is solid silver; therefore, $\epsilon_{r,(U+1)}=\epsilon_{r,silver}$. In addition, the $U$-th layer is the air gap; thus, $\epsilon_{r,U}=1$ and $r_{U-1}=a+g$. It is apparent that our model does not take into account the periodic variation of the actual configuration along $z$, since the stack of conical plates is replaced by a structure of homogeneous concentric cylinders. However, the radial inhomogeneity (unlike the axial one) of the cloaking device can be imitated by properly choosing the dielectric permittivities of the cylindrical layers. But how can one compute $\epsilon_{r,u}$ for $u=1,\cdots,(U-1)$ in a correct and reliable way?
Fig.~\ref{SideView} shows the side view of half the biconical cell, as the shape possesses cylindrical symmetry. Assume that the cylindrical layer in the vicinity of the representative surface $\rho=r_u$ (containing both the corresponding vacuum aperture and the related solid silver plates) is replaced by a (locally) homogeneous dielectric with the same external dimensions, of relative permittivity $\epsilon_{r,u}$, whose value is to be estimated. From the similarity of triangles, the height $\zeta_u$ of the representative vacuum aperture equals:
\begin{eqnarray}
\zeta_u=H\frac{r_u-a-g}{b-a-g}+h\frac{b-r_u}{b-a-g}.
\label{eq:RepresentativeHeight}
\end{eqnarray}
If the inclination of the tapered plates is relatively low (which is the case for the specific choice of $b, a, g, H, h$), then the $z$-polarized electric field will be almost normal to the sloping boundaries separating the silver from the vacuum. As a result, an arbitrary cross section at $\rho=r_u$ can be considered as a series connection of three capacitors: two of height $(H-\zeta_u)/2$ filled with silver and one (placed in between) of height $\zeta_u$ filled with vacuum. Accordingly, this series cluster can be replaced by a single capacitor with a dielectric material of relative permittivity $\epsilon_{r,u}$, based on the well-known equivalent capacitance formula: $\frac{H}{\epsilon_{r,u}}=\frac{(H-\zeta_u)/2}{\epsilon_{r,silver}}+\frac{\zeta_u}{1}+\frac{(H-\zeta_u)/2}{\epsilon_{r,silver}}$. In this sense, the rigorous expression for $\epsilon_{r,u}$ is hyperbolic with respect to both $r_u$ and $\zeta_u$, and is given as follows:
\begin{eqnarray}
\epsilon_{r,u}=\frac{H\epsilon_{r,silver}}{H+(\epsilon_{r,silver}-1)\zeta_u}.
\label{eq:EquivalentPermittivity}
\end{eqnarray}
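Equations (\ref{eq:RepresentativeHeight}) and (\ref{eq:EquivalentPermittivity}) are straightforward to evaluate; the sketch below (the helper names are our own) checks the two limiting cases: $\zeta_u=H$ (all vacuum) gives $\epsilon_{r,u}=1$, while $\zeta_u=0$ (all silver) gives $\epsilon_{r,u}=\epsilon_{r,silver}$:

```python
def aperture_height(r_u, a, b, g, H, h):
    """Vacuum-aperture height zeta_u at radius r_u, Eq. (7):
    varies linearly from h at r_u = a + g to H at r_u = b."""
    return (H * (r_u - a - g) + h * (b - r_u)) / (b - a - g)

def equivalent_permittivity(zeta_u, H, eps_silver):
    """Series-capacitor equivalent permittivity of one layer, Eq. (8)."""
    return H * eps_silver / (H + (eps_silver - 1.0) * zeta_u)

# dimensions (nm) of the optical metal-plate cloak from this section
b, a, g, H, h = 113.0, 50.0, 15.0, 13.0, 2.5
eps_silver = -10.0          # Re[eps_r,silver] near f0 = 590 THz
```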
We use layers of the same thickness (due to the constant inclination of the silver conical plates), namely $r_u=b-\frac{u}{U-1}(b-a-g)$ for $u=1,\cdots,(U-1)$. After having defined all the parameters of the theoretical structure ($\epsilon_{r,u}, r_u$) based on the actual quantities of the real configuration ($b, a, g, H, h, \epsilon_{r,silver}$), we can develop the corresponding model and quantify its efficiency. It is remarkable that (for the given ranges of the input parameters) all the relative permittivities $\epsilon_{r,u}$ are positive and, in fact, greater than unity.
A crucial quantity for the operation and the performance of a cloaking device is the total scattering width of the whole cylindrical structure normalized by the corresponding width of the uncloaked object. The smaller this quantity, the better the device serves its purpose. It is defined by:
\begin{eqnarray}
\sigma_{norm}=\frac{\int_0^{2\pi}\left|\sum_{n=-N}^N
B_{n,0}j^ne^{jn\phi}\right|^2
d\phi}{\int_0^{2\pi}\left|\sum_{n=-N}^N
B'_{n,0}j^ne^{jn\phi}\right|^2 d\phi},
\label{eq:TotalScatteringWidth}
\end{eqnarray}
where $N$ is an integer sufficiently large to achieve convergence of the series and $B'_{n,0}$ denote the coefficients of the scattered field for the uncloaked cylinder. In Fig.~\ref{FigA} we show the variation of $\sigma_{norm}$ with respect to the operating frequency for two alternative heights, $h=2.5$ nm and $h=5$ nm. In each case, we perform a numerical simulation of the actual metal-plate device with the ANSYS HFSS full-wave software. \cite{HFSS} The obtained numerical results are compared to those derived through implementation of the corresponding multilayered analytical model in the way described above. There is an excellent agreement between the two sets of data, despite the fact that they describe two completely different structures (simulation of an axially inhomogeneous real device versus a rigorous solution of an axially homogeneous multilayered configuration). This remarkable coincidence between such dissimilar configurations demonstrates the success of the adopted model. As for the results themselves, there are large frequency bands where $\sigma_{norm}$ takes values much smaller than unity. In addition, the metal-plate cloak with $h=5$ nm is functional at higher frequencies than the device with $h=2.5$ nm.
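Since the angular harmonics $e^{jn\phi}$ are orthogonal over $[0,2\pi]$, the integrals in (\ref{eq:TotalScatteringWidth}) reduce, by Parseval's theorem, to sums of squared coefficient magnitudes. The following sketch (our illustration, with arbitrary test coefficients rather than physical ones) verifies this equivalence numerically:

```python
import numpy as np

def sigma_norm(B, B_unc):
    """Eq. (9) via Parseval's theorem: ratio of sums of |B_n|^2."""
    return np.sum(np.abs(B) ** 2) / np.sum(np.abs(B_unc) ** 2)

def sigma_norm_quadrature(B, B_unc, M=4096):
    """Eq. (9) evaluated directly by uniform quadrature in phi."""
    N = (len(B) - 1) // 2
    n = np.arange(-N, N + 1)
    phi = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
    basis = (1j ** n)[:, None] * np.exp(1j * np.outer(n, phi))
    E, E_unc = B @ basis, B_unc @ basis      # far-field angular patterns
    return np.mean(np.abs(E) ** 2) / np.mean(np.abs(E_unc) ** 2)

rng = np.random.default_rng(0)
B = rng.standard_normal(7) + 1j * rng.standard_normal(7)
Bu = rng.standard_normal(7) + 1j * rng.standard_normal(7)
```

Identical cloaked and uncloaked coefficients give $\sigma_{norm}=1$ by construction, which is a useful sanity check for any implementation.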
\begin{figure}[ht]
\centering \epsfig{file=Fig4.eps, width=0.49\textwidth}
\caption{(Color online) The total scattering width of the metal-plate structure normalized by the corresponding quantity of the uncloaked object as function of the operating frequency. The HFSS simulations for the actual device are compared with the results obtained through the analytical model implementation. Plot parameters: $b=113$ nm, $a=50$ nm, $g=15$ nm, $H=13$ nm, $U=8$.}
\label{FigA}
\end{figure}
In the considered numerical example, the electrical dimension of the structure is relatively small, which could make one think that just the omnidirectional ($N=0$) term of the sums in (\ref{eq:TotalScatteringWidth}) is sufficient to evaluate $\sigma_{norm}$. If such an argument were correct, then the cloaking behavior demonstrated above would be attributed to the well-known ``scattering cancellation'' phenomenon. \cite{Alu_transparency, Alu_review} We believe that this is not the case, since the electrical size of the structure is not that small. To support this statement, we consider the corresponding ``scattering cancellation'' device for the silver cloaked object of radius $a$: a homogeneous cladding of relative permittivity $\epsilon_r$ and radius $b$, without an air gap. In Fig.~\ref{FigG}, we evaluate the normalized total scattering width $\sigma_{norm}$ from (\ref{eq:TotalScatteringWidth}) as a function of $\epsilon_r$ for several truncation limits $N$. We are mainly interested in the interval $0<\epsilon_r<5$, where the smaller values of $\sigma_{norm}$ are observed, and it is clear that for $N\ge 2$ the result does not change significantly. However, the first harmonic ($N=1$) must be included in the computation, since the zeroth term is minimized (and more specifically nullified) at larger $\epsilon_r$. The nullification of the omnidirectional term is the ``scattering cancellation'' solution, and it is different from the one observed here.
\begin{figure}[ht]
\centering \epsfig{file=Fig5.eps, width=0.49\textwidth}
\caption{(Color online) The normalized total scattering width of a silver rod surrounded by a homogeneous cladding as function of the relative permittivity of the cladding. Plot parameters: $b=113$ nm, $a=50$ nm, $f_0=590$ THz, $U=1$.}
\label{FigG}
\end{figure}
\section{Cloaking of conducting cylinders}
As was shown in the previous section, a multilayer as well as a single-layer dielectric cover can be used to reduce the scattering from a cylinder composed of an $\epsilon_r$-negative material. The same can, of course, be achieved by using the well-known ``scattering cancellation'' method. However, this technique has not been used so far to hide a conducting object with a layered or a uniform cover made of dielectrics with $\epsilon_r>1$. Here we show, by using the analytical model described in Section~II, that such cloaking is possible at least for conducting cylinders of moderate electrical size.
The electromagnetic fields only weakly penetrate the cloaked silver cylinder studied in the previous section, so it is expected that conducting cylinders, i.e., impenetrable objects, could also be partially cloaked with simple multilayer or even uniform covers. To present a generalized case, in the following we normalize the dimensions and the results to the free-space wavelength $\lambda_0$. We study an electrically thin PEC cylinder with radius $a=\lambda_0/10$ (the electrical size of this cylinder is of the same order as that of the cloaked object in the previous section). We place a dielectric cover around this cylinder and consider three cases: (i) a uniform cover with constant $\epsilon_r$, (ii) a multilayer cover with linearly varying $\epsilon_{r,u}$, and (iii) a multilayer cover with hyperbolically varying $\epsilon_{r,u}$ (as in (\ref{eq:EquivalentPermittivity})). For simplicity, we choose $b=2a$ in all the studied cases. However, the choice of $b$ and $a$ can be made freely; only the resulting (optimal) value of $\epsilon_{r,U}$ of the cover material changes if the values of $a$ and $b$ are changed.
In the first case, we assign a constant $\epsilon_r$ to the material surrounding the PEC cylinder. The analytical model of Section~II is used to optimize the value of $\epsilon_r$ (see Fig.~\ref{epsr_opt1}). It is evident that for these values of $a$ and $b$, the optimal value is $\epsilon_r=5.42$ and with that, the normalized total scattering width of the cloaked PEC cylinder is slightly less than 0.4, i.e., the cloak reduces the total scattering width $\sigma_{norm}$ of the PEC cylinder by more than 60~\%.
\begin{figure}[t!]
\centering \epsfig{file=Fig6.eps, width=0.49\textwidth}
\caption{Sweep of $\epsilon_r$ to find the optimal value for cloaking at the frequency $f_0$ with fixed values of $a$ and $b$. The uniform ($U=1$) cloak has a thickness $(b-a)$ (with $b=2a=\lambda_0/5$) and is made of a dielectric material with $\epsilon_r$.}
\label{epsr_opt1}
\end{figure}
The frequency dependence of the normalized total scattering width is illustrated in Fig.~\ref{constant_epsr}, demonstrating a reasonably broadband cloaking effect: the relative bandwidth over which the total scattering width of the PEC cylinder is reduced to half or less is about 21~\%. It is quite interesting to note that moderate losses do not deteriorate the cloaking effect; with a loss tangent of 0.01, the cloaking effect is actually slightly improved compared to the lossless case.
\begin{figure}[t!]
\centering \epsfig{file=Fig7.eps, width=0.49\textwidth}
\caption{(Color online) Normalized total scattering width as a function of the normalized frequency. The uniform ($U=1$) cloak has a thickness $(b-a)$ (with $b=2a=\lambda_0/5$) and is made of a dielectric material with $\epsilon_r=5.42$.}
\label{constant_epsr}
\end{figure}
In the second scenario, we model the case of Fig.~\ref{ModelGeometry} for $U=5$ with $\epsilon_{r,u}$ depending linearly on the layer number $u$. To find the optimal values of $\epsilon_{r,u}$ for cloaking at $f=f_0$, we plot $\sigma_{norm}$ as a function of $\epsilon_{r,5}$, i.e., the maximum value of $\epsilon_r$, while $\epsilon_{r,u}$ (for $u=1,2,3,4$) decreases linearly from $\epsilon_{r,5}$ toward $\epsilon_{r,0}=1$. The result shown in Fig.~\ref{epsr_opt2} demonstrates that for the chosen dimensions $a$ and $b$, the lowest positive value of $\epsilon_{r,5}$ corresponding to a minimum of the normalized total scattering width is $\epsilon_{r,5}=12.1$. In that case, the relative permittivities of the five-layer cloak are as shown in Table~I.
\begin{table}[t!]
\centering \caption{Values of relative permittivities with linearly changing $\epsilon_r$.}
\label{table2}
\begin{tabular}{|c|c|c|c|c|c|} \hline
$u$ & 1 & 2 & 3 & 4 & 5\\
\hline
$\epsilon_{r,u}$ & 3.22 & 5.44 & 7.66 & 9.88 & 12.1 \\
\hline
\end{tabular}
\end{table}
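The linear grading can be generated as follows (an assumed helper, not the authors' code); with $\epsilon_{r,5}=12.1$ and $U=5$ it reproduces Table~I:

```python
import numpy as np

def linear_profile(eps_max, U):
    """Layer permittivities eps_{r,u}, u = 1..U, linearly interpolated
    between the background eps_{r,0} = 1 and the innermost layer eps_max."""
    u = np.arange(1, U + 1)
    return 1.0 + u * (eps_max - 1.0) / U
```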
\begin{figure}[t!]
\centering \epsfig{file=Fig8.eps, width=0.49\textwidth}
\caption{Sweep of $\epsilon_{r,U=5}$ to find the optimal value for cloaking at the frequency $f_0$ with fixed values of $a$ and $b=2a=\lambda_0/5$. Linear and hyperbolic profiles $(U=5)$ are considered.}
\label{epsr_opt2}
\end{figure}
With the values of Table~I, the normalized total scattering width as a function of frequency is shown in Fig.~\ref{varying_epsr}. In the same graph, the curve for the constant-permittivity cladding is depicted for comparison, and the corresponding numerical results obtained with ANSYS HFSS (showing good agreement with our analytical findings) are also included.
\begin{figure}[t!]
\centering \epsfig{file=Fig9.eps, width=0.49\textwidth}
\caption{(Color online) Normalized total scattering width as a function of the normalized frequency for constant, linear, and hyperbolic profile $(U=5)$. The circles, squares, and diamonds denote the numerical results of each of the previous cases, respectively.}
\label{varying_epsr}
\end{figure}
Finally, we study the third scenario, i.e., the case with $\epsilon_r$ depending hyperbolically on the layer number $u=0,\cdots,U$, where again $U=5$. The variation of the normalized total scattering width $\sigma_{norm}$ with respect to $\epsilon_{r,5}$ is shown in Fig.~\ref{epsr_opt2}, yielding the value $\epsilon_{r,5}=128$ for optimal operation at $f_0$. With this value, the relative permittivities of the five-layer cloak are as shown in Table~II. The frequency dependence of the normalized total scattering width is illustrated in Fig.~\ref{varying_epsr}. Again, we verify our analytical evaluations by plotting the simulation results in the same figure.
\begin{table}[t!]
\centering \caption{Values of relative permittivities with hyperbolically changing $\epsilon_r$.}
\label{table3}
\begin{tabular}{|c|c|c|c|c|c|} \hline
$u$ & 1 & 2 & 3 & 4 & 5\\
\hline
$\epsilon_{r,u}$ & 1.25 & 1.66 & 2.47 & 4.85 & 128 \\
\hline
\end{tabular}
\end{table}
Comparing the three curves of Fig.~\ref{varying_epsr}, we can conclude that changing the profile of $\epsilon_{r,u}$ from constant to linear to hyperbolic improves the scattering reduction obtained at the frequency $f_0$, but at the same time the bandwidth of efficient cloaking decreases. Concerning practical issues, the constant and linear profiles are easily realizable (the required values of $\epsilon_{r,u}$ are feasible), whereas the hyperbolic profile is far from practical. However, the cloaking performance of the constant profile is not much worse than that of the linear profile, so even the linear one may not be worth the increased complexity.
It is clear that cloaking of PEC objects with simple dielectric covers falls short of the ideal cloaking that is in theory possible with, e.g., ``transformation optics''. However, the cloaking efficiencies presented in this work are comparable to the experimental and numerical results obtained with various other cloaking techniques that have been realized with composite metamaterials. \cite{alu_exp, smith_exp}
\section{Conclusions}
We have presented a very simple analytical concept based on transfer matrices for multilayered cylindrical structures. This model has been found to describe accurately the previously reported cloaking effect obtained with conical silver plates in the visible frequency range. The effectiveness of the analytical model is verified by comparing the normalized total scattering widths originating from it to results obtained with numerical simulation software. The fidelity of the proposed concept allows it to be used in device design, saving computational resources thanks to its simplicity. The same analytical model is also used to demonstrate that, surprisingly, cloaking of impenetrable (perfectly conducting) objects is possible with simple dielectric covers whose relative permittivity exceeds unity. Such a property renders this type of electromagnetic configuration easily realizable, unlike most other cloaking devices reported in the literature.
\section{Acknowledgments}
The authors wish to thank Prof. S. Tretyakov for useful advice and discussions related to the topic of this paper. The work of P. Alitalo was supported by the Academy of Finland via postdoctoral project funding.
\section{Introduction}\label{sec1}
Federated learning is a distributed machine learning paradigm that collaboratively trains a model with data on many clients.
Unlike traditional distributed machine learning methods, which partition data into different clients to improve the efficiency of the learning algorithm, the goal of federated learning is to solve the learning problem without requiring the clients to reveal too much local information.
With the increasing demand for data security and privacy protection, federated learning has received significant attention in both industry and academia.
For example, banks may want to collaboratively train a credit-card scoring model without disclosing information about their customers, or hospitals may want to carry out joint research on a rare disease, since each holds only a small number of sample cases, without exposing their patients' identities.
For more on the progress of federated learning, see \cite{Kairouz2019, Yang2020FL}.
The term \textit{federated learning} was introduced by McMahan et al. \cite{McMahan2016}, who also proposed the Federated Averaging (FedAvg) algorithm.
FedAvg combines multiple rounds of local stochastic gradient descent updates with server-side averaging aggregation to train a centralized model.
FedAvg and its variants such as FedProx \citep{LiTian2018FedProx}, SCAFFOLD \citep{Mohri2019SCAFFOLD} and FedAc \citep{yuan2020fedac} mainly focus on deep neural networks, while federated adaptations of traditional machine learning methods are rarely studied.
Especially in the area of dimension reduction and variable selection, few researches have appeared under federated learning settings.
Grammenos et al. \cite{grammenos2020fpca} proposed the federated principal component analysis (FedPCA) to reduce the dimensionality of the data.
Chai et al. \cite{chai2021fedsvd} offered the masking-based federated singular value decomposition (FedSVD) method, which can also perform dimension reduction by picking the $k$ right singular vectors with the largest singular values as the projection matrix.
However, principal component analysis is an unsupervised learning method that does not take into account the relationship between the responses and the covariates.
Federated learning mainly applies to regression or classification problems with response variables.
Thus, we would like to develop a federated sufficient dimension reduction method that can reflect the relationship between responses and covariates.
Sufficient dimension reduction \citep{Li1991SIR, Cook1994plot} aimed at reducing the dimension of data without loss of sufficient information.
Consider a univariate response $y \in \mathbb{R}$ combined with a stochastic covariate vector $\mathbf{x} = (x_1, \dots, x_d)^{\mathrm{T}} \in \mathbb{R}^d$.
Let $K < d$ and $\mathbf{B} = (\boldsymbol{\beta}_1, \dots, \boldsymbol{\beta}_K) \in \mathbb{R}^{d \times K}$ such that
\begin{equation}
\label{eq2.1}
y \perp\!\!\!\!\perp \mathbf{x} \mid (\mathbf{B}^{\mathrm{T}} \mathbf{x}),
\end{equation}
where $\perp\!\!\!\!\perp$ signifies statistical independence.
Equation (\ref{eq2.1}) implies that $y \vert \mathbf{x}$ and $y \vert (\mathbf{B}^{\mathrm{T}} \mathbf{x})$ have identical distribution.
In other words, $\mathbf{B}^{\mathrm{T}} \mathbf{x}$ summarizes information in $\mathbf{x}$ with respect to $y$.
Therefore, it is sufficient to replace $\mathbf{x}$ with a set of $K$ linear combinations of $\mathbf{x}$ to characterize the dependence of $y$ on $\mathbf{x}$.
$\boldsymbol{\beta}_1, \dots, \boldsymbol{\beta}_K$ are defined as sufficient dimension reduction directions.
In general, the dimension reduction space spanned by $\mathbf{B}$ is not unique.
Define the intersection of all dimension reduction subspaces as the central subspace $\mathcal{V}_{y \vert \mathbf{x}}$ \citep{Cook1994plot}.
Originally from \cite{Li1991SIR}, a general regression model was proposed to characterize the relationship between $y$ and $\mathbf{x}$:
\begin{equation}
\label{eq2.2}
y = f(\mathbf{B}^{\mathrm{T}} \mathbf{x}, \varepsilon),
\end{equation}
where $f$ is an unknown link function and $\varepsilon$ captures stochastic noises.
Zeng and Zhu \cite{Zeng2010integral} proved that model (\ref{eq2.2}) is equivalent to model (\ref{eq2.1}) in the sense that a set of $K$ linear combinations of $\mathbf{x}$ captures the conditional distribution of $y$ given $\mathbf{x}$.
We usually consider the estimation of $\mathcal{V}_{y \vert \mathbf{x}}$.
Many estimation approaches of the central subspace have been proposed in the literature, including sliced inverse regression (SIR) \citep{Li1991SIR}, sliced average variance estimates (SAVE) \citep{Cook2000SAVE}, inverse regression (IR) \citep{Cook2005IR}, direction regression (DR) \citep{Li2007DR}, principal Hessian directions (PHD) \citep{Li1992PHD}, minimum average variance estimation (MAVE) \citep{xia2002MAVE}, etc.
SIR and its variants are perhaps the most widely used among all these methods and we will focus on SIR in this paper.
The methods and asymptotics of SIR have been studied in \cite{hsing1992asym, zhu1995asymptotics, zhu2006sirhdc}.
However, SIR estimations usually contain all $d$ covariates, making the dimension reduction hard to interpret.
Researchers have made some attempts to perform variable selection via sparse sliced inverse regression, see \cite{li2005mf, li2008rsir, Lin2019LassoSIR}.
These methods conduct sparse dimension reduction in a step-wise process, which means they estimate a sparse solution for each direction one at a time.
Chen et al. \cite{chen2010coordinate} added a penalty term to encourage sparsity while estimating central subspace directly.
Tan et al. \cite{Kean2018convex} turned it into a convex optimization problem and used the linearized alternating direction method of multipliers (ADMM) algorithm \citep{Boyd2011ADMM, Fang2015GADMM} to solve the optimization problem.
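To fix ideas before turning to the federated version, the classical single-machine SIR estimator can be sketched as follows (an illustration with assumed names and a toy single-index model, not the algorithm proposed in this paper): slice the sorted responses, average the centered covariates within each slice, and take the top eigenvectors of the generalized eigenproblem between $\cov(\mathbb{E}[\mathbf{x} \vert y])$ and $\cov(\mathbf{x})$:

```python
import numpy as np
from scipy.linalg import eigh

def sir_directions(X, y, K, n_slices=10):
    """Classical SIR estimate of K central-subspace directions."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    T = np.zeros((d, d))                      # estimate of Cov(E[x|y])
    for idx in np.array_split(np.argsort(y), n_slices):
        m_h = Xc[idx].mean(axis=0)            # slice mean of centered x
        T += (len(idx) / n) * np.outer(m_h, m_h)
    Sigma = Xc.T @ Xc / n                     # sample covariance of x
    # generalized symmetric eigenproblem T v = lambda Sigma v (ascending)
    _, vecs = eigh(T, Sigma)
    return vecs[:, -K:]                       # top-K directions

# toy single-index model y = beta^T x + noise with beta = e_1
rng = np.random.default_rng(1)
n, d = 2000, 6
X = rng.standard_normal((n, d))
y = X[:, 0] + 0.25 * rng.standard_normal(n)
B_hat = sir_directions(X, y, K=1)
```

With a monotone link and moderate sample size, the leading estimated direction aligns closely with the true index vector.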
Our proposal is the federated fashion of sparse sliced inverse regression in the high-dimensional setting.
We rewrite sliced inverse regression in a weighted averaging form and use a modified version of the FedSVD algorithm to construct the covariance matrix losslessly and securely.
Through convex relaxation of the constraint, we can turn the federated sliced inverse regression problem into a convex optimization problem.
We add a $L_1$ regularization term to the optimization objective to yield sparse solutions.
For this optimization problem, we use the linearized ADMM algorithm to solve it, which has been proved helpful in the federated learning literature, see \cite{zhang2020fedpd}.
We give a Bayesian information criterion (BIC) approach to determine the dimension of the central subspace under the federated setting.
We use hold-out validation to select the tuning parameter of the ADMM algorithm.
Our method only needs to transfer intermediate variables and masked data between server and clients and does not require communication of the raw data or other variables, ensuring privacy protection and data security.
We also analyze the upper error bound of our estimator in the non-i.i.d. and high-dimensional setting.
Our method applies to a broader range of conditions, allowing different distributions of responses and covariates across clients.
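While the full linearized ADMM iteration is beyond the scope of a short sketch, its sparsity-inducing step is the elementwise soft-thresholding operator, i.e., the proximal operator of the $L_{1,1}$ penalty (our illustration, with an assumed helper name):

```python
import numpy as np

def soft_threshold(A, t):
    """prox of t * ||.||_{1,1}: shrink every entry of A toward zero by t.
    In L1-penalized (linearized) ADMM this update yields sparse iterates."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)
```

Entries with magnitude below the threshold $t$ are set exactly to zero, which is what produces sparse solutions along the ADMM iterations.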
The outline of this article is organized as follows.
In Section \ref{scenario}, we introduce the problem setup of federated sparse sliced inverse regression and give our FedSSIR algorithm for estimating the central subspace.
Section \ref{theoretical} provides an upper bound on the subspace distance between FedSSIR estimation and the true subspace.
Numerical simulations and real data applications are presented in Section \ref{numerical}.
Section \ref{conclusion} finishes this article with a brief conclusion and discussion on possible extensions.
\section{Learning Scenario}
\label{scenario}
In this section, we describe the learning scenario of federated sparse sliced inverse regression.
We start with some general definitions and notations used throughout the paper.
Assume we have $m$ clients, and the $i$-th client holds dataset $S_i$, $\forall i \in [m]$, where $[n]$ denotes the set $\{1, \dots, n\}$.
$S_i$ is a set of $n_i$ samples $(\mathbf{x}^{(i)}_1, y^{(i)}_1), \dots, (\mathbf{x}^{(i)}_{n_i}, y^{(i)}_{n_i}) \in \mathcal{X} \times \mathcal{Y}$, which are i.i.d. drawn from distribution $\mathcal{D}_i$.
For input space $\mathcal{X} \subseteq \mathbb{R}^d$ and output space $\mathcal{Y} \subseteq \mathbb{R}$, $\mathbf{x}^{(i)} = (x_1^{(i)}, \dots, x_d^{(i)})^{\mathrm{T}} \in \mathcal{X}$ is a stochastic covariate vector, $y^{(i)} \in \mathcal{Y}$ is a univariate response.
$N = \sum_{i=1}^{m} n_i$ denotes the number of all samples.
We denote the mean and covariance matrix of $\mathbf{x}^{(i)}$ as $\boldsymbol{\mu}_i$ and $\boldsymbol{\Sigma}_i$, and the covariance matrix of the conditional expectation $\mathbf{T}_i = \cov(\mathbb{E}[\mathbf{x}^{(i)} \vert y^{(i)}])$.
Define the $i$-th sample mean and sample covariance matrix as $\bar{\mathbf{x}}^{(i)} = \frac{1}{n_i} \sum_{j=1}^{n_i} \mathbf{x}^{(i)}_j$ and $\hat{\boldsymbol{\Sigma}}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} (\mathbf{x}^{(i)}_j - \bar{\mathbf{x}}^{(i)}) (\mathbf{x}^{(i)}_j - \bar{\mathbf{x}}^{(i)})^{\mathrm{T}}$.
For a vector $\mathbf{v}$, we use $\|\mathbf{v}\|_0, \|\mathbf{v}\|_1, \|\mathbf{v}\|_2, \|\mathbf{v}\|_{\infty}$ to denote the $L_0$ norm, $L_1$ norm, $L_2$ norm and $L_{\infty}$ norm of $\mathbf{v}$, respectively.
For a matrix $\mathbf{M}$, we use $\|\mathbf{M}\|_*, \|\mathbf{M}\|_{\mathrm{F}}, \|\mathbf{M}\|_{1,1}, \|\mathbf{M}\|_2, \|\mathbf{M}\|_{\max}$ to denote the nuclear norm, Frobenius norm, $L_{1,1}$ norm, $L_2$ norm and max norm of $\mathbf{M}$, respectively, where $\|\mathbf{M}\|_* = \tr (\sqrt{\mathbf{M}^{\mathrm{T}} \mathbf{M}})$, $\|\mathbf{M}\|_{\mathrm{F}} = \sqrt{\tr (\mathbf{M}^{\mathrm{T}} \mathbf{M})}$, $\|\mathbf{M}\|_{1,1} = \sum_{i,j} \lvert \mathbf{M}_{i,j} \rvert$, $\|\mathbf{M}\|_2 = \max_{v: \|v\|_2 = 1} \|\mathbf{M} v\|_2$, $\|\mathbf{M}\|_{\max} = \max_{i,j} \lvert \mathbf{M}_{i,j} \rvert $.
$\vct(\mathbf{M})$ is the vectorization operator which stacks the columns of $\mathbf{M}$ into a vector.
For two matrices $\mathbf{A}$ and $\mathbf{B}$, we denote their inner product as $\langle \mathbf{A}, \mathbf{B} \rangle = \tr(\mathbf{A}^{\mathrm{T}} \mathbf{B})$.
$\mathbf{A} \otimes \mathbf{B}$ denotes the Kronecker product of matrices $\mathbf{A}$ and $\mathbf{B}$.
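For concreteness, the matrix norms defined above can be computed with standard linear algebra routines; the following is an illustrative sketch only (Python/NumPy, with an arbitrary example matrix):

```python
import numpy as np

M = np.array([[3.0, 0.0], [4.0, 0.0]])

sv = np.linalg.svd(M, compute_uv=False)   # singular values of M
nuclear = sv.sum()                        # ||M||_*   (nuclear norm)
frob = np.sqrt(np.trace(M.T @ M))         # ||M||_F   (Frobenius norm)
l11 = np.abs(M).sum()                     # ||M||_{1,1}
spectral = sv.max()                       # ||M||_2   (operator norm)
maxnorm = np.abs(M).max()                 # ||M||_max
```

For this matrix the singular values are $(5, 0)$, so the nuclear, Frobenius, and operator norms coincide at $5$, while $\|\mathbf{M}\|_{1,1} = 7$ and $\|\mathbf{M}\|_{\max} = 4$.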
\subsection{Problem Formulation}
\label{sec2.1}
We first introduce the formulation of the federated sliced inverse regression problem.
We can regard federated learning as a learning paradigm on a mixture distribution $\mathcal{D}$ of client-specific distributions, namely $\mathcal{D} = \sum_{i=1}^{m} \omega_i \mathcal{D}_i$, where $\omega_i$ is the mixture weight of $\mathcal{D}_i$ and $\sum_{i=1}^{m} \omega_i = 1$.
To represent this mathematically, we introduce a latent variable $z$ as an indicator of the client.
We can represent the distribution as
\begin{align*}
\mathrm{Pr}(z = i) = \omega_i,\quad & \forall i \in [m], \\
\mathbf{x}, y \vert z = i \sim \mathcal{D}_i,\quad & \forall i \in [m].
\end{align*}
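This latent-variable representation can be simulated directly: draw the client indicator $z$ first and the data second, so that the client sample sizes $(n_1, \dots, n_m)$ follow a multinomial distribution. A sketch with hypothetical client-specific distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, d = 3, 1000, 2
omega = np.array([0.5, 0.3, 0.2])            # mixture weights, sum to 1

z = rng.choice(m, size=N, p=omega)           # latent indicator: Pr(z = i) = omega_i
scales = np.array([1.0, 2.0, 3.0])           # hypothetical client-specific spread
x = rng.standard_normal((N, d)) * scales[z, None]

n = np.bincount(z, minlength=m)              # (n_1, ..., n_m) ~ Multinomial(N, omega)
omega_hat = n / N                            # estimate omega_i by n_i / N
```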
Here, we give formulations of the covariance matrix and the covariance matrix of the conditional expectation under this mixture distribution.
Assume that the covariate means satisfy $\boldsymbol{\mu}_1 = \cdots = \boldsymbol{\mu}_m = \mathbf{0}$; in practice, this only requires centering the data on each client.
The population covariance matrix can be formulated as
\begin{equation}
\begin{aligned}
\mathrm{cov}(\mathbf{x}) & = \mathbb{E}[\mathrm{cov}(\mathbf{x} \vert z)] + \mathrm{cov}(\mathbb{E}[\mathbf{x} \vert z]) \\
& = \sum_{i=1}^{m} \omega_i \boldsymbol{\Sigma}_i + \sum_{i=1}^{m} \omega_i \Big(\boldsymbol{\mu}_i - \sum_{j=1}^{m} \omega_j \boldsymbol{\mu}_j\Big) \Big(\boldsymbol{\mu}_i - \sum_{j=1}^{m} \omega_j \boldsymbol{\mu}_j\Big)^{\mathrm{T}} \\
& = \sum_{i=1}^{m} \omega_i \boldsymbol{\Sigma}_i.
\end{aligned}
\end{equation}
Let $m(y) = \mathbb{E}[\mathbf{x} \vert y]$, the population covariance matrix of the conditional expectation can be formulated as
\begin{equation}
\begin{aligned}
\mathrm{cov}(m(y)) & = \mathbb{E}[\mathrm{cov}(m(y) \vert z)] + \mathrm{cov}(\mathbb{E}[m(y) \vert z]) \\
& = \sum_{i=1}^{m} \omega_i \mathbf{T}_i + \sum_{i=1}^{m} \omega_i \Big(\boldsymbol{\mu}_i - \sum_{j=1}^{m} \omega_j \boldsymbol{\mu}_j\Big) \Big(\boldsymbol{\mu}_i - \sum_{j=1}^{m} \omega_j \boldsymbol{\mu}_j\Big)^{\mathrm{T}} \\
& = \sum_{i=1}^{m} \omega_i \mathbf{T}_i,
\end{aligned}
\end{equation}
where the second equality follows from $\mathbb{E}[\mathbb{E}[m(y) \vert z]] = \mathbb{E}[m(y)] = \mathbb{E}[\mathbb{E}[\mathbf{x} \vert y]] = \mathbb{E}[\mathbf{x}]$ and $\mathbb{E}[m(y)\vert z = i] = \mathbb{E}[\mathbb{E}[\mathbf{x}^{(i)} \vert y^{(i)}]] = \mathbb{E}[\mathbf{x}^{(i)}]$.
We denote $\mathrm{cov}(m(y))$ as $\bar{\mathbf{T}}$.
Therefore, the empirical covariance matrix is $\hat{\boldsymbol{\Sigma}} = \sum_{i=1}^{m} \hat{\omega}_i \hat{\boldsymbol{\Sigma}}_i$ and the empirical covariance matrix of the conditional expectation is $\hat{\mathbf{T}} = \sum_{i=1}^{m} \hat{\omega}_i \hat{\mathbf{T}}_i$.
Under this representation, we have $n_i = \sum_{j=1}^{N} 1_{(z_j = i)}$, $\forall i \in [m]$.
$(n_1, \dots, n_m)$ has a multinomial distribution $\mathrm{Multinomial}(N, \omega_1, \dots, \omega_m)$, which means $\omega_i$ can be estimated by $n_i / N$.
Thus, we have $\hat{\boldsymbol{\Sigma}} = \sum_{i=1}^{m} \frac{n_i}{N} \hat{\boldsymbol{\Sigma}}_i$ and $\hat{\mathbf{T}} = \sum_{i=1}^{m} \frac{n_i}{N} \hat{\mathbf{T}}_i$.
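As a sketch of how these weighted averages might be formed in practice, the snippet below computes per-client moments and aggregates them with weights $n_i / N$. The equal-count slicing used to estimate $\hat{\mathbf{T}}_i$ is one standard choice and is an assumption of this sketch, not a prescription from the text:

```python
import numpy as np

def local_moments(X, y, n_slices=5):
    """Per-client sample covariance and a sliced estimate of cov(E[x|y]).

    Equal-count slicing on the order statistics of y is one standard choice.
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Sigma_i = Xc.T @ Xc / n
    order = np.argsort(y)
    T_i = np.zeros_like(Sigma_i)
    for chunk in np.array_split(order, n_slices):
        mu_h = Xc[chunk].mean(axis=0)             # within-slice mean of x
        T_i += (len(chunk) / n) * np.outer(mu_h, mu_h)
    return Sigma_i, T_i

def aggregate(moments, sizes):
    """Weighted averages with weights n_i / N."""
    N = sum(sizes)
    Sigma = sum(n / N * S for (S, _), n in zip(moments, sizes))
    T = sum(n / N * T_i for (_, T_i), n in zip(moments, sizes))
    return Sigma, T

rng = np.random.default_rng(1)
clients = [(rng.standard_normal((50, 4)), rng.standard_normal(50)) for _ in range(3)]
moments = [local_moments(X, y) for X, y in clients]
Sigma_hat, T_hat = aggregate(moments, [50, 50, 50])
```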
Suppose distribution $\mathcal{D}_i$ satisfies model (\ref{eq2.2}) with client-specific link function $f_i$ and noise $\varepsilon_i$:
\begin{equation}
\label{eq2.3}
y^{(i)} = f_i(\mathbf{B}^{\mathrm{T}} \mathbf{x}^{(i)}, \varepsilon_i), \forall i \in [m].
\end{equation}
Model (\ref{eq2.3}) implies that the data distributions may differ across clients in our federated learning system, as long as all clients share the same central subspace.
For example, the feature vectors of the user groups of two banks may share the same internal structure, even though the dependence of the labels on the central subspace varies by person or region.
This feature of our method can help us deal with non-i.i.d. data, especially for the skewed conditional distribution of $y^{(i)}$ given $\mathbf{x}^{(i)}$, which is known as \textit{concept shift} in federated learning literature.
For further discussion on non-i.i.d. data in federated learning, see \cite{hsieh2020noniid, Kairouz2019}.
As for SIR, it requires the covariate $\mathbf{x}^{(i)}$ to satisfy the linear condition:
\begin{equation}
\label{eq2.4}
\mathbb{E}[\mathbf{x}^{(i)} \vert \mathbf{B}^{\mathrm{T}} \mathbf{x}^{(i)}] = \mathbf{b}^{(i)} + \mathbf{W}^{(i)} \mathbf{B}^{\mathrm{T}} \mathbf{x}^{(i)}, \forall i \in [m],
\end{equation}
where $\mathbf{b}^{(i)} \in \mathbb{R}^{d}$ and $\mathbf{W}^{(i)} \in \mathbb{R}^{d \times K}$ are some constants.
According to Theorem 3.1 and Corollary 3.1 in \citep{Li1991SIR}, under model (\ref{eq2.3}) and condition (\ref{eq2.4}), and assuming that each $\mathcal{D}_i$ has the same covariance matrix $\boldsymbol{\Sigma}$, the inverse regression curve $\mathbb{E}[\mathbf{x}^{(i)} \vert y^{(i)}] - \mathbb{E}[\mathbf{x}^{(i)}]$ is contained in the column space of $\boldsymbol{\Sigma} \mathbf{B}$; that is, the covariance matrix of the conditional expectation $\mathbf{T}_i$ degenerates along directions orthogonal to the $\boldsymbol{\Sigma} \boldsymbol{\beta}_k$'s.
Recalling that $\bar{\mathbf{T}} = \sum_{i=1}^{m} \omega_i \mathbf{T}_i$, it follows that $\bar{\mathbf{T}}$ also degenerates along directions orthogonal to the $\boldsymbol{\Sigma} \boldsymbol{\beta}_k$'s.
The linear condition is satisfied when the covariate $\mathbf{x}^{(i)}$ is elliptically distributed \citep{EATON1986ellipical}.
The linear condition may seem strict for real data, whose underlying distribution is difficult to verify.
However, Hall and Li \cite{hall1993almost} argued that the condition is approximately satisfied when the data dimension is large, regardless of the actual data distribution.
From the discussion above, we can see that our method does not require all clients to have an identical marginal distribution for $\mathbf{x}$, provided each $\mathcal{D}_i$ satisfies condition (\ref{eq2.4}) and has the same covariance matrix.
This kind of non-i.i.d.-ness is known as \textit{covariate shift} in federated learning.
Covariate shift, according to Shimodaira \cite{Hidetoshi2000covariate}, means that the covariates have different marginal distributions across clients.
For example, patients at hospitals in two different countries may differ systematically in their physical conditions.
Numerical experiments of our method against non-i.i.d. data have been carried out in Section \ref{numerical}.
\subsection{Federated Sparse Sliced Inverse Regression}
\label{sec2.2}
We now consider the sliced inverse regression problem under the non-i.i.d. setting.
Assume that for each client $i$, $\mathcal{D}_i$ satisfies model (\ref{eq2.3}) and condition (\ref{eq2.4}), and has the same covariance matrix $\boldsymbol{\Sigma}$.
Then we have $\bar{\mathbf{T}} \boldsymbol{\beta}_k = \lambda_k \boldsymbol{\Sigma} \boldsymbol{\beta}_k$ for $k \in [K]$, where $\boldsymbol{\beta}_k^{\mathrm{T}} \boldsymbol{\Sigma} \boldsymbol{\beta}_k = 1$, $\boldsymbol{\beta}_k^{\mathrm{T}} \boldsymbol{\Sigma} \boldsymbol{\beta}_l = 0$ for $l \neq k$, and $\lambda_k$ is the $k$-th largest generalized eigenvalue.
In the traditional setting, we have access to all data and can compute the estimators $\hat{\boldsymbol{\Sigma}}$ and $\hat{\mathbf{T}}$ from the full sample.
In the federated setting, we cannot aggregate all data on one server, and client $i$ can only access its own $\hat{\boldsymbol{\Sigma}}_i$ and $\hat{\mathbf{T}}_i$.
Recall that $\hat{\mathbf{T}} = \frac{1}{N} \sum_{i=1}^{m} n_i \hat{\mathbf{T}}_i$, $\hat{\boldsymbol{\Sigma}} = \frac{1}{N} \sum_{i=1}^{m} n_i \hat{\boldsymbol{\Sigma}}_i$.
Then a basis $\mathbf{V}$ of the central subspace can be estimated by solving a weighted averaging version of the generalized eigenvalue problem
\begin{equation}
\label{eq2.5c}
\hat{\mathbf{T}} \mathbf{V} = \hat{\boldsymbol{\Sigma}} \mathbf{V} \mathbf{\Lambda},
\end{equation}
where $\mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V} = \mathbf{I}_K$, $\mathbf{\Lambda}$ is a diagonal matrix with elements $\{\lambda_1, \dots, \lambda_K\}$.
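In the oracle (pooled-data) setting, the generalized eigenvalue problem (\ref{eq2.5c}) can be solved by whitening with $\hat{\boldsymbol{\Sigma}}^{-1/2}$ and taking an ordinary symmetric eigendecomposition; a minimal sketch with synthetic matrices (all inputs here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 5, 2

A = rng.standard_normal((d, d))
Sigma_hat = A @ A.T + d * np.eye(d)    # symmetric positive definite (hypothetical)
B = rng.standard_normal((d, K))
T_hat = B @ B.T                        # rank-K positive semi-definite (hypothetical)

# whiten: T v = lam Sigma v  <=>  (S^{-1/2} T S^{-1/2}) u = lam u,  v = S^{-1/2} u
w_s, U_s = np.linalg.eigh(Sigma_hat)
S_inv_half = U_s @ np.diag(w_s ** -0.5) @ U_s.T

lam, U = np.linalg.eigh(S_inv_half @ T_hat @ S_inv_half)
idx = np.argsort(lam)[::-1][:K]        # top-K generalized eigenvalues
Lam, V = lam[idx], S_inv_half @ U[:, idx]
```

By construction the solution satisfies the normalization $\mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V} = \mathbf{I}_K$ and the eigen-relation $\hat{\mathbf{T}} \mathbf{V} = \hat{\boldsymbol{\Sigma}} \mathbf{V} \mathbf{\Lambda}$.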
The generalized eigenvalue problem (\ref{eq2.5c}) can be equivalently converted into a non-convex optimization problem
\begin{equation}
\label{eq2.7}
\min_{\mathbf{V} \in \mathbb{R}^{d \times K}} - \tr \{ \mathbf{V}^{\mathrm{T}} \hat{\mathbf{T}} \mathbf{V} \} \text{ subject to } \mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V} = \mathbf{I}_K.
\end{equation}
To preserve privacy, we prefer not to share the local covariance matrices $\hat{\boldsymbol{\Sigma}}_i$ and covariance matrices of the conditional expectation $\hat{\mathbf{T}}_i$ between clients and server either.
Letting $\boldsymbol{\Pi} = \mathbf{V} \mathbf{V}^{\mathrm{T}}$ and using the cyclic property of the trace, (\ref{eq2.7}) becomes
\begin{equation}
\label{eq2.8}
\min_{\boldsymbol{\Pi} \in \mathcal{B}} - \sum_{i=1}^{m} \frac{n_i}{N} \tr \{ \hat{\mathbf{T}}_i \boldsymbol{\Pi} \},
\end{equation}
where $\mathcal{B} = \{\boldsymbol{\Pi} = \mathbf{V} \mathbf{V}^{\mathrm{T}}: \mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V} = \mathbf{I}_K \}$.
The objective in (\ref{eq2.8}) is additive across clients.
In the update step of our algorithm, client $i$ only needs to handle the local objective involving $\hat{\mathbf{T}}_i$, and the server will aggregate the intermediate results.
This means that our method does not need to compute $\hat{\mathbf{T}}$, but only $\hat{\mathbf{T}}_i$ on each client.
How to construct the covariance matrix $\hat{\boldsymbol{\Sigma}}$ is another problem we have to face.
If we deal with the covariance matrix separately on each client as we did with the covariance matrix of the conditional expectation, the constraints can be very complicated.
Thus, we use a modified version of FedSVD \citep{chai2021fedsvd} to securely and losslessly construct $\hat{\boldsymbol{\Sigma}}$, see Algorithm \ref{alg0}.
The proof of lossless precision for this masking based method can be found in Theorem 4.1 in \cite{chai2021fedsvd}.
\begin{algorithm}[ht]
\caption{Covariance estimation via FedSVD}
\label{alg0}
\begin{algorithmic}[1]
\Require centralized covariates $\mathbf{X}_i = (\mathbf{x}^{(i)}_1, \dots, \mathbf{x}^{(i)}_{n_i}) - \bar{\mathbf{x}}^{(i)} \mathbf{1}^{\mathrm{T}}$, $i \in [m]$
\Ensure the average covariance matrix $\hat{\boldsymbol{\Sigma}}$
\State \textbf{Server do: }
\State Generate random orthogonal matrix $\mathbf{P}$;
\State Broadcast $\mathbf{P}$ to clients;
\For{\textbf{Client} $i \in [m]$ in parallel}
\State Generate random orthogonal matrix $\boldsymbol{\Psi}_i$;
\State Compute $\mathbf{X}'_i = \mathbf{P} \mathbf{X}_i \boldsymbol{\Psi}_i$;
\State Send $\mathbf{X}'_i$ to Server;
\EndFor
\State \textbf{Server do: }
\State Aggregate $\mathbf{X}' = (\mathbf{X}'_1, \dots, \mathbf{X}'_m)$;
\State Factorize $\mathbf{X}'$ into $[\mathbf{U}', \mathbf{D}, \sim]$ via SVD;
\State Recover $\mathbf{U}$ through $\mathbf{P}^{\mathrm{T}} \mathbf{U}'$;
\State Compute $\hat{\boldsymbol{\Sigma}}$ through $\frac{1}{N} \mathbf{U} \mathbf{D}^2 \mathbf{U}^{\mathrm{T}}$;
\end{algorithmic}
\end{algorithm}
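A minimal sketch of Algorithm \ref{alg0}, simulating the server and all clients in one process (the random orthogonal masks are generated via dense QR purely for illustration); the final comparison checks the losslessness property:

```python
import numpy as np

def random_orthogonal(n, rng):
    """Random orthogonal matrix via QR of a Gaussian matrix (illustrative)."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

rng = np.random.default_rng(0)
d, sizes = 4, [30, 20, 50]
N = sum(sizes)

# centered local data matrices X_i (d x n_i), hypothetical
Xs = [rng.standard_normal((d, n)) for n in sizes]
Xs = [X - X.mean(axis=1, keepdims=True) for X in Xs]

# server: one shared left mask P; each client: a private right mask Psi_i
P = random_orthogonal(d, rng)
masked = [P @ X @ random_orthogonal(X.shape[1], rng) for X in Xs]

# server: SVD of the concatenated masked matrix, then unmask U and recover Sigma
Xp = np.concatenate(masked, axis=1)
U_p, D, _ = np.linalg.svd(Xp, full_matrices=False)
U = P.T @ U_p
Sigma_fed = U @ np.diag(D ** 2) @ U.T / N

# losslessness: equals the pooled covariance of the raw centered data
Sigma_pool = np.concatenate(Xs, axis=1) @ np.concatenate(Xs, axis=1).T / N
```

The right masks $\boldsymbol{\Psi}_i$ cancel in $\mathbf{X}' \mathbf{X}'^{\mathrm{T}}$, and the left mask $\mathbf{P}$ is undone by the server, which is why the recovered covariance is exact.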
Similar to \cite{vu2013fantope}, we propose a convex relaxation for the non-convex optimization problem in (\ref{eq2.8})
\begin{equation}
\label{eq2.9}
\begin{aligned}
& \min_{\boldsymbol{\Pi} \in \mathcal{M}} - \sum_{i=1}^{m} \frac{n_i}{N} \tr \{ \hat{\mathbf{T}}_i \boldsymbol{\Pi} \} \\
& \text{ subject to } \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} \|_* \leq K, \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} \|_2 \leq 1, \\
\end{aligned}
\end{equation}
where $\mathcal{M}$ is the set of all $d \times d$ positive semi-definite matrices.
\begin{remark}
Under (\ref{eq2.8}), $\mathbf{V}$ belongs to the set $\{\mathbf{V} \in \mathbb{R}^{d \times K}: \mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V} = \mathbf{I}_K \}$, which is an embedded submanifold of $\mathbb{R}^{d \times K}$.
There are some algorithms proposed to solve optimization problems on matrix manifolds, see \cite{AbsilMahonySepulchre2009manifolds}.
However, we choose to adopt the convex relaxation like (\ref{eq2.9}) in this paper because of the advantages of convex optimization.
\end{remark}
\begin{remark}
Traditional sufficient dimension reduction methods usually need to compute the inverse of the covariance matrix \citep{Li1991SIR, Li1992PHD, Cook2000SAVE, Li2007DR}, whereas (\ref{eq2.9}) does not involve the inversion of $\hat{\boldsymbol{\Sigma}}$.
This is helpful when the sample size is smaller than the dimension $d$.
\end{remark}
In addition to dimension reduction, variable selection is an important goal of statistical analysis.
To encourage sparsity, we can add an $L_1$ penalty on all elements of $\boldsymbol{\Pi}$
\begin{equation}
\label{eq2.10}
\begin{aligned}
& \min_{\boldsymbol{\Pi} \in \mathcal{M}} \sum_{i=1}^{m} \frac{n_i}{N} (- \tr \{ \hat{\mathbf{T}}_i \boldsymbol{\Pi} \} + \rho \| \boldsymbol{\Pi} \|_{1,1}) \\
& \text{ subject to } \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} \|_* \leq K, \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} \|_2 \leq 1, \\
\end{aligned}
\end{equation}
where $\rho$ is a positive tuning parameter.
Larger $\rho$ will yield a sparser estimator of basis vectors.
In practice, we can allow different $\rho_i$ to control the sparsity of intermediate results on client $i$ during algorithm iteration.
\subsection{Federated Sparse Sliced Inverse Regression algorithm}
\begin{algorithm}[ht]
\caption{Federated Sparse Sliced Inverse Regression (FedSSIR)}
\label{alg1}
\begin{algorithmic}[1]
\Require $S_i = \{(\mathbf{x}^{(i)}_j, y^{(i)}_j) \}_{j=1}^{n_i}$, $i \in [m]$, number of sufficient dimension reduction directions $K$, the tuning parameter $\rho$, the ADMM parameter $\nu > 0$, tolerance level $\varepsilon > 0$.
\Ensure estimation of projection matrix onto the central subspace $\hat{\boldsymbol{\Pi}}$.
\State Compute $\hat{\boldsymbol{\Sigma}}_i$, $\hat{\mathbf{T}}_i$ and the linearization parameter $\alpha_i = 4 \nu \lambda^2_{\max}(\hat{\boldsymbol{\Sigma}}_i)$ on each client, where $\lambda_{\max}(\mathbf{M})$ is the largest eigenvalue of $\mathbf{M}$;
\State Use \texttt{FedSVD} to compute $\hat{\boldsymbol{\Sigma}}$;
\State \textbf{Server} initialize the parameters: primal variables $\boldsymbol{\Pi}^0 = \mathbf{I}_d$, $\mathbf{H}^0 = \mathbf{I}_d$, dual variable $\boldsymbol{\Gamma}^0 = \mathbf{0}$, intermediate variable $\mathbf{M}^0 = \hat{\boldsymbol{\Sigma}}^{2} - \hat{\boldsymbol{\Sigma}}$ and broadcast $\boldsymbol{\Pi}^0$, $\mathbf{M}^0$ to clients;
\While{$\|\boldsymbol{\Pi}^t - \boldsymbol{\Pi}^{t-1} \|_{\mathrm{F}} > \varepsilon $}
\For{\textbf{Client} $i \in [m]$ in parallel}
\State $\boldsymbol{\Pi}_i^{t+} = \mathrm{ST}[\boldsymbol{\Pi}^t + \frac{1}{\alpha_i} \hat{\mathbf{T}}_i - \frac{\nu}{\alpha_i} \mathbf{M}^{t}, \frac{\rho}{\alpha_i} ]$, where $\mathrm{ST}$ stands for soft-thresholding operator element-wise applied to a matrix;
\State Send $\boldsymbol{\Pi}_i^{t+}$ to server;
\EndFor
\State \textbf{Server updates: }
\State $\boldsymbol{\Pi}^{t+1} = \sum_{i=1}^{m} \frac{n_i}{N} \boldsymbol{\Pi}_i^{t+}; $
\State $\mathbf{H}^{t+1} = \sum_{j=1}^{d} \min \{1, \max(w_j - \gamma^*, 0)\} \mathbf{u}_j \mathbf{u}_j^{\mathrm{T}}$, where $\sum_{j=1}^d w_j \mathbf{u}_j \mathbf{u}_j^{\mathrm{T}}$ is the eigendecomposition of $\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi}^{t+1} \hat{\boldsymbol{\Sigma}}^{1/2} - \boldsymbol{\Gamma}^t$, and $$\gamma^* = \argmin_{\gamma > 0} \gamma, \quad \text{ subject to} \sum_{j=1}^d \min \{1, \max(w_j - \gamma, 0)\} \leq K; $$
\State $\boldsymbol{\Gamma}^{t+1} = \boldsymbol{\Gamma}^t + \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi}^{t+1} \hat{\boldsymbol{\Sigma}}^{1/2} - \mathbf{H}^{t+1} $;
\State $\mathbf{M}^{t+1} = \hat{\boldsymbol{\Sigma}}^{1/2} (\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi}^{t+1} \hat{\boldsymbol{\Sigma}}^{1/2} - \mathbf{H}^{t+1} + \boldsymbol{\Gamma}^{t+1}) \hat{\boldsymbol{\Sigma}}^{1/2}$;
\State Broadcast $\boldsymbol{\Pi}^{t+1}$, $\mathbf{M}^{t+1}$ to clients;
\EndWhile
\end{algorithmic}
\end{algorithm}
ADMM is an algorithm developed for optimization problems whose objective and constraint terms decompose into separate blocks.
This situation occurs in many statistical learning contexts, e.g., distributed machine learning with an aggregated loss function and generalized $L_1$-regularized loss minimization.
We observe that (\ref{eq2.10}) also has this separable property and can be solved by a linearized ADMM algorithm.
Algorithm \ref{alg1} presents a detailed process of our method.
In the $\boldsymbol{\Pi}$-update step, we first compute the solutions $\boldsymbol{\Pi}_{i}^{t+}$ to the local problems in parallel on the clients and then update $\boldsymbol{\Pi}^{t+1}$ by averaging the $\boldsymbol{\Pi}_{i}^{t+}$'s.
The derivation of Algorithm \ref{alg1} is deferred to the appendix.
Fang et al. \cite{Fang2015GADMM} studied the convergence of the linearized version of ADMM algorithms and established a worst-case $\mathcal{O}(1/t)$ convergence rate, where $t$ is the iteration counter.
We pick $K$ eigenvectors of $\hat{\boldsymbol{\Pi}}$ with the largest eigenvalues as the estimation $\hat{\mathbf{B}}$ of sufficient dimension reduction directions.
The sparse model is estimated as $\{1 \leq j \leq d: \hat{\mathbf{B}}_{j \cdot} \neq \mathbf{0} \}$, where $\hat{\mathbf{B}}_{j \cdot}$ denotes the $j$-th row vector of $\hat{\mathbf{B}}$.
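To make the update rules concrete, here is a sketch of one FedSSIR round on toy inputs, taking $\hat{\boldsymbol{\Sigma}} = \mathbf{I}_d$ so that $\hat{\boldsymbol{\Sigma}}^{1/2}$ is trivial; the $\mathbf{H}$-update finds $\gamma^*$ by a simple bisection. All sizes and parameters below are hypothetical:

```python
import numpy as np

def soft_threshold(M, t):
    """Element-wise soft-thresholding operator ST[M, t]."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def fantope_project(S, K, tol=1e-10):
    """H-update: shrink eigenvalues of symmetric S into [0, 1] with total
    mass at most K, via bisection over the shift gamma."""
    w, U = np.linalg.eigh(S)
    cap = lambda g: np.minimum(1.0, np.maximum(w - g, 0.0))
    if cap(0.0).sum() <= K:
        g = 0.0
    else:
        lo, hi = 0.0, w.max()
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if cap(mid).sum() > K else (lo, mid)
        g = hi
    return (U * cap(g)) @ U.T

# one illustrative server/client round on toy inputs
rng = np.random.default_rng(0)
d, K, m, nu, rho = 6, 2, 3, 1.0, 0.05
sizes = np.array([40, 30, 30]); N = sizes.sum()
T_hats = [(A + A.T) / 2 for A in rng.standard_normal((m, d, d))]
Sigma = np.eye(d)                 # take Sigma_hat = I_d, so Sigma^{1/2} = I_d
alpha = 4 * nu * 1.0 ** 2         # 4 nu lambda_max(Sigma)^2

Pi, H, Gamma = np.eye(d), np.eye(d), np.zeros((d, d))
M = Sigma @ Sigma - Sigma         # M^0 = Sigma^2 - Sigma

# clients: linearized proximal step; server: weighted average, H- and Gamma-updates
Pi_locals = [soft_threshold(Pi + T_i / alpha - nu * M / alpha, rho / alpha)
             for T_i in T_hats]
Pi = sum(n / N * P for n, P in zip(sizes, Pi_locals))
H = fantope_project(Sigma @ Pi @ Sigma - Gamma, K)
Gamma = Gamma + Sigma @ Pi @ Sigma - H
```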
\subsection{Hyper-parameter selection}
\label{sec2.3}
There are some hyper-parameters in our proposed method: $K$ the dimension of the central subspace and $\rho$ that controls the degree of sparsity.
To determine the structural dimension of the central subspace, there are four main approaches: sequential tests, the bootstrap, cross-validation, and the Bayesian information criterion (BIC); see \cite{Li2017SDR}.
Constructing a sequential test, according to \cite{zhu2010cumulative}, is challenging due to the complicated structure of asymptotic variance and the diverging number of degrees of freedom.
The bootstrap and cross-validation both require multiple rounds of estimation and are too costly for federated learning.
Therefore, we consider using BIC to determine the dimension $\hat{K}_i$ on each client, and then take the mode of all $\hat{K}_i$'s as the final $\hat{K}$.
If this produces two or more modes, we randomly choose one of them with equal probability.
Following \cite{zhu2006sirhdc, zhu2010cumulative}, we define a BIC type criterion on client $i$ as follows
\begin{equation}
\label{eq2.14a} \hat{K}_i = \argmax_{k \in [d-1]} n_i \sum_{j = 1}^{k} \hat{\lambda}_{i, j}^2 / \sum_{j=1}^{d} \hat{\lambda}_{i, j}^2 - C_{n_i} k (k + 1) / 2,
\end{equation} where $\hat{\lambda}_{i, j}$ is the $j$-th eigenvalue of $\hat{\mathbf{T}}_i$.
Then our estimator $\hat{K}$ is the mode of the set $\{ \hat{K}_i \vert i \in [m] \}$.
Zhu et al. \cite{zhu2010cumulative} proved that $\hat{K}_i$ converges to $K$ in probability, as $n_i \rightarrow \infty$, as long as $n_i^{-1} C_{n_i} \rightarrow 0$ and $C_{n_i} \rightarrow \infty$.
We have the following corollary.
\begin{corollary}
\label{cor:bic}
Assume the covariates on different clients are $d$-dimensional sub-Gaussian random vectors and $d = o((N/m)^{1/2})$, then $\hat{K}$ converges to $K$ in probability, if $n_i^{-1} C_{n_i} \rightarrow 0$ and $C_{n_i} \rightarrow \infty$, $i \in [m]$.
\end{corollary}
The proof of Corollary \ref{cor:bic} and numerical results on dimension determination are referred to Appendix \ref{bic}.
Throughout this paper, we choose $C_n = n^{1/2} + 0.5 \log(n)$, which performs quite well in numerical studies.
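A sketch of the federated BIC rule: each client maximizes the criterion (\ref{eq2.14a}) locally and the server takes the mode of the votes. The toy inputs below are synthetic matrices with two dominant eigenvalues, so the rule should recover $K = 2$:

```python
import numpy as np

def client_dim(T_hat_i, n_i):
    """BIC-type criterion on one client: maximize over k in [d - 1]."""
    lam2 = np.sort(np.linalg.eigvalsh(T_hat_i))[::-1] ** 2
    d = len(lam2)
    C_n = np.sqrt(n_i) + 0.5 * np.log(n_i)   # the choice C_n = n^{1/2} + 0.5 log n
    ks = np.arange(1, d)
    crit = n_i * np.cumsum(lam2)[: d - 1] / lam2.sum() - C_n * ks * (ks + 1) / 2
    return int(ks[np.argmax(crit)])

def federated_dim(T_hats, sizes, rng):
    """Mode of the per-client estimates; ties broken uniformly at random."""
    votes = [client_dim(T, n) for T, n in zip(T_hats, sizes)]
    vals, counts = np.unique(votes, return_counts=True)
    return int(rng.choice(vals[counts == counts.max()]))

# toy check: conditional-expectation covariances with two dominant eigenvalues
rng = np.random.default_rng(1)
d, n = 10, 500
T_hats = []
for _ in range(3):
    U, _ = np.linalg.qr(rng.standard_normal((d, d)))
    T_hats.append(U @ np.diag([5.0, 4.0] + [0.01] * (d - 2)) @ U.T)
K_hat = federated_dim(T_hats, [n, n, n], rng)
```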
We use hold-out validation to select the tuning parameter $\rho$.
Let $R(\mathbf{x})$ denote the $K$-dimensional sufficient dimension reduction variable. Computing the top $K$ eigenvectors $\hat{\boldsymbol{\pi}}_1, \dots, \hat{\boldsymbol{\pi}}_K$ of $\hat{\boldsymbol{\Pi}}$, we obtain the estimator $\hat{R}(\mathbf{x}) = (\hat{\boldsymbol{\pi}}_1^{\mathrm{T}} \mathbf{x}, \dots, \hat{\boldsymbol{\pi}}_K^{\mathrm{T}} \mathbf{x})^{\mathrm{T}}$.
Following \cite{Kean2018convex}, the conditional expectation $\mathbb{E} [y^{(i)} \vert \mathbf{x}^{(i)} = \mathbf{x}^{\prime}]$ can be estimated as
\begin{equation}
\label{eq2.14}
\hat{\mathbb{E}} [y^{(i)} \vert \mathbf{x}^{(i)} = \mathbf{x}^{\prime}] = \frac{\sum_{j=1}^{n_i} y_j^{(i)} \exp \{-\frac{1}{2}\|\hat{R}(\mathbf{x}^{\prime}) - \hat{R}(\mathbf{x}_j^{(i)})\|_2^2\}}{\sum_{j=1}^{n_i} \exp\{-\frac{1}{2}\|\hat{R}(\mathbf{x}^{\prime}) - \hat{R}(\mathbf{x}_j^{(i)})\|_2^2\}}.
\end{equation}
We first separate $S_i$ into a training set $S_{i,tr}$ and a validation set $S_{i,val}$ on each client and use Algorithm \ref{alg1} to obtain a solution $\hat{\boldsymbol{\Pi}}$ from the $S_{i,tr}$'s.
Then we can predict the estimated conditional mean for samples in $S_{i,val}$'s using (\ref{eq2.14}).
Given a grid of $\rho$, we choose the best parameter to minimize the validation error $\sum_{i=1}^{m} \sum_{j \in S_{i,val}} (y_j^{(i)} - \hat{\mathbb{E}}[y^{(i)} \vert \mathbf{x}^{(i)} = \mathbf{x}_j^{(i)}])^2/\lvert S_{i,val}\rvert$, where $\lvert S \rvert$ is the cardinality of the set $S$.
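The validation step can be sketched as follows; the single-index data-generating model below is hypothetical, and $\hat{\boldsymbol{\Pi}}$ is replaced by a fixed candidate projection purely for illustration:

```python
import numpy as np

def cond_mean(R_new, R_train, y_train):
    """Kernel-weighted estimate of E[y | x] in the reduced space, as in (2.14)."""
    d2 = ((R_new[:, None, :] - R_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-0.5 * d2)
    return (w @ y_train) / w.sum(axis=1)

# toy usage on one client (hypothetical single-index model y = x^T beta + noise)
rng = np.random.default_rng(0)
n, d, K = 200, 6, 1
X = rng.standard_normal((n, d))
beta = np.zeros(d); beta[0] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)

B_hat = np.zeros((d, K)); B_hat[0, 0] = 1.0   # stand-in for top eigenvectors of Pi_hat
R = X @ B_hat

tr, va = np.arange(150), np.arange(150, 200)  # hold-out split
pred = cond_mean(R[va], R[tr], y[tr])
val_err = ((y[va] - pred) ** 2).mean()
baseline = ((y[va] - y[tr].mean()) ** 2).mean()
```

In a grid search over $\rho$, this validation error would be computed for each candidate and summed over clients, as described above.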
\section{Theoretical Studies}
\label{theoretical}
In this section, we will discuss some theoretical properties of our method.
For simplicity, suppose there are $m$ clients in the system and client $i$ has $n_i$ independent observations, which follow the distribution $\mathcal{D}_i$, such that $(n_1, \dots, n_m) \sim \mathrm{Multinomial}(N, \omega_1, \dots, \omega_m)$.
We also assume that the samples from different clients are independent conditional on $n_1, \dots, n_m$.
The central subspace $\mathcal{V}_{y \vert \mathbf{x}}$ is assumed to have a sparsity level $$s = \lvert \mathrm{supp} \{\mathrm{diag}(\boldsymbol{\Pi})\} \rvert, $$where $\boldsymbol{\Pi} = \mathbf{V} \mathbf{V}^{\mathrm{T}}$ is the projection matrix onto $\mathcal{V}_{y \vert \mathbf{x}}$.
$\mathrm{diag}(\cdot)$ denotes the operation that maps a matrix to a vector composed of its diagonal elements.
$\mathrm{supp}(\mathbf{v})$ denotes the support set of a $d$-dimensional vector $\mathbf{v}$, which is defined as $\{ i \in [d] : \lvert \mathbf{v}_i \rvert > 0\}$.
The sparsity level of the central subspace is the number of non-zero diagonal elements in the projection matrix.
Suppose $\boldsymbol{\Pi}_{j j} = 0$; since $\boldsymbol{\Pi}_{j j} = \sum_{k=1}^{K} \mathbf{V}_{j k}^2$, we have $\mathbf{V}_{j k} = 0$ for all $k \in [K]$.
Thus, when the $j$-th diagonal element of the projection matrix is zero, the entire $j$-th row of $\mathbf{V}$ vanishes, indicating that the $j$-th variable is not selected.
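This diagonal-support argument is easy to verify numerically; a small sketch with a synthetic $\mathbf{V}$ whose last rows are zero:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, s = 8, 2, 3
V = np.zeros((d, K))
V[:s] = rng.standard_normal((s, K))   # only the first s variables are active
Pi = V @ V.T                          # Pi_jj = sum_k V_jk^2

# zero diagonal entries of Pi identify the excluded variables
support = np.flatnonzero(np.diag(Pi) > 1e-12)
```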
Our goal is to establish an upper error bound for the estimator $\hat{\boldsymbol{\Pi}}$ obtained from the FedSSIR algorithm under the non-asymptotic setting, where $N$, $m$, $d$, $K$, $s$ are allowed to grow.
We first provide concentration inequalities on the estimators $\hat{\boldsymbol{\Sigma}}$ and $\hat{\mathbf{T}}$.
Based on the concentration results, we can measure the distance between the population and estimated subspaces.
Here are some regularity conditions.
\begin{enumerate}[label=(C\arabic*)]
\item The distribution $\mathcal{D}_i$ satisfies the linear condition (\ref{eq2.4}), $\forall i \in [m]$.
\item The covariates on different clients are $d$-dimensional sub-Gaussian random vectors with a same covariance matrix $\boldsymbol{\Sigma}$.
\item The largest generalized eigenvalue $\lambda_1$ of matrices $\{ \bar{\mathbf{T}}, \boldsymbol{\Sigma} \}$ is bounded by some constant.
\item The inverse regression function $m^{(i)}(y^{(i)}) = \mathbb{E}[\mathbf{x}^{(i)} \vert y^{(i)}]$ has a bounded total variation, $\forall i \in [m]$.
\item The dimension of central subspace $K < \min(s, \log d)$.
\item There exists a constant $c > 0$ such that $1/c \leq \lambda_{\min} (\boldsymbol{\Sigma}, s) \leq \lambda_{\max} (\boldsymbol{\Sigma}, s) \leq c$, where $\lambda (\boldsymbol{\Sigma}, s)$ denotes the $s$-sparse eigenvalue of $\boldsymbol{\Sigma}$.
\end{enumerate}
Condition (C1) is the commonly assumed linear condition in SIR literature.
Condition (C2) states a sub-Gaussian tail decay of high dimensional random vectors, see \cite{vershynin2011introduction, vershynin2018HDP} for a comprehensive discussion of sub-Gaussian random variables.
(C2) indicates that our theoretical results do not require the i.i.d. assumption on the covariates, as long as they are independent sub-Gaussian random vectors with an identical covariance matrix.
Condition (C3) is a mild condition on the generalized eigensystem.
Condition (C4) is an assumption on the smoothness of the inverse regression curve.
A similar assumption has been given in some other SIR literature, see \cite{hsing1992asym, zhu1995asymptotics, zhu2006sirhdc}.
Condition (C5) states an assumption on the number of dimension reduction directions and the sparsity level $s$ of the central subspace.
Condition (C6) is widely used in the high-dimensional literature, see \cite{Meinshausen2009lassohigh, zhang2010nearly} for example.
The definitions of total variation and the $s$-sparse eigenvalue $\lambda (\boldsymbol{\Sigma}, s)$ of $\boldsymbol{\Sigma}$ are given in Appendix \ref{defs}.
\begin{lemma}
\label{lem1}
Assume that Condition (C2) holds; then there exist constants $C_1, C_1^{\prime} > 0$ such that $$\|\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma} \|_{\max} \leq C_1 (\log d / N)^{1/2}, $$ with probability at least $1 - \exp (-C_1^{\prime} \log d)$.
\end{lemma}
Lemma \ref{lem1} shows that the modified FedSVD estimator of $\boldsymbol{\Sigma}$ shares the same tail bound as the classic batch estimator \cite{ravikumar2011high}.
To get the basis estimator $\hat{\mathbf{V}}$, we need to estimate the covariance matrix of the conditional expectation $\mathbf{T}_i$.
Obviously, we have the identity $\mathbf{T}_i = \boldsymbol{\Sigma}_i - \mathbf{Q}_i$, where $\mathbf{Q}_i$ denotes the average covariance matrix $\mathbb{E}[\cov(\mathbf{x}^{(i)} \vert y^{(i)})]$.
Two consistent estimators for $\mathbf{Q}_i$ are given in Appendix \ref{defs}.
\begin{lemma}
\label{lem2}
Assume that Conditions (C2)-(C4) and (C6) hold and the response $y^{(i)}$ is bounded for all clients.
Also, assume $m \leq C_1 N^{\eta}$ for some constant $C_1$, where $\eta \in (0, 1)$.
Then there exist constants $C_2, C_2^{\prime}, C_3, C_4 > 0 $ such that $$\|\hat{\mathbf{Q}} - \bar{\mathbf{Q}}\|_{\max} \leq C_2 (\log d / N)^{1/2} + C_3 (\log d)^{1/2} / N^{1 - \eta}, $$ with probability at least $1 - \exp(-C_4 \log d)$, where $\hat{\mathbf{Q}} = \sum_{i=1}^{m} \frac{n_i}{N} \hat{\mathbf{Q}}_i$ and $\bar{\mathbf{Q}} = \sum_{i=1}^{m} \omega_i \mathbf{Q}_i$.
If we further assume that $m \leq C_1 N^{1/2}$, we have $$\|\hat{\mathbf{Q}} - \bar{\mathbf{Q}}\|_{\max} \leq C_2^{\prime} (\log d / N)^{1/2}, $$ with probability at least $1 - \exp(-C_4 \log d)$.
\end{lemma}
It is shown in \cite{Kean2018convex} that when there is only one client, the estimation error of $\hat{\mathbf{Q}}$ is of order $(\log d / n)^{1/2}$ in the high-dimensional setting.
Our result coincides with it under the assumption that the ratio of $m$ to $N$ is within an appropriate range.
Directly from Lemma \ref{lem1} and Lemma \ref{lem2}, using the identities $\mathbf{T}_i = \boldsymbol{\Sigma}_i - \mathbf{Q}_i$ and $\hat{\mathbf{T}}_i = \hat{\boldsymbol{\Sigma}}_i - \hat{\mathbf{Q}}_i$, we have the following corollary.
\begin{corollary}
\label{cor1}
Under the conditions of Lemma \ref{lem1} and Lemma \ref{lem2}, assume that $m \leq C_1 N^{\eta}$ for some $C_1$, where $\eta \in (0, 1)$.
There exist constants $C_2, C_2^{\prime}, C_3, C_4 > 0$ such that $$\|\hat{\mathbf{T}} - \bar{\mathbf{T}}\|_{\max} \leq C_2 (\log d / N)^{1/2} + C_3 (\log d)^{1/2} / N^{1 - \eta}, $$ with probability at least $1 - \exp(- C_4 \log d)$.
Furthermore, if we assume that $m \leq C_1 N^{1/2}$, we have $$\|\hat{\mathbf{T}} - \bar{\mathbf{T}}\|_{\max} \leq C_2^{\prime} (\log d / N)^{1/2}, $$ with probability at least $1 - \exp(-C_4 \log d)$.
\end{corollary}
Lemma \ref{lem1} and Corollary \ref{cor1} state that when the ratio of $m$ to $N$ is within an appropriate range, our estimators of the covariance matrix and the covariance matrix of the conditional expectation achieve a convergence rate of order $(\log d / N)^{1/2}$.
This implies our estimation of the central subspace based on $\hat{\boldsymbol{\Sigma}}$ and $\hat{\mathbf{T}}$ can achieve the same error bound under certain conditions.
Next, we state the theoretical result regarding the subspace distance between the central subspace and our estimation.
The distance between the estimated subspace and the true subspace is defined as $D (\mathcal{V}, \hat{\mathcal{V}}) = \|\mathbf{P}_{\boldsymbol{\Pi}} - \mathbf{P}_{\hat{\boldsymbol{\Pi}}}\|_{\mathrm{F}}$, where $\mathbf{P}_{\boldsymbol{\Pi}}$ and $\mathbf{P}_{\hat{\boldsymbol{\Pi}}}$ are the projection matrices onto $\mathcal{V}$ and $\hat{\mathcal{V}}$.
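This subspace distance is straightforward to compute from basis matrices via QR-based projections; a small sketch with sanity checks (identical subspaces give $0$, orthogonal one-dimensional subspaces give $\sqrt{2}$):

```python
import numpy as np

def proj(V):
    """Orthogonal projection matrix onto the column space of V."""
    Q, _ = np.linalg.qr(V)
    return Q @ Q.T

def subspace_dist(V, V_hat):
    """D(V, V_hat) = || P_V - P_{V_hat} ||_F."""
    return np.linalg.norm(proj(V) - proj(V_hat), "fro")

e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])
same = subspace_dist(e1, 3 * e1)   # same span, different basis scaling
orth = subspace_dist(e1, e2)       # orthogonal 1-d subspaces
```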
We now derive an upper bound on the statistical error of our estimator.
\begin{theorem}
\label{thm1}
Let $\mathcal{V}$ and $\hat{\mathcal{V}}$ be the true and estimated subspaces.
Denote by $\lambda_k$ the $k$-th generalized eigenvalue of the matrix pair $\{\hat{\mathbf{T}}, \hat{\boldsymbol{\Sigma}}\}$.
Assume $N > C s^2 \log d/ \lambda_K^2$, $m \leq C' N^{\eta}$, $\eta \in (0, 1)$ for some constants $C$, $C'$ and the number of active covariates $s > \lambda_K K^2 / \log d$.
Under Conditions (C1)-(C6), let $\rho = r (C_1 (\log d/ N)^{1/2} + C_1^{\prime} (\log d)^{1/2} / N^{1 - \eta})$ for some constants $C_1$, $C_1^{\prime}$, where $r \in [1, r_0]$ and $r_0$ is a constant greater than 1.
Then, with probability at least $1 - \exp(-C_4 s) - \exp(-C_5 \log d)$, we have $$D (\mathcal{V}, \hat{\mathcal{V}}) \leq C_2 s (\log d/ N)^{1/2} / \lambda_K + C_3 s (\log d)^{1/2} / (\lambda_K N^{1 - \eta}); $$ in particular, when $m \leq C^{\prime} N^{1/2}$, we have $$D (\mathcal{V}, \hat{\mathcal{V}}) \leq C_2^{\prime} s (\log d/ N)^{1/2} / \lambda_K, $$ for some constants $C_2$, $C_2^{\prime}$, $C_3$, $C_4$ and $C_5$.
\end{theorem}
As we can see, under appropriate conditions on $m$, $N$ and $d$, the estimated subspace $\hat{\mathcal{V}}$ achieves a statistical error rate of order $s (\log d/ N)^{1/2} / \lambda_K$ with high probability.
When $m = 1$, this rate reduces to $s (\log d/ n)^{1/2} / \lambda_K$, which coincides with that of traditional sparse SIR.
This shows that our approach can exploit data from all clients to obtain better results than any single client could alone, mitigating the problem of insufficient local data.
In other words, as long as the sample size of each client is not too small relative to the number of clients in the federated learning system, FedSSIR under the non-i.i.d. setting achieves the same statistical error rate as whole-sample SIR under the i.i.d. setting.
We conclude this section with a remark on the federated learning setting.
\begin{remark}
Note that we allow $m$, the number of clients in the federated learning system, to approach infinity at a rate close to $N$, the total sample size of all clients.
The average sample size $n = N / m$ on each client is then roughly of order $N^{1 - \eta}$, and in our federated setting $m$ need not have a direct relationship with $n$.
When $\eta > 1/2$, $m$ tends to infinity faster than $n$, a regime that arises naturally in federated applications.
For example, McMahan et al. \cite{McMahan2016} proposed federated learning as a solution for updating language models on mobile phones, where many edge devices hold private data but the training sample size on each device may be small.
Theorem \ref{thm1} shows that our method remains effective in this case.
\end{remark}
\section{Numerical Studies}
\label{numerical}
\subsection{Simulation Studies}
We evaluate the performance of our method through simulations.
The simulation datasets are generated using the following models.
\begin{align*}
\text{Model 1: } & y = \boldsymbol{\beta}_1^{\mathrm{T}} \mathbf{x} + \varepsilon, \\
\text{Model 2: } & y = \exp(\boldsymbol{\beta}_1^{\mathrm{T}} \mathbf{x} / 3^{1/2} + \varepsilon), \\
\text{Model 3: } & y = \frac{\boldsymbol{\beta}_1^{\mathrm{T}} \mathbf{x}}{0.5 + (\boldsymbol{\beta}_2^{\mathrm{T}} \mathbf{x} + 1.5)^2} + \varepsilon, \\
\text{Model 4: } & y = \sgn(\boldsymbol{\beta}_1^{\mathrm{T}} \mathbf{x}) \times \lvert 2 + \boldsymbol{\beta}_2^{\mathrm{T}} \mathbf{x} \rvert^{-1} + \varepsilon.
\end{align*}
We set $\boldsymbol{\beta}_{1,j} = 1$ for $j = 1, 2, 3$ and $\boldsymbol{\beta}_{1,j} = 0$ otherwise;
$\boldsymbol{\beta}_{2,j} = 1$ for $j = 4, 5$ and $\boldsymbol{\beta}_{2,j} = 0$ otherwise.
We use two different ways to generate datasets for federated learning.
The first one is a modified version of the federated learning benchmark LEAF \citep{Caldas2019leaf}.
For client $i \in [m]$:
\begin{enumerate}
\item Generate $\mathbf{A}_i \sim N(0, \alpha \mathbf{I}_d)$;
\item Generate $\mathbf{v}_i \sim N(\mathbf{A}_i, \mathbf{I}_d)$;
\item Draw $\mathbf{x}^{(i)}$ from $N(\mathbf{v}_i, \boldsymbol{\Sigma})$, where $\boldsymbol{\Sigma}$ is the covariance matrix with $\boldsymbol{\Sigma}_{j,k} = \gamma^{\lvert j-k \rvert}$;
\item Generate $\varepsilon \sim N(0, 1)$ and compute $y^{(i)}$ with respect to the models.
\end{enumerate}
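The four-step scheme above can be sketched for Model 1 as follows; this is an illustrative sketch under our assumptions (in particular, we read $N(0, \alpha \mathbf{I}_d)$ as variance $\alpha$, and all names are ours):

```python
import numpy as np

def ar1_cov(d, gamma):
    """Covariance matrix with entries gamma ** |j - k|."""
    idx = np.arange(d)
    return gamma ** np.abs(idx[:, None] - idx[None, :])

def generate_client(n, d, beta1, alpha=1.0, gamma=0.5, rng=None):
    """Covariate-shift generation for one client under Model 1."""
    rng = np.random.default_rng(rng)
    A = rng.normal(0.0, np.sqrt(alpha), size=d)                # step 1
    v = rng.normal(A, 1.0)                                     # step 2: client-specific mean
    x = rng.multivariate_normal(v, ar1_cov(d, gamma), size=n)  # step 3
    eps = rng.normal(0.0, 1.0, size=n)                         # step 4
    y = x @ beta1 + eps                                        # Model 1 response
    return x, y
```

Each client draws its own mean vector $\mathbf{v}_i$, so the marginal distributions of $\mathbf{x}^{(i)}$ differ across clients while the regression model is shared.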
In our simulations, we set $\alpha = 1$ and $\gamma = 0.5$.
This approach reflects the \textit{covariate shift} in section \ref{sec2.1}, which corresponds to different marginal distributions of the $\mathbf{x}^{(i)}$'s.
Settings 1-4 are generated in this way from Models 1-4, respectively.
The second approach targets the \textit{concept shift}, i.e., differences across clients in the conditional distribution of $y$ given $\mathbf{x}$.
For the $i$-th client, $\mathbf{x}^{(i)}$ is generated from a normal distribution with zero mean and the same covariance structure as above.
However, $y^{(i)}$ on different clients may follow different models: we do not know whether the dataset on a client comes from Model 1 or Model 2 in Setting 5, or from Model 3 or Model 4 in Setting 6.
Thus, we generate a Bernoulli random variable $b_i \sim B(0.5)$ for each client to decide which model $y^{(i)}$ follows.
In Setting 5, the $i$-th client follows Model 1 if $b_i = 1$ and Model 2 otherwise;
in Setting 6, it follows Model 3 if $b_i = 1$ and Model 4 otherwise.
Therefore, we have six settings of simulation experiments.
The goal is to estimate the central subspace $\mathcal{V}_{y \vert \mathbf{x}}$ using $\text{span}(\hat{\boldsymbol{\Pi}})$.
Also, we want to measure the variable selection accuracy of our algorithm.
We adopt the definitions of Tan et al. \cite{Kean2018convex}: the true positive rate (TPR) is the proportion of active variables correctly identified, and the false positive rate (FPR) is the proportion of inactive variables falsely identified as active.
\begin{table}[t]
\begin{center}
\begin{minipage}{\textwidth}
\caption{True and false positive rates, and subspace distances with $m = 10$, $n = 100$ and $200$, $d = 150$.
All entries are averaged across 200 runs.
The standard deviations are in the brackets.
}
\label{tab1}
\begin{tabular}{@{}cccccccc@{}}
\toprule
& & Setting 1 & Setting 2 & Setting 3 & Setting 4 & Setting 5 & Setting 6 \\ \midrule
$n = 100$ & & & & & & & \\
FedSSIR & TPR & 1.000 & 1.000 & 0.946 & 0.897 & 1.000 & 0.958 \\
& & (0) & (0) & (0.128)&(0.180)& (0) & (0.101) \\
& FPR & 0.002 & 0.006 & 0.012 & 0.006 & 0.003 & 0.014 \\
& & (0.003) & (0.007) & (0.009) & (0.007) & (0.004) & (0.011) \\
& Dist & 0.088 & 0.167 & 0.630 & 0.862 & 0.105 & 0.654 \\
& & (0.044) & (0.201) & (0.324) & (0.371) & (0.048) & (0.302) \\
SSIR & TPR & 0.917 & 0.803 & 0.453 & 0.160 & 1.000 & 1.000 \\
& & (0.236) & (0.341) & (0.336) & (0.276) & (0) & (0) \\
& FPR & 0.347 & 0.280 & 0.199 & 0.046 & 0.129 & 0.075 \\
& & (0.321) & (0.304) & (0.228) & (0.105) & (0.145) & (0.201) \\
& Dist & 1.755 & 1.514 & 1.864 & 1.709 & 0.892 & 0.790 \\
& & (0.175) & (0.312) & (0.178) & (0.162) & (0.475) & (0.303) \\
LassoSIR & TPR & 1.000 & 1.000 & 0.789 & 0.686 & 1.000 & 0.972 \\
& & (0) & (0) & (0.162) & (0.171) & (0) & (0.078) \\
& FPR & 0.143 & 0.110 & 0.172 & 0.128 & 0.058 & 0.127 \\
& & (0.067) & (0.056) & (0.068) & (0.068) & (0.053) & (0.091) \\
& Dist & 0.176 & 0.190 & 1.198 & 1.353 & 0.142 & 0.998 \\
& & (0.076) & (0.082) & (0.110) & (0.171) & (0.062) & (0.156) \\
\midrule
$n = 200$ & & & & & & & \\
FedSSIR & TPR & 1.000 & 1.000 & 0.962 & 0.975 & 1.000 & 0.958 \\
&&(0) & (0) & (0.116) & (0.092) & (0) & (0.111) \\
& FPR & 0.000 & 0.002 & 0.011 & 0.009 & 0.001 & 0.017 \\
&&(0.001) & (0.003) & (0.008) & (0.007) & (0.002) & (0.011) \\
& Dist & 0.052 & 0.084 & 0.587 & 0.578 & 0.058 & 0.530 \\
&&(0.025) & (0.076) & (0.238) & (0.273) & (0.032) & (0.251) \\
SSIR & TPR & 0.998 & 0.995 & 0.504 & 0.232 & 1.000 & 1.000 \\
&&(0.024) & (0.053) & (0.360) & (0.341) & (0.000) & (0.000) \\
& FPR & 0.874 & 0.834 & 0.248 & 0.061 & 0.198 & 0.072 \\
&&(0.092) & (0.122) & (0.274) & (0.125) & (0.225) & (0.188) \\
& Dist & 1.895 & 1.672 & 1.967 & 1.684 & 0.915 & 0.742 \\
&&(0.099) & (0.231) & (0.151) & (0.168) & (0.462) & (0.274) \\
LassoSIR & TPR & 1.000 & 1.000 & 0.905 & 0.805 & 1.000 & 1.000 \\
&&(0) & (0) & (0.142) & (0.165) & (0) & (0.000) \\
& FPR & 0.231 & 0.168 & 0.249 & 0.179 & 0.053 & 0.187 \\
&&(0.116) & (0.089) & (0.088) & (0.075) & (0.055) & (0.087) \\
& Dist & 0.172 & 0.172 & 1.150 & 1.263 & 0.086 & 0.549 \\
&&(0.119) & (0.101) & (0.087) & (0.166) & (0.039) & (0.115) \\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
On each client, we need to estimate the covariance matrix of the conditional expectation.
We use the sample covariance matrix and (\ref{eq2.6a}) for the estimation, where each slice has sample size $n_h = 20$.
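On a single client, this follows the standard sliced inverse regression recipe: sort by the response, form slices, and accumulate weighted outer products of slice means. The following sketch reflects our reading of that construction (the paper's exact estimator is its eq. (2.6a); names are ours):

```python
import numpy as np

def slice_mean_covariance(x, y, n_h=20):
    """SIR-style estimate of Cov(E[x | y]) on one client (a sketch)."""
    n = len(y)
    order = np.argsort(y)            # sort observations by the response
    x_sorted = x[order]
    x_bar = x.mean(axis=0)           # global mean on this client
    H = max(n // n_h, 1)             # number of slices of about n_h points
    T = np.zeros((x.shape[1], x.shape[1]))
    for chunk in np.array_split(x_sorted, H):
        diff = chunk.mean(axis=0) - x_bar
        T += (len(chunk) / n) * np.outer(diff, diff)
    return T
```

The result is symmetric positive semidefinite by construction, as a weighted sum of rank-one outer products.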
The number of dimension reduction directions $K$ is chosen by the BIC-type criterion.
The penalization parameter $\rho$ is selected by the hold-out validation outlined in section \ref{sec2.3}.
For comparison, we include the results of sparse SIR (SSIR) \cite{Kean2018convex} and LassoSIR \cite{Lin2019LassoSIR} on the aggregated dataset.
The tuning parameters of these two methods are selected by cross validation.
We randomly repeat these experiments $200$ times with equal sample size $n = 100$ or $200$ on $m = 10$ clients and covariate dimension $d = 150$.
The results are summarized in Table \ref{tab1}.
Our proposal yields better estimation accuracy on the non-i.i.d. federated data.
Our method also outperforms the other methods in terms of true and false positive rates.
It seems that SSIR and LassoSIR may not be well suited to the \textit{covariate shift}.
FedSSIR performs well in the high-dimensional case where $n = 100 < d$.
Comparing the results for $n = 100$ and $n = 200$, we see that as $n$ increases, our method improves in TPR, FPR and estimation error, which is consistent with our theoretical results.
Overall, the results show that our method is robust under non-i.i.d. settings.
Next, we conduct experiments on simulated datasets to evaluate the statistical error rate.
We employ the same data generation technique as above, with one difference: client sample sizes are now unbalanced.
We generate a random vector $\boldsymbol{\omega} \sim \mathrm{Dirichlet}_m(5)$ and $(n_1, \dots, n_m) \sim \mathrm{Multinomial}(N, \boldsymbol{\omega})$ with $m = 10$.
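This unbalanced split can be sketched in a few lines (the function name is ours):

```python
import numpy as np

def split_sample_sizes(N, m, conc=5.0, rng=None):
    """Draw unbalanced client sizes: omega ~ Dirichlet_m(conc),
    then (n_1, ..., n_m) ~ Multinomial(N, omega)."""
    rng = np.random.default_rng(rng)
    omega = rng.dirichlet(np.full(m, conc))  # random mixture weights
    return rng.multinomial(N, omega)         # client sample sizes summing to N
```

By construction the $m$ client sizes always sum to the total sample size $N$, while a smaller concentration parameter produces more extreme imbalance.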
In this case, we assume that the true $K$ is known and set $\rho = 0.5 (\log d / N)^{1/2}$.
The estimation errors for different $N$ and $d$ are presented in Figures \ref{fig:1}-\ref{fig:6}.
Circles represent results for $d = 75$ and squares represent results for $d = 50$.
The figures demonstrate that the subspace distance grows in proportion to $s(\log d / N)^{1/2}$.
This is consistent with Theorem \ref{thm1}, where the statistical error is of order $\mathcal{O}((\log d / N)^{1/2})$.
\begin{figure}[htbp]
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[scale=0.18]{simfunc1v10.eps}
\caption{Results for the subspace distance, averaged over 500 datasets of Setting 1. \label{fig:1}}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[scale=0.18]{simfunc2v10.eps}
\caption{Results for the subspace distance, averaged over 500 datasets of Setting 2. \label{fig:2}}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[scale=0.18]{simfunc3v10.eps}
\caption{Results for the subspace distance, averaged over 500 datasets of Setting 3. \label{fig:3}}
\end{minipage}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[scale=0.18]{simfunc4v10.eps}
\caption{Results for the subspace distance, averaged over 500 datasets of Setting 4. \label{fig:4}}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[scale=0.18]{simfunc5v10.eps}
\caption{Results for the subspace distance, averaged over 500 datasets of Setting 5. \label{fig:5}}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[scale=0.18]{simfunc6v10.eps}
\caption{Results for the subspace distance, averaged over 500 datasets of Setting 6. \label{fig:6}}
\end{minipage}
\end{figure}
We further investigate the performance of our method as the number of clients $m$ increases.
We fix $N = 10000$, $d = 100$ and split the data onto $m$ clients using the same heterogeneous setup.
Figure \ref{fig:7} plots the subspace distance $D(\mathcal{V}, \hat{\mathcal{V}})$ against $m$.
Each point is averaged over 200 replications.
When the subsample size is small, i.e., when the number of clients is large, the error of FedSSIR grows slightly.
This aligns with Condition (C4) and Lemma \ref{lem2}, which require each $n_i$ to be not too small in order to achieve optimal statistical performance.
We can see that FedSSIR efficiently estimates the central subspace as long as $m$ is in a reasonable range, and its performance is stable across settings.
\begin{figure}[htbp]
\centering
\includegraphics[scale = 0.6]{Rplotv12.eps}
\caption{Statistical error with respect to the number of clients when the total sample
size $N = 10000$ is fixed. }
\label{fig:7}
\end{figure}
To analyze the impact of different $\omega_i$ on the simulation results, we conduct a sensitivity experiment on the mixture weight.
We fix $N = 2000$, $d = 100$ and split the data onto $m$ clients using the same heterogeneous setup.
We generate a random vector $\boldsymbol{\omega} \sim \mathrm{Dirichlet}_m(\alpha)$ and $(n_1, \dots, n_m) \sim \mathrm{Multinomial}(N, \boldsymbol{\omega})$ with $m = 10$.
The smaller $\alpha$ is, the more unbalanced $\boldsymbol{\omega}$ becomes.
The number of dimension reduction directions $K$ is chosen by the BIC-type criterion.
The penalization parameter $\rho$ is selected by the hold-out validation outlined in section \ref{sec2.3}.
The results in Table \ref{tabomega} show that our method is robust to the choice of the mixture weights $\omega_i$.
Our proposal performs well on datasets with unbalanced sample sizes.
\begin{table}[ht]
\begin{center}
\begin{minipage}{\textwidth}
\caption{True and false positive rates, and subspace distances with $\alpha = 1, 2, 5$, $N = 2000$, $m = 10$, $d = 100$.
All entries are averaged across 200 runs.
The standard deviations are in the brackets.
}
\label{tabomega}
\begin{tabular}{@{}cccccccc@{}}
\toprule
& & Setting 1 & Setting 2 & Setting 3 & Setting 4 & Setting 5 & Setting 6 \\ \midrule
$\alpha=1$ & TPR & 1.000 & 1.000 & 0.990 & 1.000 & 1.000 & 0.994 \\
& & (0) & (0) & (0.056) & (0) & (0) & (0.044) \\
& FPR & 0.000 & 0.001 & 0.011 & 0.014 & 0.000 & 0.012 \\
& & (0.001) & (0.003) & (0.004) & (0.006) & (0.002) & (0.005) \\
& Dist & 0.033 & 0.044 & 0.419 & 0.388 & 0.033 & 0.417 \\
& & (0.016) & (0.022) & (0.132) & (0.114) & (0.018) & (0.097) \\
$\alpha=2$ & TPR & 1.000 & 1.000 & 0.996 & 1.000 & 1.000 & 1.000 \\
& & (0) & (0) & (0.040) & (0) & (0) & (0) \\
& FPR & 0.000 & 0.001 & 0.011 & 0.013 & 0.000 & 0.012 \\
& & (0.001) & (0.002) & (0.004) & (0.005) & (0.001) & (0.003) \\
& Dist & 0.031 & 0.041 & 0.388 & 0.399 & 0.034 & 0.386 \\
& & (0.016) & (0.020) & (0.093) & (0.086) & (0.018) & (0.062) \\
$\alpha=5$ & TPR & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
& & (0) & (0) & (0) & (0) & (0) & (0) \\
& FPR & 0.000 & 0.000 & 0.011 & 0.013 & 0.000 & 0.011 \\
& & (0) & (0.002) & (0.002) & (0.005) & (0) & (0.003) \\
& Dist & 0.028 & 0.040 & 0.376 & 0.385 & 0.032 & 0.370 \\
& & (0.015) & (0.020) & (0.072) & (0.075) & (0.017) & (0.042) \\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
In addition to the simulations above, we have carried out simulations measuring the performance of our method on non-Gaussian random variables.
The results of this experiment are presented in Appendix \ref{bic}, along with the results of the previous experiments on BIC dimension determination.
\subsection{Real Data Analysis}
Default of credit card clients dataset~\citep{YEH2009dccc} contains 30,000 samples of people with different credit risks.
For each subject, there are 23 attributes.
First, we separate this dataset into different clients using a heterogeneous pattern, where sample sizes and class proportions on different clients are unbalanced.
For class $j$, we generate a random vector $\boldsymbol{\omega}_j$ with respect to a Dirichlet distribution $\mathrm{Dirichlet}_{m}(\alpha)$, where $\alpha$ is the concentration parameter that controls the heterogeneity of federated data.
Smaller $\alpha$ yields more heterogeneous data.
Then we generate a random vector $\mathbf{n}_{j}$ from the multinomial distribution $\mathrm{Multinomial}(N_j, \boldsymbol{\omega}_{j})$, where $N_j$ is the number of samples in class $j$.
We allocate $\mathbf{n}_{j, i}$ samples of class $j$ to client $i$.
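The class-wise Dirichlet allocation above can be sketched as follows (a sketch; the function and variable names are ours):

```python
import numpy as np

def partition_by_class(labels, m, alpha, rng=None):
    """Label-skew split: for each class j, draw omega_j ~ Dirichlet_m(alpha)
    and allocate that class's samples to the m clients via a multinomial draw."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    clients = [[] for _ in range(m)]
    for j in np.unique(labels):
        idx = rng.permutation(np.where(labels == j)[0])  # shuffle class-j samples
        omega = rng.dirichlet(np.full(m, alpha))          # per-class client weights
        counts = rng.multinomial(len(idx), omega)         # n_{j,i} for each client
        start = 0
        for i, c in enumerate(counts):
            clients[i].extend(idx[start:start + c].tolist())
            start += c
    return clients
```

Every sample is assigned to exactly one client, and smaller $\alpha$ makes both the per-client sample sizes and the class proportions more heterogeneous.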
Then we randomly partition the samples on each client into a training set and a test set of equal size.
We use logistic regression to construct the classifier and take the average prediction accuracy on the test sets as the evaluation metric.
We compare the performance of the following competitors.
\begin{enumerate}[label=(M\arabic*)]
\item Logistic regression on the aggregated training set. This serves as a benchmark for comparison.
\item Logistic regression on each client.
\item Federated sparse sliced inverse regression followed by logistic regression on dimension reduction variables on each client.
\item Federated principal component analysis by Grammenos et al. \cite{grammenos2020fpca} followed by logistic regression on dimension reduction variables on each client.
\item Federated principal component analysis by Chai et al. \cite{chai2021fedsvd} followed by logistic regression on dimension reduction variables on each client.
\end{enumerate}
We choose the number of dimension reduction directions $K = 1$ for all dimension reduction methods, and the penalization parameter $\rho$ of our method is chosen by cross validation.
We repeat this process 100 times to obtain the mean performance of each method and the standard deviations.
The results are shown in Table~\ref{tab3}.
It is clearly seen that FedSSIR (M3) performs better than its competitors (M4) and (M5).
Also, FedSSIR performs better than local learner (M2) as the number of clients rises, which shows the significance of collaboration.
In addition, FedSSIR is comparable to the global model (M1).
This indicates that our method successfully captures the low dimensional structure.
\begin{table}[ht]
\begin{minipage}{\textwidth}
\caption{The averaged testing accuracies based on 100 repetitions. The standard deviations are in the brackets. All entries are multiplied by 100. }
\label{tab3}
\begin{center}
\begin{tabular}{@{}ccccccc@{}}
\toprule
m & $\alpha$ & (M1) & (M2) & (M3) & (M4) & (M5) \\ \midrule
5 & 1 & 79.42(0.77) & 79.18(1.78) & 79.47(1.64) & 74.77(2.91) & 74.77(2.91) \\
5 & 2 & 79.71(0.70) & 79.86(1.45) & 79.96(1.47) & 75.22(2.51) & 75.22(2.51) \\
5 & 5 & 80.23(0.52) & 80.68(0.86) & 80.82(0.81) & 76.50(1.48) & 76.50(1.48) \\
10 & 1 & 79.37(0.60) & 78.66(1.46) & 79.17(1.45) & 74.51(2.34) & 74.51(2.34) \\
10 & 2 & 79.66(0.57) & 79.13(1.11) & 79.63(1.15) & 74.55(1.74) & 74.55(1.74) \\
10 & 5 & 80.20(0.37) & 80.24(0.63) & 80.63(0.63) & 76.32(1.06) & 76.33(1.06) \\
20 & 1 & 79.19(0.45) & 77.55(1.20) & 78.83(1.10) & 73.86(1.75) & 73.87(1.76) \\
20 & 2 & 79.56(0.42) & 78.46(1.01) & 79.54(0.83) & 74.71(1.54) & 74.71(1.54) \\
20 & 5 & 80.23(0.38) & 79.64(0.61) & 80.52(0.57) & 76.11(0.92) & 76.11(0.92) \\
30 & 1 & 79.13(0.37) & 77.06(1.11) & 78.99(0.98) & 73.93(1.60) & 73.94(1.59) \\
30 & 2 & 79.56(0.35) & 77.74(0.84) & 79.50(0.77) & 74.65(1.18) & 74.65(1.18) \\
30 & 5 & 80.19(0.33) & 78.92(0.55) & 80.50(0.42) & 75.98(0.68) & 75.98(0.68) \\
\botrule
\end{tabular}
\end{center}
\end{minipage}
\end{table}
The second dataset is the Communities and Crime dataset~\citep{misc_communities_and_crime_183}.
It contains 1,944 observations of communities and 100 predictive features.
The task is to predict the total number of violent crimes per 100K population of each sample.
We first separate the communities into different clients by their states and select clients with sample size larger than 80 for our experiment.
Thus we have 7 clients and a total of 1066 samples.
Then we randomly partition the samples on each client into a training set and a test set of equal size.
We use linear regression to construct the model and evaluate prediction performance by the relative prediction error $\sum_{i \in \text{test set}} (\hat{y}_i - y_i)^2 / \sum_{i \in \text{test set}} (y_i - \bar{y})^2$.
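This metric is a ratio of the test-set residual sum of squares to the total squared deviation from the test-set mean; a minimal sketch of our reading of it (the function name is ours):

```python
import numpy as np

def relative_prediction_error(y_true, y_pred):
    """Residual sum of squares divided by total squared deviation
    from the test-set mean (our reading of the paper's metric)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    num = np.sum((y_pred - y_true) ** 2)
    den = np.sum((y_true - y_true.mean()) ** 2)
    return num / den
```

A perfect predictor scores 0, and always predicting the test-set mean scores 1, so values below 1 indicate predictive power beyond the mean.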
We compare the performance of the following competitors.
\begin{enumerate}[label=(M\arabic*)]
\item Linear regression on the aggregated training set. This serves as a benchmark for comparison.
\item Linear regression on each client.
\item Federated sparse sliced inverse regression followed by linear regression on dimension reduction variables on each client.
\item Federated principal component analysis by Grammenos et al. \cite{grammenos2020fpca} followed by linear regression on dimension reduction variables on each client.
\item Federated principal component analysis by Chai et al. \cite{chai2021fedsvd} followed by linear regression on dimension reduction variables on each client.
\end{enumerate}
We choose the number of dimension reduction directions by the BIC-type criterion, and the penalization parameter $\rho$ of our method is chosen by hold-out validation.
We repeat this process 100 times to obtain the mean performance of each method and the standard deviations.
The results are shown in Table~\ref{tab4}.
FedSSIR (M3) is significantly superior to the unsupervised learners (M4)-(M5) on this regression problem.
The local learner (M2) performs poorly in this case, which indicates that training on a single client suffers from lack of data and results in under-fitting.
Our FedSSIR method, like other federated learning approaches, helps in this case.
Moreover, FedSSIR performs better than the global learner (M1), which shows that our proposal captures the low-dimensional structure and enhances predictive power in the high-dimensional case.
\begin{table}[ht]
\begin{minipage}{\textwidth}
\caption{The averaged prediction errors based on 100 repetitions. The standard deviations are in the brackets. }
\label{tab4}
\begin{center}
\begin{tabular}{@{}ccccc@{}}
\toprule
(M1) & (M2) & (M3) & (M4) & (M5) \\ \midrule
0.355(0.024) & 2.059(0.033) & 0.312(0.023) & 0.400(0.022) & 0.398(0.022) \\
\botrule
\end{tabular}
\end{center}
\end{minipage}
\end{table}
\section{Discussions \& Conclusions}
\label{conclusion}
Sufficient dimension reduction in the context of federated learning is an exciting topic that has received little attention.
We devised the FedSSIR algorithm to estimate the sufficient dimension reduction directions in the federated setting.
With an $L_1$-norm penalty term, our method simultaneously performs dimension reduction and variable selection in the high-dimensional setting.
Our simulations and real data experiments show that FedSSIR performs well on heterogeneous data.
Moreover, this approach can easily be extended to other federated learning problems involving generalized eigenvalue decomposition.
Sufficient dimension reduction on stochastic data streams is another essential topic in the big data era \cite{cai2020online, cheng2021online}, and establishing a valid federated sufficient dimension reduction approach over several data streams is an interesting open problem.
\clearpage
\begin{appendices}
\section{Derivation of Algorithm \ref{alg1}}
In this section, we derive the ADMM algorithm for solving (\ref{eq2.10}).
We can write (\ref{eq2.10}) in the ADMM form
\begin{equation}
\label{eq2.11}
\min_{\boldsymbol{\Pi}, \mathbf{H} \in \mathcal{M}} f(\boldsymbol{\Pi}) + g(\mathbf{H}) \text{ subject to } \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} = \mathbf{H},
\end{equation}
where $f(\boldsymbol{\Pi}) = \sum_{i=1}^{m} \frac{n_i}{N} (- \tr \{ \hat{\mathbf{T}}_i \boldsymbol{\Pi} \} + \rho \| \boldsymbol{\Pi} \|_{1,1})$ and $g(\mathbf{H}) = \infty 1_{(\| \mathbf{H} \|_* > K)} + \infty 1_{(\| \mathbf{H} \|_2 > 1)}$.
Define $f_i(\boldsymbol{\Pi}) = - \tr \{ \hat{\mathbf{T}}_i \boldsymbol{\Pi} \} + \rho \| \boldsymbol{\Pi} \|_{1,1}$.
First, we form the augmented Lagrangian for (\ref{eq2.11})
\begin{equation}
\label{eqa.1}
L(\boldsymbol{\Pi}, \mathbf{H}, \mathbf{G}) = \sum_{i=1}^{m} \frac{n_i}{N} f_i(\boldsymbol{\Pi}) + g(\mathbf{H}) + \langle \mathbf{G}, \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} - \mathbf{H} \rangle + \frac{\nu}{2} \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} - \mathbf{H}\|_{\mathrm{F}}^2,
\end{equation}
where $\mathbf{G}$ is the dual variable and $\nu > 0$ is the augmented Lagrangian parameter.
The resulting ADMM algorithm is
\begin{align*}
\boldsymbol{\Pi}^{t+1} & := \argmin_{\boldsymbol{\Pi} \in \mathcal{M}} \left (\sum_{i=1}^{m} \frac{n_i}{N} \{f_i(\boldsymbol{\Pi}) + \langle \mathbf{G}^t, \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} - \mathbf{H}^t \rangle + \frac{\nu}{2} \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} - \mathbf{H}^{t}\|_{\mathrm{F}}^2 \} \right ) \\
\mathbf{H}^{t+1} & := \argmin_{\mathbf{H} \in \mathcal{M}} \left ( g(\mathbf{H}) - \langle \mathbf{G}^t, \mathbf{H} \rangle + \frac{\nu}{2} \|\mathbf{H} - \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi}^{t+1} \hat{\boldsymbol{\Sigma}}^{1/2} \|_{\mathrm{F}}^2 \right ) \\
\mathbf{G}^{t+1} & := \mathbf{G}^t + \nu (\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi}^{t+1} \hat{\boldsymbol{\Sigma}}^{1/2} - \mathbf{H}^{t+1} ).
\end{align*}
Introducing the scaled dual variable $\boldsymbol{\Gamma} = \mathbf{G} / \nu$, we obtain the scaled form of this algorithm
\begin{align}
\label{eq2.12a} \boldsymbol{\Pi}^{t+1} & := \argmin_{\boldsymbol{\Pi} \in \mathcal{M}} \left ( \sum_{i=1}^{m} \frac{n_i}{N} \{ f_i(\boldsymbol{\Pi}) + \frac{\nu}{2} \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} - (\mathbf{H}^{t} - \boldsymbol{\Gamma}^{t})\|_{\mathrm{F}}^2 \} \right ) \\
\label{eq2.12b} \mathbf{H}^{t+1} & := \argmin_{\mathbf{H} \in \mathcal{M}} \left ( g(\mathbf{H}) + \frac{\nu}{2} \|\mathbf{H} - (\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi}^{t+1} \hat{\boldsymbol{\Sigma}}^{1/2} + \boldsymbol{\Gamma}^t) \|_{\mathrm{F}}^2 \right ) \\
\label{eq2.12c} \boldsymbol{\Gamma}^{t+1} & := \boldsymbol{\Gamma}^t + \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi}^{t+1} \hat{\boldsymbol{\Sigma}}^{1/2} - \mathbf{H}^{t+1}.
\end{align}
The scaled form is simpler and easier to work with than the unscaled form.
Update for $\boldsymbol{\Pi}$: problem (\ref{eq2.12a}) decomposes into the client-specific subproblems
\begin{equation}
\label{eq2.13}
\min_{\boldsymbol{\Pi} \in \mathcal{M}} f_i(\boldsymbol{\Pi}) + \frac{\nu}{2} \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi} \hat{\boldsymbol{\Sigma}}^{1/2} - (\mathbf{H}^{t} - \boldsymbol{\Gamma}^{t})\|_{\mathrm{F}}^2.
\end{equation}
To solve (\ref{eq2.12a}), we adopt the idea of the Federated Primal-Dual method \citep{zhang2020fedpd}.
We solve the local problems first and then average the solutions to obtain a closed-form update for $\boldsymbol{\Pi}$.
We vectorize (\ref{eq2.13}) to convert it into a vector minimization problem.
Let $\boldsymbol{\pi} = \vct(\boldsymbol{\Pi})$, $\mathbf{h} = \vct(\mathbf{H})$, $\boldsymbol{\gamma} = \vct(\boldsymbol{\Gamma})$, $\boldsymbol{\tau}_i = \vct(\hat{\mathbf{T}}_i)$, and $\mathbf{C} = \hat{\boldsymbol{\Sigma}}^{1/2} \otimes \hat{\boldsymbol{\Sigma}}^{1/2}$.
Using the identity $\vct(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^{\mathrm{T}} \otimes \mathbf{A}) \vct(\mathbf{X})$, (\ref{eq2.13}) becomes
\begin{equation}
\label{eqa.2}
\min_{\boldsymbol{\pi}} - \boldsymbol{\tau}_i^{\mathrm{T}} \boldsymbol{\pi} + \rho \| \boldsymbol{\pi} \|_1 + \frac{\nu}{2} \|\mathbf{C} \boldsymbol{\pi} - \mathbf{h} + \boldsymbol{\gamma}\|_2^2.
\end{equation}
Problem (\ref{eqa.2}) has no closed-form solution, but we can consider a linearized version of it obtained by adding a quadratic term in $\boldsymbol{\pi} - \boldsymbol{\pi}^{t}$, as suggested by Fang et al. \cite{Fang2015GADMM}:
\begin{equation}
\label{eqa.3}
\boldsymbol{\pi}_i^{t+} = \argmin_{\boldsymbol{\pi}} \left ( -\boldsymbol{\tau}_i^{\mathrm{T}} \boldsymbol{\pi} + \rho \| \boldsymbol{\pi} \|_1 + \nu \{ \boldsymbol{\pi} - \boldsymbol{\pi}^t \}^{\mathrm{T}} \mathbf{m}^t + \frac{\alpha_i}{2} \| \boldsymbol{\pi} - \boldsymbol{\pi}^t \|_2^2 \right ),
\end{equation}
where $\mathbf{m}^t = \mathbf{C} (\mathbf{C} \boldsymbol{\pi}^t - \mathbf{h}^t + \boldsymbol{\gamma}^t)$.
We choose $\alpha_i = 4 \nu \lambda^2_{\max} (\hat{\boldsymbol{\Sigma}}_i)$ to ensure the convergence of the algorithm.
Converting (\ref{eqa.3}) back into matrix form gives:
\begin{equation}
\label{eqa.4}
\boldsymbol{\Pi}_i^{t+} = \argmin_{\boldsymbol{\Pi} \in \mathcal{M}} \rho \|\boldsymbol{\Pi}\|_{1,1} + \frac{\alpha_i}{2} \|\boldsymbol{\Pi} - [\boldsymbol{\Pi}^t + \frac{1}{\alpha_i} \hat{\mathbf{T}}_i - \frac{\nu}{\alpha_i} \mathbf{M}^t] \|_{\mathrm{F}}^2,
\end{equation}
where $\mathbf{M}^t = \hat{\boldsymbol{\Sigma}} \boldsymbol{\Pi}^t \hat{\boldsymbol{\Sigma}} - \hat{\boldsymbol{\Sigma}}^{1/2} (\mathbf{H}^{t} - \boldsymbol{\Gamma}^{t}) \hat{\boldsymbol{\Sigma}}^{1/2}$.
(\ref{eqa.4}) has the closed-form solution
\begin{equation}
\label{eqa.5}
\boldsymbol{\Pi}_i^{t+} = \mathrm{ST}[\boldsymbol{\Pi}^t + \frac{1}{\alpha_i} \hat{\mathbf{T}}_i - \frac{\nu}{\alpha_i} \mathbf{M}^t, \frac{\rho}{\alpha_i} ].
\end{equation}
Averaging the local solutions then yields the update of $\boldsymbol{\Pi}$:
\begin{equation*}
\boldsymbol{\Pi}^{t+1} = \sum_{i=1}^{m} \frac{n_i}{N} \boldsymbol{\Pi}_i^{t+}.
\end{equation*}
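The per-client soft-thresholding update (\ref{eqa.5}) and the weighted averaging step can be sketched as follows (a sketch; the names are ours):

```python
import numpy as np

def soft_threshold(A, thr):
    """Entrywise soft-thresholding operator ST[A, thr]."""
    return np.sign(A) * np.maximum(np.abs(A) - thr, 0.0)

def pi_update(Pi_t, T_hats, M_t, alphas, weights, nu, rho):
    """One federated Pi-update: each client soft-thresholds its linearized
    local problem, and the server averages the results by n_i / N."""
    locals_ = [
        soft_threshold(Pi_t + T_i / a_i - (nu / a_i) * M_t, rho / a_i)
        for T_i, a_i in zip(T_hats, alphas)
    ]
    return sum(w * P for w, P in zip(weights, locals_))
```

Each client only needs $\hat{\mathbf{T}}_i$ and the shared quantities $\boldsymbol{\Pi}^t$ and $\mathbf{M}^t$, so raw data never leaves the client.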
\begin{remark}
We can write (\ref{eq2.13}) in vectorized form and then the problem is reduced to solving a Lasso regression problem
\begin{equation*}
\min_{\boldsymbol{\pi}} \frac{\nu}{2} \|\mathbf{C} \boldsymbol{\pi} - \mathbf{d}_i^t\|_2^2 + \rho \|\boldsymbol{\pi}\|_1,
\end{equation*}
where $\mathbf{d}_i^t = \vct(\mathbf{H}^t - \boldsymbol{\Gamma}^t + \frac{1}{\nu} \hat{\boldsymbol{\Sigma}}^{-1/2} \hat{\mathbf{T}}_i \hat{\boldsymbol{\Sigma}}^{-1/2})$.
Although many algorithms have been proposed for Lasso regression, this transformation involves a $d^2$-dimensional Lasso problem in each iteration and increases the computational cost.
\end{remark}
\begin{remark}
Observe that $$\|\boldsymbol{\Pi}^{t+1} - \boldsymbol{\Pi}^{t}\|_{\mathrm{F}} = \Big\|\sum_{i=1}^{m} \frac{n_i}{N} (\boldsymbol{\Pi}_i^{t+} - \boldsymbol{\Pi}^{t})\Big\|_{\mathrm{F}} \leq \sum_{i=1}^{m} \frac{n_i}{N} \| \boldsymbol{\Pi}_i^{t+} - \boldsymbol{\Pi}^{t} \|_{\mathrm{F}}. $$
Theorem 6 in \cite{Fang2015GADMM} implies that $\| \boldsymbol{\Pi}_i^{t+} - \boldsymbol{\Pi}^{t} \|_{\mathrm{F}} \leq C_i / t$ for some constants $C_i$'s.
Thus, we have $\|\boldsymbol{\Pi}^{t+1} - \boldsymbol{\Pi}^{t}\|_{\mathrm{F}} \leq C / t$ for some constant $C$.
According to Lemma 4 in \cite{Fang2015GADMM}, if $\| \boldsymbol{\Pi}^{t+1} - \boldsymbol{\Pi}^{t} \|_{\mathrm{F}} = 0$, then $\boldsymbol{\Pi}^{t+1}$ is a solution to (\ref{eq2.10}).
This establishes an $\mathcal{O}(1/t)$ convergence rate for our algorithm.
\end{remark}
Update for $\mathbf{H}$: The update of $\mathbf{H}$ follows directly from Proposition 2 in \cite{Kean2018convex}.
\begin{proposition}
\label{prop1}
\cite{Kean2018convex}
Let $\sum_{j=1}^d w_j \mathbf{u}_j \mathbf{u}_j^{\mathrm{T}}$ be the singular value decomposition of $\mathbf{W}$.
Then the optimization problem $$\min_{\mathbf{H} \in \mathcal{M}} \|\mathbf{H} - \mathbf{W}\|_{\mathrm{F}}^2 \text{ subject to } \|\mathbf{H}\|_* \leq K \text{ and } \|\mathbf{H}\|_2 \leq 1 $$ has solution $\mathbf{H}^* = \sum_{j=1}^d \min \{1, \max(w_j - \gamma^*, 0)\} \mathbf{u}_j \mathbf{u}_j^{\mathrm{T}}. $
$\gamma^*$ satisfies $$\gamma^* = \argmin_{\gamma > 0} \gamma \text{ subject to } \sum_{j=1}^d \min \{1, \max(w_j - \gamma, 0)\} \leq K. $$
\end{proposition}
Thus, let $\sum_{j=1}^d w_j \mathbf{u}_j \mathbf{u}_j^{\mathrm{T}}$ be the eigendecomposition of $\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Pi}^{t+1} \hat{\boldsymbol{\Sigma}}^{1/2} + \boldsymbol{\Gamma}^t$.
By Proposition \ref{prop1}, we have $$\mathbf{H}^{t+1} = \sum_{j=1}^d \min \{1, \max(w_j - \gamma^*, 0)\} \mathbf{u}_j \mathbf{u}_j^{\mathrm{T}}, $$where $$\gamma^* = \argmin_{\gamma > 0} \gamma \text{ subject to } \sum_{j=1}^d \min \{1, \max(w_j - \gamma, 0)\} \leq K. $$
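A numerical sketch of this $\mathbf{H}$-update could look as follows: eigendecompose the input, clip the eigenvalues as in the displayed solution, and locate the smallest feasible $\gamma$ by bisection (valid because the clipped eigenvalue sum is nonincreasing in $\gamma$). The function name and any test matrices are illustrative assumptions, not code from the paper.

```python
import numpy as np

def fantope_projection(W, K, tol=1e-10):
    """Eigenvalue-clipping update of Proposition 1: clip eigenvalues of the
    symmetric matrix W to [0, 1] after shifting by the smallest feasible gamma."""
    w, U = np.linalg.eigh(W)              # spectral decomposition of W
    def clipped_sum(gamma):
        return np.sum(np.minimum(1.0, np.maximum(w - gamma, 0.0)))
    if clipped_sum(0.0) <= K:             # gamma* = 0 is already feasible
        gamma = 0.0
    else:                                 # bisection for the smallest feasible gamma
        lo, hi = 0.0, np.max(w)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if clipped_sum(mid) <= K:
                hi = mid
            else:
                lo = mid
        gamma = hi
    h = np.minimum(1.0, np.maximum(w - gamma, 0.0))
    return U @ np.diag(h) @ U.T
```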
\section{Proof of Theorem \ref{thm1}}
The proof of this theorem relies on Lemma \ref{lem1} and Lemma \ref{lem2}.
We therefore first prove these two lemmas, and then prove Theorem \ref{thm1}.
\subsection{Proof of Lemma \ref{lem1}}
\begin{proof}
For simplicity, we assume $\mathbb{E}[\mathbf{x}] = 0$ and $N = m n$, since the clients' sample means and sample sizes do not affect the theoretical results in this part.
For pair $(j, k)$, we have
$\hat{\boldsymbol{\Sigma}}_{jk} = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{n} \sum_{l_i=1}^{n} \mathbf{x}_{l_i,j}^{(i)} \mathbf{x}_{l_i,k}^{(i)} = \frac{1}{m n} \sum_{(i, l_i)=(1,1)}^{(m,n)} \mathbf{x}_{l_i,j}^{(i)} \mathbf{x}_{l_i,k}^{(i)}$
where $\hat{\boldsymbol{\Sigma}}_{jk}$ is the $(j,k)$th element of matrix $\hat{\boldsymbol{\Sigma}}$, and $\mathbf{x}_{l_i,j}^{(i)}$ is the $j$th element of $\mathbf{x}_{l_i}^{(i)}$.
Then Lemma 1 in \cite{ravikumar2011high} implies the results of Lemma \ref{lem1} here.
\end{proof}
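The identity used in this proof, namely that averaging the clients' local second-moment matrices reproduces the pooled second-moment matrix over all $N = mn$ samples, can be checked numerically. The sketch below assumes mean-zero data and equal client sizes, as in the lemma; all dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 5, 40, 3
# Mean-zero data split across m clients of equal size n (as in the lemma).
clients = [rng.standard_normal((n, d)) for _ in range(m)]

# Each client contributes its local second-moment matrix X_i^T X_i / n.
local = [X.T @ X / n for X in clients]
sigma_hat = sum(local) / m                 # server-side average

# Identical to the pooled second-moment matrix over all N = m * n samples.
X_all = np.vstack(clients)
sigma_pooled = X_all.T @ X_all / (m * n)
```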
\subsection{Proof of Lemma \ref{lem2}}
\begin{proof}
Recall that $\mathbf{x}^{(i)}_{(k)^*}$ are the concomitants of the order statistics $y^{(i)}_{(k)}$ and $m^{i}(y)$ is the inverse regression curve at client $i$.
Define the model error of the inverse regression by $\boldsymbol{\epsilon}^{(i)} = \mathbf{x}^{(i)} - m^{i}(y^{(i)})$.
Clearly, $\eps{i}{k} = \xc{i}{k} - \my{i}{k}$ are also the concomitants of $y^{(i)}_{(k)}$.
(2.1) and (2.4) in \cite{Yang1977concomitant} show that the concomitants are conditionally independent given order statistics $y^{(i)}_{(k)}$ and $\mathbb{E}[\xc{i}{k}] = \mathbb{E}[\my{i}{k}]$.
Then, $\eps{i}{k}$ are zero-mean random vectors and conditionally independent given $y^{(i)}_{(k)}$.
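Computationally, the concomitants are simply the rows of the design matrix reordered by the sorted response. The following single-client sketch (synthetic data; normalization by $n$ rather than the $n_i/N$ weighting is a simplification) illustrates this construction and the pairwise-difference matrix underlying $\hat{\mathbf{Q}}_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 4
y = rng.standard_normal(n)
X = np.outer(y, rng.standard_normal(d)) + rng.standard_normal((n, d))

# Concomitants: reorder the rows of X by the order statistics of y.
order = np.argsort(y)
X_conc = X[order]                 # X_conc[k-1] pairs with y_(k)

# Difference-based estimator built from consecutive pairs of concomitants,
# mirroring the form of Q_hat_i in the proof.
pairs = n // 2
D = X_conc[1:2 * pairs:2] - X_conc[0:2 * pairs:2]   # x_(2k)* - x_(2k-1)*
Q_hat = D.T @ D / n
```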
First, we will introduce sub-Gaussian and sub-exponential distributions.
For a random variable $z \in \mathbb{R}$, define its $\psi_1$-norm and $\psi_2$-norm as $\|z\|_{\psi_1} = \sup_{p \geq 1} p^{-1} (\mathbb{E} \lvert z \rvert^p)^{1/p}$ and $\|z\|_{\psi_2} = \sup_{p \geq 1} p^{-1/2} (\mathbb{E} \lvert z \rvert^p)^{1/p}$.
A sub-Gaussian random variable is defined to have $\psi_2$-norm bounded by a constant, and a sub-exponential random variable is defined to have $\psi_1$-norm bounded by a constant.
The following proposition states the connection between sub-Gaussian and sub-exponential random variables.
Its proof is a direct consequence of H\"older's inequality and the definitions of sub-Gaussian and sub-exponential random variables.
\begin{proposition}
\label{prop2}
Suppose $X$ and $Y$ are sub-Gaussian random variables, then $X Y$ is sub-exponential and $\|X Y\|_{\psi_1} \leq 2 \|X\|_{\psi_2} \|Y\|_{\psi_2}$.
\end{proposition}
Denote the $j$-th element of $\mathbf{x}^{(i)}$ by $\mathbf{x}^{(i)}_j$, the same for $\boldsymbol{\epsilon}^{(i)}_j$ and $m^{i}_j(y)$.
Clearly, under Condition (C2), $\mathbf{x}^{(i)}_j$ is a sub-Gaussian random variable and its $\psi_2$-norm $\|\mathbf{x}^{(i)}_j\|_{\psi_2} \leq C (\boldsymbol{\Sigma}_{j j})^{1/2}$.
Proposition 3 in \cite{Kean2018convex} states that $m^i_j(y^{(i)})$ and $\boldsymbol{\epsilon}^{(i)}_j$ are also sub-Gaussian random variables, with $\psi_2$-norms bounded by $\|\mathbf{x}^{(i)}_j\|_{\psi_2}$ and $2 \|\mathbf{x}^{(i)}_j\|_{\psi_2}$, respectively.
Now, we have
\begin{align*}
\hat{\mathbf{Q}} & = \frac{1}{N} \sum_{i=1}^{m} n_i \hat{\mathbf{Q}}_i \\
& = \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{\mathbf{x}^{(i)}_{(2k)^*} - \mathbf{x}^{(i)}_{(2k-1)^*}\}\{\mathbf{x}^{(i)}_{(2k)^*} - \mathbf{x}^{(i)}_{(2k-1)^*}\}^{\mathrm{T}} \\
& = \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{ \my{i}{2k} - \my{i}{2k-1} \} \{ \my{i}{2k} - \my{i}{2k-1} \}^{\mathrm{T}} \\
& + \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{ \eps{i}{2k} - \eps{i}{2k-1} \} \{ \my{i}{2k} - \my{i}{2k-1} \}^{\mathrm{T}} \\
& + \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{ \my{i}{2k} - \my{i}{2k-1} \} \{ \eps{i}{2k} - \eps{i}{2k-1} \}^{\mathrm{T}} \\
& + \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{ \eps{i}{2k} - \eps{i}{2k-1} \} \{ \eps{i}{2k} - \eps{i}{2k-1} \}^{\mathrm{T}} \\
& = \mathbf{M}_1 + \mathbf{M}_2 + \mathbf{M}_3 + \mathbf{M}_4.
\end{align*}
It is easy to see that
\begin{equation}
\label{eq.p1}
\|\hat{\mathbf{Q}} - \bar{\mathbf{Q}}\|_{\max} \leq \|\mathbf{M}_1\|_{\max} + \|\mathbf{M}_2\|_{\max} + \|\mathbf{M}_3\|_{\max} + \|\mathbf{M}_4 - \mathbf{Q}^*\|_{\max} + \|\mathbf{Q}^* - \bar{\mathbf{Q}}\|_{\max},
\end{equation}
where $\mathbf{Q}^* = \sum_{i=1}^{m} \frac{n_i}{N} \mathbf{Q}_i$.
Then it suffices to derive the convergence rates of the five terms on the right-hand side of (\ref{eq.p1}).
For the $(j,l)$th element of $\mathbf{M}_1$, we have
\begin{align*}
\lvert (\mathbf{M}_1)_{j l} \rvert & = \lvert \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{ \m{i}{2k}{j} - \m{i}{2k-1}{j} \} \{ \m{i}{2k}{l} - \m{i}{2k-1}{l} \} \rvert \\
& \leq \frac{1}{N} \sum_{i=1}^{m} [ \sup_{\mathcal{U}(B)} \sum_{k=2}^{n_i} \|\my{i}{k} - \my{i}{k-1} \|_{\infty}]^2 \\
& \leq \frac{m \delta^2}{N}.
\end{align*}
By Condition (C4), there exists a constant $\delta > 0$ such that $\sup_{\mathcal{U}(B)} \sum_{k=2}^{n_i} \|\my{i}{k} - \my{i}{k-1} \|_{\infty} \leq \delta$, which implies the last inequality above.
Thus, we have $\|\mathbf{M}_1\|_{\max} \leq \frac{m \delta^2}{N} \leq \frac{C \delta^2}{N^{1 - \eta}}$.
Similarly, for the $(j, l)$th element of $\mathbf{M}_2$, we have
\begin{align*}
\lvert (\mathbf{M}_2)_{j l}\rvert & = \lvert \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{\ep{i}{2k}{j} - \ep{i}{2k-1}{j}\} \{\m{i}{2k}{l} - \m{i}{2k-1}{l}\} \rvert \\
& \leq \frac{1}{N} \sum_{i=1}^{m} 2 \max_{k,j} \lvert \boldsymbol{\epsilon}^{(i)}_{k,j} \rvert [\sup_{\mathcal{U}(B)} \sum_{k=2}^{n_i} \|\my{i}{k} - \my{i}{k-1} \|_{\infty}] \\
& \leq \sum_{i=1}^{m} \frac{2 \delta}{N} \max_{k,j} \lvert \boldsymbol{\epsilon}^{(i)}_{k,j} \rvert.
\end{align*}
Then, following directly from the equivalence of sub-Gaussian properties in Lemma 5.5 in \cite{vershynin2011introduction}, for some constant $C$, we have $$\mathrm{Pr} \left (\max_{k,j} \lvert \boldsymbol{\epsilon}^{(i)}_{k,j} \rvert \geq C (\frac{N^{2 \eta}}{m^{2}}\log d)^{1/2}\right ) \leq \exp (-C' \frac{N^{2 \eta}}{m^{2}} \log d), $$ with $C' \frac{N^{2 \eta}}{m^{2}} > 2$.
Thus, with probability being at least $1 - \exp(-C' \frac{N^{2 \eta}}{m^{2}} \log d)$, we have $\lvert (\mathbf{M}_2)_{j l} \rvert \leq C\frac{(\log d)^{1/2}}{N^{1 - \eta}}$.
Taking the union bound, we have
\begin{align*}
\mathrm{Pr} \left (\|\mathbf{M}_2\|_{\max} \geq C\frac{(\log d)^{1/2}}{N^{1 - \eta}} \right ) & \leq \sum_{j,l} \mathrm{Pr} \left (\lvert (\mathbf{M}_2)_{j l} \rvert \geq C\frac{(\log d)^{1/2}}{N^{1 - \eta}} \right ) \\
& \leq d^2 \mathrm{Pr} \left (\lvert (\mathbf{M}_2)_{j l} \rvert \geq C\frac{(\log d)^{1/2}}{N^{1 - \eta}} \right ) \\
& \leq \exp(-C' \frac{N^{2 \eta}}{m^{2}} \log d + 2 \log d) \\
& \leq \exp(-C'' \log d).
\end{align*}
$\mathbf{M}_3$ can also be upper bounded similarly.
For the $(j,l)$th element of $\mathbf{M}_4 - \mathbf{Q}^*$, we have
\begin{align*}
(\mathbf{M}_4)_{j l} - \mathbf{Q}^{*}_{j l} & = \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{ \ep{i}{2k}{j} - \ep{i}{2k-1}{j} \} \{ \ep{i}{2k}{l} - \ep{i}{2k-1}{l} \} - \mathbf{Q}^{*}_{j l} \\
& = \frac{1}{N} \sum_{i=1}^{m} [ \sum_{k=1}^{n_i} (\boldsymbol{\epsilon}^{(i)}_{k, j} \boldsymbol{\epsilon}^{(i)}_{k, l} - \mathbf{Q}_{i, j l}) - \sum_{k=1}^{\lfloor n_i/2 \rfloor} \ep{i}{2k}{j} \ep{i}{2k-1}{l} \\
& - \sum_{k=1}^{\lfloor n_i/2 \rfloor} \ep{i}{2k}{l} \ep{i}{2k-1}{j}] \\
& = \mathbf{J}_1 - \mathbf{J}_2 - \mathbf{J}_3.
\end{align*}
It remains to show that $\mathbf{J}_1$, $\mathbf{J}_2$ and $\mathbf{J}_3$ are each upper bounded by $C(\log d/ N)^{1/2}$.
Since $\mathbb{E}[\boldsymbol{\epsilon}^{(i)} \boldsymbol{\epsilon}^{(i)T}] = \mathbb{E}[(\mathbf{x}^{(i)} - m^{i}(y^{(i)}))(\mathbf{x}^{(i)} - m^{i}(y^{(i)}))^{\mathrm{T}}] = \mathbb{E}[\mathbf{x}^{(i)} \mathbf{x}^{(i)T}] - \mathbb{E}[\mathbb{E}(\mathbf{x}^{(i)} \vert y^{(i)})\mathbb{E}(\mathbf{x}^{(i)} \vert y^{(i)})^{\mathrm{T}}] = \mathbb{E}[\cov(\mathbf{x}^{(i)} \vert y^{(i)})] = \mathbf{Q}_i$, combined with Proposition \ref{prop2} and the centering lemma in \cite{vershynin2018HDP}, $\boldsymbol{\epsilon}^{(i)}_{k, j}\boldsymbol{\epsilon}^{(i)}_{k,l} - \mathbf{Q}_{i, j l}$ is sub-exponential with $\psi_1$-norm bounded by $c \| \mathbf{x}\|_{\psi_2}^2$ for some constant $c > 0$.
Thus, following directly from the Bernstein-type inequality in Proposition 5.16 in \cite{vershynin2011introduction}, for some sufficiently large constant $C$, we obtain $$\mathrm{Pr} \left (\lvert \frac{1}{N} \sum_{i=1}^{m} \sum_{k=1}^{n_i} (\boldsymbol{\epsilon}^{(i)}_{k, j} \boldsymbol{\epsilon}^{(i)}_{k, l} - \mathbf{Q}_{i, j l})\rvert \geq C (\frac{\log d}{N})^{1/2} \right ) \leq \exp(-C' \log d). $$
Also, by taking the union bound, we have $$\|\mathbf{J}_1\|_{\max} \leq C (\frac{\log d}{N})^{1/2}, $$ with probability greater than $1 - \exp(-C'' \log d)$.
Recall that $\eps{i}{2k}$ and $\eps{i}{2k-1}$ are conditionally independent sub-Gaussian random vectors with mean zero given the order statistics $y^{(i)}_{(1)}, \dots, y^{(i)}_{(n_i)}$.
Thus, conditionally on the order statistics, $\ep{i}{2k}{j} \ep{i}{2k-1}{l}$ is sub-exponential with mean zero and has a $\psi_1$-norm less than $c \|\mathbf{x}\|_{\psi_2}^2$.
Similar to above, by the Bernstein-type inequality and taking the union bound, we have $$\mathrm{Pr} \left (\|\mathbf{J}_2\|_{\max} \geq C (\frac{\log d}{N})^{1/2} \mid y^{(i)}_{(1)}, \dots, y^{(i)}_{(n_i)}, i \in [m] \right ) \leq \exp(- C'' \log d). $$
Taking expectation on the order statistics, we have
\begin{align*}
\mathrm{Pr} \left (\|\mathbf{J}_2\|_{\max} \geq C (\frac{\log d}{N})^{1/2} \right ) & = \mathbb{E} \left [ \mathrm{Pr} \left (\|\mathbf{J}_2\|_{\max} \geq C (\frac{\log d}{N})^{1/2} \mid y^{(i)}_{(1)}, \dots, y^{(i)}_{(n_i)}, i \in [m] \right ) \right ] \\
& \leq \exp(-C'' \log d).
\end{align*}
$\mathbf{J}_3$ can also be upper bounded in the same manner.
Thus, with probability being at least $1 - \exp(- C' \log d)$, $\|\mathbf{M}_4 - \mathbf{Q}^{*}\|_{\max}$ is upper bounded by $C (\frac{\log d}{N})^{1/2}$.
For the $(j, l)$th element of $\mathbf{Q}^* - \bar{\mathbf{Q}}$, we have
\begin{align*}
\lvert \mathbf{Q}^{*}_{j l} - \bar{\mathbf{Q}}_{j l} \rvert & \leq \sum_{i=1}^{m} \lvert \frac{n_i}{N} - \omega_i \rvert \lvert \mathbf{Q}_{i, j l} \rvert \\
& \leq C \sum_{i=1}^{m} \lvert \frac{n_i}{N} - \omega_i \rvert.
\end{align*}
This inequality follows from Conditions (C3) and (C6).
By Lemma C.2 in \cite{Agrawal2017rl}, we have the following concentration inequality for the multinomial random vector $(n_1, \dots, n_m)$: $$\mathrm{Pr} \left (\sum_{i=1}^{m} \lvert \frac{n_i}{N} - \omega_i \rvert \geq C (\frac{\log d}{N})^{1/2} \right ) \leq \exp (-C' \log d). $$
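This multinomial concentration can be sanity-checked by simulation; the $m$, $N$, and Dirichlet weights below mirror the simulation section but are otherwise arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
m, N = 10, 2000
omega = rng.dirichlet(np.full(m, 5.0))

# Empirical check: the L1 deviation of multinomial proportions from omega
# concentrates at the N^{-1/2} scale.
devs = []
for _ in range(200):
    n_counts = rng.multinomial(N, omega)
    devs.append(np.abs(n_counts / N - omega).sum())
mean_dev = np.mean(devs)
```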
Taking the union bound, for some constants $C, C^{\prime}, C^{\prime\prime}$, we have
\begin{align*}
\mathrm{Pr} \left (\| \mathbf{Q}^* - \bar{\mathbf{Q}} \|_{\max} \geq C (\frac{\log d}{N})^{1/2} \right ) & \leq \sum_{j,l} \mathrm{Pr} \left ( \lvert \mathbf{Q}^{*}_{j l} - \bar{\mathbf{Q}}_{j l} \rvert \geq C (\frac{\log d}{N})^{1/2} \right ) \\
& \leq d^2 \mathrm{Pr} \left ( \lvert \mathbf{Q}^{*}_{j l} - \bar{\mathbf{Q}}_{j l} \rvert \geq C (\frac{\log d}{N})^{1/2} \right ) \\
& \leq d^2 \mathrm{Pr} \left ( \sum_{i=1}^{m} \lvert \frac{n_i}{N} - \omega_i \rvert \geq C^{\prime} (\frac{\log d}{N})^{1/2} \right ) \\
& \leq \exp (-C^{\prime\prime} \log d).
\end{align*}
Together with the upper bounds of $\mathbf{M}_1$, $\mathbf{M}_2$, $\mathbf{M}_3$ and $\mathbf{M}_4 - \mathbf{Q}^*$, we have $$\|\hat{\mathbf{Q}} - \bar{\mathbf{Q}}\|_{\max} \leq C (\frac{\log d}{N})^{1/2} + C^{\prime} \frac{(\log d)^{1/2}}{N^{1 - \eta}}, $$ with probability being at least $1 - \exp(-C_1 \log d)$ for some constant $C_1$.
\end{proof}
\subsection{Proof of Theorem \ref{thm1}}
\begin{proof}
Before establishing the upper bound on the distance between the true and estimated subspaces, we first consider the statistical error of the estimated projection matrix $\hat{\boldsymbol{\Pi}}$.
Proposition 4 in \cite{Kean2018convex} states that $\mathbf{V}$ is the solution of (\ref{eq2.5c}) if and only if $\mathbf{T}$ can be written as $\boldsymbol{\Sigma} \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{\mathrm{T}} \boldsymbol{\Sigma}$, where $\mathbf{V}^{\mathrm{T}} \boldsymbol{\Sigma} \mathbf{V} = \mathbf{I}_{K}$.
Let $\tilde{\mathbf{T}} = \hat{\boldsymbol{\Sigma}} \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}}$, $\tilde{\mathbf{V}} = \mathbf{V} (\mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V})^{-1/2}$, $\tilde{\boldsymbol{\Pi}} = \tilde{\mathbf{V}} \tilde{\mathbf{V}}^{\mathrm{T}}$ and $\tilde{\boldsymbol{\Lambda}} = (\mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V})^{1/2} \boldsymbol{\Lambda} (\mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V})^{1/2}$.
$\tilde{\boldsymbol{\Pi}}$ can be viewed as a bridge between $\hat{\boldsymbol{\Pi}}$ and $\boldsymbol{\Pi}$.
Thus, we obtain
\begin{equation*}
\| \hat{\boldsymbol{\Pi}} - \boldsymbol{\Pi} \|_{\mathrm{F}} \leq \| \tilde{\boldsymbol{\Pi}} - \boldsymbol{\Pi} \|_{\mathrm{F}} + \| \hat{\boldsymbol{\Pi}} - \tilde{\boldsymbol{\Pi}} \|_{\mathrm{F}}.
\end{equation*}
For the term $\| \tilde{\boldsymbol{\Pi}} - \boldsymbol{\Pi} \|_{\mathrm{F}}$, Lemma 2 in \cite{Kean2018convex} establishes that, for some constants $C_1$ and $C_2$,
\begin{equation}
\label{b3.0}
\| \tilde{\boldsymbol{\Pi}} - \boldsymbol{\Pi} \|_{\mathrm{F}} \leq C_1 K (s / N)^{1/2} \leq C_2 s \rho,
\end{equation}
with probability being at least $1 - \exp(-C' s)$.
The second inequality holds by Condition (C5) and the assumption that $\rho \geq C (\log d / N)^{1/2}$ for some constant $C$.
Let $\boldsymbol{\Delta} = \hat{\boldsymbol{\Pi}} - \tilde{\boldsymbol{\Pi}}$.
Following Gao et al. \cite{Gao2017scca}, we first derive upper and lower bounds for $\| \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2} \|_{\mathrm{F}}$.
We then use these bounds to control $\| \boldsymbol{\Delta} \|_{\mathrm{F}}$.
Upper bound for $\| \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2} \|_{\mathrm{F}}$:
First, we want to show that $\tilde{\boldsymbol{\Pi}}$ is a feasible solution of (\ref{eq2.10}).
We have
\begin{align*}
\| \hat{\boldsymbol{\Sigma}}^{1/2} \tilde{\boldsymbol{\Pi}} \hat{\boldsymbol{\Sigma}}^{1/2} \|_2 & \leq \|\hat{\boldsymbol{\Sigma}}^{1/2} \tilde{\mathbf{V}} \|_2^2 = \|\tilde{\mathbf{V}}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \tilde{\mathbf{V}} \|_2 \\
& = \|\left(\mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V}\right)^{-1 / 2} \mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V}\left(\mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \mathbf{V}\right)^{-1 / 2}\|_2 \\
& = \|\mathbf{I}_K\|_2 = 1,
\end{align*}
and
\begin{align*}
\| \hat{\boldsymbol{\Sigma}}^{1/2} \tilde{\boldsymbol{\Pi}} \hat{\boldsymbol{\Sigma}}^{1/2} \|_* & = \tr\{\hat{\boldsymbol{\Sigma}}^{1/2} \tilde{\boldsymbol{\Pi}} \hat{\boldsymbol{\Sigma}}^{1/2}\} \\
& = \tr\{\tilde{\mathbf{V}}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}} \tilde{\mathbf{V}}\} \\
& = K.
\end{align*}
Since $\hat{\boldsymbol{\Pi}}$ is the optimal solution of (\ref{eq2.10}), we have
\begin{equation*}
-\langle \hat{\mathbf{T}}, \hat{\boldsymbol{\Pi}} \rangle+\rho\|\hat{\boldsymbol{\Pi}} \|_{1,1} \leq - \langle \hat{\mathbf{T}}, \tilde{\boldsymbol{\Pi}} \rangle+\rho\|\tilde{\boldsymbol{\Pi}}\|_{1,1}.
\end{equation*}
Rearranging terms and applying H\"older's inequality, we have
\begin{align*}
\rho\|\tilde{\boldsymbol{\Pi}}\|_{1,1} - \rho\|\tilde{\boldsymbol{\Pi}} + \boldsymbol{\Delta}\|_{1,1} & \geq - \langle \tilde{\mathbf{T}}, \boldsymbol{\Delta} \rangle - \langle \hat{\mathbf{T}} - \tilde{\mathbf{T}}, \boldsymbol{\Delta} \rangle \\
& \geq - \langle \tilde{\mathbf{T}}, \boldsymbol{\Delta} \rangle - \|\hat{\mathbf{T}} - \tilde{\mathbf{T}} \|_{\max} \|\boldsymbol{\Delta}\|_{1,1}.
\end{align*}
When $\|\hat{\mathbf{T}} - \tilde{\mathbf{T}} \|_{\max} \leq \frac{\rho}{2}$, we have
\begin{equation}
\label{b3.1}
\rho\|\tilde{\boldsymbol{\Pi}}\|_{1,1} - \rho\|\tilde{\boldsymbol{\Pi}} + \boldsymbol{\Delta}\|_{1,1} + \frac{\rho}{2} \|\boldsymbol{\Delta}\|_{1,1} \geq - \langle \tilde{\mathbf{T}}, \boldsymbol{\Delta} \rangle.
\end{equation}
Let $\mathcal{S} = \mathrm{supp}(\boldsymbol{\Pi})$; then $\lvert \mathcal{S} \rvert \leq s^2$.
Let $\mathcal{S}^c$ be the complementary set of $\mathcal{S}$.
By the definition of $\tilde{\boldsymbol{\Pi}}$ and $\tilde{\mathbf{V}}$, we have $\mathrm{supp}(\tilde{\boldsymbol{\Pi}}) = \mathrm{supp}(\boldsymbol{\Pi})$.
Reconsidering the left-hand side of (\ref{b3.1}), we have
\begin{align*}
\rho\|\tilde{\boldsymbol{\Pi}} \|_{1,1} - \rho\|\tilde{\boldsymbol{\Pi}} + \boldsymbol{\Delta}\|_{1,1} + \frac{\rho}{2} \|\boldsymbol{\Delta}\|_{1,1} & = \rho\|\tilde{\boldsymbol{\Pi}}_{\mathcal{S}} \|_{1,1} - \rho\|\tilde{\boldsymbol{\Pi}}_{\mathcal{S}} + \boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} - \rho \|\boldsymbol{\Delta}_{\mathcal{S}^c}\|_{1,1} + \frac{\rho}{2} \|\boldsymbol{\Delta}\|_{1,1} \\
& \leq \rho \|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} - \rho \|\boldsymbol{\Delta}_{\mathcal{S}^c}\|_{1,1} + \frac{\rho}{2} \|\boldsymbol{\Delta}_{\mathcal{S}} + \boldsymbol{\Delta}_{\mathcal{S}^c} \|_{1,1} \\
& = \frac{3\rho}{2} \|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} - \frac{\rho}{2} \|\boldsymbol{\Delta}_{\mathcal{S}^c}\|_{1,1}.
\end{align*}
Thus, we have
\begin{equation}
\label{b3.2}
- \langle \tilde{\mathbf{T}}, \boldsymbol{\Delta} \rangle \leq \frac{3\rho}{2} \|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} - \frac{\rho}{2} \|\boldsymbol{\Delta}_{\mathcal{S}^c}\|_{1,1}.
\end{equation}
Directly from Lemma 3 in \cite{Kean2018convex}, we obtain
\begin{equation}
\label{b3.3}
\begin{aligned}
- \langle \tilde{\mathbf{T}}, \boldsymbol{\Delta} \rangle &= \langle\hat{\boldsymbol{\Sigma}}^{1/2} \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}}^{1/2}, \hat{\boldsymbol{\Sigma}}^{1/2}(\tilde{\boldsymbol{\Pi}}-\hat{\boldsymbol{\Pi}}) \hat{\boldsymbol{\Sigma}}^{1/2} \rangle \\
&= \langle\hat{\boldsymbol{\Sigma}}^{1/2} \tilde{\mathbf{V}} \tilde{\boldsymbol{\Lambda}} \tilde{\mathbf{V}}^{\mathrm{T}} \hat{\boldsymbol{\Sigma}}^{1/2}, \hat{\boldsymbol{\Sigma}}^{1/2}(\tilde{\boldsymbol{\Pi}}-\hat{\boldsymbol{\Pi}}) \hat{\boldsymbol{\Sigma}}^{1/2} \rangle \\
& \geq \frac{\lambda_{K}}{2}\|\hat{\boldsymbol{\Sigma}}^{1/2}(\tilde{\boldsymbol{\Pi}}-\hat{\boldsymbol{\Pi}}) \hat{\boldsymbol{\Sigma}}^{1/2} \|_{\mathrm{F}}^{2}-\|\tilde{\boldsymbol{\Lambda}}-\boldsymbol{\Lambda}\|_{\mathrm{F}} \|\hat{\boldsymbol{\Sigma}}^{1/2}(\tilde{\boldsymbol{\Pi}}-\hat{\boldsymbol{\Pi}}) \hat{\boldsymbol{\Sigma}}^{1/2} \|_{\mathrm{F}}.
\end{aligned}
\end{equation}
Let $\delta = \|\tilde{\boldsymbol{\Lambda}}-\boldsymbol{\Lambda}\|_{\mathrm{F}}$.
Combining (\ref{b3.2}) and (\ref{b3.3}), we have
\begin{equation}
\label{b3.4}
\begin{aligned}
\lambda_{K}\|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2}\|_{\mathrm{F}}^{2} - 2 \delta \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2}\|_{\mathrm{F}} & \leq 3 \rho\|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} - \rho\|\boldsymbol{\Delta}_{\mathcal{S}^c}\|_{1,1} \\
& \leq 3 \rho\|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1},
\end{aligned}
\end{equation}
which implies
\begin{equation}
\label{b3.5}
\|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2}\|_{\mathrm{F}}^{2} \leq \frac{4 \delta^{2}}{\lambda_{K}^{2}}+\frac{6 \rho}{\lambda_{K}}\|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1}.
\end{equation}
Lower bound for $\| \hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2} \|_{\mathrm{F}}$:
By (\ref{b3.4}), we have
\begin{equation}
\label{b3.6}
2 \delta \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2}\|_{\mathrm{F}} + 3 \rho\|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} - \rho\|\boldsymbol{\Delta}_{\mathcal{S}^c}\|_{1,1} \geq 0.
\end{equation}
Using the fact that $a^2 + b^2 \geq 2 a b$, we have
\begin{equation}
\label{b3.7}
\frac{\delta^2}{\lambda_K} + \lambda_K \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2}\|_{\mathrm{F}}^2 \geq 2 \delta \|\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1/2}\|_{\mathrm{F}}.
\end{equation}
Combining (\ref{b3.5}), (\ref{b3.6}) and (\ref{b3.7}), we obtain
\begin{equation}
\label{b3.8}
\frac{5 \delta^2}{\lambda_K} + 9 \rho\|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} - \rho\|\boldsymbol{\Delta}_{\mathcal{S}^c}\|_{1,1} \geq 0.
\end{equation}
Inequality (\ref{b3.8}) shows that $\boldsymbol{\Delta}$ lies in a restricted set.
We further partition the set $\mathcal{S}^c$ into $J$ subsets.
$\mathcal{S}^c_1$ contains the indices corresponding to the largest $l$ entries of $\lvert \boldsymbol{\Delta} \rvert$, $\mathcal{S}^c_2$ contains the indices corresponding to the second largest $l$ entries of $\lvert \boldsymbol{\Delta} \rvert$, and so forth, with $\lvert \mathcal{S}^c_J \rvert \leq l$.
By the triangle inequality, we have
\begin{equation}
\label{b3.10}
\begin{aligned}
\|\hat{\boldsymbol{\Sigma}}^{1 / 2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1 / 2}\|_{\mathrm{F}} & \geq\|\hat{\boldsymbol{\Sigma}}^{1 / 2} \boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}} \hat{\boldsymbol{\Sigma}}^{1 / 2}\|_{\mathrm{F}}-\sum_{j=2}^{J}\|\hat{\boldsymbol{\Sigma}}^{1 / 2} \boldsymbol{\Delta}_{\mathcal{S}_{j}^{c}} \hat{\boldsymbol{\Sigma}}^{1 / 2}\|_{\mathrm{F}} \\
& \geq \lambda_{\min }(\hat{\boldsymbol{\Sigma}}, s+l)\|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}}-\lambda_{\max }(\hat{\boldsymbol{\Sigma}}, l) \sum_{j=2}^{J}\|\boldsymbol{\Delta}_{\mathcal{S}_{j}^{c}}\|_{\mathrm{F}}.
\end{aligned}
\end{equation}
The definitions of $\lambda_{\min}(\hat{\boldsymbol{\Sigma}}, s)$ and $\lambda_{\max}(\hat{\boldsymbol{\Sigma}}, s)$ are deferred to Appendix \ref{defs}.
The second inequality follows directly from the definitions.
For $\|\boldsymbol{\Delta}_{\mathcal{S}_{j}^{c}}\|_{\mathrm{F}}$, we have
\begin{equation*}
\sum_{j=2}^{J} \|\boldsymbol{\Delta}_{\mathcal{S}_{j}^{c}}\|_{\mathrm{F}} \leq l^{1/2} \sum_{j=2}^{J} \|\boldsymbol{\Delta}_{\mathcal{S}_{j}^{c}}\|_{\max} \leq l^{-1/2} \sum_{j=2}^{J} \|\boldsymbol{\Delta}_{\mathcal{S}_{j-1}^{c}}\|_{1, 1} \leq l^{-1/2} \|\boldsymbol{\Delta}_{\mathcal{S}^c}\|_{1, 1},
\end{equation*}
which, combined with (\ref{b3.8}), yields
\begin{equation}
\label{b3.9}
\begin{aligned}
\sum_{j=2}^{J} \|\boldsymbol{\Delta}_{\mathcal{S}_{j}^{c}}\|_{\mathrm{F}} & \leq \frac{5 \delta^2}{\rho \lambda_K l^{1/2}} + 9 l^{-1/2} \|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} \\
& \leq \frac{5 \delta^2}{\rho \lambda_K l^{1/2}} + 9 s l^{-1/2} \|\boldsymbol{\Delta}_{\mathcal{S}}\|_{\mathrm{F}},
\end{aligned}
\end{equation}
where the second inequality follows from the Cauchy-Schwarz inequality.
Substituting (\ref{b3.9}) into (\ref{b3.10}), we have
\begin{equation}
\label{b3.11}
\begin{aligned}
\|\hat{\boldsymbol{\Sigma}}^{1 / 2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1 / 2}\|_{\mathrm{F}} & \geq \lambda_{\min }(\hat{\boldsymbol{\Sigma}}, s+l)\|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}} - \lambda_{\max }(\hat{\boldsymbol{\Sigma}}, l) (\frac{5 \delta^2}{\rho \lambda_K l^{1/2}} + 9 s l^{-1/2} \|\boldsymbol{\Delta}_{\mathcal{S}}\|_{\mathrm{F}}) \\
& \geq (\lambda_{\min }(\hat{\boldsymbol{\Sigma}}, s+l) - 9 s l^{-1/2} \lambda_{\max }(\hat{\boldsymbol{\Sigma}}, l)) \|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}} - \frac{5 \lambda_{\max }(\hat{\boldsymbol{\Sigma}}, l) \delta^2}{\rho \lambda_K l^{1/2}}.
\end{aligned}
\end{equation}
By Condition (C6) and Lemma 6 in \cite{Kean2018convex}, setting $c_1 = l / s^2$, for some constant $C$ we have
\begin{equation}
\label{b3.12}
c^{-1} - C (\frac{(c_1 s^2 + s) \log (ed)}{N})^{1/2} \leq \lambda_{\min }(\hat{\boldsymbol{\Sigma}}, s+l) \leq \lambda_{\max }(\hat{\boldsymbol{\Sigma}}, s+l) \leq c + C (\frac{(c_1 s^2 + s) \log (ed)}{N})^{1/2},
\end{equation}
with probability being at least $1 - \exp (-C^{\prime} (c_1 s^2 + s) \log (ed))$.
Substituting (\ref{b3.12}) into (\ref{b3.11}) and using the assumption that $N > C_1 s^2 \log d/ \lambda_K^2$, we have that $\lambda_{\min }(\hat{\boldsymbol{\Sigma}}, s+l) - 9 s l^{-1/2} \lambda_{\max }(\hat{\boldsymbol{\Sigma}}, l)$ is lower bounded by some constant $C_2$ when $c_1$ is sufficiently large.
Also, $\lambda_{\max }(\hat{\boldsymbol{\Sigma}}, l)$ is bounded by some constant $C_3$ similarly.
Thus we have
\begin{equation}
\label{b3.13}
\|\hat{\boldsymbol{\Sigma}}^{1 / 2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1 / 2}\|_{\mathrm{F}} \geq C_2 \|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}} - C_3 \frac{\delta^2}{\rho \lambda_K l^{1/2}}.
\end{equation}
Now, we have obtained the upper and lower bounds of $\|\hat{\boldsymbol{\Sigma}}^{1 / 2} \boldsymbol{\Delta} \hat{\boldsymbol{\Sigma}}^{1 / 2}\|_{\mathrm{F}}$.
Combining (\ref{b3.5}) and (\ref{b3.13}) and using the fact that $\|\boldsymbol{\Delta}_{\mathcal{S}}\|_{1,1} \leq \|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{1,1} \leq s \|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}}$, we have
\begin{equation*}
C_2 \|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}} \leq C_3 \frac{\delta^2}{\rho \lambda_K l^{1/2}} + (\frac{4 \delta^{2}}{\lambda_{K}^{2}}+\frac{6 s \rho}{\lambda_{K}}\|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}})^{1/2}.
\end{equation*}
Squaring both sides, we get
\begin{equation*}
\|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}}^2 \leq C_4 \frac{\delta^4}{\rho^2 \lambda_K^2 l} + C_5 \frac{\delta^{2}}{\lambda_{K}^{2}} + C_6 \frac{s \rho}{\lambda_{K}}\|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}}.
\end{equation*}
Thus, we have
\begin{equation}
\label{b3.14}
\|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}}^2 \leq C_7 (\frac{s^2 \rho^2}{\lambda_{K}^2} + \frac{\delta^4}{\rho^2 \lambda_K^2 l} + \frac{\delta^{2}}{\lambda_{K}^{2}}).
\end{equation}
Now, we can solve for the upper bound of $\| \boldsymbol{\Delta} \|_{\mathrm{F}}$.
By (\ref{b3.9}) and (\ref{b3.14}), we have
\begin{equation}
\label{b3.15}
\begin{aligned}
\| \boldsymbol{\Delta} \|_{\mathrm{F}} & \leq \|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}} + \sum_{j=2}^{J} \|\boldsymbol{\Delta}_{\mathcal{S}_{j}^{c}}\|_{\mathrm{F}} \\
& \leq \|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}} + \frac{5 \delta^2}{\rho \lambda_K l^{1/2}} + 9 s l^{-1/2} \|\boldsymbol{\Delta}_{\mathcal{S} \cup \mathcal{S}_{1}^{c}}\|_{\mathrm{F}} \\
& \leq C_8 (\frac{s^2 \rho^2}{\lambda_{K}^2} + \frac{\delta^4}{\rho^2 \lambda_K^2 l} + \frac{\delta^{2}}{\lambda_{K}^{2}})^{1/2} + \frac{5 \delta^2}{\rho \lambda_K l^{1/2}}.
\end{aligned}
\end{equation}
Recall that $\delta = \|\tilde{\boldsymbol{\Lambda}} - \boldsymbol{\Lambda}\|_{\mathrm{F}} \leq K^{1/2} \|\tilde{\boldsymbol{\Lambda}} - \boldsymbol{\Lambda}\|_2$.
By Lemma \ref{lem1} and Lemma 2 in \cite{Kean2018convex}, we have $\|\tilde{\boldsymbol{\Lambda}} - \boldsymbol{\Lambda}\|_2 \leq C (s / N)^{1/2}$ with probability being greater than $1 - \exp(- C^{\prime} s)$.
Thus $\delta \leq C (\frac{K s}{N})^{1/2}$.
Under Condition (C5) and the assumption that $\rho \geq C'' (\log d/ N)^{1/2}$ for some constant $C''$, we have $\delta \leq C_9 \rho l^{1/4}$.
Substituting this into (\ref{b3.15}), we have
\begin{equation}
\| \boldsymbol{\Delta} \|_{\mathrm{F}} \leq C_8 (\frac{s^2 \rho^2}{\lambda_{K}^2} + C_9^4 \frac{\rho^2 }{\lambda_K^2} + C_9^2 \frac{s \rho^{2}}{\lambda_{K}^{2}})^{1/2} + C_9^2 \frac{5 \rho}{\lambda_K} \leq C_{10} \frac{s \rho}{\lambda_K}.
\end{equation}
Thus, we obtain the upper bound for $\| \hat{\boldsymbol{\Pi}} - \boldsymbol{\Pi} \|_{\mathrm{F}}$.
Combined with (\ref{b3.0}), for some constant $C$, we have
\begin{equation}
\| \hat{\boldsymbol{\Pi}} - \boldsymbol{\Pi} \|_{\mathrm{F}} \leq C \frac{s \rho}{\lambda_K},
\end{equation}
with probability being at least $1 - \exp(-C^{\prime} s)$.
From Corollary 3.2 in \cite{vu2013fantope}, for the population central subspace $\mathcal{V}$ and the estimated subspace $\hat{\mathcal{V}}$, we have
\begin{equation}
D(\mathcal{V}, \hat{\mathcal{V}}) \leq C \frac{s \rho}{\lambda_K},
\end{equation}
with probability being at least $1 - \exp(-C^{\prime} s)$.
Combining Corollary \ref{cor1} with Lemma 7 in \cite{Kean2018convex}, we can choose $\rho \leq C_1^{\prime} (\log d / N)^{1/2} + C_2^{\prime} (\log d)^{1/2} / N^{1 - \eta}$, then $\|\hat{\mathbf{T}} - \tilde{\mathbf{T}} \|_{\max} \leq \frac{\rho}{2}$ holds with probability at least $1 - \exp(-C_3^{\prime} \log d)$, which completes the proof.
\end{proof}
\section{Dimension determination and other simulation results}
\label{bic}
\subsection{Proof of Corollary \ref{cor:bic}}
We first give the proof of Corollary \ref{cor:bic}.
\begin{proof}
Theorem 6 in \cite{zhu2010cumulative} states that $\hat{K}_i$ converges to $K$ in probability; that is, for any $\gamma > 0$, there exists $N_i > 0$ such that for $n_i > N_i$, $$\mathrm{Pr} (\hat{K}_i \neq K) < \gamma. $$
Suppose $k$ clients make the correct decision $K$ but $\hat{K} \neq K$; then there must exist a $K^{\prime} \neq K$ chosen by at least $k+1$ clients.
This means that if $\hat{K} \neq K$, at least $\lfloor \frac{m}{2} \rfloor$ clients make the wrong decision.
Thus, letting $\hat{K}_{1}^{\prime}, \dots, \hat{K}_{m}^{\prime}$ be a permutation of $\hat{K}_{1}, \dots, \hat{K}_{m}$, we have
\begin{align*}
\mathrm{Pr} (\hat{K} \neq K) & \leq \sum_{i=0}^{\lfloor \frac{m}{2} \rfloor} \binom{m}{i} \mathrm{Pr}(\hat{K}_{1}^{\prime} \neq K) \times \cdots \times \mathrm{Pr}(\hat{K}_{m-i}^{\prime} \neq K) \\
& \times \mathrm{Pr}(\hat{K}_{m-i+1}^{\prime} = K) \times \cdots \times \mathrm{Pr}(\hat{K}_{m}^{\prime} = K) \\
& \leq \sum_{i=0}^{\lfloor \frac{m}{2} \rfloor} \binom{m}{i} \gamma^{m-i} 1^{i} \\
& \leq \sum_{i=0}^{\lfloor \frac{m}{2} \rfloor} \binom{m}{i} \gamma^{\lfloor \frac{m}{2} \rfloor} \\
& \leq (2 \sqrt{\gamma})^{m}.
\end{align*}
\end{proof}
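The majority-vote aggregation analyzed in this corollary can be sketched as follows; the per-client error probability $\gamma$, the number of clients, and the value of the wrong guess are hypothetical choices for illustration.

```python
import numpy as np

def majority_vote(k_hats):
    """Aggregate client-level dimension estimates by majority vote."""
    vals, counts = np.unique(np.asarray(k_hats), return_counts=True)
    return int(vals[np.argmax(counts)])

# Each client is correct with probability 1 - gamma; the voted estimate
# fails only when roughly half the clients err, so its error probability
# decays geometrically in m, consistent with the corollary's bound.
rng = np.random.default_rng(3)
K_true, m, gamma = 2, 11, 0.2
errors = 0
for _ in range(2000):
    k_hats = np.where(rng.random(m) < gamma, 3, K_true)  # wrong guess = 3
    if majority_vote(k_hats) != K_true:
        errors += 1
error_rate = errors / 2000
```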
\subsection{Simulation on non-Gaussian covariates}
We conduct a simulation to evaluate the performance of our proposal on non-Gaussian covariates.
We generate non-Gaussian covariates in the following way.
Let $\mathbf{x}^{(i)} = \mathbf{x}_{1}^{(i)} + \mathbf{x}_{2}^{(i)}$, where $\mathbf{x}_{1}^{(i)}$ is generated in the same way as before and the elements of $\mathbf{x}_{2}^{(i)}$ are sampled independently from a Bernoulli distribution.
Since $\mathbf{x}_{1}^{(i)}$ and $\mathbf{x}_{2}^{(i)}$ are independently sampled from sub-Gaussian distributions, $\mathbf{x}^{(i)}$ is also sub-Gaussian, but not Gaussian.
\begin{table}[ht]
\begin{center}
\begin{minipage}{\textwidth}
\caption{True and false positive rates, and subspace distances with $\alpha = 5$, $N = 2000$, $m = 10$, $d = 100$.
All entries are averaged across 200 runs.
The standard deviations are in the brackets.
}
\label{tabNG}
\begin{tabular}{@{}cccccccc@{}}
\toprule
& & Setting 1 & Setting 2 & Setting 3 & Setting 4 & Setting 5 & Setting 6 \\ \midrule
FedSSIR & TPR & 1.000 & 1.000 & 0.963 & 0.851 & 1.000 & 0.950 \\
& & (0) & (0) & (0.117) & (0.210) & (0) & (0.133) \\
& FPR & 0.000 & 0.002 & 0.005 & 0.007 & 0.000 & 0.011 \\
& & (0.001) & (0.005) & (0.011) & (0.012) & (0.002) & (0.013) \\
& Dist & 0.050 & 0.075 & 0.462 & 0.683 & 0.059 & 0.459 \\
& & (0.030) & (0.035) & (0.277) & (0.348) & (0.029) & (0.246) \\
SSIR & TPR & 1.000 & 1.000 & 0.764 & 0.571 & 1.000 & 0.989 \\
& & (0) & (0) & (0.250) & (0.386) & (0) & (0.064) \\
& FPR & 0.928 & 0.918 & 0.517 & 0.280 & 0.199 & 0.032 \\
& & (0.092) & (0.105) & (0.269) & (0.265) & (0.330) & (0.147) \\
& Dist & 1.819 & 1.544 & 1.802 & 1.620 & 0.877 & 0.488 \\
& & (0.134) & (0.251) & (0.166) & (0.152) & (0.538) & (0.333) \\
LassoSIR & TPR & 1.000 & 1.000 & 0.969 & 0.997 & 1.000 & 0.999 \\
& & (0) & (0) & (0.090) & (0.032) & (0) & (0.014) \\
& FPR & 0.327 & 0.270 & 0.682 & 0.991 & 0.395 & 0.996 \\
& & (0.114) & (0.104) & (0.350) & (0.074) & (0.406) & (0.057) \\
& Dist & 0.136 & 0.141 & 1.777 & 1.776 & 1.176 & 1.328 \\
& & (0.092) & (0.063) & (0.590) & (0.523) & (0.718) & (0.367) \\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
We keep $N = 2000$ and $d = 100$ and split the data across $10$ clients using the following heterogeneous setup.
We generate a random vector $\boldsymbol{\omega} \sim \mathrm{Dirichlet}_m(5)$ and $(n_1, \dots, n_m) \sim \mathrm{Multinomial}(N, \boldsymbol{\omega})$ with $m = 10$.
The number of dimension reduction directions $K$ is chosen by the BIC-type criterion.
The penalization parameter $\rho$ is selected by the hold-out validation outlined in section \ref{sec2.3}.
We also include the results of SSIR and LassoSIR.
The results are shown in Table \ref{tabNG}.
We can see that our method still performs well on non-Gaussian covariates.
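The heterogeneous split described above can be sketched directly with a Dirichlet-multinomial draw:

```python
import numpy as np

# Heterogeneous split: client proportions omega ~ Dirichlet_m(alpha), then
# client sample sizes (n_1, ..., n_m) ~ Multinomial(N, omega), as in the text.
rng = np.random.default_rng(2)
N, m, alpha = 2000, 10, 5.0

omega = rng.dirichlet(np.full(m, alpha))   # omega ~ Dirichlet_m(alpha)
sizes = rng.multinomial(N, omega)          # (n_1, ..., n_m), summing to N

# Assign each of the N sample indices to a client accordingly.
client_of = np.repeat(np.arange(m), sizes)
print(sizes, client_of.shape)
```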
\subsection{Dimension determination results}
Here, we also report the proportion of correct decisions about the dimension of the central subspace in our simulations in Table \ref{tab:bic}.
We can see that the BIC-type criterion performs satisfactorily in most cases.
As the sample size grows, the performance improves, which is consistent with our conclusion that $\hat{K}$ converges to $K$ in probability.
Our proposed BIC has similar performance under different $\boldsymbol{\omega}$ settings, which reflects that our method performs well for unbalanced datasets.
Compared with the other two methods, our proposed BIC performs more robustly under the non-Gaussian setting.
\begin{table}[ht]
\begin{minipage}{\textwidth}
\caption{The empirical probabilities of correctly estimating dimension in different simulations. }
\label{tab:bic}
\begin{center}
\begin{tabular}{@{}cccccccc@{}}
\toprule
& & Setting 1 & Setting 2 & Setting 3 & Setting 4 & Setting 5 & Setting 6 \\ \midrule
n = 100 & BIC & 1.000 & 0.950 & 0.785 & 0.855 & 1.000 & 0.805 \\
& SSIR & 0.020 & 0.350 & 0.575 & 0.065 & 0.430 & 0.530 \\
& LassoSIR & 1.000 & 1.000 & 0.030 & 0.055 & 1.000 & 0.525 \\
n = 200 & BIC & 1.000 & 0.995 & 0.825 & 0.865 & 1.000 & 0.875 \\
& SSIR & 0.015 & 0.290 & 0.450 & 0.065 & 0.410 & 0.615 \\
& LassoSIR & 0.995 & 1.000 & 0.035 & 0.100 & 1.000 & 0.995 \\ \midrule
$\alpha=1$ & BIC & 1.000 & 1.000 & 0.960 & 0.985 & 1.000 & 0.980 \\
$\alpha=2$ & BIC & 1.000 & 1.000 & 0.985 & 0.995 & 1.000 & 0.995 \\
$\alpha=5$ & BIC & 1.000 & 1.000 & 0.995 & 1.000 & 1.000 & 1.000 \\ \midrule
non-Gaussian & BIC & 1.000 & 1.000 & 0.840 & 0.760 & 1.000 & 0.870 \\
& SSIR & 0.015 & 0.385 & 0.645 & 0.045 & 0.365 & 0.790 \\
& LassoSIR & 0.995 & 1.000 & 0.025 & 0.010 & 0.700 & 0.010 \\
\botrule
\end{tabular}
\end{center}
\end{minipage}
\end{table}
\section{Auxiliary definitions}
\label{defs}
Here are some definitions used in the text.
Let $y_{(1)}, \dots, y_{(n)}$ be the order statistics of $y_1, \dots, y_n$.
Define the inverse regression function $m(y) = \mathbb{E}[\mathbf{x} \vert y]$ and let $m(y_{(k)}) = \mathbb{E}[\mathbf{x} \vert y_{(k)}]$.
The total variation of a vector-valued function $m(y)$ under the $L_{\infty}$ norm is defined as below.
\begin{definition}
\label{def1}
Let $\mathcal{U}(B)$ be the collection of all $n$-point partitions $-B \leq y_{(1)} \leq \dots \leq y_{(n)} \leq B$ on the interval $[-B, B]$, where $B > 0$ and $n \geq 1$.
The total variation of a vector-valued function $m(y)$ is defined as $\sup_{\mathcal{U}(B)} \sum_{k=2}^{n} \| m(y_{(k)}) - m(y_{(k-1)})\|_{\infty}$.
\end{definition}
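Definition \ref{def1} can be evaluated on a fixed partition as follows; the function $m$ below is an illustrative example, not one from the paper.

```python
import numpy as np

# Sum of successive increments of a vector-valued m(y), each measured in the
# sup norm, over an ordered partition (one term of the supremum in Def. 1).
def total_variation(m, y_grid):
    """Sum of ||m(y_(k)) - m(y_(k-1))||_inf over a sorted partition."""
    vals = np.array([m(y) for y in np.sort(y_grid)])
    return np.abs(np.diff(vals, axis=0)).max(axis=1).sum()

# Example: m(y) = (y, 2y) on [-1, 1]. Each increment has sup norm 2*dy,
# so the total variation over any partition of [-1, 1] equals 4.
tv = total_variation(lambda y: np.array([y, 2.0 * y]), np.linspace(-1, 1, 50))
print(tv)  # -> 4.0 (up to floating point)
```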
Meinshausen and Yu \cite{Meinshausen2009lassohigh} gave a definition of the $s$-sparse eigenvalues of a matrix $\mathbf{M}$.
\begin{definition}
\label{def2}
The $s$-sparse minimal and maximal eigenvalues of $\mathbf{M}$ are $$\lambda_{\min} (\mathbf{M}, s) = \min_{v:\|v\|_0 \leq s} \frac{v^{\mathrm{T}} \mathbf{M} v}{v^{\mathrm{T}} v}, \quad \quad \lambda_{\max} (\mathbf{M}, s) = \max_{v:\|v\|_0 \leq s} \frac{v^{\mathrm{T}} \mathbf{M} v}{v^{\mathrm{T}} v}. $$
\end{definition}
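For a small matrix, Definition \ref{def2} can be evaluated by brute force: by eigenvalue interlacing, the minimum (maximum) over $s$-sparse vectors is attained as the smallest (largest) eigenvalue over all $s \times s$ principal submatrices.

```python
import numpy as np
from itertools import combinations

# Brute-force s-sparse minimal and maximal eigenvalues of a symmetric M.
def sparse_eigenvalues(M, s):
    d = M.shape[0]
    lam_min, lam_max = np.inf, -np.inf
    for S in combinations(range(d), s):
        w = np.linalg.eigvalsh(M[np.ix_(S, S)])  # eigenvalues, ascending
        lam_min, lam_max = min(lam_min, w[0]), max(lam_max, w[-1])
    return lam_min, lam_max

# Example: for M = diag(1, 2, 3) and s = 2, the 2x2 principal submatrices
# give lambda_min(M, 2) = 1 and lambda_max(M, 2) = 3.
lo, hi = sparse_eigenvalues(np.diag([1.0, 2.0, 3.0]), 2)
print(lo, hi)
```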
There are two consistent estimators for $\mathbf{Q}_i$.
Hsing and Carroll \cite{hsing1992asym} gave a two-slice estimator $\hat{\mathbf{Q}}_i$.
Let $y^{(i)}_{(1)}, \dots, y^{(i)}_{(n_i)}$ be the order statistics of the responses.
Denote by $\mathbf{x}^{(i)}_{(k)^*}$ the value of $\mathbf{x}$ associated with $y^{(i)}_{(k)}$, termed the concomitant of $y^{(i)}_{(k)}$.
Then the estimator is
\begin{equation}
\label{eq2.6}
\hat{\mathbf{Q}}_i = \frac{1}{n_i} \sum_{k=1}^{\lfloor n_i/2 \rfloor} \{\mathbf{x}^{(i)}_{(2k)^*} - \mathbf{x}^{(i)}_{(2k-1)^*}\}\{\mathbf{x}^{(i)}_{(2k)^*} - \mathbf{x}^{(i)}_{(2k-1)^*}\}^{\mathrm{T}},
\end{equation}
where $\lfloor \cdot \rfloor$ is the floor function that returns the greatest integer less than or equal to the input.
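A minimal sketch of the two-slice estimator above, with an illustrative single-index model generating $y$ (the model and sizes are assumptions for the demo):

```python
import numpy as np

# Hsing-Carroll two-slice estimator: sort x by y, then average outer products
# of successive concomitant differences x_(2k)* - x_(2k-1)*.
def two_slice_Q(x, y):
    xs = x[np.argsort(y)]                          # concomitants, ordered by y
    diffs = xs[1::2] - xs[0::2][: len(xs[1::2])]   # x_(2k)* - x_(2k-1)*
    return diffs.T @ diffs / len(y)                # (1/n) * sum of outer products

rng = np.random.default_rng(3)
n, d = 500, 4
x = rng.standard_normal((n, d))
y = x[:, 0] + 0.1 * rng.standard_normal(n)         # assumed single-index model
Q_hat = two_slice_Q(x, y)
print(Q_hat.shape)
```

Since $y$ nearly determines the first coordinate, the estimated conditional variance $\hat{\mathbf{Q}}[0,0]$ comes out much smaller than that of the coordinates independent of $y$.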
Zhu and Ng \cite{zhu1995asymptotics} gave another estimator for $\mathbf{Q}_i$.
First, partition the dataset $S_i$ into $H$ slices with respect to the order statistics of $y^{(i)}$, and let $\Xi_1, \dots, \Xi_H$ be the index sets of the slices.
Then, the estimator $\tilde{\mathbf{Q}}_i$ is obtained by computing the average of covariance matrices on each slice:
\begin{equation}
\label{eq2.6a}
\tilde{\mathbf{Q}}_i = \frac{1}{H} \sum_{h=1}^{H} \{ \frac{1}{n_h} \sum_{l \in \Xi_h} (\mathbf{x}^{(i)}_l - \bar{\mathbf{x}}^{(i)}_{\Xi_h}) (\mathbf{x}^{(i)}_l - \bar{\mathbf{x}}^{(i)}_{\Xi_h})^{\mathrm{T}} \}.
\end{equation}
Therefore, a trivial estimator for $\mathbf{T}_i$ is $\hat{\mathbf{T}}_i = \hat{\boldsymbol{\Sigma}}_i - \hat{\mathbf{Q}}_i$.
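The slice-average estimator $\tilde{\mathbf{Q}}_i$ can be sketched as below; the data-generating model and the choice $H = 10$ are illustrative assumptions.

```python
import numpy as np

# Zhu-Ng estimator: partition by the order of y into H slices and average
# the within-slice covariance matrices.
def sliced_Q(x, y, H):
    order = np.argsort(y)
    Q = np.zeros((x.shape[1], x.shape[1]))
    for idx in np.array_split(order, H):      # index sets Xi_1, ..., Xi_H
        xc = x[idx] - x[idx].mean(axis=0)     # centre within the slice
        Q += xc.T @ xc / len(idx)             # within-slice covariance
    return Q / H

rng = np.random.default_rng(4)
x = rng.standard_normal((600, 3))
y = x[:, 0] ** 2 + 0.1 * rng.standard_normal(600)  # assumed model
Q_tilde = sliced_Q(x, y, H=10)
print(Q_tilde.shape)
```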
As in Section \ref{sec2.1}, we derive the population and empirical average conditional covariance matrices.
Letting $\boldsymbol{\epsilon} = \mathbf{x} - \mathbb{E}[\mathbf{x} \vert y]$, the population average conditional covariance matrix $\bar{\mathbf{Q}}$ can be represented as $\bar{\mathbf{Q}} = \boldsymbol{\Sigma} - \bar{\mathbf{T}} = \sum_{i=1}^{m} \omega_i (\boldsymbol{\Sigma}_i - \mathbf{T}_i) = \sum_{i=1}^{m} \omega_i \mathbf{Q}_i$.
Thus, the empirical average conditional covariance matrix $\hat{\mathbf{Q}} = \sum_{i=1}^{m} \hat{\omega}_i \hat{\mathbf{Q}}_i = \sum_{i=1}^{m} \frac{n_i}{N} \hat{\mathbf{Q}}_i$.
\end{appendices}
\section{Introduction}
Magnetic Induction Tomography (MIT) measurements rely on the inductive coupling between a radio-frequency (rf) magnetic field, the so-called primary rf field, and the object of interest, Fig.~\ref{fig:Setup} \cite{Griffiths2001, Ma2017}. As a result of this coupling, an object response is produced in the form of a secondary rf field.
For objects whose response is dominated by electrical conductivity, eddy currents induced by the primary rf field produce the secondary rf field that opposes the driving field. This leads to dissipation of the primary rf field and reduced field penetration within the object.
When the response is dominated by magnetic permeability the primary field creates within the object a magnetisation oscillating in phase with the driving field.
In general, any object shows some level of electric conductivity and magnetic permeability. The secondary rf field reflects the character of the dominating property, but the measured amplitude and phase of the inductive response depend on the ratio of the electric conductivity to the magnetic permeability, which in principle indicates the composition of the object.
MIT provides a portfolio of measurements addressing a wide range of contemporary challenges in applied physics. In the area of non-destructive testing (NDT), inductive measurements enable detection of defects either covered by insulation or concealed within the object structure \cite{Auld1999, Perez2004, Deans2017, Yoshimura2019, Bevington2020a}. Immediate applications of the technology lie in the energy sector where corrosion under insulation is responsible for a significant fraction of the losses in the transport and storage of oil and gas. Implementations of MIT in object detection and surveillance include imaging through barriers and in turbulent underwater environments that prevent the use of visual or ultrasound technology.
The use of an rf atomic magnetometer as the sensor in MIT brings superior sensitivity \cite{Savukov2005, Chalupczak2012, Deans2016, Deans2018a} as well as a range of functionalities such as the ability to obtain vector measurements \cite{Bevington2019, Bevington2019b}, high bandwidth in self-oscillating mode \cite{Bevington2019c, Bevington2020}, and tunability over a wide frequency range without compromise of performance \cite{Wickenbrock2016, Bevington2020c}. Measurement of the object response with an rf atomic magnetometer relies on monitoring the change in the amplitude and phase of the rf resonance recorded with the magnetometer while scanning across the material, Fig.~\ref{fig:Setup}.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{Fig1.pdf}
\caption{Generic components of a Magnetic Induction Tomography measurement setup. The primary rf field (green arrow) produced by the rf coil (1) causes the inductive response of the object (2) in the form of a secondary field (yellow arrow). The signal is recorded by a sensor (3). Here, the radio-frequency atomic magnetometer as a sensor monitors only the components of the secondary field orthogonal to the sensor axis (black arrow).}\label{fig:Setup}
\end{figure}
Our studies so far have been focused on the implementation of rf atomic magnetometers for inductive tomographic mapping in scenarios of defect detection and object surveillance. These include the demonstration of tomographic mapping of material thinning in steelwork, which represents the detection of corrosion under insulation \cite{Bevington2018}. The imaging provides us with a vector measurement (2D) of the secondary field, while the orientation of the sensor axis, i.e. the direction of the bias magnetic field, defines which field components are being monitored \cite{Bevington2019, Bevington2019b}. With the sensor and the primary rf field coil axes parallel to each other the magnetometer signal represents only the amplitude of the secondary field, which leads to increased image contrast. Implementation of a spin maser, an rf atomic magnetometer operating in self-oscillating mode, increases the data acquisition rate and makes the sensor immune to variations in the ambient magnetic field \cite{Bevington2019c, Bevington2020}. This increase in measurement bandwidth comes at the price of reducing the part of the defect/object signature that results from the phase-matching condition in the sensor feedback loop. The application of a pair of primary rf field coils with opposite polarity, a dual-frequency spin maser \cite{Bevington2020b}, or an external phase scan can solve this issue.
Whilst this inductive tomographic mapping can provide information about the depth and spatial extent of a defect or object, it requires a scan of the sensor over the area of interest. Although the scan time can be optimised there is a category of scenarios, such as security screening, that requires rapid measurements that are possible at a single location and determine whether more detailed screening is required. Usually, this decision is based on the ability to discriminate between different types of materials.
In this paper, we present a technique that can potentially help determine object composition and hence reduce measurement duration. It combines measurements of the angular, frequency and spatial dependence of the signal with comparisons of the object's inductive response to those of reference materials with mutually exclusive properties, such as copper (high electric conductivity, negligible magnetic permeability) and ferrite (negligible electric conductivity, high magnetic permeability). While the rf signal frequency dependence has been shown to provide discrimination between different objects, the demonstration was limited to a narrow class of purely conductive materials and required a series of extra measurements for calibration \cite{Wickenbrock2016, Deans2018c}. The discrimination discussed in \cite{Wickenbrock2016, Deans2018c} was based on the direct dependence of the inductive signal on the electrical conductivity of objects with negligible magnetic permeability. Identification of objects whose inductive response, the secondary rf field, results from both eddy current and magnetisation components is more complex. Moreover, as we demonstrate, the signal depends on the experiment geometry and the object shape, complicating both the measurements and the data analysis. We present frequency and angular dependence measurements that are performed at a single spot above the object, which can reduce the screening time. The focus of this paper is on validation of the technique, i.e. demonstration of a series of measurements that can provide discrimination between objects made of different materials. Analysis of the data for practical application could be improved by the introduction of various metrics, such as those based on machine learning.
We demonstrate two methods of inductive image analysis that can assist in identification of object composition, the first based on the signal amplitude's frequency dependence and the second using the amplitude integrated over the entire image area.
\section{Experimental setup}
The measurements described here are performed with a radio-frequency atomic magnetometer operating in a magnetically unshielded environment \cite{Bevington2018, Bevington2019, Bevington2019b}. For the purposes of the techniques described here, the technical details of the atomic magnetometer are not essential. A description of the sensor and instrumentation is presented elsewhere \cite{Bevington2019, Bevington2019b} and we limit discussion of the sensor to the enumeration of its major components. Our rf atomic magnetometer instrumentation includes three major subsystems: lasers, caesium atomic vapour contained in a paraffin-coated cell and the detection. The cell is kept at ambient temperature (atomic density $n_{\text{Cs}}=0.33 \times10^{11} \text{cm}^{-3}$) in a static magnetic bias field, created by a set of nested, orthogonal, square Helmholtz coils. The strength of the bias field defines the operating (Larmor) frequency of the sensor. The laser system produces two beams. A circularly polarised pump beam, frequency stabilized to the $6\,^2$S$_{1/2}$ F=3$\rightarrow{}6\,^2$P$_{3/2}$ F'=2 transition (D2 line, $\SI{852}{\nano\meter}$) propagates along the direction of the bias magnetic field. It creates a population imbalance within the ensemble of caesium atoms. A probe laser, whose frequency is tuned $\SI{2.75}{\giga\hertz}$ below the $6\,^2$S$_{1/2}$ F=3$\rightarrow{}6\,^2$P$_{3/2}$ F'=2 transition, propagates in a direction orthogonal to the pump beam. It monitors the atomic signal created by the coupling of the atoms and the rf magnetic fields (i.e. atomic coherence). The primary rf field, oscillating at the sensor operating frequency, is produced by a coil located in the vicinity of the object. Lift off, the distance between the primary rf field coil and the object, is between $\SI{2}{\milli\meter}$ and $\SI{20}{\milli\meter}$. The axis of the primary rf field is parallel to the bias field direction. 
It is important to stress that the atomic magnetometer can sense only the rf magnetic field that is perpendicular to the direction of the bias magnetic field. In the following we refer to the bias field direction as the axis of the sensor. The parallel orientation of the sensor axis and the primary rf field makes the sensor insensitive to the primary rf field \cite{Bevington2019b}. Consequently, the sensor readout, either measured by a lock-in amplifier or recorded by a 2 MS/s data acquisition board, monitors directly the secondary rf field. This simplifies the analysis of the data and makes the normalisation procedure, essential in \cite{Wickenbrock2016}, obsolete.
\begin{figure}[tbp]
\includegraphics[width=\columnwidth]{Fig2.pdf}
\caption{The secondary rf field produced by eddy currents (a), (c) and magnetisation (b), (d) in measurement geometries where the normal to the object surface is either parallel (a), (b) or tilted at an angle (c), (d) to the primary rf field direction. The red arrows in (b) and (d) indicate how we define the lift off in each geometry.}\label{fig:Secondary_field}
\end{figure}
\section{Measurement geometry}
In general, eddy currents and magnetisation induced within the object produce two secondary rf field components that have different amplitude and phase characteristics. Depending on the relative electric conductivity and magnetic permeability values, one of these components dominates the object's inductive response. In this section we show that the measurement geometry, specifically the relative orientation between the normal to the object surface and the sensor axis, can suppress or enhance the contribution to the measurement signal from each component of the secondary field. This provides a mechanism to distinguish between them.
Due to eddy current driven dissipation, penetration of the rf field within an electrically conductive object can be limited to a thin layer in the immediate vicinity of the surface. In the particular case of an object made from aluminium, the skin depth is $\SI{0.8}{\milli\meter}$ for a primary rf field frequency of $\SI{10}{\kilo\hertz}$. In general it can be expected that for any object the component of the secondary field due to eddy currents is produced in the immediate vicinity of the surface and its direction is parallel to the normal to the surface. This means that the secondary field direction reflects the orientation of the surface and any change in the orientation of the object surface results in a change of the direction of the secondary field. This will manifest itself as a change in the detected rf signal amplitude and phase. In contrast, in objects with negligible electrical conductivity (and hence low rf field dissipation) and high magnetic permeability, such as ferrites, the direction of the secondary field is defined by magnetisation throughout the object. It is parallel to the primary rf field direction regardless of the orientation of the object. Hence, it can be expected that the component of the secondary field produced by magnetisation in any object mirrors the primary rf field direction.
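The quoted skin depth can be reproduced from the standard formula $\delta = \sqrt{2/(\mu\sigma\omega)}$. The material constants below are assumptions: $\sigma \approx 3.5\times10^{7}\,$S/m for aluminium and $\mu = \mu_0$ (aluminium is essentially non-magnetic).

```python
import numpy as np

# Skin depth delta = sqrt(2 / (mu * sigma * omega)) for aluminium at 10 kHz.
mu_0 = 4e-7 * np.pi     # vacuum permeability, H/m
sigma = 3.5e7           # conductivity of aluminium, S/m (assumed value)
f = 10e3                # primary rf field frequency, Hz

delta = np.sqrt(2.0 / (mu_0 * sigma * 2.0 * np.pi * f))
print(f"skin depth: {delta * 1e3:.2f} mm")   # ~0.85 mm, consistent with the text
```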
To gain further insight we consider two measurement geometries, the first where the normal to an object's surface is parallel to the primary rf field, Fig.~\ref{fig:Secondary_field} (a)-(b), and the second where there is a non-zero tilt between the two, Fig.~\ref{fig:Secondary_field} (c)-(d). As discussed earlier, the primary rf field direction (green arrow) is parallel to the axis of the sensor (black arrow). The atomic magnetometer used as the sensor is insensitive to rf field components directed along this axis, so the primary rf field does not contribute to the measured signal.
In the first configuration components of the secondary field produced by both eddy currents and magnetisation are parallel to the axis of the sensor, and are in consequence invisible to it. With a non-zero tilt between the axes the direction of the component produced by eddy currents is no longer parallel to the sensor axis, making it visible to the sensor, Fig.~\ref{fig:Secondary_field} (c). The direction of the component produced by magnetisation remains parallel to the sensor axis, and so does not contribute to the detected signal, Fig.~\ref{fig:Secondary_field} (d). In general, with increasing angle between the sensor axis and the normal to the object surface the visibility of the eddy current driven component increases, while that of the magnetisation component does not change.
\begin{figure}[tbp]
\includegraphics[width=\columnwidth]{Fig3.pdf}
\caption{Plots of the amplitude (a) and phase (b) of measured rf signals generated by a single scan over pairs of the stainless steel, copper and ferrite $35\times\SI{35}{\milli\meter\squared}$ plates. Red dashed lines mark the positions of the plates. For amplitude and phase images in the lower row the normal to the surface of the plates is parallel to the primary rf field, whilst there is a $15^{\circ}$ tilt between them for images in the upper row. The axis of rotation is directed along the Y axis through the centre of the plates. The image was recorded at an operating frequency $\SI{29}{\kilo\hertz}$. }\label{fig:Tilt_image}
\end{figure}
To illustrate the differences between object responses produced by eddy currents and magnetisation in different measurement geometries, two sets of amplitude and phase inductive images were recorded. The images were generated by a single scan over three pairs of stainless steel, copper and ferrite $35\times\SI{35}{\milli\meter\squared}$ plates, marked with red dashed lines in Fig.~\ref{fig:Tilt_image}. All the plates used in the experiment were $\SI{0.5}{\milli\meter}$ thick, except the ferrite, which was $\SI{2}{\milli\meter}$ thick. The image in Fig.~\ref{fig:Tilt_image} was recorded with the normal to the object surface either parallel to the primary rf field (lower row) or with a $15^{\circ}$ tilt with respect to it (upper row).
The lift off, defined as shown in Fig.~\ref{fig:Secondary_field} (b) as the distance from the primary rf coil to the axis of plate rotation, was $\SI{10}{\milli\meter}$. The scans were performed at an operating frequency of $\SI{29}{\kilo\hertz}$. The choice of this particular frequency will be explained in the following section. The images of the ferrite plate represent the case when the inductive response is solely produced by the magnetisation of the object. Both ferrite amplitude images show a dark area produced by the centre of the plate surrounded by a bright square created by the edges. This results from the secondary field component parallel to the plate surface created by the plate edges \cite{Bevington2020d}. Both ferrite phase images show the presence of a vortex, another signature of the plate edges \cite{Bevington2020d}. For this material the amplitude and phase images recorded in different measurement geometries have the same structure and the signals have similar dynamic range, which supports the expectation that the magnetisation orientation is the same regardless of the measurement configuration. The smaller amplitude on the right-hand side of the image recorded with a $15^{\circ}$ tilt between the axes results from the larger lift off.
The images of the copper plate represent the case when the inductive response is produced by eddy currents within the object. Images recorded in different geometries differ not only in amplitude but also in phase. The latter confirms that the direction of the secondary field produced by eddy currents depends on the orientation of the object's surface. It is worth pointing out the reversed character of the copper amplitude image recorded at $15^{\circ}$ with respect to the ferrite one. The inner part representing the secondary field created by the centre of the plate is bright and is surrounded by a dark square produced by the edges.
The stainless steel represents an object that exhibits both electrical conductivity and some magnetic permeability. Because the permeability of stainless steel is smaller than that of the ferrite, the signature of the plate edges is small and neither the bright square nor the phase vortex is visible when the plate surface is not tilted.
With a $15^{\circ}$ tilt between the sensor axis and the normal to the plate surface the amplitude and phase signatures become visible. The similarity of these signatures to those produced by the copper plate indicate that in this case the secondary field also originates from eddy currents.
\begin{figure}[tbp]
\includegraphics[width=\columnwidth]{Fig4.pdf}
\caption{Amplitude (a) and phase (b) of the rf signal as a function of the angle between the normal to the object surface and the sensor axis recorded for the ferrite (dark blue points), stainless steel (light blue triangles), brass (green diamonds) and aluminium (red squares) plates. Lines connecting the points serve only as guides to the eye. For reference the signal level recorded in the absence of the sample is shown with the solid black line. The measurement was performed at an operating frequency of $\SI{6.7}{\kilo\hertz}$. The primary rf field coil is located above the centre of the plate.}\label{fig:Tilt}
\end{figure}
Figure~\ref{fig:Tilt} shows the rf signal amplitude (a) and phase (b) as a function of the angle between the normal to the object surface and the sensor axis recorded at a single location above ferrite (dark blue points), stainless steel (light blue triangles), brass (green diamonds) and aluminium (red squares) plates. The measurement was done by placing the object on a support plate attached to a rotation mount. Care was taken to ensure that the primary rf field coil was located above the axis of rotation, as in Fig.~\ref{fig:Secondary_field}, such that the change of object orientation does not affect its distance to the primary rf field coil.
The amplitude and phase of the signal produced by the ferrite plate does not change significantly with plate orientation, confirming that the secondary field generated by magnetisation mirrors the primary rf field direction. The high magnetic permeability and low electrical conductivity result in signals that are similar to those obtained in the absence of a sample.
In the case of the aluminium plate, the amplitude of the signal increases over the angle range $0^{\circ}-45^{\circ}$.
The non-zero signal amplitude in the absence of the object results from residual misalignment between the primary rf field and sensor axes.
It is worth reiterating that this is a result of the measurement configuration and magnetisation behaviour, Fig. ~\ref{fig:Secondary_field} (b) and (d), where the sensor axis, indicated by the black arrow, is parallel to the direction of the primary rf field (green arrow) and the secondary rf field (yellow arrow).
The secondary field component due to eddy currents changes direction with plate rotation. The detected signal is sensitive only to the projection of the secondary field onto the plane perpendicular to the sensor axis, with the amplitude given by the radius of this projected vector and the phase by the radial angle. Here we rotate the plate about an axis that is perpendicular to the sensor axis (Fig.~\ref{fig:Secondary_field}), which changes the radius of the projected vector but not the radial angle. As a result we see a change in the amplitude of the signal, but no change in phase.
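The projection argument above can be made concrete with a minimal geometric sketch. The assumptions are: the sensor axis is $z$, the eddy-current secondary field is taken as a unit vector along the plate normal, and the plate is rotated about the $y$ axis; only the projection onto the $xy$ plane is detected.

```python
import numpy as np

# Rotating the plate normal about y changes the length of the projection onto
# the plane perpendicular to the sensor (z) axis, but not its radial angle.
def detected(tilt_deg):
    t = np.radians(tilt_deg)
    b = np.array([np.sin(t), 0.0, np.cos(t)])   # unit secondary field along normal
    bx, by = b[0], b[1]                          # projection onto plane _|_ z
    return np.hypot(bx, by), np.degrees(np.arctan2(by, bx))

amps, phases = zip(*(detected(a) for a in (10, 20, 30, 45)))
print(amps)    # amplitude grows with tilt ...
print(phases)  # ... while the measured phase stays constant
```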
The difference between the phases measured for the ferrite and aluminium plates at any given tilt is about $120^{\circ}$ and reflects the different character of the effect that generates the secondary field. The stainless steel plate possesses significant electrical conductivity and residual magnetic permeability; while the latter dominates at low angles, the former becomes visible with increasing angle. This is reflected in the increase of signal amplitude. The lower conductivity of the stainless steel plate is reflected in the lower signal amplitude and phase relative to the aluminium plate when observed at larger tilt angles. The values of the amplitude and phase produced by the brass plate lie between those of stainless steel and aluminium, which is consistent with its intermediate conductivity.
An angular dependence of the amplitude and phase of the signal similar to that presented in Fig.~\ref{fig:Tilt} is observed for operating frequencies above $\SI{4}{\kilo\hertz}$, which confirms that the effect requires eddy current generation limited to the immediate vicinity of the surface.
Because of the measurement configuration, where the sensor has an insensitive axis that is aligned with the primary rf field, the angular dependence of the measured signal amplitude and phase is affected by an object's geometry. In the particular case of a plate the amplitude reaches a minimum at $0^{\circ}$ and $90^{\circ}$, when a surface of the plate faces the primary rf field, because a surface orthogonal to the primary rf field does not contribute to the signal. This can be seen in Fig.~\ref{fig:Tilt} (a), where the signal amplitude reaches a maximum at $45^{\circ}$ and shows signs of decreasing for bigger angles. It is worth noting that the thickness of the plate is not important. The same angular dependence of the signal amplitude would be observed in the case of a cubic box.
One might expect that, for regular shapes, the number of minima in the angular dependence of the signal amplitude and the angles at which they occur can provide information about the symmetry of an object. In the more general case, a proper understanding of how the output of the local measurement depends on object geometry is important in the reconstruction of object shape and composition.
\section{Frequency scan}
Figure~\ref{fig:Frequency_scan} shows the amplitude and phase of the rf signal as a function of the rf field frequency for ferrite (dark blue points), stainless steel (light blue triangles), brass (green diamonds) and aluminium (red squares) plates. For reference the signal recorded in the absence of an object is also shown (black solid line). A $20^{\circ}$ tilt between the normal to the plate surface and the primary rf field ensures that the component of the secondary field created by the eddy currents is visible.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{Fig5.pdf}
\caption{Amplitude (a) and phase (b) of the rf signal as a function of the operating frequency recorded for the ferrite (dark blue points), stainless steel (light blue triangles), brass (green diamonds) and aluminium (red squares) plates. Lines connecting the points serve only as guides to the eye. For reference the signal level recorded in the absence of the sample is shown with a solid black line. The plates are tilted by $20^{\circ}$ with respect to the primary rf field direction. The primary rf field coil is located above the centre of the plate.}\label{fig:Frequency_scan}
\end{figure}
Similarly to the angle scan, the set of points representing the amplitude and phase of the signal observed over the ferrite plate overlaps with that observed in the absence of a sample. As pointed out before, the non-zero signal amplitude in the absence of the object results from residual misalignment between the primary rf field and sensor axes. The decrease of the signal amplitude with operating frequency is consistent with a similar dependence observed in a standard rf spectroscopy arrangement. It is useful to take the data recorded over the ferrite and aluminium plates as points of reference. Analysis of the frequency dependencies of the stainless steel and brass amplitudes and phases relative to ferrite and aluminium indicates the presence of three frequency regimes. The first, up to $\SI{4}{\kilo\hertz}$, represents frequencies where low induction efficiency results in low eddy current density. In this range the signal amplitude for stainless steel overlaps with that for ferrite. For frequencies in the second range, $\SI{4}{\kilo\hertz}$ - $\SI{15}{\kilo\hertz}$, a transition is observed in the stainless steel signal amplitude and phase from the level observed over ferrite to that recorded over aluminium. In the third frequency range, above $\SI{15}{\kilo\hertz}$, all observed amplitude and phase values are close to their asymptotic levels. It is worth pointing out that the $\SI{29}{\kilo\hertz}$ operating frequency used to acquire the inductive images in Fig.~\ref{fig:Tilt_image} lies in this third frequency range, where the frequency dependence of the signals is negligible. It is worth comparing the frequency dependence of the phase changes of the signal generated by the brass and stainless steel objects. The phase measured with brass, although smaller in value, mirrors the dependence observed for aluminium across the whole frequency range. These phase changes indicate that the inductive properties are dominated by electrical conductivity.
With stainless steel the phase behaviour is similar to that of the ferrite at low frequencies, but approaches that of aluminium at higher frequencies. This is consistent with an object that has significant magnetic permeability and electrical conductivity.
\begin{figure*}[h!]
\includegraphics[width=\textwidth]{Fig6.pdf}
\caption{The measured change of the amplitude (a) and phase (b) of the rf signal recorded over the ferrite and aluminium $35\times\SI{35}{\milli\meter\squared}$ plates at $\SI{3.5}{\kilo\hertz}$ (red), $\SI{7}{\kilo\hertz}$ (green), and $\SI{29}{\kilo\hertz}$ (blue). (c) Images integrated over three frequencies for various types of plates. }\label{fig:Image}
\end{figure*}
\section{Object mapping - spatial scan}
In this section we present two methods of inductive image analysis. The first method uses the frequency dependence of the object response, whilst the second uses the integrated image amplitude.
We have previously shown that for conductive objects an optimum value within a 1–2 kHz frequency range can be identified, which maximises the amplitude and contrast of features (defect, edge signatures) observed in the inductive images \cite{Bevington2020a}. Similar behaviour was seen in magnetically permeable objects, but with the optimum values shifted to a higher frequency range \cite{Bevington2021}. These observations suggest that monitoring the frequency dependence of the inductive image amplitude or contrast may indicate the object composition. The first analysis method follows the concept of colour perception by the human eye: white is a mixture of the three basic (RGB) colours, and an imbalance in these colour intensities produces colour tones.
In order to explore this capability, we have recorded images showing the amplitude and phase of the rf signal over either individual or stacks of $35\times\SI{35}{\milli\meter\squared}$ plates made of various materials. The images were recorded in a measurement configuration with the sensor axis parallel to the normal to the plate surface. In this configuration the non-zero signal is created solely by the edges of the object \cite{Bevington2020d}. For each object (i.e. plate or ensemble of plates) we recorded a set of images at $\SI{3.5}{\kilo\hertz}$, $\SI{7}{\kilo\hertz}$, and $\SI{29}{\kilo\hertz}$ and used them as the basis for an RGB representation. The images in Fig.~\ref{fig:Image} (a)-(b) show scans over pure ferrite and copper plates recorded at $\SI{3.5}{\kilo\hertz}$ (red), $\SI{7}{\kilo\hertz}$ (green) and $\SI{29}{\kilo\hertz}$ (blue). Each of the frequency values used in this measurement represents one of the three frequency ranges identified in the previous section. The images within each set were normalised to the maximum amplitude value recorded within the set and summed up.
Figure~\ref{fig:Image} (c) shows the images integrated over three frequencies (RGB representation) for various other plates and sets of plates. A simple visual analysis of the images can be done by taking the ferrite and copper plates as the reference points. In this context, one can see that the image of the copper--ferrite plate set in Fig.~\ref{fig:Image} (c) is a clear combination of the two references. Moreover, measuring two sheets of copper instead of one makes the conductivity fingerprint more pronounced, which is a manifestation of the layer thickness.
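The normalise-and-sum step above can be sketched as follows. This is a minimal illustration of one plausible reading of the procedure, assuming the three single-frequency scans are available as 2D amplitude arrays (the function and argument names are ours): the set is normalised to its common maximum amplitude and the scans are stacked as the R, G and B channels.

```python
import numpy as np

def rgb_composite(img_35khz, img_7khz, img_29khz):
    """Combine three single-frequency amplitude scans into one RGB image.

    Following the text, 3.5 kHz maps to red, 7 kHz to green and
    29 kHz to blue, and the whole set shares one normalisation so that
    relative amplitudes between frequencies are preserved as colour tones.
    """
    stack = np.stack([np.asarray(img_35khz, dtype=float),
                      np.asarray(img_7khz, dtype=float),
                      np.asarray(img_29khz, dtype=float)], axis=-1)
    peak = stack.max()
    if peak > 0:
        stack = stack / peak   # common normalisation for the whole set
    return stack               # shape (ny, nx, 3), values in [0, 1]
```

A scan dominated by the high-frequency channel then renders blue-tinted, while a balanced response across all three frequencies renders close to grey/white, matching the colour-perception analogy in the text.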
Because of the surface character of the effects in electrically conductive objects, the amplitude of the image reflects the order of the materials in sets of plates.
Reversing the order of the ferrite and copper layers does not significantly modify the structure of the image but does lead to a colour change. When copper is on the top it screens the rf field and shifts the colour palette towards the copper plate.
\begin{figure*}[h!]
\includegraphics[width=\textwidth]{Fig7.pdf}
\caption{Analysis of the object composition based on the signal amplitude integrated over the image area. The inductive images of the same plates as shown in Fig.~\ref{fig:Image} were recorded at $\SI{29}{\kilo\hertz}$. Each image (object) is represented by a point in 3D space defined by three orthogonal vectors that represent the purely magnetically permeable ((1,0,0), horizontal axis), purely electrically conductive ((0,1,0), vertical axis) and background `no object detected' ((0,0,1), axis orthogonal to the plot plane) cases. The position of the point shown in the plot is given by the probabilities of the tested object having the reference properties (purely magnetically permeable, purely electrically conductive, and no object detected). (a)/(b) Distribution obtained with the custom metrics defined in the text / the standard deep learning model. Error bars correspond to the standard deviation of each set of samples.}\label{fig:Metric}
\end{figure*}
The second approach to inductive image analysis is based on the signal amplitude integrated over the image area. The method takes advantage of the opposite directions of the secondary fields created by the eddy currents and object magnetisation relative to an external reference such as a background field. Since the signal amplitude in the recorded image includes both the secondary and background fields, its total magnitude provides information on the relative orientation of these two components. Integration over an image area that includes elements like tilted surfaces, edges, etc. is in a sense analogous to the measurement of the angular dependence of the signal and can provide useful information for discrimination between object compositions. The method evaluates a measure of the probability that the object properties are the same as those of three reference standards: ferrite (an approximation of a purely magnetic object), copper (a purely conductive object) and the absence of an object (no plate present). The integrated image amplitudes of these references define three vectors of an orthogonal basis for a 3D space. The location in this space of the point representing a tested object is identified by three coordinates specified by metrics (proximity) with respect to the three references. This location is a measure of the probability of seeing the object, and of the object being electrically conductive or magnetically permeable.
We introduce the metrics, which are a measure of the proximity, $d$, of the given data point (tested object) to a specific reference standard $x$, as: $d_x=1/ \sum\limits_{i=1}^N {(R^i-R_{x}^{i})^2}$, where $R^i$ is the amplitude of a single pixel, $i$, in the tested object image, $R_{x}^{i}$ is the amplitude of the corresponding single pixel in the reference image, and $N$ is the number of pixels in the image. The index $x$ refers to the purely magnetically permeable (1,0,0), purely electrically conductive (0,1,0) or no-plate (0,0,1) case. The signal amplitudes in the reference image, $R_{x}^{i}$, are calculated as an average over 70 recorded images. Since the result, i.e. the data point location in 3D space, represents a probability, the sum of its coordinates is normalised, $d_x +d_y+d_z=1$.
It is worth discussing the structure of the metrics in more detail, particularly the choice of the inverse dependence on the amplitude difference, $(R^i-R_{x}^{i})^2$. The amplitude difference decreases as the object's properties become more similar to those of the reference. This leads to an increase in the value of the inverted factor, $1/(R^i-R_{x}^{i})^2$, which eventually becomes dominant over the other two factors, i.e. the proximities to the other two references. Normalisation of the sum of the coordinates and projection of its position on the $xy$ (magnetic permeability vs electric conductivity) plane places the measured point under the line $x + y = 1$, in other words inside the triangle confined by $x = 0$, $y = 0$ and $x + y = 1$. The smaller the integrated amplitude difference is, the higher its inverted value and the closer its normalised value approaches 1. We have tested different types of metrics, in particular a linear dependence on the integrated amplitude difference, as well as metrics including the signal phase. While we have verified that all metrics provide similar qualitative results, i.e. the spatial distribution of the tested points relative to the reference standards, the metrics described here deliver the best differentiation between different materials.
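A minimal sketch of the metric just described, assuming the tested and reference images are available as 2D amplitude arrays. The small epsilon guard against division by zero (when a tested image coincides exactly with a reference) is our addition and is not spelled out in the text.

```python
import numpy as np

def proximity_coordinates(image, ref_ferrite, ref_copper, ref_empty):
    """Locate a tested image in the 3D composition space of the text.

    d_x = 1 / sum_i (R^i - R_x^i)^2 for each reference x, then the
    three coordinates are normalised so that d_x + d_y + d_z = 1.
    Returned order: (magnetically permeable, electrically conductive,
    no object detected).
    """
    eps = 1e-12  # guards the inverse when image == reference (our addition)
    refs = [ref_ferrite, ref_copper, ref_empty]
    d = np.array([1.0 / (np.sum((np.asarray(image) - np.asarray(r)) ** 2) + eps)
                  for r in refs])
    return d / d.sum()
```

Because the coordinates sum to one, projecting onto the permeability-conductivity plane places every tested point inside the triangle bounded by $x=0$, $y=0$ and $x+y=1$, exactly as described above.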
Figure~\ref{fig:Metric} (a) shows the location of the points representing different materials (plates, sets of plates) in the 2D electrical conductivity - magnetic permeability plane. This subspace is chosen as it enables us to demonstrate the discrimination of objects based on composition. As a result of the normalisation condition the distance from the origin reflects the probability that an object is present, and hence is a demonstration of our ability to detect objects. Each point in the plot is an average over 15 images. The points are grouped near the line connecting purely magnetic objects (1,0) and purely electrically conductive objects (0,1). This indicates that all the tested samples showed a significant degree of conductivity or permeability. It is worth pointing out that the method can distinguish between the set of plates with ferrite on top of copper (blue cross) and the same set in the opposite order (yellow diamond).
The images used in the measurements shown were recorded at $\SI{29}{\kilo\hertz}$, but equivalent data taken at $\SI{7}{\kilo\hertz}$ showed similar behaviour. Data recorded at $\SI{3.5}{\kilo\hertz}$ were more scattered and led to poorer material discrimination, which is consistent with the observed weaker inductive signals at low frequencies. The similar distribution of the points in the data sets recorded at $\SI{7}{\kilo\hertz}$ and $\SI{29}{\kilo\hertz}$ indicates that even an image recorded at one frequency may contain enough information for the discrimination of object composition.
The above procedure relies on the fact that all the objects have the same shape and position within the image. More flexible alternatives could be used in the form of machine learning. To test this we implemented a convolutional neural network constructed from a combination of standard layers applied in computer vision tasks (convolutional layer, pooling layer, dense layers and activation layer). The advantage of the algorithm is its ability to make a decision based on fragments of the whole sample. The model was trained on $8\times8$ pixel fragments cut from $17\times17$ pixel images. This allowed us to increase the number of the 230 images available for training by a factor of 100. Similar fragments of scans were then used to test the trained model predictions for unknown sample types. Averaged results for all samples are presented in Fig.~\ref{fig:Metric} (b). Individual points are strongly scattered, which results in larger uncertainties than in Fig.~\ref{fig:Metric} (a), but the distributions of points are similar.
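The fragment-based augmentation described above can be sketched as follows. With $17\times17$ pixel images and $8\times8$ pixel fragments, every image yields $(17-8+1)^2 = 100$ overlapping patches, consistent with the quoted hundred-fold enlargement of the training set (the function name is ours; the network itself is omitted here).

```python
import numpy as np

def extract_patches(image, patch=8):
    """Cut all overlapping patch x patch fragments from a 2D scan.

    Sliding the window with unit stride over an (ny, nx) image gives
    (ny - patch + 1) * (nx - patch + 1) fragments, each of which can be
    fed to the classifier independently of the object's position.
    """
    img = np.asarray(image)
    ny, nx = img.shape
    return np.array([img[i:i + patch, j:j + patch]
                     for i in range(ny - patch + 1)
                     for j in range(nx - patch + 1)])
```

Training on such fragments is also what lets the model decide from a partial view of the sample, as noted in the text.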
The relatively large uncertainties in Fig.~\ref{fig:Metric} (b) are caused by the small number of object categories used for the training (free space, ferrite, copper), which was equal to the number of properties (free space, ferromagnetic, conductor). Increasing the number of object categories (predefined standards) used in the training process would significantly increase the model accuracy even if these categories did not cover all types of objects expected in tests. In other words, the ability of the algorithm to identify object properties with small uncertainty will be enhanced by introducing more standards with similar, but not necessarily the same, qualities/characteristics.
The ability to discriminate between ferrite plates and a mixture of copper and ferrite plates (Fig. ~\ref{fig:Metric}) shows that the combination of the measurement geometry and the difference in angular responses generated by eddy currents and magnetisation (Fig.~\ref{fig:Secondary_field}) allows us to see objects behind barriers or within electrically conductive enclosures.
An important concern is the practicality of implementing the presented methods in object screening. The results shown in Fig.~\ref{fig:Metric} were recorded with objects that have the same geometry and dimensions, which is a highly idealised case.
One possible approach would be the use of a geometry-non-specific procedure that combines large-scale inductive imaging of an object with the subsequent identification of appropriate features for composition analysis. The challenge of comparing results from objects with different geometries can also be addressed by more powerful approaches such as supervised and unsupervised machine learning methods, which have proven very successful in solving similar problems \cite{Wu2020}. The implementation of machine learning used to generate the results in Fig.~\ref{fig:Metric} (b) was successful despite using only the amplitude of the measured signals at a single frequency. Enhanced performance would be expected from an implementation incorporating a combination of frequency, spatial and angle data.
The main aim of this paper is the demonstration that a combination of the three degrees of freedom (spatial, angular and frequency) applied in the inductive measurements can provide sufficient information to deduce object composition. We anticipate that similar information combined with advanced machine learning techniques will provide an even more versatile and effective tool, in which an optimised measurement sequence for a specific implementation is determined by the machine controlling the process. In this scenario the actual test would consist of a series of moves using all the degrees of freedom in a sequence that is autonomously decided and continuously updated by a pretrained machine learning model that, at the end of the measurement procedure, would provide some specific information about the interrogated object based on the collected data.
\section{Conclusions}
In conclusion, we have demonstrated a series of MIT measurements that can assist in the identification of an object's composition. We showed that the angular, frequency and spatial dependencies of the rf signal recorded over the objects can discriminate between objects made of materials with different magnetic permeability and electrical conductivity. The concept relies on the different penetration depths in purely electrically conductive and magnetically permeable materials. The skin depth that reflects this penetration is a function of the operating frequency, electrical conductivity, and magnetic permeability. The observed signal dependences confirm that in electrically conductive materials the secondary field is created at the surface. This could explain why permittivity does not play a significant role in our measurements. The discrimination is made possible through the use of a sensor with an insensitive axis. This eliminates the contribution of the primary rf field to the signal and gives the sensor a different sensitivity to the eddy current and magnetisation components of an induced response. We have discussed the influence of object shape on the angular dependence of the rf signal. Whilst frequency scans can be performed at a single location over an object, the measurement of angular dependence requires a physical change of the measurement configuration. This could be performed in various ways. In the specific case of goods screening, the objects are often transferred on a conveyor belt. Locating a sensor at a bend in such a system would allow the measurement of objects at similar distances but at different orientations relative to the sensor axis. Finally, we have demonstrated that even very simple methods for acquiring and analysing inductive images can successfully discriminate between different materials.
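For reference, the textbook skin depth invoked above, $\delta = \sqrt{2/(\mu\sigma\omega)}$, can be evaluated as a short sketch. The copper conductivity in the usage note is a standard literature value ($\sigma \approx 5.8\times10^7\,$S/m), not a measurement from this work; for a ferrite, the very low conductivity means eddy-current screening is weak and the magnetisation response dominates instead.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(freq_hz, conductivity, mu_r=1.0):
    """Skin depth delta = sqrt(2 / (mu_r * mu0 * sigma * omega)).

    freq_hz      : operating frequency in Hz
    conductivity : electrical conductivity sigma in S/m
    mu_r         : relative magnetic permeability (1 for non-magnetic metals)
    """
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (mu_r * MU0 * conductivity * omega))
```

For copper at the $\SI{29}{\kilo\hertz}$ operating frequency this gives a skin depth of roughly 0.4 mm, much smaller than the plate dimensions, consistent with the surface character of the conductive response noted above.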
We acknowledge the support of the UK government Department for Business, Energy and Industrial Strategy through the UK national quantum technologies programme.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\newcommand{\bse}[2]{\section{#1}\label{se:#2}\setcounter{equation}{0}\setcounter{paragraph}{0}\labels{
}{\mnote{#2}}}
\newcommand{\bsu}[2]{\paragraph{#1.}\label{su:#2}\labels{}{\mnote{#2}}}
\newcommand{\bss}[2]{\paragraph{#1.}\label{ss:#2}\labels{}{\mnote{#2}}}
\newtheorem{proposition}{\sc Proposition}[section]
\newtheorem{theorem}[proposition]{\sc Theorem}
\newtheorem{lemma}[proposition]{\sc Lemma}
\newtheorem{remark}[proposition]{\sc Remark}
\newtheorem{definition}[proposition]{\sc Definition}
\newtheorem{corollary}[proposition]{\sc Corollary}
\newenvironment{proof}{\begin{list}{}{\leftmargin=0pt\rightmargin=0pt}
\item {\em Proof.\ }}{\rule{6pt}{6pt}\end{list}\vspace{4pt plus 2pt minus
1pt}}
\newenvironment{notation}{\begin{list}{}{\leftmargin=0pt\rightmargin=0pt}
\item {\sc Notation\ }}{\end{list}}
\newenvironment{convention}{\begin{list}{}{\leftmargin=0pt\rightmargin=0pt}
\item {\sc Convention\ }}{\end{list}}
\newcommand{\bth}[1]{\begin{theorem}\label{th:#1}\labels{}{\mnote{#1}}}
\newcommand{\bthn}[2]{\begin{theorem}[#1]\label{th:#2}\labels{}{\mnote{#2}}}
\renewcommand{\eth}{\end{theorem}}
\newcommand{\bpr}[1]{\begin{proposition}\label{pr:#1}\labels{}{\mnote{#1}}}
\newcommand{\end{proposition}}{\end{proposition}}
\newcommand{\bco}[1]{\begin{corollary}\label{co:#1}\labels{}{\mnote{#1}}}
\newcommand{\end{corollary}}{\end{corollary}}
\newcommand{\ble}[1]{\begin{lemma}\label{le:#1}\labels{}{\mnote{#1}}}
\newcommand{\end{lemma}}{\end{lemma}}
\newcommand{\bre}[1]{\begin{remark}\label{re:#1}\labels{}{\mnote{#1}}\rm}
\newcommand{\end{remark}}{\end{remark}}
\newcommand{\bde}[1]{\begin{definition}\label{de:#1}\labels{}{\mnote{#1}}\rm}
\newcommand{\end{definition}}{\end{definition}}
\newcommand{\de}[1]{{\sc #1}}
\newcommand{\begin{proof}}{\begin{proof}}
\newcommand{\end{proof}}{\end{proof}}
\newcommand{\refpr}[1]{Proposition \ref{pr:#1}}
\newcommand{\refre}[1]{Remark \ref{re:#1}}
\newcommand{\refth}[1]{Theorem \ref{th:#1}}
\newcommand{\refco}[1]{Corollary \ref{co:#1}}
\newcommand{\refle}[1]{Lemma \ref{le:#1}}
\newcommand{\refde}[1]{Definition \ref{de:#1}}
\newcommand{\refeq}[1]{(\ref{eq:#1})}
\newcommand{\refse}[1]{\S\ref{se:#1}\,\,}
\newcommand{\refsu}[1]{\S\ref{su:#1}\,\,}
\newcommand{\refpa}[1]{part \ref{#1}}
\newcommand{\beql}[1]{\labels{}{\mnote{#1}}\begin{eqnarray}\label{eq:#1}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{eqnarray*}}{\begin{eqnarray*}}
\newcommand{\end{eqnarray*}}{\end{eqnarray*}}
\newcommand{\alpha}{\alpha}
\newcommand{\beta}{\beta}
\newcommand{\epsilon}{\epsilon}
\newcommand{\gamma}{\gamma}
\newcommand{\lambda}{\lambda}
\newcommand{\Gamma}{\Gamma}
\renewcommand{\O}{\Omega}
\renewcommand{\L}{\Lambda}
\newcommand{\Lambda}{\Lambda}
\newcommand{{\Bbb C}}{{\Bbb C}}
\renewcommand{\H}{{\Bbb H}}
\newcommand{{\Bbb N}}{{\Bbb N}}
\renewcommand{\P}{{\Bbb P}}
\newcommand{{\Bbb Q}}{{\Bbb Q}}
\newcommand{{\Bbb R}}{{\Bbb R}}
\newcommand{{\Bbb Z}}{{\Bbb Z}}
\newcommand{{\cal D}}{{\cal D}}
\newcommand{{\cal E}}{{\cal E}}
\newcommand{{\cal F}}{{\cal F}}
\newcommand{{\cal G}}{{\cal G}}
\newcommand{{\cal H}}{{\cal H}}
\newcommand{{\cal K}}{{\cal K}}
\newcommand{{\cal L}}{{\cal L}}
\newcommand{{\cal M}}{{\cal M}}
\newcommand{{\cal N}}{{\cal N}}
\newcommand{{\cal O}}{{\cal O}}
\newcommand{{\cal P}}{{\cal P}}
\newcommand{{\cal Q}}{{\cal Q}}
\newcommand{{\cal S}}{{\cal S}}
\newcommand{{\cal T}}{{\cal T}}
\newcommand{Hermitian-Yang-Mills-Higgs}{Hermitian-Yang-Mills-Higgs}
\newcommand{Hermitian-Yang-Mills}{Hermitian-Yang-Mills}
\newcommand{Yang-Mills-Higgs}{Yang-Mills-Higgs}
\newcommand{{\rm i.\,e.\ }}{{\rm i.\,e.\ }}
\newcommand{{\rm e.\,g.\ }}{{\rm e.\,g.\ }}
\newcommand{$V$-bundle}{$V$-bundle}
\newcommand{\o\partial}{\o\partial}
\renewcommand{\Im}{\mbox{Im\,}}
\renewcommand{\deg}{\mbox{deg\,}}
\newcommand{\mbox{trace\,}}{\mbox{trace\,}}
\newcommand{\mbox{tr\,}}{\mbox{tr\,}}
\newcommand{{\rm Herm\,}}{{\rm Herm\,}}
\newcommand{{\rm End\,}}{{\rm End\,}}
\newcommand{{\rm vol\,}}{{\rm vol\,}}
\newcommand{{\rm Aut\,}}{{\rm Aut\,}}
\newcommand{{\rm Hom\,}}{{\rm Hom\,}}
\newcommand{\mbox{ad\,}}{\mbox{ad\,}}
\renewcommand{\and}{\quad\mbox{ and }\quad}
\renewcommand{\u}[1]{\underline{#1}}
\renewcommand{\o}[1]{\overline{#1}}
\newcommand{\oo}[1]{\overline{#1}'}
\newcommand{\quad\mbox{ or }\quad}{\quad\mbox{ or }\quad}
\renewcommand{\b}[1]{\breve{#1}}
\renewcommand{\t}[1]{\tilde{#1}}
\newcommand{{\rm i}}{{\rm i}}
\newcommand{\sum_{i=1}^n}{\sum_{i=1}^n}
\newcommand{\ilist}[1]{#1_1,\dots,#1_n}
\newcommand{|}{|}
\newcommand{\|}{\|}
\newcommand{\supsetne}{\setbox0=\hbox{$\ne$}%
\setbox1=\hbox{$ni$}%
\supset\kern-\wd1
\lower1.2ex\box0}
\newcommand{\surjarrow}{\setbox0=\hbox{$\rightarrow$}%
\rightarrow\kern-\wd0
\longrightarrow}
\newcommand{\bprn}[2]{\begin{proposition}[#1]\label{pr:#2}}
\newcommand{\wo}[1]{\widetilde{#1}}
\newcommand{\widehat}{\widehat}
\begin{document}
\title{{\bf Orbifold Riemann Surfaces and the Yang-Mills-Higgs\ Equations}} \author{{\bf
Ben
Nasatyr\thanks{Current address: Department of Mathematical Sciences,
University of Aberdeen,
Edward Wright Building, Dunbar Street, Old Aberdeen, AB9 2TY.}\,
and Brian
Steer,}\\ Peterhouse, Cambridge and Hertford College, Oxford} \date{Preprint,
August 6,
1993,\\
revised, January 5 and April 26, 1995}
\maketitle
\pagenumbering{arabic}
\setcounter{section}{-1}
\bse{Introduction}{int}
In this paper we study the $U(2)$ Yang-Mills-Higgs\ equations on orbifold Riemann surfaces.
Among other aspects, we discuss existence theorems for solutions of the Yang-Mills-Higgs\
equations, the analytic construction of the moduli space of such solutions, the
connectivity and topology of this space, its holomorphic symplectic structure
and its reinterpretations as a space of orbifold Higgs bundles or
$SL_2({\Bbb C})$-representations of (a central extension of) the orbifold fundamental
group. We follow Hitchin's original paper for (ordinary) Riemann surfaces
\cite{hi87} quite closely but there are many novelties in the orbifold
situation. (There is some overlap with a recent paper of Boden and
Yokogawa \cite{by}.)
It may help to mention here a few of our motivations.
\begin{enumerate}
\item In studying the orbifold moduli space, we are also studying the parabolic
moduli
space (see \refsu{parhig}, also \cite{si90,by,ns93}).
\item The moduli space provides interesting examples of non-compact
hyper-K\"ahler
manifolds in all dimensions divisible by 4.
\item As a special case of the existence theorem for solutions of the Yang-Mills-Higgs\
equations
we
get the existence of metrics with conical singularities and constant sectional
curvature
on `marked' Riemann surfaces (see \refco{negative curvature}, \refth{conical}
and
compare \cite{ht92}).
\item The orbifold fundamental groups we study are Fuchsian groups and their
central
extensions: these include the fundamental groups of elliptic surfaces and of
Seifert
manifolds. We obtain results on
varieties of $SL_2({\Bbb C})$- and $SL_2({\Bbb R})$-representations of such
groups (see \refse{rep} and compare {\rm e.\,g.\ }
\cite{jn85}). In particular, we prove that Teichm\"uller space for a Fuchsian
group or, equivalently, for a `marked' Riemann surface is homeomorphic to
a ball (\refth{ball}).
\item Moduli of parabolic Higgs bundles and of marked Riemann surfaces have
potential
applications in Witten's work on Chern-Simons gauge theory.
\end{enumerate}
Let $E$ be a Hermitian rank 2 $V$-bundle ({\rm i.\,e.\ } orbifold bundle) over an orbifold
Riemann
surface of negative Euler characteristic, equipped with a normalised volume
form,
$\Omega$. Let $A$ be a unitary connexion on $E$ and $\phi$ an
${\rm End\,}(E)$-valued $(1,0)$-form. Then the Yang-Mills-Higgs\ equations are
\begin{eqnarray*}
\begin{array}{rcl} F_A + [\phi,\phi^*] &=& - \pi {\rm i}\, c_1(E)\Omega I_E
\quad{\rm
and}\\ \o\partial_A\phi &=& 0.
\end{array}
\end{eqnarray*}
See \refsu{ymhymh} for details. These equations arise by dimensional-reduction
of the 4-dimensional Yang-Mills equations. Another interpretation is that they
arise if we split projectively flat $SL_2({\Bbb C})$-connexions into compact and
non-compact parts (see \refsu{repsta}).
Just as for ordinary Riemann surfaces, the moduli space, $\cal M$, of solutions
to the
Yang-Mills-Higgs\
equations has an extremely rich geometric structure which we study in the later
sections
of this paper. Let us indicate the main results and outline the contents of
each section.
The first is devoted to preliminaries on orbifold Riemann surfaces and
$V$-bundles ({\rm i.\,e.\ }
orbifold bundles): \refsu{orbint} covers the very basics, for the sake of
revision and in
order to fix notation, while \refsu{orbdiv} deals with the correspondence
between divisors
and holomorphic line $V$-bundles on an orbifold Riemann surface (some of this
may have
been anticipated in unpublished work of B. Calpini). We particularly draw
attention to
the notational conventions concerning rank 2 $V$-bundles and their rank 1
sub-$V$-bundles
established in \refsu{orbint} which are used throughout this paper.
The second section introduces Higgs $V$-bundles and the appropriate stability
condition (\refsu{highig}) and studies the basic algebraic-geometric properties
of stable Higgs $V$-bundles (\refsu{higalg})---the principal result here is
\refth{stable pairs}. This material roughly parallels \cite[\S 3]{hi87}, an
important difference being that \cite[proposition 3.4]{hi87} does not
generalise
to the orbifold case.
The third section introduces the Yang-Mills-Higgs\ equations (\refsu{ymhymh}), discusses
the existence
of solutions on stable Higgs $V$-bundles (\refsu{ymhexi}) and gives the
analytic
construction of $\cal M$ (\refsu{ymhmod}). These first three
subsections parallel \cite[\S\S 4--5]{hi87} and only in \refsu{ymhexi} is any
significant alteration to Hitchin's work necessary to allow for the orbifold
structure.
The main results are \refth{Narasimhan-Seshadri} and \refth{moduli}.
The Riemannian structure of the moduli space (including the fact that the
moduli space is
hyper-K\"ahler) is also
discussed briefly in \refsu{ymhmod}, following \cite[\S 6]{hi87}. There is one
other
subsection:
\refsu{ymhequ} sketches alternative, equivariant, arguments that can be used
for the
existence theorem and the construction of $\cal M$. This last subsection also
discusses the pull-back map between moduli spaces which arises when an
orbifold Riemann surface is the base of a branched covering by a Riemann
surface---see
\refth{sub}. We stress that equivariant arguments {\em cannot} easily be
applied
throughout the paper---difficulties arise {\rm e.\,g.\ } in \refsu{higalg}, \refse{det}
and \refse{rep}.
The fourth section discusses the topology of $\cal M$, following
\cite[\S 7]{hi87}. The results are \refth{Morse} and \refco{topology}.
General formul\ae\ for the Betti numbers are not given
but it is clear how to calculate the Poincar\'e polynomial in any
given instance (however, see \cite{by}).
The fifth section is devoted to the holomorphic symplectic structure on $\cal
M$: following \cite[\S 8]{hi87}, $\cal M$ is described as a completely
integrable Hamiltonian system via the determinant map $\det : {\cal M} \to
H^0(K^2)$, defined by taking the determinant of the Higgs field. This result
is
given as \refth{determinant map} (we believe that a similar result was obtained
by Peter Scheinost). There are a number of stages to the proof: first, it is
simpler to use parabolic Higgs bundles and these are discussed in
\refsu{parhig}; \refsu{gendet} contains the major part of the proof, with two
special cases which arise in the orbifold case being dealt with separately in
\refsu{detred} and \refsu{detspe}. Moreover, it is shown that with respect to
the determinant map $\cal M$ is a fibrewise compactification of the cotangent
bundle of the moduli space of stable $V$-bundles (\refsu{detnon}).
The final section deals with the interpretation of the moduli space as a space
of projectively flat connexions (\refsu{repsta}) or $SL_2({\Bbb C})$-representations
of (a central extension of) the orbifold fundamental group (\refsu{reprep}),
the
identification of the submanifold of $SL_2({\Bbb R})$-representations
(\refsu{reprea})
and the interpretation of one of the components as Teichm\"uller space
(\refsu{reptei}), which leads to a proof that Teichm\"uller space is
homeomorphic to a ball. The proofs are much like those of \cite[\S\S
9--11]{hi87}
and \cite{do87} and accordingly we concentrate on those aspects of the orbifold
case which are less familiar.
{\em Acknowledgements.} The great debt that the authors owe to the paper
\cite{hi87} is obvious but they are also grateful to Nigel Hitchin for many
useful conversations. Both authors would also like to thank
Hans Boden, who pointed out an error in \refco{topology}, and Mikio Furuta.
This work is an extension of part of Ben Nasatyr's doctoral thesis: he would
like to thank Simon Donaldson for his patient supervision and Oscar
Garc\'{\i}a-Prada, Peter Kronheimer and Michael Thaddeus for the contribution
that their comments made to that thesis. At that time Ben Nasatyr was a
College
Lecturer at Lady Margaret Hall, Oxford, and he spent the following year as a
Post-Doctoral Fellow at the University of British Columbia: he would like to
thank LMH and NSERC of Canada for their generous financial support and
Gabrielle
Stoy at LMH and David Austin and Dale Rolfsen at UBC for their hospitality. He
is currently the Sir Michael Sobell Research Fellow at Peterhouse, Cambridge.
Part of Brian Steer's work on this paper took place during a sabbatical year
that he spent in Bonn and Pisa: he would like to thank Friedrich Hirzebruch
and
the Max-Planck-Institut and Giuseppe Tomassini and the Scuola Normale Superiore
di Pisa for their hospitality.
\bse{Orbifold Riemann Surfaces}{orb}
This section compiles some basic facts about orbifold Riemann surfaces and
fixes some notations
which we will need in the sequel.
\bsu{Introduction to Orbifold Surfaces}{orbint}
We start with the definition and basic properties of orbifold surfaces (or
$V$-surfaces). The notion of a $V$-manifold was introduced by Satake
\cite{sa56} and re-invented as `orbifold' by Thurston. By an \de{orbifold
surface} (respectively \de{orbifold Riemann surface}) $M$ we mean a closed,
connected, smooth, real 2-manifold (respectively complex 1-manifold) together
with a finite number (assumed non-zero) of `marked' points with, at each marked
point, an associated order of isotropy $\alpha$ (an integer greater than one).
(See \cite{sa56} or \cite{sc'83} for full details of the definition.) Notice
that $M$ has an `underlying' surface where we forget about the marked points
and
orders of isotropy.
Although every point of a surface has a neighbourhood modelled on $D^2$ (the
open
unit disc), we think of a neighbourhood of a {\em marked} point as having the
form $D^2/{\Bbb Z}_\alpha$, where ${\Bbb Z}_{\alpha}$ acts on ${\Bbb R}^2 \cong {\Bbb C}$ in the standard way
as the $\alpha^{\rm th}$ roots of unity. We make this distinction because $M$ is
to be thought of as an orbifold. Orbifold ideas do not seem to have been
widely
used in the study of `surfaces with marked points'. For instance the tangent
$V$-bundle to $D^2/{\Bbb Z}_\alpha$ is $(D^2 \times {\Bbb R}^2)/{\Bbb Z}_\alpha$---this leads to an
idea
of an orbifold Riemannian metric on $M$ which corresponds to that of a metric
on
the underlying surface with conical singularities at the marked points (see
\refsu{reptei}).
We introduce the following notations, which will remain fixed throughout this
paper. Let $M$ be an orbifold (Riemann) surface with topological genus $g$;
denote by $\wo M$ the `underlying' (Riemann) surface obtained by forgetting the
marked points and isotropy. Denote the number of marked points of $M$ by $n$,
the points themselves by $p_1,\dots,p_n$ and the associated orders of isotropy
by $\alpha_1,\dots,\alpha_n$. Let $\sigma_i$ denote the standard representation of
${\Bbb Z}_{\alpha_i}$, with generator $\zeta_i = e^{2\pi{\rm i}/\alpha_i}$. At a point where
$M$ is locally $D^2$ or $D^2/\sigma_i$ use $z$ for the standard (holomorphic)
coordinate on $D^2$; call this a local \de{uniformising} coordinate and at a
marked point let $w=z^{\alpha_i}$ denote the associated local coordinate. When
giving local arguments centred at a marked point, drop the subscript $i$'s; {\rm i.\,e.\ }
use $p$ for $p_i$ and so on.
Given a surface which is the base of a branched covering we naturally consider
it to be an orbifold surface by marking each branch point with isotropy given by
the ramification index. In this way we arrive at a definition of the
\de{orbifold fundamental group} $\pi_1^V(M)$ (see \cite{sc'83}): it has the
following presentation
\beql{F-group}
\begin{array}{rcl}\pi_1^V(M) &=& \langle
a_1,b_1,\dots, a_g,b_g,q_1,\dots, q_n \quad |\\ && \quad q_i^{\alpha_i}=1,\
q_1\dots q_n[a_1,b_1]\dots[a_g,b_g]=1 \rangle. \end{array}
\end{eqnarray}
In this presentation $a_1,b_1,\dots,a_g,b_g$ generate the fundamental group of
the underlying surface while $q_1,\dots,q_n$ are represented by small loops
around the marked points. Similarly, in this situation, the Riemann-Hurwitz
formula suggests the following definition of the \de{Euler characteristic} of
an
orbifold surface: \beql{Euler characteristic} \chi(M) = 2-2g-n+\sum_{i=1}^n
\frac1{\alpha_i}.\end{eqnarray} We always work with orbifold surfaces with
$\chi(M)<0$---note that this includes cases with $g=0$ or $g=1$ in contrast to
the situation for ordinary surfaces.
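For instance, the sphere with three marked points of orders $2$, $3$ and $7$
has
\begin{eqnarray*}
\chi(M) = 2 - 0 - 3 + \frac12 + \frac13 + \frac17 = -\frac1{42} < 0,
\end{eqnarray*}
while a torus with a single marked point of order $\alpha$ has $\chi(M) =
1/\alpha - 1 < 0$.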
A \de{$V$-bundle}, $E$, with fibre ${\Bbb C}^r$, is defined as follows. We ask for a local
trivialisation around each point of $M$ with smooth (or holomorphic) transition
functions; at a marked point $p$ this should be of the form $E|_{D^2/\sigma}
\stackrel{\simeq}{\to} (D^2 \times {\Bbb C}^r)/(\sigma \times \tau)$, where $\tau$ is
an \de{isotropy representation} $\tau : {\Bbb Z}_\alpha \to GL_r({\Bbb C})$. We can always
choose coordinates in a $V$-bundle which \de{respect the $V$-structure}: that
is, if the isotropy representation is $\tau : {\Bbb Z}_\alpha \to GL_r({\Bbb C})$ then we can
choose coordinates so that $\tau$ decomposes as $\tau = \sigma^{x_1} \oplus
\sigma^{x_2} \oplus \cdots \oplus\sigma^{x_r}$, where, for $j=1,\dots,r$, $x_j$
is an integer with $0\le x_j < \alpha$ and the $x_j$ are increasing.
We will mostly be interested in rank 2 and rank 1 $V$-bundles and for these we
introduce particular notations for the isotropy, which will be fixed
throughout:
for a rank 2, respectively rank 1, $V$-bundle, denote the isotropy at a marked
point by $x$ and $x'$, respectively by $y$, with $0 \le x,x',y <\alpha$. In the
rank 2 case order $x$ and $x'$ so that $x\le x'$. If a rank 1 $V$-bundle is a
sub-$V$-bundle of a rank 2 $V$-bundle then of course $y \in \{ x,x'\}$: in
this
case, let $\epsilon \in \{ -1,0,1 \}$ describe the isotropy of the
sub-$V$-bundle, with $\epsilon = 0$ if $x=x'$, $\epsilon = -1$ if $y=x$ and
$\epsilon = 1$ if $y=x'$. Add subscript $i$'s, when necessary, to indicate the
marked point in question. Call a vector $(\epsilon_i)$, with $\epsilon_i = 0$ if
$x_i = x'_i$ and $\epsilon_i\in\{\pm1\}$ otherwise, an \de{isotropy vector}. For a
rank 2 $V$-bundle let $n_0 = \# \{ i : x_i=x_i'\}$ and for a rank 1
sub-$V$-bundle let $n_\pm = \# \{ i : \epsilon_i = \pm 1\}$.
If a $V$-bundle is, at a marked point, locally like $(D^2 \times {\Bbb C}^r)/(\sigma
\times \tau)$ then by a Hermitian metric we mean, locally, a Hermitian metric
on
$D^2 \times {\Bbb C}^r$ which is equivariant with respect to the action of ${\Bbb Z}_\alpha$
via $\sigma\times\tau$. Considering the tangent $V$-bundle, we can also define
the concepts of Riemannian metric and orientation for an orbifold surface (an
orientation of an orbifold surface is just an orientation of the underlying
surface).
We introduce the notion of a connexion in a $V$-bundle in the obvious way. The
first Chern class or degree of a $V$-bundle can be defined using Chern-Weil
theory. Notice that the degree of a $V$-bundle is a {\em rational} number,
congruent modulo the integers to the sum $\sum_{i=1}^n(y_i/\alpha_i)$, where
$(y_i)$ is the isotropy of the determinant line $V$-bundle.
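For example, the tangent $V$-bundle of $M$ has degree $\chi(M)$ (by Chern-Weil
theory applied to a metric with conical singularities), so the canonical
$V$-bundle $K_M$, which has isotropy $\sigma^{\alpha_i-1}$ at $p_i$, has degree
\begin{eqnarray*}
c_1(K_M) = -\chi(M) = 2g-2+\sum_{i=1}^n\left(1-\frac1{\alpha_i}\right),
\end{eqnarray*}
which is indeed congruent modulo the integers to
$\sum_{i=1}^n(\alpha_i-1)/\alpha_i$.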
When $E$ is a rank 2 $V$-bundle with isotropy $(x_i,x_i')$, as above, we write
\begin{eqnarray*}
c_1(\Lambda^2E) = l + \sum_{i=1}^n\frac{x'_i + x_i}{\alpha_i},
\end{eqnarray*}
for $l \in {\Bbb Z}$. Similarly, if $L$ is a sub-$V$-bundle with isotropy given by an
isotropy
vector $(\epsilon_i)$ in the manner explained above then we write
\begin{eqnarray*}
c_1(L) = m + \sum_{i=1}^n \frac{\epsilon_i(x'_i - x_i) + (x'_i + x_i)}{2\alpha_i}
\end{eqnarray*}
for $m \in {\Bbb Z}$. These meanings of $l$ and $m$ will be fixed throughout.
Topologically, $U(1)$ and $U(2)$ $V$-bundles are classified by their isotropy
representations and first Chern class: we quote the following classification
result from \cite{fs92}.
\bprn{Furuta-Steer}{v-bundles} Let $M$ be an orbifold surface. Then, over $M$:
\begin{enumerate}
\item any complex line $V$-bundle is topologically determined by its isotropy
representations and
degree, \item any $SU(2)$ $V$-bundle is topologically determined by its
isotropy representations
(necessarily of the form $\sigma^{x}\oplus\sigma^{-x}$, where $0\le x\le
[\alpha/2]$) and \item any
$U(2)$ $V$-bundle is topologically determined by its isotropy representations
and its determinant
line $V$-bundle. \end{enumerate} \end{proposition}
\bre{subbundles}
Let $E$ be a $U(2)$ $V$-bundle with isotropy $(x_i,x_i')$ and let
$(\epsilon_i)$ be any
isotropy vector. Then there exists
a $U(1)$ $V$-bundle $L$ with isotropy specified by $(\epsilon_i)$ (unique up to
twisting
by a $U(1)$-bundle {\rm i.\,e.\ } up to specifying the integer $m$, above) and,
topologically,
$E=L\oplus L^*\Lambda^2E$, by
\refpr{v-bundles}.
\end{remark}
\bsu{Divisors and Line $V$-bundles}{orbdiv}
The theory of divisors developed here has also been dealt with in the Geneva
dissertation of B. Calpini written some time ago.
Suppose $M$ is an orbifold Riemann surface. It is convenient to associate an
order of isotropy $\alpha_p$ to every point $p$; it is 1 if the point is not one
of
the marked points (and $\alpha_i$ if $p=p_i$ for some $i$). A \de{divisor} is
then
a linear combination \begin{eqnarray*} D = \sum_{p\in M}\frac{n_p}{\alpha_p}.p \end{eqnarray*} with
$n_p\in
{\Bbb Z}$ and zero for all but a finite number of $p$.
If $f$ is a non-zero meromorphic function on $M$ we define the \de{divisor of
$f$} by $Df = \sum_p \nu_p(f).p$. Here $\nu_p(f)$ is defined in the usual way
when $\alpha_p=1$. When $\alpha_p=\alpha > 1$ and $z$ is a local uniformising
coordinate
with $\rho : D^2 \surjarrow D^2/\sigma$ the associated projection, then on
$D^2$ we find that $\rho^*f$ has a Laurent expansion of the form \begin{eqnarray*}
\sum_{j\ge
-N}a_j z^{\alpha j}\qquad\mbox{with $a_{-N}\ne 0$} \end{eqnarray*} and we set $\nu_p(f) =
-N$.
(The divisor of a meromorphic function is thus an {\em integral} divisor.) Two
divisors $D$ and $D'$ are \de{linearly equivalent} if \begin{eqnarray*} D-D' = Df \end{eqnarray*} for
some meromorphic function $f$. The \de{degree} of a divisor $D=\sum_p(n_p/\alpha_p).p$ is
defined to be $d(D)=\sum_p n_p/\alpha_p$.
The correspondence between divisors and holomorphic line $V$-bundles goes
through in exactly the same way as for Riemann surfaces without marked points.
To a point $p$ with $\alpha_p=1$ we associate the point line bundle $L_p$ as in
\cite{gu66}. If $\alpha_p=\alpha>1$ then to the divisor $p/\alpha$ we associate the
following $V$-bundle. Let $z$ be a local uniformising coordinate; then, making
the appropriate identification locally with $D^2/\sigma$, we define \begin{eqnarray*}
L_{p/\alpha} = ((D^2\times {\Bbb C})/(\sigma\times\sigma)) \cup_\Phi
((M\setminus\{p\})\times{\Bbb C} ), \end{eqnarray*} where $\Phi: ((D^2\setminus\{0\})\times
{\Bbb C})/(\sigma\times\sigma) \to ((M\setminus\{p\})\times{\Bbb C} )$ is given by its
${\Bbb Z}_\alpha$-equivariant lifting \begin{eqnarray*} \widehat\Phi:(D^2\setminus\{0\})\times{\Bbb C} &\to&
((D^2/\sigma)\setminus\{0\})\times{\Bbb C}\\ (z,z')&\mapsto&(z^{\alpha},z^{-1}z'). \end{eqnarray*}
This $V$-bundle has an obvious section `$z$'; this is given on $D^2\times {\Bbb C}$
by
$z\mapsto(z,z)$ and extends by the constant map to the whole of $M$. So
$L_{p/\alpha}$ is positive. We denote by $L_i$ the line $V$-bundle
$L_{p_i/\alpha_i}$, associated to the divisor $p_i/\alpha_i$, and by $s_i$ the
canonical section `$z$'.
Finally for a general divisor \begin{eqnarray*} D = \sum_{p\in M}\frac{n_p}{\alpha_p}.p \end{eqnarray*} we
set \begin{eqnarray*} L_D = \otimes_p(L_{p/\alpha_p})^{n_p}. \end{eqnarray*}
As for a meromorphic function, we can define the divisor of a meromorphic
section of a line $V$-bundle $L$. If $p$ has ramification index $\alpha_p=\alpha$
and
we have a local uniformising coordinate $z$ and a corresponding local
trivialisation $L|_{D^2/\sigma} \cong (D^2\times{\Bbb C}) /(\sigma\times\sigma^y) $,
for some isotropy $y$ (with, by convention, $0\le y < \alpha$), then locally we
have $ s(z) = \sum_{j\ge -N'}a'_j z^j$ with $a'_{-N'}\ne 0$. However, we have
${\Bbb Z}_\alpha$-equivariance which means that $s(\zeta.z)=\zeta^ys(z)$ (where $\zeta =
e^{2\pi{\rm i}/\alpha}$ generates ${\Bbb Z}_\alpha$). It follows that $a'_j = 0$ unless
$j\equiv y\pmod \alpha$ and hence \beql{Taylor} s(z) = z^{y}\sum_{j\ge -N}a_j
z^{\alpha j}\qquad\mbox{with $a_{-N}\ne 0$,} \end{eqnarray} where $-N\alpha + y = -N'$. We
define $\nu_p(s)=-N'/\alpha = -N + y/\alpha$: so for the canonical section $s_i$ of
the line $V$-bundle $L_{i}$ we have $\nu_{p_i}(s_i) = 1/\alpha_i$.
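For instance, if $\alpha=3$ and $y=2$ then a holomorphic local section has the
form $s(z) = a_0 z^2 + a_1 z^5 + a_2 z^8 + \cdots$ and, provided $a_0\ne 0$,
$\nu_p(s) = 2/3$.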
\bpr{divisors} The above describes a bijective correspondence between
equivalence classes of
divisors and of holomorphic line $V$-bundles. The degree $d(D)$ of a divisor
$D$ is just $c_1(L_D)$, the first Chern class of the corresponding line
$V$-bundle. \end{proposition}
\begin{proof} Much of the proof is contained in \cite{fs92}. The correspondence has
been
defined above and it is clear that if we start from a divisor $D$ and pass to
$L_D$ then taking the divisor associated to the tensor product of the canonical
sections we get back $D$. We have to show that the correspondence behaves well
with respect to equivalence classes. If $D_1\equiv D_2$, where $D_j = \sum
(n^{(j)}_p/\alpha_p).p$ for $j=1,2$, then from what we know about divisors of
meromorphic functions we see that $n^{(1)}_p \equiv n_p^{(2)} \pmod{\alpha_p}$.
Now $L_{D_j}=\otimes_p(L_{p/\alpha_p})^{n^{(j)}_p}$. Since $n^{(1)}_p \equiv
n_p^{(2)} \pmod{\alpha_p}$, we find that
$L_{D_j}\otimes\bigotimes_{i=1}^n(L_{p_i/\alpha_i})^{-n^{(1)}_{p_i}}$ is a genuine
line bundle for $j=1,2$. Moreover the two are equivalent because the
corresponding divisors are. Hence $L_{D_1}\equiv L_{D_2}$. Similarly we show
that two meromorphic sections of the same line $V$-bundle\ define equivalent divisors.
\end{proof}
\bco{divisors1} If
$L$ is a
holomorphic line $V$-bundle with $c_1(L)\le 0$ then $H^0(L) = 0$, unless $L$ is
trivial. \end{corollary}
Let $L$ be a holomorphic line $V$-bundle over $M$, with isotropy $y_i$ at
$p_i$,
and let ${\cal O}(L)$ be the associated sheaf of germs of holomorphic sections; we
take the cohomology of $L$ over $M$ to be the sheaf cohomology of ${\cal O}(L)$ over
$\wo M$. From \refeq{Taylor}, ${\cal O}(L)$ is locally free over ${\cal O}_M = {\cal O}_{\wo
M}$ and hence there is a natural line bundle $\wo L$ over $\wo M$ with ${\cal O}(\wo
L) \cong {\cal O}(L)$. If we define $\wo L = L\otimes L_1^{-y_1}\otimes
\cdots\otimes L_n^{-y_n}$ then this gives the required isomorphism of sheaves.
\bpr{parabolic} If $L$ is a holomorphic line $V$-bundle then, with $\wo L$
defined as above, there
is a natural isomorphism of sheaves ${\cal O}(L)\cong {\cal O}(\wo L)$ given by tensoring
with the canonical
sections of the $L_i$.
\end{proposition}
\begin{proof}
Recall that $\ilist{s}$ are the canonical sections of $\ilist{L}$. If $s$ is a
holomorphic section of $L$ then $\wo s=s_1^{-y_1}\dots s_n^{-y_n}s$ will be a
meromorphic section of $\wo L$, holomorphic save perhaps at $p_i$. In fact (by
choosing a local coordinate) we see that $\wo s$ has removable singularities at
$p_i$ and that $D(\wo s) = Ds - \sum_{i=1}^n ({y_i}/{\alpha_i})p_i$. Conversely, given a
section $\wo s$ of $\wo L$, then $s_1^{y_1}\dots s_n^{y_n}\wo s$ is a section
of
$L$ and the correspondence is bijective.
\end{proof}
As corollaries we get the orbifold Riemann-Roch theorem, originally due to
Kawasaki \cite{ka79} and an orbifold version of Serre duality.
\bthn{Kawasaki-Riemann-Roch}{Riemann-Roch} Let $L$ be a
holomorphic line $V$-bundle
with the isotropy at $p_i$ given by ${y_i}$, with $0 \le y_i < \alpha_i$. Then $$
h^0(L) - h^1(L)
= 1-g+c_1(L) -
\sum_{i=1}^n \frac {y_i}{\alpha_i}, $$ where $h^i$ denotes the dimension of
$H^i$. \eth
\bth{Serre duality} If $L$ is a holomorphic line $V$-bundle\ and $K_M$ is the
canonical
$V$-bundle of the
orbifold Riemann surface then \begin{eqnarray*} H^1(L) \cong H^{0}(L^* K_M)^*. \end{eqnarray*} \eth
\begin{proof} By
definition, $H^1(L)=H^1({\cal O}(L))=H^1(\wo L)$. So $H^1(L)\cong H^{0}((\wo L)^*
K_{\wo M})^*$ by the standard duality. But $(\wo L)^* K_{\wo M}=\wo {L^* K_M}$
by
a straightforward computation. \end{proof}
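As a simple check of these two results, take $L=K_M$, with isotropy
$y_i=\alpha_i-1$ and degree $2g-2+\sum_i(1-1/\alpha_i)$; the
Kawasaki-Riemann-Roch theorem gives
\begin{eqnarray*}
h^0(K_M)-h^1(K_M) = 1-g+2g-2+\sum_{i=1}^n\left(1-\frac1{\alpha_i}\right) -
\sum_{i=1}^n\frac{\alpha_i-1}{\alpha_i} = g-1,
\end{eqnarray*}
while Serre duality gives $h^1(K_M)=h^0({\cal O})=1$, so that $h^0(K_M)=g$, just
as for the underlying surface.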
\bse{Higgs $V$-Bundles}{hig}
Throughout this section $E \to M$ is a holomorphic rank 2 $V$-bundle
over an orbifold Riemann surface with $\chi(M)<0$ and we write $K=K_M$, the
canonical $V$-bundle,
and $\Lambda = \Lambda^2E$, the determinant line $V$-bundle.
\bsu{Higgs $V$-Bundles}{highig}
In this subsection we introduce Higgs $V$-bundles---this is a straightforward
extension of the basic
material in Hitchin's paper \cite{hi87} to orbifold Riemann surfaces.
Define a \de{Higgs field}, $\phi$, to be a
holomorphic section of ${\rm End}_0(E)\otimes K$ where
${\rm End}_0(E)$ denotes the trace-free endomorphisms of $E$. A \de{Higgs
$V$-bundle} or \de{Higgs
pair} is just a pair $({E},\phi)$.
Let $({E}_1,\phi_1)$ and $({E}_2,\phi_2)$ be two Higgs $V$-bundles. A
\de{homomorphism of Higgs
$V$-bundles} is just a homomorphism of $V$-bundles $h : E_1 \to E_2$ such that
$h$ is holomorphic
and intertwines $\phi_1$ and $\phi_2$. The corresponding notion of an
\de{isomorphism of Higgs
$V$-bundles} is then clear.
A holomorphic line sub-$V$-bundle $L$ of $E$ is called a \de{Higgs
sub-$V$-bundle} (or
`$\phi$-invariant sub-$V$-bundle') if $\phi(L) \subseteq KL$. A Higgs
$V$-bundle $({E},\phi)$ is
said to be \de{stable} if \beql{stable Higgs} c_1(L) < \frac12
c_1(E),\quad\mbox{for every
Higgs sub-$V$-bundle, $L$.} \end{eqnarray} If we allow possible equality in
\refeq{stable Higgs} then the
Higgs $V$-bundle is called \de{semi-stable}. If a Higgs $V$-bundle is stable
or a direct sum of two
line $V$-bundles of equal degree with $\phi$ also decomposable then (it is
certainly
semi-stable and) it is called
\de{polystable}. If $E$ is stable then certainly $(E,\phi)$ is stable for
any Higgs field $\phi$. The following result, due to Hitchin in the smooth
case
\cite[proposition 3.15]{hi87}, goes over
immediately.
\bpr{stable regular} Let $({E}_1,\phi_1)$ and $({E}_2,\phi_2)$ be stable Higgs
$V$-bundles with isomorphic holomorphic determinant line $V$-bundles,
$\Lambda^2{E}_1 \cong
\Lambda^2{E}_2$. Suppose that $\psi: E_1 \to E_2$ is a non-zero homomorphism
of Higgs $V$-bundles.
Then $\psi$ is an isomorphism of Higgs $V$-bundles. If
$({E}_1,\phi_1)=({E}_2,\phi_2)$ then $\psi$
is scalar multiplication. \end{proposition}
\bsu{Algebraic Geometry of Stable Higgs $V$-Bundles}{higalg}
For applications in later sections we now develop some results on the
possibilities for
stable Higgs $V$-bundles. Higgs $V$-bundles are
holomorphic $V$-bundles with an associated `Higgs field'; a holomorphic
$(1,0)$-form-valued
endomorphism of the $V$-bundle. We assume familiarity with \cite[\S
3]{hi87}.
Given $E \to M$, we investigate whether there are any Higgs fields $\phi$ such
that the Higgs pair
$(E,\phi)$ is stable. Recall that the isotropy of $E$ at $p_i$ is denoted by
$(x_i,x_i')$ and that
$n_0=\#\{i\,:\,x_i = x_i'\}$. We will suppose throughout that $n_0 < n$---this
is because the case
$n=n_0$ is just that of a genuine bundle twisted by a line $V$-bundle and so
essentially
uninteresting (see also \refsu{detred}).
The following lemma is a simple computation using the
Kawasaki-Riemann-Roch theorem and Serre duality.
\ble{}
We have \[ h^0(K^2) = \chi(K^2) = 3g-3+n \and \chi({\rm End\,}_0(E)\otimes
K)=3g-3+n-n_0. \] \end{lemma}
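To illustrate the computation: $K^2$ has degree $4g-4+2\sum_i(1-1/\alpha_i)$
and isotropy $y_i \equiv 2\alpha_i-2 \pmod{\alpha_i}$ at $p_i$, so each marked
point contributes
\begin{eqnarray*}
2\left(1-\frac1{\alpha_i}\right)-\frac{y_i}{\alpha_i} = 1
\end{eqnarray*}
to the Kawasaki-Riemann-Roch formula (whether $\alpha_i=2$, when $y_i=0$, or
$\alpha_i>2$, when $y_i=\alpha_i-2$), giving $\chi(K^2)=1-g+(4g-4)+n=3g-3+n$;
moreover $h^0(K^2)=\chi(K^2)$ since $h^1(K^2)\cong H^0(K^{-1})^*=0$ by Serre
duality and \refco{divisors1}, as $c_1(K^{-1})=\chi(M)<0$.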
If $E$ is stable we know that the only endomorphisms of $E$ are scalars and so
$h^0({\rm End\,}_0(E))=0$; by Serre duality $h^1({\rm End\,}_0(E)\otimes K)\cong
H^0({\rm End\,}_0(E))^*=0$ and hence $h^0({\rm End\,}_0(E)\otimes
K)=\chi({\rm End\,}_0(E)\otimes K)=3g-3+n-n_0\ge 0$. Consequently if
$3-3g-n+n_0>0$ (this only happens if $g=0$ and $n-n_0\le 2$) there are no
stable $V$-bundles.
Suppose that $L$ is a holomorphic sub-$V$-bundle of $E$. Then we have the
short exact sequences \beql{b}
&0\to L \stackrel{i}{\to} E \stackrel{j}{\to} L^* \Lambda\to 0&\and\nonumber\\
&0\to
L \Lambda^* \stackrel{j^*}{\to} E^* \stackrel{i^*}{\to} L^*\to 0& \end{eqnarray} from
which follows
\beql{d} 0\to E^*\otimes KL \to {\rm End\,}_0(E) \otimes K\to KL^{-2} \Lambda\to 0.
\end{eqnarray}
Associated to \refeq{b} tensored by $KL$ is the long exact sequence in
cohomology \beql{lesb}
\begin{array}{c} 0 \to H^0(KL^2\Lambda^*) \to H^0(E^*\otimes KL) \to
H^0(K)\stackrel{\delta}{\to}\qquad\\ \qquad\stackrel{\delta}{\to}
H^1(KL^2\Lambda^*) \to
H^1(E^*\otimes KL) \to H^1(K) \to 0 \end{array} \end{eqnarray} and associated to
\refeq{d} we have \beql{les}
\begin{array}{c} 0 \to H^0(E^*\otimes KL) \to H^0({\rm End\,}_0(E) \otimes K) \to
H^0(KL^{-2}
\Lambda)\stackrel{\delta}{\to}\qquad\\ \qquad\stackrel{\delta}{\to}
H^1(E^*\otimes KL) \to
H^1({\rm End\,}_0(E) \otimes K) \to H^1(KL^{-2} \Lambda) \to 0. \end{array} \end{eqnarray}
Now let us review the strategy of the proof of \cite[proposition 3.3]{hi87}:
if $E$ is stable then
all pairs $(E,\phi)$ are certainly stable and we know something about stable
$V$-bundles from
\cite{fs92}. If $E$ is not stable then there is a destabilising sub-$V$-bundle
$L_E$. Recall that $L_E$
is unique if $E$ is not semi-stable. Moreover, in the semi-stable case the
assumption $n\ne n_0$
implies that $L_E \not\cong L_E^*\Lambda$ and so $L_E$ is unique if $E$ is not
decomposable and
if it is then $L_E$ and $L_E^*\Lambda$ are the only destabilising
sub-$V$-bundles. Thus there will be
some $\phi$ such that the pair $(E,\phi)$ is stable unless every Higgs field
fixes $L_E$ (or
$L_E^*\Lambda$, in the semi-stable, decomposable case).
Moreover, the subspace of sections leaving $L$ invariant is $H^0(E^*\otimes
KL)
\subset
H^0({\rm End\,}_0(E)\otimes K)$. It follows that a necessary and sufficient condition
for $E$ to
occur in a stable pair is $H^0(E^*\otimes KL_E)\ne H^0({\rm End\,}_0(E)\otimes K)$
(and similarly for
$L_E^*\Lambda$, in the semi-stable, decomposable case). Considering
\refeq{les} this amounts to
non-injectivity of the Bockstein operator $\delta$, which we consider in the
next
lemma---proved as in the proof of \cite[proposition 3.3]{hi87}. From the lemma
we obtain a version of \cite[proposition 3.3]{hi87}.
\ble{extension'} If $L$ is a
sub-$V$-bundle of $E$ with $\deg(L)\ge \deg(\Lambda)/2$ then \begin{enumerate}
\item\label{ex1}
$H^1(E^*\otimes KL)\cong{\Bbb C};$ \item\label{ex2} $H^0(KL^{-2}
\Lambda)\stackrel{\delta}{\to}H^1(E^*\otimes KL)$ is surjective if and only if
$e_E \ne 0$,
where $e_E\in H^1(L^2 \Lambda^*)$ is the extension class. \end{enumerate}
\end{lemma} \begin{proof}
\begin{enumerate} \item Consider the long exact sequence in cohomology
\refeq{lesb} for $L$, which
includes the segment \beql{bit} \cdots \to H^1(KL^2 \Lambda^*)
\stackrel{j^*}{\to}
H^1(E^*\otimes KL) \stackrel{i^*}{\to} {\Bbb C} \to 0 . \end{eqnarray} Then the result follows
from the fact
that $h^1(KL^2 \Lambda^*)=0$, using Serre duality and the vanishing theorem.
\item
Consider \refeq{les} and let $i^*$ be the map on cohomology indicated in
\refeq{bit}; then the
result follows from the fact that $i^*.\delta$ is multiplication by the
extension class $e_E$.
\end{enumerate} \end{proof}
\bpr{non-s} Let $E$ be a non-stable $V$-bundle. Then $E$ appears
in a stable pair if and only if one of the following holds: \begin{enumerate}
\item\label{ns1} $E$
is indecomposable with $h^0(KL_E^{-2}\Lambda)>1$; \item\label{ns2} $E$ is
decomposable, not
semi-stable with $h^0(KL_E^{-2}\Lambda)\ge 1$; \item\label{ns3} $E$ is
decomposable,
semi-stable with $h^0(KL_E^{-2}\Lambda)\ge 1$ and $h^0(KL_E^{2}\Lambda^*)\ge
1$.
\end{enumerate} \end{proposition}
To find more precise results in the case that $E$ is semi-stable we
estimate $h^0(KL_E^{-2}\Lambda)$ and $h^0(KL_E^{2}\Lambda^*)$ using the
following lemmas. For these recall the definitions of the integers
$n_0$, $n_\pm$, $l$ and $m$ from \refsu{orbint}.
\ble{chi}
Suppose that $L$ is any sub-$V$-bundle of $E$. Then, with the notations
established
above,
\begin{eqnarray*} \chi(KL^{-2} \Lambda) &=& l-2m
+g - 1 + n_-\and\\
\chi(KL^{2} \Lambda^*)
&=& 2m-l +g - 1 + n_+. \end{eqnarray*} Moreover:
\begin{enumerate}
\item\label{xpos} if $2c_1(L)-c_1(\Lambda) \ge 0$ then $h^0(KL^{2} \Lambda^*)
=
\chi(KL^{2} \Lambda^*) \ge g$ and $\chi(KL^{-2} \Lambda) \le g - 2 + n -
n_0$;
\item\label{xneg} if $2c_1(L)-c_1(\Lambda) \le 0$ then
$h^0(KL^{-2} \Lambda) = \chi(KL^{-2} \Lambda) \ge g$ and
$\chi(KL^{2} \Lambda^*) \le g - 2 + n - n_0$.
\end{enumerate}
\end{lemma}
\begin{proof} The first part is just the Kawasaki-Riemann-Roch theorem. Now consider
\refpa{xpos}
(\refpa{xneg} is entirely similar):
we have $H^1(KL^{2}\Lambda^*) \cong H^0(L^{-2}\Lambda)^*$ and this is zero
(because the degree is
non-positive and the isotropy is non-trivial as $n> n_0$).
Let $\theta = \sum_{i=1}^{n}{\epsilon_i(x'_i-x_i)}/{\alpha_i}$ so that
$2c_1(L)-c_1(\Lambda)
\equiv \theta \pmod{{\Bbb Z}}$. Then $-n_- < \theta < n_+$ and the estimates on
$\chi(KL^{2}
\Lambda^*)$ and $\chi(KL^{-2}
\Lambda)$ follow.
\end{proof}
\ble{bounds}
For a given $M$ and $n-n_0$, an $E$ (with the given $n-n_0$) such that the
bounds on
$\chi(KL^{2}
\Lambda^*)$ and $\chi(KL^{-2} \Lambda)$ in \refle{chi}, parts 1 and 2
are attained exists if and only if
\begin{eqnarray*}
\min_{ \{i_1,\dots,i_{n-n_0}\} \subseteq \{1,\dots,n \} }\left\{
\sum_{j=1}^{n-n_0} \frac{1}{\alpha_{i_j}} \right\} \le 1.\end{eqnarray*}
For a given topological $E$ the bounds are attained for some
holomorphic structure on $E$ if and only if
\begin{eqnarray*}
\min_{ \{(\epsilon_i)\ :\ n_+ + l\equiv 1 (2) \}} \left\{ n_+ -
\sum_{i=1}^n\frac{\epsilon_i(x'_i-x_i)}{\alpha_i} \right\} \le 1,\end{eqnarray*}
where $(\epsilon_i)$ varies over all isotropy vectors with $n_+ +l \equiv 1
(2)$.
\end{lemma}
\begin{proof}
To see this we construct examples as follows. It is sufficient to consider
only
topological examples and therefore, given any $M$ and topological $E$, to
choose
$(\epsilon_i)$ and $m\in {\Bbb Z}$ to specify $L$ topologically. (Examples where $L$
is a topological sub-$V$-bundle of $E$ exist by \refre{subbundles}.)
Now, given a choice of $(\epsilon_i)$ and $m$, we
have $\chi(KL^{-2}\Lambda) = l - 2m + g - 1 + n_-$ and $\chi(KL^{2}\Lambda^*) = 2m - l
+ g - 1 +
n_+$ from \refle{chi}.
So, for $2c_1(L)-c_1(\Lambda) \ge 0$ (the case $2c_1(L)-c_1(\Lambda)\le 0$ is
entirely similar), the bounds
are attained provided $2m - l + n_+ = 1$ and $2m - l +
\sum_{i=1}^{n}\epsilon_i(x'_i-x_i)/\alpha_i \ge 0$.
Since we can vary $m$, the first equation just fixes the parity of $n_+$.
Hence the problem reduces to finding $(\epsilon_i)$ such that
\beql{succinct}
\sum_{i=1}^{n}\frac{\epsilon_i(x'_i-x_i)}{\alpha_i} - n_+ &\ge& -1\and\\
n_+ + l &\equiv& 1 \pmod{2}.\label{eq:constraint}
\end{eqnarray}
This gives the desired result, for a given topological $E$. To see whether
examples exist for a given $M$ and $n-n_0$ as we allow $E$ to vary over
topological types with fixed $n-n_0$, we simply note that the maximum value of
the left-hand side of \refeq{succinct} (subject to \refeq{constraint}) is \begin{eqnarray*}
\max_{ \{i_1,\dots,i_{n-n_0}\} \subseteq \{1,\dots,n \} }\left\{
\sum_{j=1}^{n-n_0} \left( -\frac{1}{\alpha_{i_j}} \right) \right\}.\end{eqnarray*}
Thus the bounds are certainly attained if the $\alpha_i$ are such that this is not
less
than $-1$.\end{proof}
\bco{semi-stable'} If $L$ is a
sub-$V$-bundle of $E$ with
$c_1(L)=c_1(\Lambda)/2$ and $\epsilon_i$, $n_+$ and $n_-$ are defined by the
isotropy of $L$, as
before, then \beql{} h^0({\rm End\,}_0(E)\otimes K)
&=& \left\{\begin{array}{ll} 3g-3+n-n_0 & \mbox{ if }0\to L \to E \to L^*
\Lambda\to
0\mbox{ is non-trivial;}\\ 3g-2+n-n_0 & \mbox{ if it is trivial;}
\end{array}\right.\label{h0end}\\
h^0(E^*\otimes KL) &=& 2g - 1 -
\sum_{i=1}^{n}\frac{\epsilon_i(x'_i-x_i)}{\alpha_i} + n_+;
\label{vkl}\\
h^0(KL^{-2} \Lambda) &=& g - 1 + \sum_{i=1}^{n}
\frac{\epsilon_i(x'_i-x_i)}{\alpha_i} +
n_-;\label{kl}\\
h^0(KL^{2} \Lambda^*) &=& g - 1 - \sum_{i=1}^{n}
\frac{\epsilon_i(x'_i-x_i)}{\alpha_i}
+ n_+.\nonumber \end{eqnarray}
Moreover,
\begin{eqnarray*} 2g \le &h^0(E^*\otimes KL)& \le
n-n_0 + 2g -2,\\ g \le &h^0(KL^{-2} \Lambda) & \le n-n_0 + g -2\and\\ g \le
&h^0(KL^{2} \Lambda^*) & \le n-n_0 + g -2. \end{eqnarray*}
These estimates are attained for all values
of $g$ and $n-n_0$ (but not necessarily for all $M$ or $E$).
\end{corollary} \begin{proof} The results on $h^0(KL^{-2} \Lambda)$ and
$h^0(KL^{2} \Lambda^*)$ follow from \refle{chi}. Moreover we know that
$h^1(E^*\otimes KL)= 1$
from \refle{extension'}, \refpa{ex1} and so $h^0(E^*\otimes KL)$ follows from
the
Kawasaki-Riemann-Roch theorem. To calculate $h^0({\rm End\,}_0(E)\otimes K)$ we use
\refeq{les} and \refle{extension'}, \refpa{ex2}. The estimates on
$h^0(KL^{-2}\Lambda)$ and $h^0(KL^2\Lambda^*)$ are contained
in \refle{chi} and the estimate on $h^0(E^*\otimes KL)$ follows (as
$h^0(E^*\otimes KL) = - h^0(KL^{-2} \Lambda) +
3g -2 + n - n_0$). \end{proof}
When $c_1(L)=c_1(\Lambda)/2$ it is not possible to have $n-n_0=1$: in that case
$c_1(L^2\Lambda^*)$ cannot be an integer, whereas it is supposed to be zero.
Applying these results to $L_E$ (and $L_E^*\Lambda$ in the semi-stable,
decomposable case)
we can strengthen \refpr{non-s} as far as it refers to semi-stable
$V$-bundles. Adding in some necessary conditions on $g$ and $n-n_0$ derived
from
our estimates above we obtain the following theorem.
\bth{stable pairs} A holomorphic rank 2
$V$-bundle $E$ occurs in a stable pair if and only if one of the following
holds: \begin{enumerate}
\item\label{s} $E$ is stable (if $g=0$ then necessarily $n-n_0\ge 3$);
\item\label{ss} $E$ is semi-stable, not
stable (necessarily $n-n_0 \ge 2$) with one of the following holding:
\begin{enumerate}
\item\label{ssin1} $E$ is indecomposable and $g>1$;
\item\label{ssin2} $E$ is indecomposable, $g=0$
or 1 and $h^0(KL_E^{-2} \Lambda)>1$ (necessarily $g + n-n_0\ge 4$);
\item\label{ssde1} $E$
is decomposable and $g>0$;
\item\label{ssde2} $E$ is decomposable, $g=0$ and $1 \le
h^0(KL_E^{-2} \Lambda) \le n -n_0 -3$ (necessarily $n-n_0\ge 4$);
\end{enumerate}
\item\label{not semi-stable}
$E$ is not semi-stable with one of the following holding: \begin{enumerate}
\item\label{nsin} $E$
is indecomposable and $h^0(KL_E^{-2} \Lambda)>1$ (necessarily $g\ge 2$ or $g +
n-n_0 \ge
4$; if $g=2$ and $n-n_0 =1$ then $\wo{KL_E^{-2} \Lambda}$ is necessarily
canonical);
\item\label{nsde} $E$ is decomposable and $h^0(KL_E^{-2}
\Lambda) \ge 1$
(necessarily $g\ge 1$ or $n-n_0\ge 3$; if $2g+n-n_0=3$ then $\wo{KL_E^{-2}
\Lambda}$ is necessarily
trivial). \end{enumerate} \end{enumerate}
In all cases the necessary conditions are the best possible ones depending only
on $g$ and
$n-n_0$. \eth
\begin{proof}
In \refpa{ss} the first three items follow from
\refco{semi-stable'} together with \refpr{non-s}, parts 1 and 3,
while for the last item we note that when $g=0$, $h^0(KL_E^{2} \Lambda^*)\ge
1$
if and only if $h^0(KL_E^{-2} \Lambda)<n-n_0-2$ (from \refco{semi-stable'})
and
apply \refpr{non-s}, \refpa{ns3}.
Only the necessary conditions in \refpa{not semi-stable} need any additional
comment. Using \refle{chi}, \refpa{xpos} we have that $\chi(KL_E^{-2} \Lambda)
\le g - 2 + n - n_0$ and the bound is attained for some $M$ and $E$ by
\refle{bounds}. Thus if $g>2$ there are cases with $\chi(KL_E^{-2} \Lambda)\ge
2$ and hence $h^0(KL_E^{-2} \Lambda)\ge 2$. If $g=2$ then there are cases with
$\chi(KL_E^{-2} \Lambda)=n-n_0$, similarly. The only problem then occurs if
$n-n_0 =1$ when $c_1(\wo{KL_E^{-2} \Lambda})=2$: in order to have
$h^0(KL_E^{-2} \Lambda)>1$ we must have $\wo{KL_E^{-2} \Lambda} = K_{\wo M}$.
Similarly, if $g=1$ we can suppose that $\chi(KL_E^{-2} \Lambda) = n - n_0 -1$.
Then for $h^0(KL_E^{-2}\Lambda) >1$ we need $n-n_0 \ge 3$ and for
$h^0(KL_E^{-2}\Lambda) \ge 1$ we need $n-n_0 \ge 1$ with $\wo{KL_E^{-2}
\Lambda}$ trivial if $n-n_0 =1$. Finally, if $g=0$ we need $n-n_0 \ge 4$ for
$h^0(KL_E^{-2}\Lambda) >1$ and $n-n_0 \ge 3$ (with $\wo{KL_E^{-2} \Lambda}$
trivial if $n-n_0 =3$) for $h^0(KL_E^{-2}\Lambda) \ge 1$.
\end{proof}
For each of the items of \refth{stable pairs} examples of such
$V$-bundles do
actually exist (see also \refse{top} and \refse{det}). Only items \ref{ssin2},
\ref{ssde2},
\ref{nsin} and \ref{nsde} pose any problem but it
is fairly easy to construct the required examples using the ideas of
\refsu{orbdiv} and
\refle{bounds}. Of particular
interest is \refpa{nsde} when $g=0$ and $n-n_0=3$: we have the following
result (compare
\refse{top}).
\bpr{g0} There exist orbifold Riemann surfaces with $g=0$ with $V$-bundles
with
$n-n_0=3$ over them which are decomposable but not semi-stable and
exist in stable pairs. Such a stable pair contributes an isolated point
to the moduli space (which is nevertheless connected---see \refco{topology}).
\end{proposition}
\begin{proof}
We set $E=L_E\oplus L_E^*\Lambda$ with $2c_1(L_E) >
c_1(\Lambda)$. Now, according
to \refth{stable pairs}, \refpa{nsde}, we get a stable pair if and only if
$\wo{KL_E^{-2} \Lambda}$ is trivial. Moreover, applying \refsu{orbdiv} or
\refle{bounds}, we see
that examples certainly exist.
We write the Higgs field according to the decomposition $\phi = \left(
\begin{array}{cc}t & u\\ v &
-t \end{array}\right)$. Now $h^0(KL_E^{-2} \Lambda)=1$ implies that
$h^0(KL_E^2 \Lambda^*)=0$ and hence $u=0$. More simply, $g=0$ implies $t=0$
and so $\phi$ is given by $v$, with
$v\in H^0(KL_E^{-2} \Lambda)\cong {\Bbb C}$ non-zero for a stable pair. Now we
need
to consider the
action of $V$-bundle automorphisms: $\left( \begin{array}{cc}\lambda & 0\\ 0 &
\lambda^{-1} \end{array}\right)$ acts on $ H^0(KL_E^{-2} \Lambda)\cong {\Bbb C}$ by
$z \mapsto
\lambda^2 z$ and hence there is a single orbit.
\end{proof}
Notice that \cite[proposition 3.4]{hi87} does {\em not} extend to orbifold
Riemann surfaces with $\chi(M)<0$. To prove that result Hitchin uses
Bertini's
theorem to show that, for
a given rank 2 holomorphic bundle over a Riemann surface with negative Euler
characteristic, either
the generic Higgs field leaves no subbundle invariant or there is a subbundle
invariant under all
Higgs fields; he then shows that the latter cannot happen when the bundle
exists in a stable pair.
Although we have not been able to enumerate all the cases in which this result
is false in the
orbifold case there are three things which can go wrong:
\begin{enumerate}
\item Bertini's theorem may not apply and the conclusion may be false: $E$ may
be such that it
exists in a stable pair, the generic Higgs field has an invariant
sub-$V$-bundle and no
sub-$V$-bundle is invariant by all Higgs fields;
\item $E$ may be stable and have a sub-$V$-bundle invariant by
all Higgs fields;
\item $E$ may be non-stable, exist in a stable pair and have a
sub-$V$-bundle invariant by all Higgs fields.
\end{enumerate}
We give counterexamples of the first and third types. Although we suspect that
counterexamples of the second type also exist we have not been able to show
this. For a counterexample where Bertini's theorem does not apply, consider the
following: if $g=1$ and $n-n_0=1$ then, anticipating \refle{invariants}, {\em
every Higgs field has an invariant sub-$V$-bundle} and yet if $E$ is a
non-stable $V$-bundle which exists in a stable pair (these exist by
\refth{stable pairs}, \refpa{nsde}) then {\em no sub-$V$-bundle is invariant by
all Higgs fields.} All counterexamples of the third type are given in the
following proposition, which also has interesting applications in \refse{top}.
\bpr{counter}
A non-stable $V$-bundle $E$ exists in a stable pair and has a sub-$V$-bundle
invariant by all Higgs
fields if and only if $g=0$, $E=L_E \oplus L_E^*\Lambda$ with $2c_1(L_E) >
c_1(\Lambda)$ and $L_E$ is
such that the bounds in \refle{chi}, \refpa{xpos} are attained. Moreover, there
exist orbifold
Riemann surfaces with such $E$ over them, with $E$ having any given $n-n_0 \ge
3$.
\end{proposition}
\begin{proof}
Suppose $E$ is non-stable, exists in a stable pair and has a sub-$V$-bundle
invariant by all Higgs
fields. Since $E$ is non-stable and exists in a stable pair the destabilising
sub-$V$-bundle(s)
cannot be invariant by all Higgs fields. Moreover, if $h^0(KL_E^2\Lambda^*)>0$
then, via the
inclusions $H^0(KL_E^2\Lambda^*) \hookrightarrow H^0(E^*\otimes KL_E)
\hookrightarrow
H^0({\rm End\,}_0(E)\otimes K)$, we get a family of Higgs fields which leave no
sub-$V$-bundle except $L_E$
invariant---hence we must have $h^0(KL_E^2\Lambda^*)=0$. By \refle{chi},
\refpa{xpos} this can only
happen if the bounds there are attained and $g=0$. Now consideration of the
long exact sequence
\refeq{lesb}
shows that $h^0(E^*\otimes KL_E)=g$ and hence \refle{extension'} and
\refeq{les} together show that
$E$ is decomposable. Considering the Higgs field according to the
decomposition, in the manner of
\refpr{g0}, we see that $L_E^*\Lambda$ is invariant under all Higgs fields: it
follows that $2c_1(L_E)$
must be strictly greater than $c_1(\Lambda)$ for $E$ to form a stable pair.
The converse is straightforward: we suppose that $g=0$, $2c_1(L_E) > c_1(\Lambda)$
and $L_E$ is such
that the bounds in \refle{chi}, \refpa{xpos} are attained and, exactly as in
\refpr{g0}, we set $E=L_E\oplus L_E^*\Lambda$. We write the Higgs field
according to the
decomposition as $\phi =\left( \begin{array}{cc}0 & u\\ v &
0\end{array}\right)$. Since $g=0$,
the fact that $L_E$ is such that the bounds in \refle{chi}, \refpa{xpos} are
attained means that
$h^0(KL_E^{-2}\Lambda) = n - n_0 -2 \ge 1$ and $h^0(KL_E^{2} \Lambda^*) = 0$.
Hence $v$ can be chosen
non-zero so that $E$ exists in a stable pair and $u=0$ so that $L_E^*\Lambda$
is invariant by all
$\phi$, as required.
Finally, examples where the bounds in \refle{chi}, \refpa{xpos} are attained
exist by
\refle{bounds}. \end{proof}
\bse{The Yang-Mills-Higgs Equations and Moduli}{ymh}
We now prove an equivalence between stable Higgs $V$-bundles and the
appropriate
analytic objects---irreducible Yang-Mills-Higgs\ pairs---and use this to give an analytic
construction
of the moduli space. Throughout this section $M$ is an orbifold Riemann surface
of negative Euler
characteristic, equipped with a normalised volume form, $\Omega$, and $E$ is a
smooth
rank 2 $V$-bundle over $M$ with a fixed Hermitian metric.
\bsu{The Yang-Mills-Higgs\ Equations}{ymhymh}
Given the fixed Hermitian metric on $E$, holomorphic structures correspond to
unitary
connexions. Let $\phi$ be a Higgs field with respect to $A$, {\rm i.\,e.\ } a Higgs field
on ${E}_A$ or, equivalently, one
satisfying $\o\partial_A\phi =0$. We call the pair $(A,\phi)$ a \de{Higgs
pair}. (With
the unitary structure understood Higgs pairs are entirely equivalent to the
corresponding
Higgs $V$-bundles and so we can talk about stable Higgs pairs, isomorphisms of
Higgs pairs
and so on.) (From some points of view it is more natural to consider the
holomorphic structure as fixed and the unitary structure as varying. Of course
the two
approaches are equivalent.)
We impose determinant-fixing conditions in what follows; they are not essential
but they remove some redundancies associated with scalar automorphisms (see
\refpr{stable regular}), tensoring by line $V$-bundles and so on. We have
already made the assumption that the Higgs field $\phi$ fixes determinants in
the sense that it is trace-free; the other determinant-fixing conditions are
defined as follows. A unitary structure on $E$ induces one on the determinant
line $V$-bundle $\Lambda$. With this fixed and a choice of isomorphism class
of
holomorphic structure on $\Lambda$, there is a unique (up to unitary gauge)
unitary connexion on $\Lambda$ which is compatible with the class of
holomorphic
structure and is Yang-Mills, {\rm i.\,e.\ } has constant central curvature $-2\pi
{\rm i}\,c_1(\Lambda)\Omega$. Fix one such connexion and denote it $A_\Lambda$. We
say
that a unitary connexion or holomorphic structure on $E$ has \de{fixed
determinant} if it induces this fixed connexion or holomorphic structure in the
determinant line $V$-bundle. (On the other hand if we fix the holomorphic
structure then we can choose a Hermitian-Yang-Mills\ metric on the determinant line $V$-bundle
and fix the determinant of our metrics by insisting that they induce this
metric.)
Given a unitary connexion $A$ the trace-free part of the curvature is
$F_A^0 =_{\rm def} F_A + \pi {\rm i}\, c_1(\Lambda)\Omega I_E$,
by the Chern-Weil theory. We say that a Higgs pair $(A,\phi)$ (with fixed
determinants
understood) is \de{Yang-Mills-Higgs} if
\beql{hymh condition}
\begin{array}{rcl} F_A^0 + [\phi,\phi^*] &=& 0 \quad{\rm and}\\
\o\partial_A\phi &=& 0.
\end{array}
\end{eqnarray}
(For a Hermitian metric varying on a fixed Higgs $V$-bundle this is
the condition for the metric to be Hermitian-Yang-Mills-Higgs.) The
involution $\phi\mapsto \phi^*$ is a combination of the conjugation $dz\mapsto
d\overline{z}$ and taking the adjoint of an endomorphism with respect to the
metric. The
second part of the condition merely reiterates the fact that $\phi$ is
holomorphic with
respect to the holomorphic structure induced by $A$. Of course if $\phi=0$
then
\refeq{hymh condition} is just the Yang-Mills equation (see \cite{ab82,fs92})
and we say
that $A$ is \de{Yang-Mills}. An existence theorem for Yang-Mills connexions
in
stable $V$-bundles, generalising the Narasimhan-Seshadri theorem from the
smooth
case \cite{do83}, is given in \cite{fs92}.
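Notice that both sides of the first equation of \refeq{hymh condition} are
indeed trace-free, as befits an equation of $\frak{su}(E)$-valued 2-forms:
since $A$ has fixed determinant it induces $A_\Lambda$ on $\Lambda$, so that
\begin{eqnarray*}
\mbox{tr\,} F_A^0 = F_{A_\Lambda} + 2\pi {\rm i}\, c_1(\Lambda)\Omega = 0
\end{eqnarray*}
by the constant central curvature condition, while $\mbox{tr\,}[\phi,\phi^*]$
vanishes because the trace of any commutator does.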
The first half of our correspondence between stable Higgs $V$-bundles and
Yang-Mills-Higgs\
pairs is not difficult; again a result of Hitchin \cite[theorem 2.1]{hi87}
generalises easily.
\bpr{stable}
Let $M$ be an orbifold Riemann surface with negative Euler characteristic.
If $(A,\phi)$ is a Yang-Mills-Higgs\ pair (with respect to the fixed unitary structure on
$E$ and with
fixed determinants) then the pair $(A,\phi)$ is stable unless it has a
$U(1)$-reduction,
in which case it is polystable.
\end{proposition}
We call a pair with a $U(1)$-reduction, {\em as a pair}, \de{reducible};
otherwise the
pair is \de{irreducible}. Notice that a reducible pair is Yang-Mills-Higgs\ if and only if
the
connexions in the two line $V$-bundles are Yang-Mills.
Define the \de{gauge group} ${\cal G}(E)$ to be the group of unitary
automorphisms of $E$ (fixing the base). This acts on Higgs fields by
conjugation and has a natural action on $\o\partial$-operators such that the
corresponding Chern connexions transform in the standard way. Thus this action
fixes the determinant line $V$-bundle, acts on the set of Higgs $V$-bundles by
isomorphisms and takes one Yang-Mills-Higgs\ pair to another. We also consider the
\de{complexified gauge group} ${\cal G}^c(E)$ of complex-linear automorphisms
of
$E$ (fixing the base). Again this acts on Higgs $V$-bundles by isomorphisms.
Isomorphic Higgs $V$-bundle structures are precisely those that lie in the same
${\cal G}^c(E)$-orbit. Notice that \refpr{stable regular} implies that ${\cal
G}^c(E)$ acts freely (modulo scalars) on the set of stable Higgs $V$-bundles.
(If we think of the Higgs $V$-bundle $({E},\phi)$ as fixed and the Hermitian
metric as variable then ${\cal G}^c(E)$ acts transitively on the space of
Hermitian metrics.) Once again we easily obtain a uniqueness result due to
Hitchin \cite[theorem 2.7]{hi87} in the smooth case.
\bpr{regular pairs}
Let $({E}_1,\phi_1)$ and $({E}_2,\phi_2)$ be isomorphic Higgs $V$-bundles with
fixed
determinants, with Chern connexions $A_1$ and $A_2$ and the same underlying
rank 2
Hermitian $V$-bundle. Suppose that the Higgs pairs $(A_1,\phi_1)$ and
$(A_2,\phi_2)$ are
both Yang-Mills-Higgs. Then $({E}_1,\phi_1)$ and $({E}_2,\phi_2)$ are gauge-equivalent
({\rm i.\,e.\ } there is
an element of ${\cal G}(E)$ taking one to the other). \end{proposition}
\bsu{An Existence Theorem for Yang-Mills-Higgs\ Pairs}{ymhexi}
A version of the Narasimhan-Seshadri theorem for stable Higgs $V$-bundles
(essentially a converse to \refpr{stable}) can be proved directly for
orbifolds,
extending the arguments of \cite{do83,hi87}.
\bth{Narasimhan-Seshadri}
Let $E\to M$ be a fixed $U(2)$ $V$-bundle over an orbifold Riemann surface of
negative
Euler characteristic. If $(A,\phi)$ is a polystable Higgs pair with fixed
determinant on
$E$ then there exists an element $g \in {\cal G}^c$ of determinant 1, unique
modulo
elements of $\cal G$ of determinant 1, such that $g(A,\phi)$ is Yang-Mills-Higgs.
\eth
We shall deduce the theorem from the ordinary case by equivariant arguments in
\refsu{ymhequ}, though there is some advantage to a direct proof: it avoids
an appeal to Fox's theorem, and the uniformisation result appears as the
following corollary, proved as in \cite[corollary 4.23]{hi87}.
\bco{negative curvature}
If $M$ is an orbifold Riemann surface of negative Euler characteristic
then $M$ admits a unique compatible metric of constant sectional
curvature -4.
\end{corollary}
\begin{proof}
We define a stable Higgs $V$-bundle by equipping $E=K\oplus 1$
with the Higgs field
\begin{eqnarray*}
\phi=\left(\begin{array}{cc}
0 & 0 \\
1 & 0
\end{array}\right).
\end{eqnarray*}
We fix a Hermitian-Yang-Mills\ metric on $\Lambda^2E$. From \refth{Narasimhan-Seshadri} we
have a Hermitian-Yang-Mills-Higgs\ metric $h$ on $E$. Exactly as in \cite[corollary 4.23]{hi87},
this must
split and we obtain a metric on $K$ such that the dual metric in the tangent
bundle has constant sectional curvature -4.
\end{proof}
\bsu{The Yang-Mills-Higgs\ Moduli Space}{ymhmod}
We now construct the moduli space of irreducible Yang-Mills-Higgs\ pairs, beginning with a
brief discussion of reducible Yang-Mills-Higgs\ pairs. Let $(A,\phi)$ be a reducible
Yang-Mills-Higgs\ pair on $E$. The reduction means that there is a splitting of $E$ into a
direct sum
$E=L\oplus L^*\Lambda$, where $L$ and $L^*\Lambda$ have the same degree, with
respect to which $A$ and $\phi$ are diagonal---the resulting Higgs $V$-bundle
is
polystable but not stable. The isotropy group of the pair $(A,\phi)$ is $S^1$
or $SU(2)$
according to whether the two summands are distinct or identical; since $\phi$
is
trace-free the latter is only possible if $\phi=0$.
Let us now consider the question of the existence of reductions. Obviously the
essential prerequisite is that there exists $L$ such that $L$ and $L^*\Lambda$ have the
same degree. If $a$ denotes the least common multiple of the $\alpha_i$'s then
the
degrees of line $V$-bundles have the form $s/a$ for $s\in {\Bbb Z}$ and all $s$
occur.
Thus a necessary condition for a reduction is that $c_1(\Lambda) = s/a$ with $s$
even. However, even when $s$ is even, there is a further constraint: the
isotropy of $E$ is fixed and, as before, the isotropy of $L$ must be described
by an isotropy vector $(\epsilon_i)$ with $c_1(L) \equiv
\sum_{i=1}^n\{\epsilon_i(x'_i-x_i)+(x'_i+x_i)\}/2\alpha_i \pmod{{\Bbb Z}}$, and so the isotropy
may obstruct finding $L$ with the appropriate $c_1(L)$. For general
$M$ and $E$ a reduction is impossible in \lq most' cases (see \cite{fs92} for details).
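For instance (with hypothetical orders of isotropy), if $M$ has two marked
points with $\alpha_1=2$ and $\alpha_2=3$ then $a=6$ and
\begin{eqnarray*}
c_1(\Lambda) = \frac{1}{2} = \frac{3}{6}\ \ (s=3 \mbox{ odd: no reduction}),
\qquad
c_1(\Lambda) = \frac{1}{3} = \frac{2}{6}\ \ (s=2 \mbox{ even}),
\end{eqnarray*}
so that in the second case the necessary condition holds, although the isotropy
constraint may still rule out a suitable $L$.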
{}From now on we make the assumption that the isotropy of $M$ and the degree
and
isotropy of $E$ are such that there are no reducible Yang-Mills-Higgs\ pairs on $E$.
We outline the deformation theory to show that the moduli space is a
finite-dimensional manifold. (For the purposes of this outline we
suppress the use of Sobolev spaces---this is standard; see {\rm e.\,g.\ } \cite{pa'82}.)
Fix an irreducible Yang-Mills-Higgs\ pair $(A,\phi)$. The `deformation
complex' at $(A,\phi)$ is then the following elliptic complex:
\beql{deformation}
0\to \Gamma(\frak{su}(E)) \stackrel{d_1}{\to} \Gamma(\frak{su}^1(E)) \oplus
\Omega^{1,0}(\frak{sl}(E))
\stackrel{d_2}{\to} \Gamma(\frak{su}^2(E)) \oplus \Omega^{1,1}(\frak{sl}(E))
\to
0,
\end{eqnarray}
where $\frak{su}^k(E)$ denotes the bundle of $k$-forms with values in the
skew-adjoint trace-free endomorphisms of $E$ and $\frak{sl}(E)$ denotes the bundle
of trace-free endomorphisms of $E$.
Here $d_1$, giving the linearisation of
the action, is given by
$$
d_1 : \psi \mapsto (d_A\psi,\ [\phi,\psi])
$$
and $d_2$, giving the linearisation of the Yang-Mills-Higgs equations, by
$$
d_2 : (A',\phi') \mapsto (d_AA' + [\phi',\phi^*] +[\phi,\phi'^*],\
\o\partial_A\phi' + [(A')^{0,1},\phi]).
$$
We use the orbifold Atiyah-Singer index theorem \cite{ka81} to calculate the
index of \refeq{deformation} as $6(g-1) +
2(n-n_0)$. We note that the zeroth and second cohomology groups, $H^{0}$ and
$H^{2}$, of the complex vanish---for $H^{0}$ this follows from the
irreducibility of $(A,\phi)$ and for $H^{2}$ the duality argument given by
Hitchin will suffice. Hence the first cohomology group has dimension $ 6(g-1)
+
2(n-n_0)$. Moreover the Kuranishi method shows that a neighbourhood of zero in
$H^1$ is a local model for the moduli space and hence
the moduli space is a smooth complex manifold of dimension $ 6(g-1) +
2(n-n_0)$.
\bth{moduli} Let $M$ be an orbifold Riemann surface of negative Euler
characteristic and $E\to M$ a fixed complex rank 2 $V$-bundle.
\begin{enumerate}
\item Suppose that $E$ is equipped with a Hermitian metric and admits no
reducible Yang-Mills-Higgs\ pairs. Then the moduli space of Yang-Mills-Higgs\ pairs on $E$ with fixed
determinants, ${\cal M}(E,A_\Lambda)$, is a complex manifold of dimension $
6(g-1) + 2(n-n_0)$.
\item Suppose that $E$ admits no Higgs $V$-bundle structures
which are polystable but not stable. Then the moduli space of stable Higgs
$V$-bundle
structures on $E$ with fixed determinants is a complex manifold of
dimension $6(g-1) + 2(n-n_0)$.
\end{enumerate}
\eth
\bre{roots}
In the smooth case there are essentially only two moduli spaces (of which only
one is smooth), according to the parity of the degree. In the orbifold case,
how many moduli spaces are there? Clearly it is sufficient to consider only
one
topological $\Lambda$ in each class under the equivalence $\Lambda \sim \Lambda
L^2$, for any topological line $V$-bundle $L$---`square-free' representatives
for each class will be discussed in \refsu{reprep}. A further subtlety in the
orbifold case is the possibility of non-trivial topological square roots of the
trivial line $V$-bundle, or simply \de{topological roots}: if $L$ is a
topological root then there is a map on moduli ${\cal M}(E,A_\Lambda)
\leftrightarrow
{\cal M}(E\otimes L,A_\Lambda)$ by tensoring by $L$, which fixes $\Lambda$ but
alters
the topology of $E$. For $L$ to be a topological root necessarily $c_1(L)=0$
and $L$ has `half-trivial' isotropy, {\rm i.\,e.\ } the isotropy is 0 or $\alpha/2$ at each
marked point. If we consider topological line $V$-bundles of the form $L=
\otimes_{\alpha_i {\rm\ even}}L_i^{\delta_i\alpha_i/2}$, for $\delta_i \in
{\Bbb Z}$ where the $L_i$ are the point $V$-bundles of \refsu{orbdiv}, then it
is clear that $L$ is a topological root provided $c_1(L)=\sum\delta_i/2 =0$.
If
we let $n_2$ denote the number of marked points where the isotropy is even,
then, provided $n_2 \ge 1$, there are $2^{n_2-1}$ topological roots. It
follows
that for each topological $\Lambda$, if $n_2 \ge 1$, there will be $2^{n_2-1}$
different topological $E$'s giving essentially the same moduli space. We will
see another manifestation of this in \refsu{reprep}. \end{remark}
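For instance (a hypothetical minimal case), suppose exactly two marked points
have even order, say $\alpha_1 = \alpha_2 = 2$, so that $n_2 = 2$. Then the
non-trivial topological root is $L = L_1L_2^{-1}$:
\begin{eqnarray*}
c_1(L) = \frac{1}{2} - \frac{1}{2} = 0, \qquad
c_1(L^2) = c_1(L_1^2L_2^{-2}) = 1 - 1 = 0,
\end{eqnarray*}
with $L$ having isotropy $\alpha_i/2 = 1$ at each marked point and $L^2$ having
trivial isotropy, so that $L^2$ is topologically trivial; together with the
trivial root this gives $2^{n_2-1} = 2$ topological roots, as claimed.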
Recall that the tangent space to the moduli space is given by the first
cohomology of the
deformation complex \refeq{deformation}, {\rm i.\,e.\ } by $\ker{(d_1^*)}\cap\ker{(d_2)}$.
This space
admits a natural $L^2$ metric and, just as in
\cite[theorems 6.1 \& 6.7]{hi87}, we have the following result.
\bpr{metric}
Let $E$ be a fixed rank 2 Hermitian $V$-bundle over an orbifold
Riemann surface of negative Euler characteristic and suppose that $E$ admits no
reducible
Yang-Mills-Higgs\ pairs. Then the natural $L^2$ metric on the moduli space ${\cal
M}(E,A_\Lambda)$ is
complete and hyper-K\"ahler.
\end{proposition}
\bsu{The Yang-Mills-Higgs\ equations and Equivariance}{ymhequ}
Here we sketch how many {\em but not all} of the previous results of this
section can be treated by equivariant arguments. Further details for this
subsection can be found in \cite{na91}.
An orbifold Riemann surface with negative Euler characteristic, $M$, has a
topological orbifold covering by a surface \cite{sc'83} and so its universal
covering is necessarily a surface with negative Euler characteristic.
Pulling-back the complex structure we find that the universal covering is
necessarily $D^2$, the unit disk, with $\pi_1^V(M)$ a group of automorphisms
acting properly discontinuously. In other words $\pi_1^V(M)$ is a co-compact
Fuchsian group or, in the terminology of \cite{fo52}, an \de{$F$-group}.
Thinking of $D^2$ as the hyperbolic upper half-plane or Poincar\'e disk, the
elements of $\pi_1^V(M)$ act by orientation-preserving isometries and so
we get a compatible Riemannian metric of constant sectional curvature on $M$.
This is just \refco{negative curvature}. In this context we need the following
result of \cite{fo52}.
\bprn{Fox}{Fox}
If $\Gamma$ is an $F$-group then $\Gamma$ has a normal subgroup of finite
index, containing no elements of finite order.
\end{proposition}
\bco{smooth covering}
Let $M$ be an orbifold Riemann surface with negative Euler characteristic. Then
there
exists a smooth Riemann surface, $\widehat{M}$, with negative Euler
characteristic, together
with a
finite group, $F$, of automorphisms of $\widehat M$, such that $M=F\backslash\widehat
M$.
\end{corollary}
The important point here is that the covering is {\em finite} and hence
$\widehat M$ is compact.
The existence result of \refth{Narasimhan-Seshadri} follows from the
corresponding result on $\widehat M$, \cite[theorem 4.3]{hi87}, using an averaging
argument (compare \cite{gp91}). We will always use the notation that objects
on
$\widehat M$ pulled-back from $M$ under the covering map $\widehat M\to M$ will be
denoted by a `hat'; $\widehat{\ \ }$. In this notation the pull-back of a
$V$-bundle $E \to M$ becomes $\widehat E \to \widehat M$, and so on. For the
equivariant argument it is easiest to fix the Higgs $V$-bundle structure on $E$
and vary the metric; therefore, rather than suppose that a {\em Hermitian}
structure on $E$ is given, we temporarily suppose that a {\em holomorphic}
structure on $E$ (and hence on $\widehat E$) is given. We will show that if
$(E,\phi)$ is stable then $(\widehat E,\widehat\phi)$ is polystable and admits a
Hermitian-Yang-Mills-Higgs\ metric which is $F$-invariant and so descends to the required metric on
$E$.
\bpr{polystable}
Let $(E,\phi)$ be a stable Higgs $V$-bundle and let $(\widehat E,\widehat\phi)$ be
the
pull-back to $\widehat M$. Then $(\widehat E,\widehat\phi)$ is polystable.
\end{proposition}
\begin{proof}
Suppose first that $(\widehat E,\widehat\phi)$ is {\em not semi-stable}. Then there
is a unique
destabilising Higgs sub-$V$-bundle $L=L_{\widehat E}$ and the action of $F$ cannot
fix $L$.
Therefore for some $f\in F$ we have that $f(L) \ne L$. However $f(L)$ is a
Higgs
sub-$V$-bundle of $(\widehat E,\widehat\phi)$ (because $\widehat\phi$ commutes with the
action of $f\in F$) and has the same degree as $L$. This contradicts the
uniqueness of
$L$. So $(\widehat E,\widehat\phi)$ is semi-stable. Suppose it is {\em not stable}.
Then again there
is a destabilising Higgs sub-$V$-bundle $L= L_{\widehat E}$ (not necessarily
unique).
As before $L$ cannot be fixed by $F$ and so we obtain, for some $f\in F$, a
Higgs
sub-$V$-bundle $f(L) \ne L$ of the same degree as $L$. Let $g:f(L)\to \widehat
E/L$
be the composition of the inclusion of $f(L)$ into
$\widehat E$ with the projection onto $\widehat E/L$: $g$ is a homomorphism
between two line bundles of the same degree and hence either
zero or an isomorphism. Since $f(L) \ne L$ the map $g$ cannot be zero and hence
$f(L)=\widehat
E/L$. Since $f(L)$ is actually a Higgs sub-$V$-bundle, $(\widehat E,\widehat\phi)$
is
a direct sum $(L \oplus f(L),\widehat\phi_{L}\oplus\widehat\phi_{f(L)})$ and so
is polystable as claimed. \end{proof}
\bpr{metrics exist}
Let $(E,\phi)$ be a stable Higgs $V$-bundle and let $(\widehat E,\widehat\phi)$ be
the pull-back to $\widehat M$. Then the polystable Higgs $V$-bundle $(\widehat
E,\widehat\phi)$
admits a Hermitian-Yang-Mills-Higgs\ metric which is $F$-invariant (and unique up to scale).
\end{proposition}
\begin{proof}
Certainly $(\widehat E,\widehat\phi)$ admits a Hermitian-Yang-Mills-Higgs\ metric (by \refpr{polystable}
and \cite[theorem 4.3]{hi87}). By averaging, the Hermitian-Yang-Mills-Higgs\ metric can be supposed
$F$-invariant. \end{proof}
An $F$-invariant Hermitian-Yang-Mills-Higgs\ metric descends to $(E,\phi)$, where it
trivially still satisfies the Hermitian-Yang-Mills-Higgs\ condition. We can satisfy the
determinant-fixing condition by a choice of scalar multiple and so we
obtain the desired existence result---\refth{Narasimhan-Seshadri}.
Suppose again that a Hermitian, rather than holomorphic, structure on $E$ is
given. We recall that Hitchin proves that if $\widehat E$ has odd degree then
there is a smooth moduli space ${\cal M}(\widehat E,\widehat A_\Lambda)$ of complex
dimension $6(\widehat g-1)$. The pull-back map $(A,\phi) \mapsto (\widehat
A,\widehat\phi)$ defines a map from Higgs pairs on $E$ to $F$-invariant Higgs
pairs
on $\widehat E$---what can be said about the corresponding map on moduli? Suppose
that $(A,\phi)$ is an irreducible Yang-Mills-Higgs\ pair on $E$. The first point to note
is
that $(\widehat A,\widehat\phi)$ may be reducible, by the analogue of
\refpr{polystable} for pairs. For simplicity, we will ignore this possibility
in our discussion---we suppose that there are topological obstructions to the
existence of reducible Yang-Mills-Higgs\ pairs on $\widehat E$.
\ble{regular lifts} Suppose that $(A,\phi)$ is an irreducible Yang-Mills-Higgs\ pair
on $E$ with an irreducible lift. Suppose further that for some
$g\in \widehat{\cal G}$, of determinant 1, $g(\widehat A,\widehat\phi)$ is
$F$-invariant. Then
$f^{-1}gf=\pm g$ for all $f\in F$.
Conversely, given $g\in \widehat{\cal G}$ of determinant 1 such that $f^{-1}gf=\pm
g$ for all
$f\in F$, $g(\widehat A,\widehat\phi)$ is irreducible and $F$-invariant.
\end{lemma}
\begin{proof}
Since $(\widehat A,\widehat\phi)$ is $F$-invariant we
know that $fd_{\widehat A} = d_{\widehat A}f \and f\widehat\phi = \widehat\phi f $ for any
$f\in F$. Since the same is also true of $g(\widehat A,\widehat\phi)$, it follows
that
$d_{\widehat A} = (g^{-1}f^{-1}gf)(d_{\widehat A})(f^{-1}g^{-1}fg)$ and similarly
for the Higgs field. Since $(A,\phi)$ is a stable pair it follows
(\refpr{stable
regular}) that $\pm g = f^{-1}gf$. The converse is clear.\end{proof}
Let $\widehat{\cal G}^F$ be the subgroup of $\widehat{\cal G}$ consisting of $F$-invariant
elements of determinant 1 and let $\widehat{\cal G}^{\pm}$ denote that of elements
$g\in \widehat{\cal G}$ of determinant 1 such that, for all $f\in F$,
$f^{-1}gf=\pm
g$. Clearly either $\widehat{\cal G}^{\pm}=\widehat{\cal G}^F$ or $\widehat{\cal G}^F <
\widehat{\cal
G}^{\pm}$ with even index. (In fact these groups will be equal under quite
mild
hypotheses, which amount to the vanishing of a certain equivariant
${\Bbb Z}_2$-characteristic class---see \cite{na91} and compare \cite[proposition
1.8,
part iii)]{fs92}.) If these groups are unequal then $f^{-1}gf = -g$ for some
$f\in F$ and $g\in \widehat{\cal G}$ of determinant 1---but such a $g$ cannot be
close to $\pm 1$ and so does not enter the local description of the moduli
space
(compare \cite[theorem 4.1]{pa'82}). At an irreducible $F$-invariant pair
$(\widehat A,\widehat\phi)$ the group $F$ acts on the
deformation complex. The pull-back map induces a commutative diagram of
deformation complexes and it follows immediately that ${\cal M}(E,A_\Lambda)$
covers a submanifold of ${\cal M}(\widehat E,\widehat A_\Lambda)$ with covering group
$\widehat{\cal G}^{\pm}/\widehat{\cal G}^{F}$.
\bth{sub}
Let $M$ be an orbifold Riemann surface of negative Euler
characteristic and $E\to M$ a fixed complex rank 2 $V$-bundle. Let $\widehat E$
be
the pull-back of $E$ under the identification $M = F\backslash \widehat M$
of \refco{smooth covering}.
\begin{enumerate}
\item Suppose that $E$ is equipped with a Hermitian metric and $\widehat E$ with
the pulled-back metric and that $E$ admits no
reducible Yang-Mills-Higgs\ pairs. If $\widehat E$ has odd degree then, under pull-back,
the moduli space of Yang-Mills-Higgs\ pairs with fixed determinants on $E$, ${\cal
M}(E,A_\Lambda)$, covers a
submanifold of the corresponding moduli space on $\widehat E$ with
covering group $\widehat{\cal G}^{\pm}/\widehat{\cal G}^{F}$ (with
$\widehat{\cal G}^\pm$ and $\widehat{\cal G}^F$ as above). If $\widehat E$ has even degree
then this remains true for those classes of Higgs pairs which are irreducible
on $\widehat
E$.
\item Suppose that $E$ admits no Higgs $V$-bundle structures
which are polystable but not stable. If $\widehat
E$ has odd degree then, under pull-back, the moduli space of stable Higgs
$V$-bundle
structures with fixed determinants on $E$ covers a submanifold of the
corresponding moduli
space on $\widehat E$ with covering group $\widehat{\cal G}^{\pm}/\widehat{\cal G}^{F}$
(with
$\widehat{\cal G}^\pm$ and $\widehat{\cal G}^F$ as above). If
$\widehat E$ has even degree then this remains true for those classes of Higgs
$V$-bundle
structure which are stable on $\widehat E$.
\end{enumerate}
\eth
Notice that when $\widehat M$ is a hyperelliptic surface of
genus 2, branched over 6 points of the Riemann sphere, the dimensions of the
two moduli spaces are equal (a simple arithmetic check shows that this is the
only case in which this happens).
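Indeed, equating the dimension formulae gives $6(g-1) + 2(n-n_0) =
6(\widehat g - 1)$, {\rm i.\,e.\ } $n - n_0 = 3(\widehat g - g)$. When $M$ is the sphere
with six marked points of order 2 and $F = {\Bbb Z}_2$,
\begin{eqnarray*}
\chi(\widehat M) = 2\left( 2 - 6\left(1 - \frac{1}{2}\right)\right) = -2,
\end{eqnarray*}
so $\widehat g = 2$ and, assuming all six marked points contribute
({\rm i.\,e.\ } $n_0 = 0$), $n - n_0 = 6 = 3(\widehat g - g)$: both moduli spaces
have dimension 6.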
\bse{The topology of the moduli space}{top}
We now give some results on the topology of the moduli space using the Morse
function $(A,\phi)\stackrel{\mu}{\to}||\phi||_{L^2}^2$, following \cite[\S
7]{hi87}. Notation and assumptions remain as before; in particular, we suppose
that $E$ admits no reducible Yang-Mills-Higgs\ pairs, so that the moduli space ${\cal M} = {\cal M}
(E,A_\Lambda)$ is smooth and recall the definitions of the integers $n_\pm$
and
$l$ from \refsu{orbint}.
The function $(A,\phi)\stackrel{\mu}{\to}||\phi||_{L^2}^2=2{\rm i} \int \mbox{tr\,}(\phi
\phi^*)$ is invariant with respect to the circle action
$e^{{\rm i}\theta}(A,\phi)=(A,e^{{\rm i}\theta}\phi)$ and $d\mu(Y) = -2{\rm i}
\omega_1(X,Y)$ where $X$ generates the $S^1$-action and $\omega_1$ is as in
\cite[\S 6]{hi87}. The map $\mu$ is proper and there is an extension of
\cite[proposition 7.1]{hi87}. To describe it we need to consider pairs
$(m,(\epsilon_i))$ where $m$ is an integer and $(\epsilon_i)$ is an isotropy
vector---such pairs describe topological sub-$V$-bundles of $E$, with isotropy
described by $(\epsilon_i)$ and degree $m + \sum_{i=1}^n
\{\epsilon_i(x'_i-x_i)+(x'_i+x_i)\}/(2\alpha_i)$ (see {\rm e.\,g.\ } \refre{subbundles}).
\bth{Morse} Let $E$ be a fixed rank 2 Hermitian $V$-bundle over an orbifold
Riemann surface of negative Euler characteristic and suppose that $E$ admits no
reducible
Yang-Mills-Higgs\ pairs. If $g=0$ then
suppose that
$n-n_0\ge 3$. Let $\mu$ be as above: then, with the notations established
above,
\begin{enumerate}
\item\label{critical values} $\mu$ has critical values 0 and
$2\pi\{ 2 m -l + \sum_{i=1}^n \{\epsilon_i(x'_i-x_i)/\alpha_i\} \}$ for an integer $m$
and
isotropy vector $(\epsilon_i)$ with
\begin{eqnarray*}
l < 2m + \sum_{i=1}^n \frac{\epsilon_i(x'_i-x_i)}{\alpha_i} \le l + 2g - 2 +
\sum_{i=1}^n\frac{\epsilon_i(x_i' - x_i)}{\alpha_i} + n_-;
\end{eqnarray*}
\item the minimum $\mu^{-1}(0)$ is a non-degenerate critical manifold of index
0 and is
diffeomorphic to the space of stable $V$-bundles with fixed determinants and
\item the other critical manifolds are also
non-degenerate and are $2^{2g}$-fold coverings of
$S^r\wo M$, where $r = l- 2m +2g -2 + n_-$. Moreover, they
are of index $2\{2m -l + g - 1 + n_+ \}$.
\end{enumerate}
\eth
\begin{proof}
The critical points are the fixed points of the induced circle action on ${\cal M}$.
Because we are taking quotients by the gauge group, these correspond to pairs
$((A,\phi),\lambda)$ where $\lambda : S^1 \to {\cal G}$ such that, for all $\theta$,
$\lambda(e^{{\rm i} \theta})d_A\lambda(e^{-{\rm i} \theta}) = d_A$ and $\lambda(e^{{\rm i}
\theta})\phi\lambda(e^{-{\rm i} \theta}) = e^{{\rm i} \theta}\phi$. If $\phi=0$ then,
holomorphically, we simply get stable $V$-bundles. If $\phi\ne 0$ then
certainly $\lambda(e^{{\rm i} \theta})\ne 1$ for $\theta \not\equiv 0 \pmod{2\pi}$.
The
first equation now implies that the stabiliser ${\cal G}_A$ is non-trivial and $A$
is
reducible to a $U(1)$-connexion. Consequently, as a holomorphic $V$-bundle,
$E$
is decomposable (so, in particular, not stable) and can be written $L\oplus L^*
\Lambda$. If we write $\phi = \left( \begin{array}{cc} t & u\\v & -t
\end{array}\right)$ and $\lambda(e^{{\rm i} \theta})= \left( \begin{array}{cc}
\mu_\theta & 0\\0 & \mu_\theta^{-1} \end{array}\right)$ with respect to this
splitting then the second equation implies $t=0$ and either $u=0$ or $v=0$.
Replacing $L$ by $L^* \Lambda$ if necessary, we can suppose that $u=0$ and that
$v\in H^0(KL^{-2} \Lambda)$---$v$ is holomorphic from the self-duality
equations.
The remaining term of the Yang-Mills-Higgs\ equations is
$*(F_A + [\phi,\phi^*] )= -\pi{\rm i} d I_E$, where $d = c_1(\Lambda)$.
Writing $*F_{A_L} = *F-\pi{\rm i} d$ in
terms of the above decomposition, so that $*F_{A_{L^*\Lambda}} =
-*F-\pi{\rm i} d$, we find that $F = v \wedge \o v$ and
\begin{eqnarray*}
\deg L
=\frac{{\rm i}}{2\pi}\int(F - *\pi{\rm i} c_1(\Lambda)) = \frac{{\rm i}}{2\pi}\int (v\wedge
\o v) +
\frac{c_1(\Lambda)}2 = \frac{\mu}{4\pi} + \frac{c_1(\Lambda)}{2}.
\end{eqnarray*}
Since $\mu> 0$ for $\phi\ne 0$, we have $2\deg L > c_1(\Lambda)$ and $L=L_E$,
the
destabilising
sub-$V$-bundle of
$E$. Moreover, because $v\ne 0$ we must have $h^0(KL^{-2}\Lambda)\ge
1$ (compare \refth{stable pairs}).
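Notice that this recovers the critical values of \refpa{critical values}: we
have $\mu = 2\pi\{2\deg L - c_1(\Lambda)\}$ and, for $L = L_{(m,(\epsilon_i))}$,
\begin{eqnarray*}
2\deg L = 2m + \sum_{i=1}^n \frac{\epsilon_i(x'_i - x_i) + (x'_i + x_i)}{\alpha_i},
\end{eqnarray*}
so that, granted the normalisation $c_1(\Lambda) = l + \sum_{i=1}^n (x_i +
x'_i)/\alpha_i$, we obtain $\mu = 2\pi\{2m - l + \sum_{i=1}^n
\epsilon_i(x'_i - x_i)/\alpha_i\}$.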
Now, for any $(m,(\epsilon_i))$ let $L_{(m,(\epsilon_i))}$ be the
corresponding
topological sub-$V$-bundle of $E$. Consider pairs $(m,(\epsilon_i))$ with
$2c_1(L_{(m,(\epsilon_i))})> c_1(\Lambda)$ and
set $L=L_{(m,(\epsilon_i))}$ and $E=L\oplus L^*\Lambda$.
This occurs as a stable pair $(E,\phi)$ provided $L$ admits a holomorphic
structure with $h^0(KL^{-2}\Lambda) \ge 1$, and the Higgs field $\phi$ is then
given by $v\in H^0(KL^{-2}\Lambda)\setminus\{0\}$ (compare \refth{stable
pairs},
\refpa{nsde} and \refpr{counter}).
To see whether a given topological $L=L_{(m,(\epsilon_i))}$ admits an
appropriate holomorphic
structure we use our results from \refsu{higalg}: by \refle{chi} we have
$\chi(KL^{-2} \Lambda) =
l - 2m + g - 1 + n_-$. It follows that $r=
c_1 (\wo{ KL^{-2}\Lambda }) = l - 2m + 2g -2 + n_-$.
Hence, supposing that $r \ge 0$, for each effective
(integral) divisor of divisor order $r$ (if $r=0$ then for the empty divisor)
we obtain a
holomorphic structure on $\wo{K L^{-2}\Lambda}$ with a holomorphic section
determining the divisor (determined up to multiplication by elements of
${\Bbb C}^*$). Hence we get a
holomorphic structure on $K L^{-2}\Lambda$ with holomorphic section $v$ and all
holomorphic sections arise in this way. Placing a corresponding holomorphic
structure on
$L$ requires a choice of holomorphic square root and there are $2^{2g}$ such
choices. For each root $L$ the pair $(E,v) =(L\oplus L^*\Lambda,v)$ is clearly
stable by construction. The section $v$ is determined by the divisor up to a
multiplicative constant
$\lambda\ne 0$ but $(L\oplus L^*\Lambda,v)$ and $(L\oplus L^*\Lambda,\lambda v)$ are in
the same orbit under
the action of the complexified gauge group and hence
equivalent. Two distinct divisors determine distinct stable pairs, so that the
critical set is a $2^{2g}$-fold covering of the set of effective divisors of
degree $r =
l - 2m + 2g -2 + n_-$; that is, a $2^{2g}$-fold covering of
$S^r\wo M$ (a point if $r=0$).
Let $E=L\oplus L^*\Lambda$ for $L=L_{(m,(\epsilon_i))}$, as above. The subset
$U=\left\{ \phi \in
H^0({\rm End\,}_0(E)\otimes K)\ :\ (E,\phi)\mbox{ is stable } \right\}$ is acted upon
freely by
${\rm Aut\,}_0(E)/\{\pm 1\}$, where ${\rm Aut\,}_0(E)$ is the group of holomorphic
automorphisms of
determinant 1 (see
\refpr{stable regular}). The quotient $U/({\rm Aut\,}_0(E)/\{\pm 1\})$ is a complex
manifold of dimension
$3g-3+n-n_0$. So through each point $P\in {\cal M}$ there passes a
$(3g-3+n-n_0)$-dimensional isotropic
complex submanifold $U/({\rm Aut\,}_0(E)/\{\pm 1\})$, invariant under $S^1$: it is
thus {\em Lagrangian}.
Suppose $P\in {\cal M}$ is fixed under the $S^1$-action and $P=(E,\phi)$, where
$E=L\oplus L^*\Lambda$, $\phi
= \left( \begin{array}{cc}0 & 0 \\ v & 0 \end{array}\right)$, as above. The
homomorphism $\lambda$
is given by $\lambda(\theta) = \left( \begin{array}{cc}e^{-{\rm i}\theta/2} & 0 \\
0 & e^{{\rm i}\theta/2}
\end{array}\right)$ with respect to this decomposition. Now ${\rm End\,}_0(E) =
L^{-2}\Lambda\oplus
L^2\Lambda^*\oplus {\Bbb C}$ and $\lambda(\theta)$ acts as
$(e^{{\rm i}\theta},e^{-{\rm i}\theta},1)$. Hence
$\lambda(\theta)$ acts with negative weight solely on $H^0(KL^2\Lambda^*) \subset
H^0({\rm End\,}_0(E)\otimes
K)$. As $\lambda(\theta)$ acts on $\phi$ by multiplication by $e^{{\rm i}\theta}$
there are no negative
weights on $H^0({\rm End\,}_0(E)).\phi$ and hence we find, as in \cite{hi87}, that the
index is $2
h^0(KL^2\Lambda^*) = 2\{2m -l + g - 1 + n_+ \}$, by
\refle{chi}. \end{proof}
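As a routine check, the vanishing of $h^1$ implicit in this count follows from
Serre duality: $h^1(KL^2\Lambda^*) = h^0(L^{-2}\Lambda)$ and, by the stability
inequality $2c_1(L) > c_1(\Lambda)$,
\begin{eqnarray*}
c_1(L^{-2}\Lambda) = c_1(\Lambda) - 2c_1(L) < 0,
\end{eqnarray*}
so $h^0(L^{-2}\Lambda) = 0$ and $h^0(KL^2\Lambda^*) = \chi(KL^2\Lambda^*)$.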
{}From this, the work of \cite{fr59} and general Morse-Bott theory
\cite{ab'99} we can, in
principle, calculate the Betti numbers---see \cite{by}.
We content ourselves with \refco{topology}, below, for which we need the
following preliminary lemma.
\ble{index0}
There is exactly one critical manifold of index 0 and this
is connected and simply-connected.
\end{lemma}
\begin{proof}
\refth{Morse} shows that if $g>0$ then the space of stable $V$-bundles
is the only index 0 critical manifold and this is connected and
simply-connected (even when $g=0$) by \cite[theorem 7.11]{fs92}.
When $g=0$, critical manifolds of index 0 other than the moduli of
stable $V$-bundles may occur: these have the form $S^{r}\wo M \cong
{\Bbb C}\P^{r}$ and so are also connected and simply-connected.
It remains to show that exactly one of the possibilities is
non-empty in each case. Making allowances for differences in notation,
the following is implicit in \cite[theorem 4.7]{fs92}:
the space of stable $V$-bundles is empty if and only if there exists a
vector $(\epsilon_i)$ with $n_+ +l \equiv 1 (2)$ and
\beql{empty}
n_+ - \sum_{i=1}^n\frac{\epsilon_i(x'_i-x_i)}{\alpha_i} < 1-g.
\end{eqnarray}
Since the left-hand side of \refeq{empty} is clearly not less than zero we
see that the space of stable $V$-bundles is non-empty whenever $g>0$.
When $g=0$, \refth{Morse} shows that the critical manifolds of index 0 other
than the moduli of stable $V$-bundles consist precisely of the
$V$-bundles considered in \refpr{counter}. The number of such critical
manifolds
is the number of topological types
$L_{(m,(\epsilon_i))}$ satisfying the criteria of \refpr{counter}, which,
using the ideas of \refle{bounds}, is
\beql{count}
\# \left\{ (\epsilon_i)\ :\ n_+ + l\equiv 1 (2) \quad\mbox{and}\quad n_+
-
\sum_{i=1}^n\frac{\epsilon_i(x'_i-x_i)}{\alpha_i} < 1 \right\},
\end{eqnarray}
where $(\epsilon_i)$ varies over all isotropy vectors.
Comparing \refeq{count} to \refeq{empty} we see that exactly one
of the two types of critical manifold must occur.
Moreover, we claim that the number in \refeq{count} is at most 1---this
is sufficient to establish the lemma.
To prove the claim suppose, without loss of generality, that $n_0=0$.
Observe that it is an easy exercise to show that
if $t_1, \dots, t_{n}\in (0,1)$ are such that
$\sum_{i=1}^{n} t_i <1$ then at most one $t_i$ can be replaced by
$1-t_i$ with the sum remaining less than 1. Let
\begin{eqnarray*}
t_i = \frac{1+\epsilon_i}{2} - \frac{\epsilon_i(x'_i - x_i)}{\alpha_i}
\end{eqnarray*}
so that $\sum_{i=1}^n t_i = n_+ - \sum_{i=1}^n \epsilon_i(x'_i - x_i)/
\alpha_i$ and changing the sign of $\epsilon_i$ simply sends $t_i$ to
$1-t_i$. The observation applies to show that this sum can be less than
1 for at most two vectors $(\epsilon_i)$ and these cannot have $n_+$ of
the same parity. Hence the count in \refeq{count} is at most 1, as
claimed.
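To spell out the flipping step used above: if $S = \sum_{i=1}^{n} t_i < 1$ and
$t_j$ is replaced by $1 - t_j$ then the sum becomes
\begin{eqnarray*}
S - t_j + (1 - t_j) = S + 1 - 2t_j,
\end{eqnarray*}
which is less than 1 precisely when $t_j > S/2$; two indices $j\ne k$ with
$t_j, t_k > S/2$ would give $t_j + t_k > S$, so at most one such replacement is
possible.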
\end{proof}
\bco{topology}
The moduli space ${\cal M}$ is non-compact---except in the case $g=0$ and
$n-n_0=3$ when it is a point---and connected and simply-connected.
\end{corollary}
\begin{proof}
The non-compactness follows from the fact that the critical manifolds cannot be
maxima except if $g=0$ and $n-n_0 = 3$. This is because the critical manifolds
have index $ i = 2\left\{2m -l + g - 1 + n_+ \right\}$ and (real) dimension $2r
= 2\left\{ l - 2m + 2g -2 + n_- \right\}$ and $2r+i = 6g - 6 +2(n-n_0)$, which
is exactly half the (real) dimension of the moduli space. The connectedness
and
simple-connectedness follow from the analogous facts for the unique critical
manifold of index 0 (\refle{index0}) and the fact that the other Morse indices
are all even and strictly positive.
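Explicitly, the dimension count above uses $n_+ + n_- = n - n_0$:
\begin{eqnarray*}
2r + i = 2\left\{ l - 2m + 2g - 2 + n_- \right\} + 2\left\{ 2m - l + g - 1 +
n_+ \right\} = 6g - 6 + 2(n - n_0).
\end{eqnarray*}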
\end{proof}
\bse{The Determinant Map}{det}
Recall that $M$ is an orbifold Riemann surface with negative Euler
characteristic, with $E\to M$ a fixed $U(2)$-$V$-bundle. We assume that $E$
admits no reducible Yang-Mills-Higgs\ pairs so that the moduli space is smooth.
Thinking of the moduli space as a space of stable Higgs $V$-bundles, there is a
holomorphic gauge-invariant map $ (A,\phi) \mapsto \det(\phi) $ which descends
to a holomorphic map $ \det : {\cal M}(E,A_\Lambda) \to H^0(K^2).$ Hitchin showed
that in the smooth case this map is proper, surjective and makes ${\cal M}$ a
completely integrable Hamiltonian system. Moreover he showed that when $q \in
H^0(K^2)$ has simple zeros the fibre $\det^{-1}(q)$ is biholomorphic to the
Prym
variety of the double covering determined by $\sqrt{-q}$ \cite[theorem
8.1]{hi87}. We will see that things are similar but a little more involved in
the orbifold case: the first significant observation is that $h^0(K^2) = 3g
- 3 + n$---this is half the dimension of the moduli space exactly when $n_0 =
0$. For this reason it will be useful to suppose that $n_0 = 0$. (In
\refsu{detred} we will show that the image of the determinant map is contained
in a canonical $(3g - 3 +n-n_0)$-dimensional subspace of $H^0(K^2)$ and thus
all
cases can be reduced to the case $n_0 = 0$.) In addition, there are two
special
cases which we exclude: when $g=0$, $n=3$ the determinant map is identically
zero, and when $g=1$, $n=1$ we have a special case which leads to a breakdown
in
our methods---this case is dealt with separately in \refsu{detspe}.
We summarise our results in the following theorem (proofs are for the most part
discussed in the remainder of this section; the details which have been omitted
are exactly as in \cite[\S 8]{hi87}). We believe that a similar result was
obtained by Peter Scheinost.
\bth{determinant map}
Let $E$ be a fixed
rank 2 Hermitian $V$-bundle over an orbifold Riemann surface of negative Euler
characteristic, with
$n-n_0>3$ if $g=0$. Suppose further that $E$ admits no reducible Yang-Mills-Higgs\ pairs.
Then the determinant
map on the moduli space of Yang-Mills-Higgs\ pairs on $E$ with fixed determinants
\begin{eqnarray*}
\det : {\cal M}(E,A_\Lambda) \to H^0(K^2)
\end{eqnarray*}
has the following properties: \begin{enumerate} \item $\det$ is proper; \item
the image of $\det$
lies in a
canonical $(3g - 3 +n-n_0)$-dimensional subspace $H^0(\b M;K_{\b M}^2)\subseteq
H^0(K^2)$ and
$\det$ surjects onto $H^0(\b M;K_{\b M}^2)$;
\item with respect to $\det : {\cal M}(E,A_\Lambda) \to H^0(\b
M;K_{\b M}^2)$, ${\cal M}(E,A_\Lambda)$ is a completely integrable Hamiltonian
system; \item for a generic $q$ in
the image of $\det$, the fibre $\det^{-1}(q)$ is biholomorphic to a torus of
dimension
$3g-3+n-n_0$---this can be identified with the Prym variety of the covering
determined by
$q$ except when $g=n-n_0=1$, when it is identified with the Jacobian; \item
${\cal M}(E,A_\Lambda)$ is a fibrewise compactification of $T^*{\cal N}(E,A_\Lambda)$
with respect
to the map $\det : T^*{\cal N}(E,A_\Lambda) \to H^0(\b M;K_{\b M}^2)$, where
${\cal N}(E,A_\Lambda)$ is the
moduli space of Yang-Mills connexions on $E$ with fixed determinants.
\end{enumerate}
\eth
It seems possible to obtain these results by arguing directly with orbifold
methods, but it is often simpler to
translate this orbifold problem into one about parabolic bundles; we review the
necessary results in
the next subsection.
\bsu{Parabolic Higgs bundles}{parhig}
Recall the basic facts concerning the correspondence between $V$-bundles over
$M$ and parabolic
bundles over $\wo M$ \cite{fs92}. Let $\wo E$ be a rank $2$ holomorphic vector
bundle over $\wo M$. A
\de{quasi-parabolic structure} on $\wo E$ is, for each marked point $p \in \{
p_1,\dots,p_n\}$, a flag
in $\wo E_p$ of the form \begin{eqnarray*}
\wo E_{p} = {\Bbb C}^2 \supset {\Bbb C} \supset 0, &\mbox{ or }& \wo E_p = {\Bbb C}^2
\supset 0.
\end{eqnarray*}
A flag of the second form is said to be \de{degenerate}. A quasi-parabolic
bundle $\wo E$ is a \de{parabolic bundle} if to each flag of the first form
there is attached a pair of weights $0\le \lambda < \lambda' < 1$, and to each flag of
the second form there is attached a single (multiplicity 2) weight $0\le \lambda =
\lambda' < 1$.
There is a notion of parabolic degree involving the degree of $\wo E$ and the
weights. A basis $\{ e,e' \}$ for the fibre at a parabolic point is said to
\de{respect the quasi-parabolic structure} if either the flag is degenerate or
$e'$ spans the intermediate subspace in the flag. An endomorphism $\psi$ of a
parabolic bundle is a \de{parabolic endomorphism} if for each $p$, with
respect to a basis which respects the quasi-parabolic structure, $\psi_p$
satisfies $(\psi_p)_{12} = 0$ whenever $\lambda < \lambda'$.
Let $E$ be a rank $2$ holomorphic $V$-bundle over $M$. Recall
that by convention $x \le x'$ (if we assume that $n_0 = 0 $ then there is
strict inequality). For a
line $V$-bundle $L$, we can consider the passage $L\mapsto \wo L$
(\refsu{orbdiv}) as a
smoothing process and the construction of parabolic bundles follows similar
lines: for a marked
point $p$ we consider \begin{eqnarray*} (E|_{M \setminus\{ p \}}) \cup_\Psi D^2 \times {\Bbb C}^2,
\end{eqnarray*} with clutching
function $\Psi$ given, in local coordinates, by its ${\Bbb Z}_\alpha$-equivariant
lifting \beql{patch}
\begin{array}{rcl} \widehat\Psi : (D^2 \setminus\{0\}) \times {\Bbb C}^2 &\to& D^2
\times {\Bbb C}^2\\
(z,(z_1,z_2)) &\mapsto& (z^\alpha,(z^{-x}z_1,z^{-x'}z_2)). \end{array} \end{eqnarray} Now
a holomorphic
section of $(D^2 \times {\Bbb C}^2)/(\sigma \times \tau)$ is given by holomorphic
maps $s_j : D^2\to {\Bbb C}$,
for $j=1,2$, invariant under the action of ${\Bbb Z}_\alpha$. As with
\refeq{Taylor}, Taylor's theorem implies that $s_j(z) = z^{x_j}
\wo{s}_j(z^\alpha)$, where $\wo{s}_j$ is a
holomorphic function $D^2 \to {\Bbb C}$ and we use the temporary notations $x_1=x$
and $x_2=x'$.
Under the map $\Psi$ defined by \refeq{patch} we simply get
a section of $(D^2 \setminus \{ 0 \})\times {\Bbb C}^2$ which is given by the
functions $\wo{s}_j(w)$ and
hence extends to a holomorphic section of $D^2\times {\Bbb C}^2$. In other words the
map $\Psi$ is an
isomorphism between the sheaves of germs of holomorphic sections. Repeating
this construction about
each marked point, we get a holomorphic bundle $\wo E\to \wo M$ corresponding
to the holomorphic
$V$-bundle $E\to M$.
In fact $\wo E$ has a natural parabolic structure as follows: working in our
local coordinates
about a particular marked point (which respect the $V$-structure) we define
weights $\lambda=x/\alpha$ and
$\lambda'=x'/\alpha$. Define a flag in ${\Bbb C}^2$ so that the smallest proper flag space
is
the subspace of ${\Bbb C}^2$ on which $\tau$ acts like $\sigma^{x'}$. The
corresponding quasi-parabolic
structure on $\wo E_p$ is then given by the image of this flag---notice that
this is degenerate if
and only if $x = x'$. With the weights $\lambda,\lambda'$ it is clear that $\wo E$ is
a parabolic bundle.
(Whilst it is not true in general that $\Lambda^2\wo E = \wo{\Lambda}$, the
bundle $\Lambda^2\wo E$ is determined by $\Lambda$ and the isotropy so that our
determinant-fixing condition on $E$ translates to one on $\wo E$.)
We quote the following result of \cite{fs92}.
\bprn{Furuta-Steer}{V-parabolic}
For a fixed
orbifold Riemann surface $M$, the correspondence $E \mapsto \wo E$ gives a
bijection between
isomorphism classes of rank 2 holomorphic $V$-bundles and those of rank 2
parabolic bundles over
$\wo M$ with rational weights of the form $x/\alpha$. Moreover, the induced map
${\cal O}(E) \mapsto
{\cal O}(\wo E)$ is an isomorphism of analytic sheaves. \end{proposition}
Now consider what happens to Higgs fields under the passage $E \mapsto \wo E$:
we use a local uniformising coordinate $z$, centred on a given marked point,
and let $w=z^\alpha$ be the
local holomorphic coordinate on $\wo M$. There is a Taylor series expansion
as before: if $\phi$ is a Higgs field on $E$ then in our local coordinates
\beql{Taylor Higgs}
\phi_{ij}dz &=&\left\{\begin{array}{ll}
z^{x_i-x_j-1}\wo\phi_{ij}(z^\alpha)dz&\quad\mbox{ if }x_i>x_j\and\\ z^{\alpha +
x_i-x_j-1}\wo\phi_{ij}(z^\alpha)dz&\quad\mbox{ if }x_i \le x_j,\\
\end{array}\right. \end{eqnarray}
where $\wo\phi_{ij}$ are holomorphic functions and we again use the temporary
notations
$x_1=x$ and $x_2=x'$.
To transfer this across to $\wo E$ simply notice that away from the marked
point
the clutching
function $\Psi$ defined by \refeq{patch} is a bundle isomorphism and so acts on
the Higgs field by
conjugation. Conjugating by $\Psi$ we obtain \beql{Taylor Higgs 2}
\phi_{ij}^\Psi dz &=&
z^{x_j-x_i}\phi_{ij}dz\nonumber\\ &=& \left\{ \begin{array}{ll}
\wo\phi_{ij}(w)\frac{dw}{\alpha w}&\quad\mbox{ if }x_i>x_j\and\\
\wo\phi_{ij}(w)\frac{dw}{\alpha}&\quad\mbox{ if }x_i\le x_j,\\ \end{array}\right.
\end{eqnarray}
with $x_1=x$ and $x_2=x'$. We take this to
define a \de{parabolic Higgs field}. Denote the parabolic Higgs field
constructed in this way by
$\wo{\phi}$. In Simpson's language \cite{si90}, $\wo{\phi}$ is just a filtered
regular Higgs field.
This defines a correspondence between Higgs $V$-bundles and parabolic Higgs
bundles (with appropriate parabolic weights). In order to make this a
correspondence between the stable objects we simply have to check that the
invariant subbundles correspond---this is easy. Thus we can apply many of our
preceding results to spaces of stable parabolic Higgs bundles.
\bsu{Reduction to the case $n_0 = 0$}{detred}
Suppose that at some marked points the $V$-bundle $E$ has $x = x'$ so that $n_0
> 0$.
Number the marked points so that these are the last $n_0$. We can twist by
a line $V$-bundle to make the isotropy zero at such points. Thus, as far as
$E$ is
concerned, the orbifold structure at these points is irrelevant and we suppose
that
$M$ only has $n-n_0$ marked points. More precisely, we can construct $\b M$
from $M$
using the smoothing process that gives $\wo M$ but only at the last $n_0$
marked
points. We write $\b E$ for $E$ considered as a $V$-bundle over $\b M$.
We also have to consider the canonical $V$-bundle $K$. Notice that $K = K_{\b
M}\otimes_{i=n-n_0+1}^{n} L_{i}^{\alpha_i-1}$ so that there is a natural inclusion
$H^0(K^2_{\b M}) \hookrightarrow H^0(K^2_M)$ given by $ s \mapsto
s\otimes_{i=n-n_0+1}^{n} s_{i}^{2\alpha_i-2}$. (Here the $L_i$ are point
$V$-bundles and $s_i$ are the canonical sections, as in \refsu{orbdiv}.) We
identify $H^0(K^2_{\b M})$ with its image in $H^0(K^2_M)$. From \refeq{Taylor
Higgs} it is clear that $\det(\phi)$ vanishes to order $2\alpha -2$ in $z$ at the
last $n_0$ marked points (since $x = x'$ there). It follows that
$\det(\phi)\in
H^0(K^2_{\b M})$ for all Higgs fields $\phi$ on $E$. Moreover, if we pass from
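Explicitly, when $x = x'$ every entry of $\phi$ in \refeq{Taylor Higgs} carries
the factor $z^{\alpha-1}$, so that
\begin{eqnarray*}
\det(\phi) = -z^{2\alpha-2}\left( \wo\phi_{11}(z^\alpha)^2 +
\wo\phi_{12}(z^\alpha)\wo\phi_{21}(z^\alpha) \right)dz^2,
\end{eqnarray*}
which indeed vanishes to order at least $2\alpha - 2$ in $z$.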
$\phi$ to $\b\phi$ by applying the smoothing process for Higgs fields at the
last $n_0$ marked points, then it is clear that $(\b E,\b\phi)$ is a Higgs
$V$-bundle over $\b M$. Notice that by \refeq{Taylor Higgs 2} $\b\phi$ is
holomorphic at the last $n_0$ marked points because there we have $x = x'$.
The process outlined above is invertible. For the proofs in the remainder of
this section therefore, although we will be careful to state results for $q\in
H^0(\b M,K_{\b M}^2)$ and $n_0\ge 0$, we can assume that $n_0 = 0$ without loss
of
generality.
\bsu{Generic fibres of the determinant map}{gendet}
We assume that $2g+n-n_0>3$. Let $q \in H^0(K_{\b M}^2)$ and consider the
corresponding section $\wo q \in H^0(\wo{K_{\b M}^2})$. We want to suppose
that
$\wo q$ has simple zeros and that none of the zeros of $\wo q$ occurs at a
marked point (of $\b M$) but first we would like to know that such behaviour is
generic.
\ble{generic} The generic section $\wo q \in H^0(\wo{K_{\b M}^2})$ has simple
zeros, none of which is at a marked point of $\b M$, provided $2g+n-n_0>3$.
\end{lemma}
\begin{proof}
We can assume that $n_0=0$. Notice that $\wo{K^2} = K_{\wo M}^2
\otimes_{i=1}^{n}L_{p_i}$, where $L_{p_i}=L_i^{\alpha_i}$ is the point bundle
associated to a marked point $p_i$. We know that the $\wo q$ with simple zeros
form a non-empty Zariski-open set in the complete linear system $|K_{\wo M}^2
\otimes_{i=1}^{n}L_{p_i}|$. The extra condition that none of the zeros is at a
marked point is obviously also an open condition, so we only need to check that
the resulting set is non-empty.
If $n=1$ then we only need to show that the marked point is not a base-point of
the linear system. Similarly, if there are several marked points then it
suffices to show that none is a base point, because then the sections vanishing
at a given marked point cut out a hyperplane in the projective space $|K_{\wo
M}^2 \otimes_{i=1}^{n}L_{p_i}|$. Using \cite[IV, proposition 3.1]{ha77}, this
is equivalent to showing that $h^0(K_{\wo M}^2 L_{p_j}^{*}
\otimes_{i=1}^{n}L_{p_i}) = h^0(K_{\wo M}^2 \otimes_{i=1}^{n}L_{p_i}) - 1$ for
each $j = 1,\dots,n$---this follows from an easy Riemann-Roch calculation,
provided $2g+n>3$. \end{proof}
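For the record, the computation is as follows: both line bundles have degree
exceeding $2g-2$ when $2g+n>3$, so $h^1$ vanishes and Riemann--Roch gives
\begin{eqnarray*}
h^0(K_{\wo M}^2 \otimes_{i=1}^{n}L_{p_i}) &=& (4g-4+n) - g + 1 = 3g-3+n,\\
h^0(K_{\wo M}^2 L_{p_j}^{*} \otimes_{i=1}^{n}L_{p_i}) &=& (4g-5+n) - g + 1 =
3g-4+n.
\end{eqnarray*}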
\ble{generic2} Let $\phi$ be a Higgs field on $E$ with $\det(\phi)=q$ and $\wo
q$ generic in the
sense of \refle{generic}. Then $\wo q$ has simple zeros at each marked point
where $x=x'$.
Moreover, at every marked point of $M$ we have $\wo\phi_{21}\ne 0$ and
$\wo\phi_{12}\ne
0$, where
$\wo\phi_{21}$ and $\wo\phi_{12}$ are as in \refeq{Taylor Higgs}. \end{lemma} \begin{proof}
Using
\refeq{Taylor
Higgs} we have that, in our local coordinates around a marked point,
\beql{Taylor Higgs 3} \phi =
\left( \begin{array}{rr} z^{\alpha-1}\wo\phi_{11}(z^\alpha) & z^{\alpha + x - x'
-1}\wo\phi_{12}(z^\alpha) \\
z^{x' - x -1}\wo\phi_{21}(z^\alpha)& -z^{\alpha-1}\wo\phi_{11}(z^\alpha)
\end{array}\right)dz,
\end{eqnarray}
assuming that $x \ne x'$. If $x' = x$ then the $(2,1)$-term is $z^{\alpha
-1}\wo\phi_{21}(z^\alpha)dz$. Here the $\wo\phi_{ij}$ are holomorphic functions.
If $\wo q$ is generic then it is non-zero at a marked point of $\b M$ and has
at
most a simple zero at a marked point where $x=x'$---in fact there will
be a zero at such a point. It follows that $\det(\phi) = q$ must vanish
exactly to order $\alpha -2$ in $z$ in the first case and to order $2\alpha -2$
in the second. Hence $\wo\phi_{21}(0) \ne 0$ and $\wo\phi_{12}(0) \ne 0$ at
each marked point of $M$. \end{proof}
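In the notation of the proof above, \refeq{Taylor Higgs 3} gives explicitly
\begin{eqnarray*}
\det(\phi) = \left( -z^{2\alpha-2}\,\wo\phi_{11}(z^\alpha)^2 -
z^{\alpha-2}\,\wo\phi_{12}(z^\alpha)\wo\phi_{21}(z^\alpha) \right)dz^2,
\end{eqnarray*}
so vanishing exactly to order $\alpha - 2$ forces $\wo\phi_{12}(0)\wo\phi_{21}(0)
\ne 0$.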
Henceforth we assume that $\wo q$ is a generic section, as in
\refle{generic}, and construct $\det^{-1}(q)$. For the purposes of exposition
we also assume that $n_0 = 0$. We face two problems in defining the
spectral variety of $\phi$ or $\wo\phi$---the first is that $\wo\phi$ has
simple
poles at the marked points and the second is that $\wo q$ is not the
determinant
of $\wo\phi$. Let
$s_{p_i}=s_i^{\alpha_i}$ be the canonical section of the point-bundle $L_{p_i}$
associated to a marked point $p_i$ and let $s_0 = \otimes_{i=1}^{n}s_{p_i}$ be
the corresponding section of $\otimes_{i=1}^{n}L_{p_i}$. Define
\beql{oophi}
\oo q = \wo q s_0 \in H^0(K_{\wo M}^2
\otimes_{i=1}^{n}L_{p_i}^2) &\mbox{and}& \oo\phi = \wo\phi s_0 \in
\mbox{ParEnd}_0(\wo
E)\otimes K_{\wo
M} \otimes_{i=1}^{n}L_{p_i}.
\end{eqnarray}
It is clear that $\det(\oo\phi)=\oo q$ and that $\oo q$ has simple zeros
(including one at each marked point). Eventually we will need to reverse the
construction of $\oo\phi$ from $\phi$; this can be done for a given $\oo\phi
\in
\mbox{ParEnd}_0(\wo E)\otimes K_{\wo M} \otimes_{i=1}^{n}L_{p_i}$ provided
$\oo\phi$ obeys the obvious vanishing conditions at each marked point.
The square root $\sqrt{-\oo q}$ defines a smooth Riemann surface $\widehat M$ with
a double covering $\pi:\widehat M \to \wo M$ branched at the zeros of $\oo q$.
Therefore there are $4g-4 + 2n$ branch-points and the Riemann-Hurwitz
formula gives the genus of $\widehat M$ as $\widehat g = 4g-3 + n$. We set
$s=\sqrt{-\oo q}$---a section of $\pi^*(K_{\wo
M}\otimes_{i=1}^{n}L_{p_i})$---and $\widehat\phi = \pi^*\oo\phi$. Moreover, if
$\sigma$ is the involution interchanging the leaves of $\widehat M$ then $\sigma^*
s = -s$ and $\widehat\phi$ is $\sigma$-invariant.
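In more detail: $\oo q$ is a section of $K_{\wo M}^2\otimes_{i=1}^{n}L_{p_i}^2$,
a line bundle of degree $2(2g-2)+2n$, so there are $4g-4+2n$ simple branch
points and Riemann--Hurwitz gives
\begin{eqnarray*}
2\widehat g - 2 = 2(2g-2) + (4g-4+2n), &\mbox{whence}& \widehat g = 4g-3+n.
\end{eqnarray*}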
In order to reverse the passage from $E$ to $\wo E$ we have to keep track of
the
quasi-parabolic data. The following lemma is useful here. (Applying the
involution $\sigma$, the same result holds for $\sigma^*L =\ker(\widehat\phi -
s)$.)
\ble{quasi structure}
If $\phi$ is a Higgs field on $E$ with $\det(\phi) = q$ and $\wo
q$ generic in the sense of \refle{generic}, then the kernel of $\widehat\phi + s$
(with $s$,
$\widehat\phi$ defined as above) is a line subbundle $L$ of $\pi^*\wo E$ and, at a
marked point (of
$\b M$) $p$, $0 \subsetneq L_{\pi^{-1}(p)} \subsetneq \pi^*\wo E_{\pi^{-1}(p)}
= \wo E_p$ describes
the quasi-parabolic structure. \end{lemma}
\begin{proof}
At a marked point, using \refeq{Taylor Higgs 2} and
\refeq{oophi}, we write \beql{matrix oophi} \oo\phi = \left( \begin{array}{rr}
w\wo\phi_{11}(w) &
w\wo\phi_{12}(w) \\ \wo\phi_{21}(w)& -w\wo\phi_{11}(w)
\end{array}\right)\frac{dw}\alpha,
\end{eqnarray} with, from
\refle{generic2}, $\wo\phi_{21}(0) \ne 0$ and $\wo\phi_{12}(0) \ne 0$.
This means that $\oo\phi$ is not zero at a marked point. Similarly, using the
fact that $\wo q$ has simple zeros, $\oo\phi$ is non-zero at {\em every}
branch
point. Now consider $\widehat\phi + s$: since $\det(\widehat\phi +
s)\equiv 0$ this mapping has nullity 1 or 2 at every point. Because
$\widehat\phi$
is trace-free and $s$ is scalar it follows that zeros of $\widehat\phi + s$ can
only occur at zeros of $s$ {\rm i.\,e.\ } at the ramification points. However, since
$\oo\phi$ is non-zero at a branch point $p$ it is impossible for $\widehat\phi +
s$
to be zero at $\pi^{-1}(p)$. So $\widehat\phi + s$ is nowhere zero and the kernel
is a line bundle. Finally, if $p$ is a marked point it is clear from
\refeq{matrix oophi} that $\ker(\widehat\phi + s)_{\pi^{-1}(p)}$ is spanned by
$\left( 0 , 1 \right)^T$ in our local coordinates. The result about the
quasi-parabolic structure follows.
\end{proof}
\bth{fibres of det} Suppose that $2g+n-n_0>3$.
Given $q \in H^0(\b M,K_{\b M}^2)$ such that $\wo q$ is generic in the sense of
\refle{generic} the
fibre of the determinant map $\det^{-1}(q)$ is biholomorphic to the Prym
variety of the covering
$\pi:\widehat M \to \wo M$ determined by $q$ (via $\wo q$). \eth
\begin{proof}
Since the proof is familiar \cite[theorem 8.1]{hi87} we only sketch it. We
assume $n_0=0$. Fix $q$ such that $\wo q$ is generic, let $\widehat M$ be as
constructed above, and fix also a line bundle $P$ over $\widehat M$ such that
$P\sigma^* P = \pi^*(K_{\wo M}^*\Lambda^2\wo E\otimes_{i=1}^{n}L_{p_i}^* )$.
Suppose that $(E,\phi)$ is a Higgs $V$-bundle over $M$ with $\det(\phi)=q$.
Consider the parabolic bundle $\wo E$ and $\oo\phi\in \mbox{ParEnd}_0(\wo
E)\otimes K_{\wo M} \otimes_{i=1}^{n}L_{p_i}$ with determinant $\oo q$ defined
as above. Now set $L = \ker(\widehat\phi + s)$ and notice that $L\sigma^*L \cong
\pi^*(K_{\wo M}^*\Lambda^2\wo E\otimes_{i=1}^{n}L_{p_i}^*)$. Since $P$ was chosen
to
have the same property $LP^*$ is an element of the Prym variety.
Conversely, we consider $L$ such that $LP^*$ is a given point in the Prym
variety. The push-forward sheaf $\pi_*{\cal O}(L)$ is locally free analytic of rank
2 and so defines a rank 2 holomorphic vector bundle $W$ over $\wo M$. There is
a natural quasi-parabolic structure on $W^*$ at a branch point $p$ because $W_p
= (J_1L)_{\pi^{-1}(p)}$ and there is a natural filtration of jets $ 0 \subset
L^*_{\pi^{-1}(p)} \subset (J_1L)^*_{\pi^{-1}(p)}.$ The Hecke correspondence for
quasi-parabolic bundles defines a rank 2 holomorphic bundle $W'^*$: that is,
the quasi-parabolic structure on $W^*$ defines a natural surjective map
${\cal O}(W^*) \surjarrow {\cal S}$, where ${\cal S}$ is a sheaf supported at the branch
points, and the kernel of this map is locally free analytic of rank 2 and so
defines $W'^*$.
This construction of $W'$ actually recovers $\wo E$: there is a natural map
${\cal O}(W) \to {\cal O}(W')$ which induces an inclusion $L \hookrightarrow \pi^*W'$.
Similarly there is an inclusion $\sigma^*L \hookrightarrow \pi^*W'$. As
subbundles of $\pi^*W'$, $L$ and $\sigma^*L$ coincide precisely on the
ramification points so that there is a map $L\oplus \sigma^*L \to \pi^*W'$
which
is an isomorphism away from the ramification points. It follows that $\Lambda^2W'
=
\Lambda^2\wo E$ and that $W' = \wo E$. Moreover, at a marked point $p$ the
inclusion $L_{\pi^{-1}(p)} \hookrightarrow \pi^*\wo E_{\pi^{-1}(p)} = \wo E_p$
gives the quasi-parabolic structure and so we recover the original
$V$-bundle $E$ (see \refpr{V-parabolic} and \refle{quasi structure}). We
recover the Higgs field simply by defining $\widehat\phi : \pi^*\wo E \to
\pi^*(\wo E\otimes K_{\wo M}\otimes_{i=1}^{n}L_{p_i})$ by $\widehat\phi(e) = \mp
se$ according as $e \in L$ or $e \in \sigma^* L$. Since this is
$\sigma$-invariant it descends to define $\oo\phi$ on $\wo M$---this is
trace-free with determinant $\oo q$ and recovers the old $\oo\phi$. At a
marked
point $p$, we have $\ker(\widehat\phi_{\pi^{-1}(p)}) = L_{\pi^{-1}(p)}$ and
hence,
in coordinates which respect the quasi-parabolic structure, the
$(1,2)$-, $(2,2)$- and $(1,1)$-components of $\oo\phi$ vanish at $p$ to first
order in $w$. Of course this is exactly the condition for $\oo\phi$ to define
$\wo\phi$ via \refeq{oophi} and to $\wo\phi$ there corresponds a Higgs field
$\phi$ on the $V$-bundle $E$.
Finally, note that if there were an $\oo\phi$-invariant subbundle $L'$ then
there
would be a section $t \in H^0(K_{\wo M}\otimes_{i=1}^{n}L_{p_i})$ such that for
any $l\in L'$, $\oo\phi(l) = tl$. Since $\oo\phi$ is trace-free it would
follow
that $\oo q = \det(\oo\phi) = -t^2$---contradicting the assumption that $\oo q$
has simple zeros. So $\oo\phi$ has no invariant subbundles and the same is
therefore true of $\wo\phi$ and $\phi$. \end{proof}
Notice that this shows that a Higgs field in the generic fibre of $\det$ leaves
no sub-$V$-bundle invariant (compare \refsu{higalg}).
\bsu{The case $g=n-n_0=1$}{detspe}
We briefly indicate how the preceding arguments can be modified to identify the
generic fibre of the determinant map when $g=n-n_0=1$. We outline the argument
working with $V$-bundles although the proofs again require translation to the
parabolic case. As before we simplify the exposition by supposing that $n_0=0$
so that there is a single marked point $p=p_1$.
\ble{invariants} If $g=n-n_0=1$ then every Higgs field has an
invariant sub-$V$-bundle. \end{lemma} \begin{proof} Since $h^0(K^2) = 1$ the natural squaring
map $H^0(K) \to H^0(K^2)$ is surjective. Thus, given any Higgs field $\phi$,
$\det(\phi) = -s^2$ for some $s \in H^0(K)$. Consider $\theta_\pm = \phi \pm
s$: if $\phi\ne 0$ this is non-zero (if $\phi = 0$ then there is nothing to
prove) but has determinant zero and so we have line $V$-bundles $L_\pm
\hookrightarrow E$ with $L_\pm \subseteq \ker \theta_\pm$. Clearly $L_\pm$ are
invariant, with $\phi$ acting on $L_\pm$ by multiplication by $\mp s$. \end{proof}
Since the squaring map is surjective, \refle{generic} certainly cannot hold in
this case---we now consider any non-zero determinant to be `generic'. Using
\refle{invariants} we see that any Higgs field with a generic ({\rm i.\,e.\ } non-zero)
determinant has two invariant sub-$V$-bundles $L_+$ and $L_-$.
Notice that $K = L_1^{\alpha_1 -1}$ and so sections of $K$ are multiples of the
canonical section
$s_1^{\alpha_1-1}$ and those of $K^2$ are multiples of $s_1^{2\alpha_1-2}$. Thus in
\refeq{Taylor Higgs 3} $\wo\phi_{11}(z^{\alpha_1})$ and exactly one of
$\wo\phi_{12}(z^{\alpha_1})$ and $\wo\phi_{21}(z^{\alpha_1})$ are non-zero at the
marked point, while the other must vanish to first order in $w=z^{\alpha_1}$. A
small local calculation using \refeq{Taylor Higgs 3} shows that $L_+$ and
$L_-$ have the same isotropy; it is $x$ if $\wo\phi_{21}(0)=0$ and $x'$ if
$\wo\phi_{12}(0) = 0$. Hence $L_+L_- \cong \Lambda L_{1}^{x - x'}$ or
$\Lambda L_1^{x' - x - \alpha}$, where the isotropy of $L_\pm$ is $x$ in the first
case and $x'$ in the second. Using these and stability, we calculate that
$c_1(L_\pm) = r/2 + x/\alpha$ or $(r-1)/2 + x'/\alpha$, respectively, where $c_1(E) =
r + (x + x')/\alpha$. Notice that the parity of $r$ determines the isotropy of
$L_\pm$. Thus a point in the generic fibre gives a point not of a Prym variety
but of the Jacobian $\mbox{Jac}_0{M}\cong T^2$ corresponding to $L_+$.
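As a consistency check, these degrees are compatible with the formulae for
$L_+L_-$ above: since $c_1(L_+) = c_1(L_-)$ and $c_1(\Lambda) = r + (x+x')/\alpha$,
\begin{eqnarray*}
2\left( \frac{r}{2} + \frac{x}{\alpha} \right) = r + \frac{2x}{\alpha} =
c_1(\Lambda L_1^{x-x'})
&\mbox{and}&
2\left( \frac{r-1}{2} + \frac{x'}{\alpha} \right) = r - 1 + \frac{2x'}{\alpha} =
c_1(\Lambda L_1^{x'-x-\alpha}).
\end{eqnarray*}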
Reversing the correspondence as in \refth{fibres of det} yields the following
result.
\bpr{special} If $g=n-n_0=1$ then for $q\in H^0(\b M,K^2_{\b
M})\setminus \{ 0 \}$ the fibre $\det^{-1}(q)$ is biholomorphic to the Jacobian
torus. \end{proposition}
\bsu{Non-stable $V$-Bundles in Fibres of the Determinant Map}{detnon}
We have a natural inclusion of the cotangent bundle to the moduli of stable
$V$-bundles into the
moduli of stable Higgs $V$-bundles, and we would like to show, following
\cite[\S 8 ]{hi87}, that in
fact we
have a fibrewise compactification with respect to the determinant map. Thus we
need to analyse the
fibres of the determinant map and check that, generically, the non-stable
$V$-bundles form
subvarieties of codimension at least 1. We wish to adapt Hitchin's argument
here
but there are additional complications and a new variant of the argument is
needed in the special case $g=n-n_0=1$.
\bpr{} Suppose that $2g+n-n_0>3$. For fixed, generic, $q\in H^0(\b
M,K_{\b M}^2)$ let ${\rm Prym}(\widehat M)$ be the Prym variety which is the fibre
of the determinant
map (\refth{fibres of det}). Then the points of ${\rm Prym}(\widehat M)$
corresponding to
non-stable $V$-bundles form a finite union of subvarieties of codimension at
least 1.
\end{proposition}
\begin{proof}
Suppose $n_0=0$ and consider $L_E\hookrightarrow E$ a destabilising
sub-$V$-bundle, with $\wo{L_E} \hookrightarrow \wo E$ {\em parabolically}
destabilising (see \cite{fs92} and \refsu{parhig}) and $L' = \pi^*\wo{L_E}$.
The outline of the argument is similar to that of \cite[\S 8]{hi87}---with
which
we assume familiarity---but there are two problems. Firstly, a sufficient
condition for lifts from $H^0(L'^*L^*\pi^*\Lambda^2\wo E)$ to $H^0(L'^*\pi^*\wo E)$
to be unique is $H^0(L'^*L)=0$ but this is not always the case if $g=0$.
However, {\em invariant} lifts will still be unique because
$H^0(L'^*L)\hookrightarrow H^0(L'^*\pi^*\wo E)$ is moved by the involution
$\sigma$. Secondly, because $\wo{L_E}$ is parabolic destabilising we can't fix
the degree of $L'^*L^*\pi^*\Lambda^2\wo E$ in the same way that Hitchin does. Let
the isotropy of $L_E$ be specified by an isotropy vector $(\epsilon_i)$. A
small computation with the stability condition shows
\begin{eqnarray*}
c_1(L'^*L^*\pi^*\Lambda^2\wo E) \le \sum_{i=1}^n\frac{\epsilon_i(x_i'-x_i)}{\alpha_i} + 2g
-2.
\end{eqnarray*}
Since $L_{\pi^{-1}(p)}$ gives the flag which describes the quasi-parabolic
structure at a marked point $p$, by \refle{quasi structure}, the subset of
$\pi^{-1}(\{p_1,\dots,p_n\})$ at which our section of $L'^*L^*\pi^*\Lambda^2\wo E$
vanishes is just $\pi^{-1}(\{p_i : \epsilon_i=1\})$. Hence, for given
$(\epsilon_i)$, it is more natural to consider sections of $(\otimes_{\{i :
\epsilon_i=1\}}L_i^*)L'^*L^*\pi^*\Lambda^2\wo E$ and these correspond to divisors
of
degree less than or equal to $\sum_{i=1}^n(\epsilon_i(x_i'-x_i)/\alpha_i) - n_+ +2g -2$.
For each $(\epsilon_i)$ (a finite number) we obtain a subvariety of the variety
of effective divisors and correspondingly a subvariety of the Prym variety of
codimension at least 1. \end{proof}
\bpr{} If $g=n-n_0=1$ then
for $q\in H^0(K^2_{\b M})\setminus \{ 0 \}$ there are only a finite number of
points in the fibre
$\det^{-1}(q)$ corresponding to non-stable $V$-bundles. \end{proposition}
\begin{proof}
Again, we consider a destabilising sub-$V$-bundle $L_E \hookrightarrow E$ and
the corresponding parabolic bundle $\wo{L_E}$. Since $\wo{L_E}$ is parabolic
destabilising $2c_1(\wo{L_E}) \ge c_1(\wo E) + 1$ or $2c_1(\wo{L_E}) \ge
c_1(\wo
E)$, according to whether $L_E$ has isotropy $x$ or $x'$.
Recall (from \refsu{detspe}) that $E$ has two $\phi$-invariant sub-$V$-bundles
$L_\pm$ and so is an extension $0 \to L_\pm \to E \to
L_\pm^*\Lambda \to 0$. Set $r=c_1(\wo E)$. The discussion in
\refsu{detspe} also shows that if $r$ is even then $c_1(\wo{L_\pm})=r/2$,
$L_\pm$ have isotropy $x$ and $\wo{L_+}\wo{L_-}\cong \Lambda^2\wo E$, while if $r$
is odd then $c_1(\wo{L_\pm}) = (r-1)/2$, $L_\pm$ have isotropy $x'$ and
$\wo{L_+}\wo{L_-}\cong \Lambda^2\wo EL_{p_1}^*$.
Consider the sequence of bundles
\begin{eqnarray*}
0 \to \wo{L_E}^*\wo{L_\pm} \to
\wo{L_E}^*\wo E \to \wo{L_E}^*\wo{L_\pm}^*\Lambda^2\wo E \to 0
\end{eqnarray*}
and the first three terms of the associated cohomology long exact sequence. By
assumption $H^0(\wo{L_E}^*\wo E)$ is non-zero so at least one of
$\wo{L_E}^*\wo{L_+}$ and $\wo{L_E}^*\wo{L_+}^*\Lambda^2\wo E$ must have a non-zero
section and the same is true with $L_-$ in place of $L_+$. If we had that
$H^0(\wo{L_E}^*\wo{L_\pm})\ne 0$ and $H^0(\wo{L_E}^*\wo{L_\pm}^*\Lambda^2\wo E)=0$
then the inclusion of $\wo{L_E}$ in $\wo E$ would have to factor through that
of
$\wo{L_\pm}$, which is impossible as $\wo{L_\pm}$ does not destabilise. So
$\wo{L_E}^*\wo{L_+}^*\Lambda^2\wo E$ and $\wo{L_E}^*\wo{L_-}^*\Lambda^2\wo E$ must have
non-zero sections. However, considering cases according to the parity of $r$
and the isotropy of $L_E$, we see that $c_1(\wo{L_E}^*\wo{L_\pm}^*\Lambda^2\wo
E)\le
0$. It follows that a non-stable $V$-bundle occurs only if $\wo{L_E} \cong
\wo{L_\pm}^*\Lambda^2\wo E$. Since $\wo{L_+}\wo{L_-}\cong \Lambda^2\wo E$
or $\wo{L_+}\wo{L_-}\cong \Lambda^2\wo EL_{p_1}^*$, it follows that
$\wo{L_+}^2\cong
\Lambda^2\wo E$ or $\wo{L_+}^2\cong \Lambda^2\wo EL_{p_1}^*$. Hence, if a non-stable
$V$-bundle occurs then $\wo{L_+}$ is one of the $2^{2g}=4$ possible square
roots of a given line bundle. \end{proof}
\bse{Representations and Higgs $V$-bundles}{rep}
Throughout this section $E\to M$ is a complex rank 2 $V$-bundle over an
orbifold
Riemann surface of negative Euler characteristic. We also suppose that a fixed
metric
and Yang-Mills connexion, $A_\Lambda$, are given on $\Lambda$.
\bsu{Stable Higgs $V$-bundles and Projectively Flat Connexions}{repsta}
Suppose that $E$ is given a Higgs $V$-bundle structure with Higgs field $\phi$,
compatible with $A_\Lambda$. Given a Hermitian metric on $E$ inducing the fixed
metric on $\Lambda$, there is a unique Chern connexion $A$ compatible with the
holomorphic and unitary structures and inducing $A_\Lambda$ on $\Lambda$. The
metric also defines an adjoint of $\phi$, $\phi^*$. Set \begin{eqnarray*} D = \partial_A +
\phi + \o\partial_A + \phi^*; \end{eqnarray*} this is a (non-unitary) connexion with
curvature $F_D = F_A + [\phi,\phi^*]$ and $D$ is projectively flat
if and only if the pair $(A,\phi)$ is Yang-Mills-Higgs. The
determinant-fixing condition on $D$ is simply that it induces the fixed ({\em
unitary}) Yang-Mills connexion $A_\Lambda$ in $\Lambda$.
Conversely, given a connexion $D$ (with fixed determinant) and a Hermitian
metric on $E$, inducing the fixed metric on $\Lambda$, we can decompose $D$
into
its $(1,0)$- and $(0,1)$-parts; $D=\partial_1 + \o\partial_2$. There are then
uniquely defined operators $\o\partial_1$ and $\partial_2$ (of types $(0,1)$
and
$(1,0)$ respectively) such that $d_1=\partial_1 + \o\partial_1$ and
$d_2=\partial_2 + \o\partial_2$ are unitary connexions. Define $\phi =
(\partial_1-\partial_2)/2$ and $d_A=(d_1+d_2)/2$ so that $\o\partial_A =
(\o\partial_1+ \o\partial_2)/2$. Clearly $(A,\phi)$ is a Higgs pair if and
only
if $\o\partial_A(\phi) = 0$, {\rm i.\,e.\ } $\phi$ is holomorphic; if we define $D''=
\o\partial_A + \phi$ then this condition becomes $D''^2 = 0$. Here $D''$ is a
first order operator which satisfies the appropriate $\o{\partial}$-Leibniz
rule. Moreover, if $D''^2=0$ then $(A,\phi)$ is Yang-Mills-Higgs\ if and only if $D$ has
curvature $- \pi {\rm i}\, c_1(\Lambda)\Omega I_E$.
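That $D''^2=0$ is exactly the holomorphicity of $\phi$ is a one-line check,
using only that $M$ has complex dimension 1:
\begin{eqnarray*}
D''^2 = (\o\partial_A + \phi)^2 = \o\partial_A^2 + \o\partial_A(\phi) +
\phi\wedge\phi = \o\partial_A(\phi),
\end{eqnarray*}
since $\o\partial_A^2$ is a $(0,2)$-form and $\phi\wedge\phi$ a $(2,0)$-form,
both of which vanish on a Riemann surface.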
{}From now on suppose that $D$ has curvature $- \pi {\rm i}\,c_1(\Lambda)\Omega
I_E$.
We call a Hermitian metric (with fixed determinant) \de{twisted harmonic} with
respect to $D$ if the resulting $D''$-operator satisfies $D''^2=0$. Using the
fact that the curvature of $D$ is $- \pi {\rm i}\,c_1(\Lambda)\Omega I_E$, a small
calculation shows that the condition for the metric to be twisted harmonic is
$F_1 = F_2$, where $F_i$ is the curvature of $d_i$, for $i=1,2$. If the metric
is twisted harmonic then $D''$ defines a Higgs $V$-bundle with respect to which
the metric is Hermitian-Yang-Mills-Higgs. Clearly the processes of passing from a Higgs $V$-bundle
to a projectively flat connexion and vice-versa are mutually inverse and
respect
the determinant-fixing conditions.
We prove an existence result for twisted harmonic metrics,
following \cite{do87}. The connexion $D$ on $E$ comes from a projectively flat
connexion
in the corresponding principal $GL_2({\Bbb C})$ $V$-bundle $P$ with
$E=P\times_{GL_2({\Bbb C})}{\Bbb C}^2$.
Hence $D$ determines a holonomy representation $\rho_D:\pi_1^V(M) \to
PSL_2({\Bbb C})$. Let ${\rm Herm}^+_2$ denote the $2\times 2$ positive-definite
Hermitian matrices (with the metric described in \cite[\S VI.1]{ko87}). The
corresponding $V$-bundle of Hermitian metrics on $E$ is just
$H'=P\times_{GL_2({\Bbb C})}{\rm Herm}_2^+$. Here $GL_2({\Bbb C})$ acts on ${\rm
Herm}^+_2$
by $h \mapsto \o{g}^T h g$, for $h\in {\rm Herm}_2^+$ and $g\in GL_2({\Bbb C})$.
This
is an action of $PSL_2({\Bbb C})$ and so $H'$ is flat and can be written as
$H'=H'_{\rho_D}={\cal H}^2 \times_{\rho_D} {\rm Herm}^+_2$ (where ${\cal H}^2$
is the universal cover of $M$). A choice of Hermitian metric on $E$ is a
section of $H'_{\rho_D}$, or equivalently a $\pi_1^V(M)$-equivariant map ${\cal H}^2 \to
{{\rm Herm\,}}_2^+$---the question is whether this map is harmonic in the sense that it
minimises energy among such maps.
Using the determinant-fixing condition, we suppose that the map to ${\rm
Herm}^+_2$ has constant determinant 1. We identify the subspace of ${\rm
Herm}_2^+ \cong GL_2({\Bbb C})/U(2)$ in which the image of the map lies with
$SL_2({\Bbb C})/SU(2) \cong {\cal H}^3$. So we consider sections of the flat ${\cal
H}^3$ $V$-bundle $H_{\rho_D}={\cal H}^2 \times_{\rho_D} {\cal H}^3$: the
sections of $H_{\rho_D}$ are precisely the types of map considered by Donaldson
in \cite{do87}. The condition that a metric $h$ be twisted harmonic will then
be precisely that it is given by a harmonic $\pi_1^V(M)$-equivariant map $\widehat
h: {\cal H}^2 \to {\cal H}^3$.
Donaldson shows that the Euler-Lagrange condition for the map $\widehat h$ to be
harmonic is just $d_A^*(\phi + \phi^*) = 0$ and moreover that, at least in the
smooth case and when $\rho_D$ is irreducible, such a harmonic map always
exists.
This Euler-Lagrange condition agrees with our definition of a twisted harmonic
metric. For the existence of such harmonic maps we either follow Donaldson's
proof directly or argue equivariantly, as in \refsu{ymhequ}, obtaining the
following results.
\bpr{Donaldson} Let $\rho_D:\pi_1^V(M) \to PSL_2({\Bbb C})$ be an irreducible
representation and $s_0$ a section of the flat ${\cal H}^3$ $V$-bundle
$H_{\rho_D}={\cal H}^2 \times_{\rho_D} {\cal H}^3$. Then $H_{\rho_D}$ admits a
twisted harmonic section homotopic to $s_0$. \end{proposition}
\bco{twisted exists} Let $\Lambda$ have a fixed Hermitian metric and compatible
Yang-Mills connexion. Given an irreducible $GL_2({\Bbb C})$-connexion $D$ on $E$
with
curvature $- \pi {\rm i}\, c_1(\Lambda)\Omega I_E$ and fixed determinant, $E$
admits
a Hermitian metric of fixed determinant which is twisted harmonic with respect
to $D$. Hence $D$ determines a stable Higgs $V$-bundle structure on $E$ with
fixed determinant, for which the metric is Hermitian-Yang-Mills-Higgs. \end{corollary}
\bco{} Let $E$ have a fixed Hermitian metric and let $\Lambda$ have a
compatible
Yang-Mills connexion. Let $D$ be an irreducible $GL_2({\Bbb C})$-connexion on $E$
with curvature $- \pi {\rm i}\, c_1(\Lambda)\Omega I_E$ and fixed determinant.
Then
there is a complex gauge transformation $g \in {\cal G}^c$, of determinant 1,
such that the {\em fixed} metric is twisted harmonic with respect to $g(D)$.
Hence $g(D)$ determines a stable Higgs $V$-bundle structure on $E$ with fixed
determinant. \end{corollary}
To identify the space of such projectively flat connexions modulo gauge
equivalence with our moduli space of Higgs $V$-bundles we have to consider the
actions of the gauge groups and the question of irreducibility. We have the
following result adapted from \cite[theorem 9.13 \& proposition 9.18]{hi87}.
\bpr{irreducibles} Let
$E\to M$ be a complex rank 2 $V$-bundle with a fixed Hermitian metric and
compatible Yang-Mills
connexion on the determinant line $V$-bundle $\Lambda$. Then the following
hold.
\begin{enumerate} \item\label{irreducible'} A Yang-Mills-Higgs\ pair $(A,\phi)$ (with fixed
determinant) is
irreducible if and only if the corresponding projectively flat
$GL_2({\Bbb C})$-connexion $D=\partial_A +
\o\partial_A + \phi + \phi^*$ is irreducible. \item\label{gauge} Two irreducible
$GL_2({\Bbb C})$-connexions
on $E$ with curvature $- \pi {\rm i}\, c_1(\Lambda)\Omega I_E$ (and fixed
determinant), $D$ and
$D'$, are equivalent under the action of ${\cal G}^c$ if and only if the
corresponding
Yang-Mills-Higgs\ pairs $(A,\phi)$ and $(A',\phi')$ are equivalent under the action of
$\cal G$.
\end{enumerate} \end{proposition}
\bsu{Projectively Flat Connexions and Representations}{reprep}
In the smooth case projectively flat connexions are described by
representations
of a universal central extension of
the fundamental group (see \cite{hi87}, also \cite[\S 6]{ab82}). However over
an orbifold
Riemann surface there is in general no {\em one} central extension which will
do \cite[\S
3]{fs92} but the determinant-fixing condition tells us that the
appropriate central extension to use is the fundamental group of the circle
$V$-bundle
$S(\Lambda)$. Let $(y_i)$ ($0\le y_i \le \alpha_i-1$) denote the
isotropy of a line $V$-bundle $L$ and let $b = c_1(L) - \sum_{i=1}^n (y_i/\alpha_i)$.
The orbifold fundamental group of $S(L)$ is
well-known (see, for instance, \cite[\S 2]{fs92}) and has presentation
\beql{circle}
\begin{array}{rcl} \pi_1^V(S(L)) &=& \langle a_1,b_1,\dots, a_g,b_g,q_1,\dots,
q_n,h \quad | \\ &
&\quad [a_j,h]=1, \ [b_j,h]=1,\ [q_i,h]=1,\ q_i^{\alpha_i} h^{y_i}=1,\ q_1\dots
q_n[a_1,b_1]\dots[a_g,b_g]h^{-b}=1 \rangle. \end{array} \end{eqnarray}
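For orientation, when $M$ is smooth ($n=0$) the elliptic generators $q_i$
disappear and \refeq{circle} reduces to the familiar central extension
\begin{eqnarray*}
\pi_1(S(L)) = \langle a_1,b_1,\dots, a_g,b_g,h \quad | \quad [a_j,h]=1,\
[a_1,b_1]\dots[a_g,b_g]h^{-b}=1 \rangle,
\end{eqnarray*}
with $b=c_1(L)$, as in \cite[\S 6]{ab82}.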
\bpr{representations} Let $\Lambda\to M$ be a line $V$-bundle with a fixed
Hermitian
metric and compatible Yang-Mills connexion. Let $S(\Lambda)$ be the
corresponding circle
$V$-bundle. Then there is a bijective correspondence between \begin{enumerate}
\item
conjugacy classes of irreducible representations $\pi_1^V(S(\Lambda)) \to
SL_2({\Bbb C})$ such
that
the generator $h$ in \refeq{circle} is mapped to $-I_2\in SL_2({\Bbb C})$ and \item
isomorphism
classes of pairs $(E,D)$, where $E$ is a $GL_2({\Bbb C})$ $V$-bundle with $\Lambda^2E
= \Lambda$
and $D$
is an irreducible $GL_2({\Bbb C})$ connexion on $E$ with curvature $- \pi {\rm i}\,
c_1(\Lambda)\Omega
I_E$ and inducing the fixed connexion on $\Lambda$. \end{enumerate} \end{proposition}
\begin{proof} The proof
can be carried over from \cite[theorem 4.1]{fs92} (compare also
\cite[theorem 6.7]{ab82}) except that we need to replace $U(2)$ with
$GL_2({\Bbb C})$ at each stage---only the unitary structure on the determinant line
$V$-bundle is
necessary for the proof. \end{proof}
Since \refpr{representations} insists that $h$ maps to
$-I_2$, it is sufficient to consider a central ${\Bbb Z}_2$-extension rather than the
central
${\Bbb Z}$-extension of $\pi_1^V(M)$ given by the presentation \refeq{circle}---this
is
equivalent to adding the relation $h^2=1$ to that presentation. Then it is only
the {\em
parity} of the integers $y_i$ and $b$ that matters. Something a little subtler
is true.
Recall \refre{roots}: it is sufficient to consider topological
$\Lambda$'s modulo the equivalence $\Lambda \sim \Lambda L^2$. Moreover, the
topology of
$\Lambda$ is specified by the $y_i$'s and $b$ (\refpr{v-bundles})---write
$\Lambda=\Lambda_{(b,(y_i))}$ to emphasise this. Clearly if $(b,(y_i)) \equiv
(b',(y'_i))
\pmod2$ (meaning that the congruence holds componentwise) then
$\Lambda_{(b,(y_i))} \sim
\Lambda_{(b',(y'_i))}$. However, if $\alpha_i$ is odd then $L$ can be chosen so
that
tensoring
by $L^2$ brings about a change $y_i \mapsto y_i+1$; if any $\alpha_i$ is even then
a change
$b
\mapsto b+1$ is possible similarly. These equivalences correspond to
group isomorphisms between the corresponding presentations \refeq{circle}, with
the added
relation $h^2=1$. Thus we normalise the $y_i$'s and $b$ to find exactly
one representative of each class, supposing that
\beql{normal}
y_i = \left\{\begin{array}{rl}
0 &\mbox{if $\alpha_i$ is odd;}\\
0,1&\mbox{if $\alpha_i$ is even;}
\end{array}\right.
\quad b = \left\{\begin{array}{rl}
0 &\mbox{if at least one $\alpha_i$ is even;}\\
0,1&\mbox{if no $\alpha_i$ is even.}
\end{array}\right.
\end{eqnarray}
This is equivalent to considering only the following \de{square-free}
topological $\Lambda$'s:
\beql{normal2}
\Lambda \in \left\{\begin{array}{rl}
\{\otimes_{\alpha_i {\rm\ even}}L_i^{\delta_i} : \delta_i =0,1 \} &\mbox{if at
least
one $\alpha_i$ is even;}\\
\{L^{\delta_0} : \delta_0 = 0,1\} &\mbox{if no $\alpha_i$ is even,}
\end{array}\right.
\end{eqnarray}
where $L$ has no isotropy with $c_1(L)=1$ and
the $L_i$ are the point $V$-bundles of \refsu{orbdiv}.
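As an illustration (the cone orders here are chosen purely for the example):
if $M$ has exactly two cone points, of orders $\alpha_1=2$ and $\alpha_2=3$, then
$\alpha_1$ is the only even order and \refeq{normal2} allows exactly two
square-free determinants, namely the trivial line $V$-bundle and $L_1$; if
instead both orders are odd then the two representatives are the trivial line
$V$-bundle and the isotropy-free $L$ with $c_1(L)=1$.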
An alternative way to understand these ${\Bbb Z}_2$-extensions of the fundamental
group is as
follows.
Since $SL_2({\Bbb C})$ double-covers $PSL_2({\Bbb C})$, any representation
$\rho_D:\pi_1^V(M)\to PSL_2({\Bbb C})$
induces a central ${\Bbb Z}_2$-extension of $\pi_1^V(M)$:
\beql{extensor}
0 \to {\Bbb Z}_2 \to \Gamma \to \pi_1^V(M) \to 0.
\end{eqnarray}
Since the group of central ${\Bbb Z}_2$-extensions of $\pi_1^V(M)$ is discrete, the
$\Gamma$ thus induced is constant over connected components of the representation
space.
So, given any $\rho_D$, we obtain an extension $\Gamma$: what invariants
$(b,(y_i))$ characterise these $\Gamma$'s and thus the central
${\Bbb Z}_2$-extensions
of $\pi_1^V(M)$? The answer is that $(b,(y_i))$ can be supposed to have one of
the normalised forms given by \refeq{normal} and so these parameterise the
central ${\Bbb Z}_2$-extensions of $\pi_1^V(M)$. This is because the image of each
generator of \refeq{circle} has exactly two possible lifts to $SL_2({\Bbb C})$ except
that $h$ must map to $-I_2$: choosing lifts at random, the relations
$q_i^{\alpha_i} h^{y_i}=1$ and $q_1\dots q_n[a_1,b_1]\dots[a_g,b_g]h^{-b}=1$ of
\refeq{circle} will be satisfied for exactly one choice of normalised
$(b,(y_i))$. By our previous discussion, this is exactly equivalent to
considering only the square-free $\Lambda$'s of \refeq{normal2}.
As well as topological types of determinant line $V$-bundles we need to
consider
topological types
of rank 2 $V$-bundles with the {\em same} determinant line
$V$-bundle---\refpr{representations}
deals with all topological types of $V$-bundles with the same determinant line
$V$-bundle
simultaneously. These types can be determined following the ideas of \cite[\S
4]{fs92}, as
follows. The various topological types are distinguished by the rotation
numbers associated to the
images of the elliptic generators $q_i$ of the presentation \refeq{circle}. By
this we mean that
the image of $q_i$ has conjugacy class described by the roots of its
characteristic polynomial, necessarily of the
form $e^{\pi{\rm i} r_i/\alpha_i}$, $e^{-\pi{\rm i} r_i/\alpha_i}$, for $0\le r_i \le \alpha_i$;
these
$r_i$ are the
\de{rotation numbers}. Notice that the relation $q_i^{\alpha_i} h^{y_i}=1$ means
that $r_i$
has the same parity as $y_i$ and this is the only {\em a priori}
restriction on $r_i$. Call an abstract set of rotation numbers $(r_i)$
\de{compatible
with $\Lambda$} if $r_i$ has the same parity as $y_i$. The result
is the following and the proof, using \refpr{v-bundles}, is easy.
\ble{rotation numbers} The
topological types
of $GL_2({\Bbb C})$ $V$-bundles $E$ with fixed determinant constructed in
\refpr{representations}
correspond to the rotation numbers $r_i$ associated to the images of the
elliptic
generators $q_i$
of the presentation \refeq{circle}. \end{lemma}
Denote the space of representations of $\pi_1^V(S(\Lambda))$ into $SL_2({\Bbb C})$,
sending the generator $h$ of \refeq{circle} to $-I_2$, by
${\rm Hom\,}^{-1}(\pi_1^V(S(\Lambda)),SL_2({\Bbb C}))$ and the {\em irreducible}
representations by ${\rm Hom\,}^{*,-1}(\pi_1^V(S(\Lambda)),SL_2({\Bbb C}))$, for a fixed
line
$V$-bundle $\Lambda$. For any set of rotation numbers $(r_i)$ (with $0\le r_i
\le \alpha_i$ and $r_i\equiv y_i \pmod2$) we have a corresponding subset
${\rm Hom\,}^{-1}_{(r_i)}(\pi_1^V(S(\Lambda)),SL_2({\Bbb C}))$ and, by
\refpr{representations} and the results of \refsu{repsta}, a bijection between
${\rm Hom\,}^{*,-1}_{(r_i)}(\pi_1^V(S(\Lambda)),SL_2({\Bbb C}))/SL_2({\Bbb C})$ and the moduli
space of stable Higgs $V$-bundles (with fixed determinants) on the topological
$E$ corresponding to the rotation numbers (\refle{rotation numbers}).
The representation space
${\rm Hom\,}^{*,-1}_{(r_i)}(\pi_1^V(S(\Lambda)),SL_2({\Bbb C}))/SL_2({\Bbb C})$ can be thought of
as the quotient of a set of $2g+n$ matrices subject to conditions corresponding
to the relations of \refeq{circle} and so has a natural topology; whether
this description endows it with the structure of a smooth manifold is by
no means immediate. Therefore we use the bijection with
the moduli space of stable Higgs $V$-bundles, which is easily seen to be a
homeomorphism, to define a manifold structure on this representation space. In
summary we have the following theorem.
\bth{} Let $M$ be an orbifold Riemann surface with negative Euler
characteristic. Let $\Lambda$ be a fixed line $V$-bundle over $M$ and $(r_i)$
a
set of rotation numbers compatible with $\Lambda$. Then the representation
space ${\rm Hom\,}^{*,-1}_{(r_i)}(\pi_1^V(S(\Lambda)),SL_2({\Bbb C}))/SL_2({\Bbb C})$ is a complex
manifold of dimension $6(g-1)+2(n-n_0)$, where $n_0$ is the number of rotation
numbers congruent to 0 (mod $\alpha$). \eth
\bre{twist again} In \refre{roots} we noted that twisting by a non-trivial
topological root $L$ induces a map ${\cal M}(E,A_\Lambda) \leftrightarrow
{\cal M}(E\otimes L,A_\Lambda)$, preserving the topology of $\Lambda$ but altering
that of $E$. On the level of representations there is an equivalent map.
Given
any element $\widehat\rho_D \in {\rm Hom\,}^{-1}_{(r_i)}(\pi_1^V(S(\Lambda)),SL_2({\Bbb C}))$
we
can obtain a representation with different rotation numbers and covering the
same $PSL_2({\Bbb C})$-representation, by altering the signs of the images of certain
of the generators of \refeq{circle}. We can change the sign of
$\widehat\rho_D(q_i)$ (bringing about a change of rotation number $r_i \mapsto
\alpha_i-r_i$) provided $\alpha_i$ is even and provided an even number of such
changes
is made---these conditions preserve the relations $q_i^{\alpha_i} h^{y_i}=1$ and
$q_1\dots q_n[a_1,b_1]\dots[a_g,b_g]h^{-b}=1$. \end{remark}
When there are no reducible points we can apply, among other results,
\refpr{metric} and \refco{topology}. By \refle{rotation numbers} we can
discuss
the existence of reducible points in
terms of the rotation numbers. (Either $\Lambda$ or a specific set of
rotation numbers may provide an obstruction to the existence of reductions.)
The discussion in \refsu{ymhmod} shows that reductions exist if and
only if there exists an isotropy vector $(\epsilon_i)$ such that
\begin{eqnarray*} \sum_{i=1}^n\frac{\epsilon_i(x'_i-x_i) + (x'_i +
x_i)}{\alpha_i}\equiv c_1(\Lambda) \pmod 2. \end{eqnarray*}
A small calculation expresses this in terms of the rotation numbers. Thus we
obtain the following result.
\bpr{} Let $M$ be an orbifold Riemann
surface with negative Euler characteristic. Let $\Lambda$ be a fixed line
$V$-bundle over
$M$ with isotropy $(y_i)$ and $c_1(\Lambda) = b + \sum_{i=1}^n(y_i/\alpha_i)$. Let
$(r_i)$ be a
compatible set of rotation numbers. Then the representation space
${\rm Hom\,}^{-1}_{(r_i)}(\pi_1^V(S(\Lambda)),SL_2({\Bbb C}))/SL_2({\Bbb C})$ contains reducible
points if
and only if there exists an isotropy vector
$(\epsilon_i)$ such that
\begin{eqnarray*}
\sum_{i=1}^n\frac{\epsilon_ir_i}{\alpha_i}\equiv b \pmod 2.
\end{eqnarray*}
When no reducible points exist the complex manifold
${\rm Hom\,}^{-1}_{(r_i)}(\pi_1^V(S(\Lambda)), SL_2({\Bbb C}))/SL_2({\Bbb C})$
\begin{enumerate}
\item admits a complete
hyper-K\"ahler metric and
\item is connected and simply-connected.
\end{enumerate}
\end{proposition}
\bsu{Real Representations}{reprea}
In the previous subsection we discussed $SL_2({\Bbb C})$-representations of central
extensions of the orbifold fundamental group. Here we study the submanifold of
$SL_2({\Bbb R})$-representations. First notice that any irreducible representation
into $SL_2({\Bbb C})$ can fix at most one disk ${\cal H}^2\subset {\cal H}^3$ because
the intersection of two fixed disks would give a fixed line and hence define a
reduction of the representation. Moreover, any representation which does fix a
disk can be conjugated to a real representation and the conjugation action of
$SL_2({\Bbb C})$ then reduces to that of $SL_2({\Bbb R})$.
Now consider the action of complex conjugation on a representation. Recall
that,
via \refpr{representations} and \refco{twisted exists}, irreducible
representations
correspond to stable Higgs $V$-bundles. Note that $\pi_1^V(S(\Lambda))$ and
$\pi_1^V(S(\o\Lambda))$ are isomorphic via the map $h \mapsto h^{-1}$: the
following proposition follows, exactly as in \cite{si90}.
\bpr{} Let $E$ be a complex rank 2 $V$-bundle such that $\Lambda$ has a fixed
Hermitian
metric and compatible Yang-Mills connexion. Let
$\widehat\rho_D:\pi_1^V(S(\Lambda)) \to
SL_2({\Bbb C})$ be an irreducible
representation, sending $h$ to $-I_2$, with corresponding stable Higgs
$V$-bundle structure on $E$, $(E_A,\phi)$. Then the complex conjugate
representation
(thought of as a representation of $\pi_1^V(S(\o\Lambda))$) determines a Higgs
$V$-bundle
structure on $\o E$, isomorphic to $(E_A,-\phi)^*$. \end{proposition}
\bco{} Let $E$ be a complex rank 2 $V$-bundle such that $\Lambda$ has a fixed
Hermitian
metric and
compatible Yang-Mills connexion. Let $\widehat\rho_D$ be an irreducible {\em
real} representation
$\widehat\rho_D:\pi_1^V(S(\Lambda)) \to SL_2({\Bbb R})$, sending $h$ to $-I_2$, with
corresponding Higgs $V$-bundle structure $(E_A,\phi)$. Then there
is an isomorphism of Higgs $V$-bundles $(E_A,\phi)\cong (E_A,-\phi)$. \end{corollary}
Consider the involution on the moduli space of stable Higgs $V$-bundles (with
fixed unitary structure and determinants) defined by $\sigma: (E,\phi) \mapsto
(E,-\phi)$, where now $E$ denotes a holomorphic $V$-bundle and $(E,\phi)$ is a
stable Higgs $V$-bundle. The fixed points of $\sigma$ can be determined much
as
the fixed points of the circle action were in the proof of \refth{Morse}. If
$(E,\phi)$ is itself fixed then $\phi=0$ and $E$ is a stable $V$-bundle.
Suppose now that $\phi\ne 0$. If $(E,\phi)$ is only fixed up to complex
gauge-equivalence then we have an element $g \in {\cal G}^c$ such that
$g(E,\phi) = (E,-\phi)$. Since $g$ fixes $E$ it must fix the Chern connexion
$A$ and since $g$ cannot be a scalar it leads to a reduction of $A$ to a direct
sum of $U(1)$-connexions. Hence we have a holomorphic decomposition $E = L
\oplus L^*\Lambda$, where, without loss of generality, we may suppose that $2c_1(L)
- c_1(\Lambda) \ge 0$. Since $(A,\phi)$ is an irreducible pair, $g$ must have
order
2 in ${\cal G}^c$ and fix $A$. It follows that with respect to this
decomposition (or, if $A$ has stabiliser $SU(2)$, {\em choosing} a
decomposition) we can write
\begin{eqnarray*} g =\pm\left( \begin{array}{ll} {\rm i} & 0\\ 0 &
-{\rm i} \end{array} \right) &\mbox{and}& \phi = \phantom{\pm}\left(
\begin{array}{rr} t & u\\ v & -t \end{array} \right). \end{eqnarray*}
(Since our Higgs $V$-bundle is stable, we must have $v$ non-zero.) Calculating
the conjugation-action of $g$ on $\phi$ we find that $t=0$.
Recall that we chose $L$ with $2c_1(L) - c_1(\Lambda) \ge 0$ but to avoid
semi-stable points
(when $u=0$) we suppose that there is strict inequality. Exactly as in the
proof of
\refth{Morse} we consider the topological possibilities
$L=L_{(m,(\epsilon_i))}$:
we can have any $(m,(\epsilon_i))$ such that $2c_1(L) > c_1(\Lambda)$ and
$c_1(\wo{KL^{-2}\Lambda}) = r \ge 0$. Then the possible holomorphic structures and
the values
of $v$, modulo the ${\Bbb C}^*$ automorphism group, are given by the effective
(integral)
divisors of order $r$ and taking square roots. A difference from
\refth{Morse}
is that $u$ needn't be zero; indeed, $u$ can take any value in
$H^0(KL^2\Lambda^*)$. We
obtain the following result, where $l$ is defined as in \refsu{orbint}.
\bpr{real
manifolds}
Let $M$ be an orbifold Riemann surface of negative Euler characteristic and
suppose that
$E\to M$ admits no reducible Yang-Mills-Higgs\ pairs. Then the fixed points of the
involution
induced on ${\cal M}(E,A_\Lambda)$ by the mapping $(A,\phi) \mapsto (A,-\phi)$ consist
of complex
$(3g-3+n-n_0)$-dimensional submanifolds ${\cal M}_0$ and ${\cal M}_{(m,(\epsilon_i))}$,
for every
integer $m$ and isotropy vector $(\epsilon_i)$ such that
\begin{eqnarray*}
l < 2m + \sum_{i=1}^n \frac{\epsilon_i(x'_i-x_i)}{\alpha_i} \le l + 2g -2 +
\sum_{i=1}^n\frac{\epsilon_i(x_i'-x_i)}{\alpha_i} + n_-.
\end{eqnarray*}
The manifold ${\cal M}_0$ is the moduli space of stable $V$-bundles with fixed
determinants, while ${\cal M}_{(m,(\epsilon_i))}$
is a rank $(2m -l + g -1 +n_+)$ vector-bundle over a
$2^{2g}$-fold covering of $S^r\wo
M$, where $r = l -2m + 2g -2 + n_-$.
\end{proposition}
We interpret this as a result about $PSL_2({\Bbb R})$-representations of
$\pi_1^V(M)$. Again, a representation $\rho_D$ of $\pi_1^V(M)$ into $PSL_2({\Bbb R})$
induces a
central ${\Bbb Z}_2$-extension $\Gamma$ of $\pi_1^V(M)$, as in \refeq{extensor},
which is just $\pi_1^V(S(\Lambda))$ with the added relation $h^2=1$, for some
square-free
$\Lambda$. Consider the points of ${\rm Hom\,}^{-1}(\pi_1^V(S(\Lambda)),SL_2({\Bbb R}))$
covering $\rho_D$. On the level of
representations there are $2^{2g+n_2-1}$ (or $2^{2g}$ if $n_2 = 0$) choices of
sign for
the images of certain generators and these correspond to twisting a stable
Higgs
$V$-bundle by any of the $2^{2g+n_2-1}$ (or $2^{2g}$) holomorphic roots of the
trivial
line $V$-bundle. In particular, if $n_2 \ge 1$ then the topology of the
associated $E$ is
only determined up to twisting by the $2^{n_2-1}$ non-trivial topological roots
(see
\refre{twist again}).
Excluding the topologically non-trivial roots, we have an action of
${\Bbb Z}_2^{2g}$ on the fixed point submanifolds of \refpr{real manifolds} which is
easily
seen to be free if $E$ admits no reducible Yang-Mills-Higgs\ pairs. Moreover, even when
$E$
admits reducibles there will be fixed submanifolds ${\cal M}_{(m,(\epsilon_i))}$
with
\begin{eqnarray*}
l \le 2m + \sum_{i=1}^n \frac{\epsilon_i(x'_i-x_i)}{\alpha_i} \le l + 2g -2 +
\sum_{i=1}^n\frac{\epsilon_i(x_i'-x_i)}{\alpha_i} + n_-,
\end{eqnarray*}
exactly as in \refpr{real manifolds}, and the actions of ${\Bbb Z}_2^{2g}$ on these
will be free provided the first inequality is strict.
The quantity $2m -l + \sum_{i=1}^n \{\epsilon_i(x'_i-x_i)/\alpha_i\} =
2c_1(L_{(m,(\epsilon_i))}) -
c_1(\Lambda)$ is just the Euler class of the flat ${\Bbb R}\P^1$ $V$-bundle
$S(\rho_D) = S(L_{(m,(\epsilon_i))}^2 \Lambda^*)$ associated to the
$PSL_2({\Bbb R})$-representation (this is well-defined as it is invariant under
twisting $E$
by non-trivial topological roots). Note that, just as it is possible to have
topologically
distinct line $V$-bundles with the same Chern class, it is possible to have
topologically
distinct ${\Bbb R}\P^1$ $V$-bundles with the same Euler class---they are
distinguished by their
isotropy. The central ${\Bbb Z}$-extensions of $\pi_1^V(M)$
induced by the universal covering $\widetilde{PSL_2{\Bbb R}} \to PSL_2{\Bbb R}$ are just
the (orbifold) fundamental groups of the flat ${\Bbb R}\P^1$ $V$-bundles
$S(\rho_D)$ (see \cite{jn85}). Using the above discussion and the method of
\refpr{real manifolds}, we obtain the following result (compare \cite{jn85})
and, as a corollary, a Milnor-Wood inequality.
\bpr{psl2r reps}
Let $M$ be an orbifold Riemann surface of negative Euler characteristic. For
$\rho_D$ a $PSL_2({\Bbb R})$-representation of $\pi_1^V(M)$, let
${\rm Hom\,}_{\rho_D}(\pi_1^V(M),PSL_2({\Bbb R}))$ denote
the corresponding connected component. Let $(y_i)$ be the isotropy and $b
+ \sum_{i=1}^n (y_i/\alpha_i)$ the Euler class of the associated flat ${\Bbb R}\P^1$ $V$-bundle
$S(\rho_D)$. Provided $b + \sum_{i=1}^n (y_i/\alpha_i) > 0$,
${\rm Hom\,}_{\rho_D}(\pi_1^V(M),PSL_2({\Bbb R}))/PSL_2({\Bbb R})$ is a smooth complex
$(3g-3+n-n_0)$-dimensional manifold, diffeomorphic to a rank $(g - 1 + b + n
-n_0)$
vector-bundle over $S^{2g-2-b}\wo M$.
\end{proposition}
\bco{milnor-wood}
Let $M$ be an orbifold Riemann surface of negative Euler characteristic. Then
the Euler
class $b + \sum_{i=1}^n (y_i/\alpha_i)$ of any flat $PSL_2({\Bbb R})$ $V$-bundle
satisfies
\begin{eqnarray*}
|b + \sum_{i=1}^n \frac{y_i}{\alpha_i}| \le 2g -2 + n -\sum_{i=1}^n\frac1{\alpha_i}.
\end{eqnarray*}
\end{corollary}
\begin{proof}
In \refpr{psl2r reps} we must have $b \le 2g-2$. The result follows since $y_i
\le
\alpha_i-1$.
\end{proof}
\bsu{Teichm\"uller Space for Orbifold Riemann Surfaces}{reptei}
Assume, as usual, that $M$ is an orbifold Riemann surface of negative Euler
characteristic. For a Fuchsian group such as $\pi_1^V(M)$, Teichm\"uller
space,
denoted ${\cal T}(M)$, is the space of faithful representations onto a discrete
subgroup of $PSL_2{\Bbb R}$ modulo conjugation (see Bers's survey article
\cite{be72}). Our previous results allow us to identify Teichm\"uller space
with a submanifold of the moduli space.
Let ${\cal T}_{-4}(M)$ denote the space of orbifold Riemannian metrics of constant
sectional curvature -4, modulo the action of the group of diffeomorphisms
homotopic to the identity, ${\cal D}_0(M)$. There is a bijection between
${\cal T}_{-4}$ and ${\cal T}$ as each metric of constant negative curvature determines
an
isometry between the universal cover of $M$ and ${\cal H}^2$ and hence a faithful
representation of $\pi_1^V(M)$ onto a discrete subgroup of $PSL_2{\Bbb R}$ and
conversely each such representation realises $M$ as a geometric quotient of
${\cal H}^2$.
The results of \cite{jn85}, as well as those of \cite[\S 11]{hi87}, suggest
that
Teichm\"uller space is the component of the real representation space taking
the
extreme value in the Milnor-Wood inequality, \refco{milnor-wood}. Working with
the
holomorphic description, the results of the
previous subsection show that the extreme is achieved when $E=L\oplus
L^*\Lambda$ with
$L^2\Lambda^*$ having the topology of $K$ and a holomorphic structure such that
$\wo{KL^{-2}\Lambda}$ has sections: in other words we must have $L^2\Lambda^* =
K$
(holomorphically). We suppose then that $E=K \oplus 1$ ($\Lambda^2E$ can be
normalised to be square-free but this is not necessary).
The corresponding Higgs field is just \begin{eqnarray*} \phi = \left( \begin{array}{cc} 0 &
u\\ v & 0\\ \end{array}\right), \end{eqnarray*} where $u\in H^0(K^2)$ and $v\in
{\Bbb C}\setminus
\{0\}$. There is a ${\Bbb C}^*$-group of automorphisms so that we can normalise with
$v=1$.
Exactly as in \cite[theorem 11.2]{hi87}, we can identify Teichm\"uller
space with the choices of $u$ {\rm i.\,e.\ } with $H^0(K^2)$. The two preliminaries which
we need are the strong maximum principle for orbifolds (the proof is entirely
local and generalises immediately; see \cite{jt80}) and the following orbifold
version of a theorem of Sampson \cite{ee69}.
\bpr{Sampson}
Given two orbifold Riemannian metrics of constant
sectional curvature -4 on $M$, $h$ and $h'$, there is a unique element of
${\cal D}_0$ which is a harmonic map between $(M,h)$ and $(M,h')$. \end{proposition}
\begin{proof} This is a reformulation of
\refpr{Donaldson}. The metrics $h$ and $h'$ give two discrete, faithful
representations of
$\pi_1^V(M)$ into $PSL_2{\Bbb R}$, one of which we consider fixed and the other we
denote $\rho'$. The identity map on $M$ lifts to an
orientation-preserving diffeomorphism $g$
of ${\cal H}^2$ which is equivariant with respect to the actions of the two
representations.
Taking this $g$ as an initial section of the $V$-bundle $H_{\rho_D}={\cal H}^2
\times_{\rho'} {\cal
H}^3$ of \refpr{Donaldson} (via the inclusion ${\cal H}^2 \subset {\cal H}^3$) we obtain
a harmonic section
$g'$ homotopic to $g$. This is real and defines a harmonic diffeomorphism
between $(M,h)$
and $(M,h')$. As $g'$ is homotopic to $g$ the resulting harmonic
diffeomorphism
is homotopic to the identity.
Uniqueness follows either by a direct argument or from uniqueness over $\widehat
M$, where
$\widehat M$ is as in \refco{smooth covering}. \end{proof}
We obtain the following theorem, which agrees with classical results due to
Bers and others \cite{be72}.
\bth{ball} Let $M$ be an orbifold Riemann surface of negative Euler
characteristic. Let
${\cal T}(M)$ be the
Teichm\"uller space of the Fuchsian group $\pi_1^V(M)$ and ${\cal T}_{-4}(M)$ the
space of
orbifold Riemannian
metrics on $M$ of constant sectional curvature -4, modulo the action of the
group of
diffeomorphisms homotopic to the identity.
Then ${\cal T}(M)$ and
${\cal T}_{-4}(M)$ are homeomorphic to $H^0(K^2)$, the space of holomorphic
(orbifold)
quadratic differentials on $M$. Hence Teichm\"uller space is homeomorphic to
${\Bbb C}^{3g-3+n}$. \eth
We conclude by considering orbifold Riemannian metrics in greater detail.
Considered as a metric on the underlying Riemann surface, $\wo
M$, an orbifold Riemannian metric $h$ on $M$ has `conical singularities' at
the marked points. To see this recall that locally
$M$ is like $D^2/{\Bbb Z}_\alpha$ with
$h$ a ${\Bbb Z}_\alpha$-equivariant metric on $D^2$. If $c_h(r)$ denotes the
circumference of a
geodesic circle of radius $r$ about the origin in $D^2$ (with respect to $h$),
then
$\lim_{r\to 0}(c_h(r)/r)=2\pi$. Since this circle covers a circle in $D^2/{\Bbb Z}_\alpha$ exactly
$\alpha$ times,
the metric on the quotient has a \de{conical singularity} at the origin, with
\de{cone
angle} $2\pi/\alpha$.
Consider a Riemannian metric on $M$ which, near a marked point $D^2/{\Bbb Z}_\alpha$, is
compatible with the complex structure and so has the form $h(z) dz \otimes d\o
z$. If we
set $w=z^\alpha$, then $w$ is a local holomorphic coordinate on $\wo M$. We find
that
the resulting `Riemannian metric' on $\wo M$ is given by \begin{eqnarray*}
\frac{h(w^{1/\alpha})}{\alpha^2|w|^{2(1-1/\alpha)}} dw \otimes d\o w. \end{eqnarray*} Notice that
$h(w^{1/\alpha})$
is well-defined by the ${\Bbb Z}_\alpha$-equivariance of $h$. This `Riemannian metric'
has a singularity like
$|w|^{-2(1-1/\alpha)}$ at the origin and is compatible with the complex structure
away from
there. Hence we obtain a compatible \lq singular Riemannian metric' on $\wo
M$: the
induced metric on $\wo M$ is continuous and induces the standard topology.
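The transformation law above can be tested numerically: the length of a curve avoiding the origin, measured in $h\, dz \otimes d\o z$ on $D^2$, must equal the length of its image under $w=z^\alpha$ measured in the displayed `Riemannian metric' on $\wo M$. The Python sketch below is ours; the equivariant factor $h$ and the test curve are arbitrary choices made for illustration:

```python
import cmath, math

ALPHA = 3   # order of isotropy at the marked point

def h(z):
    """A sample Z_ALPHA-equivariant conformal factor (our arbitrary choice):
    it depends only on |z| and on z**ALPHA, so h(exp(2*pi*i/ALPHA)*z) = h(z)."""
    return 1.0 + abs(z) ** 2 + 0.5 * (z ** ALPHA).real

def length_z(zs):
    """Length of the polygonal curve zs in the metric h(z) dz dzbar on D^2."""
    return sum(math.sqrt(h(0.5 * (a + b))) * abs(b - a)
               for a, b in zip(zs, zs[1:]))

def length_w(zs):
    """Length of the image curve w = z**ALPHA in the transformed metric
    h(w**(1/ALPHA)) / (ALPHA**2 * |w|**(2*(1 - 1/ALPHA))) dw dwbar."""
    ws = [z ** ALPHA for z in zs]
    total = 0.0
    for (za, zb), (wa, wb) in zip(zip(zs, zs[1:]), zip(ws, ws[1:])):
        zm, wm = 0.5 * (za + zb), 0.5 * (wa + wb)
        # h(w**(1/ALPHA)) is evaluated through the z chart, which is legitimate
        # precisely because h is Z_ALPHA-equivariant.
        density = math.sqrt(h(zm)) / (ALPHA * abs(wm) ** (1.0 - 1.0 / ALPHA))
        total += density * abs(wb - wa)
    return total

# a closed curve in D^2 staying away from the singular point z = 0
zs = [0.3 + 0.4j + 0.2 * cmath.exp(2j * math.pi * t / 800) for t in range(801)]
print(length_z(zs), length_w(zs))
```

The two lengths agree up to discretization error, as they must, since $dw = \alpha z^{\alpha-1}\,dz$ exactly cancels the factor $\alpha|w|^{1-1/\alpha}$ in the density.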
How does such a singular Riemannian metric compare with a
(smooth) Riemannian metric on $\wo M$? Suppose that $g$ is a fixed
Riemannian metric on $\wo M$,
compatible with the complex structure. Since $\wo M$ is compact any two
Riemannian
metrics give metrics on $\wo M$ which are
mutually bounded and so will be equivalent for our purposes---we may as well
use the Euclidean
metric in any local chart. Now, $h$ and $g$ will give mutually bounded metrics
on any compact
subset of $M\setminus \{ p_1,\dots,p_n \}$. However, for small Euclidean
distance $r$ from $p$, the singular metric gives distances behaving like $r^{1/\alpha}$. These are
exactly the types of
singularities of metrics considered by McOwen and Hulin-Troyanov in
\cite{mo88,ht92}: they
consider metrics which satisfy $h/g = O(r_g^{2k})$ as $r_g(z) = d_g(0,z) \to
0$,
for some $k\in (-1,\infty)$. As McOwen points out, our \lq singular Riemannian
metrics' have exactly this form with $k =-1 + 1/\alpha$. Interpreting
\refco{negative curvature} in the light of this discussion we obtain the
following result. (Our result is weaker than McOwen's since we consider only
$k
=-1 + 1/\alpha$ but the case of general $k\in (-1,\infty)$ can be obtained by a
limiting argument as in \cite{ns93}.)
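As a numerical illustration of the $r^{1/\alpha}$ behaviour (ours, with an arbitrary mesh grading), take the model density $|w|^{-2(1-1/\alpha)}$: the geodesic distance from the singular point to the circle $|w|=\rho$ is $\alpha\rho^{1/\alpha}$, the circumference of that circle is $2\pi\rho^{1/\alpha}$, and their ratio recovers the cone angle $2\pi/\alpha$:

```python
import math

def geodesic_radius(alpha, rho, q=0.99, m=20_000):
    """Geodesic distance from the singular point to |w| = rho for the model
    density |w|**(-2*(1 - 1/alpha)), i.e. the integral of s**-(1 - 1/alpha)
    over 0 < s < rho, computed by a trapezoidal rule on a geometrically
    graded mesh (the grading tames the integrable singularity at s = 0).
    Exact value: alpha * rho**(1/alpha)."""
    k = 1.0 - 1.0 / alpha
    total, b = 0.0, rho
    for _ in range(m):
        a = q * b
        total += 0.5 * (a ** -k + b ** -k) * (b - a)
        b = a
    return total

def cone_angle(alpha, rho=1e-3):
    """Circumference of |w| = rho (which is 2*pi*rho**(1/alpha)) divided by
    its geodesic radius; the limit as rho -> 0 is the cone angle."""
    return 2.0 * math.pi * rho ** (1.0 / alpha) / geodesic_radius(alpha, rho)

for a in (2, 3, 5):
    print(a, cone_angle(a), 2 * math.pi / a)
```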
\bthn{McOwen, Hulin-Troyanov}{conical} Let $\wo M$ be a Riemann surface with
marked points $\{p_1,\dots,p_n\}$ with orders of isotropy
$\{\alpha_1,\dots,\alpha_n\}$. If the
genus $g$ and orders of
isotropy satisfy \begin{eqnarray*} 2-2g-n+\sum_{i=1}^n 1/\alpha_i < 0 \end{eqnarray*} then $\wo
M\setminus
\{p_1,\dots,p_n\}$ admits a unique compatible Riemannian metric $h$ of constant
sectional curvature
-4 such that, for $i=1,\dots,n$, $h$ has a conical singularity at $p_i$ with
cone angle
$2\pi/\alpha_i$. \eth
\section{Introduction}
\label{s1} The (modified) nonlinear Boltzmann equation for the one
particle distribution function provides an accurate description of
transport phenomena in a low density gas of inelastic hard spheres or disks
\cite{GyS95,Du00,Go03,PyB03,ByP04}. These particles
are often used to model granular fluids \cite{JNyB96}, especially in
the rapid flow regime \cite{Ca90}. The Boltzmann equation
does not provide any direct information about correlations and
fluctuations in the gas, other than the particle velocity moments. Nevertheless, methods used in the
derivation of the Boltzmann equation have been extended to obtain
kinetic equations for the equal and different time correlations, in
the same low density approximation. The general idea is that in
order to obtain these equations the needed approximations are the
same as those used to derive the Boltzmann equation itself.
One of the earliest and physically more transparent methods to study
fluctuations is that of Langevin equations. Almost 40 years ago, in
a seminal paper, Bixon and Zwanzig \cite{ByZ69} showed how a
Boltzmann-Langevin equation could be constructed by generalizing the
reasonings leading to the Boltzmann equation for molecular gases.
The latter describes the behavior of the average value of the
one-particle distribution function, while the former incorporates
the effects of the fluctuations. As the authors indicated themselves
in the paper, the derivation was based on physical intuition and
analogy. A more systematic derivation of the same result, starting
from first principles, was given in ref. \cite{BDyD89}.
A second approach to the study of correlations in dilute gases makes
use of functional analysis. Its more general result is a kinetic
equation for a generating functional at low density, from which all
multi-point correlations can be obtained by functional
differentiation \cite{MyD83}. A closely related general scheme for
the study of correlations is the hierarchical method
\cite{EyC81a,EyC81}. The starting point are hierarchies of coupled
equations for the time distribution functions describing the
fluctuations and correlations. Then, the hierarchies are closed by
using the same kind of approximations as needed to derive the
kinetic equation, i.e. the Boltzmann equation in the case of dilute
gases. This method has been recently extended to describe
fluctuations and correlations of dilute inelastic gases in their
simplest state, the homogeneous cooling state (HCS)
\cite{BGMyR04}. As an application, the fluctuations of the total
energy were studied and a good agreement between theory and
simulation results was found \cite{BGMyR04,Vetal06}.
One of the aims of this paper is to translate the above formalism in
terms of kinetic equations for the correlation functions into a
Langevin equation formulation, i.e. to extend the fluctuating
Boltzmann equation to the case of inelastic hard spheres or disks. The
relationship between kinetic equations and the fluctuating Boltzmann
equation has been analyzed in detail in molecular gases
\cite{EyC81,Tr84}. One advantage of the Langevin formulation is that
it is closer to the fluctuating hydrodynamic equations. Actually,
the fluctuating Boltzmann equation for molecular systems has been
shown \cite{ByZ69,Hi70,FyU70} to lead to the same Langevin equations
for the hydrodynamic fields as obtained by Landau and Lifshitz
\cite{LyL66} using thermodynamic fluctuation theory. The noise terms
in these equations are assumed to be white with second moments
determined by the Navier-Stokes transport coefficients of the fluid.
Their expressions are known as fluctuation-dissipation relations of
the second kind \cite{KTyH85}.
The derivation of fluctuating hydrodynamic equations from the
fluctuating Boltzmann equation for inelastic hard particles, will be
also addressed here. Attention will be focussed on a particular
state, the HCS, and on a specific hydrodynamic field, the
transverse component of the velocity. The main conclusion will be
that the fluctuation-dissipation relation for elastic gases can not
be directly extrapolated to inelastic ones, but it needs to be
significantly modified. The second moment of the noise is not determined by
the Navier-Stokes shear viscosity. Moreover, the noise can not be
assumed to be white. These theoretical predictions are in
qualitative and quantitative agreement with molecular dynamics
simulation results.
The consideration of the HCS does not imply by itself that the
results obtained here are not relevant for other, experimentally more accessible, states. The HCS plays for inelastic gases a role similar to that of the equilibrium state for
molecular gases. In the case of molecular systems, the expressions
of the transport coefficients obtained by linearizing around
equilibrium are the same as those appearing in the nonlinear
Navier-Stokes equations as predicted by the Chapman-Enskog method
and successfully used in many far from equilibrium problems
\cite{RyL77}. Also, the fluctuation-dissipation relations derived
for near-local-equilibrium states in the original Landau and Lifshitz theory
have proven to be accurate for many other hydrodynamic states
\cite{Ke87}. For dilute gases composed of inelastic hard particles,
the equivalence between the transport coefficients obtained by
linear perturbations of the HCS and by applying the Chapman-Enskog
procedure has also been established \cite{ByD05}. Something similar
might be expected for the fluctuations and correlations.
In the system being considered here, the particles move freely and
independently between consecutive collisions. More specifically, they are not
coupled to any external energy source or thermal bath,
contrary to the driven granular gas models. For
these models, the linear response to an external perturbation
\cite{PByL02} as well as the validity of the Einstein relation
\cite{PByV07} have been investigated by numerical simulations, and
some empirical models have been proposed. A direct relation between the free model considered here and the above driven models is not evident.
The plan of the paper is as follows. In Sec. \ref{s2}, the kinetic
equations for the one-time and two-time correlation functions of a
dilute gas in the HCS derived in ref.\ \cite{BGMyR04} are shortly
reviewed. These equations are translated into an equivalent
Boltzmann-Langevin equation for the one particle distribution
function in Sec.\ \ref{s3}. When written in the appropriate
variables, this equation is the linear Boltzmann equation to which a
fluctuating force term is added, similarly to what happens in
molecular elastic gases. An expression for the second moment of the
fluctuating force in terms of the collisional Boltzmann kernel is
derived. In Sec.\ \ref{s4}, the fluctuating hydrodynamic fields are
defined, and balance equations for them are obtained from the
Boltzmann-Langevin equation. They involve formal expressions for the
fluctuating pressure tensor, the fluctuating heat flux, and the
fluctuating cooling rate. In addition, an intrinsically inelastic
fluctuating force shows up in the equation for the energy.
To get a closed description for the hydrodynamic fluctuations,
expressions for the heat flux, the pressure tensor, and the cooling
rate in terms of the fluctuating hydrodynamic fields are needed.
This can be accomplished by means of the Chapman-Enskog procedure. Here
only the case of the transverse
component of the velocity field will be considered. As a
consequence, only the expression for the non-diagonal elements of the
pressure tensor is required. This is computed in Sec. \ref{s5}. The
final result is a Langevin equation, that is the linear macroscopic
equation for the transverse velocity field plus a fluctuating force
term. Therefore, the structure is similar to what one could expect
by extrapolating from the corresponding equation for molecular systems \cite{vNEByO97}.
Nevertheless, the noise term is not white and its second moment is
not given by the usual fluctuation-dissipation relation. It is
verified that the obtained theoretical predictions are in good
agreement with molecular dynamics simulation results. Section
\ref{s7} contains some general comments and conclusions. Finally,
the appendixes provide some details of the calculations needed to
derive the results presented in the bulk of the paper.
\section{Kinetic equations for the homogeneous cooling state}
\label{s2}
The system considered is a dilute gas of $N$ smooth inelastic hard
spheres ($d=3$) or disks ($d=2$) of mass $m$ and diameter $\sigma$.
The position and velocity of the {\em i}th particle at time $t$ will
be denoted by ${\bm R}_{i}(t)$ and ${\bm V}_{i}(t)$, respectively.
The effect of a collision between particles $i$ and $j$ is to
instantaneously modify their velocities according to the rule
\begin{eqnarray}
\label{2.1} {\bm V}_{i} & \rightarrow &{\bm V}_{i}^{\prime} ={\bm
V}_{i} -\frac{1+\alpha}{2} \left( \widehat{\bm \sigma} \cdot {\bm
V}_{ij} \right) \widehat{\bm \sigma}, \nonumber
\\
{\bm V}_{j} & \rightarrow & {\bm V}_{j}^{\prime} = {\bm V}_{j}
+\frac{1+\alpha}{2} \left( \widehat{\bm \sigma} \cdot {\bm V}_{ij}
\right) \widehat{\bm \sigma},
\end{eqnarray}
where ${\bm V}_{ij}= {\bm V}_{i}-{\bm V}_{j}$ is the relative
velocity, $\widehat{\bm \sigma}$ is the unit vector pointing from
the center of particle $j$ to the center of particle $i$ at contact,
and $\alpha$ is the coefficient of normal restitution. It is defined
in the interval $ 0 < \alpha \leq 1$ and it will be considered here as
constant, independent of the velocities of the particles involved in
the collision. A more realistic modeling of granular gases would require considering a velocity-dependent restitution coefficient
\cite{ByP04}.
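Three elementary consequences of Eq.\ (\ref{2.1}) are worth recording: the total momentum is conserved, the normal relative velocity is reversed and scaled, $\widehat{\bm \sigma} \cdot {\bm V}^{\prime}_{12} = -\alpha\, \widehat{\bm \sigma} \cdot {\bm V}_{12}$, and the kinetic energy decreases by $m(1-\alpha^{2})(\widehat{\bm \sigma} \cdot {\bm V}_{12})^{2}/4$ in each collision. The following Python sketch (ours; $m=1$ and the random velocities are arbitrary test values) checks all three:

```python
import math, random

def collide(v1, v2, n, alpha):
    """Post-collisional velocities of Eq. (2.1); n is the unit vector sigma_hat."""
    g = sum(ni * (a - b) for ni, a, b in zip(n, v1, v2))   # sigma_hat . V_12
    c = 0.5 * (1.0 + alpha) * g
    return ([a - c * ni for a, ni in zip(v1, n)],
            [b + c * ni for b, ni in zip(v2, n)])

rng = random.Random(0)
v1 = [rng.gauss(0.0, 1.0) for _ in range(3)]
v2 = [rng.gauss(0.0, 1.0) for _ in range(3)]
n = [rng.gauss(0.0, 1.0) for _ in range(3)]
norm = math.sqrt(sum(x * x for x in n))
n = [x / norm for x in n]          # random unit vector sigma_hat

alpha = 0.8
v1p, v2p = collide(v1, v2, n, alpha)
g = sum(ni * (a - b) for ni, a, b in zip(n, v1, v2))
gp = sum(ni * (a - b) for ni, a, b in zip(n, v1p, v2p))
print(gp, -alpha * g)   # normal relative velocity reversed and scaled by alpha
```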
Given a trajectory of the system, one-point and two-point
microscopic densities in phase space at time $t$ are defined by
\begin{equation}
\label{2.2} F_{1}(x_{1},t)=\sum_{j=1}^{N} \delta \left[ x_{1}-
X_{j}(t) \right]
\end{equation}
and
\begin{equation}
\label{2.3} F_{2}(x_{1},x_{2},t)= \sum^{N}_{i} \sum^{N}_{j \neq i}
\delta \left[ x_{1}-X_{i}(t) \right] \delta \left[ x_{2} -X_{j}(t)
\right],
\end{equation}
respectively. Here $X_{i}(t) \equiv \left\{ {\bm R}_{i}(t),{\bm
V}_{i}(t) \right\} $, while the $x_{i} \equiv \left\{ {\bm r}_{i},
{\bm v}_{i} \right\}$ are field variables referring to the
one-particle phase space ($\mu$ space). The density $F_{1}(x_{1},t)$
obeys the equation \cite{EyC81a,BGMyR04}
\begin{equation}
\label{2.4} \left[ \frac{\partial}{\partial t} + {\bm v}_{1} \cdot \frac{\partial}{\partial {\bm r}_{1}}
\right] F_{1}(x_{1},t)= \int dx_{2} \overline{T}(x_{1},x_{2})
F_{2}(x_{1},x_{2},t),
\end{equation}
with
\begin{equation}
\label{2.6} \overline{T}(x_{i},x_{j}) = \sigma^{d-1} \int d
\widehat{\bm \sigma}\, \Theta ({\bm v}_{ij} \cdot \widehat{\bm
\sigma}) |{\bm v}_{ij} \cdot \widehat{\bm \sigma}| \left[
\alpha^{-2} \delta ({\bm r}_{ij}-{\bm \sigma}) b_{\bm
\sigma}^{-1}({\bm v}_{i},{\bm v}_{j})-\delta ({\bm r}_{ij}+{\bm
\sigma}) \right],
\end{equation}
where $d \widehat{\bm \sigma}$ is the solid angle element for
$\widehat{\bm \sigma} \equiv {\bm \sigma}/\sigma$, ${\bm r}_{12}
\equiv {\bm r}_{1}- {\bm r}_{2}$, $\Theta$ is the Heaviside step
function, and $b_{\bm \sigma}^{-1}({\bm v}_{1},{\bm v}_{2})$ is an
operator replacing all the functions of ${\bm v}_{1}$ and ${\bm
v}_{2}$ to its right by the same functions of the precollisional
values ${\bm v}^{*}_{1}$ and ${\bm v}^{*}_{2}$ given by
\begin{eqnarray}
\label{2.7} {\bm v}^{*}_{1} \equiv b_{\bm \sigma}^{-1} {\bm v}_{1}=
{\bm v}_{1}-\frac{1+\alpha}{2 \alpha} ( \widehat{\bm \sigma} \cdot
{\bm v}_{12} ) \widehat{\bm \sigma},
\nonumber \\
{\bm v}^{*}_{2} \equiv b_{\bm \sigma}^{-1} {\bm v}_{2}= {\bm
v}_{2}+\frac{1+\alpha}{2 \alpha} ( \widehat{\bm \sigma} \cdot {\bm
v}_{12} ) \widehat{\bm \sigma}.
\end{eqnarray}
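That Eq.\ (\ref{2.7}) really inverts the collision rule (\ref{2.1}) is easy to confirm numerically: applying the direct rule to $({\bm v}^{*}_{1},{\bm v}^{*}_{2})$ must return $({\bm v}_{1},{\bm v}_{2})$, while $\widehat{\bm \sigma} \cdot {\bm v}^{*}_{12} = -\alpha^{-1}\, \widehat{\bm \sigma} \cdot {\bm v}_{12}$. The sketch below is ours; the velocities and the value of $\alpha$ are arbitrary test data:

```python
import math, random

def b_direct(v1, v2, n, alpha):
    """Collision rule of Eq. (2.1)."""
    g = sum(ni * (a - b) for ni, a, b in zip(n, v1, v2))
    c = 0.5 * (1.0 + alpha) * g
    return ([a - c * ni for a, ni in zip(v1, n)],
            [b + c * ni for b, ni in zip(v2, n)])

def b_inverse(v1, v2, n, alpha):
    """Restituting operator of Eq. (2.7): precollisional velocities."""
    g = sum(ni * (a - b) for ni, a, b in zip(n, v1, v2))
    c = 0.5 * (1.0 + alpha) / alpha * g
    return ([a - c * ni for a, ni in zip(v1, n)],
            [b + c * ni for b, ni in zip(v2, n)])

rng = random.Random(2)
v1 = [rng.gauss(0.0, 1.0) for _ in range(3)]
v2 = [rng.gauss(0.0, 1.0) for _ in range(3)]
n = [rng.gauss(0.0, 1.0) for _ in range(3)]
norm = math.sqrt(sum(x * x for x in n))
n = [x / norm for x in n]

alpha = 0.7
w1, w2 = b_inverse(v1, v2, n, alpha)   # (v1*, v2*)
r1, r2 = b_direct(w1, w2, n, alpha)    # colliding them recovers (v1, v2)
print(max(abs(a - b) for a, b in zip(r1 + r2, v1 + v2)))
```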
It is seen that Eq. (\ref{2.4}) for $F_{1}$ involves the two
particle density $F_{2}$. Actually, it is the first equation of an infinite hierarchy \cite{BGMyR04}.
The averages of $F_{1}(x_{1},t)$ and $F_{2}(x_{1},x_{2},t)$ over the
initial probability distribution of the system $\rho(\Gamma,0)$,
$\Gamma \equiv \left\{ X_{1}, \ldots, X_{N} \right\}$, are the usual
one-particle and two-particle distribution functions,
\begin{equation}
\label{2.8} f_{1}(x_{1},t) = \langle F_{1}(x_{1},t) \rangle , \quad
f_{2}(x_{1},x_{2},t) = \langle F_{2} (x_{1},x_{2},t) \rangle,
\end{equation}
where the notation
\begin{equation}
\label{2.9} \langle G\rangle \equiv \int d \Gamma\, G(\Gamma)
\rho(\Gamma,0)
\end{equation}
has been employed. Two-time reduced distribution functions can also be defined from the microscopic densities and the initial probability
distribution. The simplest one is the two-particle two-time
distribution function,
\begin{equation}
\label{2.10} f_{1,1}(x_{1},t ; x_{1}^{\prime},t^{\prime})= \langle
F_{1}(x_{1},t)F_{1}(x_{1}^{\prime},t^{\prime}) \rangle.
\end{equation}
From the definitions in Eqs. (\ref{2.8}) and (\ref{2.10}) it follows
that
\begin{equation}
\label{2.11} f_{1,1}(x_{1},t;x^{\prime}_{1},t) =\delta
(x_{1}-x^{\prime}_{1})f_{1}(x_{1},t)+f_{2}(x_{1},x^{\prime}_{1},t).
\end{equation}
It is convenient to introduce one-time and two-time correlation
functions by
\begin{equation}
\label{2.12} g_{2}(x_{1},x_{2},t) \equiv
f_{2}(x_{1},x_{2},t)-f_{1}(x_{1},t) f_{1}(x_{2},t),
\end{equation}
and
\begin{equation}
\label{2.13} h_{1,1}(x_{1},t;x^{\prime}_{1},t^{\prime}) \equiv
f_{1,1}(x_{1},t;x^{\prime}_{1},t^{\prime})
-f_{1}(x_{1},t)f_{1}(x_{1}^{\prime};t^{\prime}),
\end{equation}
respectively. Equation (\ref{2.11}) translates into
\begin{equation}
\label{2.14} h_{1,1}(x_{1},t;x^{\prime}_{1},t)
= \delta (x_{1}-x^{\prime}_{1}) f_{1}(x_{1},t)+ g_{2}(x_{1},x^{\prime}_{1},t).
\end{equation}
In the low density limit, a closed set of kinetic equations for
$f_{1}$, $g_{2}$, and $h_{1,1}$ can be derived \cite{BGMyR04} by
extending the methods developed for molecular gases \cite{EyC81}.
They can be used to analyze the average properties as well as
correlations and fluctuations in arbitrary states of a dilute
granular gas. Here attention will be restricted to a particular
state of a freely evolving granular gas, the so-called homogeneous
cooling state (HCS) \cite{Ha83}. Macroscopically, it is
characterized by a uniform number density of particles $n$, a
vanishing velocity field, and a uniform time-dependent temperature
$T(t)$. It is further defined by the one-particle distribution
function having the scaled form \cite{GyS95}
\begin{equation}
\label{2.15} f({\bm v},t)= n v_{0}^{-d}(t) \chi(c),
\end{equation}
where
\begin{equation}
\label{2.16} v_{0}(t) \equiv \left[\frac{2 T(t)}{m} \right]^{1/2}
\end{equation}
is a thermal velocity and $\chi(c)$ is an isotropic function of the
scaled velocity ${\bm c} \equiv {\bm v}/v_{0}(t)$. The distribution
$\chi (c)$ and the granular temperature $T(t)$ are specified by the
pair of coupled equations
\begin{equation}
\label{2.17}
\frac{\partial T}{\partial s}=-\zeta_{0}
T(s),
\end{equation}
\begin{equation}
\label{2.18} \frac{\zeta_{0}}{2} \frac{\partial}{\partial {\bm c}}
\cdot \left( {\bm c} \chi \right) = J_{c}[{\bm c}|\chi].
\end{equation}
In the above expressions,
\begin{equation}
\label{2.19} \zeta_{0}=\frac{(1-\alpha^{2})\pi^{\frac{d-1}{2}}}{2\,
\Gamma \left( \frac{d+3}{2} \right)d} \int d{\bm c}_{1} \int d{\bm
c}_{2}\, c_{12}^{3}\chi({c}_{1}) \chi({c}_{2})
\end{equation}
is the dimensionless cooling rate in the time scale $s$ defined by
\begin{equation}
\label{2.20} s \equiv \int_{0}^{t} dt_{1}
\frac{v_{0}(t_{1})}{\lambda}\, ,
\end{equation}
with $\lambda \equiv (n \sigma^{d-1})^{-1}$, and $J_{c}[{\bm c}|\chi
]$ is the inelastic Boltzmann collision term. Its explicit form is
\begin{equation}
\label{2.21} J_{c}[{\bm c}|\chi]= \int d{\bm c}_{1}\,
\overline{T}_{0}({\bm c},{\bm c}_{1}) \chi({c}) \chi({c}_{1}),
\end{equation}
\begin{equation}
\label{2.22} \overline{T}_{0}({\bm c},{\bm c}_{1})= \int d
\widehat{\bm \sigma}\, \Theta [({\bm c}-{\bm c}_{1}) \cdot
\widehat{\bm \sigma}] ({\bm c}-{\bm c}_{1}) \cdot \widehat{\bm
\sigma} \left[ \alpha^{-2} b_{\bm \sigma}^{-1}({\bm c},{\bm
c}_{1}) -1 \right].
\end{equation}
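The velocity integral in Eq.\ (\ref{2.19}) can be checked by direct sampling. In the Maxwellian approximation $\chi(c)=\pi^{-d/2}e^{-c^{2}}$ (our simplifying assumption here, anticipating the Sonine discussion below), each component of ${\bm c}_{12}$ is a standard normal variable for $d=3$, and $\langle c_{12}^{3}\rangle = 2^{3/2}\Gamma(3)/\Gamma(3/2)$. The Python sketch below (ours; sample size and seed are arbitrary) compares a Monte Carlo estimate of $\zeta_{0}$ with this closed form:

```python
import math, random

def zeta0_mc(alpha, n_samples=200_000, seed=1):
    """Monte Carlo estimate of Eq. (2.19) for d = 3 in the Maxwellian
    approximation chi(c) = pi**(-3/2) * exp(-c**2).  Each component of c_i
    then has variance 1/2, so each component of c_12 is standard normal."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        acc += (x * x + y * y + z * z) ** 1.5   # c_12**3
    d = 3
    pref = (1 - alpha ** 2) * math.pi ** ((d - 1) / 2) \
        / (2 * math.gamma((d + 3) / 2) * d)
    return pref * acc / n_samples

def zeta0_exact(alpha, d=3):
    """Same quantity via the exact Gaussian moment <c_12**3>."""
    m3 = 2 ** 1.5 * math.gamma(3.0) / math.gamma(1.5)
    pref = (1 - alpha ** 2) * math.pi ** ((d - 1) / 2) \
        / (2 * math.gamma((d + 3) / 2) * d)
    return pref * m3

print(zeta0_mc(0.9), zeta0_exact(0.9))
```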
The variable $s$ defined in Eq.\ (\ref{2.20}) is proportional to the accumulated number
of collisions per particle. For thermal velocities, i.e.\ values of $c$ of the
order of unity, a good approximation to the solution of Eqs.\ (\ref{2.17})
and (\ref{2.18}) is provided by the
first Sonine approximation, in which \cite{GyS95,vNyE98}
\begin{equation}
\label{2a.1} \chi(c)= \frac{e^{-c^{2}}}{\pi^{d/2}}\, \left[ 1
+a_{2}(\alpha) S^{(2)} (c^{2}) \right]
\end{equation}
with
\begin{equation}
\label{2a.2}
S^{(2)}(c^{2})= \frac{c^{4}}{4}-\frac{d+2}{2}\, c^{2} +\frac{d(d+2)}{8}
\end{equation}
and
\begin{equation}
\label{2a.3}
a_{2}(\alpha)= \frac{16(1-\alpha)(1-2 \alpha^{2})}{9+24d+(8d-41)\alpha+30 \alpha^{2}
-30 \alpha^{3}}\,.
\end{equation}
In the same approximation
\begin{equation}
\label{2a.4}
\zeta_{0}= \frac{ \sqrt{2} \pi^{(d-1)/2} (1-\alpha^{2})}{\Gamma \left(d/2 \right) d} \left[ 1+ \frac{3 a_{2}(\alpha)}{16} \right].
\end{equation}
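The first Sonine expressions are straightforward to evaluate. The Python sketch below (ours) tabulates $a_{2}(\alpha)$ and $\zeta_{0}(\alpha)$ for $d=3$, and checks by a finite difference that $T(s)=T(0)e^{-\zeta_{0}s}$ solves Eq.\ (\ref{2.17}):

```python
import math

def a2(alpha, d):
    """Sonine coefficient of Eq. (2a.3)."""
    num = 16.0 * (1.0 - alpha) * (1.0 - 2.0 * alpha ** 2)
    den = (9.0 + 24.0 * d + (8.0 * d - 41.0) * alpha
           + 30.0 * alpha ** 2 - 30.0 * alpha ** 3)
    return num / den

def zeta0(alpha, d):
    """Dimensionless cooling rate in the first Sonine approximation, Eq. (2a.4)."""
    pref = math.sqrt(2.0) * math.pi ** ((d - 1) / 2) * (1.0 - alpha ** 2)
    return pref / (math.gamma(d / 2) * d) * (1.0 + 3.0 * a2(alpha, d) / 16.0)

def T_of_s(T0, alpha, d, s):
    """Solution of Eq. (2.17): exponential cooling in the collision-number scale s."""
    return T0 * math.exp(-zeta0(alpha, d) * s)

for al in (0.6, 0.8, 0.9, 1.0):
    print(al, a2(al, 3), zeta0(al, 3))
```

Both $a_{2}$ and $\zeta_{0}$ vanish in the elastic limit $\alpha \to 1$. Through the change of time scale (\ref{2.20}), the exponential decay in $s$ corresponds to Haff's algebraic cooling law $T(t)=T(0)(1+t/t_{0})^{-2}$ in the original time variable.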
A numerically exact solution of Eqs.\ (\ref{2.17}) and (\ref{2.18}) has been recently reported in \cite{NBSyG07}. The two-particle one-time correlation function of the HCS is also assumed to have a scaled form \cite{BGMyR04}
\begin{equation}
\label{2.23} g_{2}({\bm r}_{12},{\bm v}_{1},{\bm v}_{2},t) = n
\lambda^{-d} v_{0}^{-2d}(t) \widetilde{g} ({\bm l}_{12},{\bm
c}_{1},{\bm c}_{2} ),
\end{equation}
where the scaled length scale ${\bm l} \equiv {\bm r} / \lambda $
has been introduced. The dimensionless correlation $\widetilde{g}$
does not depend on $s$ and obeys the equation
\begin{equation}
\label{2.24}
\left[ {\bm c}_{12} \cdot \frac{\partial}{\partial
{\bm l}_{12}} - \Lambda ({\bm c}_{1})-\Lambda ({\bm c}_{2}) \right]
\tilde{g}({\bm l}_{12},{\bm c}_{1},{\bm c}_{2}) = \delta ({\bm
l}_{12}) \overline{T}_{0}({\bm c}_{1},{\bm c}_{2}) \chi( {c}_{1})
\chi({c}_{2}),
\end{equation}
where $\Lambda({\bm c}_{i})$ is the linearized Boltzmann collision
operator \cite{BDyR03},
\begin{equation}
\label{2.25} \Lambda({\bm c}_{i}) \equiv \int d {\bm c}_{3}\,
\overline{T}_{0}({\bm c}_{i},{\bm c}_{3}) (1+P_{i3}) \chi
({c}_{3})-\frac{\zeta_{0}}{2} \frac{\partial}{\partial {\bm c}_{i}}
\cdot {\bm c}_{i}.
\end{equation}
The operator $P_{ij}$ interchanges the labels of particles $i$ and
$j$ of the quantities to its right. For the two-particle two-time
correlation function the scaling reads \cite{BGMyR04}
\begin{equation}
\label{2.26} h_{1,1}(x_{1},t;x_{1}^{\prime},t^{\prime}) = n
\lambda^{-d} v_{0}^{-d}(t) v_{0}^{-d}(t^{\prime}) \widetilde{h}
({\bm l}_{1}-{\bm l}^{\prime}_{1}, {\bm c}_{1},s- s^{\prime}; {\bm
c}^{\prime}_{1})
\end{equation}
and the kinetic equation is
\begin{equation}
\label{2.27} \left[ \frac{\partial}{\partial s}+{\bm c}_{1} \cdot
\frac{\partial}{\partial {\bm l}_{1}} -\Lambda({\bm c}_{1}) \right]
\tilde{h}({\bm l}_{1}-{\bm l}^{\prime}_{1},{\bm c}_{1},s-s^{\prime};
{\bm c}_{1}^{\prime})=0,
\end{equation}
valid for $s>s^{\prime}>0$. The initial condition for this equation
is
\begin{eqnarray}
\label{2.28} \widetilde{h}({\bm l}_{1}-{\bm l}^{\prime}_{1},{\bm
c}_{1},0; {\bm c}_{1}^{\prime}) & \equiv & \tilde{h}_{1,1}({\bm
l}_{1}-{\bm l}_{1}^{\prime},{\bm c}_{1};{\bm c}_{1}^{\prime}) \nonumber \\
& = & \tilde{g}({\bm l}_{1}-{\bm l}^{\prime}_{1},{\bm c}_{1}, {\bm
c}^{\prime}_{1}) +\delta ({\bm c}_{1}-{\bm c}^{\prime}_{1}) \delta
({\bm l}_{1}-{\bm l}^{\prime}_{1}) \chi({c}_{1}).
\end{eqnarray}
An equation for this distribution follows from Eqs.\ (\ref{2.18})
and (\ref{2.24}),
\begin{equation}
\label{2.29} \left[ {\bm c}_{1} \cdot \frac{\partial}{\partial {\bm
l}_{1}}+{\bm c}_{1}^{\prime} \cdot \frac{\partial}{\partial {\bm
l}_{1}^{\prime}}-\Lambda({\bm c}_{1})-\Lambda({\bm c}_{1}^{\prime})
\right]\widetilde{h}({\bm l}_{1}-{\bm l}^{\prime}_{1},{\bm
c}_{1};{\bm c}^{\prime}_{1}) =\delta ({\bm l}_{1}-{\bm
l}^{\prime}_{1})\widetilde{\Gamma}({\bm c}_{1},{\bm
c}^{\prime}_{1}),
\end{equation}
with
\begin{equation}
\label{2.30} \widetilde{\Gamma} ({\bm c}_{1},{\bm c}^{\prime}_{1}) =
- \left[ \Lambda ({\bm c}_{1})+\Lambda ({\bm c}^{\prime}_{1})
\right] \delta ({\bm c}_{1}-{\bm c}^{\prime}_{1}) \chi ({c}_{1})
+\overline{T}_{0} ({\bm c}_{1},{\bm c}^{\prime}_{1}) \chi ({c}_{1})
\chi({c}^{\prime}_{1}).
\end{equation}
Equations (\ref{2.24}) and (\ref{2.27}) describe the correlations
between fluctuations in the HCS. They become closed once the
solution to Eqs.\ (\ref{2.17}) and (\ref{2.18}) is known. In the
next section, an alternative and consistent description to that
provided by these kinetic equations will be developed.
\section{Fluctuating Boltzmann equation around the HCS}
\label{s3} Equation (\ref{2.4}) is an exact consequence of the
dynamical equations governing the motion of the particles. The aim
of this section is to approximate it in such a way that it gives a
closed description of the effective dynamics of a dilute granular
gas in the HCS. To do so, the spatial separation between the centers
of colliding particles will be neglected in the operator
$\overline{T}(x_{1},x_{2})$, and $F_{2}(x_{1},x_{2},t)$ will be approximated by an
effective (Boltzmann) two-particle phase space density at the mesoscopic level
$F_{2}^{B}(x_{1},x_{2},t)$. Moreover, the dimensionless time scale
$s$ and length scale ${\bm l}$ introduced in the previous section
will be used. Then, Eq.\ (\ref{2.4}) becomes
\begin{equation}
\label{3.1} \left( \frac{\partial}{\partial s}+\frac{\zeta_{0}}{2}
\frac{\partial}{\partial {\bm c}_{1}} \cdot {\bm c}_{1} + {\bm
c}_{1} \cdot \frac{\partial}{\partial {\bm l}_{1}} \right)
\widetilde{F}_{1} ({\bm l}_{1},{\bm c}_{1},s) = \int d{\bm c}_{2}\,
\overline{T}_{0} ({\bm c}_{1}, {\bm c}_{2} ) \widetilde{F}_{2}^{B}
({\bm l}_{1},{\bm c}_{1},{\bm l}_{1},{\bm c}_{2},s),
\end{equation}
where dimensionless phase space densities have been defined by
\begin{equation}
\label{3.2} \widetilde{F}_{1} ({\bm l}_{1},{\bm c}_{1},s)=n^{-1}
v_{0}^{d}(t) F_{1}(x_{1},t),
\end{equation}
\begin{equation}
\label{3.3} \widetilde{F}_{2}^{B} ({\bm l}_{1},{\bm c}_{1},{\bm
l}_{2},{\bm c}_{2},s)= n^{-2} v_{0}^{2d}(t) F_{2}^{B}
(x_{1},x_{2},t).
\end{equation}
Comparison of the ensemble average of Eq.\ (\ref{3.1}) with Eq.
(\ref{2.18}) gives the conditions
\begin{equation}
\label{3.4} \langle \widetilde{F}_{1} ({\bm l}_{1},{\bm c}_{1},s)
\rangle_{\text{H}} = \chi({c}_{1}),
\end{equation}
\begin{equation}
\label{3.5} \int d{\bm c}_{2}\ \overline{T}_{0}({\bm c}_{1}, {\bm
c}_{2} ) \langle\widetilde{F}_{2}^{B} ({\bm l}_{1},{\bm c}_{1}, {\bm
l}_{1},{\bm c}_{2},s) \rangle_{\text{H}} = J_{c} \left[ {\bm c} |
\chi\right].
\end{equation}
The subindex $\text{H}$ in the angular brackets indicates that the
ensemble average is taken over the probability distribution for the
HCS.
The deviation of the microscopic density from its average value is
defined by
\begin{equation}
\label{3.6} \delta \widetilde{F}_{1} ({\bm l}_{1},{\bm c}_{1},s)
\equiv \widetilde{F}_{1} ({\bm l}_{1},{\bm c}_{1},s)- \chi
({c}_{1}).
\end{equation}
An evolution equation for this quantity follows by subtracting Eqs.\
(\ref{3.1}) and (\ref{2.18}),
\begin{eqnarray}
\label{3.7}
\left( \frac{\partial}{\partial s} \right. & + & \left.
\frac{\zeta_{0}}{2} \frac{\partial}{\partial {\bm c}_{1}} \cdot {\bm
c}_{1} + {\bm c}_{1} \cdot \frac{\partial}{\partial {\bm l}_{1}}
\right) \delta
\widetilde{F}_{1} ({\bm l}_{1},{\bm c}_{1},s) \nonumber \\
& = & \int d{\bm c}_{2}\, \overline{T}_{0} ({\bm c}_{1}, {\bm
c}_{2} ) \left[ \widetilde{F}_{2}^{B} ({\bm l}_{1},{\bm c}_{1},{\bm
l}_{1},{\bm c}_{2},s) -\chi({c}_{1})\chi ({c}_{2}) \right].
\end{eqnarray}
The structure of this equation suggests introducing a cluster
decomposition for $\widetilde{F}_{2}^{B}$ of the form
\begin{equation}
\label{3.8} \widetilde{F}_{2}^{B} ({\bm l}_{1},{\bm c}_{1},{\bm
l}_{1},{\bm c}_{2},s) = \chi({c}_{1})\chi ({c}_{2}) + \chi({c}_{1})
\delta \widetilde{F}_{1}({\bm l}_{1},{\bm c}_{2},s) + \chi({c}_{2})
\delta \widetilde{F}_{1}({\bm l}_{1},{\bm c}_{1},s) +
\widetilde{\Phi}_{2}^{B}({\bm l}_{1},{\bm c}_{1},{\bm c}_{2},s).
\end{equation}
This equation defines the microscopic correlation density
$\widetilde{\Phi}_{2}^{B}({\bm l}_{1},{\bm c}_{1},{\bm c}_{2},s)$.
Substitution of its ensemble average in Eq.\ (\ref{3.5}) yields
\begin{equation}
\label{3.9} \int d{\bm c}_{2}\, \overline{T}_{0} ({\bm c}_{1}, {\bm
c}_{2}) \langle \widetilde{\Phi}_{2}^{B}({\bm l}_{1},{\bm
c}_{1},{\bm c}_{2},s) \rangle_{\text{H}}=0.
\end{equation}
Moreover, use of Eq.\ (\ref{3.8}) in Eq.\ (\ref{3.7}) allows the equation to be rewritten in the equivalent form
\begin{equation}
\label{3.10} \left[ \frac{\partial}{\partial s} +{\bm c}_{1} \cdot
\frac{\partial}{\partial {\bm l}_{1}} - \Lambda ({\bm c}_{1}) \right]
\delta \widetilde{F}_{1} ({\bm l}_{1},{\bm c}_{1},s)= \widetilde{S}
( {\bm l}_{1}, {\bm c}_{1},s ),
\end{equation}
where
\begin{equation}
\label{3.11} \widetilde{S} ( {\bm l}_{1}, {\bm c}_{1},s ) \equiv
\int d{\bm c}_{2}\, \overline{T}_{0} ({\bm c}_{1}, {\bm c}_{2})
\widetilde{\Phi}_{2}^{B}({\bm l}_{1},{\bm c}_{1},{\bm c}_{2},s)
\end{equation}
and the operator $\Lambda ({\bm c}_{1})$ was defined in Eq.\ (\ref{2.25}). Equation
(\ref{3.10}) can be interpreted as a fluctuating Boltzmann-Langevin equation
for the one-particle distribution function \cite{ByZ69,Hi70,FyU70},
with the ``noise term'' being $ \widetilde{S} ( {\bm l}_{1}, {\bm c}_{1},s )$. Of course,
this does not by itself add any new physical insight into the
starting equation (\ref{3.1}). The relevance and usefulness of this
representation depend on the properties of the noise term. A
first property follows directly from Eq.\ (\ref{3.9}), which is
equivalent to
\begin{equation}
\label{3.12} \langle \widetilde{S} ({\bm l}_{1}, {\bm c}_{1},s
)\rangle_{\text{H}}=0,
\end{equation}
i.e. the noise has zero average. In the following, other properties
of $\widetilde{S}$ will be derived by requiring consistency with the
results derived in the previous section. Multiplication of Eq.\
(\ref{3.10}) by $\delta \widetilde{F}_{1} ({\bm l}_{1}^{\prime},
{\bm c}^{\prime}_{1},s^{\prime})$ with $s^{\prime}<s$, followed by averaging
gives
\begin{equation}
\label{3.13} \left[ \frac{\partial}{\partial s} +{\bm c}_{1} \cdot
\frac{\partial}{\partial {\bm l}_{1}} - \Lambda ({\bm c}_{1}) \right]
\langle \delta \widetilde{F}_{1} ({\bm l}_{1},{\bm c}_{1},s) \delta
\widetilde{F}_{1} ({\bm l}_{1}^{\prime},{\bm
c}_{1}^{\prime},s^{\prime})\rangle_{\text{H}} = \langle
\widetilde{S} ({\bm l}_{1},{\bm c}_{1},s) \delta \widetilde{F}_{1}
({\bm l}_{1}^{\prime},{\bm
c}_{1}^{\prime},s^{\prime})\rangle_{\text{H}}.
\end{equation}
From the definition of $\delta \widetilde{F}_{1} ({\bm l}_{1},{\bm
c}_{1},s)$ it is easily verified that
\begin{equation}
\label{3.14} \langle \delta \widetilde{F}_{1} ({\bm l}_{1},{\bm
c}_{1},s) \delta \widetilde{F}_{1} ({\bm l}_{1}^{\prime},{\bm
c}_{1}^{\prime},s^{\prime})\rangle_{\text{H}} = n^{-1}
\lambda^{-d} \widetilde{h}({\bm l}_{1}-{\bm l}_{1}^{\prime},{\bm
c}_{1},s-s^{\prime}; {\bm c}_{1}^{\prime}),
\end{equation}
where $\widetilde{h}$ is defined in Eq.\ (\ref{2.26}). Therefore,
consistency of Eqs. (\ref{3.13}) and (\ref{2.27}) implies that
\begin{equation}
\label{3.15} \langle \widetilde{S} ({\bm l}_{1},{\bm c}_{1},s)
\delta \widetilde{F}_{1} ({\bm l}_{1}^{\prime},{\bm
c}_{1}^{\prime},s^{\prime})\rangle_{\text{H}}=0,
\end{equation}
for $s>s^{\prime}$. Since, by hypothesis, the parameters of the
system are such that the HCS is stable, the long time solution of
Eq.\ (\ref{3.10}) is
\begin{equation}
\label{3.16} \delta \widetilde{F}_{1} ({\bm l}_{1}, {\bm c}_{1},s) =
\int_{-\infty}^{s} d\tau\, e^{(s-\tau)L({\bm l}_{1},{\bm c}_{1})}
\widetilde{S} ({\bm l}_{1},{\bm c}_{1}, \tau),
\end{equation}
where the linear operator
\begin{equation}
\label{3.17} L({\bm l}_{1},{\bm c}_{1}) \equiv \Lambda ({\bm
c}_{1})-{\bm c}_{1} \cdot \frac{\partial}{\partial {\bm l}_{1}}
\end{equation}
has been introduced. Using Eq.\ (\ref{3.16}), one obtains
\[
\left[ L({\bm l}_{1},{\bm c}_{1})+L ({\bm l}_{1}^{\prime},{\bm
c}_{1}^{\prime}) \right] \langle \delta \widetilde{F}_{1} ({\bm
l}_{1},{\bm c}_{1},s) \delta \widetilde{F}_{1} ({\bm
l}_{1}^{\prime},{\bm c}_{1}^{\prime},s^{\prime})\rangle_{\text{H}}
\]
\[
= -\int_{- \infty}^{s} d\tau\, e^{(s-\tau)L({\bm l}_{1},{\bm
c}_{1})} \langle \widetilde{S} ({\bm l}_{1},{\bm c}_{1},\tau)
\widetilde{S} ({\bm l}_{1}^{\prime},{\bm
c}_{1}^{\prime},s)\rangle_{\text{H}} \]
\begin{equation}
\label{3.18} - \int_{- \infty}^{s} d\tau\, e^{(s-\tau)L({\bm
l}_{1}^{\prime},{\bm c}_{1}^{\prime})} \langle \widetilde{S} ({\bm
l}_{1},{\bm c}_{1},s) \widetilde{S} ({\bm l}_{1}^{\prime},{\bm
c}_{1}^{\prime},\tau)\rangle_{\text{H}}.
\end{equation}
This equation must be compared with Eq.\ (\ref{2.29}), keeping in
mind Eq.\ (\ref{3.14}). The time independence of the right hand side
of Eq.\ (\ref{2.29}) prompts the hypothesis that the
noise term $\widetilde{S}$ is Markovian, so that one can write
\begin{equation}
\label{3.20} \langle \widetilde{S} ({\bm l}_{1},{\bm c}_{1},s)
\widetilde{S} ({\bm l}_{1}^{\prime},{\bm
c}_{1}^{\prime},s^{\prime})\rangle_{\text{H}} = H({\bm c}_{1},{\bm
c}^{\prime}_{1}) \delta ({\bm l}_{1}-{\bm l}_{1}^{\prime}) \delta
(s-s^{\prime}).
\end{equation}
Substituting this into Eq.\ (\ref{3.18}) and comparing with Eq.\ (\ref{2.29}), it
follows that
\begin{equation}
\label{3.21} H({\bm c}_{1},{\bm c}_{1}^{\prime}) = n^{-1}
\lambda^{-d} \widetilde{\Gamma} ({\bm c}_{1},{\bm c}_{1}^{\prime}),
\end{equation}
with $\widetilde{\Gamma}$ defined in Eq.\ (\ref{2.30}).
The properties given by Eqs.\ (\ref{3.12}), (\ref{3.15}), and
(\ref{3.20}) guarantee that the description provided by the Langevin
equation (\ref{3.10}) leads to the same expressions for the
two-particle, one-time and two-time correlation functions as the
formulation in terms of reduced distribution functions reviewed in
Sec.\ \ref{s2}.
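The structure of Eqs.\ (\ref{3.10}), (\ref{3.12}), and (\ref{3.20}) is that of a linear stochastic equation driven by zero-mean, delta-correlated noise. As a purely illustrative sketch, and not a representation of the actual kinetic operators, a scalar toy analogue with hypothetical parameters \texttt{lam} (playing the role of $L$) and \texttt{H} (the noise strength of Eq.\ (\ref{3.20})) can be integrated by the Euler-Maruyama scheme:

```python
import random

def euler_maruyama(lam, H, ds, nsteps, seed=0):
    """Toy scalar analogue of the Boltzmann-Langevin equation (3.10):
    d(dF)/ds = lam * dF + S(s), with <S> = 0 and
    <S(s) S(s')> = H * delta(s - s')  (Markovian noise, Eq. (3.20))."""
    rng = random.Random(seed)
    dF = 0.0
    traj = []
    for _ in range(nsteps):
        # A Gaussian increment of variance H*ds mimics the delta-correlated noise.
        dF += lam * dF * ds + rng.gauss(0.0, (H * ds) ** 0.5)
        traj.append(dF)
    return traj

# Stable case (lam < 0): fluctuations relax to a stationary distribution.
traj = euler_maruyama(lam=-1.0, H=0.1, ds=1e-3, nsteps=5000)
```

For a stable linear mode the trajectory fluctuates around zero with stationary variance $H/2|\mathrm{lam}|$, which is the elementary analogue of the fluctuation-dissipation relations derived below.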
\section{Fluctuating hydrodynamic fields and balance equations}
\label{s4}
The fluctuating particle number density, $N ({\bm r},t)$,
momentum density, ${\bm G}({\bm r},t)$, and energy density, $E({\bm
r},t)$, are defined in terms of the microscopic phase space density
as
\begin{equation}
\label{4.1} N({\bm r},t) = \int d{\bm v}\ F_{1}(x,t),
\end{equation}
\begin{equation}
\label{4.2} {\bm G}({\bm r},t) = \int d{\bm v}\, m {\bm v}
F_{1}(x,t),
\end{equation}
\begin{equation}
\label{4.3} E({\bm r},t) = \int d{\bm v}\, \frac{mv^{2}}{2}
F_{1}(x,t).
\end{equation}
Dimensionless deviations from their average values in the HCS are given by
\begin{equation}
\label{4.4} \delta \rho ({\bm l},s) \equiv \frac{\delta N ({\bm
r},t)}{n} = \int d{\bm c}\, \delta \widetilde{F}_{1} ( {\bm l},{\bm
c},s),
\end{equation}
\begin{equation}
\label{4.5} \delta {\bm \omega} ({\bm l},s) \equiv \frac{\delta {\bm
G}({\bm r},t)}{m n v_{0}(t)} = \int d{\bm c}\, {\bm c} \delta
\widetilde{F}_{1} ( {\bm l},{\bm c},s),
\end{equation}
\begin{equation}
\label{4.6} \delta \epsilon ({\bm l},s) \equiv \frac{2 \delta E
({\bm r},t)}{d n T(t)}\, = \frac{2}{d} \int d{\bm c}\, c^{2} \delta
\widetilde{F}_{1} ( {\bm l},{\bm c},s).
\end{equation}
The quantity $\delta {\bm \omega}({\bm l},s)$ is the dimensionless
velocity field. Balance equations for these fluctuating fields
follow by taking velocity moments in the Langevin-Boltzmann equation
(\ref{3.10}) and using the properties of the noise $\widetilde{S}$.
Some details of the calculations are given in appendix \ref{ap1}.
The resulting equations read
\begin{equation}
\label{4.7}
\frac{\partial}{\partial s}\, \delta \rho ({\bm l},s)+\frac{\partial}{\partial {\bm l}} \cdot \delta
{\bm \omega} ({\bm l},s)=0,
\end{equation}
\begin{equation}
\label{4.8} \left( \frac{\partial}{\partial s} -\frac{\zeta_{0}}{2}
\right) \delta {\bm \omega} ({\bm l},s) + \frac{\partial}{\partial
{\bm l}}\, \cdot \delta {\sf \Pi} ({\bm l},s)=0,
\end{equation}
\begin{equation}
\label{4.9} \left( \frac{\partial}{\partial s} - \zeta_{0} \right)
\delta \epsilon ({\bm l},s) + \frac{d+2}{d}\,
\frac{\partial}{\partial {\bm l}}\, \cdot \delta {\bm \omega} ({\bm
l},s) + \delta \zeta_{0}({\bm l},s) + \frac{2}{d}
\frac{\partial}{\partial {\bm l}}\, \cdot \delta {\bm \phi} ({\bm
l},s) = \widetilde{S}_{\epsilon}({\bm l},s).
\end{equation}
In the above equations, $\delta {\sf \Pi}({\bm l}, s)$ and $\delta
{\bm \phi}({\bm l},s)$ are the fluctuating pressure tensor and heat
flux, respectively. Their definitions in terms of the fluctuating
one-particle distribution function are
\begin{equation}
\label{4.10} \delta {\sf \Pi}({\bm l}, s) = \frac{\delta \epsilon
({\bm l},s)}{2} {\sf I}+ \int d{\bm c}\, {\sf \Delta} ({\bm c})
\delta \widetilde{F}_{1} ( {\bm l},{\bm c},s),
\end{equation}
\begin{equation}
\label{4.11} \delta {\bm \phi}({\bm l},s) = \int d{\bm c}\, {\bm
\Sigma} ({\bm c}) \delta \widetilde{F}_{1} ( {\bm l},{\bm c},s),
\end{equation}
where ${\sf I}$ is the unit tensor of dimension $d$, and
\begin{equation}
\label{4.12} {\sf \Delta}({\bm c}) \equiv {\bm c} {\bm c}-
\frac{c^{2}}{d}\ {\sf I},
\end{equation}
\begin{equation}
\label{4.13} {\bm \Sigma}({\bm c}) \equiv \left( c^{2}-
\frac{d+2}{2} \right) {\bm c}.
\end{equation}
The term $\delta \zeta_{0} ({\bm l}, s)$ represents the fluctuations
of the cooling rate about its average value in the HCS. Its formal
expression is
\begin{equation}
\label{4.14} \delta \zeta_{0} ({\bm l},s) =
\frac{(1-\alpha^{2})\pi^{\frac{d-1}{2}}}{ \Gamma \left(
\frac{d+3}{2} \right)d} \int d{\bm c}_{1} \int d{\bm c}_{2}\,
c_{12}^{3}\chi({c}_{1}) \delta \widetilde{F}_{1} (
{\bm l},{\bm c}_{2},s).
\end{equation}
Finally, $\widetilde{S}_{\epsilon}({\bm l},s)$ is a fluctuating
force term having the properties
\begin{equation}
\label{4.15} \langle \widetilde{S}_{\epsilon}({\bm l},s)
\rangle_{\text{H}} =0
\end{equation}
and
\begin{eqnarray}
\label{4.16} \langle \widetilde{S}_{\epsilon}({\bm l},s)
\widetilde{S}_{\epsilon}({\bm
l}^{\prime},s^{\prime})\rangle_{\text{H}} & = & n^{-1} \lambda^{-d}
\delta (s-s^{\prime}) \delta ({\bm l}-{\bm l}^{\prime}) \left[ \int
d{\bm c}_{1}\, \int d{\bm c}_{2}\, \chi(c_{1}) \chi(c_{2}) \Phi
({\bm c}_{1},{\bm c}_{2}) \right. \nonumber \\
&& \left. - \frac{8}{d^{2}}\, \zeta_{0} \int d{\bm c}\, c^{4}
\chi(c) \right],
\end{eqnarray}
with $\Phi ({\bm c}_{1},{\bm c}_{2})$ given by Eq.\ (\ref{ap1.16}).
This noise term is intrinsic to the inelasticity of collisions and
has no analogue in normal fluids. Of course, in the elastic limit
$\alpha \rightarrow 1$, $\chi(c)$ becomes a Gaussian and the
fluctuating force $\widetilde{S}_{\epsilon}$ is seen to vanish in
agreement with the well known results for hydrodynamic fluctuations
in molecular fluids \cite{LyL66}. The other main difference between
Eq.\ (\ref{4.9}) and the corresponding equation for molecular gases
is the presence of the two terms involving the cooling rate,
$\zeta_{0}$, and its fluctuations, $\delta \zeta_{0}$. These terms
are directly associated with the existence of the cooling term in
the macroscopic equation for the average energy
\cite{BDKyS98,ByC01,Go03}.
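As a one-line consistency check of Eq.\ (\ref{4.7}) (the full derivation is in appendix \ref{ap1}), note that in the zeroth velocity moment of Eq.\ (\ref{3.10}) the $\zeta_{0}$ contribution to $\Lambda ({\bm c})$ is a divergence in velocity space, while the linearized collision term and the noise integrate to zero because collisions conserve the number of particles,
\begin{equation}
\int d{\bm c}\, \frac{\zeta_{0}}{2}\, \frac{\partial}{\partial {\bm c}}
\cdot \left[ {\bm c}\, \delta \widetilde{F}_{1} ({\bm l},{\bm c},s) \right] = 0,
\qquad
\int d{\bm c}\, \widetilde{S} ({\bm l},{\bm c},s) = 0,
\end{equation}
so that only the streaming term survives and the continuity equation
$\partial_{s}\, \delta \rho + \partial_{\bm l} \cdot \delta {\bm \omega} = 0$ follows.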
\section{Langevin equation for the velocity field}
\label{s5} Equations (\ref{4.7})-(\ref{4.9}) do not provide a closed
description of the hydrodynamic fluctuations of a granular gas in
the HCS until $\delta {\sf \Pi}$, $\delta {\bm \phi}$, and $\delta
\zeta_{0}$ are expressed in terms of the fluctuating hydrodynamic
fields. This turns out to be not a simple task, and attention will
be restricted in the following to the equation of the velocity field
$\delta {\bm \omega}({\bm l},s)$, Eq. (\ref{4.8}).
Given two functions $f({\bm c})$ and $g({\bm c})$, their scalar
product is defined as
\begin{equation}
\label{5.1} \langle f|g\rangle \equiv \int d{\bm c}\, \chi^{-1} (c)
f^{+}({\bm c}) g({\bm c}),
\end{equation}
where $f^{+}({\bm c})$ is the complex conjugate of $f({\bm c})$.
Next, a projection operator $\mathcal{P}$ is introduced by
\begin{equation}
\label{5.2} \mathcal{P} f({\bm c}) \equiv \sum_{\beta=1}^{d+2}
\xi_{\beta}({\bm c}) \langle\overline{\xi}_{\beta}|f\rangle.
\end{equation}
Here, the functions $\xi_{\beta}({\bm c}), \beta=1,\ldots,d+2$ are
the eigenfunctions of the linear Boltzmann operator $\Lambda ({\bm
c})$ defined in Eq.\ (\ref{2.25}), corresponding to the hydrodynamic
part of its spectrum. Therefore, they are solutions of the equation
\begin{equation}
\label{5.3} \Lambda ({\bm c}) \xi_{\beta}({\bm c}) = \lambda_{\beta}
\xi_{\beta}({\bm c}).
\end{equation}
Their expressions are \cite{BDyR03,ByD05}
\begin{equation}
\label{5.4} \xi_{1}({\bm c})= \chi(c)+\frac{\partial}{\partial {\bm
c}} \cdot \left[ {\bm c} \chi (c) \right], \quad {\bm \xi}_{2}({\bm
c})=-\frac{\partial \chi(c)}{\partial {\bm c}}, \quad \xi_{3}({\bm
c})= -\frac{\partial}{\partial {\bm c}} \cdot \left[ {\bm c} \chi
(c) \right].
\end{equation}
The associated eigenvalues are found to be
\begin{equation}
\label{5.5} \lambda_{1}=0, \quad \lambda_{2}=\frac{\zeta_{0}}{2}\, ,
\quad \lambda_{3} =-\frac{\zeta_{0}}{2},
\end{equation}
the eigenvalue $\lambda_{2}$ being $d$-fold degenerate. Finally, the
functions $\overline{\xi}_{\beta}({\bm c})$ are
\begin{equation}
\label{5.6} \overline{\xi}_{1}({\bm c})=\chi(c), \quad \overline{\bm
\xi}_{2} ({\bm c})={\bm c} \chi(c), \quad \overline{\xi}_{3} ({\bm
c})= \left( \frac{c^{2}}{d} +\frac{1}{2} \right) \chi(c).
\end{equation}
The sets of functions $\{ \xi_{\beta}({\bm c})\}$ and $\{
\overline{\xi}_{\beta}({\bm c})\}$ are seen to have the
biorthogonality property
\begin{equation}
\label{5.7} \langle \overline{\xi}_{\beta} |\xi_{\beta^{\prime}}
\rangle =\delta_{\beta,\beta^{\prime}}\, ,
\end{equation}
$\beta, \beta^{\prime}=1,2,\ldots,d+2$. This guarantees that
$\mathcal{P}$, as defined by Eq.\ (\ref{5.2}), is indeed a projection
operator, i.e., it satisfies $\mathcal{P}^{2} = \mathcal{P}$. It
projects any function of ${\bm c}$ onto the subspace spanned by the
hydrodynamic eigenfunctions of $\Lambda$.
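Indeed, idempotence follows in one line from the biorthogonality relation (\ref{5.7}),
\begin{equation}
\mathcal{P}^{2} f({\bm c}) = \sum_{\beta, \beta^{\prime}}
\xi_{\beta}({\bm c}) \langle \overline{\xi}_{\beta} |
\xi_{\beta^{\prime}} \rangle \langle \overline{\xi}_{\beta^{\prime}}
| f \rangle = \sum_{\beta} \xi_{\beta}({\bm c}) \langle
\overline{\xi}_{\beta} | f \rangle = \mathcal{P} f({\bm c}).
\end{equation}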
In the following, it will be more convenient to work in the Fourier
representation. The Fourier transform of $\delta \widetilde{F}_{1} ({\bm
l},{\bm c},s)$ is
\begin{equation}
\label{5.8} \delta \widetilde{F}_{1} ({\bm k},{\bm c},s) = \int d{\bm l}\,
e^{-i {\bm k} \cdot {\bm l}} \delta \widetilde{F}_{1} ({\bm l},{\bm
c},s).
\end{equation}
By means of $\mathcal{P}$, $\delta \widetilde{F}_{1} ({\bm k},{\bm c},s)$
can be decomposed into its hydrodynamic
and non-hydrodynamic parts,
\begin{equation}
\label{5.9} \delta \widetilde{F}_{1} ({\bm k},{\bm c},s)= \mathcal{P}
\delta \widetilde{F}_{1} ({\bm k},{\bm c},s) +\mathcal{P}_{\perp}
\delta \widetilde{F}_{1} ({\bm k},{\bm c},s),
\end{equation}
where $\mathcal{P}_{\perp} \equiv 1- \mathcal{P}$. The Fourier
representation of the balance equation for the velocity
fluctuations, Eq. (\ref{4.8}), is
\begin{equation}
\label{5.10} \left( \frac{\partial}{\partial s} -
\frac{\zeta_{0}}{2} \right) \delta {\bm \omega} ({\bm k},s) + i {\bm
k} \cdot \delta {\sf \Pi} ({\bm k},s) =0,
\end{equation}
\begin{equation}
\label{5.11} \delta {\sf \Pi} ({\bm k},s) = \frac{\delta \epsilon
({\bm k},s)}{2}\, {\sf I} +\int d{\bm c}\, {\sf \Delta} ({\bm c})
\delta \widetilde{F}_{1} ({\bm k},{\bm c},s),
\end{equation}
where ${\sf \Delta}({\bm c})$ is defined in Eq.\ (\ref{4.12}).
Getting an explicit expression for $ \delta {\sf \Pi} ({\bm k},s) $
in terms of the fluctuating hydrodynamic fields is the next issue to be
addressed. By direct evaluation, it is easily verified that
\begin{equation}
\label{5.12} \int d{\bm c}\, {\sf \Delta} ({\bm c}) \xi_{\beta}({\bm c})=0,
\end{equation}
$\beta = 1, \ldots, d+2$. Hence Eq. (\ref{5.11}) is equivalent to
\begin{equation}
\label{5.13} \delta {\sf \Pi} ({\bm k},s) = \frac{\delta \epsilon
({\bm k},s)}{2}\, {\sf I} +\int d{\bm c}\, {\sf \Delta} ({\bm c})
\mathcal{P}_{\perp} \delta \widetilde{F}_{1} ({\bm k},{\bm c},s).
\end{equation}
To compute $ \mathcal{P}_{\perp} \delta \widetilde{F}_{1} ({\bm k},{\bm
c},s)$, the operator $\mathcal{P}_{\perp} $ is applied to both sides
of the Fourier transform of the Boltzmann-Langevin equation
(\ref{3.10}),
\begin{equation}
\label{5.14} \left\{ \frac{\partial}{\partial s} -
\mathcal{P}_{\perp} \left[ \Lambda ({\bm c}) -i {\bm k} \cdot {\bm
c} \right] \right\} \mathcal{P}_{\perp} \delta \widetilde{F}_{1}
({\bm k},{\bm c},s) = - \mathcal{P}_{\perp} i {\bm k} \cdot {\bm c}
\mathcal{P} \delta \widetilde{F}_{1} ({\bm k},{\bm c},s) +
\mathcal{P}_{\perp} \widetilde{S}({\bm k},{\bm c},s),
\end{equation}
where use has been made of the property $\mathcal{P}_{\perp} \Lambda
\mathcal{P} =0$, which is a consequence of the fact that
$\mathcal{P}$ projects onto a subspace spanned by eigenfunctions
of $\Lambda$. The solution of the above equation can be formally
written as
written as
\begin{eqnarray}
\label{5.15} \mathcal{P}_{\perp} \delta \widetilde{F}_{1} ({\bm
k},{\bm c},s) & = & \mathcal{U}({\bm k},{\bm c},s)
\mathcal{P}_{\perp}
\delta \widetilde{F}_{1} ({\bm k},{\bm c},0)+ \int_{0}^{s} ds^{\prime}\, \mathcal{U}({\bm k},{\bm
c},s^{\prime}) \mathcal{P}_{\perp} \left[ - i {\bm k} \cdot {\bm c}
\mathcal{P} \delta \widetilde{F}_{1} ({\bm k},{\bm c},s-s^{\prime})
\right. \nonumber \\
&& \left.+ \widetilde{S}({\bm k},{\bm c},s-s^{\prime}) \right],
\end{eqnarray}
with
\begin{equation}
\label{5.16} \mathcal{U}({\bm k},{\bm c},s) \equiv \exp \left[s
\mathcal{P}_{\perp} L({\bm k},{\bm c}) \mathcal{P}_{\perp} \right],
\end{equation}
\begin{equation}
\label{5.17} L({\bm k},{\bm c}) \equiv \Lambda ({\bm c}) -i {\bm k}
\cdot {\bm c}.
\end{equation}
Taking again into account that the HCS is assumed to be stable for
the system considered, the first term on the right hand side of
Eq.\, (\ref{5.15}) can be neglected for large enough times $s$.
Moreover, to derive hydrodynamic equations valid to Navier-Stokes
order, only the first order in $k$ of the pressure tensor is needed.
To this order,
\[
\int_{0}^{s} ds^{\prime}\, \mathcal{U}({\bm k},{\bm c},s^{\prime})
\mathcal{P}_{\perp} \left( - i {\bm k} \cdot {\bm c} \right)
\mathcal{P} \delta \widetilde{F}_{1} ({\bm k},{\bm c},s-s^{\prime})
\]
\[ \rightarrow
\int_{0}^{s} ds^{\prime}\, e^{ s^{\prime} \mathcal{P}_{\perp}
\Lambda ({\bm c}) \mathcal{P}_{\perp}} \mathcal{P}_{\perp} \left( - i {\bm k} \cdot
{\bm c} \right) \mathcal{P} \delta \widetilde{F}_{1} ({\bm k},{\bm
c},s- s^{\prime})
\]
\begin{equation}
\label{5.18} = \int_{0}^{s} ds^{\prime}\,
\mathcal{P}_{\perp} e^{ s^{\prime}\Lambda ({\bm c})} \left( - i {\bm
k} \cdot {\bm c} \right) \mathcal{P} \delta \widetilde{F}_{1} ({\bm
k},{\bm c},s- s^{\prime}).
\end{equation}
Then, for large $s$ Eq.\ (\ref{5.15}) reduces to
\begin{eqnarray}
\label{5.19} \mathcal{P}_{\perp} \delta \widetilde{F}_{1} ({\bm
k},{\bm c},s) & =& \int_{0}^{s} ds^{\prime}\, \mathcal{P}_{\perp}
e^{ s^{\prime}\Lambda ({\bm c})} \left( - i {\bm k} \cdot {\bm c}
\right) \mathcal{P} \delta
\widetilde{F}_{1} ({\bm k},{\bm c},s- s^{\prime}) \nonumber \\
& & + \int_{0}^{s} d s^{\prime}\, \mathcal{U}({\bm k},{\bm
c},s^{\prime}) \mathcal{P}_{\perp} \widetilde{S}({\bm k},{\bm
c},s-s^{\prime}),
\end{eqnarray}
and substitution of this into Eq.\ (\ref{5.13}) yields
\begin{equation}
\label{5.20} \delta {\sf \Pi} ({\bm k},s) = \frac{\delta \epsilon
({\bm k},s)}{2}\, {\sf I}+\delta_{1} {\sf \Pi} ({\bm k},s) + {\sf R}
({\bm k},s),
\end{equation}
where
\begin{equation}
\label{5.21} \delta_{1} {\sf \Pi} ({\bm k},s) = \int_{0}^{s} d
s^{\prime}\, \int d{\bm c}\, {\sf \Delta}({\bm c}) e^{s^{\prime}
\Lambda ({\bm c})} (-i {\bm k} \cdot {\bm c} ) \mathcal{P} \delta
\widetilde{F}_{1} ({\bm k}, {\bm c},s-
s^{\prime}),
\end{equation}
and
\begin{equation}
\label{5.22} {\sf R}({\bm k},s)= \int_{0}^{s} ds^{\prime} \int d{\bm
c}\, {\sf \Delta} ({\bm c}) \mathcal{U} ({\bm k},{\bm c},
s^{\prime}) \mathcal{P}_{\perp} \widetilde{S} ({\bm k},{\bm
c},s-s^{\prime}).
\end{equation}
Upon writing Eq. (\ref{5.21}), Eq.\ (\ref{5.12}) has been employed
to remove the operator $\mathcal{P}_{\perp}$ appearing in the first
term on the right hand side of Eq. (\ref{5.19}). Because of the
isotropy of the operator $\Lambda ({\bm c})$, only the projection
onto ${\bm \xi}_{2}({\bm c})$ gives a non-vanishing contribution to
the above expression for $\delta_{1} {\sf \Pi}({\bm k},s)$, which can
be simplified to
\begin{eqnarray}
\label{5.23} \delta_{1} {\sf \Pi} ({\bm k},s) & = & \int_{0}^{s} d
s^{\prime}\, \int d{\bm c}\, {\sf \Delta}({\bm c}) e^{s^{\prime}
\Lambda ({\bm c})} (-i {\bm k} \cdot {\bm c} ) {\bm \xi}_{2} ({\bm c}) \cdot \langle\overline{\bm
\xi}_{2}({\bm c}) | \delta \widetilde{F}_{1} ({\bm k},{\bm
c},s-s^{\prime}) \rangle \nonumber \\
& = & \int_{0}^{s} d s^{\prime}\, \int d{\bm c}\, {\sf \Delta}({\bm
c}) e^{s^{\prime} \Lambda ({\bm c})} (-i {\bm k} \cdot {\bm c} )
{\bm \xi}_{2} ({\bm
c}) \cdot \delta {\bm \omega} ({\bm k},s-s^{\prime}) \nonumber \\
& \simeq & \int_{0}^{s} d s^{\prime}\, \int d{\bm c}\, {\sf
\Delta}({\bm c}) e^{s^{\prime} \Lambda ({\bm c})} (-i {\bm k} \cdot
{\bm c} ) e^{-s^{\prime} \zeta_{0}/2} {\bm \xi}_{2} ({\bm c}) \cdot
\delta {\bm \omega} ({\bm k},s).
\end{eqnarray}
In the previous transformations, the definition in Eq. (\ref{4.5})
has been used, and it has been taken into account that to lowest
order in $k$,
\begin{equation}
\label{5.24} \delta \omega ({\bm k},s-s^{\prime})= e^{-s^{\prime}
\zeta_{0}/2}\delta \omega ({\bm k},s),
\end{equation}
according to the balance equation for the fluctuating velocity
field, Eq.\ (\ref{4.8}). Using again the symmetry of $\Lambda ({\bm
c})$, one obtains
\begin{equation}
\label{5.25} \delta_{1} {\Pi}_{ij} ({\bm k},s) = -i
\widetilde{\eta}(s) \left[ k_{i} \delta \omega_{j} ({\bm k},s) +
k_{j} \delta \omega_{i} ({\bm k},s) - \frac{2}{d}\, \delta_{ij} {\bm k} \cdot \delta
{\bm \omega} ({\bm k},s) \right].
\end{equation}
This is the same as the Navier-Stokes expression for the pressure
tensor with the only difference that the average macroscopic
velocity field is substituted by the fluctuating one. It involves
the (time-dependent) dimensionless shear viscosity
$\widetilde{\eta}(s)$ defined by
\begin{equation}
\label{5.26} \widetilde{\eta}(s) = \frac{1}{d^{2}+d-2} \sum_{i}
\sum_{j} \int_{0}^{s} d s^{\prime} \int d {\bm c}\, \Delta_{ij}({\bm
c}) e^{s^{\prime} \left[ \Lambda ({\bm c})-\frac{\zeta_{0}}{2} \right]}
\xi_{2,i}({\bm c}) c_{j}.
\end{equation}
This expression is equivalent to the one obtained from the nonlinear
Boltzmann equation for inelastic hard spheres or disks by the
Chapman-Enskog method \cite{BDKyS98,ByC01} and also to the
Green-Kubo formulas derived in Ref.\ \cite{DyB02}. Let us remark that
the results obtained here apply in the limit of large $s$. It is in
this limit that hydrodynamics in the usual sense is expected to
apply. If this is true, the shear viscosity in Eq.\ (\ref{5.26})
becomes independent of $s$. Although there is no mathematical
proof of this ``ageing to hydrodynamics'' for granular gases up to
now, numerical evaluation of the right hand side of Eq.\ (\ref{5.26})
by using the direct Monte Carlo simulation method has shown the
existence of such a limit value \cite{ByR04}. Moreover, the
simulation results for the shear viscosity $\widetilde{\eta}$ are in
good agreement with the expression obtained by evaluating the
Chapman-Enskog result in the first Sonine approximation
\cite{BDKyS98,ByC01},
\begin{equation}
\label{5a.26}\widetilde{\eta}(\alpha)= \left[8
\widetilde{\nu}(\alpha) -\zeta_{0}(\alpha) \right]^{-1},
\end{equation}
\begin{equation}
\label{5b.26} \widetilde{\nu}(\alpha) = \frac{\pi^{\frac{d-1}{2}}}{2 \sqrt{2} d(d+2) \Gamma \left(
d/2 \right)} (3- 3 \alpha + 2d) (1+\alpha) \left[ 1 - \frac{a_{2} (\alpha)}{32}
\right].
\end{equation}
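For concreteness, Eqs.\ (\ref{5a.26}) and (\ref{5b.26}) are straightforward to evaluate numerically. The following sketch is illustrative only: the coefficients $a_{2}(\alpha)$ and $\zeta_{0}(\alpha)$ are taken as inputs, since their explicit first Sonine expressions are given in \cite{BDKyS98,ByC01} and are not reproduced here.

```python
from math import pi, sqrt, gamma

def nu_tilde(alpha, a2, d=3):
    """Dimensionless collision frequency of Eq. (5b.26)."""
    prefac = pi ** ((d - 1) / 2) / (2 * sqrt(2) * d * (d + 2) * gamma(d / 2))
    return prefac * (3 - 3 * alpha + 2 * d) * (1 + alpha) * (1 - a2 / 32)

def eta_tilde(alpha, a2, zeta0, d=3):
    """Dimensionless shear viscosity of Eq. (5a.26)."""
    return 1.0 / (8 * nu_tilde(alpha, a2, d) - zeta0)
```

The elastic limit corresponds to $a_{2} = \zeta_{0} = 0$; the formulas show that the cooling rate enhances $\widetilde{\eta}$ by reducing the denominator of Eq.\ (\ref{5a.26}).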
When Eq.\ (\ref{5.25}) is substituted into Eq.\ (\ref{5.20}) and the
resulting expression is used in Eq.\
(\ref{5.10}), a Langevin-like equation is obtained for the velocity
field,
\begin{equation}
\label{5.27} \left( \frac{\partial}{\partial s} -
\frac{\zeta_{0}}{2} \right) \delta {\bm \omega} ({\bm k},s) +
\frac{i}{2}\, \delta \epsilon ({\bm k},s) {\bm k} +
\widetilde{\eta} \left[ k^{2} \delta {\bm \omega} ({\bm k},s)
+ \frac{d-2}{d}\, {\bm k} \cdot \delta {\bm \omega} ({\bm
k},s) {\bm k} \right] ={\bm W}({\bm k},s),
\end{equation}
with the noise term given by
\begin{equation}
\label{5.28} {\bm W}({\bm k},s)\equiv -i {\bm k} \cdot {\sf R} ({\bm
k},s),
\end{equation}
where the term ${\sf R}({\bm k},s)$ is defined in Eq.\ (\ref{5.22}). It
follows from Eq. (\ref{3.12}) that
\begin{equation}
\label{5.29} \langle{\bm W}({\bm k},s)\rangle_{\text{H}}=0.
\end{equation}
A formal expression for the correlation function of ${\bm W}$ is
obtained directly by using Eq.\ (\ref{3.20}). Its conversion into an
explicit one, valid to Navier-Stokes order and, therefore,
consistent with the left hand side of Eq.\ (\ref{5.27}), will be
carried out in the next section for the particular case of the
transverse component of the velocity field.
\section{The noise term in the equation for the transverse velocity field}
\label{s6} The transverse part of the fluctuating velocity field, $\delta {\bm
\omega}_{\perp} ({\bm k},s)$, is defined by
\begin{equation}
\label{6.1} \delta {\bm \omega}_{\perp} ({\bm k},s) \equiv \delta
{\bm \omega}({\bm k}, s)- \delta {\bm \omega}({\bm k},s) \cdot
\frac{{\bm k}}{k^{2}}\, {\bm k}.
\end{equation}
Its evolution equation can be written down directly from Eq.\
(\ref{5.27}),
\begin{equation}
\label{6.2} \left( \frac{\partial}{\partial s} -\frac{\zeta_{0}}{2}
+ \widetilde{\eta} k^{2} \right) \delta {\bm \omega}_{\perp} ({\bm
k},s) = {\bm W}_{\perp}({\bm k},s),
\end{equation}
where
\begin{equation}
\label{6.3} {\bm W}_{\perp}({\bm k},s)= - i {\bm k} \left( {\sf I}
-\frac{{\bm k}{\bm k}}{k^{2}} \right) :{\sf R} ({\bm k},s).
\end{equation}
Substitution of the expression of ${\sf R}$ given in Eq.\
(\ref{5.22}) yields
\begin{equation}
\label{6.4} {\bm W}_{\perp}({\bm k},s) = -i \int_{0}^{s} d
s^{\prime} \int d{\bm c}\, {\bm k} \cdot {\bm c} \left( {\bm c} -
\frac{{\bm c} \cdot {\bm k}}{k^{2}}\, {\bm k} \right)
\mathcal{U} ({\bm k},{\bm c},s^{\prime})
\mathcal{P}_{\perp} \widetilde{S} ({\bm k},{\bm c},s-s^{\prime}).
\end{equation}
By using this expression, it is shown in appendix \ref{ap2} that for
two components, $W_{{\perp},i} ({\bm k},s)$ and
$W_{{\perp},j} ({\bm k},s)$, of the noise of the transverse velocity field,
to lowest order in $k$ one has
\begin{equation}
\label{6.5} \langle W_{{\perp},i} ({\bm k},s)W_{{\perp},j} ({\bm
k}^{\prime},s^{\prime})\rangle_{\text{H}} = \delta_{i,j}
\delta_{{\bm k},-{\bm k}^{\prime}} \frac{\widetilde{V}^{2}}{N} k^{2}
G(|s-s^{\prime}|),
\end{equation}
for $ s>s^{\prime} \gg1$. Here $\widetilde{V}$ is the volume of the
system in the length scale $l$, i.e. $\widetilde{V} \equiv N/n
\lambda^{d}$, and
\begin{equation}
\label{6.6} G(|s|) = \int d{\bm c}\int d{\bm c}^{\prime}
\Delta_{xy}({\bm c}) \Delta_{xy} ({\bm c}^{\prime}) \widetilde{\psi}_{\text{HCS}}
({\bm c},s ; {\bm c}^{\prime}),
\end{equation}
with
\begin{equation}
\label{6.7} \widetilde{\psi}_{\text{HCS}} ({\bm c},s ; {\bm
c}^{\prime}) = \int d{\bm l}\, \widetilde{h} ({\bm l},{\bm c},s;
{\bm c}^{\prime}).
\end{equation}
The distribution $\widetilde{h} ({\bm l},{\bm c},s; {\bm
c}^{\prime})$ is defined in Eq.\ (\ref{2.26}). Then, by integration
of Eq.\ (\ref{2.27}) it follows that $ \widetilde{\psi}_{\text{HCS}}
({\bm c},s ; {\bm c}^{\prime}) $ obeys the equation
\begin{equation}
\label{6.8} \left[ \frac{\partial}{\partial s} - \Lambda ({\bm c})
\right] \widetilde{\psi}_{\text{HCS}} ({\bm c},s ; {\bm c}^{\prime}) =0,
\end{equation}
valid for $s>0$. In principle, this equation must be solved with the
initial condition $ \widetilde{\psi}_{\text{HCS}} ({\bm c}; {\bm
c}^{\prime})$, given by
\begin{equation}
\label{6.9} \widetilde{\psi}_{\text{HCS}} ({\bm c}; {\bm
c}^{\prime}) = \int d{\bm l}\, \widetilde{g}({\bm l},{\bm c}, {\bm
c}^{\prime})+ \delta ({\bm c}-{\bm c}^{\prime}) \chi({c}),
\end{equation}
obtained by integration of Eq.\ (\ref{2.28}). Nevertheless, it has
been shown in Ref.\ \cite{ByR04} by particle simulation that
contributions from the correlations in the HCS of dilute granular
gases are negligible, at least for not too strong inelasticity, $\alpha \agt 0.5$. Therefore, the
term involving $\widetilde{g}$ in Eq.\ (\ref{6.9}) has been
neglected in the results reported below.
The solution of Eq. (\ref{6.2}) in the limit of large $s$ can be
written as
\begin{equation}
\label{6.10} \delta \omega_{\perp,i} ({\bm k},s)= \int_{-\infty}^{s}
ds_{1}\, e^{\lambda_{\perp}(k)(s-s_{1})} W_{\perp,i} ({\bm k},s_{1}),
\end{equation}
where
\begin{equation}
\label{6.11} \lambda_{\perp}(k) \equiv
\frac{\zeta_{0}}{2}-\widetilde{\eta} k^{2}.
\end{equation}
Next, using Eq.\ (\ref{6.5}), one obtains
\begin{equation}
\label{6.12} \langle\delta \omega_{\perp,i}({\bm k},s) \delta
\omega_{\perp,i} ({\bm k}^{\prime},s)\rangle_{\text{H}} = -
\frac{\widetilde{V}^{2}}{2N}\, k^{2}\delta_{{\bm k},-{\bm
k}^{\prime}} \frac{\widetilde{\eta}^{\prime}}{ \lambda_{\perp}(k)},
\end{equation}
with the coefficient $\widetilde{\eta}^{\prime}$ defined as
\begin{equation}
\label{6.13} \widetilde{\eta}^{\prime} = 2 \int_{0}^{\infty} ds\,
G(|s|) e^{\frac{\zeta_{0} s}{2}}.
\end{equation}
This coefficient can be computed in the first Sonine approximation.
Some details of the calculations are given in appendix \ref{ap3}.
The result reads
\begin{equation}
\label{6.14} \widetilde{\eta}^{\prime} = \frac{1+a_{2}(\alpha)}{8
\widetilde{\nu}(\alpha) - 3 \zeta_{0}(\alpha)},
\end{equation}
where $\widetilde{\nu}(\alpha)$ was defined in Eq.\ (\ref{5b.26}).
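Combining Eq.\ (\ref{6.14}) with Eq.\ (\ref{5a.26}) gives a simple closed expression for the ratio of the two coefficients. A minimal sketch, again with $a_{2}$, $\widetilde{\nu}$, and $\zeta_{0}$ supplied as inputs rather than computed from their first Sonine expressions:

```python
def eta_ratio(a2, nu, zeta0):
    """Ratio eta'/eta from Eqs. (5a.26) and (6.14):
    eta  = 1 / (8*nu - zeta0),
    eta' = (1 + a2) / (8*nu - 3*zeta0)."""
    return (1.0 + a2) * (8.0 * nu - zeta0) / (8.0 * nu - 3.0 * zeta0)

# Elastic limit (a2 = 0, zeta0 = 0): the ratio reduces to unity.
print(eta_ratio(0.0, 1.0, 0.0))  # prints 1.0
```

For $\zeta_{0} > 0$ the ratio exceeds one, making explicit that $\widetilde{\eta}^{\prime}$ is a genuinely different coefficient from the shear viscosity.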
The two-time self-correlation function of the transverse velocity
field can also be computed from Eqs.\ (\ref{6.5}) and (\ref{6.10}).
Again, details of the calculations are given in appendix \ref{ap3}.
For $s-s^{\prime}
>0$, it is obtained
\begin{equation}
\label{6.16} \langle\delta \omega_{\perp,i}({\bm k},s) \delta
\omega_{\perp,j}({\bm k}^{\prime},s^{\prime}) \rangle_{H} \simeq -
\frac{\widetilde{V}^{2}}{2 N}\, \delta_{i,j} \delta_{{\bm k},-{\bm
k}^{\prime}} \frac{ \left( \widetilde{\eta}^{\prime} +
\widetilde{\eta}_{1} \right)k^{2}}{ \lambda_{\perp}(k)}\,
e^{\lambda_{\perp}(k) (s-s^{\prime})},
\end{equation}
where
\begin{equation}
\label{6.17} \widetilde{\eta}_{1} = \int_{0}^{\infty} ds\, G(|s|)
\left[ e^{- \lambda_{\perp}(k)s} - e^{\lambda_{\perp}(k) s} \right].
It is worth stressing that this result only holds after a transient
interval in $s-s^{\prime}$, and for this reason it does not reduce to
Eq.\ (\ref{6.12}) for $s=s^{\prime}$. On the other hand, the
coefficient $\widetilde{\eta}_{1}$ can be expected to be small, since
the second factor in the integrand remains small over the range in
which the first one decays.
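This smallness can be made explicit with a purely illustrative model kernel. Assuming (hypothetically) an exponential decay $G(s) = G(0)\, e^{-\gamma s}$ with $\gamma > |\lambda_{\perp}(k)|$, the integral in Eq.\ (\ref{6.17}) gives
\begin{equation}
\widetilde{\eta}_{1} = G(0) \left( \frac{1}{\gamma + \lambda_{\perp}}
- \frac{1}{\gamma - \lambda_{\perp}} \right) = - \frac{2 G(0)
\lambda_{\perp}}{\gamma^{2} - \lambda_{\perp}^{2}},
\end{equation}
which is smaller than the natural scale $G(0)/\gamma$ by a factor of
order $\lambda_{\perp}/\gamma$, i.e., small whenever the kernel decays
fast on the hydrodynamic time scale.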
To check the theory developed along the paper, Eqs.\ (\ref{6.16}) and
(\ref{6.12}) have been used to measure the shear viscosity
$\widetilde{\eta}$ and the coefficient $\widetilde{\eta}^{\prime}$ by
means of molecular dynamics simulations of dilute granular gases. The
results turn out to be in qualitative agreement with the theory, in
the sense that the scaled one-time correlation function is independent
of the variable $s$, and the two-time correlation function only
depends on the difference $s-s^{\prime}$ and decays exponentially
after a short transient period. More details of the simulation
method employed and the analysis of the data are given in
\cite{BGyM08}. The comparison between the values of
$\widetilde{\eta}$ and $\widetilde{\eta}^{\prime}$ obtained from
the simulation results and the theoretical predictions given by
Eqs.\ (\ref{5a.26}) and (\ref{6.14}), respectively, is shown in
Fig.\ \ref{fig1}. Of course, all the simulation data have been
obtained with low density systems in the HCS. It can be observed
that the agreement is very good over quite a wide range of values of
the restitution coefficient $\alpha$, thus providing very strong
support for both the theory developed here and the specific
algorithm used to compute the coefficients.
\begin{figure}
\includegraphics[angle=0,width=.6\textwidth]{bmyg08f1.eps}
\caption{(Color online) The shear viscosity $\widetilde{\eta}$ and
the new coefficient $\widetilde{\eta}^{\prime}$ determining the
transverse velocity fluctuations in granular gases in the HCS. The
solid and dashed lines are the theoretical predictions for
$\widetilde{\eta}$ and $\widetilde{\eta}^{\prime}$ given by Eqs.
(\protect{\ref{5a.26}}) and (\protect{\ref{6.14}}), respectively,
normalized by the elastic value of the shear viscosity,
$\widetilde{\eta}_e$. The circles
($\widetilde{\eta}/\widetilde{\eta}_e$) and squares
($\widetilde{\eta}^{\prime}/\widetilde{\eta}_e$) are molecular
dynamics simulation results. The dotted line is the result obtained
in the white noise approximation, Eq.\ (\protect{\ref{7.3}}).
\label{fig1}}
\end{figure}
\section{Summary and discussion}
\label{s7} The primary objective here has been to investigate
hydrodynamic fluctuations in the homogeneous cooling state of dilute
granular gases, modeled as an ensemble of inelastic hard particles.
From this point of view, the fluctuating balance equations
(\ref{4.7})-(\ref{4.9}), together with the fluctuating Boltzmann
equation (\ref{3.10}) provide a solid starting point. The remaining
task is to construct explicit expressions for the fluctuating flux
and the cooling rate in terms of the (fluctuating) hydrodynamic
fields by using, for instance, the Chapman-Enskog procedure. This
part of the analysis turns out to be technically rather complex, and
has been limited here to the particular case of the transverse flow
field.
The structure of the fluctuating balance equations is similar to
that for elastic, molecular systems, with two main differences, both
in the equation for the energy, Eq.\ (\ref{4.9}). The equation
contains a term, $\delta \zeta_{0}$, associated with the fluctuations
of the cooling rate and also an intrinsic noise term
$\widetilde{S}_{\epsilon}$. Both give contributions even to zeroth
order in the gradients and therefore play a relevant role in
describing the fluctuations of global properties of the system
\cite{BGMyR04}.
With regards to the fluctuating transverse velocity field, it has
been found that it can be described by a Langevin equation, but
exhibiting two crucial differences as compared with the elastic
case. The form of the fluctuation-dissipation relation changes both
qualitatively and quantitatively. The second moment of the noise
term is not determined by the shear viscosity. In addition, the
noise is not white, i.e. it presents memory effects. Both aspects
have been confirmed by the results obtained by molecular dynamics
simulations.
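As a toy illustration of how exponentially correlated noise shifts a stationary second moment away from the white-noise value (this is a generic linear Langevin model, not the equations of the present paper), take $\dot v=-\gamma v+\xi$ with $\left\langle \xi(u)\xi(u')\right\rangle=C(u-u')=(D/\tau)e^{-|u-u'|/\tau}$; then $\left\langle v^{2}\right\rangle=\int_{0}^{\infty}\!\int_{0}^{\infty}e^{-\gamma(u+u')}C(u-u')\,du\,du'=D/[\gamma(1+\gamma\tau)]$, as against $D/\gamma$ for white noise. The sketch below (Python, with arbitrary parameter values) checks the double integral against the closed form.

```python
import math

# Toy model (illustrative, not the paper's equations): v' = -gamma*v + xi,
# with exponentially correlated noise C(u) = (D/tau) * exp(-|u|/tau).
gamma, D, tau = 1.0, 1.0, 0.5
C = lambda u: (D / tau) * math.exp(-abs(u) / tau)

# Stationary second moment by double quadrature (midpoint rule):
# <v^2> = Int_0^inf Int_0^inf e^{-gamma(u+u')} C(u-u') du du'
h, U = 0.02, 12.0
n = int(U / h)
var = sum(math.exp(-gamma * ((i + 0.5) * h + (j + 0.5) * h))
          * C((i - j) * h) * h * h
          for i in range(n) for j in range(n))

exact = D / (gamma * (1.0 + gamma * tau))   # colored-noise closed form
white = D / gamma                           # white-noise (tau -> 0) value

assert abs(var - exact) < 1e-2 * exact
assert abs(exact - white) > 0.2 * white     # memory effects matter here
```

The point is only qualitative: a finite correlation time $\tau$ displaces the second moment from the white-noise prediction, in line with what is found here for the transverse velocity fluctuations.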
One may wonder to what extent the memory effects mentioned
above are relevant. Suppose the hypothesis of white noise had
been made and Eq. (\ref{6.5}) were replaced by
\begin{equation}
\label{7.1} \langle W_{{\perp},i} ({\bm k},s)W_{{\perp},j} ({\bm
k}^{\prime},s^{\prime})\rangle_{\text{H}} = \delta_{i,j}
\delta_{{\bm k},-{\bm k}^{\prime}} \frac{\widetilde{V}^{2}}{N} k^{2}
\widetilde{\eta}^{\prime \prime} \delta (s-s^{\prime}),
\end{equation}
with
\begin{equation}
\label{7.2} \widetilde{\eta}^{\prime \prime} = 2 \int_{0}^{\infty}
ds\, G(|s|).
\end{equation}
By using the same method as outlined in appendix \ref{ap3} it is
found that
\begin{equation}
\label{7.3} \widetilde{\eta}^{\prime \prime} = \frac{1
+a_{2}(\alpha)}{ 8 \widetilde{\nu}(\alpha)-2 \zeta_{0}(\alpha)}.
\end{equation}
This coefficient is also plotted in Fig.\ \ref{fig1}, and it is seen
to clearly underestimate the amplitude of the second moment of the
noise measured in the simulation. It is worth stressing that the
violation of the elastic fluctuation-dissipation relations is
already significant for values of the restitution coefficient
$\alpha$ of the order of $0.95$.
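Since the simulations indicate an exponential decay of the correlation function, the closed form (\ref{7.3}) can be checked directly against the definition (\ref{7.2}). The sketch below (Python) uses illustrative values of $a_{2}$, $\widetilde{\nu}$ and $\zeta_{0}$ and an assumed kernel $G(s)=\frac{1}{4}(1+a_{2})e^{-(4\widetilde{\nu}-\zeta_{0})s}$, chosen only for consistency with (\ref{7.3}), and verifies that $2\int_{0}^{\infty}G(s)\,ds$ reproduces the closed form.

```python
import math

# Illustrative dimensionless values; the actual alpha-dependent
# expressions for a2, nu and zeta0 are those entering Eq. (7.3).
a2, nu, zeta0 = 0.05, 0.8, 0.3

lam = 4.0 * nu - zeta0                       # assumed decay rate of G
def G(s):                                    # assumed exponential kernel
    return 0.25 * (1.0 + a2) * math.exp(-lam * s)

# Eq. (7.2): eta'' = 2 * Int_0^inf G(s) ds, by a midpoint Riemann sum
ds = 1e-4
smax = 20.0 / lam
eta_pp_quad = 2.0 * sum(G((i + 0.5) * ds) * ds
                        for i in range(int(smax / ds)))

# Eq. (7.3): closed-form coefficient
eta_pp_closed = (1.0 + a2) / (8.0 * nu - 2.0 * zeta0)

assert abs(eta_pp_quad - eta_pp_closed) < 1e-4
```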
\section{Acknowledgements}
This research was supported by the Ministerio de Educaci\'{o}n y
Ciencia (Spain) through Grant No. FIS2008-01339 (partially financed
by FEDER funds). M.I.G.S. acknowledges financial support from Becas de
la Fundaci{\'o}n La Caixa and the French Government.
\section{Introduction}
Let $U_j$, $j=1,2$, be random, scalar wave fields of
wavenumbers $k_j$, $j=1,2$.
The two-frequency mutual coherence function, the cross-spectral
version of the mutual coherence function, is defined by
\beq
\label{mut}
\Gamma_{12}(\mathbf x,\mathbf y)=
\left\langle U_1(\frac{\mathbf x}{k_1}+\frac{\mathbf y}{2k_1})U^*_2(\frac{\mathbf x}{k_2}
-\frac{\mathbf y}{2k_2})\right\rangle,
\eeq
where $\left\langle\cdot\right\rangle$ stands for ensemble averaging.
It is the central quantity of optical coherence theory,
from which the two-space, two-time correlation function can be
obtained via a Fourier transform in frequency,
and it therefore plays a
fundamental role in analyzing the propagation of
random pulses \cite{BW, BM, Ish, MW, SF}. The motivation for the scaling factors in (\ref{mut})
will be given below, cf. (\ref{0.11}).
In this paper, we set out to analyze the two-frequency mutual coherence as a function of the spatial displacement and
frequency difference for classical waves in
multiply scattering media. This problem has been extensively
studied in the physics literature (see \cite{BF, Ish, SHG, RN} and references therein). Here we derive from the multiscale expansion (MSE)
the two-frequency version of the radiative transfer equation
which is then used to estimate qualitatively the three physical
parameters: the spatial and spatial frequency spreads,
and the coherence bandwidth, also known as the Thouless
frequency in condensed matter physics. Moreover, we show that the boundary layer behavior of the two-frequency
radiative transfer (2f-RT) equation is analytically
solvable in geometrical optics. The closed form solution (\ref{asym})
provides detailed information
about the two-frequency mutual coherence beyond the current physical
picture \cite{Sha, SHG, RN} (see the discussion about (\ref{current})).
To this end, we introduce the two-frequency Wigner
distribution whose ensemble average is equivalent to
the two-frequency mutual coherence
and
is a natural extension of
the standard Wigner distribution widely
used in optics \cite{Dra, josaa}. A different version
of two-frequency Wigner distribution for parabolic waves
was
introduced earlier \cite{2f-whn}
and with it the corresponding radiative transfer equation has been
derived with full mathematical rigor \cite{2f-crp,2f-rt-physa}.
In the case of anisotropic media fluctuating slowly in
the longitudinal direction, the 2f-RT equation developed here reduces to that of
paraxial waves in similar media, which
lends support to the validity of the MSE.
The other regime where the two frequency radiative transfer
equation has been obtained with full mathematical rigor is
geometrical optics \cite{2f-grt}.
The main difference between the 2f-RT and the standard theory
is that the former retains the wave nature of the process
and is not just about energy transport. Hence the governing equation cannot be derived simply from
the energy conservation law.
\section{Two-frequency Wigner distribution}
Let $U_j, j=1,2$ be governed by the reduced wave equation
\beq
\label{helm}
\Delta U_j(\mathbf r)+k_j^2 \big(\nu_j+ V_j(\mathbf r)\big)U_j(\mathbf r)=f_j(\mathbf r), \quad\mathbf r\in \IR^3, \quad j=1,2
\eeq
where $\nu_j$ and $V_j$ are respectively the mean
and fluctuation of the refractive index associated
with the wavenumber $k_j$ and
are in general complex-valued. The source terms $f_j$ may result
from the initial data or the external sources.
Here and below the vacuum phase speed is set to be
unity. To solve (\ref{helm}) one also needs
a boundary condition, which is assumed to be
a vanishing field in the far field.
We define the two-frequency Wigner distribution as
\beq
\label{0.11}
W(\mathbf x,\mathbf p)=\frac{1}{(2\pi)^3}\int
e^{-i\mathbf p\cdot\mathbf y}
U_1 (\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1}){U^*_2(\frac{\mathbf x}{k_2}
-\frac{\mathbf y}{2k_2})}d\mathbf y.
\eeq
In view of the definition, we see
that both $\mathbf x$ and $\mathbf p$ are dimensionless.
Here the choice of the scaling factors
is crucial; namely, the spatial dependence of the wave field should be measured w.r.t. the probing wavelength. The benefit
is that this choice leads to a closed-form equation for
$W$.
It is easy to see that the ensemble average $\left\langle W\right\rangle$ is just
the (partial) Fourier transform of the mutual coherence function
(\ref{mut}).
The two-frequency Wigner distribution defined
here has a different scaling factor from the one introduced
for the parabolic waves \cite{2f-whn}.
The purpose of introducing the two-frequency Wigner distribution is to develop a two-frequency
theory in analogy to the well studied standard theory of
radiative transfer. Although the definition (\ref{0.11}) requires
the domain to be $\IR^3$, the governing radiative transfer equation,
once obtained, can be (inverse) Fourier transformed back to get the governing equation
for the two-point function $U_1(\mathbf r_1)U_2^*(\mathbf r_2)$ or $\Gamma_{12}$ as their boundary conditions
are usually easier to describe (cf. eq. (\ref{mean-eq2})).
The Wigner distribution
has the following easy-to-check properties:
\beq
\int |W|^2(\mathbf x,\mathbf p)d\mathbf x d\mathbf p&=&
\left(\frac{\sqrt{{k}_1{k}_2}}{2\pi}\right)^{3}
\int |U_1|^2(\mathbf x)d\mathbf x \int |U_2|^2(\mathbf x)d\mathbf x\nn\\
\label{2.2.2}
\int W(\mathbf x,\mathbf p)e^{i\mathbf p\cdot\mathbf y}d\mathbf p&=&
U_1
(\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1})
U_2^{*}(\frac{\mathbf x}{
k_2}
-\frac{\mathbf y}{2k_2})\\
\int W(\mathbf x,\mathbf p)e^{-i\mathbf x\cdot
\mathbf q}d\mathbf x&=&\left({\pi^2k_1k_2}\right)^{3}
\widehat U_1(\frac{k_1\mathbf p}{4}
+\frac{k_1\mathbf q}{2})
{\widehat U}^{*}_2(\frac{k_2\mathbf p}{4}
-\frac{k_2\mathbf q}{2}),
\eeq
where $\widehat{\cdot}$ stands for the Fourier transform,
and hence contains all the information
in the two-point two-frequency function. In particular,
\beqn
\int \mathbf p W(\mathbf x,\mathbf p)d\mathbf p&=&-i\Big[\frac{1}{2k_1}
\nabla U_1(\frac{\mathbf x}{k_1}) U_2^*(\frac{\mathbf x}{k_2})-\frac{1}{2k_2}
U_1(\frac{\mathbf x}{k_1}) \nabla U_2^*(\frac{\mathbf x}{k_2})
\Big]
\eeqn
which, in the case of $k_1=k_2$, is proportional to
the energy flux density.
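The inversion property (\ref{2.2.2}) can be verified numerically. The sketch below (Python) does so for a one-dimensional analogue of definition (\ref{0.11}) with hypothetical smooth test fields: the $\mathbf y$-integral is discretized on a symmetric grid, and the discrete inverse transform recovers the two-point product up to round-off.

```python
import cmath, math

# 1D analogue of definition (0.11), evaluated at a fixed x (illustrative):
# W(x,p) = (1/2pi) Int e^{-i p y} U1(x/k1 + y/(2k1)) U2*(x/k2 - y/(2k2)) dy
k1, k2 = 1.0, 1.2
U1 = lambda r: cmath.exp(-r * r + 1j * k1 * r)   # hypothetical test fields
U2 = lambda r: cmath.exp(-0.5 * r * r)

n, L = 64, 30.0
dy, dp = L / n, 2 * math.pi / L
ys = [(j - n // 2) * dy for j in range(n)]
ps = [(m - n // 2) * dp for m in range(n)]
x = 0.3
g = [U1(x / k1 + y / (2 * k1)) * U2(x / k2 - y / (2 * k2)).conjugate()
     for y in ys]

# Discrete transform: W(x,p_m) = (dy/2pi) * sum_j e^{-i p_m y_j} g(y_j)
W = [dy / (2 * math.pi) * sum(cmath.exp(-1j * p * y) * gj
                              for y, gj in zip(ys, g)) for p in ps]

# Inversion (2.2.2): sum_m W(x,p_m) e^{i p_m y_j} dp recovers g(y_j)
g_rec = [dp * sum(cmath.exp(1j * p * y) * Wm for p, Wm in zip(ps, W))
         for y in ys]
err = max(abs(a - b) for a, b in zip(g_rec, g))
assert err < 1e-9
```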
We now derive the equation for the two-frequency
Wigner distribution.
After taking the derivative $\mathbf p\cdot\nabla$ and
some calculation we have
\beq
\nn
\mathbf p\cdot\nabla W&=&\frac{i}{2(2\pi)^3}
\int e^{-i\mathbf p\cdot\mathbf y} U_1 (\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1}){U^*_2(\frac{\mathbf x}{k_2}
-\frac{\mathbf y}{2k_2})}V_1(\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1})d\mathbf y\\
&&
-\frac{i}{2(2\pi)^3}
\int e^{-i\mathbf p\cdot\mathbf y} U_1 (\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1}){U^*_2(\frac{\mathbf x}{k_2}
-\frac{\mathbf y}{2k_2})}V_2^*(\frac{\mathbf x}{k_2}-
\frac{\mathbf y}{2k_2})d\mathbf y\nn\\
&&+\frac{i}{2}(\nu_1-\nu^*_2) W+F \label{raw}
\eeq
where the function $F$ depends linearly
on $U_j$ and $f_j$:
\beq
F&=&-\frac{i}{2(2\pi)^3}
\int e^{-i\mathbf p\cdot\mathbf y} f_1 (\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1}){U^*_2(\frac{\mathbf x}{k_2}
-\frac{\mathbf y}{2k_2})}d\mathbf y\nn\\
&&
+\frac{i}{2(2\pi)^3}
\int e^{-i\mathbf p\cdot\mathbf y} U_1 (\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1}){f^*_2(\frac{\mathbf x}{k_2}
-\frac{\mathbf y}{2k_2})}d\mathbf y.\label{fcn}
\eeq
Substituting the spectral representation of $V_j$
\beq
\label{spec}
V_j(\mathbf x)=\int e^{i\mathbf q\cdot\mathbf x} \hat V_j(d\mathbf q)
\eeq
in the expression and using the definition of $W$ we then
obtain the exact equation
\beq
\label{WME}
\lefteqn{\mathbf p\cdot\nabla W-\frac{i}{2}(\nu_1-\nu^*_2) W-F}\\
&=&
\frac{i}{2}\int \hat V_1(d\mathbf q)
e^{i\mathbf q\cdot\mathbf x/k_1} W(\mathbf x,\mathbf p-\frac{\mathbf q}{2k_1})-
\frac{i}{2}\int \hat V^*_2(d\mathbf q)
e^{-i\mathbf q\cdot\mathbf x/k_2} W(\mathbf x,\mathbf p-\frac{\mathbf q}{2k_2}).\nn
\eeq
Here and below $\hat V_2^*$ is the complex-conjugate
of the Fourier spectral measure $\hat V_2$. The full derivation of (\ref{WME}) is given in
Appendix A.
Let us pause to compare the classical wave with
the quantum wave function in the context of two-frequency
formulation.
The quantum wave functions $\Psi_j$ at two
different frequencies $\omega_1,\omega_2$ satisfy
the stationary Schr\"odinger equation
\beq
\label{sch2}
\frac{\hbar^2}{2}\Delta \Psi_j+ \big(\nu_j+ V_j(\mathbf x)\big)\Psi_j&=&
-\omega_j \hbar \Psi_j+f_j,\quad j=1,2,
\eeq
where $\nu_j+V_j$ are hypothetical, energy-dependent
real-valued potentials. Here the source terms $f_j$ equal the initial data $f$ of the time dependent problem. Usually in the quantum mechanical
context, the potential function does not explicitly depend on
the energy level (i.e., it is dispersionless).
The natural definition of the two-frequency Wigner distribution
for the quantum wave functions is
\beq
W(\mathbf x,\mathbf p)=\frac{1}{(2\pi)^3}
\int e^{-i\mathbf p\cdot\mathbf y} \Psi_1(\mathbf x+\frac{\hbar\mathbf y}{2})
\Psi^*_2(\mathbf x-\frac{\hbar\mathbf y}{2})d\mathbf y
\eeq
which satisfies the Wigner-Moyal equation
\beq
\label{WM2}
\lefteqn{\mathbf p\cdot\nabla W+i(\omega_2-\omega_1)W+\frac{i}{\hbar}(\nu_2^*-\nu_1)W }\\
&=&\frac{i}{\hbar}\int \hat V_1(d\mathbf q)
e^{i\mathbf q\cdot\mathbf x} W(\mathbf x,\mathbf p-\frac{\hbar\mathbf q}{2})-
\frac{i}{\hbar}\int \hat V^*_2(d\mathbf q)
e^{-i\mathbf q\cdot\mathbf x} W(\mathbf x,\mathbf p-\frac{\hbar\mathbf q}{2})+F\nn
\eeq
where $F$ has a similar expression to (\ref{fcn}).
The main difference between the quantum and classical waves
in the Wigner formulation is that the derivation of a closed-form
equation does not require
rescaling each energy component w.r.t. its de Broglie
wavelength. The implication in radiative transfer will be further discussed
(see the remark following eq. (\ref{rt-sch})).
\section{Two-frequency radiative transfer scaling}
We assume that $V_j(\mathbf x), j=1,2$ are {\em real-valued}, centered, random stationary
(i.e.
statistically homogeneous) {\em ergodic} fields
admitting the spectral representation
(\ref{spec}) with the spectral measures $\hat{V}_j(d\mathbf p), j=1,2$
such that
\[
\left\langle \hat{V}_j(d\mathbf p)\hat{V}_j^*(d\mathbf q)\right\rangle
=\delta(\mathbf p-\mathbf q)\Phi_j(\mathbf p)d\mathbf p d\mathbf q
\]
\commentout{
$\hat{V}_j(d\mathbf p)=A_j(d\mathbf p)+iB_j(d\mathbf p), j=1,2,$ where
$A_j$ and $B_j$ are the real and imaginary parts, respectively.
We assume that for $j=1,2$
\beq
\left\langle A_j(d\mathbf p)A_j(d\mathbf q)\right\rangle=\left\langle B_j(d\mathbf p)B_j(d\mathbf q)\right\rangle
&=&\frac{1}{2}\delta(\mathbf p-\mathbf q)\Phi_j(\mathbf p)d\mathbf p d\mathbf q,\\
\left\langle A_j(d\mathbf p)B_j(d\mathbf q)\right\rangle&=&0, \quad\forall \mathbf p, \mathbf q
\eeq
}
where $\Phi_j$ are the (nonnegative-valued) power spectral densities of
the random fields $V_j, j=1,2$. The above $\delta$ function
is a consequence of the statistical homogeneity of
the random field $V_j$.
As $V_j, j=1,2$ are real-valued, $\hat V^*_j (d\mathbf p)=\hat V_j(-d\mathbf p)$ and hence
the power spectral densities $\Phi_j(\mathbf p)$
satisfy the symmetry property $\Phi_j(\mathbf p)=\Phi_j(-\mathbf p),\forall \mathbf p$.
We will also need the cross-frequency correlation
and we postulate the existence of
the cross-frequency spectrum $\Phi_{12}$ such
that
\[
\left\langle \hat{V}_1(d\mathbf p)\hat{V}_2^*(d\mathbf q)\right\rangle=
\delta(\mathbf p-\mathbf q)\Phi_{12}(\mathbf p)d\mathbf p d\mathbf q.
\]
Here $\Phi_{12}$ need not be real-valued.
\commentout{
\beq
\left\langle A_1(d\mathbf p)A_2(d\mathbf q)\right\rangle=\left\langle B_1(d\mathbf p)B_2(d\mathbf q)\right\rangle
&=&\frac{1}{2}\delta(\mathbf p-\mathbf q)\Phi_{12}(\mathbf p)d\mathbf p d\mathbf q,\\
\left\langle A_1(d\mathbf p)B_2(d\mathbf q)\right\rangle=\left\langle A_2(d\mathbf p)B_1(d\mathbf q)\right\rangle
&=&0, \quad\forall \mathbf p, \mathbf q
\eeq
}
An important regime of multiple scattering of classical waves
takes
place when the scale of medium fluctuation is much smaller than
the propagation distance but is comparable to or
much larger than the wavelength \cite{Ish, Mis}.
The radiative transfer regime can be characterized by the scaling limit
which replaces $\nu_j+V_j$ in eq. (\ref{helm}) with
\beq
\label{scaling}
\frac{1}{\theta^2\ep^2}\Big(\nu_j+ \sqrt{\ep}V_j(\frac{\mathbf r}{\ep})
\Big),\quad \theta>0,\quad\ep\ll 1
\eeq
where $\ep$ is
the ratio of the scale of medium fluctuation to the $O(1)$ propagation distance and $\theta$ the ratio
of the wavelength to the scale of medium fluctuation.
Hence $\theta\ep$ is the ratio of the wavelength
to the propagation distance and
the prefactor $(\theta\ep)^{-2}$ arises from
rescaling the wavenumber
$k\to k/(\ep\theta)$. This is the so-called
weak-coupling (or weak-disorder) limit in kinetic theory, which precludes
Anderson localization \cite{Spo}.
Note that the resulting
medium fluctuation
$
{\ep^{-3/2}} V_j({\mathbf r}/{\ep})
$
converges to a spatial white-noise in three dimensions.
Physically speaking, the radiative transfer scaling belongs to the diffusive wave regime under the condition of a large dimensionless conductance $g= N\ell_t/L$,
where $\ell_t$ is the transport mean free path,
$L$ is the sample size in the direction of propagation
and $N=2\pi A/\lambda^2$ is the number of transverse modes, limited by the illuminated area $A$
and the wavelength of radiation $\lambda$ \cite{BF, SHG}.
The dimensionless conductance $g$ can be expressed as $g=k\ell_t/\hbox{Fr}$
with the inverse Fresnel number $\hbox{Fr}=\lambda L/A$.
With the scaling (\ref{scaling}), $k\ell_t \sim \hbox{Fr}^{-1}\sim \theta^{-1}\ep^{-1}$
and hence
$g\sim \theta^{-2}\ep^{-2}\gg 1$ for any finite $\theta$ as $\ep\to 0$.
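The two expressions for the dimensionless conductance are algebraically identical: $g=N\ell_t/L=2\pi A\ell_t/(\lambda^{2}L)$ and $k\ell_t/\hbox{Fr}=(2\pi/\lambda)\ell_t A/(\lambda L)$ coincide. A minimal sketch (Python, with made-up optical-scale numbers) confirms this.

```python
import math

# Illustrative numbers (not from the paper): an optical-scale sample
lam = 0.5e-6          # wavelength [m]
A = 1.0e-6            # illuminated area [m^2]
Lz = 1.0e-3           # sample thickness [m]
ell_t = 50e-6         # transport mean free path [m]

N = 2 * math.pi * A / lam**2        # number of transverse modes
g_modes = N * ell_t / Lz            # g = N * ell_t / L

k = 2 * math.pi / lam
Fr = lam * Lz / A                   # inverse Fresnel number
g_fresnel = k * ell_t / Fr          # g = k * ell_t / Fr

# The two expressions agree to round-off
assert abs(g_modes - g_fresnel) <= 1e-9 * g_modes
```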
Anticipating small-scale fluctuation due to (\ref{scaling}) we modify the definition of
the two-frequency Wigner distribution in
the following way
\beqn
W(\mathbf x,\mathbf p)=\frac{1}{(2\pi)^3}\int
e^{-i\mathbf p\cdot\mathbf y}
U_1 (\frac{\mathbf x}{k_1}+
\frac{\theta\ep\mathbf y}{2k_1}){U^*_2(\frac{\mathbf x}{k_2}
-\frac{\theta\ep\mathbf y}{2k_2})}d\mathbf y
\eeqn
Eq. (\ref{WME}) now becomes
\beq
\label{wig}
{\mathbf p\cdot\nabla W}-F
&=&\frac{i}{2\ep\theta}(\nu_1-\nu^*_2) W+\frac{1}{\sqrt{\ep}}\cL W
\eeq
where the operator $\cL$ is defined by
\beq
\cL W(\mathbf x,\mathbf p)&=&
\frac{i}{2\theta}\int \hat V_1(d\mathbf q)
e^{i\frac{\mathbf q\cdot\mathbf x}{\ep k_1}} W(\mathbf x,\mathbf p-\frac{\theta\mathbf q}{2k_1})-\frac{i}{2\theta}\int \hat V^*_2(d\mathbf q)
e^{-i\frac{\mathbf q\cdot\mathbf x}{\ep k_2}} W(\mathbf x,\mathbf p-\frac{\theta\mathbf q}{2k_2}).\nn
\eeq
To capture the cross-frequency correlation
in the radiative transfer regime we also need
to restrict the frequency difference range
\beq
\label{band}
\lim_{\ep\to 0}{k}_1=\lim_{\ep\to 0}{k}_2={k},
\quad \frac{{k}_2-{k}_1}{\ep\theta k}=\beta
\eeq
where $k, \beta>0$ are independent of $\ep$ and $\theta$.
Assuming the differentiability of the mean refractive index's dependence
on the wavenumber we write
\beq
\label{band'}
\frac{\nu_2^*-\nu_1}{2\ep\theta}=\nu'
\eeq
where $\nu'$ is independent of $\ep, \theta$.
\section{Multi-scale expansion (MSE)}
\label{sec-mse}
To derive the radiative transfer equation for
the two-frequency Wigner distribution we employ
MSE \cite{BLP, RPK} which begins with introducing the fast variable
\[
{\tilde{\mathbf x}}=\mathbf x/\ep
\]
and treating ${\tilde{\mathbf x}}$ as independent of the slow variable
$\mathbf x$. Consequently the derivative $\mathbf p\cdot\nabla$ consists of two terms
\beq
\label{fast}
\mathbf p\cdot\nabla=\mathbf p\cdot\nabla_\mathbf x+\ep^{-1}\mathbf p\cdot\nabla_{{\tilde{\mathbf x}}}.
\eeq
Then MSE posits the following asymptotic expansion:
\beq
\label{mse}
W(\mathbf x,\mathbf p)=\bar W(\mathbf x,{\tilde{\mathbf x}},\mathbf p)+\sqrt{\ep} W_1(\mathbf x,{\tilde{\mathbf x}},\mathbf p)+\ep W_2(\mathbf x,{\tilde{\mathbf x}},\mathbf p)+O(\ep^{3/2}),\quad {\tilde{\mathbf x}}=\mathbf x\ep^{-1}
\eeq
whose proper sense will be explained
below.
Substituting the ansatz into eq. (\ref{wig}) and using
(\ref{fast}) we determine each term of (\ref{mse})
by equating terms of the same order of magnitude
starting with the highest order $\ep^{-1}$.
The $\ep^{-1}$-order equation has one term:
\beqn
\mathbf p\cdot\nabla_{{\tilde{\mathbf x}}} \bar W=0
\eeqn
which can be solved by setting $\bar W=\bar W(\mathbf x,\mathbf p)$.
Namely, to the leading order $W$ is independent of
the fast variable. Since the fast variable is due to
medium fluctuation, this suggests that $\bar W$ is
deterministic.
The next is the $\ep^{-1/2}$-order equation:
\beq
\label{w1}
\mathbf p\cdot\nabla_{{\tilde{\mathbf x}}} W_1=\cL\bar W.
\eeq
\commentout{
\frac{i}{2}\int \hat V_1(d\mathbf q)
{e^{i\frac{\mathbf q\cdot{\tilde{\mathbf x}}}{k_1}}}\bar W(\mathbf x,\mathbf p-\frac{\mathbf q}{2k_1})-
\frac{i}{2}\int \hat V^*_2(d\mathbf q)
e^{-i\frac{\mathbf q\cdot{\tilde{\mathbf x}}}{ k_2}} \bar W(\mathbf x,\mathbf p-\frac{\mathbf q}{2k_2})
\eeqn
}
We seek a solution that is stationary in ${\tilde{\mathbf x}}$, square-integrable in $\mathbf p$ and
has finite second moment. The solvability condition
(Fredholm alternative)
is that the right hand side, $\cL \bar W$, satisfies $\int\IE\big[
\Psi^* \cL\bar W\big]d\mathbf p=0$
for any ${\tilde{\mathbf x}}$-stationary, square-integrable field $\Psi({\tilde{\mathbf x}},\mathbf p)$ satisfying $\mathbf p\cdot\nabla_{\tilde{\mathbf x}} \Psi=0$.
The solvability condition is, however, not easy to enforce.
Alternatively we consider the regularized equation
\beq
\label{w12}
\ep W^\ep_1+\mathbf p\cdot\nabla_{\tilde{\mathbf x}} W^\ep_1=
\cL\bar W
\eeq
which is always solvable for $\ep>0$ and admits the solution
\beq
\label{w1'}
W^\ep_1(\mathbf x,{\tilde{\mathbf x}},\mathbf p)&=&
\frac{i}{2\theta}\int \hat V_1(d\mathbf q)
\frac{e^{i\frac{\mathbf q\cdot{\tilde{\mathbf x}}}{k_1}}}{\ep+i\mathbf q\cdot\mathbf p/k_1} \bar W(\mathbf x,\mathbf p-\frac{\theta\mathbf q}{2{k}_1})\\
&&-
\frac{i}{2\theta}\int \hat V^*_2(d\mathbf q)
\frac{e^{-i\frac{\mathbf q\cdot{\tilde{\mathbf x}}}{k_2}}}{\ep-i\mathbf q\cdot\mathbf p/k_2} \bar W(\mathbf x,\mathbf p-\frac{\theta\mathbf q}{2{k}_2}).\nn
\eeq
In the jargon of asymptotic analysis \cite{BLP}, $\sqrt{\ep} W_1^\ep$ is
called the first {\em corrector}.
In order to control the first corrector,
we choose $\bar W$ such that $\cL\bar W$ has zero mean.
This is a necessary condition as we seek a ${\tilde{\mathbf x}}$-stationary solution and consequently $\left\langle \mathbf p\cdot\nabla_{\tilde\mathbf x} W_1\right\rangle=\mathbf p\cdot\nabla_{{\tilde{\mathbf x}}}\left\langle W_1\right\rangle=0$. Needless to say, this condition
is weaker than the solvability condition stated above
and is satisfied for any {\em deterministic} $\bar W$ since both
$V_1$ and $V_2$ have zero mean.
Indeed, under the assumption of a deterministic $\bar W$,
the resulting equation will be much simplified, so we impose
this property on $\bar W$ from now on. The fact that in the limit $\bar W$
is deterministic can be proved rigorously in the paraxial regime \cite{2f-rt-physa}.
Finally the $O(1)$ equation is
\beq
\nn
\mathbf p\cdot\nabla_{\tilde{\mathbf x}} W_2(\mathbf x,{\tilde{\mathbf x}}, \mathbf p)&=& -\mathbf p\cdot\nabla_\mathbf x \bar W(\mathbf x,\mathbf p)
-i\nu'\bar W + F
+\frac{i}{2\theta}\int \hat V_1(d\mathbf q)
{e^{i\frac{\mathbf q\cdot{\tilde{\mathbf x}}}{k_1}}}W^\ep_1(\mathbf x, {\tilde{\mathbf x}}, \mathbf p-\frac{\theta\mathbf q}{2{k}_1})\\
&&-
\frac{i}{2\theta}\int \hat V^*_2(d\mathbf q)
e^{-i\frac{\mathbf q\cdot{\tilde{\mathbf x}}}{ k_2}} W^\ep_1(\mathbf x,{\tilde{\mathbf x}}, \mathbf p-\frac{\theta\mathbf q}{2{k}_2})\label{w2}
\eeq
which can be solved with regularization as in (\ref{w12}) and yields
the second corrector ${\ep}W^\ep_2$.
Again we impose on the right hand side of (\ref{w2}) the weaker
condition of zero mean.
Using (\ref{w1'}) in (\ref{w2}),
taking the ensemble average and passing
to the limit $\ep\to 0$
we obtain the governing equation
for $\bar W$:
\beqn
\lefteqn{\mathbf p\cdot\nabla_\mathbf x \bar W(\mathbf x,\mathbf p)+i\nu' \bar W-\left\langle F\right\rangle }\\
&=&-\frac{k_1^{3}}{2\theta^{4}}
\int d\mathbf q \Phi_1\big(\frac{k_1}{\theta}(\mathbf p-\mathbf q)\big)\pi\delta(|\mathbf p|^2
-{|\mathbf q|^2}) \bar W (\mathbf x,\mathbf p)+\frac{ik_1^{3}}{2\theta^{4}}
\int\!\!\!\!\!\! - \ d\mathbf q\frac{\Phi_1\big(\frac{k_1}{\theta}(\mathbf p-\mathbf q)\big)}{|\mathbf p|^2-|\mathbf q|^2}
\bar W (\mathbf x,\mathbf p)\\
&&\nn-\frac{k_2^{3}}{2\theta^{4}}
\int d\mathbf q \Phi_2\big(\frac{k_2}{\theta}(\mathbf p-\mathbf q)\big)\pi\delta(|\mathbf p|^2
-{|\mathbf q|^2})\bar W (\mathbf x,\mathbf p)-\frac{ik_2^{3}}{2\theta^{4}}
\int\!\!\!\!\!\! - \ d\mathbf q\frac{\Phi_2\big(\frac{k_2}{\theta}(\mathbf p-\mathbf q)\big)}{|\mathbf p|^2-|\mathbf q|^2}
\bar W (\mathbf x,\mathbf p)\\
&&+\frac{1}{4\theta^2}
\int d\mathbf q \Phi_{12}(\mathbf q)e^{i{\tilde{\mathbf x}}\cdot\mathbf q (k_1^{-1}-k^{-1}_2)}\pi\delta\big(\frac{\mathbf q}{k_2}\cdot(\mathbf p-\frac{\theta\mathbf q}{2k_1})\big) \bar W (\mathbf x, \mathbf p-\frac{\theta\mathbf q}{2k_1}-\frac{\theta\mathbf q}{2k_2})\\
&&\nn
+\frac{1}{4\theta^2}
\int d\mathbf q \Phi_{12}(\mathbf q)e^{i{\tilde{\mathbf x}}\cdot\mathbf q(k_1^{-1}-k^{-1}_2)}\pi\delta\big(\frac{\mathbf q}{k_1}\cdot(\mathbf p-\frac{\theta\mathbf q}{2k_2})\big)\bar W (\mathbf x, \mathbf p-\frac{\theta\mathbf q}{2k_1}-\frac{\theta\mathbf q}{2k_2})\\
&&
+\frac{i}{4\theta^2}\int\!\!\!\!\!\! - \ d\mathbf q\Big[\frac{1}{\frac{\mathbf q}{k_2}\cdot(\mathbf p-\frac{\theta\mathbf q}{2k_1})}
-\frac{1}{\frac{\mathbf q}{k_1}\cdot(\mathbf p-\frac{\theta\mathbf q}{2k_2})}\Big]
\Phi_{12}(\mathbf q)e^{i\tilde\mathbf x\cdot\mathbf q(k_1^{-1}-k^{-1}_2)}\bar W(\mathbf x, \mathbf p-\frac{\theta\mathbf q}{2k_1}-\frac{\theta\mathbf q}{2k_2})
\commentout{
+\frac{i}{4\theta^2}\int\!\!\!\!\!\! - \ \frac{\Phi_{12}(\mathbf q)e^{i\mathbf x\cdot\mathbf q\beta\theta}}{\frac{\mathbf q}{k_2}\cdot(\mathbf p-\frac{\theta\mathbf q}{2k_1})}
\bar W(\mathbf x, \mathbf p-\frac{\theta\mathbf q}{2k_1}-\frac{\theta\mathbf q}{2k_2})
-\frac{i}{4\theta^2}\int\!\!\!\!\!\! - \ \frac{\Phi_{12}(\mathbf q)e^{i\mathbf x\cdot\mathbf q\beta\theta}}{\frac{\mathbf q}{k_1}\cdot(\mathbf p-\frac{\theta\mathbf q}{2k_2})}
\bar W (\mathbf x, \mathbf p-\frac{\theta\mathbf q}{2k_1}-\frac{\theta\mathbf q}{2k_2})
}
\eeqn
where we have used the fact that, in the sense of generalized functions,
\[
\lim_{\eta\to 0} \frac{1}{\eta+i\xi}= \pi \delta(\xi)-\frac{i}{\xi}
\]
with the second term giving rise to the Cauchy principal value integral denoted by $\int\!\!\!\!\!\! - \ $. From (\ref{fcn}) we have
the expression for $\left\langle F\right\rangle$
\beqn
\left\langle F\right\rangle &=&-\frac{i}{2(2\pi)^3}
\int e^{-i\mathbf p\cdot\mathbf y} f_1 (\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1})\left\langle {U^*_2(\frac{\mathbf x}{k_2}
-\frac{\mathbf y}{2k_2})}\right\rangle d\mathbf y\nn\\
&&
+\frac{i}{2(2\pi)^3}
\int e^{-i\mathbf p\cdot\mathbf y}\left\langle U_1 (\frac{\mathbf x}{k_1}+
\frac{\mathbf y}{2k_1})\right\rangle {f^*_2(\frac{\mathbf x}{k_2}
-\frac{\mathbf y}{2k_2})}d\mathbf y.
\eeqn
which depends only on the mean fields $\left\langle U_1\right\rangle,
\left\langle U_2\right\rangle$, both assumed known throughout the paper.
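The distributional limit invoked above can be checked numerically: for a smooth test function $f$, $\int f(\xi)(\eta+i\xi)^{-1}d\xi\to\pi f(0)-i\,\hbox{PV}\!\int f(\xi)\xi^{-1}d\xi$ as $\eta\to 0^{+}$. The sketch below (Python, with an arbitrary Gaussian test function) compares both sides at small $\eta$; the agreement is to $O(\eta)$.

```python
import math

f = lambda x: math.exp(-(x - 0.5) ** 2)   # arbitrary smooth test function

eta, h, X = 1e-2, 1e-3, 20.0
xs = [-X + (j + 0.5) * h for j in range(int(2 * X / h))]  # symmetric grid

# Left side: Int f(xi)/(eta + i*xi) d xi, real and imaginary parts
re = sum(f(x) * eta / (eta**2 + x**2) * h for x in xs)
im = -sum(f(x) * x / (eta**2 + x**2) * h for x in xs)

# Right side: pi*f(0) - i * PV Int f(xi)/xi d xi (PV via symmetric pairing)
pv = sum((f(x) - f(-x)) / x * h for x in xs if x > 0)

assert abs(re - math.pi * f(0)) < 0.05    # delta-function part
assert abs(im + pv) < 0.05                # principal-value part
```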
Putting all the terms together with the regularization
we arrive at the following MSE
\beq
\label{mse22}
W(\mathbf x,\mathbf p)=\bar W(\mathbf x,\mathbf p)+\sqrt{\ep} W^\ep_1(\mathbf x,{\tilde{\mathbf x}},\mathbf p)+\ep W^\ep_2(\mathbf x,{\tilde{\mathbf x}},\mathbf p)
\eeq
which satisfies
\beq
\label{mse2}
\lefteqn{\Big(\mathbf p\cdot\nabla -\frac{1}{\sqrt{\ep}} \cL\Big)W+i\nu' W-F}\\
&=&
(i\nu'-1)\sqrt{\ep} W^\ep_1 +\sqrt{\ep} \mathbf p\cdot\nabla_\mathbf x W^\ep_1-\sqrt{\ep} \cL W^\ep_2 +(i\nu'-1)\ep W_2^\ep
+\ep \mathbf p\cdot\nabla_\mathbf x W^\ep_2.\nn
\eeq
Unfortunately the right hand side of (\ref{mse2})
does not vanish in the strong $L^2$-topology but only in the weak topology as in
\be
\label{corr22}
\lim_{\ep\to
0}\ep \int\,d\mathbf x\,\left\langle\left|\int\,d\mathbf p\,W^\ep_1(\mathbf x,\frac{\mathbf x}{\ep},\mathbf p)\psi(\mathbf p)\right|^2\right\rangle
=0,\quad \forall \psi \in L^2
\ee
(see Appendix B). It is not clear at this point how to
justify the preceding argument and the construction of the asymptotic solution with full mathematical rigor. Fortunately, in the regime of geometrical optics, the rigorous asymptotic result
can be obtained by a probabilistic method \cite{2f-grt}
and is the same as that derived by the MSE
(see Section \ref{sec-grt}). Another regime in
which the asymptotic result
can be fully justified is that of paraxial waves, to which we
turn in the next section.
Due to the assumption (\ref{band}) and the assumed continuous dependence of the medium fluctuation on the frequency
we have $\lim\Phi_1=\lim\Phi_2=\lim\Phi_{12}=\Phi$.
As a consequence, all the Cauchy principal value integrals cancel out. With some
changes of variables
the governing equation for $ \bar W$
takes the much simplified form:
\beq
\label{wb}
\lefteqn{\mathbf p\cdot\nabla_\mathbf x \bar W+ i\nu' \bar W-\left\langle F\right\rangle}\\
&=&\frac{\pi k^{3}}{\theta^{4}}
\int d\mathbf q \Phi\big(\frac{k}{\theta}(\mathbf p-\mathbf q)\big)\delta(|\mathbf p|^2
-{|\mathbf q|^2})\Big[e^{i\mathbf x\cdot(\mathbf p-\mathbf q )\beta}
\bar W \big(\mathbf x,\mathbf q\big)
-\bar W (\mathbf x,\mathbf p)\Big].\nn
\eeq
The $\delta$-function in the scattering kernel is
due to elastic scattering, which preserves the
wavenumber.
When $\beta=0$ (then $\nu_1=\nu_2$ and $i\nu' \sim $ the imaginary part of $\nu$), eq. (\ref{wb})
reduces to the standard form of the radiative transfer equation
for the phase space energy density \cite{Sch, Hop, Cha, Mis}. For
$\beta>0$, the wave feature is retained in (\ref{wb}). When $\beta\to\infty$,
the first term in the bracket on the right hand side of (\ref{wb}) drops out,
due to rapid phase fluctuation, so the random scattering effect
is pure damping:
\beq
\label{damp}
{\mathbf p\cdot\nabla_\mathbf x \bar W + i\nu' \bar W-\left\langle F\right\rangle}
&=&-\frac{\pi k^{3}}{\theta^{4}}
\int d\mathbf q \Phi\big(\frac{k}{\theta}(\mathbf p-\mathbf q)\big)\delta(|\mathbf p|^2
-{|\mathbf q|^2}) \bar W (\mathbf x,\mathbf p).\nn
\eeq
As a comparison, for the Schr\"odinger equation (\ref{sch2})
in the frequency domain,
we modify the Wigner distribution as
\beqn
W(\mathbf x,\mathbf p)=\frac{1}{(2\pi)^3}
\int e^{-i\mathbf p\cdot\mathbf y} \psi_1(\mathbf x+\frac{\ep\hbar\mathbf y}{2})
\psi^*_2(\mathbf x-\frac{\ep\hbar\mathbf y}{2})d\mathbf y
\eeqn
and in the limit $\ep\to 0$ obtain the radiative transfer
equation following the same procedure
\beq
\label{rt-sch}
\lefteqn{\mathbf p\cdot\nabla_\mathbf x \bar W+ i(\omega_2-\omega_1)\bar W+\frac{2i}{\hbar}\nu' \bar W-\left\langle F\right\rangle}\\
&=&\frac{4\pi }{\hbar^{4}}
\int d\mathbf q \Phi\big(\frac{\mathbf p-\mathbf q}{\hbar}\big)\delta(|\mathbf p|^2
-{|\mathbf q|^2})\Big[
\bar W \big(\mathbf x,\mathbf q\big)
-\bar W(\mathbf x,\mathbf p)\Big].\nn
\eeq
The absence of the factor $e^{i\mathbf x\cdot(\mathbf p-\mathbf q )\beta} $ in
eq. (\ref{rt-sch}), and therefore the cross-frequency interference, is the main characteristic of
2f-RT for
quantum waves.
\commentout{
The convergence of
the above scaling limit is probably provable
by extending the rigorous diagrammatic method
developed for the time dependent Schr\"odinger
equation in \cite{Sp}, \cite{Sp2}, \cite{HLW}, \cite{EY}.
Here instead of the time dependent Schr\"odinger equation,
we have the stationary Schr\"odinger equation with two
different energy-dependent potentials.
However, the diagrammatic approach, rigorous or not, is more complicated
to carry out
and we will be content to give an explanation
in line with the multi-scale expansion in Appendix B.
}
\section{Paraxial 2f-RT: anisotropic medium}
\label{prt}
The forward-scattering approximation, also called the paraxial approximation, is valid when back-scattering is negligible
and, as we show now, this is the case for anisotropic media fluctuating
slowly in the (longitudinal) direction of propagation. Let $z$ denote the longitudinal coordinate and $\mathbf x_\perp$ the transverse coordinates. Let $p$ and $\mathbf p_\perp$ denote the longitudinal and
transverse components of $\mathbf p\in \IR^3$, respectively.
Let $\mathbf q=(q, \mathbf q_\perp)\in \IR^3$ be likewise defined.
Consider now a highly anisotropic spectral density
for a medium fluctuating much more
slowly in the longitudinal direction, i.e.
replacing $\Phi\big((\mathbf p-\mathbf q)k/\theta\big)$ in (\ref{wb}) by
\[
\frac{1}{\eta}\Phi\left(\frac{k}{\eta \theta}(p-q), \frac{k}{\theta} (\mathbf p_\perp-\mathbf q_\perp)\right),\quad\eta\ll 1,
\]
which, in the limit $\eta\to 0$, tends to
\beq
\label{aniso}
\frac{\theta}{k}\delta(p-q) \int dw \Phi\left(w,
\frac{k}{\theta} (\mathbf p_\perp-\mathbf q_\perp)\right).
\eeq
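The limit (\ref{aniso}) is the standard delta-sequence mechanism: after the substitution $w=k(p-q)/(\eta\theta)$ the longitudinal profile integrates out, leaving the factor $(\theta/k)\delta(p-q)\int dw\,\Phi(w,\cdot)$. The sketch below (Python, with a hypothetical Gaussian profile and an arbitrary smooth test function, in a scalar reduction) checks this action on a test function at small $\eta$.

```python
import math

# Scalar sketch of the narrow-spectrum limit (illustrative Gaussian):
# (1/eta) * Phi(k (p - q)/(eta*theta)) acts, as eta -> 0, like
# (theta/k) * delta(p - q) * Int Phi(w) dw   on a smooth test function f.
Phi = lambda w: math.exp(-w * w)          # hypothetical longitudinal profile
f = lambda q: 1.0 / (1.0 + q * q)         # arbitrary smooth test function
k, theta, p = 2.0, 0.5, 0.3

def smoothed(eta, h=1e-4, X=5.0):
    qs = [p - X + (j + 0.5) * h for j in range(int(2 * X / h))]
    return sum(Phi(k * (p - q) / (eta * theta)) / eta * f(q) * h
               for q in qs)

limit = (theta / k) * math.sqrt(math.pi) * f(p)  # Int e^{-w^2} dw = sqrt(pi)

assert abs(smoothed(1e-2) - limit) < 1e-3 * abs(limit)
```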
Writing
$ \bar W= \bar W(z,\mathbf x_\perp,p, \mathbf p_\perp)$
we can approximate eq. (\ref{wb}) by \beq
\nn
\lefteqn{p\partial_z \bar W+\mathbf p_\perp\cdot\nabla_{\mathbf x_\perp}\bar W+i\nu' \bar W-\left\langle F\right\rangle}\\
&=&\frac{\pi k^{2}}{\theta^{3}}
\int d\mathbf q_\perp \int dw \Phi\big(w, \frac{k}{\theta}(\mathbf p_\perp-\mathbf q_\perp)\big)\delta(|\mathbf p_\perp|^2
-{|\mathbf q_\perp|^2})\nn \\
&&\times \Big[e^{i\mathbf x_\perp\cdot(\mathbf p_\perp-\mathbf q_\perp)\beta}
\bar W\big(z, \mathbf x_\perp,p, \mathbf q_\perp\big)
-\bar W(z, \mathbf x_\perp,p, \mathbf p_\perp)\Big].\label{rt-para}
\eeq
Eq. (\ref{rt-para}) is identical to the 2f-RT equation
rigorously derived directly
from the paraxial wave equation for similar
anisotropic media \cite{2f-crp, 2f-rt-physa}. This is somewhat surprising in view
of the different scaling factors in the definition
of two-frequency Wigner distributions in the two cases.
Note that in eq. (\ref{rt-para})
the longitudinal momentum $p$ plays the role
of a parameter and does not change during propagation and scattering. An important implication of this observation is
that eq. (\ref{rt-para}) can be solved as an evolution equation in
the direction of increasing $z$ with the {\em one-sided} boundary condition (e.g. at $z=\hbox{const.}$).
In other words, the influence of the other boundary
vanishes as the longitudinal extent of the medium becomes infinite.
The initial value problem of (\ref{rt-para}) is much
easier to solve than the boundary value problem of (\ref{wb}).
\section{Two-frequency geometrical radiative transfer (2f-GRT)}
\label{sec-grt}
Let us consider the further limit $\theta\ll 1$ when the wavelength is much shorter
than the correlation length of the medium fluctuation. To this end, the following form
is more convenient to work with
\beq
\label{wb2}
\lefteqn{\mathbf p\cdot\nabla_\mathbf x \bar W + i\nu' \bar W-\left\langle F\right\rangle }\\
&=&\frac{\pi k}{2\theta^2}
\int d\mathbf q \Phi\big(\mathbf q \big)\delta\big(\mathbf q\cdot(\mathbf p-\frac{\theta\mathbf q}{2k})\big)\Big[e^{i\mathbf x\cdot\mathbf q \beta\theta/k}
\bar W\big(\mathbf x,\mathbf p-\frac{\theta\mathbf q}{k}\big)
-\bar W(\mathbf x,\mathbf p)\Big]\nn
\eeq
which is obtained from eq. (\ref{wb}) after a change of
variables.
We expand the right hand side of (\ref{wb2}) in $\theta$ and pass to the limit
$\theta\to 0$ to obtain
\beq
\label{go}\label{fp}
{\mathbf p\cdot\nabla_\mathbf x \bar W+ i\nu' \bar W-\left\langle F\right\rangle }
&=&
\frac{1}{4k}\left(\nabla_\mathbf p-i{\beta}\mathbf x\right)
\cdot \mathbf D\cdot
\left(\nabla_\mathbf p-i{\beta}\mathbf x\right) \bar W
\eeq
with the (momentum) diffusion coefficient
\beq
\label{diffusion}
\mathbf D(\mathbf p)=\pi\int \Phi(\mathbf q)\delta(\mathbf p\cdot\mathbf q)\mathbf q\otimes\mathbf q d\mathbf q.
\eeq
The symmetry $\Phi(\mathbf p)=\Phi(-\mathbf p)$ plays an explicit role here in rendering the right hand side of eq. (\ref{wb2})
a second-order operator in the limit $\theta\to 0$.
Eq. (\ref{go}) can be rigorously derived
from geometrical optics by a probabilistic method
\cite{2f-grt}.
\subsection{Spatial (frequency) spread and coherence bandwidth}
\label{sec-iso}
Through dimensional analysis, eq. (\ref{go})
yields qualitative information about
important physical parameters of the stochastic medium.
To show this, let us assume for simplicity the isotropy of the medium, i.e. $\Phi(\mathbf p)=\Phi(|\mathbf p|)$,
so that $\mathbf D={C} |\mathbf p|^{-1} \Pi(\mathbf p)$ where
\beq
\label{const}
{C}=\frac{\pi}{3}\int\delta\Big(\frac{\mathbf p}{|\mathbf p|}\cdot\frac{\mathbf q}{|\mathbf q|}\Big)\Phi(|\mathbf q|)|\mathbf q| d\mathbf q
\eeq
is a constant
and $\Pi(\mathbf p)$ the orthogonal projection
onto the plane perpendicular to $\mathbf p$.
In view of (\ref{go}), $C$ (and $\mathbf D$) has the dimension of inverse length, while the variables $\mathbf x$ and $\mathbf p$ are dimensionless.
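This decomposition can be checked numerically. The sketch below is not part of the original analysis; the Gaussian $\Phi$, the grid parameters and the helper name \texttt{D\_of\_p} are our illustrative choices. It evaluates (\ref{diffusion}) by resolving $\delta(\mathbf p\cdot\mathbf q)=\delta(q_\parallel)/|\mathbf p|$ into an integral over the plane perpendicular to $\mathbf p$, and verifies that $\mathbf D$ annihilates $\mathbf p$ and scales like $|\mathbf p|^{-1}$:

```python
import numpy as np

def D_of_p(p, Phi, qmax=8.0, n=400):
    # Momentum diffusion tensor D(p) = pi * int Phi(|q|) delta(p.q) q (x) q d^3q.
    # Resolving delta(p.q) = delta(q_parallel)/|p| restricts q to the plane
    # perpendicular to p, leaving a 2-D integral over that plane.
    pn = np.linalg.norm(p)
    a = np.array([1.0, 0.0, 0.0]) if abs(p[0]) < 0.9 * pn else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(p, a); e1 /= np.linalg.norm(e1)    # orthonormal basis of the
    e2 = np.cross(p, e1); e2 /= np.linalg.norm(e2)   # plane perpendicular to p
    u = np.linspace(-qmax, qmax, n)
    du = u[1] - u[0]
    U, V = np.meshgrid(u, u)
    Q = U[..., None] * e1 + V[..., None] * e2        # grid of points in the plane
    w = Phi(np.hypot(U, V))                          # isotropic spectral density
    return np.pi / pn * np.einsum('ij,iju,ijv->uv', w, Q, Q) * du * du
```

For an isotropic $\Phi$ the resulting matrix is proportional to the projection $\Pi(\mathbf p)$, consistent with $\mathbf D = C|\mathbf p|^{-1}\Pi(\mathbf p)$.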
Now consider the following change of variables
\beq
\label{change}
\mathbf x=\sigma_x k\tilde \mathbf x,\quad \mathbf p=\sigma_p\tilde\mathbf p/k,
\quad \beta=\beta_c\tilde\beta
\eeq
where $\sigma_x$ and $\sigma_p$ are respectively the
spreads in position and spatial frequency, and $\beta_c$ is
the coherence bandwidth. Let us substitute (\ref{change})
into eq. (\ref{fp}) and aim for the standard form
\beq
\label{stan}
{\tilde\mathbf p\cdot\nabla_{\tilde\mathbf x} \bar W + i\nu' \bar W-\left\langle F\right\rangle }
&=&
\left(\nabla_{\tilde\mathbf p}-i{\tilde\beta}\tilde\mathbf x\right)
\cdot |\tilde\mathbf p|^{-1}\Pi(\tilde\mathbf p)
\left(\nabla_{\tilde\mathbf p}-i{\tilde\beta}\tilde\mathbf x\right) \bar W.
\eeq
The first term on the left-hand side yields the first duality relation
\beq
\label{du1}
\sigma_x/\sigma_p\sim 1/k^2.
\eeq
The balance of terms in each pair of parentheses yields
the second duality relation
\beq
\label{du2}
\sigma_x\sigma_p\sim \frac{1}{\beta_c}
\eeq
whose left hand side is the {\em space-spread-bandwidth product.}
Finally, the removal of the constant $C$ from the equation determines
\beq
\sigma_p\sim k^{2/3} C^{1/3}
\eeq
from which
$\sigma_x$ and $\beta_c$ can be determined
by using (\ref{du1}) and (\ref{du2}):
\[
\sigma_x\sim k^{-4/3} C^{1/3},\quad \beta_c\sim k^{2/3} C^{-2/3}.
\]
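The exponent bookkeeping behind these scalings is elementary and can be verified mechanically; a minimal sketch (the dictionary encoding of exponents is ours):

```python
from fractions import Fraction as F

# Exponents of (k, C) in each quantity, e.g. sigma_p ~ k^{2/3} C^{1/3}.
sigma_p = {'k': F(2, 3), 'C': F(1, 3)}             # from removing the constant C
# First duality relation (du1): sigma_x / sigma_p ~ 1/k^2.
sigma_x = {'k': sigma_p['k'] - 2, 'C': sigma_p['C']}
# Second duality relation (du2): beta_c ~ 1 / (sigma_x * sigma_p).
beta_c = {v: -(sigma_x[v] + sigma_p[v]) for v in ('k', 'C')}

assert sigma_x == {'k': F(-4, 3), 'C': F(1, 3)}    # sigma_x ~ k^{-4/3} C^{1/3}
assert beta_c == {'k': F(2, 3), 'C': F(-2, 3)}     # beta_c ~ k^{2/3} C^{-2/3}
```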
We do not know whether eq. (\ref{stan}), as it stands, is analytically solvable,
but we can solve analytically for its boundary layer behavior.
\subsection{Boundary layer asymptotics: paraxial 2f-GRT}
Consider the half space $z\geq 0$ occupied by the random medium and a collimated narrow-band beam propagating
in the $z$ direction and incident normal to
the boundary ($z=0$) of the medium. Near the point of incidence on the boundary the corresponding two-frequency Wigner distribution
would be highly concentrated at the longitudinal momentum,
say,
$p=1$. Hence we can assume that the projection $\Pi(\mathbf p)$ in
(\ref{stan}) is effectively just the projection onto the transverse plane with coordinates $\mathbf x_\perp$
and approximate eq. (\ref{go}) by
\beq
\label{para}
{\Big[\partial_{z}+{\mathbf p_\perp\cdot\nabla_{\mathbf x_\perp} \Big]\bar W + i\nu' \bar W-\left\langle F\right\rangle}}
&=&
\frac{C_\perp}{4k|p|}\left(\nabla_{\mathbf p_\perp}-i{\beta}\mathbf x_\perp\right)^2
\bar W
\eeq
where the constant $C_\perp$ is the paraxial approximation of (\ref{diffusion}) for $|p|=1$:
\[
C_\perp=\frac{\pi}{2}\int \Phi(0,\mathbf q_\perp)|\mathbf q_\perp|^2
d\mathbf q_\perp.
\]
Here we have assumed the isotropy of $\Phi$ in
the transverse dimensions.
Note that the longitudinal (momentum) diffusion vanishes
and that the longitudinal momentum $p$ plays
the role of a parameter in eq. (\ref{para}) which then
can be solved in the direction of increasing $z$ as an evolution equation with
initial data given at a fixed $z$.
This is another instance of paraxial approximation.
Let $\sigma_*$ be
the spatial spread in the transverse coordinates $\mathbf x_\perp$, $\ell_c$ the coherence length in the transverse dimensions
and $\beta_c$ the coherence bandwidth. Let $L$ be
the scale of the boundary layer.
We then seek the following change of
variables
\beq
\label{change2}
\tilde\mathbf x_\perp=\frac{\mathbf x_\perp}{\sigma_* k},\quad
\tilde\mathbf p_\perp=\mathbf p_\perp k\ell_c, \quad\tilde z=\frac{z}{Lk},
\quad
\tilde\beta=\frac{\beta}{\beta_c}
\eeq
to remove all the physical parameters from
(\ref{para}) and to aim for
the form
\beq
\label{fp'}
\partial_{\tilde z} \bar W+\tilde\mathbf p_\perp\cdot\nabla_{\tilde\mathbf x_\perp} \bar W
+ Lki\nu'\bar W-Lk\left\langle F\right\rangle
=\left(\nabla_{\tilde\mathbf p_\perp}-i{\tilde\beta}\tilde\mathbf x_\perp\right)^2\bar W.
\eeq
The same reasoning as above now leads to
\beqn
\ell_c \sigma_*\sim L/k,\quad \sigma_*/\ell_c \sim {1}/{\beta_c},\quad
\ell_c\sim k^{-1}L^{-1/2} C_\perp^{-1/2}
\eeqn
and hence \[
\sigma_*\sim L^{3/2} C_\perp^{1/2},\quad
\beta_c\sim k^{-1}C_\perp^{-1} L^{-2}.
\]
The layer thickness $L$ may be determined by $\ell_c\sim 1$, i.e. $L\sim k^{-2}C_\perp^{-1}$.
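The same exponent bookkeeping verifies the boundary layer scalings, including the layer thickness obtained from $\ell_c\sim 1$; again a minimal sketch with our own exponent encoding:

```python
from fractions import Fraction as F

# Exponents of (k, L, C_perp); ell_c ~ k^{-1} L^{-1/2} C_perp^{-1/2}.
ell_c = {'k': F(-1), 'L': F(-1, 2), 'C': F(-1, 2)}
# ell_c * sigma_* ~ L/k  =>  sigma_* = (L/k) / ell_c.
sigma_star = {'k': -1 - ell_c['k'], 'L': 1 - ell_c['L'], 'C': -ell_c['C']}
# sigma_* / ell_c ~ 1/beta_c  =>  beta_c = ell_c / sigma_*.
beta_c = {v: ell_c[v] - sigma_star[v] for v in ('k', 'L', 'C')}

assert sigma_star == {'k': F(0), 'L': F(3, 2), 'C': F(1, 2)}   # sigma_* ~ L^{3/2} C^{1/2}
assert beta_c == {'k': F(-1), 'L': F(-2), 'C': F(-1)}          # beta_c ~ k^{-1} C^{-1} L^{-2}

# Layer thickness from ell_c ~ 1: solve k^a L^b C^c ~ 1 for L.
a, b, c = ell_c['k'], ell_c['L'], ell_c['C']
L_thick = {'k': -a / b, 'C': -c / b}
assert L_thick == {'k': F(-2), 'C': F(-1)}                     # L ~ k^{-2} C_perp^{-1}
```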
After the inverse Fourier
transform eq. (\ref{fp'}) becomes
\beq
\label{mean-eq2}
\partial_{\tilde z} \Gamma
-{i}\nabla_{\tilde\mathbf y_\perp}\cdot\nabla_{\tilde\mathbf x_\perp} \Gamma + Lki\nu' \Gamma-Lk\left\langle F\right\rangle
&=&-\big|\tilde\mathbf y_\perp+{\tilde\beta}\tilde\mathbf x_\perp\big|^2
\Gamma\eeq
which is the governing equation for the two-frequency
mutual coherence in the normalized variables. With
data given on $\tilde{z}=0$ and vanishing far-field boundary
condition in the transverse directions, Eq. (\ref{mean-eq2}) can be solved analytically
and its Green function is
given by
\beq
\label{asym}
&&\frac{e^{-iLk\nu'}(i4\tilde\beta)^{1/2}}{(2\pi)^2\tilde z\sinh{\big[(i4\tilde\beta)^{1/2}\tilde z\big]}}
\exp{\left[\frac{1}{i4\tilde\beta\tilde z}
\left|\tilde\mathbf y_\perp-\tilde\beta\tilde\mathbf x_\perp-\mathbf y'_\perp+\tilde\beta\mathbf x'_\perp\right|^2\right]}
\\
\nn&&\times \exp{\left[{-\frac{\coth{\big[(i4\tilde\beta)^{1/2}\tilde z\big]}}{(i4\tilde\beta)^{1/2}}
\left|
\tilde\mathbf y_\perp+\tilde\beta\tilde\mathbf x_\perp
-\frac{\mathbf y'_\perp+\tilde\beta\mathbf x'_\perp}{
\cosh{\big[(i4\tilde\beta)^{1/2}\tilde z\big]}}\right|^2}\right]}\\
\nn&&\times
\exp{\left[-\frac{\tanh{\big[(i4\tilde\beta)^{1/2}
\tilde z\big]}}{(i4\tilde\beta)^{1/2}}
\left|\mathbf y'_\perp+\tilde\beta\mathbf x'_\perp\right|^2\right]}.
\eeq
Formula (\ref{asym}) is consistent with
the asymptotic results in the literature,
which are mainly concerned with the cross-frequency correlation
of {\em intensity}. In the radiative transfer regime considered
here, the cross-spectral correlation of intensity is the square
of
the two-frequency mutual coherence and has the commonly accepted form \cite{Sha, Gen, RN}
\beq
\label{current}
\exp{\Big[-2\sqrt{2\tilde\beta}\Big]}
\eeq
which is just the large $\tilde\beta$ asymptotic of the squared factor
$|\sinh{[(i4\tilde\beta)^{1/2}\tilde z]}|^{-2}$ in (\ref{asym})
at $\tilde z=1$ (see \cite{2f-grt} for detailed comparison).
Moreover (\ref{asym})
provides
detailed information about the simultaneous
dependence of the mutual coherence on the frequency difference and spatial displacement for $\tilde z\in (0,1)$ \cite{RN, SHG}.
Surprisingly, a closely related equation arises in the two-frequency formulation of the Markovian
approximation of the paraxial waves \cite{2f-whn}.
The closed form solution is crucial for analyzing the performance
of time reversal communication with broadband signals
\cite{pulsa}. The solution procedure for (\ref{asym})
is similar to that given elsewhere \cite{pulsa} and
is omitted here.
\subsection{Paraxial 2f-GRT in anisotropic media}
\label{gan}
We use here the setting and notation defined in Section \ref{prt} for anisotropic media. For simplicity we will set $p=1$
and omit writing it out in $\bar W$.
In view of (\ref{aniso}) we replace $\Phi(\mathbf q)$ in (\ref{diffusion}) by
\[
\delta(q) \int dw \Phi(w,\mathbf q_\perp)
\]
and obtain the transverse diffusion coefficient
\[
\mathbf D_\perp(\mathbf p_\perp)=\pi \int d\mathbf q_\perp \int dw \Phi(w,\mathbf q_\perp)
\delta(\mathbf p_\perp\cdot\mathbf q_\perp)\mathbf q_\perp\otimes\mathbf q_\perp
\]
whereas the longitudinal diffusion coefficient is zero.
For simplicity we assume the isotropy in the transverse dimensions, $\Phi(w,\mathbf p_\perp)=\Phi(w,|\mathbf p_\perp|)$, so that
$\mathbf D_\perp={C_\perp} |\mathbf p_\perp|^{-1} \Pi_\perp(\mathbf p_\perp)$ where
\[
{C_\perp}=\frac{\pi}{2}\int\delta\Big(\frac{\mathbf p_\perp}{|\mathbf p_\perp|}\cdot\frac{\mathbf q_\perp}{|\mathbf q_\perp|}\Big)\Phi(w, |\mathbf q_\perp|)|\mathbf q_\perp| dw d\mathbf q_\perp
\]
is a constant
and
$\Pi_\perp(\mathbf p_\perp)$ is
the orthogonal projection onto the transverse line perpendicular to
$\mathbf p_\perp$.
Hence eq. (\ref{go}) reduces to
\beq
\label{go-para}
\lefteqn{\Big[\partial_{z}+{\mathbf p_\perp\cdot\nabla_{\mathbf x_\perp} \Big]\bar W + i\nu' \bar W-\left\langle F\right\rangle}}\\
&=&
\frac{C_\perp}{4k}\left(\nabla_{\mathbf p_\perp}-i{\beta}\mathbf x_\perp\right)\cdot |\mathbf p_\perp|^{-1} \Pi_\perp(\mathbf p_\perp)
\left(\nabla_{\mathbf p_\perp}-i{\beta}\mathbf x_\perp\right)
\bar W. \nn
\eeq
Alternatively, eq. (\ref{go-para}) can also be derived from eq. (\ref{rt-para})
by taking the geometrical optics limit as described in the beginning
of Section \ref{sec-grt}.
Consider the change of variables (\ref{change2}) to remove all the physical parameters from
(\ref{go-para}) and to aim for
the form
\beq
\label{go-para-std}
\lefteqn{\Big[\partial_{\tilde z}+{\tilde \mathbf p_\perp\cdot\nabla_{\tilde \mathbf x_\perp} \Big]\bar W +Lk i\nu' \bar W-Lk\left\langle F\right\rangle}}\\
&=&
\left(\nabla_{\tilde\mathbf p_\perp}-i{\tilde\beta}\tilde\mathbf x_\perp\right)\cdot |\tilde\mathbf p_\perp|^{-1} \Pi_\perp(\tilde\mathbf p_\perp)
\left(\nabla_{\tilde\mathbf p_\perp}-i{\tilde\beta}\tilde\mathbf x_\perp\right)
\bar W\nn
\eeq
where $L$ should be interpreted as the distance of propagation.
Following the same line of reasoning, we obtain that
\[
\ell_c\sigma_*\sim L/k,\quad \sigma_*/\ell_c\sim 1/\beta_c,\quad\ell_c\sim C_\perp^{-1/3} L^{-1/3} k^{-1}
\]
and hence
\[
\sigma_*\sim C_\perp^{1/3} L^{4/3},\quad
\beta_c\sim C_\perp^{-2/3} L^{-5/3}k^{-1}.
\]
Unlike eq. (\ref{para}), it is unclear whether eq. (\ref{go-para}) admits a closed-form solution.
\section{Discussion and conclusion}
The standard (one-frequency) RT can be formally derived from the wave equation in at least two ways: the diagrammatic expansion method, as the ladder approximation of the Bethe-Salpeter equation
\cite{RN, Mis},
and the multi-scale expansion method advocated here \cite{BLP}. The latter is considerably simpler than the former in terms
of the amount of calculation involved. Both approaches have been developed with full mathematical rigor
in some special cases (see \cite{rad-arma, rad-crm} and the references therein). There are two regimes for which the 2f-RT equation has been derived with
full mathematical rigor: first, for the paraxial wave equation, by using the so-called martingale method in probability
theory \cite{2f-crp, 2f-rt-physa}; second, for spherical waves in geometrical optics, by the path-integration method \cite{2f-grt}.
These rigorous results coincide with those derived here for
the respective regimes and hence support the validity
of MSE.
Within the framework of 2f-RT, a paraxial form arises naturally
in anisotropic media which fluctuate slowly in the longitudinal
direction. Another form of paraxial 2f-RT takes place
in the boundary layer asymptotics of isotropic media.
The latter equation turns out to be exactly solvable
and the boundary layer behavior is given in a closed form, revealing highly
non-trivial structure of the two-frequency mutual coherence.
In any case, dimensional analysis with the 2f-GRT equations
yields qualitative scaling behavior
of the spatial spread, the spatial frequency spread
and the coherence bandwidth in various regimes.
From the point of view of computation, especially Monte Carlo
simulation, it appears to be natural to introduce the new
quantity
\[
\mathfrak{W}(\mathbf x,\mathbf p)=e^{-i\beta\mathbf x\cdot\mathbf p} \bar W(\mathbf x,\mathbf p)
\]
and rewrite eq. (\ref{wb}) in the following form
\beqn
\label{wbtilde}
\lefteqn{\mathbf p\cdot\nabla_\mathbf x \mathfrak{W}+i\beta|\mathbf p|^2 \mathfrak{W}+ i\nu' \mathfrak{W}-e^{-i\beta\mathbf x\cdot\mathbf p}\left\langle F\right\rangle}\\
&=&\frac{\pi k^{3}}{\theta^{4}}
\int d\mathbf q \Phi\big(\frac{k}{\theta}(\mathbf p-\mathbf q)\big)\delta(|\mathbf p|^2
-{|\mathbf q|^2})\Big[
\mathfrak{W} \big(\mathbf x,\mathbf q\big)
-\mathfrak{W} (\mathbf x,\mathbf p)\Big].\nn
\eeqn
The solution $\mathfrak{W}$ can then be expressed as
a path integration over the Markov process generated by
the operator $\cA$ defined by
\[
\cA \mathfrak{W}=-\mathbf p\cdot\nabla_\mathbf x \mathfrak{W}+\frac{\pi k^{3}}{\theta^{4}}
\int d\mathbf q \Phi\big(\frac{k}{\theta}(\mathbf p-\mathbf q)\big)\delta(|\mathbf p|^2
-{|\mathbf q|^2})\Big[
\mathfrak{W} \big(\mathbf x,\mathbf q\big)
- \mathfrak{W} (\mathbf x,\mathbf p)\Big]
\]
when $V$ is real-valued and $\Phi$ is nonnegative.
We will pursue this observation in a separate publication \cite{2f-grt}.
\subsubsection{Transition Matrix}
\transmat
\subsubsection{Stationary Distribution}
\statdis
\subsubsection{Slack time\xspace and Throughput}
\watithput
\subsubsection{Number of Failed Retries\xspace}
\label{sec:nbf}
\failedres
}
\newcommand\treib{
The lock-free stack by Treiber~\cite{lf-stack} is a fundamental data structure\xspace
that provides \FuncSty{Pop}\xspace and \FuncSty{Push}\xspace operations. To \FuncSty{Pop}\xspace an element, the
top pointer is read and the next pointer of the initial element is
obtained. The latter pointer will be the new value of the {\it CAS}\xspace that
linearizes the operation. So, accessing the next pointer of the
topmost element represents \ema{\mathit{cw}} as it takes place between the {\it Read}\xspace and
the {\it CAS}\xspace.
We initialize the stack by pushing elements, with or without
a stride, from a contiguous chunk of memory. In this way, we are able
to introduce either costly or inexpensive cache misses. We also vary the
number of elements popped at a time to obtain different \ema{\mathit{cw}} values;
the results for the different \ema{\mathit{cw}} values are illustrated
in Figure~\ref{fig:stack}.}
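For concreteness, the retry-loop structure of the Treiber stack described above can be sketched as follows. This is a schematic illustration rather than the benchmarked implementation; since Python lacks a hardware {\it CAS}\xspace, a lock-backed \texttt{AtomicRef.cas} stands in for the atomic primitive:

```python
import threading

class AtomicRef:
    """Emulates a CAS-capable word; a lock stands in for the hardware CAS."""
    def __init__(self, value=None):
        self._value, self._lock = value, threading.Lock()
    def read(self):
        return self._value
    def cas(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, item, nxt=None):
        self.item, self.next = item, nxt

class TreiberStack:
    def __init__(self):
        self.top = AtomicRef(None)
    def push(self, item):
        while True:                                  # retry loop
            old = self.top.read()                    # Read
            if self.top.cas(old, Node(item, old)):   # linearizing CAS
                return
    def pop(self):
        while True:                                  # retry loop
            old = self.top.read()                    # Read
            if old is None:
                return None
            nxt = old.next    # critical work cw: between the Read and the CAS
            if self.top.cas(old, nxt):               # linearizing CAS
                return old.item
```

The access to \texttt{old.next} between the {\it Read}\xspace and the {\it CAS}\xspace is exactly the critical work\xspace \ema{\mathit{cw}} discussed above.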
\newcommand\synth{
We first evaluate our models using a set of synthetic tests
that have been constructed to abstract
different possible design patterns of lock-free data structures (value of \ema{\mathit{cw}})
and different application contexts (value of \ema{\mathit{pw}}).
The critical work\xspace is either constant, or follows a Poisson distribution; in
Figure~\ref{fig:synt}, its mean value \ema{\mathit{cw}} is indicated at the top of
the graphs.
A steep decrease in throughput as \ema{\mathit{pw}} gets low can be observed for the cases with
low \ema{\mathit{cw}}; it originates mainly from expansion.
When \ema{\mathit{cw}} is high, performance continues to increase, though only slightly, as \ema{\mathit{pw}}
decreases. The expansion is indeed low, but the
slack time\xspace, which appears as the more dominant factor, decreases as the
number of threads inside the retry loop increases.
Looking into the differences between the constructive and the average-based\xspace
approaches: the estimations of the average-based\xspace approach come out
less accurate for mid-contention cases, as it only differentiates
between contended and non-contended modes. In addition, it fails to capture
the failed retries when the measured throughput starts to deviate
from the theoretical upper bound as \ema{\mathit{pw}} gets lower. In contrast, the
constructive approach provides high accuracy in all metrics for almost
every case.
We have also run the same synthetic tests with a parallel work\xspace that follows a
Poisson distribution (Figure~\ref{fig:synt-poisson}) or is constant
(Figure~\ref{fig:synt-const}), in order to observe the impact of the
distribution nature of the parallel work\xspace. Compared to the exponential
distribution, a better throughput is achieved with a Poisson
distribution on the parallel work\xspace. The throughput becomes even better with a
constant parallel work\xspace, since the slack time is minimized due to the
synchronization between the threads, as explained
in~\cite{our-disc15}.
}
\newcommand\synthtreib{
Here, we consider lock-free operations that can be completed
with a single successful {\it CAS}\xspace,
and provide predictions using both the average-based\xspace
and the constructive approach, together with the theoretical upper bound.}
\newcommand\finemm{%
One quantum of the collection
phase is the collection of the list of one thread, while three nodes
are reclaimed during one quantum of the reclamation phase. The
traditional MM scheme is parameterized by a threshold based on the number
of removed nodes; the fine-grain MM scheme
is parameterized by the number of quanta that are executed at each
call.
We apply different MM schemes to the \FuncSty{Dequeue}\xspace operation of the
Michael-Scott queue, and plot the results in Figure~\ref{fig:mm_perf}.
We initialize the queue with enough elements. Threads execute
\FuncSty{Dequeue}\xspace, which returns an element, then call the MM scheme.
On the left side, we compare a pure queue (without MM), a queue with
the traditional MM (complete reclamation once in a while) and a queue with
fine-grain MM (according to the numbers of quanta that are executed
at each call). Note that the
performance of the traditional MM is also subject to the tuning of the
threshold parameter. We have tested and kept only the best parameter
on the studied domain.
First, unsurprisingly, we can observe that the pure queue
outperforms the others as its \ema{\mathit{cw}} is lower (no need to maintain the
list of nodes that a thread is accessing).
Second, as the fine-grain MM is called after each completed \FuncSty{Dequeue}\xspace,
adding a constant work, the MM can be seen as a part of the parallel work\xspace. We
highlight this idea on the second experiment (on the right side). We first
measure the work done in a quantum. It follows that, for each value
of the granularity parameter, we are able to estimate the effective
parallel work\xspace as the sum of the initial \ema{\mathit{pw}} and the work added by the
fine-grain MM. Finally, we run the queue with the fine-grain MM, and
plot the measured throughput, according to the effective parallel
work, together with our two approaches instantiated with the effective
\ema{\mathit{pw}}. The graph shows the validity of the model estimations for all
values of the granularity parameter.
}
\newcommand\adaptsine{
Numerous scientific applications are built upon a pattern of
alternating phases, that are communication- or
computation-intensive. If the application involves data structures\xspace, it is
expected that the rate of the modifications to the data structures\xspace is high
in the data-oriented phases, and conversely.
These phases could be clearly separated, but the application can also
move gradually between phases. The rate of modification to a data structure\xspace
will anyway oscillate periodically between two extreme values.
We place ourselves in this context, and evaluate the two MMs
accordingly. The parallel work\xspace still follows an exponential distribution of
mean \ema{\mathit{pw}}, but \ema{\mathit{pw}} varies in a sinusoidal manner with time, in order to
emulate the numerical phases. More precisely, \ema{\mathit{pw}} is a step
approximation of a sine function. Thus, two additional parameters
rule the experiment: the period of the oscillating function represents
the length of the phases, and the number of steps within a period
depicts how gradual the phase changes are.
}
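The oscillating parallel work\xspace described above can be generated, for instance, as follows (an illustrative sketch; the function and parameter names are ours):

```python
import math

def pw_schedule(t, pw_min, pw_max, period, steps):
    """Step approximation of a sinusoid oscillating between pw_min and pw_max.

    `period` is the length of a phase cycle; `steps` plateaus per period
    control how gradual the phase changes are.
    """
    phase = (t % period) / period                  # position in the period, [0, 1)
    level = math.floor(phase * steps) / steps      # quantize into `steps` plateaus
    mid, amp = (pw_max + pw_min) / 2.0, (pw_max - pw_min) / 2.0
    return mid + amp * math.sin(2.0 * math.pi * level)
```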
\newcommand\fulldeq{
We consider the deque\xspace designed in~\cite{deq}. \FuncSty{PushLeft}\xspace and
\FuncSty{PushRight}\xspace (resp. \FuncSty{PopLeft}\xspace and \FuncSty{PopRight}\xspace) operations are exactly the same, except
that they operate on the different ends of the deque\xspace.
The status flags, which depict the state of the deque\xspace, and the
pointers to the leftmost element and the rightmost element are
together kept in a single double-word variable, so-called
{\it Anchor}\xspace, which could be modified by a double-word {\it CAS}\xspace
atomically.
A \FuncSty{PopLeft}\xspace operation linearizes and even completes in one stage\xspace that ends
with a double-word {\it CAS}\xspace that just sets the left pointer of the anchor
to the second element from left.
A \FuncSty{PushLeft}\xspace operation takes three stages\xspace to complete. In the first stage\xspace,
the operation is linearized by setting the left pointer of the {\it Anchor}\xspace
to the new element and at the same time changing the status flags to
``left unstable''\pr{.}{, to indicate the status of the incomplete but
linearized \FuncSty{PushLeft}\xspace operation.} In the second stage\xspace, the left pointer of
the leftmost element is redirected to the recently pushed element.
In the third stage\xspace, a {\it CAS}\xspace is executed on {\it Anchor}\xspace to bring the deque\xspace
status flags into ``stable state''. Every operation can help an incomplete
\FuncSty{PushLeft}\xspace or \FuncSty{PushRight}\xspace until the deque\xspace comes into the stable state; in this
state, the other operations can attempt to linearize anew.
As noticed, the first and the third stage\xspace execute a {\it CAS}\xspace on the same
variable ({\it Anchor}\xspace) so it is possible to delay the third stage\xspace of the
success period\xspace by executing a {\it CAS}\xspace in the first stage\xspace. This implies
that the expansion in stage\xspace one should also be considered when the
delay in the third stage\xspace is considered, and the other way around. This
can be done by summing expansion estimates of the stages\xspace that run the
{\it CAS}\xspace on the same variable and using this expansion value in all these
stages\xspace. Again, this just requires simple modifications to the expansion
formula, while keeping the assumptions unchanged.
We first run pop-only and push-only experiments where dedicated
threads operate on both ends of the deque\xspace, in a half-half
manner. We provide predictions by plugging the slightly modified
expansion estimate, as explained above, into the average-based\xspace approach. Then,
we take one step further and mix the operations, assigning the threads
unequally among push and pop operations.
We obtain estimates for these mixed cases by simply taking the weighted
average (depending on the number of threads running each operation) of
the success period\xspace of pop-only and push-only experiments, with
the corresponding \ema{\mathit{pw}} value.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.8\textwidth]{deq_rr}
\end{center}
\caption{Operations on deque\xspace\label{fig:deq}}
\end{figure}
In Figure~\ref{fig:deq}, results are illustrated; they are
satisfactory for the push-only and pop-only cases.
For the mixed-case experiments, the results are mixed: our analysis
follows the trend but becomes less accurate
when \ema{\mathit{pw}} gets lower, as the experimental
curves tend toward the push-only success period\xspace. This, presumably, happens because
the first stage\xspace of a \FuncSty{PushLeft}\xspace (or \FuncSty{PushRight}\xspace) operation is shorter than the
first stage\xspace of a \FuncSty{PopLeft}\xspace (or \FuncSty{PopRight}\xspace) operation. This brings indeed an
advantage to push operations, under contention: they have higher chances
to linearize before pop operations after the data structure\xspace comes into the stable
state. It\rr{ also} provides an interesting observation which highlights
the lock-free nature of operations: it is improbable to complete a pop
operation if numerous threads try to push, due to the
difference of work inside the first stage\xspace of their retry loop\xspace.
}
\newcommand{\fullenq}{
As a first step, we consider the \FuncSty{Enqueue}\xspace operation of the MS queue to
validate our approach. This operation requires two
pointer updates leading to two stages\xspace, each ending with a {\it CAS}\xspace. The
first stage\xspace, that linearizes the operation, updates the next pointer
of the last element to the newly enqueued element. In the next and
last stage\xspace, the queue's tail pointer is updated to point to the recently
enqueued element, which could also be done by a helping thread; this brings
the data structure into a stable state. Here, we determine the \ema{\mathit{cw}} by
subtracting the \ema{\mathit{rc}} and \ema{\mathit{cc}} from the non-contended cost of the \FuncSty{Enqueue}\xspace operation.
\rr{
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.8\textwidth]{enqueue_rr_disc}
\end{center}
\caption{Enqueue on MS Queue \label{fig:enqueue}}
\end{figure}
We estimate the expansion in the success period\xspace as described above and
throughput as explained in Section~\ref{sec:avba}. The results for the
\FuncSty{Enqueue}\xspace experiments where all threads execute \FuncSty{Enqueue}\xspace are presented in
Figure~\ref{fig:enqueue}.
}
}
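The two-stage structure of \FuncSty{Enqueue}\xspace, including the helping step, can be sketched as follows. This is schematic: Python has no hardware {\it CAS}\xspace, so a lock-backed \texttt{AtomicRef} emulates the atomic primitive:

```python
import threading

class AtomicRef:
    """Lock-backed stand-in for a CAS-capable pointer (Python has no native CAS)."""
    def __init__(self, value=None):
        self._value, self._lock = value, threading.Lock()
    def read(self):
        return self._value
    def cas(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, item):
        self.item, self.next = item, AtomicRef(None)

class MSQueue:
    def __init__(self):
        dummy = Node(None)
        self.head, self.tail = AtomicRef(dummy), AtomicRef(dummy)
    def enqueue(self, item):
        node = Node(item)
        while True:
            last = self.tail.read()
            nxt = last.next.read()
            if nxt is None:
                # stage 1: link the new node -- the linearizing CAS
                if last.next.cas(None, node):
                    # stage 2: swing the tail; a helper may do this instead
                    self.tail.cas(last, node)
                    return
            else:
                # help complete a linearized but unfinished enqueue
                self.tail.cas(last, nxt)
    def dequeue(self):
        while True:
            first, last = self.head.read(), self.tail.read()
            nxt = first.next.read()
            if first is last:
                if nxt is None:
                    return None
                self.tail.cas(last, nxt)   # help an unfinished enqueue
            elif self.head.cas(first, nxt):
                return nxt.item
```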
\newcommand{\fulladsexp}{
Consider an operation whose success period\xspace (ignoring the slack time) is composed
of $S$ stages\xspace (denoted by $\stag{1}, \dots, \stag{S}$), where each stage represents a step
towards the completion of the operation.
Let \casn{i} denote the {\it CAS}\xspace operation at the end of the \stag{i}.
From a system-wide perspective, $\{ \casn{1}, \dots , \casn{S}\}$ is the set of \cas{}'s\xspace that
have to be successfully and consecutively executed to complete an operation, assuming
all threads are executing the same operation.
This design enforces that \casn{i}, for $i>1$, can be successful only if the last successful {\it CAS}\xspace
is a \casn{i-1}, and that \casn{1} can be successful only if the last successful {\it CAS}\xspace
is a \casn{S}. In other words, another operation cannot linearize before the
completion of the linearized but incomplete operation.
Now, let $e_i$ denote the expected expansion of \casn{i}. If the data structure\xspace
is in the stable state (\textit{i.e.}\xspace is in \stag{1}, where a new operation can be
linearized), then we have to consider the probability, for all threads
except one, to expand the successful \casn{1} which linearizes the
operation. After the linearization, this operation will be completed
in the remaining stages where again the successful \cas{}'s\xspace at the end
of the stages are subject to the same expansion possibility by the
threads in the retry loop\xspace, as they might be still trying to help for the
completion of the previously completed operation.
Similar to~\cite{our-disc15}, our assumption here is that any
thread that is in the retry loop\xspace can launch a \casn{i}, with probability $h$,
that might expand the successful \casn{i}. We consider the starting
point of a failing \casn{i} to be a random variable that is distributed
uniformly within the retry loop\xspace, which is composed of the expanded stages\xspace of the
operation. This is because an obsolete thread can launch a \casn{i},
regardless of the stage\xspace in which the data structure\xspace is in (equally, regardless of the
last successful {\it CAS}\xspace). Due to the uniformity assumption, the expansion
for the successful \cas{}'s\xspace in all stages would be equal. Similar
to~\cite{our-disc15}, we estimate the expansion $e_i$ by considering the
impact of a thread that is added to the retry loop\xspace. Let the cost function
$\mathit{delay}_i$ provide the amount of delay that the additional thread
introduces, depending on the point where the starting point of its
\casn{i} hits. By using these cost functions, we can formulate the
total expansion increase that each new thread introduces and derive
the differential equation below to calculate the expected total
expansion in a success period\xspace, where $\avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}=\sum^{S}_{i=1}
\aexpi{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}$. Note that we assume that the expansion starts as soon
as strictly more than one thread is in the retry loop, in
expectation.
\begin{lemma}
\label{lem.1}
The expansion of a {\it CAS}\xspace operation is the solution of the following
system of equations, where $\ema{\mathit{rlw}} = \sum^{S}_{i=1} \ema{\mathit{rlw}}_i =
\sum^{S}_{i=1}(\ema{\mathit{rc}}_i + \ema{\mathit{cw}}_i + \ema{\mathit{cc}}_i)$:
\[ \left\{
\begin{array}{lcl}
\expansionp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} &=& \ema{\mathit{cc}} \times \dfrac{S \times \frac{\ema{\mathit{cc}}}{2} + \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{ \ema{\mathit{rlw}} + \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}\\
\ema{CAS_{exp}}{\ema{\trl^{(0)}}} &=& 0
\end{array} \right., \text{ where \ema{\trl^{(0)}} is the point where expansion begins.}
\]
\end{lemma}
\begin{proof}
We compute $\ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} + h}$, where $h\leq1$, by assuming that
there are already \ema{\xoverline[.7]{\ct_{\mathit{rl}}}} threads in the retry loop\xspace, and that a new thread
attempts to {\it CAS}\xspace during the retry\xspace, with probability $h$. For
simplicity, we denote $a^i_j = (\sum_{j=1}^{i-1} \ema{\mathit{rlw}}_j + e_j(\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}))
+ \ema{\mathit{rc}}_i + \ema{\mathit{cw}}_i$.
\begin{align*}
\ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} + h}
&= \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} + h\times
\sum^{S}_{i=1} \rint{0}{\ema{\rlw^{\exppl}}}{\frac {\shifti{i}{t_i}}{\ema{\rlw^{\exppl}}}}{t_i} \\
&= \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}
+ h \times \sum^{S}_{i=1} \Big( \rint{0}{a^i_j - \ema{\mathit{cc}}}{\frac{\shifti{i}{t_i}}{\ema{\rlw^{\exppl}}}}{t_i} +
\rint{a^i_j - \ema{\mathit{cc}}}{a^i_j}{\frac{\shifti{i}{t_i}}{\ema{\rlw^{\exppl}}}}{t_i}\\
& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad + \rint{a^i_j}{a^i_j + \aexpi{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{\frac{\shifti{i}{t_i}}{\ema{\rlw^{\exppl}}}}{t_i}
+ \rint{a^i_j + \aexpi{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{\ema{\rlw^{\exppl}}}{\frac{\shifti{i}{t_i}}{\ema{\rlw^{\exppl}}}}{t_i}\Big)\\
&= \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} + h \times \sum^{S}_{i=1} \Big(
\rint{a^i_j-\ema{\mathit{cc}}}{a^i_j}{\frac{t_i}{\ema{\rlw^{\exppl}}}}{t_i}
+ \rint{a^i_j}{a^i_j + \aexpi{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{\frac{\ema{\mathit{cc}}}{\ema{\rlw^{\exppl}}}}{t_i} \Big)\\
\ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} + h} &= \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} + h \times \frac{ (\sum^{S}_{i=1} \frac{\ema{\mathit{cc}}^2}{2}) + \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}\times\ema{\mathit{cc}}}{\ema{\rlw^{\exppl}}}
\end{align*}
This leads to
\[ \quad\frac{\ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} + h}- \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{ h} = \frac{ S \times \frac{\ema{\mathit{cc}}^2}{2} + \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}\times\ema{\mathit{cc}}}{\ema{\rlw^{\exppl}}}.\]
When making $h$ tend to $0$, we finally obtain
\[ \expansionp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \ema{\mathit{cc}} \times \frac{S \times \frac{\ema{\mathit{cc}}}{2} + \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{ \ema{\mathit{rlw}} + \ema{CAS_{exp}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}. \qedhere\]
\end{proof}
In addition, if a set $S_k$ of \cas{}'s\xspace operates on the same
variable $var_k$, then any $\casn{i} \in S_k$ can be expanded by any
$\casn{j} \in S_k$. In this case, we can obtain $\aexp{k}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}$ by
using the reasoning above. The calculation simply ends up as follows:
consider the problem as if no {\it CAS}\xspace shared a variable, and denote the
expansion in \stag{i} by $\aexpi{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}^{(\mathit{old})}$. Then, $\aexp{k}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}
= \sum_{{\it CAS}\xspace_i \in S_k} \aexpi{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}^{(\mathit{old})}$.
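As a sanity check, the differential equation of the lemma above can be integrated numerically. A minimal forward-Euler sketch in Python follows; the values chosen for $\ema{\mathit{cc}}$, $\ema{\mathit{rlw}}$, $S$ and the starting point are purely illustrative, not measured parameters:

```python
def expansion(p_rl, cc=1.5, rlw=20.0, S=3, p0=1.0, dp=1e-3):
    """Forward-Euler integration of the expansion ODE of the lemma:
      e'(P) = cc * (S*cc/2 + e(P)) / (rlw + e(P)),  with e(p0) = 0.
    All parameter values here are illustrative, not measured ones."""
    e, p = 0.0, p0
    while p < p_rl:
        e += dp * cc * (S * cc / 2.0 + e) / (rlw + e)
        p += dp
    return e

# expansion is zero at the starting point and grows with the number
# of threads inside the retry loop
values = [expansion(p) for p in (1.0, 2.0, 4.0, 8.0)]
assert values[0] == 0.0
assert all(a < b for a, b in zip(values, values[1:]))
```

The full analysis additionally couples $\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}$ with the throughput through a fixed point; the sketch only integrates the lemma in isolation.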
}
\newcommand{\fulladswati}{
We assume here that slack time can only occur after the completion of
an operation (\textit{i.e.}\xspace before \stag{1}), as the other stages are expected to
start immediately due to the thread that completes the previous
stage\xspace. Similar to Section~\ref{sec:litt-slack}, we consider that, at
any time, the threads that are running the retry loop have the same
probability to be anywhere in their current retry. Thus, a thread can
be in any stage just after the successful CAS that completes the
operation. So, we need to consider the thread which is closest to the
end of its current stage when the operation is completed. We denote
the execution time of the expanded retry loop with \ema{\rlw^{\exppl}} and the
number of stages\xspace with $S$. For a thread executing \stag{i} when the
operation completes, the time before accessing the data structure\xspace is then
uniformly distributed between 0 and $\ema{\rlw^{\exppl}}_i$.
Here, we make an additional assumption and consider that all stages can be
completed in the same amount of time (\textit{i.e.}\xspace for all $(i, j)$ in $\{1,
\dots ,S\}^2$, $\ema{\rlw^{\exppl}}_i = \ema{\rlw^{\exppl}}_j = \ema{\rlw^{\exppl}}/S$). This assumption
does not diverge much from reality and provides a reasonable
approximation. With these assumptions and using
Lemma~\ref{lem:unif-min}, we conclude that:
\begin{equation}
\label{eq:slack-multiple}
\avwati{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \frac{\ema{\rlw^{\exppl}}}{S \times (\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} +1)}.
\end{equation}
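Equation~\eqref{eq:slack-multiple} relies on Lemma~\ref{lem:unif-min}, used here as stating that the expected minimum of $n$ i.i.d. uniform variables on $[0,R]$ is $R/(n+1)$. A quick Monte Carlo sanity check of this building block (sample size, seed and parameter values are arbitrary):

```python
import random

def expected_min_uniform(n, R, samples=200_000, seed=42):
    """Monte Carlo estimate of E[min of n i.i.d. Uniform(0, R)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += min(rng.uniform(0, R) for _ in range(n))
    return total / samples

n, R = 4, 10.0
estimate = expected_min_uniform(n, R)
# closed form: R / (n + 1)
assert abs(estimate - R / (n + 1)) < 0.05
```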
}
\newcommand{\sumads}{
Here, we consider data structures\xspace that apply immediate helping, where threads
help for the completion of a recently linearized operation until the
data structure\xspace comes into a stable state in which a new operation can be
linearized. The crucial observation is that the data structure\xspace goes through
multiple stages\xspace in a round-robin fashion. The first stage\xspace is the one
where the operation is linearized. The remaining ones are the stages\xspace
in which other threads, executing another operation, might help for
the completion of the linearized operation, before attempting to
linearize their own operations. Thus, the success period\xspace (ignoring the slack time\xspace)
can be seen as the sum of the execution time of these stages\xspace, each
ending with a {\it CAS}\xspace that updates a pointer. The {\it CAS}\xspace in the first stage\xspace
might be expanded by the threads that are competing for the
linearization of their own operations, and subsequent \cas{}'s\xspace might be
expanded by the helper threads, which are still trying to help an
already completed operation. Also, there might be slack time before
the start of the first stage\xspace as the other stages\xspace will start
immediately due to the thread that has completed the previous stage\xspace.
Although it is hard to stochastically reconstruct the executions
with Markov chains, our average-based\xspace approach
provides the flexibility required to estimate the performance by
plugging the expected success period\xspace, given the number of threads inside the
retry loop\xspace, into Little's Law. As the impacting factors are similar, we
estimate the success period\xspace in the same vein as in Section~\ref{sec:avba}; with
a minor adaptation of the expansion formula
and by slightly adapting the slack time estimation based on the
same arguments.
}
\newcommand\bothmm{
\falseparagraph{Fine-grain Memory Management Scheme}
We divide the routine (and further the phases) of the traditional MM
mechanism into quanta (equally-sized chunks).%
\finemm
\falseparagraph{Adaptive Memory Management Scheme}
We build the adaptive MM scheme on top of the fine-grain MM mechanism by
adding a monitoring routine that tracks the number of failed retry loops\xspace,
employing a sliding window. Given a granularity parameter and a
number of failed retry loops\xspace, we are able to estimate the parallel work and
the throughput; hence we can decide on a change in the granularity
parameter to reach peak performance. Note that one can avoid
memory explosion by specifying a threshold, as in the traditional
implementation, in case the application exhibits durably low
contention; in the worst case, it performs like the traditional MM.
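The monitoring routine can be pictured with the following Python sketch; the window size, the thresholds and the doubling/halving policy are illustrative assumptions, not the exact mechanism of our implementation:

```python
from collections import deque

class ContentionMonitor:
    """Sliding-window count of failed retry loops, used to suggest a new
    granularity for the memory management quanta (illustrative sketch;
    window size, thresholds and policy are assumed values)."""
    def __init__(self, window=100, high=0.5, low=0.1):
        self.window = deque(maxlen=window)
        self.high, self.low = high, low

    def record(self, failed):
        self.window.append(1 if failed else 0)

    def suggest(self, granularity):
        if not self.window:
            return granularity
        rate = sum(self.window) / len(self.window)
        if rate > self.high:                # heavy contention: coarser quanta
            return granularity * 2
        if rate < self.low:                 # low contention: finer quanta
            return max(1, granularity // 2)
        return granularity

monitor = ContentionMonitor(window=10)
for _ in range(10):
    monitor.record(failed=True)
assert monitor.suggest(4) == 8              # contended: coarsen the quanta
```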
}
\newcommand\fullbosy{
In Figure~\ref{fig:bo-synt}, we compare, on a synthetic workload, this
constant back-off strategy against widely known strategies, namely
exponential and linear, where the back-off amount increases
exponentially or linearly after each failed retry loop\xspace, starting from
a \cycles{115} step size.
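The three strategies differ only in how the back-off amount grows with the number of consecutive failed retry loops\xspace; the sketch below is a plausible reading of these growth rules, not the exact implementation:

```python
def backoff_delay(attempt, strategy, step=115):
    """Back-off amount in cycles after the attempt-th consecutive failed
    retry loop, for the three compared strategies (illustrative)."""
    if strategy == "constant":
        return step
    if strategy == "linear":
        return step * attempt
    if strategy == "exponential":
        return step * 2 ** (attempt - 1)
    raise ValueError(strategy)

assert backoff_delay(1, "linear") == backoff_delay(1, "exponential") == 115
assert backoff_delay(3, "linear") == 345       # 115 * 3
assert backoff_delay(3, "exponential") == 460  # 115 * 2^2
```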
}
\newcommand\figsynthcst{
\begin{figure}[b!]
\begin{center}
\includegraphics[width=.85\textwidth]{synthetic_rr_disc}
\end{center}
\caption{Synthetic program with exponentially distributed parallel work\xspace\label{fig:synt}}
\end{figure}}
\newcommand\figsynthpoi{
\begin{figure}[b!]
\begin{center}
\includegraphics[width=.85\textwidth]{synthetic_pois_rr_disc}
\end{center}
\caption{Synthetic program with parallel work\xspace following a Poisson distribution\label{fig:synt-poisson}}
\end{figure}}
\newcommand\figsynthconst{
\begin{figure}[b!]
\begin{center}
\includegraphics[width=.85\textwidth]{synthetic_const_rr_disc}
\end{center}
\caption{Synthetic program with constant parallel work\xspace\label{fig:synt-const}}
\end{figure}}
\newcommand\figtreib{
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\textwidth]{stack_noback_rr}
\end{center}
\caption{Treiber's Stack\label{fig:stack}}
\end{figure}}
\newcommand\figcompmm{
\begin{figure}[b!]
\begin{center}
\pr{\includegraphics[width=.8\textwidth]{dequeue_xp_rr_disc}}{\includegraphics[width=.9\textwidth]{dequeue_xp_rr_disc}}
\end{center}
\caption{Performance of memory management mechanisms\label{fig:mm_perf}}
\end{figure}}
\section{Introduction}
During the last two decades, lock-free data structures\xspace have received a lot of
attention in the literature, and have been accepted in industrial
applications, \textit{e.g.}\xspace in Intel's Threading Building Blocks
Framework~\cite{itbbf}, the Java concurrency package~\cite{jav-conc}
and the Microsoft .NET Framework~\cite{mic-net-f}.
\rr{Lock-free implementations provide indeed a way out of several
limitations of their lock-based counterparts, in robustness,
availability and programming flexibility. Last but not least, the
advent of multi-core processors has pushed lock-freedom on top of the
toolbox for achieving scalable synchronization.}
Naturally, the development of lock-free data structures\xspace was accompanied by
studies on the performance of such data structures\xspace, in order to characterize their
scalability.
Since there is no guarantee on the execution time of an individual operation,
the time complexity analyses of lock-free algorithms have turned
towards amortized analyses.
These amortized analyses are interested in the worst-case\xspace
behavior over a sequence of operations, which can be seen as a worst-case\xspace
bound on the average time per operation.
In order to cover various contention environments, the time complexity
of the algorithms is often parametrized by different contention
measures, such as point~\cite{point-contention}, interval~\cite{interval-contention}
or step~\cite{step-contention} contention.
Nonetheless, these investigations target worst-case asymptotic
behaviors. There is a lack of analytical results in the literature
capable of describing the execution of lock-free algorithms on top of a
hardware platform, and providing predictions that are close to what is
observed in practice.
Asymptotic bounds are particularly useful to rank different
algorithms, since they rely on a strong theoretical background, but
the presence of potentially high constants might produce misleading
results. In contrast, an absolute prediction of the performance can be of
great importance, constituting a first step towards further
optimizations.
The common measure of performance for data structures\xspace is throughput, defined
as the number of operations on the data structure\xspace per unit
of time.
This performance measure is usually obtained by
considering an algorithm that strings together a pure sequence of
calls to an operation on the data structure\xspace. However, when used in a more
realistic context, the calls to the operations are mixed with
application-specific code (that we call here parallel work\xspace). For instance, in a
work-stealing environment designed with deques\xspace, a thread basically runs
one of the following actions: pushing a newly generated task into its
deque\xspace, popping a task from a deque\xspace, or executing a task. The
modifications on the deques\xspace are thus interleaved with deque\xspace-independent
work. Some papers consider, in their experimental evaluations, local
computations between calls to operations, but the amount of local
computations follows a given distribution varying from paper to paper,
\textit{e.g.}\xspace constant~\cite{lf-queue-michael}, uniform~\cite{scalable-stack-uniform},
or exponential~\cite{Val94}.
In this work, we derive a general approach for unknown distributions of
the size of the application-specific code, as well as a tighter method
when it follows an exponential distribution.
As for modeling the data structure\xspace itself, we use as a basis the universal construction
described by Herlihy in~\cite{herli-univ-const}, where it is shown
that any abstract data type admits such a lock-free implementation, which
relies on one retry loop\xspace.
Moreover, we have particularly focused our experiments on data structures\xspace
that present a low level of disjoint-access
parallelism~\cite{disjoint-access} (stack, queue, shared counter,
deque\xspace). Coming back to amortized analyses, the time complexity of an
operation is often expressed as a contention-free time complexity
plus a contention overhead. In this paper, we want to model and
analyze the impact of contention.
Loosely speaking, the data structures\xspace that exhibit a low level of disjoint-access parallelism
have lightweight operations (\textit{i.e.}\xspace low contention-free complexity)
and they are prone to high contention overheads. In contrast, the data structures\xspace
that present a high level of disjoint-access parallelism, or
that employ
contention alleviation techniques, provide heavyweight operations
(\textit{i.e.}\xspace high contention-free complexity) and behave differently, compared to the
previous ones, under contention.
Our analyses examine this trade-off and then facilitate conscious decisions in the
data structures\xspace design and use.
We propose two different approaches that analyze the performance of
such data structures\xspace.
On the one hand, we derive an average-based\xspace approach invoking queueing theory,
which provides the throughput of a lock-free algorithm without any
knowledge about the distribution of the parallel work\xspace. This approach
is flexible but allows only a coarse-grained analysis, and hence a
partial knowledge of the contention that stresses the data structure\xspace.
On the other hand, we exhibit a detailed picture of the execution of
the algorithm when the parallel work\xspace is instantiated with an exponential
distribution, through a second complementary approach. We prove that
the multi-threaded execution follows a Markovian process, and a Markov
chain analysis allows us to pursue and reconstruct the execution, and
to compute a more accurate throughput.
We finally show several ways to use our analyses and we evaluate the
validity of our ideas by experimental results. Those two analysis
approaches give a good understanding of the phenomena that drive the
performance of a lock-free data structure\xspace, at a high-level for the average-based\xspace
approach, and at a detailed level for the constructive
method. Moreover, our results provide a common framework to compare
different implementations of a data structure\xspace, in a fair manner. We also
emphasize that there exist several concrete paths to apply our
analyses.
Based on the knowledge about the application at hand, we
implement two back-off strategies. We show the applicability of these strategies by tuning a Delaunay triangulation
application~\cite{caspar} and a streaming pipeline component which is
fed with trade exchange workloads~\cite{taq-se}.
These experiments reveal the validity of our analyses in the application
domain, under non-synthetic workloads and diverse access patterns.
We
confirm the benefits of our theoretical results by designing
a new adaptive memory management mechanism for
lock-free data structures in dynamic environments, which surpasses the
traditional scheme and largely mitigates the loss in performance
compared to a static data structure without memory management.
This memory management mechanism is based on the analyses presented in this paper.
\rr{
The rest of the paper is organized as follows: we start by presenting
related work in Section~\ref{sec:related}, then we define the
algorithm and the platform that we consider, together with concepts
that are common to our both approaches in Section~\ref{sec:preli}. The
average-based\xspace approach is described in Section~\ref{sec:avba}, while the
constructive analysis is exposed in Section~\ref{sec:cons}, and both
methods are evaluated in the experiment part that is presented in
Section~\ref{sec:xp}.
}
\vspp{-.4}
\section{Related Work}
\label{sec:related}
\rr{Approaches based on Markov chains and queueing theory are
commonly employed to analyze the performance of parallel programs
in concurrent environments.
Yu \textit{et al.}\xspace~\cite{yu-markov} have provided an analytical model to
estimate the mean transaction completion time for transactional
memory systems. They make use of a continuous-time Markov chain
queuing model to analyze the execution of transactions, in which
they formulate the state transition rates by considering the arrival
rate, the service time for the transactions together with other parameters
such as conflict rate that statistically quantifies the spatial
(intersecting data set) and temporal (overlapped time) aspects of
conflicts.
\rr{In~\cite{bahra}, Al{-}Bahra has mentioned
Little's Law as an appropriate tool to understand the effects of
contention on serialized resources for synchronization paradigms.}
Closer to our work, }Alistarh \textit{et al.}\xspace~\cite{ali-same} have studied the
same class of lock-free data structures\xspace that we consider in this
paper.
They first show that lock-free algorithms
are statistically wait-free, and, going further, they exhibit upper bounds
on the performance.
Their analysis is done in terms of scheduler steps, in a system where
only one thread can be scheduled (and can then run) at each step.
Compared with execution time, this is particularly appropriate to a
system where the instructions of the threads cannot be executed in
parallel (\textit{e.g.}\xspace a multi-threaded program on a multi-core processor
that only issues writes on the same cache line of the shared
memory). In our paper, the
execution is evaluated in terms of processor cycles, strongly related
to the execution time. In addition, the ``parallel work\xspace'' and the ``critical work\xspace'' can
be done in parallel. Also,
\rr{they bound the asymptotic expected system
latency (with a big O, when the number of threads tends to
infinity), while }%
in our paper we estimate the throughput (close to
the inverse of system latency) for any number of threads.
\textbf{\textit{Comparison with our previous work:}} In~\cite{our-disc15}, we illustrate the
performance impacting
factors and the model we use to cover a subset of lock-free structures
that we consider in this paper. In the former paper, the analysis is built upon
properties that arise only when the sizes of the critical work\xspace and the parallel work\xspace are
\textit{constant}. There, we show that the execution is not memoryless due to the
natural synchrony provided by the retry loops\xspace; ultimately, we
prove that the execution is cyclic and use this property to bound the
rate of failed retries\xspace.
Here, we provide two new approaches which serve different purposes. In the first
approach, we relax the assumptions regarding the critical work\xspace and parallel work\xspace parameters, which we
use to model, respectively, the data structure\xspace operations and the application-specific code
from which the data structure\xspace operations are called. The first approach relies on the expected values of
the size of the critical work\xspace and the parallel work\xspace. This allows us to cover, compared to our previous analysis, more
advanced
lock-free data structure\xspace operations (see Section~\ref{sec:advanced-ds}). Also, we can analyze the data structures\xspace running on
a larger variety of application-specific environments, thanks to the relaxed
assumption on the size of the parallel work\xspace.
The second approach provides a tight analysis when the parallel work\xspace follows an exponential
distribution. We can observe a significant decrease in performance when
the parallel work\xspace is instantiated with an exponential distribution, in comparison to the cases
where the parallel work\xspace is constant, as in our previous work; see \pr{Appendix~\ref{app:xp-basic}}{Section~\ref{sec:synt_tests}}. The
tight analyses, in our previous work and in the second approach presented in this paper, reveal
for the first time an analytical understanding of this phenomenon.
This paper is complementary to the previous work, not only because of the
difference in the analysis tools and the extensive set of data structures\xspace and application-specific
environments that it considers,
but also because the two works together exhibit the impact of the size distributions
of the parallel work\xspace on the performance of lock-free data structures\xspace.
\vspp{-.15}
\section{Preliminaries}
\label{sec:preli}
We describe in this section the structure of the algorithm that is covered by
our model. We explain how to analyze the execution of an instance of
such an algorithm when executed by several threads, by slicing this
execution into a sequence of adjacent success periods\xspace, where a success period\xspace is an interval of
time during which exactly one operation returns. Each of the success periods\xspace
is further split into two by the first access to the data structure\xspace in the
considered retry loop\xspace.
This execution pattern reflects fundamental phases of both analyses,
whose first steps and general direction are outlined at the end of the
section.
\vspp{-.2}
\subsection{System Settings}
All threads call Procedure~\ref{alg:gen-name} (see
Figure~\ref{alg:gen-nb}) when they are spawned. So each thread follows
a simple though expressive pattern: a sequence of calls to an
operation on the data structure\xspace, interleaved with some parallel work\xspace during which the
thread does not try to modify the data structure\xspace.
For instance, it can represent a work-stealing algorithm, as
described in the introduction.
The algorithm is decomposed into two main sections: the {\it parallel section\xspace},
represented on line~\ref{alg:li-ps}, and the {\it retry loop\xspace} (which
represents one operation on the shared data structure\xspace) from
line~\ref{alg:li-bcs} to line~\ref{alg:li-ecs}. A {\it retry\xspace} starts at
line~\ref{alg:li-bbcs} and ends at line~\ref{alg:li-ecs}. The outer
loop that goes from line~\ref{alg:li-bwl} to line~\ref{alg:li-ecs} is
designated as the {\it work loop\xspace}\rr{.
}\pp{.
In each retry\xspace, a thread tries to modify the data structure\xspace and does not exit the
retry loop\xspace until it has successfully modified the data structure\xspace.
\rr{It firstly reads the
access point \DataSty{AP} of the data structure\xspace, then, according to the value
that has been read, and possibly to other previous computations that
occurred in the past, the thread prepares, during the critical work\xspace,
the new desired value as an access point of
the data structure\xspace. Finally, it atomically tries to perform the change through a
call to the {\it CAS}\xspace primitive. If it succeeds, \textit{i.e.}\xspace if the access point
has not been changed by another thread between the first {\it Read}\xspace and the
{\it CAS}\xspace, then it goes to the next parallel section\xspace, otherwise it repeats the
process. }%
The retry loop\xspace is composed of at least one retry\xspace (and the first
iteration of the retry loop\xspace is strictly speaking not a retry\xspace, but a try).
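As an illustration, the Read/critical-work/{\it CAS}\xspace pattern of the retry loop\xspace can be sketched as follows. This is a Python sketch in which the {\it CAS}\xspace primitive is emulated with a lock, and the shared counter is a toy example, not one of the data structures\xspace studied here:

```python
import threading

class AtomicRef:
    """Emulates a word with a CAS primitive (illustrative only; a real
    implementation relies on the hardware compare-and-swap instruction)."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        return self._value

    def cas(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def work_loop(ap, iterations):
    """Pattern of the thread procedure: parallel section, then a retry
    loop that repeats Read / critical work / CAS until the CAS succeeds."""
    for _ in range(iterations):
        # parallel section (application-specific work) would go here
        while True:                      # retry loop
            old = ap.read()              # Read the access point
            new = old + 1                # critical work: compute the new value
            if ap.cas(old, new):         # attempt the atomic update
                break                    # success: back to the parallel section

ap = AtomicRef(0)
threads = [threading.Thread(target=work_loop, args=(ap, 1000)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert ap.read() == 4000                 # no increment is ever lost
```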
We denote by \ema{\mathit{cc}} the execution time of a {\it CAS}\xspace{} when the executing
thread does not own the cache line in exclusive mode, in a setting
where all threads share a last-level cache. Typically, there
exists a thread that touches the data between two requests of the same
thread; therefore, this cost is paid at every occurrence of a {\it CAS}\xspace.
As for the {\it Read}\xspace{}s, \ema{\mathit{rc}} denotes the execution time of a cache
miss. When a thread executes a failed {\it CAS}\xspace, it immediately reads the
same cache line (at the beginning of the next retry\xspace), so the cache line
is not missing, and the execution time of the {\it Read}\xspace is considered as
null. However, when the thread comes back from the parallel section\xspace, a cache miss
is paid.
To conclude with the parameters related to the platform, we have
\ema{P} cores at our disposal, where the {\it CAS}\xspace (resp. the {\it Read}\xspace) latency is identical for all
cores, \textit{i.e.}\xspace \ema{\mathit{cc}} (resp. \ema{\mathit{rc}}) is constant.
The algorithm is parametrized by two execution times. In the general
case, the execution time of an occurrence of the parallel section\xspace
(application-specific section) is a random variable that follows an
unknown probability distribution. In the same way, the execution time
of the critical work\xspace (specific to a data structure\xspace) can vary while following an unknown
probability distribution. The only provided information is the mean
value of those two execution times: \ema{\mathit{cw}} for the critical work\xspace, and \ema{\mathit{pw}} for the
parallel work\xspace. These values will be given in units of work, where $\uow{1} =
\cycles{50}$.\vspp{-.3}
\subsection{Execution Description}
\label{sec:fra-exe}
It has been underlined in~\cite{our-disc15} that there are two
main conflicts that degrade the performance of the data structures\xspace which do not
offer a great degree of disjoint-access parallelism: logical and
hardware conflicts.
{\it Logical conflicts} occur when there is more than one thread
in the retry loop\xspace at a given time (which typically happens when the
number of threads is high or when the parallel section\xspace is small).
At any time, considering only the threads
that are in the retry loop\xspace, there is indeed at most one thread whose retry\xspace will
be successful (\textit{i.e.}\xspace whose ending {\it CAS}\xspace will succeed), which implies the
execution of more retries\xspace for the failing threads.
In addition, after a thread executes successfully its final
{\it CAS}\xspace, the other threads of the retry loop\xspace have first to finish their current
retry\xspace before starting a potentially successful retry\xspace, since they are not
informed yet that their current retry\xspace is doomed to failure. This creates
some ``holes'' in the execution where all threads are executing useless
work.
The threads will also experience {\it hardware conflicts}: if several
threads request the same data, so that they can operate a
{\it CAS}\xspace on it, only a single thread will be satisfied. All the other threads
will have to wait until the current {\it CAS}\xspace is finished, and give a new
try when this {\it CAS}\xspace is done. While waiting for the ownership of the
cache line, the requesting threads cannot perform any useful
work. This waiting time is referred to as {\it expansion}.
\def.5{.5}
\def1{2}
\defblack!20{black!20}
\defblack!4{black!4}
\defblue!20!black!40!red!{blue!20!black!40!red!}
\defblack!20!green{black!20!green}
\newcommand{\pha}[5]{%
\node[it,text width=#4em,right= 0 of #2,fill=#5] (#1) {#3};
}
\newcommand{\supfig}{
\begin{center}
\begin{tikzpicture}[%
it/.style={%
rectangle,
text width=11em,
text centered,
minimum height=3em,
draw=black!50,
scale=.85
}
]
\coordinate (O) at (0,0);
\pha{pcas}{O}{successful\\{\it CAS}\xspace}{\pr{4.5}{5}}{black!20!green}
\pha{sla}{pcas}{useless\\work}{\pr{4}{8}}{black!20}
\pha{acc}{sla}{{\it Access}\xspace}{\pr{3.5}{4}}{orange}
\pha{cri}{acc}{\ema{\mathit{cw}}}{\pr{2}{4}}{blue!20!black!40!red!}
\pha{exp}{cri}{expansion}{\pr{4.5}{6}}{black!20}
\pha{fcas}{exp}{successful\\{\it CAS}\xspace}{\pr{4.5}{5}}{black!20!green}
\draw [decorate,decoration={brace,mirror,amplitude=10pt}]
(sla.south west) -- (sla.south east)
node [black,midway,yshift=-20pt] {slack time\xspace};
\draw [decorate,decoration={brace,mirror,amplitude=10pt}]
(acc.south west) -- (fcas.south east)
node [black,midway,yshift=-20pt] {completion time\xspace};
\draw [decorate,decoration={brace,amplitude=10pt}]
(sla.north west) -- (fcas.north east)
node [black,midway,yshift=20pt] (supw) {success period\xspace};
\node (anot) at (.8,1) {{\scriptsize can be null}};
\draw[very thin] ($(anot.south east)!.8!(anot.east)$) -- ++(.4,0) -- ($(sla.north)+(0,-.1)$);
\draw[very thin] ($(anot.north east)!.8!(anot.east)$) -- ++(.4,0) -- ($(exp.north)+(0,-.1)$);
\end{tikzpicture}
\end{center}
}
\rr{\begin{figure}[t!]
\abstalgo
\caption{Thread procedure}\label{alg:gen-nb}
\end{figure}}
\rr{\begin{figure}[t!]
\supfig
\captionof{figure}{Success period\xspace\label{fig:seq}}
\end{figure}}
\pp{\begin{figure}[t!]
\centering
\begin{minipage}{.4\textwidth}
\abstalgo
\captionof{figure}{Thread procedure\label{alg:gen-nb}}
\end{minipage}\hfill%
\begin{minipage}{.6\textwidth}
\supfig
\captionof{figure}{Success period\xspace\label{fig:seq}}
\end{minipage}
\end{figure}}
We now refine the description of the execution of the algorithm. The
timeline is initially decomposed into a sequence of success periods\xspace that will
define the throughput. A success period\xspace is an interval of time of the
execution that
(i) starts after a successful {\it CAS}\xspace,
(ii) contains a single successful {\it CAS}\xspace,
(iii) finishes after this successful {\it CAS}\xspace.
\pr{To}{As explained in the previous subsection, to} be successful in its retry\xspace,
a thread has first to access the data structure\xspace, then modify it locally, and
finally execute a {\it CAS}\xspace, while no other thread performs changes on the
data structure\xspace. That is why each success period\xspace is further cut into two main phases (see
Figure~\ref{fig:seq}). During the first phase, whose duration is
called the {\it slack time\xspace}, no thread is accessing the data structure\xspace. The second
phase, characterized by the {\it completion time\xspace}, starts with the first access
to the data structure\xspace (by any thread). Note that this {\it Access}\xspace could be either a {\it Read}\xspace
(if the concerned thread just exited the parallel section\xspace) or a failed {\it CAS}\xspace (if the thread
was already in the retry loop\xspace).
The next successful {\it CAS}\xspace will come at least
after \ema{\mathit{cw}} (one thread has to traverse the critical work\xspace anyway); that is why we
split the latter phase into \ema{\mathit{cw}}, then expansion, and finally a
successful {\it CAS}\xspace.
\subsection{Our Approaches}
\label{sec:fra-app}
In this work, we propose two different approaches to compute the
throughput of a lock-free algorithm, which we name average-based\xspace and
constructive. The average-based\xspace approach relies on queueing theory and is
focused on the average behavior of the algorithm: the throughput is
obtained through the computation of the expectation of the success period\xspace at a
random time.
As for the constructive approach, it describes precisely the instants
of accesses and modifications to the data structure\xspace in each success period\xspace: in this way,
we are able to deconstruct and reconstruct the execution, according to
observed events. The constructive approach leads to a more accurate
prediction at the expense of requiring more information about the
algorithm: the distribution functions of the critical and parallel works\xspace have
indeed to be instantiated.
In both cases, we partition the domain space into different levels of
contention (or {\it modes}); these partitions are independent across
approaches, even if we expect similarities, but each of them covers
the whole domain space (all values of critical work\xspace, parallel work\xspace and number of
threads).
\medskip
\subsubsection{Average-based\xspace Analysis}
\label{sec:fra-app-asy}
We distinguish two main modes in which the algorithm can run:
contended and non-contended. In the non-contended mode, \textit{i.e.}\xspace when the
parallel work\xspace is large or the number of threads is low,
concurrent operations are not likely to collide. So every retry loop\xspace will
count a single retry\xspace, and atomic primitives will not delay each
other. In the contended mode, any operation is likely to experience
unsuccessful retries\xspace before succeeding (logical conflicts), and a retry\xspace
will last longer than in the non-contended mode because of the
collision of atomic primitives (hardware conflicts).
Once all the parameters are given,
the analysis is centered around the calculation of a single variable
\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}, which represents the expectation of the number of threads
inside the retry loop\xspace at a random instant. Based on this variable, we are
able to express the expected expansion \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} at a random
time. As a next step, we show how this expansion can be used to
estimate the expected slack time\xspace \avwati{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} and the expected completion time\xspace
\rwh{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}, and at the end, the expected time of a success period\xspace
\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}.
\subsubsection{Constructive Method}
\label{sec:fra-app-con}
The previous average-based\xspace reasoning is founded on expected values at a random time,
while in the constructive approach, we study each success period\xspace individually,
based on the number of threads at the beginning of the considered
success period\xspace. So we are able to exhibit more clearly the instants of
occurrences of the different accesses and modifications to the data structure\xspace,
and thus to predict the throughput more accurately.
We rely on the same set of values used in the average-based\xspace approach, but
these values are now associated with a given
success period\xspace.
Thus the number of threads inside the retry loop\xspace \ema{\ct_{\mathit{rl}}}, as well as the slack time\xspace
and the completion time\xspace are evaluated at the beginning of each success period\xspace.
We denote these times in the same way as in the first approach, but
remove the bar on top since these values are no longer expectations.
Here, the different contention modes do not characterize the
steady-state of the data structure\xspace as in the previous approach; they are
associated with the current success period\xspace. Accordingly, the contention can
oscillate through different modes in the course of the execution.
First, a success period\xspace is not
contended when $\ema{\ct_{\mathit{rl}}}=0$, \textit{i.e.}\xspace when there is no thread in the retry loop\xspace after
a successful {\it CAS}\xspace.
In this case, the first thread that exits the parallel section\xspace
will be successful, and the {\it Access}\xspace of the sequence will be a {\it Read}\xspace.
Second, the contention of a success period\xspace is high when at any time during
the success period\xspace, there exists a thread that is executing a {\it CAS}\xspace. In other
words, at the end of each {\it CAS}\xspace, there is at least one thread that is
waiting for the cache line to operate a {\it CAS}\xspace on it. This implies that
the first access of the success period\xspace is a {\it CAS}\xspace and occurs immediately after
the preceding successful {\it CAS}\xspace: the slack time\xspace is null.
Third, the mid-contention mode takes place when $\ema{\ct_{\mathit{rl}}}>0$, while at the
same time, there are not enough requesting threads to fill the whole
success period\xspace with \cas{}'s\xspace (which implies a non-null slack time\xspace). Since these
requesting threads have synchronized in the previous success period\xspace, \cas{}'s\xspace do
not collide in the current success period\xspace, and because of that, the expansion
is null.
\section{Average-based Approach}
\label{sec:avba}
We propose in this section our coarse-grained analysis to predict the
performance of lock-free data structures\xspace.
Our approach utilizes
fundamental queuing theory techniques, describing the average
behavior of the algorithm.
In turn, we need only minimal knowledge of the algorithm: the mean
execution time values \ema{\mathit{cw}} and \ema{\mathit{pw}}.
As explained in Section~\ref{sec:fra-app-asy}, the system runs in one of
two possible modes: either contended or uncontended.
\subsection{Contended System}
We first consider a system that is contended.
When the system is contended, we use Little's law to obtain, at a
random time, the expectation of the success period\xspace, which is the interval of
time between the last and the next successful \cas{}'s\xspace
(see Figure~\ref{fig:seq}).
The stable system that we observe is the parallel section\xspace: threads enter it
(after exiting a successful retry loop\xspace) at an average rate, stay inside,
then leave (entering a new retry loop\xspace).
The average number of threads inside the parallel section\xspace is $\ema{\xoverline[.7]{\ct_{\mathit{ps}}}} = \ema{P} - \ema{\xoverline[.7]{\ct_{\mathit{rl}}}}$,
each thread stays for an average duration of \ema{\mathit{pw}}, and on average, one
thread exits the retry loop\xspace every success period\xspace \avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}, by definition of
the success period\xspace.
According to Little's law~\cite{littles-law}, we have:
\pr{
\begin{equation}
\label{eq:little-gen}
\ema{\xoverline[.7]{\ct_{\mathit{ps}}}} = \ema{\mathit{pw}} \times 1 / \avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}, \text{ \textit{i.e.}\xspace} \quad \avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \ema{\mathit{pw}} / (\ema{P} - \ema{\xoverline[.7]{\ct_{\mathit{rl}}}})
\end{equation}}%
{
\[ \ema{\xoverline[.7]{\ct_{\mathit{ps}}}} = \ema{\mathit{pw}} \times \frac{1}{\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}, \text{ \textit{i.e.}\xspace} \]
\begin{equation}
\label{eq:little-gen}
\frac{1}{\ema{\mathit{pw}}} \times \avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \frac{1}{\ema{P} - \ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}
\end{equation}
}
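As an illustration of this use of Little's law (outside the analysis proper), the following Python sketch simulates a closed system of \ema{P} threads that alternate between an exponentially distributed parallel section\xspace of mean \ema{\mathit{pw}} and a serialized successful retry\xspace of length \ema{\mathit{cw}}; all parameter values are assumed for the example. The time-average number of threads in the parallel section\xspace should match \ema{\mathit{pw}} times the measured throughput.

```python
import heapq
import random

random.seed(1)
P, pw, cw = 8, 5.0, 1.0     # assumed parameter values
T_END = 100000.0

# Events: (time, kind, thread). kind 0 = thread leaves the parallel
# section, kind 1 = thread completes its (serialized) successful retry.
ev = [(random.expovariate(1 / pw), 0, i) for i in range(P)]
heapq.heapify(ev)
server_free = 0.0   # the retry loop serializes successes, one every >= cw
in_ps, area_ps, last_t, successes = P, 0.0, 0.0, 0

while ev:
    t, kind, i = heapq.heappop(ev)
    if t > T_END:
        break
    area_ps += in_ps * (t - last_t)    # time-integral of #threads in ps
    last_t = t
    if kind == 0:                      # thread enters the retry loop
        in_ps -= 1
        server_free = max(server_free, t) + cw
        heapq.heappush(ev, (server_free, 1, i))
    else:                              # successful CAS done, back to ps
        in_ps += 1
        successes += 1
        heapq.heappush(ev, (t + random.expovariate(1 / pw), 0, i))

avg_ps = area_ps / last_t              # time-average threads in ps
throughput = successes / last_t        # 1 / average success period
```

In the long run, Little's law gives `avg_ps` $\approx$ `pw * throughput`, independently of how the retry loop\xspace is modeled.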
\pr{We decompose a success period\xspace into two parts: slack time\xspace and completion time\xspace (as explained
in Section~\ref{sec:fra-exe}).
We express the expectation of the success period\xspace time as}%
{As explained in Section~\ref{sec:fra-exe}, we further decompose a success period\xspace
into two parts, separated by the first access to the data structure\xspace after a
successful {\it CAS}\xspace. We can then write the average success period\xspace as the sum of:
(i) the expected time before some thread starts its
{\it Access}\xspace (the slack time\xspace), and
(ii) the expected completion time\xspace.
We compute these two expectations independently and gather them into
the success period\xspace thanks to:}
\begin{equation}
\label{eq:little-sp}
\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \avwati{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} + \rwh{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}.
\end{equation}
When the data structure\xspace is contended, a thread is likely to be
successful after some failed retries\xspace. Therefore a thread that is
successful was already in the retry loop\xspace when the previous
successful {\it CAS}\xspace occurred.
\rr{This implies that the {\it Access}\xspace to the data structure\xspace will
be due to a failed {\it CAS}\xspace, instead of a {\it Read}\xspace.}%
The time before a thread starts its {\it Access}\xspace is then the time before a
thread finishes its current critical work\xspace since there is a thread
currently executing a {\it CAS}\xspace.
\smallskip
\subsubsection{Expected Completion time\xspace}
\label{sec:little-expa}
Since the data structure\xspace is contended, numerous threads are inside the retry loop\xspace, and,
due to hardware conflicts, a retry\xspace can experience expansion:
the more threads inside the retry loop\xspace, the longer the time between a {\it CAS}\xspace
request and the actual execution of this {\it CAS}\xspace. The expectation of the
completion time\xspace can be written as
\begin{equation}
\label{eq:little-ret}
\rwh{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \ema{\mathit{cc}} + \ema{\mathit{cw}} + \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} + \ema{\mathit{cc}} ,
\end{equation}
where \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} is the expected expansion when the expected number of
threads inside the retry loop\xspace is \ema{\xoverline[.7]{\ct_{\mathit{rl}}}}.
This expansion can be computed in the same way as
in~\cite{our-disc15}, through the following differential equation:\\%\vspp{.2}
\pr{\begin{minipage}{.4\textwidth}
\begin{equation*}
\left\{
\begin{array}{lcl}
\difavexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} &=& \ema{\mathit{cc}} \times \dfrac{\frac{\ema{\mathit{cc}}}{2} + \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{ \ema{\mathit{cc}} +\ema{\mathit{cw}}}%{\ema{W_{rl}} + \ema{\mathit{cc}} + \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}},\\
\avexp{1} &=& 0
\end{array} \right.
\end{equation*}\end{minipage}\hfill\begin{minipage}{.5\textwidth}
by assuming that the expansion starts as soon as strictly more than one
thread is in the retry loop\xspace, in expectation.\end{minipage}}{
\begin{equation*}
\left\{
\begin{array}{lcl}
\difavexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} &=& \ema{\mathit{cc}} \times \dfrac{\frac{\ema{\mathit{cc}}}{2} +
\avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{ \ema{\mathit{cc}} +\ema{\mathit{cw}}}%{\ema{W_{rl}} + \ema{\mathit{cc}} + \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}\\
\avexp{1} &=& 0
\end{array} \right.,
\end{equation*}
by assuming that the expansion starts as soon as strictly more than one
thread is in the retry loop\xspace, in expectation.
}
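The differential equation above is linear and can be integrated in closed form. As a sanity check (with assumed parameter values), the following Python sketch compares a forward-Euler integration of the equation with the solution $(\ema{\mathit{cc}}/2)\,(e^{a(n-1)}-1)$, where $a = \ema{\mathit{cc}}/(\ema{\mathit{cc}}+\ema{\mathit{cw}})$.

```python
import math

cc, cw = 1.0, 10.0          # assumed costs of a CAS and of the critical work
a = cc / (cc + cw)

def expansion_closed(n):
    """Closed-form solution of  e'(n) = cc*(cc/2 + e)/(cc + cw),  e(1) = 0."""
    return (cc / 2) * (math.exp(a * (n - 1)) - 1)

def expansion_euler(n, steps=100000):
    """Forward-Euler integration of the same ODE from 1 to n."""
    e, h = 0.0, (n - 1) / steps
    for _ in range(steps):
        e += h * cc * (cc / 2 + e) / (cc + cw)
    return e
```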
\vspace*{.3cm}
\subsubsection{Expected Slack Time\xspace}
\label{sec:litt-slack}
Concerning the slack time\xspace, we consider that, at any time, the threads that
are running the retry loop\xspace have the same probability of being anywhere in their
current retry\xspace. However, when a thread is currently executing a {\it CAS}\xspace, the
other threads cannot execute a {\it CAS}\xspace as well: they are
thus in their critical work\xspace or expansion. For every thread, the time
before accessing the data structure\xspace is then uniformly distributed
between $0$ and $\ema{\mathit{cw}}+\avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}$.
\pr{Using a well-known formula on the expectation of the minimum of
uniformly distributed random variables, we show
in Appendix~\ref{app:lemsl} that}%
{According to Lemma~\ref{lem:unif-min}, we conclude that}
\begin{equation}
\label{eq:slack-cont}
\pp{\hspace{5cm}}\avwati{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \left( \ema{\mathit{cw}} + \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} \right) / (\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} +1).
\end{equation}
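This rests on the identity $\mathbb{E}[\min \text{ of } n \text{ i.i.d.\ uniforms on } [0,W]] = W/(n+1)$, which can be checked quickly by Monte Carlo; $W$ and $n$ below are assumed values standing for $\ema{\mathit{cw}}+\avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}$ and \ema{\xoverline[.7]{\ct_{\mathit{rl}}}}.

```python
import random

random.seed(0)
W = 7.0        # stands for cw + expansion (assumed value)
n = 5          # threads inside the retry loop (assumed value)
N = 200000     # Monte Carlo samples

# Empirical mean of the minimum of n uniforms on [0, W]
est = sum(min(random.uniform(0, W) for _ in range(n)) for _ in range(N)) / N
expected = W / (n + 1)
```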
\rr{\lemsl}
\vspace*{.1cm}
\subsubsection{Expected Success Period\xspace}
We just have to combine Equations~\ref{eq:little-sp},
\ref{eq:little-ret}, and~\ref{eq:slack-cont} to obtain the general
expression of the expected success period\xspace under contention: \pr{$\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \left( 1 + 1/(\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} +1) \right) \left( \ema{\mathit{cw}} + \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} \right) + 2\ema{\mathit{cc}}$,}%
{\[\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \left( 1 + \frac{1}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} +1} \right) \left( \ema{\mathit{cw}} + \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} \right) + 2\ema{\mathit{cc}}, \]}
which leads, according to Equation~\ref{eq:little-gen}, to
\begin{equation}
\label{eq:little-co}
\frac{1}{\ema{\mathit{pw}}} \times \left(
\frac{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} +2}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} +1} \left( \ema{\mathit{cw}} + \avexp{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} \right)
+ 2\ema{\mathit{cc}}
\right) = \frac{1}{\ema{P} - \ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}.
\end{equation}
\subsection{Non-contended System}
When the system is not contended, logical conflicts are not likely to
happen, hence each thread succeeds in its retry loop\xspace at its first {\it
retry\xspace}. \textit{A fortiori}\xspace, no hardware conflict occurs. Each thread still
performs one success every work loop\xspace, and the success period\xspace is given by
\pr{%
$\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = (\ema{\mathit{pw}} + \ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}})/\ema{P}$.
Moreover, a thread spends on average \ema{\mathit{pw}} units of time
in the parallel section\xspace within each work loop\xspace. As this holds for
every thread, we deduce
$\ema{P} - \ema{\xoverline[.7]{\ct_{\mathit{rl}}}} = \ema{\xoverline[.7]{\ct_{\mathit{ps}}}} = \ema{\mathit{pw}}/(\ema{\mathit{pw}} + \ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}}) \times \ema{P}$.
Combining the two previous equations, we obtain
\begin{equation}
\label{eq:little-nc}
\frac{\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{\ema{\mathit{pw}}} = \frac{1}{\ema{P} - \ema{\xoverline[.7]{\ct_{\mathit{rl}}}}},
\text{ where } \avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \frac{\ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}.
\end{equation}
}%
{
\begin{equation}
\label{eq:little-avsp}
\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \frac{\ema{\mathit{pw}} + \ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}}}{\ema{P}}.
\end{equation}
Moreover, a thread spends on average $\ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}}$ units of time
in the retry loop\xspace within each work loop\xspace. As this holds for
every thread, we can obtain the following expression for the total
average number of threads inside the retry loop\xspace:
\begin{equation}
\label{eq:little-trl}
\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} = \frac{\ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}}}{\ema{\mathit{pw}} + \ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}}} \times \ema{P} = \frac{\ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}}}{\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}
\end{equation}
Equation~\ref{eq:little-avsp} also gives $ \ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}} = \ema{P} \times
\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} - \ema{\mathit{pw}}$, hence, thanks to Equation~\ref{eq:little-trl},
\begin{equation}
\label{eq:little-nc}
\ema{\xoverline[.7]{\ct_{\mathit{rl}}}} = \frac{\ema{P} \times \avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}-\ema{\mathit{pw}}}{\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}, \text{ \textit{i.e.}\xspace} \quad
\frac{\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}}{\ema{\mathit{pw}}} = \frac{1}{\ema{P} - \ema{\xoverline[.7]{\ct_{\mathit{rl}}}}},
\end{equation}
where $\avsupe{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}} = \frac{\ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}}}{\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}}$.}
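The algebra above can be checked numerically: plugging any assumed positive values into the two equations for the success period\xspace and for \ema{\xoverline[.7]{\ct_{\mathit{rl}}}} must reproduce the same Little's-law form as in the contended case.

```python
# Assumed numeric values; any positive values work for the identity.
pw, rc, cw, cc, P = 12.0, 1.0, 3.0, 0.5, 6

avsp = (pw + rc + cw + cc) / P     # non-contended success period
trl = (rc + cw + cc) / avsp        # average threads in the retry loop

# Both sides of the Little's-law form must coincide:
lhs = avsp / pw
rhs = 1 / (P - trl)
```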
\subsection{Unified Solving}
\pp{
\proofswitch
We show in the following theorem how to compute the throughput estimate; the proof,
presented in~\ref{app:fixed-point},
manipulates equations in order to be
able to use the Knaster--Tarski fixed-point theorem.
}
\rr{
\proofswitch
We show in the following theorem how to compute the throughput estimate; the proof
manipulates equations in order to be
able to use the Knaster--Tarski fixed-point theorem.
}
\begin{theorem}
\label{th:fixed-point}
The throughput can be obtained iteratively through a fixed-point
search, as $\ema{T} = \left( \avsupe{\lim_{n \rightarrow \ema{+\infty}} u_n} \right) ^{-1}$, where
\[ \left\{\begin{array}{ll}
u_0 = \frac{\ema{\mathit{rc}} + \ema{\mathit{cw}} + \ema{\mathit{cc}}}{\ema{\mathit{pw}} + \ema{\mathit{rc}} + \ema{\mathit{cw}} + \ema{\mathit{cc}}} \times \ema{P} &\\
u_{n+1} = \frac{u_n \avsupe{u_n}}{\ema{\mathit{pw}} + u_n \avsupe{u_n}} \times \ema{P} & \quad \text{for all } n \geq 0.
\end{array}\right.
\]
\end{theorem}
\rr{
\begin{proof}
\prooffp
\end{proof}
}
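A minimal Python sketch of this fixed-point search follows. It instantiates the success period\xspace with the contended expression derived above, using a closed-form solution of the expansion differential equation; the parameter values are illustrative assumptions, and this is one possible instantiation rather than the exact solver.

```python
import math

# Assumed parameter values (illustrative only).
P, pw, rc, cw, cc = 16, 50.0, 1.0, 4.0, 1.0
a = cc / (cc + cw)

def expansion(u):
    # Closed-form solution of e'(u) = cc*(cc/2 + e)/(cc+cw), e(1) = 0.
    return (cc / 2) * (math.exp(a * max(u - 1, 0)) - 1)

def success_period(u):
    # Contended expression: ((u+2)/(u+1)) * (cw + e(u)) + 2*cc.
    return (u + 2) / (u + 1) * (cw + expansion(u)) + 2 * cc

# Fixed-point iteration from the theorem:
u = (rc + cw + cc) / (pw + rc + cw + cc) * P     # u_0
for _ in range(200):
    s = success_period(u)
    u = u * s / (pw + u * s) * P                 # u_{n+1}

throughput = 1 / success_period(u)               # T = S(lim u)^{-1}
```

The iteration is monotone and bounded by \ema{P}, so it converges; in practice a few dozen iterations suffice for these values.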
\section{Constructive Approach}
\label{sec:cons}
In this section, we instantiate the probability distribution of the
parallel work\xspace with an exponential distribution. We therefore have
better knowledge of the behavior of the algorithm, particularly in
medium-contention cases, which allows us to follow a fine-grained
approach that studies each successful operation individually, together
with every {\it CAS}\xspace occurrence. We provide an elegant and efficient
solution that relies on a Markov chain analysis.
\vspace{-.3cm}
\subsection{Process}
\label{sec:mark-proc}
We have seen in Section~\ref{sec:fra-app-con} that
we split the contention domain into three modes:
no contention, medium contention, and high
contention.
\pr{We }{The main idea is to }start from a configuration with a given
number of threads \ema{\ct_{\mathit{rl}}} \rr{just }after a successful {\it CAS}\xspace, and describe
what will happen until the next successful {\it CAS}\xspace: what will be the mode
of the next success period\xspace, and\rr{ even} more precisely, which will be the
number of threads at the beginning of the next success period\xspace.
As a basis, we consider the execution that would occur without any
other thread exiting the parallel section\xspace (then entering the retry loop\xspace); we call this
execution the {\it internal execution}. This execution follows the
success period\xspace pattern described in Figure~\ref{fig:seq} (with an infinite
slack time\xspace if the system is not contended).
On top of this basic success period\xspace, we inject the threads that can exit the
parallel section\xspace, which has a double impact. On the one hand, they increase the
number of threads inside the retry loop\xspace for the next success period\xspace. On the other
hand, if the first thread that exits the parallel section\xspace starts its retry\xspace during the
slack time\xspace of the success period\xspace of the internal execution, then this thread will
succeed its {\it Access}\xspace, which is a {\it Read}\xspace, and will shrink the actual slack time\xspace
of the current success period\xspace.
According to the probability distribution of the arrival of new
threads, we can compute the probability for the next success period\xspace to
start with any number of threads. Expressing this stochastic
sequence of success periods\xspace in terms of Markov chains yields the throughput
{estimate}.
\vspp{-.4}
\subsection{Expansion}
\label{sec:mark-expa}
The expansion, as before, represents the additional time in the
execution time of a retry\xspace, due to the serialization of atomic
primitives. However, contrary to Section~\ref{sec:little-expa}, we
compute here this additional time in the current success period\xspace, according to
the number of threads \ema{\ct_{\mathit{rl}}} inside the retry loop\xspace at the beginning of the
success period\xspace.
The expansion only appears when the success period\xspace is highly contended, \textit{i.e.}\xspace
when we can find a continuous sequence of \cas{}'s\xspace all through the
success period\xspace.
The expansion is highly correlated with the way the cache coherence
protocol handles the exchange of cache lines between threads. We rely
on the experiments of the research report associated
with~\cite{ali-same}, which show that if several threads request
the same cache line in order to operate a {\it CAS}\xspace, while another thread
is currently executing a {\it CAS}\xspace, they all have an equal probability of
obtaining the cache line when the current {\it CAS}\xspace is over.
\newcommand{\dcasf}[2]{
\draw[pattern=north west lines, draw=none] (0,-#2*.5) rectangle (#1*1,-#2*.5+.5);
\draw[fill=red] (#1*1,-#2*.5) rectangle ++(1,.5) node[midway, align=center] {{\it CAS}\xspace};
}
\newcommand{\dcass}[2]{
\draw[fill=black!20!green] (#1*1,-#2*.5) rectangle ++(1,.5) node[midway, align=center] {{\it CAS}\xspace};
}
\newcommand{\dcw}[2]{
\draw[fill=blue!20!black!40!red!] (#1*1,-#2*.5) rectangle ++(2.5,.5) node[midway, align=center] {\ema{\mathit{cw}}};
}
\newcommand{\dpw}[3]{
\draw[fill=black!20] (#1*1,-#2*.5) rectangle ++(#3*1,.5) node[midway, align=center] {\ema{\mathit{pw}}};
}
\newcommand{\dwt}[3]{
\draw[densely dashed] (#1*1,-#2*.5+.5*.5) -- ++(#3*1,0);
}
\newcommand\arcod[2]{
\draw[draw=blue,very thick,fill=blue] #1 -- ++ (0,-#2) -- ++(.2,.35) -- ++(-.2,0);
}
\newcommand\arcou[2]{
\draw[draw=blue, very thick,fill=blue] #1 -- ++ (0,#2) -- ++(.2,-.35) -- ++(-.2,0);}
\pp{
\begin{figure}
\begin{center}
\hspace*{-0cm}\begin{minipage}{.4\textwidth}
\begin{center}
\begin{tikzpicture}[scale=.73]
\clip (-1*1,1*.5) rectangle (8*1,-9.5*.5);
\dcass{0}{0}\dpw{1}{0}{8}
\dcasf{1}{3}\dcw{2}{3}
\dcasf{2}{2}\dcw{3}{2}
\dcasf{3}{5}\dcw{4}{5}
\dcasf{4}{4}\dcw{5}{4}
\dcasf{5}{1}\dcw{6}{1}
\dcasf{6}{6}\dcw{7}{6}
\dwt{2+2.5}{3}{3.5}
\dwt{3+2.5}{2}{1.5}\dcass{3+1.5+2.5}{2}
\dwt{4+2.5}{5}{1.5}
\dwt{5+2.5}{4}{.5}
\draw[blue,densely dotted, thick] (5*1,0) -- ++(0,-6.4*.5) -- ++(-.5*1,0) -- ++(0,-.5*.5) ;
\draw[blue,densely dotted, thick] (6*1,0) -- ++(0,-6.9*.5);
\draw[blue,densely dotted, thick] (7*1,0) -- ++(0,-6.4*.5) -- ++(.5*1,0) -- ++(0,-.5*.5) ;
\draw[blue,densely dotted, thick] (1*1,0) -- ++(0,-6.9*.5);
\node[text width=1cm, align=center] at (4.5*1,-8*.5) { {\color{red} $\ema{\ct_{\mathit{rl}}}-4$}\\ {\color{black} vs}\\ {\color{green}1}};
\node[text width=1cm, align=center] at (6*1,-8*.5) { {\color{red} $\ema{\ct_{\mathit{rl}}}-5$}\\ {\color{black} vs}\\ {\color{green}2}};
\node[text width=1cm, align=center] at (7.5*1,-8*.5) { {\color{red} $\ema{\ct_{\mathit{rl}}}-6$}\\ {\color{black} vs}\\ {\color{green}3}};
\node[text width = 3cm, align=center] at (1.2*1,-7.8*.5) {\ema{\ct_{\mathit{rl}}} threads inside\\ the retry loop\xspace};
\node[draw=black,rounded corners=4,scale=.8] at (-.5*1,0.5*.5 ) {\tiny Thread 1};
\node[draw=black,rounded corners=4,scale=.8] at (-.5*1,-0.5*.5) {\tiny Thread 2};
\node[draw=black,rounded corners=4,scale=.8] at (-.5*1,-1.5*.5) {\tiny Thread 3};
\node[draw=black,rounded corners=4,scale=.8] at (-.5*1,-2.5*.5) {\tiny Thread 4};
\node[draw=black,rounded corners=4,scale=.8] at (-.5*1,-3.5*.5) {\tiny Thread 5};
\node[draw=black,rounded corners=4,scale=.8] at (-.5*1,-4.5*.5) {\tiny Thread 6};
\node[draw=black,rounded corners=4,scale=.8] at (-.5*1,-5.5*.5) {\tiny Thread 7};
\end{tikzpicture}
\end{center}
\captionof{figure}{Highly-contended execution\label{fig:mark-expa}}
\end{minipage}%
\hspace*{.35cm}\begin{minipage}{.65\textwidth}
\begin{center}
\begin{tikzpicture}[%
it/.style={%
rectangle,
text width=11em,
text centered,
minimum height=\pr{2}{3}em,
draw=black!50,
scale=.7,
}
]
\coordinate (O) at (0,0);
\pha{pcas}{O}{{\it CAS}\xspace}{\pr{2}{5}}{black!20!green}
\pha{sla}{pcas}{\wati{i}}{\pr{5}{8}}{black!20}
\pha{acc}{sla}{{\it CAS}\xspace}{\pr{2}{4}}{red}
\pha{cri}{acc}{\ema{\mathit{cw}}}{\pr{3}{4}}{blue!20!black!40!red!}
\pha{exp}{cri}{\reexp{i}}{\pr{4}{6}}{black!20}
\pha{fcas}{exp}{{\it CAS}\xspace}{\pr{2}{5}}{black!20!green}
\coordinate (intsl) at ($(sla.north west)+(0,.4*1)$);
\coordinate (intsr) at ($(sla.north east)+(0,.4*1)$);
\coordinate (inte) at ($(fcas.north east)+(0,.4*1)$);
\draw [decorate,decoration={brace,amplitude=10pt}]
(intsl) -- (intsr)
node [black,midway,yshift=15pt] {\scriptsize 0 new thread};
\draw [decorate,decoration={brace,amplitude=10pt}]
(intsr) -- (inte)
node [black,midway,yshift=15pt] {\scriptsize $k+1$ new threads};
\arcod{($(acc.north west)!.2! (acc.north east) + (0,.5*1)$)}{.4*1}
\arcod{($(exp.north west)!.1! (exp.north east) + (0,.5*1)$)}{.4*1}
\arcod{($(exp.north west)!.9! (exp.north east) + (0,.5*1)$)}{.4*1}
\coordinate (extsl) at ($(sla.south west)+(0,-.4*1)$);
\coordinate (extsr) at ($(sla.south east)+(0,-.4*1)$);
\draw [decorate,decoration={brace,mirror,amplitude=10pt},text width=11em, align=center] (extsl) -- (extsr)
node (caca) [black,midway,yshift=-13pt,xshift=26pt] {\scriptsize at least 1 new thread};
\coordinate (Ob) at ($(sla.south west)!.2!(sla.south) + (0,-1.5*1)$);
\pha{accb}{Ob}{{\it Read}\xspace}{\pr{2}{4}}{yellow}
\pha{crib}{accb}{\ema{\mathit{cw}}}{\pr{3}{4}}{blue!20!black!40!red!}
\pha{expb}{crib}{\reexp{i}}{\pr{4}{6}}{black!20}
\pha{fcasb}{expb}{{\it CAS}\xspace}{\pr{2}{5}}{black!20!green}
\arcou{(accb.south west)}{1.65*1}
\arcou{($(crib.south west)!.2! (crib.south east) + (0,-.6*1)$)}{.4*1}
\arcou{($(expb.south west)!.9! (expb.south east) + (0,-.6*1)$)}{.4*1}
\draw[very thick, draw=blue] (accb.south west) -- (fcasb.south east) -- (fcasb.north east) -- (accb.north west);
\coordinate (extnl) at ($(accb.south west)+(0,-.5*1)$);
\coordinate (extnr) at ($(fcasb.south east)+(0,-.5*1)$);
\draw [decorate,decoration={brace,mirror,amplitude=10pt},text width=11em, align=center]
(extnl) -- (extnr)
node [black,midway,yshift=-15pt] {\scriptsize $k$ new threads};
\draw[dotted, draw=black] (intsl) -- (extsl);
\draw[dotted, draw=black] (intsr) -- (extsr);
\draw[dotted, draw=black] (inte) -- (fcas.south east);
\draw[dotted, draw=black] (accb.north west) -- (extnl);
\draw[dotted, draw=black] (fcasb.north east) -- (extnr);
\node[text width=1.2cm, align=center] (inttext) at ($(pcas.west) + (-1*1,0)$) {Internal\\ execution};
\node[anchor=east] (eint) at ($(inttext.east) + (0.2,1.2*1)$) {\eve{int}};
\node[anchor=east] (eext) at ($(inttext.east) + (0.2,-2*1)$){\eve{ext}};
\path[->,out=90,in=-180] ($(inttext.north west)!.3!(inttext.north)$) edge (eint);
\path[->,out=-90,in=-180] ($(inttext.south west)!.3!(inttext.south)$) edge (eext);
\end{tikzpicture}
\end{center}
\captionof{figure}{Possible executions\label{fig:ex-eint-eext}}
\end{minipage}
\end{center}
\end{figure}
}
\rr{
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.6]
\clip (-1.2*1,1*.5) rectangle (8*1,-9*.5);
\dcass{0}{0}\dpw{1}{0}{8}
\dcasf{1}{3}\dcw{2}{3}
\dcasf{2}{2}\dcw{3}{2}
\dcasf{3}{5}\dcw{4}{5}
\dcasf{4}{4}\dcw{5}{4}
\dcasf{5}{1}\dcw{6}{1}
\dcasf{6}{6}\dcw{7}{6}
\dwt{2+2.5}{3}{3.5}
\dwt{3+2.5}{2}{1.5}\dcass{3+1.5+2.5}{2}
\dwt{4+2.5}{5}{1.5}
\dwt{5+2.5}{4}{.5}
\draw[blue,densely dotted, thick] (5*1,0) -- ++(0,-6.4*.5) -- ++(-.5*1,0) -- ++(0,-.5*.5) ;
\draw[blue,densely dotted, thick] (6*1,0) -- ++(0,-6.9*.5);
\draw[blue,densely dotted, thick] (7*1,0) -- ++(0,-6.4*.5) -- ++(.5*1,0) -- ++(0,-.5*.5) ;
\draw[blue,densely dotted, thick] (1*1,0) -- ++(0,-6.9*.5);
\node[text width=2cm, align=center] at (4.5*1,-8*.5) { {\color{red} $\ema{\ct_{\mathit{rl}}}-4$}\\ {\color{black} vs}\\ {\color{green}1}};
\node[text width=2cm, align=center] at (6*1,-8*.5) { {\color{red} $\ema{\ct_{\mathit{rl}}}-5$}\\ {\color{black} vs}\\ {\color{green}2}};
\node[text width=2cm, align=center] at (7.5*1,-8*.5) { {\color{red} $\ema{\ct_{\mathit{rl}}}-6$}\\ {\color{black} vs}\\ {\color{green}3}};
\node[text width = 3cm, align=center] at (1.2*1,-7.8*.5) {\ema{\ct_{\mathit{rl}}} threads inside\\ the retry loop\xspace};
\node[draw=black,rounded corners=4,scale=.8] at (-.6*1,0.5*.5 ) {\small Thread 1};
\node[draw=black,rounded corners=4,scale=.8] at (-.6*1,-0.5*.5) {\small Thread 2};
\node[draw=black,rounded corners=4,scale=.8] at (-.6*1,-1.5*.5) {\small Thread 3};
\node[draw=black,rounded corners=4,scale=.8] at (-.6*1,-2.5*.5) {\small Thread 4};
\node[draw=black,rounded corners=4,scale=.8] at (-.6*1,-3.5*.5) {\small Thread 5};
\node[draw=black,rounded corners=4,scale=.8] at (-.6*1,-4.5*.5) {\small Thread 6};
\node[draw=black,rounded corners=4,scale=.8] at (-.6*1,-5.5*.5) {\small Thread 7};
\end{tikzpicture}
\end{center}
\caption{Highly-contended execution\label{fig:mark-expa}}
\end{figure}
}
We draw an illustrative example in Figure~\ref{fig:mark-expa}. The
green \cas{}'s\xspace are successful while the red \cas{}'s\xspace fail. To lighten
the picture, we hide what happened to the threads before they
experience a failed {\it CAS}\xspace. The horizontal dashed lines represent the periods
during which a thread wants to access the data in order to operate a {\it CAS}\xspace
but has to wait because another thread owns the data in exclusive
mode.
We can observe in this example that the first thread that accesses
the data structure\xspace is not the thread whose operation returns.
We are given that \ema{\ct_{\mathit{rl}}} threads are inside the retry loop\xspace at the end of the
previous successful {\it CAS}\xspace, and we only consider those threads. When
such a thread executes a {\it CAS}\xspace for the first time, this {\it CAS}\xspace is
unsuccessful. The thread was in the retry loop\xspace when the successful {\it CAS}\xspace was
executed, so it has read a value that is no longer up-to-date.
However, this failed {\it CAS}\xspace will bring the current version of
the value (to compare-and-swap) to the thread, a value that will remain
up-to-date until a successful {\it CAS}\xspace occurs.
So we first have a sequence of failed \cas{}'s\xspace until the first thread
that operated its {\it CAS}\xspace within the current success period\xspace finishes its critical work\xspace. At
this point, there exists a thread that is executing a {\it CAS}\xspace. When this
{\it CAS}\xspace is finished, some threads compete to obtain the cache line. We
have two bags of competing threads: in the first bag, the thread that
just ended its critical work\xspace is alone, while in the second bag, there are all
the threads that were in the retry loop\xspace at the beginning of the success period\xspace, and
did not operate a {\it CAS}\xspace yet. The other, non-competing, threads are
running their critical work\xspace and do not yet want to access the data.
As described before, every thread has the same probability to become the next
owner of the cache line. If a thread from the first bag is drawn, then
the {\it CAS}\xspace will be successful and the success period\xspace ends. Otherwise, the {\it CAS}\xspace is
a failure, and we iterate at the end of this failed {\it CAS}\xspace. However, the
thread that just failed its {\it CAS}\xspace is now executing its critical work\xspace,
and does not request a new {\it CAS}\xspace until this work has been done;
it is thus no longer in the second bag. In addition, the thread that
had executed its {\it CAS}\xspace after the thread of the first bag is now back
from its critical work\xspace and falls into the first bag. The process iterates until
a thread is drawn from the first bag.
As a remark, note that we do not consider threads that are not in the
retry loop\xspace at the beginning of the success period\xspace since even if they come back from
the parallel section\xspace during the success period\xspace, their {\it Read}\xspace will be delayed and their {\it CAS}\xspace is
likely to occur after the end of the success period\xspace.
Theorem~\ref{th:mark-expa}\pp{, proved in Appendix~\ref{app:aliexp},}
gives the explicit formula for the expansion\pr{.}{, based on the previous
explanations.}
\newcommand{\prsu}[1]{\ema{p_{#1}}}
\begin{theorem}
\label{th:mark-expa}
The expected time between the end of the critical work\xspace of the first
thread that operates a {\it CAS}\xspace in the success period\xspace and the beginning of a
successful {\it CAS}\xspace is given by:\vspp{0}
\[ \reexp{\ema{\ct_{\mathit{rl}}}} = \lceil\ema{\mathit{cw}}/\ema{\mathit{cc}}\rceil\ema{\mathit{cc}} - \ema{\mathit{cw}} +
\sum_{i=1}^{\ema{\ct_{\mathit{com}}}} \frac{i(i-1)}{\left(\ema{\ct_{\mathit{com}}}\right)^i} \frac{(\ema{\ct_{\mathit{com}}}-1)!}{(\ema{\ct_{\mathit{com}}}-i)!} \times \ema{\mathit{cc}}, \pp{\quad\text{where }\ema{\ct_{\mathit{com}}} = \ema{\ct_{\mathit{rl}}} - \lceil\ema{\mathit{cw}}/\ema{\mathit{cc}}\rceil +1.}\]
\rr{where $\ema{\ct_{\mathit{com}}} = \ema{\ct_{\mathit{rl}}} - \lceil\ema{\mathit{cw}}/\ema{\mathit{cc}}\rceil +1$.}
\end{theorem}
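As an illustrative aside (not part of the original development), the formula of Theorem~\ref{th:mark-expa} is straightforward to evaluate numerically; in the sketch below, the function name and the tested parameter values are ours.

```python
import math

def expansion(t_rl, cw, cc):
    # Evaluate the theorem's formula: expected time between the end of the
    # critical work of the first CAS-ing thread and the start of a
    # successful CAS, for t_rl threads in the retry loop.
    t_com = t_rl - math.ceil(cw / cc) + 1   # number of competing threads
    total = math.ceil(cw / cc) * cc - cw
    for i in range(1, t_com + 1):
        total += (i * (i - 1) / t_com**i) \
                 * (math.factorial(t_com - 1) / math.factorial(t_com - i)) * cc
    return total
```

Note that when a single thread competes ($\ema{\ct_{\mathit{com}}}=1$), every term of the sum vanishes and the expectation reduces to the alignment term $\lceil\ema{\mathit{cw}}/\ema{\mathit{cc}}\rceil\ema{\mathit{cc}} - \ema{\mathit{cw}}$.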
\rr{
\begin{proof}
\proofaliexp
\end{proof}
}
\subsection{Formalization}
The parallel work\xspace follows an exponential distribution, whose mean is
\ema{\mathit{pw}}. More precisely, if a thread starts a parallel section\xspace at the instant $t_1$,
the probability distribution of the execution time of the parallel section\xspace is
\pr{$ t \mapsto \lambda \expu{-\lambda (t-t_1)} \indi{[t_1,\ema{+\infty}[}{t}$, where $\lambda = 1/\ema{\mathit{pw}}. $}
{\[ t \mapsto \lambda \expu{-\lambda (t-t_1)} \indi{[t_1,\ema{+\infty}[}{t}, \text{ where } \lambda = \frac{1}{\ema{\mathit{pw}}}. \]}
This probability distribution is memoryless, which implies that the
threads that are executing their parallel section\xspace cannot be distinguished: at a
given instant, the probability distribution of the remaining execution
time is the same for all threads in the parallel section\xspace, regardless of when the parallel section\xspace began. For all
threads, it is defined by:
\pr{ $ t \mapsto \lambda \expi{-\lambda t}$, where $\lambda = 1/\ema{\mathit{pw}}. $}
{\[ t \mapsto \lambda \expu{-\lambda t}, \text{ where } \lambda = \frac{1}{\ema{\mathit{pw}}}. \]}
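The memoryless property invoked here can be checked numerically; the following sketch (with parameter values of our choosing) verifies that $\Pr(T>s+t \mid T>s)=\Pr(T>t)$ for the exponential distribution.

```python
import math

def tail(t, pw):
    # Survival function P(T > t) of the exponential parallel work, lambda = 1/pw.
    return math.exp(-t / pw)

# The elapsed time s is irrelevant to the remaining execution time:
# P(T > s+t | T > s) = P(T > t).
pw, s, t = 50.0, 30.0, 10.0
conditional = tail(s + t, pw) / tail(s, pw)
assert abs(conditional - tail(t, pw)) < 1e-12
```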
For the behavior in the retry loop\xspace, we rely on the same approximation as in
the previous section, \textit{i.e.}\xspace when a successful thread exits its
retry loop\xspace, the remaining execution time of the retry\xspace of every other thread
that is still in the retry loop\xspace is uniformly distributed between $0$ and the
execution time of a whole retry\xspace. We have seen that the expectation of
this remaining time is the execution time of a retry\xspace divided
by one plus the number of threads inside the retry loop\xspace. Here, we assume that
a thread will start a retry\xspace at this time.
This implies another kind of memoryless property: the behavior of a
thread that is in the retry loop\xspace does not depend on the moment that it
entered its retry loop\xspace.
To tackle the problem of estimating the throughput of such a system,
we use an approach based on Markov chains. We study the behavior of
the system over time, step by step: a state of the Markov chain
represents the state of the system when the current success period\xspace began (\textit{i.e.}\xspace
just after a successful {\it CAS}\xspace) and (thus) the system changes state at
the end of every successful {\it CAS}\xspace.
Given the current state, we can compute the probability of reaching
any other state at the beginning of the next success period\xspace.
In addition, the two memoryless properties make the description of a
state simple: the number of threads inside the retry loop\xspace when the
current success period\xspace begins fully characterizes the system.
We recall that \ema{\ct_{\mathit{rl}}} is the number of threads inside the retry loop\xspace when the
success period\xspace begins.
The Markov chain is strongly connected with \ema{\ct_{\mathit{rl}}}, since it is composed of
\ema{P} states $\sta{0}, \sta{1}, \dots, \sta{\ema{P}-1}$, where, for all $i
\in \inte{\ema{P}-1}$, the success period\xspace is in state \sta{i} iff $\ema{\ct_{\mathit{rl}}}=i$. For all
$(i,j) \in \inte{\ema{P}-1}^2$, $\pro{\sta{i} \rightarrow \sta{j}}$
denotes the probability that a success characterized by \sta{j}
follows a success in state \sta{i}.
$\wati{\sta{i} \rightarrow \sta{j}}$ denotes the
slack time\xspace that passed while the system has gone from state \sta{i} to
state \sta{j}.
This slack time\xspace can be expressed based on the slack time\xspace \wati{i} of the
internal execution, \textit{i.e.}\xspace the execution that involves only the $i$
threads of the retry loop\xspace and ignores the other threads (see
Section~\ref{sec:mark-proc}). \rr{Recall that we consider that the slack time\xspace of
the internal execution with $0$ thread is infinite, since no thread will
access the data structure\xspace.}
In the same way, we denote by \rw{i} the completion time\xspace of the internal
execution, hence $\rw{i} = \ema{\mathit{cc}} + \ema{\mathit{cw}} + \reexp{i} + \ema{\mathit{cc}}$.
\newcommand{\intgen}[1]{\ema{\mathcal{I}_{\mathrm{#1}}}}
\newcommand{\intgen{noc}}{\intgen{noc}}
\newcommand{\intgen{mid}}{\intgen{mid}}
\newcommand{\intgen{hi}}{\intgen{hi}}
\newcommand{\ema{i_{\mathrm{hi}}}}{\ema{i_{\mathrm{hi}}}}
We have seen that the level of contention (mode) is determined by
\ema{\ct_{\mathit{rl}}}, hence the interval \inte{\ema{P}-1} can be partitioned into
\pr{$\inte{\ema{P}-1} = \intgen{noc} \cup \intgen{mid} \cup \intgen{hi}$,}
{\[ \inte{\ema{P}-1} = \intgen{noc} \cup \intgen{mid} \cup \intgen{hi}, \]}
where the partitions correspond to the different contention levels.
So, by definition, $\intgen{noc} = \{ 0 \}$, and for all $i \in \intgen{noc}
\cup \intgen{mid}$, $\reexp{i} = 0$ (see Section~\ref{sec:fra-app-con}).
The success period\xspace is highly contended, \textit{i.e.}\xspace we have a continuous sequence of
\cas{}'s\xspace in the success period\xspace, if the sum of the execution times of all the
\cas{}'s\xspace that need to be operated exceeds the critical work\xspace. Hence
$\intgen{hi} = \inte[\ema{i_{\mathrm{hi}}}]{\ema{P}-1}$, where
\pr{$\ema{i_{\mathrm{hi}}} = \min \{ i \in \inte[1]{\ema{P}-1} \; | \; i \times \ema{\mathit{cc}} > \ema{\mathit{cw}} \}$.}
{\[ \ema{i_{\mathrm{hi}}} = \min \{ i \in \inte[1]{\ema{P}-1} \; | \; i \times \ema{\mathit{cc}} > \ema{\mathit{cw}} \}. \]}
In addition, as the sequence of \cas{}'s\xspace is continuous when the
contention is high, the slack time\xspace is null when the success period\xspace is highly
contended, \textit{i.e.}\xspace, for all $i \in \intgen{hi}$, $\wati{i} = 0$, and \textit{a fortiori}\xspace,
$\wati{\sta{i} \rightarrow \sta{\star}} = 0$.
Otherwise, the success period\xspace is in medium contention, hence $\intgen{mid} =
\inte[1]{\ema{i_{\mathrm{hi}}}-1}$. Moreover, if $i \in \intgen{mid}$, $\wati{i} > 0$, and
$\reexp{i} = 0$, because the \cas{}'s\xspace synchronized during the previous
success period\xspace and will not collide any more in the current success period\xspace.
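The partition can be stated directly as code; in this illustrative sketch (function names are ours), a number of threads $i$ in the retry loop\xspace is classified into one of the three contention levels.

```python
import math

def i_hi(cw, cc):
    # Smallest i >= 1 such that i * cc > cw: start of the high-contention interval.
    return math.floor(cw / cc) + 1

def contention_level(i, cw, cc):
    # Classify the number of threads i inside the retry loop.
    if i == 0:
        return "noc"
    return "hi" if i >= i_hi(cw, cc) else "mid"
```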
\pp{Everything is now in place to be able to obtain the stationary
distribution of the Markov chain, and in turn to compute the
throughput and the failure rate estimates. The reasoning that leads
to the computation of the probability of going from state \sta{i} to
state \sta{i+k} can be roughly summarized by
Figure~\ref{fig:ex-eint-eext}, where we start from an internal
execution with $i$ threads inside the retry loop\xspace and the blue arrows
represent the threads that exit the parallel section\xspace. Two non-overlapping events
can then potentially occur: either (event \eve{ext}) the first thread exiting the parallel section\xspace
starts within $[0,\wati{i}[$, \textit{i.e.}\xspace in the slack time\xspace of
the internal execution, or (event \eve{int}) the first thread entering the retry loop\xspace
starts after $t=\wati{i}$. The details can be
found in Appendix~\ref{app:markov}.
}
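As a toy illustration of this last step (not the paper's actual chain, whose transition probabilities are derived in the appendix), a stationary distribution can be obtained by power iteration over the transition matrix:

```python
def stationary(P, iters=500):
    # Power-iterate a row-stochastic transition matrix P (a list of rows)
    # until the state distribution converges to the stationary one.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A two-state example whose stationary distribution is (5/6, 1/6).
pi = stationary([[0.9, 0.1], [0.5, 0.5]])
```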
\rr{\wholemarkov}
\section{Experiments}
\label{sec:xp}
To validate our analysis results, we use two main types of lock-free
algorithms.
First, we consider a set of basic algorithms where operations
can be completed with a single successful {\it CAS}\xspace.
This set of algorithms includes: (i) synthetic
designs, that cover the design space of possible lock-free data structures;
(ii) several fundamental designs of data structure\xspace operations such as lock-free
stacks~\cite{lf-stack} (\FuncSty{Pop}\xspace, \FuncSty{Push}\xspace),
queues~\cite{lf-queue-michael} (\FuncSty{Dequeue}\xspace), counters~\cite{count-moir}
(\FuncSty{Increment}\xspace, \FuncSty{Decrement}\xspace).
As a second step, we consider more advanced lock-free operations that
involve helping mechanisms, and show how to use our analysis in this
context. Finally, in order to highlight the benefits of the analysis
framework, we show how it can be applied to i) determine a beneficial back-off
strategy and ii) optimize the memory management scheme used by a data structure\xspace,
in the context of an application.
We also give insights about the strengths of our two approaches.
\pr{The }{On the one hand, the }constructive approach exhibits better predictions
due to the tight estimation of the failing retries. On the other hand, the
average-based\xspace approach is applicable to a broader spectrum of algorithmic designs
as it leaves room to abstract complicated algorithmic designs.
\vspp{-.2}
\subsection{Setting}
We have conducted experiments on an Intel ccNUMA workstation
system. The system is composed of two sockets equipped with
Intel Xeon E5-2687W v2 CPUs\pp{.}\rr{ with frequency band
\ghz{1.2\text{-}3.4.} The physical cores have private L1, L2 caches
and they share an L3 cache, which is \megb{25}.} In a socket, the
ring interconnect provides L3 cache accesses and core-to-core
communication. \rr{Due to the bi-directionality of the ring
interconnect, uncontended latencies for intra-socket communication
between cores do not show significant variability.}\pp{Threads are
pinned to a single socket to minimize non-uniformity in {\it Read}\xspace and {\it CAS}\xspace
latencies. }%
\pp{The methodology in~\cite{david-emp-atom} is used to measure the
{\it CAS}\xspace and {\it Read}\xspace latencies, while the parallel work\xspace is implemented
by a for-loop of {\it Pause} instructions. }%
\rr{Our model assumes uniformity in the {\it CAS}\xspace and {\it Read}\xspace latencies on the
shared cache line. Thus, threads are pinned to a single socket to
minimize non-uniformity in {\it Read}\xspace and {\it CAS}\xspace latencies. In the
experiments, we vary the number of threads between 4 and 8 since the
maximum number of threads that can be used in the experiments is
bounded by the number of physical cores that reside in one socket. }%
We show the experimental results with 8 threads.
In all figures, the y-axis shows both the throughput\rr{ values}, \textit{i.e.}\xspace
number of operations completed per second, and the ratio of failing to
successful retries (multiplied by $10^6$, for readability),
while the mean of the exponentially distributed parallel work\xspace \ema{\mathit{pw}} is
represented on the x-axis.
The number of failures per success in the average-based\xspace
approach is computed as $\ema{\xoverline[.7]{\ct_{\mathit{rl}}}}-1$ and \pr{in the constructive approach
by stochastically counting the failing \cas{}'s\xspace inside a success period\xspace
(see Appendix~\ref{sec:nbf}).}{ is described in Section~\ref{sec:nbf}
for the constructive approach.}
\rr{\figsynthcst}
We have also added a straightforward upper bound as a baseline
approach,\rr{ which is} defined as the minimum of $1/(\ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}})$ (two
successful retries\xspace cannot overlap) and $\ema{P}/(\ema{\mathit{pw}}+\ema{\mathit{rc}}+\ema{\mathit{cw}}+\ema{\mathit{cc}})$ (a
thread can succeed only once in each work loop\xspace).
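This baseline is simple enough to state as code; in the hedged sketch below (names ours), all durations are expressed in the same time unit.

```python
def throughput_upper_bound(P, pw, rc, cw, cc):
    # Straightforward upper bound: two successful retries cannot overlap,
    # and each thread succeeds at most once per work loop.
    return min(1.0 / (rc + cw + cc), P / (pw + rc + cw + cc))
```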
\vspp{-.2}
\subsection{Basic Data Structures}
\pr{Firstly, we consider lock-free operations that can be completed
with a single successful {\it CAS}\xspace.
We provide predictions, on the one
hand, on a set of synthetic tests that have been constructed to
abstract different possible design patterns of lock-free data
structures (value of \ema{\mathit{cw}}) and different application contexts (value
of \ema{\mathit{pw}}), and, on the other hand, on the well-known Treiber's
stack. The results are depicted in Appendix~\ref{app:xp-basic}.
}
{
\synthtreib
\figsynthpoi
\figsynthconst
\subsubsection{Synthetic Tests}
\label{sec:synt_tests}
\synth
\subsubsection{Treiber's Stack}
\figtreib
\treib
}
\subsection{Towards Advanced Data Structure Designs}
\label{sec:advanced-ds}
Advanced lock-free operations generally require multiple pointer
updates that cannot be done with a single {\it CAS}\xspace.
One way to design such operations, in a lock-free manner, is to
use helping mechanisms: an inconsistency will be fixed eventually by
some thread.
Here we consider two data structures\xspace that apply immediate helping, the queue
from~\cite{lf-queue-michael} and the deque designed in~\cite{deq}. In the queue
experiment (Figure~\ref{fig:enqueue}), we run the \FuncSty{Enqueue}\xspace operation on the
queue with and without memory management; in the deque\xspace experiment, each
thread is dedicated to an end of the deque\xspace (equally distributed), while
we vary the proportion of push operations (colors in Figure~\ref{fig:deq}).
\pp{More details about the implementations and the throughput estimate
obtained through a slight modification of the average-based\xspace approach can be
found in Appendix~\ref{app:ads}.
\pp{\figsidebyside{h!}{.5}{enqueue_pp_disc}{Enqueue on MS queue}{enqueue}{.5}{deq_pp}{Operations on deque\xspace}{deq}}
\rr{
\sumads
\subsubsection{Expected Expansion for the Advanced Data Structures}
\fulladsexp
\subsubsection{Expected Slack Time for the Advanced Data Structures}
\fulladswati
\subsubsection{Enqueue on Michael-Scott Queue}
\fullenq
\subsubsection{Deque\xspace}
\label{sec:xp-deq}
\fulldeq}
\subsection{Applications}
\pr{\figsidebyside{b!}{.5}{back-offs-new}{Performance impact of our back-off tunings}{fig:bos}{.5}{adaptive_pp_disc}{Adaptive MM with varying mean \ema{\mathit{pw}}}{fig:mm_adaptive}} {
\begin{figure}[h!]
\includegraphics[width=\textwidth]{back-offs-new}
\captionof{figure}{Performance impact of our back-off tunings \label{fig:bos}}
\end{figure}
}
\subsubsection{Back-off Optimizations}
When the parallel work\xspace is known, we can deduce from our analysis a simple and
efficient back-off strategy: as we are able to estimate the value of
\ema{\mathit{pw}} for which the throughput is maximum, we simply back off for the time
difference between the peak \ema{\mathit{pw}} and the actual \ema{\mathit{pw}}.
\pr{In Appendix~\ref{app:bo}, we compare this back-off strategy
against widely known strategies, namely exponential and linear, on a
synthetic workload. }%
{\fullbosy}%
In Figure~\ref{fig:bos}, we apply our constant
back-off on a Delaunay triangulation application~\cite{caspar},
provided with several workloads. The application uses a stack in two
phases; the first phase pushes elements onto the stack without
delay. We are able to estimate a corresponding back-off time, and we
plot the results by normalizing the execution time of our back-offed
implementation with the execution time of the initial implementation.
A measure or an estimate of \ema{\mathit{pw}} is not always available (and could
change over time, see next section); therefore, we also propose an
adaptive strategy: we incorporate in the data structure\xspace a monitoring routine that tracks
the number of failed retries\xspace over a sliding window. As our analysis
computes an estimate of the number of failed retries\xspace as a function of
\ema{\mathit{pw}}, we are able to estimate the current \ema{\mathit{pw}}, and hence the
corresponding back-off time as before.
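A minimal sketch of such a monitoring routine (ours, not the implementation used in the experiments) tracks the failed-retry rate over a sliding window of recent retries; recovering the current \ema{\mathit{pw}} is then a one-dimensional inversion of the analytical failures-per-success curve.

```python
from collections import deque

class FailureMonitor:
    # Sliding-window estimate of the failed-retry rate.
    def __init__(self, window=1024):
        self.events = deque(maxlen=window)  # 1 = failed retry, 0 = success

    def record(self, failed):
        self.events.append(1 if failed else 0)

    def failure_rate(self):
        return sum(self.events) / len(self.events) if self.events else 0.0
```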
We test our adaptive back-off mechanism on a workload originated
from~\cite{taq-se}, where global operators of exchanges for financial
markets gather data of trades with a microsecond accuracy. We assume
that the data comes from several streams, each
of them being associated with a thread. All threads enqueue the
elements that they receive in a concurrent queue, so that they can be
later aggregated. We extract from the original data a trade stream
distribution that we use to generate similar streams that reach the
same thread; varying the number of streams to the same thread leads to
different workloads. The results, represented as the normalized
throughput (compared to the initial throughput) of trades that are enqueued when the
adaptive back-off is used, are plotted in Figure~\ref{fig:bos}.
For any number of threads, the queue is not
contended on workload s3; hence our improvement is either small or
slightly negative. On the contrary, workload s50 contends the
queue, and we achieve a very significant improvement.
\rr{\begin{figure}[h!]
\begin{center}
\includegraphics[width=.7\textwidth]{stack_back_rr}
\end{center}
\caption{Back-off Tuning on Treiber's Stack\label{fig:bo-synt}}
\end{figure}}
\subsubsection{Memory Management Optimization}
Memory Management (MM) is an inseparable part of dynamic concurrent
data structures\xspace. In contrast to lock-based implementations, a node that has been
{\it removed} from a lock-free data structure\xspace can still be accessed by other threads,
\textit{e.g.}\xspace if they have been delayed. Collective decisions are thus required
in order to {\it reclaim} a node in a safe manner.
A well-known solution to deal with this problem is the hazard pointers
technique~\cite{Mic04b}. \pp{In an implementation of such a design, each
thread lists the nodes that it accesses and the nodes that it has
removed. When the number of nodes it has removed reaches a threshold,
it reclaims those of its removed nodes that are not listed as
accessed by any thread.}
\newcommand\nodhp[1]{\ema{\mathcal{N}_{#1}}}
\newcommand\delhp[1]{\ema{\mathcal{D}_{#1}}}
\rr{A traditional design to implement this technique works as follows.
Each thread \thr{i}, maintains two lists of nodes: \nodhp{i} contains
the nodes that \thr{i} is currently accessing, and \delhp{i} stores
the nodes that have been removed from the data structure\xspace by \thr{i}. Once a
threshold on the size of \delhp{i} is reached, \thr{i} calls a routine
that: (i) collects the nodes that are accessed by any other thread, \textit{i.e.}\xspace
\nodhp{j} for $j \neq i$ (collection phase), and (ii) for each element
in \delhp{i}, checks whether someone is accessing the element, \textit{i.e.}\xspace
whether it belongs to $\cup_{j \neq i} \nodhp{j}$, and if not,
reclaims it (reclamation phase).}
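The reclamation phase described here boils down to a set difference; the sketch below is a deliberate simplification (it ignores all synchronization and the per-thread list maintenance) that conveys only the core check.

```python
def reclamation_phase(deleted_i, accessed_lists):
    # For each node removed by thread i, reclaim it iff no other thread
    # currently lists it as accessed; keep the rest for a later pass.
    accessed = set().union(*accessed_lists) if accessed_lists else set()
    reclaimed = [n for n in deleted_i if n not in accessed]
    kept = [n for n in deleted_i if n in accessed]
    return reclaimed, kept
```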
The main goal of our adaptive MM scheme is to distribute this
extra work in a way that largely mitigates the loss in performance,
knowing that additional work can be an advantage under high contention
(see previous section).
The optimization is based on two main
modifications.
\pr{First, we divide the reclamation phase of the traditional MM scheme
into quanta (equally-sized chunks), whose finer granularity allows for
accurate back-off times. Second, we track continuously the contention
level in the same way as our adaptive back-off. See Appendix~\ref{app:mm}.}%
{First, the granularity has to be finer, since the
additional quantum that the back-off mechanism uses has to
be rather small (hundreds of cycles for a queue). Second, we need to
track the contention level on the data structure\xspace in order to be able to inject the work
at a proper execution point.}
\rr{\figcompmm}
\rr{\begin{figure}[t!]
\begin{center}
\includegraphics[width=.9\textwidth]{adaptive_rr_disc}
\end{center}
\caption{Adaptive MM with varying mean \ema{\mathit{pw}}\label{fig:mm_adaptive}}
\end{figure}}
\rr{\bothmm}
\pr{%
We emulate the behavior of many scientific applications that are
built upon a pattern of alternating phases, which are either
communication-intensive (synchronization phase) or
computation-intensive. Here we assume a synchronization ensured
through a shared data structure\xspace, hence the communication-intensive phases
correspond to a high access rate to the data structure\xspace, while the data structure\xspace is accessed
at a low rate during a computation-intensive phase. The parallel work\xspace still
follows an exponential distribution of mean \ema{\mathit{pw}}, but \ema{\mathit{pw}} varies in a
sinusoidal manner with time.
To study also the impact of the continuity of the change in \ema{\mathit{pw}}, \ema{\mathit{pw}}
is set as a step approximation of a sine function. Thus, two
additional parameters rule the experiment: the period of the
oscillating function represents the length of the phases, and the
number of steps within a period depicts how continuous are the phase
changes.
}%
{\adaptsine}
In Figure~\ref{fig:mm_adaptive}, we compare our approach with the
traditional implementation for different periods of the sine function,
on the \FuncSty{Dequeue}\xspace of the Michael-Scott queue~\cite{lf-queue-michael}.
The adaptive MM, which relies on the analysis presented
in this paper, outperforms the traditional MM
because it provides an advantage both under low contention, due to the
costless (since delayed) invocation of the MM, and under high
contention, due to the back-off effect.
\pp{\vspace{-.2cm}}
\section{Conclusion}
\label{sec:conc}
In this paper we have presented two analyses for calculating the
performance of lock-free data structures\xspace in dynamic environments. The first
analysis has its roots in queuing theory, and gives the flexibility to
cover a large spectrum of configurations. The second analysis makes
use of Markov chains to exhibit a stochastic execution; it gives
better results, but it is restricted to simpler data structures\xspace and exponentially
distributed parallel work\xspace.
\rr{We have evaluated the quality of the prediction on basic data structures\xspace like
stacks, as well as more advanced data structures\xspace like optimized queues and
deques\xspace. Our results can be directly used by algorithmicians to gain a
better understanding of the performance behavior of different designs,
and by experimentalists to rank implementations within a fair
framework. }%
We have\rr{ also} shown how to use our results to tune applications using
lock-free code. These tuning methods include: (i) the calculation of
simple and efficient back-off strategies, whose applicability is
illustrated in application contexts; (ii) a new adaptive memory
management mechanism that acclimates to a changing environment.
The main differences between the data structures\xspace of this paper and linked lists,
skip lists and trees occur when the size of the data structure
grows. With large sizes, the performance is dominated by the traversal
cost that is ruled by the cache parameters. The reduction in the size
of the data structure decreases the traversal cost, which in turn
increases the probability of encountering an on-going {\it CAS}\xspace operation
that delays the threads traversing the link.
The expansion, which can additionally be aggravated by
helping mechanisms, then appears as the main performance-degrading
factor.
While the analysis becomes easier for high degrees of parallelism
(large data structure\xspace size), being able to describe the behavior of lock-free
data structures as the degree of parallelism changes constitutes the
main challenge of our future work.
\section{Introduction}
Adversarial robustness is an important field that protects models from adversarial attacks, since even highly accurate deep neural networks can be vulnerable to small input perturbations. For example, the projected gradient descent (PGD) attack can degrade ResNet50 from 95.25\% accuracy to 0.00\% using a small perturbation of $\ell_\infty$ magnitude 8/255 on CIFAR10, and from 76.13\% accuracy to around 0.01\% under the same attack on ImageNet \cite{madry2018towards}. This vulnerability is observed not only in computer vision but also in natural language processing, e.g. the word-level attack TextFooler \cite{jin2020bert} achieves a 90\% success rate in fooling BERT \cite{kenton2019bert} on the SST-2 dataset, as reported in \cite{zeng2021openattack}. At the core of adversarial robustness, one needs the adversarial perturbation to construct adversarial examples, which are used to evaluate a model's robustness and to defend against attacks via adversarial training (which sometimes improves the accuracy as well).
Mathematically speaking, the adversarial perturbation is the solution to a constrained maximization problem in the input space:
\begin{align}
\bm{p}^*=\text{arg max}_{\bm{p}\in \mathcal{B}} L(f(\bm{x}+\bm{p};\mathbf{W}),\bm{y})
\label{eq:maximization adv}
\end{align}
where $\bm{x}$ is the data feature, $\bm{y}$ is the label, $\mathbf{W}$ is the model parameters, $f$ is the model function, $L$ is the loss function, and $\bm{p}^*$ is the adversarial perturbation in the attack domain $\mathcal{B}$, e.g. the $\ell_\infty$ ball with some radius $\epsilon$ known as the attack magnitude. Many optimizers (known as attack methods), including the fast gradient sign method (FGSM; \cite{goodfellow2014explaining}) and PGD, have been proposed to iteratively compute $\frac{\partial L}{\partial \bm{p}}$ and solve \eqref{eq:maximization adv}. These methods have achieved remarkable empirical success over the years.
\begin{algorithm}[!htb]
\caption{Adversarial perturbation by optimization}
\label{alg:adv}
\begin{algorithmic}
\STATE {\bfseries Input:} data $\bm{x}$, label $\bm{y}$, model parameter $\mathbf{W}$
\FOR{$k=1,2,\cdots,K$}
\STATE forward propagation $\hat\bm{y}=f(\bm{x}+\bm{p};\mathbf{W})$
\STATE backward propagation $\nabla_{\bm{p}} L=\frac{\partial L(\hat\bm{y},\bm{y})}{\partial \bm{p}}$
\STATE input update $\bm{p}=\text{projection}_{\mathcal{B}}(\bm{p}+\eta_k\nabla_{\bm{p}} L)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
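As a dependency-free illustration of Algorithm~\ref{alg:adv} (a toy sketch, not the library implementations benchmarked later), the code below runs the sign-based $\ell_\infty$ update on a quadratic loss whose gradient is available in closed form.

```python
def pgd_linf(x, grad_fn, eps, eta, steps):
    # Maximize the loss over an l_inf ball of radius eps around x:
    # p <- clip(p + eta * sign(grad L(x + p)), -eps, eps).
    p = [0.0] * len(x)
    for _ in range(steps):
        g = grad_fn([xi + pi for xi, pi in zip(x, p)])
        p = [max(-eps, min(eps, pi + eta * (1.0 if gi > 0 else -1.0)))
             for pi, gi in zip(p, g)]
    return p

# Toy loss L(z) = sum(z_i**2) with gradient 2z: the perturbation saturates
# at the corner of the ball that pushes each coordinate away from zero.
p = pgd_linf([1.0, -1.0], lambda z: [2.0 * zi for zi in z],
             eps=0.25, eta=0.1, steps=5)
```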
Generally speaking, the optimization in \eqref{eq:maximization adv} is non-convex and hard to solve. Therefore, a strong adversarial perturbation is usually obtained by applying the attack methods for a number of steps, i.e. $K\geq 1$. However, this approach may cause a heavy computational burden, e.g. about $20\times$ the time complexity in 20-step PGD adversarial training, or $2\times$ in FGSM adversarial training, compared with the standard non-robust training.
\subsection{Contributions}
We propose a trick, the semi-backward propagation, to reduce the time complexity of the backward propagation by half. We analyze the effectiveness of the semi-backward trick in theory and in practice across different attacks and libraries, all of which only require adding a single line of code.
\subsection{Organization}
In Section 2, we discuss the backward propagation and its two underlying processes, with a brief introduction of their time complexity. In Section 3, we propose to compute only one of the two processes: we can do semi-backward propagation by only computing the output gradient. This is valid by the chain rule and is implemented by adding \texttt{[p.requires\_grad\_(False) for p in model.parameters()]}. In Section 4, we experiment with state-of-the-art adversarial robustness libraries and attack methods to demonstrate the advantage of semi-backward propagation. Note that no accuracy results are provided, because we focus on efficiency and the accuracy is the same for both full backward and semi-backward propagation. Finally, we discuss some details about the Pytorch implementation of adversarial perturbation.
\section{Preliminaries on a Single Layer}
\subsection{Forward and backward propagation}
To understand and then improve the computation of adversarial perturbation, we start from the standard training. The forward propagation on the $l$-th layer\footnote{Here the demonstration is on a linear layer for simplicity, which can easily extend to other layers.} is:
$$\a_{l+1}=\phi_{l+1}(\bm{s}_{l}) \text{ and } \bm{s}_{l+1}=\a_{l+1}\mathbf{W}_{l+1}+\b_{l+1}$$
Here $\a$ is the layer's input (i.e. the activations), $\bm{s}$ is the layer's output (i.e. the pre-activation), $\mathbf{W},\b$ are the layer's weights and biases, $\phi$ is the inter-layer operation such as the activation functions (ReLU, tanh, etc.) and the pooling.
The backward propagation of the $l$-th layer has two underlying processes:
\begin{enumerate}
\item the computation of \textbf{output gradient}
\begin{align}
\frac{\partial {L}}{\partial \bm{s}_{l}}=\frac{\partial {L}}{\partial \bm{s}_{l+1}}\mathbf{W}_{l+1}\circ \phi'(\bm{s}_{l}),
\label{eq:output grad}
\end{align}
\item the computation of \textbf{parameter gradient}
\begin{align}
\begin{split}
\frac{\partial {L}}{\partial \mathbf{W}_{l}}&=\frac{\partial {L}}{\partial \bm{s}_{l}}^\top\frac{\partial \bm{s}_{l}}{\partial \mathbf{W}_{l}}=\frac{\partial {L}}{\partial \bm{s}_{l}}^\top\a_{l},
\\
\frac{\partial {L}}{\partial \b_{l}}&=\frac{\partial {L}}{\partial \bm{s}_{l}}^\top\frac{\partial \bm{s}_{l}}{\partial \b_{l}}=\frac{\partial {L}}{\partial \bm{s}_{l}}^\top\mathbf{1},
\end{split}
\label{eq:param grad}
\end{align}
\end{enumerate}
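Both formulas can be sanity-checked numerically on a one-neuron layer; this minimal sketch (values ours) compares the analytic parameter gradient $\frac{\partial L}{\partial w} = \frac{\partial L}{\partial s}\,a$ against a central finite difference.

```python
def loss(a, w, b):
    s = a * w + b      # forward: pre-activation of a single neuron
    return s * s       # L = s^2, hence dL/ds = 2s

a, w, b, h = 3.0, 2.0, 1.0, 1e-5
s = a * w + b
analytic = (2.0 * s) * a   # parameter gradient: (dL/ds) * (ds/dw)
numeric = (loss(a, w + h, b) - loss(a, w - h, b)) / (2.0 * h)
```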
The derivation of both computations follows directly from the chain rule. We visualize these computations in sequential order in \Cref{fig:my_label}.
\subsection{Complexity of propagation}
It is well-known that the forward propagation and each process in the backward propagation have the same time complexity (up to lower order terms\footnote{It suffices to state the highest order term here as it dominates the time complexity, e.g. if the layer maps from $\mathbb{R}^{BTd}$ to $\mathbb{R}^{BTp}$, the computation of output gradient is $2BTdp+2BTd\approx 2BTdp$.}), as summarized in \Cref{fact:two process same time}.
\begin{fact}\label{fact:two process same time}
For one layer, the forward propagation (i.e. the computation of one layer's input $\a$ and output $\bm{s}$), the computation of output gradient and the computation of parameter gradient have (almost) the same time complexity as $2BTM$, where $B$ is batch size, $T$ is hidden feature dimension, and $M$ is the number of parameters in this layer.
\end{fact}
\begin{remark}
Here $T$ is the sequence length for text data (e.g. sentence length); $T$ is the dimension product (height $\times$ width) of hidden feature for image data.
\end{remark}
\section{Semi-backward Propagation for Adversarial Perturbation}
By \Cref{fact:two process same time}, the optimization of the adversarial perturbation (forward and backward propagation) has a time complexity of $6BTM$ per step. This can be reduced to $4BTM$ while still giving the same adversarial perturbation:
$$\underbrace{\text{forward propagation }}_\text{time complexity $2BTM$} + \underbrace{\text{ output gradient }}_{2BTM} + \underbrace{\cancel{\text{ parameter gradient}}}_{2BTM}$$
We term this pipeline as the semi-backward propagation.
\subsection{A computation graph viewpoint}
We claim that during the optimization in \Cref{alg:adv}, only the output gradient is needed; thus the time complexity of the backward propagation can be halved, since we eliminate one of the two processes. To see this, we present the computation graph of all layers and leverage the single-layer results from the previous section.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.3\linewidth]{adv_computation_graph.png}
\caption{Computation graph of backward propagation.}
\label{fig:my_label}
\end{figure}
Note that the output gradient is computed via \eqref{eq:output grad} (orange arrow), the parameter gradient is computed via \eqref{eq:param grad} (blue arrow), and perturbation gradient is computed via \eqref{eq:data grad} (green arrow).
\begin{align}
\begin{split}
\textbf{Forward: }&
\bm{s}_1=(\bm{x}+\bm{p})\mathbf{W}_1+\b_1
\\
\textbf{Backward: }&
\frac{\partial {L}}{\partial \bm{p}}=\frac{\partial {L}}{\partial \bm{s}_{1}}^\top\frac{\partial\bm{s}_{1}}{\partial \bm{p}}=\frac{\partial {L}}{\partial \bm{s}_{1}}^\top\mathbf{W}_1.
\end{split}
\label{eq:data grad}
\end{align}
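To see that the output gradients alone suffice, the sketch below (a dependency-free toy with scalar layers; all names and values are ours) backpropagates only \eqref{eq:output grad} and \eqref{eq:data grad} through a two-layer network and matches the perturbation gradient against a finite difference; no parameter gradient is ever formed.

```python
import math

w1, b1, w2, b2 = 0.7, 0.1, -1.3, 0.2   # fixed (frozen) parameters
x = 0.5                                 # data feature

def forward(p):
    s1 = (x + p) * w1 + b1
    a2 = math.tanh(s1)
    s2 = a2 * w2 + b2
    return s1, a2, s2, s2 * s2          # loss L = s2^2

def grad_p(p):
    # Semi-backward: chain only the output gradients down to the input.
    s1, a2, s2, _ = forward(p)
    dL_ds2 = 2.0 * s2                        # gradient at the loss
    dL_ds1 = dL_ds2 * w2 * (1.0 - a2 * a2)   # output gradient; tanh' = 1 - tanh^2
    return dL_ds1 * w1                       # perturbation gradient; no dL/dW needed

h = 1e-6
numeric = (forward(0.01 + h)[3] - forward(0.01 - h)[3]) / (2.0 * h)
```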
\subsection{Reduced Complexity and Speedup}
\begin{fact}
For a neural network, the time complexity of computing the adversarial perturbation gradient is $6B\sum_l T_l M_l$ without semi-backward propagation, and $4B\sum_l T_l M_l$ with semi-backward propagation.
\end{fact}
In terms of speed, the improvement is $4/2=2\times$ for the backward propagation and $6/4=1.5\times$ overall. This complexity reduction applies to each step of the perturbation optimization, so the speedup holds for both multi-step and single-step attacks.
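The arithmetic behind these factors can be laid out explicitly. The following minimal Python sketch is our own illustration, with cost measured in units of $B\sum_l T_l M_l$; it reproduces the $2\times$ backward and $1.5\times$ overall speedups and shows that they are independent of the number of attack steps.

```python
# Per-step costs (in units of B * sum_l T_l M_l), following the fact above.
FORWARD, OUTPUT_GRAD, PARAM_GRAD = 2, 2, 2

full_step = FORWARD + OUTPUT_GRAD + PARAM_GRAD   # 6: full backward
semi_step = FORWARD + OUTPUT_GRAD                # 4: semi-backward

backward_speedup = (OUTPUT_GRAD + PARAM_GRAD) / OUTPUT_GRAD  # 2.0
overall_speedup = full_step / semi_step                      # 1.5

# The reduction is per optimization step, so the ratio is identical
# for single-step (FGSM-like) and K-step (PGD-like) attacks.
for steps in (1, 10, 50):
    assert (steps * full_step) / (steps * semi_step) == overall_speedup
```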
Although the improvement is mainly in time complexity, semi-backward propagation also yields a modest improvement in space complexity, e.g., about a 5\% reduction of the memory cost for the PGD experiment in \Cref{sec:PGD benchmark} using Torchattacks \cite{kim2020torchattacks}.
\subsection{Code snippet}
This high-level idea applies to general auto-differentiation frameworks such as PyTorch \cite{paszke2019pytorch}. We provide a code snippet based on \texttt{torchattacks} (v3.3.0) \cite{kim2020torchattacks}, \texttt{timm} (v0.6.11) \cite{rw2019timm}, and \texttt{torch} (v1.12) \cite{paszke2019pytorch} that adds only one line (in red) to implement semi-backward propagation.
\begin{Verbatim}[commandchars=\\\{\}]
import timm, torchattacks
model=timm.create_model('resnet50')
image=torch.rand(size=(16,3,224,224))
label=torch.Tensor([1]*16).long()
\textcolor{red}{[p.requires_grad_(False) for p in model.parameters()]}
atk = torchattacks.PGD(model)
adv_images = atk(image,label)
\end{Verbatim}
\section{Experiments}
We evaluate the effectiveness of semi-backward propagation, which enjoys a theoretical speedup of $1.5\times$, on widely-used Python libraries in their latest versions: Torchattacks (v3.3.0)\cite{kim2020torchattacks}, Adversarial-Robustness-Toolbox (ART; v1.12.1)\cite{art2018}, AdverTorch (v0.2.4)\cite{ding2019advertorch}, DeepRobust (v0.2.5)\cite{li2020deeprobust}, Foolbox (v3.3.3)\cite{rauber2017foolboxnative}, CleverHans (v4.0.0)\cite{papernot2018cleverhans}, and OpenAttack (v2.1.1)\cite{zeng2021openattack}. We use one Tesla T4 GPU and record the time in seconds.
\subsection{PGD benchmarks}
\label{sec:PGD benchmark}
We use 16 ImageNet images of size $3\times 224\times 224$ across the different libraries and attack the ResNet50 model (62 million parameters \cite{he2016deep}) with PGD. We do not report the choice of attack domain $\mathcal{B}$, attack magnitude, or learning rate, because these hyperparameters do not affect the efficiency, and the accuracy is identical with and without semi-backward propagation.
\begin{table*}[!htb]
\centering
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c}
&Torchattacks& Foolbox &ART&AdverTorch& DeepRobust &CleverHans
\\\hline
original&6.89$\pm$0.11&7.38$\pm$0.08&7.09$\pm$0.03&7.43$\pm$0.08&7.23$\pm$0.07&7.36$\pm$0.09
\\
semi-backward&4.74$\pm$0.02 &5.44$\pm$0.02&5.00$\pm$0.06&5.48$\pm$0.02&4.94$\pm$0.01&4.96$\pm$0.03
\\
speedup&1.45$\times$ &1.36$\times$&1.44$\times$&1.36$\times$&1.46$\times$&1.48$\times$
\end{tabular}
}
\caption{Time over 10 runs ($\pm$ standard deviation) for adversarial perturbation of 50-step PGD on ResNet50 with 16 images. The speedup remains constant for different numbers of PGD steps.}
\label{tab:pgd}
\end{table*}
We also test different batch sizes and model architectures, including Vision Transformer (ViT \cite{dosovitskiy2020image}, 86 million parameters) on Torchattacks \cite{kim2020torchattacks}.
\begin{table*}[!htb]
\centering
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c}
Batch size (ResNet50)&4&8&16&32&64&128
\\\hline
original&2.06&3.99&6.89&13.83&27.24&54.6
\\
semi-backward&1.41&2.75&4.74&9.44&18.60&36.6
\\
speedup&1.45$\times$ &1.45$\times$&1.45$\times$&1.46$\times$&1.46$\times$&1.49$\times$
\\\hline\hline
Model (Batch size 16)&ResNet18&ResNet34&ResNet50&ResNet101&ResNet152&ViT-base
\\\hline
original&2.13&3.57&6.89&11.92&16.49&26.5
\\
semi-backward&1.52&2.54&4.74&7.89&11.01&19.2
\\
speedup&1.40$\times$&1.41$\times$&1.45$\times$ &1.50$\times$&1.50$\times$&1.38$\times$
\end{tabular}
}
\caption{Average time over 10 runs for adversarial perturbation of 50-step PGD on ResNets with different batch sizes, using Torchattacks.}
\label{tab:pgd2}
\end{table*}
\subsection{Attack on images}
We test different attacks on ResNet50 using ART and Torchattacks (shorthanded as TA), covering AutoAttack \cite{croce2020reliable}, BIM \cite{kurakin2018adversarial}, Square \cite{andriushchenko2020square}, APGD \cite{croce2020reliable}, CW \cite{carlini2017towards}, MIFGSM \cite{dong2018boosting}, SINIFGSM \cite{lin2019nesterov}, and VMIFGSM \cite{wang2021enhancing}. For each attack we use the default hyperparameters of the corresponding library.
\begin{table*}[!htb]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c||c|c|c|c|c||c}
&PGD&AutoAttack&BIM&Square&FGSM&APGD&CW&MIFGSM&SINIFGSM&VMIFGSM&\multirow{2}{*}{Average}
\\\cline{1-11}
library&ART&ART&ART&ART&ART& TA& TA& TA& TA& TA
\\\hline
original&15.2&0.08&15.2&0.14&0.20&0.07&6.85&1.18&6.81&8.27&-
\\
semi-backward &10.2&0.06&10.2&0.11&0.15&0.05&4.76&0.94&4.71&5.68&-
\\
speedup&1.48$\times$&1.25$\times$&1.49$\times$&1.27$\times$&1.31$\times$&1.35$\times$&1.44$\times$&1.26$\times$&1.45$\times$&1.46$\times$&1.38$\times$
\end{tabular}
}
\caption{Average time over 10 runs for adversarial perturbation of attacks with default setting on ResNet50 with 16 images.}
\label{tab:image}
\end{table*}
\subsection{Attack on texts}
We test different attacks on BERT (109 million parameters) using the OpenAttack library\footnote{See \url{https://github.com/thunlp/OpenAttack\#attack-built-in-victim-models}.}, covering TextFooler \cite{jin2020bert}, PWWS \cite{ren2019generating}, Genetic \cite{alzantot2018generating}, SememePSO \cite{zang2020word}, BERTAttack \cite{li2020bert}, FD \cite{papernot2016crafting}, DeepWordBug \cite{gao2018black}, and VIPER \cite{eger2019text}. For each attack we use the default hyperparameters in the corresponding library.
\begin{table*}[!htb]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c|c||c}
&\multicolumn{6}{|c|}{Word-level}&\multicolumn{2}{|c|}{Character-level}&\multirow{2}{*}{Average}
\\\cline{1-9}
&TextFooler& PWWS& Genetic&SememePSO&BERTAttack&FD& DeepWordBug &VIPER
\\\hline
original&90.4&113.9&918.7&387.8&147.2&198.1&70.7&237.1&-
\\
semi-backward &65.9&80.1&603.3&312.9&114.4&158.0&53.5&184.5&-
\\
speedup&1.37$\times$&1.42$\times$&1.52$\times$&1.24$\times$&1.29$\times$&1.25$\times$&1.32$\times$&1.29$\times$&1.34$\times$
\end{tabular}
}
\caption{Average time over 10 runs for adversarial perturbation of attacks with default setting on BERT with 20 examples.}
\label{tab:nlp}
\end{table*}
\subsection{Adversarial training}
We extend our discussion to the adversarial training with $K$-step attacks:
\begin{align}
\min_\mathbf{W} \max_{\bm{p}\in \mathcal{B}} L(f(\bm{x}+\bm{p};\mathbf{W}),\bm{y})
\label{eq:adv train}
\end{align}
We claim that, without semi-backward propagation, the time complexity of one iteration (one $\mathbf{W}$ update and $K$ $\bm{p}$ updates) is $(6+6K)B\sum_l T_l M_l$; with semi-backward propagation, it is $(6+4K)B\sum_l T_l M_l$. Therefore, the theoretical speedup is $\frac{6K+6}{4K+6}$. We demonstrate the empirical speedup in \Cref{fig:train}, using the example in DeepRobust (\url{https://github.com/DSE-MSU/DeepRobust\#image-attack-and-defense}) with the CIFAR10 dataset.
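As a quick sanity check of the speedup formula, the hypothetical sketch below evaluates $\frac{6K+6}{4K+6}$ for a few attack step counts $K$: the speedup is $1.2\times$ for a single-step attack and approaches the pure-attack factor of $1.5\times$ as $K$ grows.

```python
# Theoretical speedup of one adversarial-training iteration:
# one W update at cost 6, K perturbation updates at cost 6 (full) or
# 4 (semi-backward) each, in units of B * sum_l T_l M_l.
def training_speedup(K):
    return (6 * K + 6) / (4 * K + 6)

assert training_speedup(1) == 12 / 10            # single-step: 1.2x
assert abs(training_speedup(10) - 66 / 46) < 1e-12
# As K grows, the speedup approaches the pure-attack factor of 1.5x.
assert abs(training_speedup(10**6) - 1.5) < 1e-5
```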
\begin{figure}[!htb]
\centering
\includegraphics[height=7cm]{adv_train.pdf}
\caption{Per-epoch training time with/without semi-backward propagation in adversarial training, using ResNet18 on CIFAR10.}
\label{fig:train}
\end{figure}
To use semi-backward propagation during the perturbation updates in adversarial training but full backward propagation during the parameter update, we need to turn the computation of the parameter gradient off and on accordingly, as demonstrated in the Appendix.
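One possible shape of this toggle is sketched below. This is our own illustration, not the Appendix code: with real PyTorch modules one would call \texttt{p.requires\_grad\_(flag)} on the actual parameters, whereas the stub class here only stands in so the pattern is self-contained.

```python
# Hypothetical toggle pattern for adversarial training: freeze the
# parameters before the inner (perturbation) loop, unfreeze them
# before the outer (weight) update.
class StubParam:
    """Stands in for a framework parameter with a requires_grad flag."""
    def __init__(self):
        self.requires_grad = True
    def requires_grad_(self, flag=True):
        self.requires_grad = flag
        return self

def set_param_grads(params, flag):
    for p in params:
        p.requires_grad_(flag)

params = [StubParam() for _ in range(4)]

set_param_grads(params, False)   # semi-backward: attack steps
assert all(not p.requires_grad for p in params)

set_param_grads(params, True)    # full backward: weight update
assert all(p.requires_grad for p in params)
```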
\section{Discussion}
In this paper, we show that using semi-backward propagation in place of full backward propagation can significantly accelerate the computation of adversarial perturbations, while also slightly reducing the memory cost. Notice that in PyTorch, turning off \texttt{param.requires\_grad} is different from using \texttt{torch.autograd.grad(loss,$\bm{p}$)} or \texttt{loss.backward(inputs=$\bm{p}$)}\footnote{These approaches have been used in DeepRobust, ART, Torchattacks, etc., which are still accelerated by the semi-backward trick in \Cref{tab:pgd}.}, where $\bm{p}$ is the adversarial perturbation: the first approach applies before the forward propagation and thus defines the computation graph.
\label{sec: introduction}
One of many interesting phenomena in active fluids~\cite{marchetti2013hydrodynamics,bechinger2016active, doostmohammadi2018active,bar2020self,gompper20202020} is the enhancement of mixing of suspended particles due to the motion of the microswimmers and the accompanying hydrodynamic flows.
This effect, observed in recent experiments~\cite{kim2004enhanced,sokolov2009enhanced,leptos2009dynamics,mino2011enhanced,kurtuldu2011enhancement} and simulations~\cite{underhill2008diffusion,ishikawa2010fluid,pushkin2013fluid}, might be utilized, e.g., in microfluidic devices~\cite{kim2007controlled,suh2010review}.
However, it is largely unexplored how the complex emerging large-scale flow patterns, which are typical for such microswimmer suspensions, impact the mixing and transport properties of these non-equilibrium systems.
Here, we quantify active fluid transport in the framework of an experimentally validated model for polar active fluids, which enables a precise control of the flow states ranging from vortex lattices~\cite{wioland2016ferromagnetic,nishiguchi2018engineering,reinken2019anisotropic,reinken2020organizing,reinken2022ising,james2018turbulence,james2021emergence} to active turbulence~\cite{wensink2012meso,bratanov2015new,alert2021active,oza2016generalized,james2018turbulence,james2021emergence}, either externally, e.g.~through obstacles~\cite{nishiguchi2018engineering,reinken2020organizing,reinken2022ising,sone2019anomalous,zhang2020oscillatory}, or by changing the fluid parameters~\cite{reinken2019anisotropic,james2018turbulence,james2021emergence}.
Previous work has mostly focused on characterizing transport deep in the turbulent regime, drawing on analogies either to two-dimensional turbulence \cite{james2018vortex,sanjay2020friction} or to stochastic processes such as L{\'e}vy walks~\cite{mukherjee2021anomalous,ariel2017chaotic}.
Much less is known about transport close to the transition region between different flow states.
Interestingly, equilibrium systems close to structural phase transitions often display anomalous transport properties, e.g., anisotropic and enhanced diffusion near transitions to nematic phases in liquid crystals~\cite{lowen1999anisotropic,lettinga2005self,lettinga2010hydrodynamic} or peaks in the thermal conductivity close to structural changes in crystalline solids~\cite{aramberri2017thermal,chen2018enhancement,chen2019anomalous}.
In contrast, for a nonequilibrium, active system, we rather expect the interplay of the spatial and temporal persistence of flow structures to have a profound impact on transport.
For example, in a stationary vortex lattice, tracer trajectories will be trapped by individual vortices, following closed streamlines.
This situation is comparable to the transport of electrons in electrostatic potentials, where trajectories are determined by equipotential lines~\cite{reuss1996low,reuss1998percolation,krommes2002fundamental,vlad2004lagrangian,padberg2007lagrangian}.
When the flow becomes turbulent, particles will travel between vortices, giving rise to a diffusive transport for long times, similar to the transport of charged particles in time-varying fields~\cite{isichenko1992percolation,vlad1998diffusion,bakunin2008turbulence}.
Inspired by these analogies, we perform an analysis of the transport properties of active turbulence in terms of the Kubo number $K$, comparing temporal and spatial correlations of the flow field.
We find that the diffusion coefficient as a function of $K$ attains a maximum just above the transition from a vortex lattice to the turbulent state.
In this regime, the interplay between temporal correlations and spatial structure of the underlying flow leads to optimal transport conditions.
\section{Model}
\label{sec: model}
In this work, we employ an established model for dense microswimmer suspensions~\cite{wensink2012meso,dunkel2013fluid,dunkel2013minimal,bratanov2015new,heidenreich2016hydrodynamic,reinken2018derivation,reinken2019anisotropic,james2018vortex,james2018turbulence} in which density fluctuations can be neglected~\cite{be2020phase} and the dynamics can be described on a coarse-grained (order parameter) level via an effective microswimmer velocity field $\mathbf{v}$~\cite{reinken2018derivation}.
We choose this model over different models that have been shown to exhibit similar pattern formation~\cite{grossmann2014vortex,slomka2017geometry,sone2019anomalous} because it can be derived from microscopic Langevin dynamics including coupling to the solvent flow~\cite{reinken2018derivation} and has been shown to capture experiments on bacteria in unconfined bulk flow~\cite{wensink2012meso} as well as in the presence of geometric confinement, e.g., obstacle lattices~\cite{reinken2020organizing}.
The dynamics of $\mathbf{v}$ is given by
\begin{eqnarray}
\label{eq: dynamic equation}
&\displaystyle\partial_t \mathbf{v} + \lambda \mathbf{v}\cdot\nabla\mathbf{v} = -\frac{\delta \mathcal{F}}{\delta \mathbf{v}}, \\
&\displaystyle \mathcal{F} = \int d\mathbf{x} \left[q \nabla\cdot\mathbf{v} - \frac{a}{2} |\mathbf{v}|^2 + \frac{b}{4} |\mathbf{v}|^4 + \frac{1}{2}\left|(1 + \nabla^2)\mathbf{v}\right|^2 \right], \nonumber
\end{eqnarray}
where $q$ is a pressure-like quantity that ensures the incompressibility condition $\nabla \cdot \mathbf{v} = 0$.
The dynamics can be characterized as a competition between gradient dynamics determined by the functional $\mathcal{F}$ and nonlinear advection ($\lambda \mathbf{v}\cdot\nabla\mathbf{v}$), where $\lambda$ is the advection strength, which can be related to mesoscopic parameters such as the self-propulsion speed~\cite{reinken2018derivation,reinken2022ising}.
For high activity, i.e., $0 < a < 1$, the minimum of $\mathcal{F}$ is a vortex pattern with square lattice symmetry characterized by two perpendicular modes with characteristic wavelength $\Lambda = 2 \pi$ \cite{james2018turbulence}.
When the nonlinear advection strength $\lambda$ is increased above some critical value $\lambda^\star$, which depends on the other parameters $a$ and $b$, the stationary square lattice pattern is destabilized and the advection term induces a dynamic, fluctuating state denoted as mesoscale turbulence~\cite{wensink2012meso,reinken2018derivation}.
We consider $N$ passively advected tracer particles that simply follow the effective microswimmer velocity field $\mathbf{v}$ governed by Eq.~(\ref{eq: dynamic equation}), according to
\begin{equation}
\label{eq: equation of motion tracers}
\partial_t \mathbf{X}_i (t) = \mathbf{v}(\mathbf{X}_i(t),t) \, ,
\end{equation}
where $\mathbf{X}_i = (X_i,Y_i)$ denotes the position of tracer $i$.
In other words, the tracers sample the Lagrangian trajectories (moving with the flow) of the evolving field $\mathbf{v}(\mathbf{x},t)$.
When considering Eq.~(\ref{eq: equation of motion tracers}) as governing equation for a single tracer, one could argue that it should contain a term stemming from molecular noise, e.g., Brownian diffusion.
However, here we will focus on the impact of advective transport and neglect the influence of molecular diffusion, hence, there is no noise term in Eq.~(\ref{eq: equation of motion tracers}).
To justify this assumption, let us briefly consider the Peclet number $Pe$, which gives the ratio between advective and diffusive transport in a system and is calculated via $Pe = \Lambda_\mathrm{a} v_\mathrm{a} / D_0$, where $\Lambda_\mathrm{a}$ denotes the length scale of the advecting flow, $v_\mathrm{a}$ is the average advective velocity and $D_0$ is the bare molecular diffusion coefficient~\cite{bakunin2008turbulence}.
To estimate the Peclet number for tracers in bacterial suspensions exhibiting bacterial turbulence, e.g., \textit{Bacillus Subtilis}~\cite{dunkel2013fluid}, we can first calculate an approximate molecular diffusion coefficient using the Stokes-Einstein relation~\cite{einstein1906theorie}, which is valid for a spherical tracer in three dimensions at low Reynolds number.
The relation is given by $D_0 = k_\mathrm{B}T/(3\pi \eta d)$, where $k_\mathrm{B}$ is the Boltzmann constant, $T$ is the temperature, $\eta$ is the viscosity of the solvent medium and $d$ the diameter of the tracer.
For a tracer of micron size ($d \approx \SI{1}{\micro\meter}$) in water at normal conditions ($T=\SI{293.15}{\K}$, $\eta \approx \SI{0.001}{\newton\second\per\meter\squared}$), we obtain $D_0 \approx \SI{0.4}{\square\micro\meter\per\second}$.
Secondly, the length scale in bacterial turbulence observed in \textit{Bacillus Subtilis} is approximately given by the mean vortex radius $\Lambda_\mathrm{a}\approx\SI{40}{\micro\meter}$~\cite{dunkel2013fluid} and the average velocity $v_\mathrm{a}$ is of the order of $\SI{10}{\micro\meter\per\second}$~\cite{dunkel2013fluid}, depending on the strength of activity.
This estimate yields a Peclet number of $Pe \approx 10^3$, i.e., advective transport is at least three orders of magnitude stronger than the bare molecular diffusion.
These considerations indicate that neglecting molecular diffusion and, thus, a corresponding noise term in Eq.~(\ref{eq: equation of motion tracers}), is indeed a valid assumption, at least for bacterial suspensions exhibiting a mesoscale-turbulent state.
We also neglect any other sources of noise stemming, e.g., from chemical heterogeneities.
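The order-of-magnitude estimate above can be reproduced in a few lines. The sketch below is our own illustration; the physical constants and flow scales are the values quoted in the text for a micron-sized tracer in water and \textit{Bacillus Subtilis} turbulence.

```python
import math

# Stokes-Einstein molecular diffusion coefficient D0 = kB*T/(3*pi*eta*d).
k_B = 1.380649e-23          # J/K, Boltzmann constant
T = 293.15                  # K, room temperature
eta = 1.0e-3                # N s / m^2, viscosity of water
d = 1.0e-6                  # m, tracer diameter

D0 = k_B * T / (3 * math.pi * eta * d)   # ~0.4 um^2/s

# Bacterial-turbulence flow scales (mean vortex radius and speed).
Lambda_a = 40e-6            # m
v_a = 10e-6                 # m/s

Pe = Lambda_a * v_a / D0
assert 0.3e-12 < D0 < 0.5e-12   # D0 ~ 0.4 um^2/s = 4e-13 m^2/s
assert 5e2 < Pe < 2e3           # advection stronger by ~10^3
```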
We employ a pseudo-spectral method to solve Eq.~(\ref{eq: dynamic equation}) in a two-dimensional system with periodic boundary conditions starting from random initial values and simultaneously evolve the tracer trajectories according to Eq.~(\ref{eq: equation of motion tracers}), see appendix~\ref{app: numerical methods} for details on the numerical methods.
\section{Transition to turbulence}
\label{sec: transition}
\begin{figure}
\includegraphics[width=0.99\linewidth]{Fig1}
\caption{\label{fig: transition} Transition to turbulence. (a) Numerically obtained threshold value of the nonlinear advection strength $\lambda^\star$ as a function of $a$ ($b=1.6$). Below $\lambda^\star$, the system settles into a stationary vortex lattice, whereas above $\lambda^\star$ a mesoscale-turbulent state emerges. Snapshots of the vorticity field $\omega = (\nabla \times \mathbf{v})_z$ at $a=0.5$ in the lattice state at (b) $\lambda=1.7$ and in the mesoscale-turbulent state at (c) $\lambda=4$. Tracers move along instantaneous streamlines (solid lines) of the flow, which continuously break up and reconnect in the turbulent state. (d) MSD as a function of time lag $\tau$ for different $\lambda$ and $a=0.5$ (e) Sample trajectories for $\lambda=1.7$ (red), $\lambda = 2.0$ (violet), $\lambda = 4$ (blue) and $\lambda = 10$ (green) and elapsed time $\Delta t = 250$. The same scale is used for (b), (c) and (e), where the gray bar indicates a length of $2\pi$.}
\end{figure}
We first discuss spatial and temporal correlations of $\mathbf{v}(\mathbf{x},t)$, which are characterized in the Eulerian frame (fixed in space).
Note that we make the assumption of statistically homogeneous, stationary and isotropic turbulence throughout this work.
Thus, the correlation functions do not depend on space $\mathbf{x}$, time $t$ and orientation, but only on the distance $r$ and time lag $\tau$.
The longitudinal correlation function $f(r)$~\cite{davidson2015turbulence} is defined via
\begin{equation}
\label{eq: longitudinal correlation function}
f(r) = v^{-2}\langle v_x(\mathbf{x},t) v_x(\mathbf{x}+r\mathbf{e}_x, t) \rangle \, ,
\end{equation}
where $\mathbf{e}_x$ denotes the unit vector in $x$-direction and the average $\langle \dots \rangle$ is performed over all $\mathbf{x}$ and $t$.
Further, the component-wise root-mean-square velocity $v$, defined via $v^2 = \langle v_x^2 \rangle = \langle v_y^2\rangle$, gives a measure of the overall strength of the flow field.
The Eulerian temporal correlation function $C_\mathrm{E}(\tau)$ is defined via
\begin{equation}
\label{eq: Eulerian temporal correlation function}
C_\mathrm{E}(\tau) = \langle \mathbf{v}(\mathbf{x},t)\cdot \mathbf{v}(\mathbf{x}, t+\tau) \rangle \, .
\end{equation}
Integrating the correlation functions over $r$ and $\tau$, respectively, we define a characteristic length and time scale of the evolving field $\mathbf{v}(\mathbf{x},t)$~\cite{davidson2015turbulence}.
In particular, the longitudinal correlation length $\ell$ and the Eulerian correlation time $\tau_\mathrm{E}$ are calculated via
\begin{equation}
\label{eq: integral length scale and Eulerian correlation time}
\ell = \int_0^\infty f(r) dr \quad \text{and} \quad
\tau_\mathrm{E} = \frac{1}{2v^2}\int_0^\infty C_\mathrm{E}(\tau) \ d\tau \, .
\end{equation}
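In practice, these integrals are evaluated numerically from sampled correlation functions. The sketch below is a hypothetical example using a synthetic exponential correlation function rather than simulation data; for $f(r) = e^{-r/\ell_0}$ the integral defining $\ell$ is exactly $\ell_0$, which makes the procedure self-checking.

```python
import numpy as np

# Synthetic longitudinal correlation function f(r) = exp(-r/ell0).
ell0 = 2 * np.pi
r = np.linspace(0.0, 50 * ell0, 200001)
f = np.exp(-r / ell0)

# Trapezoidal rule for ell = integral of f(r) over r from 0 to infinity
# (the tail beyond 50*ell0 is negligible for this f).
ell = np.sum((f[:-1] + f[1:]) / 2 * np.diff(r))
assert abs(ell - ell0) / ell0 < 1e-3
```

The Eulerian correlation time $\tau_\mathrm{E}$ follows the same recipe, with the additional $1/(2v^2)$ normalization from its definition.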
The field $\mathbf{v}(\mathbf{x},t)$ displays two types of spatio-temporal structures depending on the nonlinear advection strength $\lambda$.
In particular, there is a threshold value $\lambda^\star$, below which the system settles into a regular square vortex lattice (see also appendix~\ref{app: vortex lattice}) given as the minimum of the functional $\mathcal{F}$ in Eq.~(\ref{eq: dynamic equation}).
In contrast, for $\lambda > \lambda^\star$, this stationary, non-fluctuating state is destabilized and the system exhibits a dynamic, mesoscale-turbulent state, see Fig.~\ref{fig: transition} for snapshots.
When approaching the threshold value $\lambda^\star$ from above, the Eulerian correlation time will diverge due to the development of a non-zero tail of $C_\mathrm{E}(\tau)$ (see appendix~\ref{app: Eulerian correlations}).
The value $\lambda^\star$ depends on the other parameters and can be obtained numerically by determining when a stationary square lattice emerges.
Fig.~\ref{fig: transition}(a) shows the value of $\lambda^\star$ as a function of the coefficient of the linear term in Eq.~(\ref{eq: dynamic equation}), $a$, at $b=1.6$.
For the values of $a$ considered in this work, the threshold value is in the range $\lambda^\star \approx 1.7 \dots 2.8$.
\section{Transport properties}
\label{sec: transport properties}
The emerging states of the flow field $\mathbf{v}(\mathbf{x},t)$ determine the shape of the tracer trajectories $\mathbf{X}_i(t)$ via Eq.~(\ref{eq: equation of motion tracers}).
In particular, in the stationary vortex lattice below $\lambda^\star$, the tracers move along closed loops, whereas above $\lambda^\star$, the trajectories become increasingly irregular.
The resulting transport behavior is quantified by the mean squared displacement (MSD),
\begin{equation}
\label{eq: MSD}
\langle \Delta \mathbf{X}^2 \rangle(\tau) = \frac{1}{N} \sum_{i=1}^N \big(\mathbf{X}_i(t_0 +\tau) - \mathbf{X}_{i}(t_0)\big)^2 \, ,
\end{equation}
where $\mathbf{X}_{i}(t_0)$ is the initial position of tracer $i$ at time $t_0$.
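The MSD in Eq.~(\ref{eq: MSD}) is straightforward to evaluate from an ensemble of trajectories. The sketch below is our own NumPy illustration on synthetic, purely ballistic trajectories rather than simulation output; for $\mathbf{X}_i(t) = \mathbf{v}_i t$ it recovers the expected $\langle |\mathbf{v}|^2 \rangle \tau^2$ behavior.

```python
import numpy as np

# Synthetic ballistic trajectories X with shape (N tracers, T times, 2).
rng = np.random.default_rng(1)
N, T, dt = 200, 100, 0.1
v = rng.normal(size=(N, 1, 2))                 # constant velocity per tracer
t = (np.arange(T) * dt).reshape(1, T, 1)
X = v * t                                      # X_i(t) = v_i * t

def msd(X):
    # Squared displacement from the initial position, averaged over tracers.
    disp = X - X[:, :1, :]
    return (disp ** 2).sum(axis=2).mean(axis=0)

tau = np.arange(T) * dt
expected = (v[:, 0, :] ** 2).sum(axis=1).mean() * tau ** 2
assert np.allclose(msd(X), expected)
```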
In Fig.~\ref{fig: transition}(d), the MSD is plotted as a function of time lag $\tau$ at different values of $\lambda$, while
corresponding sample trajectories are shown in (e).
Initially, we observe ballistic behavior, i.e., $\langle \Delta \mathbf{X}^2 \rangle \propto \tau^2$.
For longer time lag, the behavior depends on the type of emerging state.
Below $\lambda^\star$, due to the trapping of tracers in closed loops within vortices of the stationary lattice, see Fig.~\ref{fig: transition}, the MSD will saturate at a constant value.
In contrast, for $\lambda > \lambda^\star$, the emerging turbulent state allows the tracers to escape closed loops, introducing randomness to the trajectories, see Fig.~\ref{fig: transition}(e).
As a consequence, the behavior becomes diffusive after the initial ballistic time scale.
The slope of $\langle \Delta \mathbf{X}^2 \rangle(\tau)$ is proportional to the diffusion coefficient $D$, defined in analogy to Brownian motion via $\langle \Delta \mathbf{X}^2 \rangle(\tau) = 2 d D \tau$, where $d$ denotes the spatial dimension ($d=2$ in our case).
Upon further increase of $\lambda$, we observe another effect:
As tracers are transported with the velocity $\mathbf{v}$, whereas the structures of the flow field itself are advected with $\lambda \mathbf{v}$ [see Eq.~(\ref{eq: dynamic equation})], the tracer motion increasingly decouples from that of the flow field, see Supplementary Movies.
As a result, the ballistic time scale decreases, trajectories become more irregular, see Fig.~\ref{fig: transition}(e), and the long-time MSD shifts towards smaller values.
Aiming for a more systematic analysis, we plot the diffusion coefficient in Fig.~\ref{fig: diffusion}(a) as a function of $\lambda$ for different values of $0 < a < 1$.
After an initial trapping regime, where $D = 0$, diffusion increases rapidly above $\lambda^\star$ for all values of $a$.
Strikingly, we observe a clear maximum in an intermediate regime, $3 < \lambda < 6$, above which $D$ slowly decreases.
Further note that an increase of the activity (measured by the coefficient $a$) consistently leads to higher $D$ due to enhancement of the mean kinetic energy~$\propto v^2$ of the flow, see appendix~\ref{app: Lagrangian correlations}.
\begin{figure}
\includegraphics[width=0.99\linewidth]{Fig2}
\caption{\label{fig: diffusion} (a) Diffusion coefficient $D$ as a function of $\lambda$ for different values of $a$ ($b=1.6$). (b) Kubo number $K$ as a function of $\lambda$. Inset: inverse Kubo number $K^{-1}$ as a function of $\lambda$. The dashed line gives the result obtained via Kraichnan's random sweeping hypothesis with $c = 0.33$, see Eq.~(\ref{eq: inverse Kubo number sweeping}). (c) Dimensionless diffusion coefficient $D/(\ell v)$ as a function of $K$. The scaling in the different regimes is given as solid and dashed line, respectively. Error bars represent the standard error.}
\end{figure}
To unravel the nonmonotonic behavior of $D(\lambda)$ and, in particular, the location of the maximum, we take a closer look at the subtle interplay between different time scales of the flow field.
To this end, we will borrow a concept commonly used for transport problems in random fields, e.g., electrons in turbulent magnetized plasmas~\cite{reuss1996low,reuss1998percolation,vlad1998diffusion,padberg2007lagrangian}: the dimensionless Kubo number $K$~\cite{kubo1963stochastic,krommes2002fundamental,bakunin2008turbulence} defined via
\begin{equation}
\label{eq: Kubo number}
K = \frac{\tau_\mathrm{E}}{\tau_\mathrm{tr}} \, .
\end{equation}
It compares the Eulerian correlation time $\tau_\mathrm{E}$ [Eq.~(\ref{eq: integral length scale and Eulerian correlation time})] with the transport time scale $\tau_\mathrm{tr} = \ell/v$, often denoted as large eddy turnover time in inertial turbulence~\cite{davidson2015turbulence}.
We recall that $\ell$ is completely determined by spatial correlations of the advecting flow.
Thus, $\tau_\mathrm{tr}$ measures how long it takes a tracer to be transported a distance $\ell$ by $\mathbf{v}(\mathbf{x},t)$.
For fields evolving very quickly compared to $\tau_\mathrm{tr}$, we have $K \ll 1$, whereas for slowly evolving fields, we have $K \gg 1$.
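Computing the Kubo number itself is a one-line operation once $\tau_\mathrm{E}$, $\ell$ and $v$ have been measured; the hypothetical values below merely illustrate the two limiting regimes discussed in the text.

```python
# Kubo number K = tau_E / tau_tr, with transport time tau_tr = ell / v.
def kubo_number(tau_E, ell, v):
    tau_tr = ell / v          # large eddy turnover / transport time
    return tau_E / tau_tr

# Slowly evolving field relative to transport: K >> 1 (trapping regime).
assert kubo_number(tau_E=100.0, ell=2.0, v=1.0) == 50.0
# Fast field: K << 1 (quasi-linear regime).
assert kubo_number(tau_E=0.1, ell=2.0, v=1.0) < 1.0
```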
Fig.~\ref{fig: diffusion}(b) shows $K$ as a function of $\lambda$.
The divergent behavior when approaching the transition to the stationary state at $\lambda^\star$ is inherited from the likewise diverging Eulerian correlation time.
Remarkably, we observe a proportionality $K \propto \lambda^{-1}$ for larger advection strength, i.e., $K^{-1} \propto \lambda$, see inset.
The plausibility of this linear relation can be shown on theoretical grounds using Kraichnan's idealized random sweeping hypothesis~\cite{kraichnan1964kolmogorov,wilczek2012wave}, based on the assumption that decorrelation processes are dominated by the sweeping of small-scale structures by the large-scale flow.
In mathematical terms, this can be written as an advection problem,
\begin{equation}
\label{eq: dynamic equation linearized sweeping}
\partial_t \mathbf{v} \approx - \lambda \mathbf{v}_\mathrm{s} \cdot \nabla \mathbf{v}\, ,
\end{equation}
where we have modified the original ansatz~\cite{kraichnan1964kolmogorov,wilczek2012wave} by adding the advection strength $\lambda$ as a prefactor on the right-hand side, motivated by the structure of Eq.~(\ref{eq: dynamic equation}).
As the fluctuations of the sweeping velocity field $\mathbf{v}_\mathrm{s}$ ultimately stem from the flow field $\mathbf{v}$, it is reasonable to assume that the root-mean-square sweeping velocity $v_\mathrm{s}$ is proportional to $v$.
Here, we introduce the proportionality constant $c$ via $v_\mathrm{s} = c\, v$, to be determined by comparison with numerical results.
Since the calculation is quite involved, we detail all the steps in appendices~\ref{app: sweeping} and \ref{app: connection energy spectrum integral length} and only state the final result here:
\begin{equation}
\label{eq: inverse Kubo number sweeping}
K^{-1} = \frac{4c\lambda}{\sqrt{2\pi}} \propto \lambda \, .
\end{equation}
As seen from the inset of Fig.~\ref{fig: diffusion}(b), the sweeping hypothesis is remarkably accurate for the highly dynamic, turbulent flow field at larger values of both $\lambda$ and $a$.
Via comparison with numerical results, we find $c \approx 0.33$ for the proportionality constant.
Surprisingly, this value seems to be quite universal across wide ranges of parameter sets and does not depend on $a$ and $b$ as long as $a > 0.2$ (see appendix~\ref{app: variation of b} for a variation of $b$).
The resulting linear relation is shown in Fig.~\ref{fig: diffusion} as the dashed line.
As only part of the energy of the flow field resides in the large-scale structures responsible for the sweeping effect, a value $c < 1$ seems indeed plausible from an intuitive point of view.
Closer to the transition, where the dynamics becomes much slower, the sweeping effect does not seem to be the dominant driver of temporal decorrelation and more subtle interactions between pattern formation and nonlinear advection evidently play a larger role.
Having introduced the Kubo number, we are now equipped to characterize different regimes of transport.
For smaller values of $\lambda$ close to the transition at $\lambda^\star$, the flow field evolves very slowly, i.e., $\tau_\mathrm{E} \gg \tau_\mathrm{tr}$, which yields $K \gg 1$, see Fig.~\ref{fig: diffusion}(b).
In this regime, vortices persist for long times before moving or vanishing, which means that tracers are frequently trapped and their trajectories describe full circular orbits for multiple rotations before being transported further away, see also Fig.~\ref{fig: transition}(e).
Increasing $\lambda$ leads to faster dynamics of the flow field, the trapping effect becomes less dominant, and the diffusion coefficient increases as is shown in Fig.~\ref{fig: diffusion}(a).
When the maximum of $D$ is reached, we have $K \approx 1$.
Here, the two time scales $\tau_\mathrm{E}$ and $\tau_\mathrm{tr}$ become comparable, which means that it takes approximately the same amount of time for the flow field to rearrange itself as it takes a tracer to move to the other side of a vortex comparable in size to $\ell$.
Before the orbit can transport the tracer back to its original position, $\mathbf{v}(\mathbf{x},t)$ has changed considerably and the original vortex has moved or vanished.
This interplay between timescales creates optimal transport conditions.
Increasing $\lambda$ even further, we observe a third regime where $D$ decreases.
Here, $\tau_\mathrm{E} \ll \tau_\mathrm{tr}$ (yielding $K \ll 1$) and tracers are not able to reach the other side of a vortex before rearrangement.
The Lagrangian correlation time $\tau_\mathrm{L}$ approaches the Eulerian correlation time $\tau_\mathrm{E}$ and the spatial structure of $\mathbf{v}(\mathbf{x},t)$ becomes increasingly unimportant for turbulent diffusion.
As a consequence, the diffusion coefficient scales as $D\propto\tau_\mathrm{E} \propto K$, see SM for details.
Indeed, for particle transport by a random potential~\cite{isichenko1992percolation,vlad2004lagrangian,padberg2007lagrangian}, this proportionality is an established result, usually denoted as the quasi-linear regime, valid for $K \ll 1$.
To investigate the scaling behavior further, we plot the nondimensionalized diffusion coefficient $D/(\ell v)$ in a log-log plot as a function of $K$, see Fig.~\ref{fig: diffusion}(c).
After the quasi-linear regime for $K \ll 1$ (larger $\lambda$), where $D \propto K$, we find the maximum of $D$ at $K \approx 1$.
Then, the diffusion coefficient decreases for $K \gg 1$ (smaller $\lambda$) due to the dominance of trapping effects.
To understand this high-Kubo-number regime, we have to remind ourselves that transport of tracers always follows the instantaneous streamlines of the flow field, see Eq.~(\ref{eq: equation of motion tracers}).
As these persist for long times due to the slow dynamics, the main ``drivers'' of transport are those streamlines in between vortices that continue for large distances and do not curve back on themselves, compare Fig.~\ref{fig: transition}(c).
This picture is reminiscent of percolation theory, on the basis of which a scaling law can be derived analytically for transport in random potentials acting like stream functions~\cite{gruzinov1990two}.
The derived scaling is $D \propto K^{-0.3}$, which has also been confirmed numerically~\cite{reuss1996low,reuss1998percolation,vlad1998diffusion}.
Remarkably, the scaling is also consistent with our results as shown in Fig.~\ref{fig: diffusion}(c).
We stress that this scaling is robust against variation of the coefficient $b$, see appendix~\ref{app: variation of b}.
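The diagnostics underlying this discussion can be sketched in a few lines. The following Python fragment (a minimal illustration, not the analysis code used in this work; all function and variable names are hypothetical) estimates the diffusion coefficient from the long-time mean-squared displacement of an ensemble of 2D tracers and evaluates the Kubo number $K=\tau_\mathrm{E}/\tau_\mathrm{tr}$ with $\tau_\mathrm{tr}=\ell/v$:

```python
import numpy as np

def diffusion_coefficient(traj, dt):
    """Estimate D from the long-time mean-squared displacement (MSD)
    of an ensemble of 2D tracers: <|x(t)-x(0)|^2> -> 4 D t."""
    disp2 = np.sum((traj - traj[0]) ** 2, axis=-1)  # (n_steps, n_tracers)
    msd = disp2.mean(axis=1)                        # ensemble average
    t = np.arange(len(msd)) * dt
    half = len(t) // 2              # fit only the late, diffusive part
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 4.0

def kubo_number(v, ell, tau_E):
    """K = tau_E / tau_tr with the eddy-turnover time tau_tr = ell / v."""
    return tau_E * v / ell
```

Applied to synthetic Brownian trajectories with known $D$, the fit recovers the input value to within statistical error; on real tracer data the same routine yields the $D$ plotted against $K$.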
\section{Conclusions}
\label{sec: conclusions}
We have investigated transport of particles in active turbulence using a continuum model for polar active fluids, which exhibits a range of flow states.
Maximal diffusion coefficients are reached for intermediate advection parameters slightly above the transition from a regular vortex lattice to active turbulence.
Drawing on analogies to transport in random fields and, in particular, magnetized plasmas, we borrow the concept of the Kubo number and show that this optimal turbulent transport occurs at $K \approx 1$, where the flow balances spatio-temporal persistence and dynamics.
Additionally, we rationalize the Kubo number scaling for large active advection using Kraichnan's random sweeping hypothesis~\cite{kraichnan1964kolmogorov,wilczek2012wave}, establishing analogies to transport in classical hydrodynamic turbulence.
From a more general perspective, our work describes a novel, striking example of how a non-equilibrium transition between different collective states of an active system leads to a diffusion anomaly.
While such anomalies are well established in the context of structural phase transitions of {\em equilibrium} systems, e.g., liquid crystals \cite{lowen1999anisotropic,lettinga2005self,lettinga2010hydrodynamic}, corresponding non-equilibrium effects are less explored and often restricted to simple models such as driven Brownian particles~\cite{reimann2001giant,goychuk2021nonequilibrium} or lattice models~\cite{hinrichsen2000non}.
In contrast, here we have established a clear link between a large-scale pattern formation phenomenon in a generic continuum model of a polar active fluid, and a diffusion maximum.
As discussed in section~\ref{sec: model}, we neglect the influence of molecular diffusion as we expect the diffusion coefficient $D_0$ to be much smaller than the turbulent diffusion coefficient observed in this study.
We note that for transport in two-dimensional \textit{stationary fields}, molecular diffusivity independent of advection is indeed essential to facilitate transport by letting tracers escape from otherwise closed streamlines~\cite{bakunin2008turbulence}.
Such a \textit{seed} diffusivity $D_0$ would of course alter the behavior in the stationary vortex lattice state and for very high Kubo numbers, where trapping effects play a significant role.
However, in the region of optimal turbulent transport, where the maximum of the diffusion coefficient is reached, the flow field already rearranges itself quite quickly and trapping effects do not play a dominant role.
Thus, a strong impact of molecular diffusion on the behavior in this region is not expected.
Still, a systematic investigation of the influence of (molecular) noise on the tracer dynamics is an interesting subject for future studies.
Further, starting from the present findings for passive point-like tracers, it seems promising to investigate additional effects such as the impact of inertia, shape, or activity of the tracers~\cite{li2020diffusion}.
Indeed, for passive flows, such aspects have already been explored in some detail~\cite{pandit2017overview}.
Active flows provide an intriguing generalization of such studies, whose implications for phenomena like mixing have yet to be explored.
\begin{acknowledgments}
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 163436311 - SFB 910.
\end{acknowledgments}
\section{Introduction}
With the imminent advent of the LHC, particle physics experiments are
poised to explore the TeV energy range directly for the first time.
There are several reasons to expect new physics in this energy range,
such as the origin of particle masses and electroweak symmetry
breaking, the hierarchy problem and the nature of dark matter. In
parallel with the direct exploration of the TeV scale, precision
experiments at low energies continue to place important constraints on
the possible flavour and CP-violating structure of any TeV-scale
physics. Prominent examples include experiments on $B$ and $K$ mesons,
and probes of electric dipole moments~\cite{Nath}. It is clearly
desirable to develop computational tools that can be used to calculate
consistently observables for both low- and high-energy experiments in
a coherent numerical framework. This is particularly desirable in view
of the possibility that the dominance of matter over antimatter in the
Universe may be due to CP-violating interactions at the TeV scale~\cite{BARYO}.
Supersymmetry is one of the most prominent possibilities for new
TeV-scale physics, and the minimal supersymmetric extension of
the Standard Model (MSSM) provides a natural cold dark matter
candidate as well as stabilizing the electroweak scale and
facilitating the unification of the fundamental interactions. There
are many computational tools available for calculations within
the MSSM. The first to include CP-violating phases was {\tt CPsuperH}~\cite{cpsuperh}
based on the renormalization-group-(RG-)improved effective potential approach.
The Higgs-boson pole-mass shifts are calculated by employing the
RG-improved diagrammatic approach.
The recent versions of {\tt FeynHiggs}~\cite{feynhiggs} are based on the Feynman
diagrammatic approach. There are merits in both approaches, and the difference between
the two programs may be
attributed to some unknown higher-order corrections.
Some of us have recently published an analysis of several $B$-physics
observables taking into account the most
general set of CP-violating parameters allowed under the assumption
of minimal flavour violation in the supersymmetric
sector~\cite{MCPMFV}. For this purpose we used an
updated and extended computational tool, {\tt CPsuperH2.0},
which we introduce and describe in this paper.
The main new features of {\tt CPsuperH2.0} are its inclusion of a
number of $B$ observables, including the branching ratios of
$B_s \to \mu^+ \mu^-$,
$B_d \to \tau^+ \tau^-$, $B_u \to \tau \nu$, $B \to X_s \gamma$ and the latter's
CP-violating asymmetry ${\cal A}_{\rm CP}$, and the supersymmetric
contributions to the $B^0_{s,d} - {\bar B^0_{s,d}}$ mass differences.
In addition, {\tt CPsuperH2.0} includes a more complete treatment
of Higgs-boson pole masses, based on a full treatment of the $4 \times 4$
neutral Higgs propagator matrix including the Goldstone boson and
a more complete treatment of threshold effects in self-energies and Yukawa couplings.
It also includes improved treatments of two-body Higgs decays, some important three-body
decays, and two-loop Higgs-mediated contributions to electric dipole moments.
Therefore, {\tt CPsuperH2.0} provides an essentially complete,
self-contained and consistent computational tool for evaluating flavour and
CP-violating physics at energies up to the TeV scale.
The structure of this paper is as follows. Several updated features of
{\tt CPsuperH2.0} are described in Section~2. In particular, in
Subsection~2.1 we introduce the
improved treatment of Higgs-boson pole masses, and Subsection~2.2
contains a description of the improvements in the treatment of Higgs decay modes.
Then, in Section~3 we describe the {\tt CPsuperH2.0} treatment
of two-loop Higgs effects on electric dipole moments.
The most important new features are described in Section~4, where we
discuss its treatment of $B$ observables. In each Section, we illustrate
in figures some typical results obtained using {\tt CPsuperH2.0}.
\section{Updated Features of {\tt CPsuperH2.0}}
\label{sec:cpsuperh2.0}
It is to be understood that, throughout this paper, we follow the
notations and conventions defined and adopted in {\tt CPsuperH}
for the mixing matrices of neutral Higgs bosons, charginos, neutralinos
and third--generation sfermions, as well as their masses and couplings, etc.
The updates to the original version of {\tt CPsuperH}~\cite{cpsuperh} that are
presented here reflect, in part, feedback from users, as well as the extension of the code
to $B$ observables.
New common blocks {\tt /HC\_RAUX/} and {\tt /HC\_CAUX/} have been introduced for the general
purpose of storing new numerical outputs which are available in {\tt CPsuperH2.0}:
\begin{itemize}
\item {\tt COMMON /HC\_RAUX/ RAUX\_H }
\item {\tt COMMON /HC\_CAUX/ CAUX\_H }
\end{itemize}
The two arrays {\tt RAUX\_H} and {\tt CAUX\_H} are {\tt NAUX}$=$999 dimensional, and
only parts of them are used at present, as shown in Tables~\ref{tab:raux} and
\ref{tab:caux}. The contents of these two new arrays are explained
in the corresponding subsections below. These common blocks can also be used by
users for their own purposes.
\subsection{Improved Treatment of Higgs-Boson Masses and Propagators}
\begin{figure}[ht]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\centerline{\epsfig{figure=mhtb_mch160.eps,height=13.5cm,width=13.5cm}}
\vspace{-0.5cm}
\caption{\it
The masses of the neutral Higgs bosons as functions of $\tan\beta$ for the CPX scenario
\cite{Carena:2000ks}
taking $\Phi_3=\Phi_{A_{t,b,\tau}}=90^\circ$ in the convention $\Phi_\mu=0$,
$M_{\rm SUSY}=0.5$ TeV, and
the charged Higgs-boson pole mass $M_{H^\pm}=160$ GeV.
In each frame, the dashed line is for the case
{\tt IFLAG\_H(12)}$=1$ and the solid line for the case indicated.
}
\label{fig:mhtb}
\end{figure}
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\centerline{\epsfig{figure=mhpa_p3m90.eps,height=14.0cm,width=14.0cm}}
\vspace{-0.5cm}
\caption{\it
The masses of the neutral Higgs bosons as functions of the common phase $\Phi_A$
for the trimixing scenario \cite{Ellis:2004fs}
taking $\Phi_3=-90^\circ$. Specifically, in this scenario,
$\tan\beta=50$ and $M_{H^\pm}=155$ GeV.
The lines are the same as in Fig.~\ref{fig:mhtb}.
}
\label{fig:mhpa}
\end{figure}
In {\tt CPsuperH2.0} we make three main
improvements in the calculation of the Higgs-boson pole masses.
\begin{itemize}
\item
The finite threshold corrections induced by the exchanges of gluinos
and charginos have been included
in the top- and bottom-quark self-energies of the
neutral and charged Higgs bosons.
For the explicit expressions
of the self-energies, we refer to Eqs.~(B.14), (B.15), and (B.16) of
Ref.~\cite{Carena:2001fw}~\footnote{We find that
overall minus signs are missing in the expressions of $\Pi_{11,22}^{P,(c)}(s)$.}.
\item Also included are the threshold corrections to the Yukawa couplings $|h_{t,b}|$ in the
one-loop running quartic couplings, $\lambda_i^{(1)}(Q=m_t^{\rm pole})$
with $i=1-4$. For the explicit expressions of $\lambda_i^{(1)}$, we
refer to Eqs.~(3.3)-(3.6) of Ref.~\cite{Carena:2000yi}.
\item An improved
iterative method has been employed for the calculation of the pole masses.
\end{itemize}
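The iterative determination of the pole masses can be illustrated schematically. The Python sketch below is not the actual {\tt CPsuperH2.0} routine; the self-energy function is a hypothetical stand-in for the full one-loop corrections. It solves $M^2 = m^2_{\rm tree} + {\rm Re}\,\Pi(M^2)$ by fixed-point iteration, and raises an error when convergence fails, mirroring the role of the {\tt IFLAG\_H(60)} error message:

```python
def pole_mass(m2_tree, re_self_energy, tol=1e-10, max_iter=100):
    """Solve M^2 = m2_tree + Re Pi(M^2) by fixed-point iteration.

    re_self_energy(s) is a hypothetical stand-in for the full one-loop
    self-energy; a RuntimeError here plays the role of IFLAG_H(60)."""
    m2 = m2_tree
    for _ in range(max_iter):
        m2_new = m2_tree + re_self_energy(m2)
        if abs(m2_new - m2) < tol * abs(m2_new):
            return m2_new ** 0.5      # converged: return the pole mass
        m2 = m2_new
    raise RuntimeError("pole-mass iteration did not converge")
```

For a momentum-independent self-energy the iteration converges in two steps; for strongly momentum-dependent corrections more iterations (or damping) are needed.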
As a help in assessing
the improvements in the calculation of the Higgs sector, new flags {\tt IFLAG\_H(12)} and
{\tt IFLAG\_H(60)} have been introduced as follows:
\begin{itemize}
\item {\tt IFLAG\_H(12)}:
\begin{itemize}
\item {\tt IFLAG\_H(12)}$=1$: Gives the same result as that obtained by the
older version of {\tt CPsuperH}.
\item {\tt IFLAG\_H(12)}$=2$: Includes only the
threshold corrections to the neutral and charged Higgs-boson quark self-energies.
\item {\tt IFLAG\_H(12)}$=3$: Includes only the
threshold corrections to $\lambda_i^{(1)}$.
\item {\tt IFLAG\_H(12)}$=4$: Includes only the
iterative method for the pole masses.
\item {\tt IFLAG\_H(12)}$=5$ or $0$: All the improvements are fully included.
\end{itemize}
\item {\tt IFLAG\_H(60)}$=1$: This is an error message that appears
when the iterative method for the pole masses fails.
\end{itemize}
\noindent
The improvement in the threshold corrections to the top- and
bottom-quark Yukawa couplings is important when $\tan\beta$ is large and
the charged Higgs boson is light.
In Figs.~\ref{fig:mhtb} and \ref{fig:mhpa}, we show the pole masses of the neutral Higgs bosons
for the CPX \cite{Carena:2000ks} and trimixing \cite{Ellis:2004fs}
scenarios, respectively, when ${\tt IFLAG\_H(12)}=2$-$5$ as indicated. In each frame, the
old calculation with {\tt IFLAG\_H(12)}$=1$ (dashed line) is also shown for comparison.
Finally, {\tt RAUX\_H(1-6)}, {\tt RAUX\_H(10-36)}, and {\tt CAUX\_H(1-2)} are allocated for
numerical information on the Higgs-sector calculation based on a renormalization-group-improved
diagrammatic approach including dominant higher-order logarithmic and threshold
corrections~\cite{Carena:2000yi,Carena:2001fw}, see Tables~\ref{tab:raux} and \ref{tab:caux}.
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\centerline{\epsfig{figure=dh3_i12_5.eps,height=13.5cm,width=13.5cm}}
\vspace{-0.5cm}
\caption{\it
The absolute value of each component of the neutral Higgs-boson propagator matrix
$D^{H^0}({\hat{s}})$ with (red solid lines) and without (black dashed lines)
including off-diagonal absorptive parts in the trimixing scenario with
$\Phi_A=-\Phi_3=90^\circ$ and {\tt IFLAG\_H(12)}$=5$. We note that
$|D^{H^0}_{4\,4}({\hat{s}})|=1$.
The three Higgs-boson pole masses are indicated by thin vertical
lines.
}
\vspace{-0.2cm}
\label{fig:dh3}
\end{figure}
In situations where two or more MSSM Higgs bosons contribute
simultaneously to a process, the transitions between the Higgs-boson
mass eigenstates need to be considered before their decays. For this
reason, we include the {\em complete} $4\times 4$-dimensional
propagator matrix $D^{H^0}(\hat{s})$ spanned by the
basis~$(H_1,H_2,H_3,G^0)$~\cite{APNPB}, including off-diagonal
absorptive parts~\cite{Ellis:2004fs}. The dimensionless neutral
Higgs-boson propagator matrix is given by
\begin{eqnarray}
D^{H^0} (\hat{s}) &=& \nonumber \\
&&\hspace{-2.5cm}
\hat{s}\,
\left(\begin{array}{cccc}
\hat{s}-M_{H_1}^2+i\Im {\rm m}\widehat{\Pi}_{11}(\hat{s}) &
i\Im {\rm m}\widehat{\Pi}_{12}(\hat{s})&
i\Im {\rm m}\widehat{\Pi}_{13}(\hat{s}) &
i\Im {\rm m}\widehat{\Pi}_{14}(\hat{s}) \\
i\Im {\rm m}\widehat{\Pi}_{21}(\hat{s}) &
\hat{s}-M_{H_2}^2+i\Im {\rm m}\widehat{\Pi}_{22}(\hat{s})&
i\Im {\rm m}\widehat{\Pi}_{23}(\hat{s}) &
i\Im {\rm m}\widehat{\Pi}_{24}(\hat{s}) \\
i\Im {\rm m}\widehat{\Pi}_{31}(\hat{s}) &
i\Im {\rm m}\widehat{\Pi}_{32}(\hat{s}) &
\hat{s}-M_{H_3}^2+
i\Im {\rm m}\widehat{\Pi}_{33}(\hat{s}) &
i\Im {\rm m}\widehat{\Pi}_{34}(\hat{s}) \\
i\Im {\rm m}\widehat{\Pi}_{41}(\hat{s})&
i\Im {\rm m}\widehat{\Pi}_{42}(\hat{s}) &
i\Im {\rm m}\widehat{\Pi}_{43}(\hat{s}) &
\hat{s} +i\Im {\rm m}\widehat{\Pi}_{44}(\hat{s})
\end{array}\right)^{-1} ,
\label{eq:hprop}
\end{eqnarray}
where $M_{H_{1,2,3}}$ are the one-loop Higgs-boson pole masses, and
higher-order absorptive effects on $M_{H_{1,2,3}}$ have been
ignored~\cite{Carena:2001fw}. The label `4' refers to
the would-be Goldstone boson of the $Z$ boson. The absorptive part of
the Higgs-boson propagator matrix receives contributions from loops of
fermions, vector bosons, associated pairs of Higgs and vector bosons,
Higgs-boson pairs, and sfermions:
\begin{equation}
\Im {\rm m}\widehat{\Pi}_{ij}(\hat{s})=
\Im {\rm m}\widehat{\Pi}^{ff}_{ij}(\hat{s})+
\Im {\rm m}\widehat{\Pi}^{VV}_{ij}(\hat{s})+\Im {\rm m}\widehat{\Pi}^{HV}_{ij}(\hat{s}) +
\Im {\rm m}\widehat{\Pi}^{HH}_{ij}(\hat{s}) +
\Im {\rm m}\widehat{\Pi}^{\tilde{f}\tilde{f}}_{ij}(\hat{s})\,,
\end{equation}
respectively. We refer to Ref.~\cite{Ellis:2004fs} for their explicit
expressions. For the Goldstone-Higgs mixings,
$\Im {\rm m}\widehat{\Pi}_{i4\,,4i}$ and $\Im {\rm m}\widehat{\Pi}_{44}$, we take
the leading contributions ignoring all gauge-coupling mediated parts.
We also include the $2\times2$-dimensional propagator matrix for the
charged Higgs bosons $D^{H^\pm}(\hat{s})$ spanned by the basis
$(H^\pm\,,G^\pm)$, including off-diagonal absorptive parts:
\begin{equation}
D^{H^\pm}(\hat{s})=
\hat{s}\,\left(\begin{array}{cc}
\hat{s}-M_{H^\pm}^2+i\Im {\rm m}\widehat{\Pi}_{H^\pm H^\pm}(\hat{s}) &
i\Im {\rm m}\widehat{\Pi}_{H^\pm G^\pm}(\hat{s}) \\
i\Im {\rm m}\widehat{\Pi}_{G^\pm H^\pm}(\hat{s}) &
\hat{s}+i\Im {\rm m}\widehat{\Pi}_{G^\pm G^\pm}(\hat{s})
\end{array}
\right)^{-1}\,.
\end{equation}
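Numerically, both propagator matrices amount to assembling the inverse-propagator matrix at a given $\hat{s}$ and inverting it. A minimal Python sketch for the neutral case (illustrative only; the function name and interface are hypothetical, not part of {\tt CPsuperH2.0}) reads:

```python
import numpy as np

def neutral_higgs_propagator(s_hat, pole_masses, im_pi):
    """Dimensionless 4x4 propagator matrix D^{H0}(s_hat).

    pole_masses: (M_H1, M_H2, M_H3); the 4th state is the neutral
    Goldstone boson (massless diagonal entry).  im_pi: 4x4 array with
    the absorptive parts Im Pi_ij(s_hat)."""
    m2 = np.array([pole_masses[0] ** 2, pole_masses[1] ** 2,
                   pole_masses[2] ** 2, 0.0])
    # inverse propagator: diag(s - M_i^2) + i Im Pi_ij(s)
    inverse_prop = np.diag(s_hat - m2).astype(complex) + 1j * np.asarray(im_pi)
    return s_hat * np.linalg.inv(inverse_prop)
```

With the absorptive parts switched off, the matrix is diagonal with entries $\hat{s}/(\hat{s}-M_{H_i}^2)$ and $D_{44}=1$, which is the limit plotted as dashed lines in Fig.~\ref{fig:dh3}.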
The relevant Goldstone-boson couplings are given in
Appendix~\ref{sec:Goldstone}.
For the 16 elements of the neutral Higgs-boson propagator matrix
$D^{H^0}({\hat{s}})$ and for the 4 elements of the charged Higgs-boson
propagator matrix $D^{H^\pm}({\hat{s}})$, the slots {\tt
CAUX\_H(100-119)} are used as shown in Table~\ref{tab:caux}.
In Fig.~\ref{fig:dh3}, as an example, we show the absolute value of
all components of the Higgs-boson propagator matrix
$D^{H^0}({\hat{s}})$ as functions of $\sqrt{\hat{s}}$ for the
trimixing scenario with $\Phi_A=-\Phi_3=90^\circ$.
It is important to remark that the $4\times 4$ propagator
matrix~(\ref{eq:hprop}) is sufficient to encode all $H_i - Z$- and $G^0 -
Z$ mixing effects within the Pinch Technique (PT)
framework~\cite{APNPB,PT}, which has been adopted here to remove consistently
gauge-dependent and high-energy unitarity-violating terms from
$\Im {\rm m} \widehat{\Pi}_{ij} (\hat{s})$~\cite{Ellis:2004fs}. For
example, the self-energy transition $H_i \to Z_\mu$,
$\widehat{\Pi}^\mu_{Z H_i} = p^\mu \widehat{\Pi}_{Z H_i}$, is related
to $\widehat{\Pi}_{G^0 H_i}$ through
\begin{equation}
\hat{s}\; \widehat{\Pi}_{Z H_i} (\hat{s})\ =\ -\, i\, M^2_Z\;
\widehat{\Pi}_{G^0 H_i} (\hat{s})\; ,
\end{equation}
with $\hat{s} = p^2$. We recall that the self-energy transitions $H_i\to
\gamma$ and $G^0\to \gamma$ are completely absent within the PT
framework. More details may be found in~\cite{APNPB}.
Note that the elements of the propagator matrix depend on the
center-of-mass energy, denoted by $\sqrt{\hat{s}}$, which is stored in
{\tt RAUX\_H(101)}, see Table~\ref{tab:raux}.
Along with $D^{H^0\,,H^\pm}(\hat{s})$, the $\hat{s}$-dependent
couplings of the neutral Higgs bosons to two gluons,
$S^{g}_i(\sqrt{\hat{s}})$ and $P^{g}_i(\sqrt{\hat{s}})$, and two
photons, $S^{\gamma}_i(\sqrt{\hat{s}})$ and
$P^{\gamma}_i(\sqrt{\hat{s}})$, are needed when we consider the
production of the neutral Higgs bosons and study its CP properties at the
LHC~\cite{gluon_fusion,lhc_cp,Ellis:2004fs}
and a $\gamma \gamma$ collider~\cite{photon_collider,photon_cp,Ellis:2004hw}.
They are calculated and stored in {\tt CAUX\_H(130-135)} and {\tt
CAUX\_H(140-145)} as shown in Table~\ref{tab:caux}.
We have included the dominant contributions coming from the
$\tan\beta$-enhanced loops of sbottoms and gluinos and the subdominant ones
coming from the stop-higgsino mediated diagrams.
Also included are
the resummed corrections to the Yukawa couplings.
For the electroweak corrections, see the next subsection.
For the next-to-leading-order QCD corrections, appropriately calculated
$K$ factors should be taken into account separately in the calculation of
production cross sections~\cite{QCD1,QCD2}.
Two additional flags are used to control the inclusion of the
off-diagonal absorptive parts and to print out the
$\hat{s}$-dependent propagator matrix and the $\hat{s}$-dependent
Higgs couplings to two photons and gluons:
\begin{itemize}
\item {\tt IFLAG\_H(13)}$=1$: Does not include the off-diagonal
absorptive parts in the propagator matrices
$D^{H^0\,,H^\pm}({\hat{s}})$.
\item {\tt IFLAG\_H(14)}$=1$: Prints out each component of the
Higgs-boson propagator matrices $D^{H^0\,,H^\pm}({\hat{s}})$ and the
$\hat{s}$-dependent couplings $S^{\gamma\,,g}_i(\sqrt{\hat{s}})$ and
$P^{\gamma\,,g}_i(\sqrt{\hat{s}})$.
\end{itemize}
\subsection{Improved Treatment of Higgs-Boson Couplings and Decays}
The main updates include:
\begin{itemize}
\item The electroweak corrections to the neutral Higgs couplings to pairs of tau
leptons and $b$-quarks~\cite{Guasch:2001wv}. The explicit formulae used in the code
for the corrections, including non-vanishing CP phase effects,
could be found in Ref.~\cite{Ellis:2004fs} and
Eqs.(A.1)-(A.2) of Ref.~\cite{cpsuperh}.
\item The three-body decay $H^+ \rightarrow t^* \bar{b} \rightarrow W^+ b \bar{b}$.
Some three-body decays play an important role in Higgs searches~\cite{three_body}.
In addition to the three-body decays involving more than one massive gauge boson
considered previously, we include
the three-body decay $H^+ \rightarrow t^* \bar{b} \rightarrow W^+ b \bar{b}$ in the new
version. The decay width is given by
\begin{eqnarray}
&&\hspace{-1.0cm}\Gamma(H^+ \rightarrow W^+ b \bar{b}) = \nonumber \\
&&\hspace{1.0cm}N_C\frac{g^2\,g_{tb}^2\,M_{H^\pm}}{512\pi^3}
\int_0^{1-\kappa_W}{\rm d}x_1
\int_{1-\kappa_W-x_1}^{1-\frac{\kappa_W}{1-x_1}}{\rm d}x_2
\frac{F(x_1,x_2)}{(1-x_2-\kappa_t+\kappa_b)^2+\kappa_t\gamma_t} ,
\end{eqnarray}
where $\kappa_x\equiv m_x^2/M_{H^\pm}^2$,
$\gamma_t\equiv \Gamma_t^2/M_{H^\pm}^2$ and $x_i\equiv 2E_i/M_{H^\pm}$
with $E_1$ and $E_2$ being the energies of the $b$ and $\bar{b}$ quarks, respectively.
In the charged Higgs-boson rest frame, the function $F(x_1,x_2)$ is given by
\begin{eqnarray}
F(x_1,x_2) &=&
\Bigg\{|g_L|^2\Bigg[\kappa_t\Bigg(\frac{(1-x_1)(1-x_2)}{\kappa_W}
+2x_1+2x_2-3+2\kappa_W\Bigg) -2\kappa_b\kappa_t\Bigg]
\nonumber \\ &&
+|g_R|^2\Bigg[\frac{x_2^3+x_1x_2^2-3x_2^2-2x_1x_2+3x_2+x_1-1}{\kappa_W}
\nonumber \\ && \hspace{1.5cm}
+(x_2^2+2x_1x_2-4x_2-2x_1+3-2\kappa_W)
\nonumber \\ && \hspace{1.5cm}
+\kappa_b\Bigg(-2x_1+3+2\kappa_W+
\frac{-2x_2^2-x_1x_2+5x_2+x_1-3}{\kappa_W}\Bigg)-2\kappa_b^2 \Bigg]
\nonumber \\ &&
+2\sqrt{\kappa_b\kappa_t}\,\Re {\rm e}({g_Lg_R^*})\Bigg[
\frac{(x_2-1)^2}{\kappa_W}+(-x_2+1-2\kappa_W)+2\kappa_b\Bigg]\Bigg\} ,
\end{eqnarray}
where $g_L\equiv g^S_{H^+\bar{t}b}-ig^P_{H^+\bar{t}b}$ and
$g_R\equiv g^S_{H^+\bar{t}b}+ig^P_{H^+\bar{t}b}$.
\item The contributions from tau-lepton and charm-quark loops to the couplings
$S^\gamma_i(M_{H_i})$ and $P^\gamma_i(M_{H_i})$.
\item A new flag {\tt IFLAG\_H(57)}$=1$: This is an error message that appears
when one of the magnitudes of the complex input parameters is negative.
\end{itemize}
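The Dalitz-type phase-space integration entering the three-body width above can be set up generically. The following Python sketch (a hypothetical helper, with the integrand passed as a callable rather than hard-coding the full $F(x_1,x_2)$ and Breit-Wigner denominator) integrates over the region $0 \le x_1 \le 1-\kappa_W$, $1-\kappa_W-x_1 \le x_2 \le 1-\kappa_W/(1-x_1)$:

```python
from scipy.integrate import dblquad

def dalitz_integral(integrand, kappa_w):
    """Integrate integrand(x1, x2) over the H+ -> W+ b bbar Dalitz region
    0 <= x1 <= 1 - kappa_w,
    1 - kappa_w - x1 <= x2 <= 1 - kappa_w / (1 - x1),
    with x_i = 2 E_i / M_{H+} and kappa_w = m_W^2 / M_{H+}^2
    (boundaries in the massless-b limit)."""
    value, _ = dblquad(lambda x2, x1: integrand(x1, x2),
                       0.0, 1.0 - kappa_w,
                       lambda x1: 1.0 - kappa_w - x1,
                       lambda x1: 1.0 - kappa_w / (1.0 - x1))
    return value
```

With the unit integrand this returns the Dalitz-region area, $(1-\kappa_W)^2/2 + \kappa_W(1-\kappa_W) + \kappa_W\ln\kappa_W$, a useful closure test before inserting the physical integrand.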
The {\tt CPsuperH} homepage has been kept up to date
since its first appearance, incorporating the updates discussed in this subsection and
others not mentioned here. We refer to the file {\tt 0LIST\_V1},
available on the {\tt CPsuperH} homepage, for a full list of updates to the original
version.
\section{Higgs-Mediated Two-Loop Electric Dipole Moments}
\begin{figure}[t]
\hspace{ 0.0cm}
\vspace{-1.2cm}
\centerline{\epsfig{figure=dtl_cpx.eps,height=13.5cm,width=13.5cm}}
\vspace{-0.7cm}
\caption{\it
The Thallium EDM $\hat{d}_{\rm Tl} \equiv d^H_{\rm Tl}
\times 10^{24}$~$[e\,cm]$ in the CPX scenario with $\Phi_A=\Phi_3=90^\circ$
and $M_{\rm SUSY}=0.5$ TeV taking {\tt IFLAG\_H(12)}$=5$~\cite{Lee:2007ai}.
The different shaded regions correspond to different ranges of $|\hat{d}_{\rm Tl}|$ as
shown. Specifically, in the narrow region denoted by black squares,
one has $|\hat{d}_{\rm Tl}|<1$,
consistent with the current Thallium EDM constraint.
}
\label{fig:dtl}
\end{figure}
The CP phases in the MSSM are significantly constrained by measurements
of Electric Dipole Moments (EDMs).
In particular, the EDM of the Thallium atom may provide currently the most stringent
constraint on MSSM scenarios with explicit CP violation.
The atomic EDM of $^{205}$Tl receives its main contributions
from two terms~\cite{KL,PR}:
\begin{eqnarray}
d_{\rm Tl}\,[e\,cm]\ &=&\ -585\cdot d_e\,[e\,cm]\:
-\: 8.5\times 10^{-19}\,[e\,cm]\cdot (C_S\,{\rm TeV}^2)+ \cdots\,, \nonumber \\
&\equiv & (d_{\rm Tl})^e\,[e\,cm] + (d_{\rm Tl})^{C_S}\,[e\,cm] + \cdots\,,
\end{eqnarray}
where $d_e$ denotes the electron EDM and $C_S$ is the coefficient of
the CP-odd electron-nucleon interaction ${\cal
L}_{C_S}=C_S\,\bar{e}i\gamma_5 e\,\bar{N}N$. The dots denote
sub-dominant contributions from 6-dimensional tensor and
higher-dimensional operators.
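Given $d_e$ and $C_S$ from a model calculation, the two leading terms combine as in the formula above. A one-line Python sketch (hypothetical function name, keeping only those two terms):

```python
def thallium_edm(d_e, c_s_tev2):
    """d_Tl in units of e*cm from the electron EDM d_e [e cm] and the
    coefficient C_S of the CP-odd electron-nucleon interaction,
    supplied as the dimensionless combination C_S * TeV^2."""
    return -585.0 * d_e - 8.5e-19 * c_s_tev2
```

The relative sign and the large atomic enhancement factor $-585$ make clear that even a partial cancellation between the two terms requires correlated phases in $d_e$ and $C_S$.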
The contributions of the first- and second-generation phases,
$\Phi_{A_{e,\mu}}$ and $\Phi_{A_{d,s}}$, to EDMs can be
drastically reduced either by assuming that these phases are sufficiently
small, or if the first- and second-generation squarks and sleptons are
sufficiently heavy. However, even
when the contributions of the first- and second-generation phases to
EDMs are suppressed, there are still sizeable contributions to EDMs
from Higgs-mediated two-loop diagrams~\cite{CKP}.
The Higgs-mediated two-loop Thallium ($d^H_{\rm Tl}$), electron ($d^H_e$), and
muon ($d^H_\mu$) EDMs are calculated and stored in {\tt RAUX\_H(111-120)}
as shown in Table~\ref{tab:raux}. The Thallium and electron EDMs consist of:
\begin{eqnarray}
d^H_{\rm Tl}&=&(d^H_{\rm Tl})^e+(d^H_{\rm Tl})^{C_S}\,,\nonumber \\
d^H_e&=&(d^H_e)^{\tilde{t}}+(d^H_e)^{\tilde{b}}+(d^H_e)^t+(d^H_e)^b
+(d^H_e)^{\tilde{\chi}^\pm}\,.
\end{eqnarray}
The explicit expressions for the EDMs in the {\tt CPsuperH} conventions and notations may be
found in Ref.~\cite{Ellis:2005ik}. A flag {\tt IFLAG\_H(15)}$=1$ is used to print out the
results of the EDM calculations:
\begin{itemize}
\item {\tt IFLAG\_H(15)}$=1$: Print out EDMs.
\end{itemize}
In Fig.~\ref{fig:dtl}, we show the rescaled Thallium EDM $\hat{d}_{\rm
Tl} \equiv d^H_{\rm Tl} \times 10^{24}$ in units of $e\,cm$ in the
$\tan\beta$-$M_{H_1}$ plane, in the CPX scenario with {\tt IFLAG\_H(12)}$=5$.
We observe that, when $\tan\beta \raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 5$ and $M_{H_1}\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 10$ GeV,
one may have $|\hat{d}_{\rm Tl}|<1$ only in the narrow region denoted by black
squares, which
is consistent with the current 2-$\sigma$ upper bound on the Thallium EDM
\cite{Regan:2002ta}: $|d_{\rm Tl}|\ \raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~\ 1.3\times 10^{-24}\,[e\,cm]$.
We note that the region $8~{\rm GeV}\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ M_{H_1}\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 10$ GeV
with $\tan\beta \raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 10$
has not been excluded by the combined constraints from
the LEP searches~\cite{LEP_HIGGS}
and the $\Upsilon(1S)\rightarrow \gamma H_1$ decay~\cite{upsilon_visible}.
The Thallium EDM constraint can be evaded by assuming cancellations
between the two-loop contributions considered here and possible
one-loop contributions which depend on
different CP-odd phases related to the first and second generations
of squarks and sleptons. For example, assuming a cancellation to better than 1 part
in 10, the region with $1 \leq |\hat{d}_{\rm Tl}| < 10$ in Fig.~\ref{fig:dtl} is
allowed.
In the future, this treatment of the most important two-loop
contributions to the Thallium EDM will be supplemented by a more
complete implementation of calculations of the well-known 1-loop
contributions to this and other EDMs.
\section{$B$-Meson Observables}
An important innovation in {\tt CPsuperH2.0} is the inclusion of
the following important Higgs-mediated
$B$-meson observables:
\begin{itemize}
\item The branching ratio of $B_s$ meson into a pair of muons: $B(B_s\to \mu\mu)$,
\item The branching ratio of $B_d$ meson into a pair of tau leptons: $B(B_d\to \tau\tau)$,
\item The SUSY contribution to the $B_d^0$-$\bar{B}_d^0$ mass difference:
$\Delta M_{B_d}^{\rm SUSY}$,
\item The SUSY contribution to the $B_s^0$-$\bar{B}_s^0$ mass difference:
$\Delta M_{B_s}^{\rm SUSY}$,
\item The ratio of the branching ratio $B(B_u\to \tau\nu)$ to its SM value:
$R_{B\tau\nu}= B(B_u^-\to \tau\nu)/B^{\rm SM}(B_u^-\to \tau\nu)$,
\item The branching ratio $B(B\to X_s \gamma)$ and the direct CP asymmetry ${\cal A}_{\rm
CP}(B\to X_s\gamma)$.
\end{itemize}
We adopt the most recent gauge-invariant and flavour-covariant formalism to calculate
the flavour-changing effective Lagrangian for the interactions of the
neutral and charged Higgs fields to the up- and down-type quarks including a new class of
dominant subleading contributions~\cite{MCPMFV}.
In the current version, the single-Higgs insertion approximation is used.
For the calculations of $B$-meson observables, the array {\tt SMPARA\_H} for the SM
parameters has been
extended to include information on the CKM matrix,
parameterized via $\lambda$, $A$, $\bar\rho$, and
$\bar\eta$, as seen in Table~\ref{tab:smpara}. The CKM matrix is constructed as~\cite{PDG}
\begin{equation}
V=\left(
\begin{array}{ccc}
c_{12}\,c_{13} & s_{12}\,c_{13} & s_{13}\,e^{-i\,\delta} \\
-s_{12}\,c_{23}-c_{12}\,s_{23}\,s_{13}\,e^{i\,\delta} &
c_{12}\,c_{23}-s_{12}\,s_{23}\,s_{13}\,e^{i\,\delta} & s_{23}\,c_{13} \\
s_{12}\,s_{23}-c_{12}\,c_{23}\,s_{13}\,e^{i\,\delta} &
-c_{12}\,s_{23}-s_{12}\,c_{23}\,s_{13}\,e^{i\,\delta} & c_{23}\,c_{13}
\end{array}
\right)\,,
\end{equation}
where $s_{ij}=\sin\theta_{ij}$, $c_{ij}=\cos\theta_{ij}$, and $\delta$ is the KM phase
with $s_{ij}\,,c_{ij} \geq 0$. In terms of $\lambda$, $A$, $\bar\rho$, and
$\bar\eta$, they are given by
\begin{equation}
s_{12}=\lambda \,, \ \ \ s_{23}=A\lambda^2\,, \ \ \
s_{13}\,e^{i\,\delta}=\frac{A\lambda^3(\bar\rho+i\,\bar\eta)\sqrt{1-A^2\lambda^4}}
{\sqrt{1-\lambda^2}\left[1-A^2\lambda^4(\bar\rho+i\,\bar\eta)\right]}\,,
\end{equation}
and $c_{ij}=\sqrt{1-|s_{ij}|^2}$.
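The construction of the CKM matrix from $(\lambda, A, \bar\rho, \bar\eta)$ can be transcribed directly. The Python sketch below (illustrative, with hypothetical naming; not the {\tt CPsuperH2.0} source) builds $V$ from the relations above and is exactly unitary by construction:

```python
import numpy as np

def ckm_matrix(lam, A, rho_bar, eta_bar):
    """CKM matrix in the PDG parameterization from (lambda, A, rho_bar, eta_bar)."""
    s12 = lam
    s23 = A * lam ** 2
    z = rho_bar + 1j * eta_bar
    # s13 * exp(i delta) from the exact relation quoted in the text
    s13e = (A * lam ** 3 * z * np.sqrt(1.0 - A ** 2 * lam ** 4)
            / (np.sqrt(1.0 - lam ** 2) * (1.0 - A ** 2 * lam ** 4 * z)))
    s13, delta = abs(s13e), np.angle(s13e)
    c12, c23, c13 = (np.sqrt(1.0 - s ** 2) for s in (s12, s23, s13))
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * np.conj(e)],
        [-s12 * c23 - c12 * s23 * s13 * e,
          c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e,
         -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]])
```

A quick check is that $V V^\dagger = \mathbbm{1}$ holds to machine precision and $|V_{us}| \simeq \lambda$ for standard input values.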
The SUSY parameter array {\tt SSPARA\_H} is also extended to include the
hierarchy factors
$\rho_{\tilde{Q},\tilde{U},\tilde{D},\tilde{L},\tilde{E}}$ between the first two and
third generations~\cite{DP}, see Table~\ref{tab:sspara}. In the super-CKM basis,
the $3\times 3$ squark mass matrices squared are taken to be diagonal:
\begin{eqnarray}
{\bf \widetilde{M}}^2_Q \ &=& \ m_{\tilde{Q}_3}^2 \
\times \ {\rm diag}\,(\rho_{\tilde{Q}}^2,\rho_{\tilde{Q}}^2,1)\,, \nonumber \\
{\bf \widetilde{M}}^2_U \ &=& \ m_{\tilde{U}_3}^2 \
\times \ {\rm diag}\,(\rho_{\tilde{U}}^2,\rho_{\tilde{U}}^2,\,1)\,, \nonumber \\
{\bf \widetilde{M}}^2_D \ &=& \ m_{\tilde{D}_3}^2 \
\times \ {\rm diag}\,(\rho_{\tilde{D}}^2,\rho_{\tilde{D}}^2,1)\,, \nonumber \\
{\bf \widetilde{M}}^2_L \ &=& \ m_{\tilde{L}_3}^2 \
\times \ {\rm diag}\,(\rho_{\tilde{L}}^2,\rho_{\tilde{L}}^2,1)\,, \nonumber \\
{\bf \widetilde{M}}^2_E \ &=& \ m_{\tilde{E}_3}^2 \
\times \ {\rm diag}\,(\rho_{\tilde{E}}^2,\rho_{\tilde{E}}^2,1)\,.
\end{eqnarray}
Finally, the results for the $B$-meson observables are stored in {\tt RAUX\_H(130-136)} as
shown in Table~\ref{tab:raux}. The SUSY contributions to the $\Delta B=2$ transition
amplitudes are stored in {\tt CAUX\_H(150)} and {\tt CAUX\_H(151)}, see Table~\ref{tab:caux}.
Note the relations ${\tt RAUX\_H(132)}=2\times|{\tt CAUX\_H(150)}|$ and
${\tt RAUX\_H(133)}=2\times|{\tt CAUX\_H(151)}|$.
Two flags {\tt IFLAG\_H(16)} and {\tt IFLAG\_H(17)}
are used to print out the results of the calculation of
$B$-meson observables:
\begin{itemize}
\item {\tt IFLAG\_H(16)}$=1$: Print out $B$-meson observables.
\item {\tt IFLAG\_H(17)}$=1$: Print out details of the $B\to X_s \gamma$ calculation.
\end{itemize}
For numerical examples of $B$-meson observables, we take the CPX
scenario~\cite{Carena:2000ks} with $M_{\rm SUSY}=0.5$ TeV and the
common $A$-term phase $\Phi_A\equiv\Phi_{A_t}=\Phi_{A_b}=\Phi_{A_\tau}$
in the convention $\Phi_\mu=0^\circ$.
We take account of the dependence on the hierarchy factors
$\rho_{\tilde{Q},\tilde{U},\tilde{D}}$ between the first two and the third generations,
taking a common value $\rho$ for the three of them.
Figure~\ref{fig:bsmm_p3} shows the dependence of the branching ratio
$B(B_s\to\mu^+\mu^-)$ on the phase of the
gluino mass parameter $\Phi_3$ for four values of $\tan\beta$. The charged Higgs-boson
pole mass is fixed at $M_{H^\pm}=200$ GeV. In each frame, two sets of three lines
are shown. The upper lines are for higher $\rho=10$ and the lower ones for $\rho=1$.
For fixed $\rho$, three lines show the cases of $\Phi_A=0^\circ$ (solid),
$90^\circ$ (dashed), and $180^\circ$ (dash-dotted). The $\rho$ dependence is shown in
Fig.~\ref{fig:bsmm_rho}. We clearly see the {\it GIM operative point} mechanism
discussed in Ref.~\cite{DP} around $\rho\sim 1.2$ when
$(\Phi_3\,,\Phi_A)=(0^\circ\,,180^\circ)$ (solid lines).
Figure~\ref{fig:bsmm_mh1tb} shows
the rescaled branching ratio $\widehat{B}_\mu\equiv B(B_s\to\mu^+\mu^-)\times 10^7$
in the $M_{H_1}$-$\tan\beta$ plane
when the phases are fixed at $\Phi_A=\Phi_3=90^\circ$. The unshaded region is not
theoretically allowed. Only the region with $\widehat{B}_\mu < 0.58$ is consistent
with the current experimental upper limit at 95 \% C.L., corresponding to
$\tan\beta \raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 20\,(8)$ for $\rho=1\,(10)$.
The rescaled branching ratio
$\widehat{B}_{s\gamma}\equiv B(B\to X_s\gamma)\times 10^4$
is shown in Fig.~\ref{fig:bsg_mh1tb}. In contrast to the $B_s\to\mu^+\mu^-$ case,
we observe that the higher-$\tan\beta$ region is experimentally allowed:
$\tan\beta \raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 35\,(20)$ for $\rho=1\,(10)$. This is because the charged-Higgs
contribution is suppressed due to the threshold corrections when $\tan\beta$ is large.
The charged-Higgs contribution to $B\to X_s\gamma$ is proportional
to $1/(1+|\kappa|^2\tan^2\beta)$~\cite{Carena:2000uj},
where $\kappa$ represents the threshold corrections with
$|\kappa|\simeq 0.05$ for the parameters chosen~\cite{Borzumati:2004rd}.
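The size of this suppression can be illustrated with a few lines of Python (an indicative sketch, with $|\kappa|=0.05$ taken from the text; not {\tt CPsuperH} code):

```python
# Indicative sketch of the tan(beta) suppression of the charged-Higgs
# contribution to B -> X_s gamma quoted above; |kappa| ~ 0.05 is the
# value cited in the text for the chosen parameters.
def charged_higgs_suppression(tan_beta, kappa=0.05):
    return 1.0 / (1.0 + kappa**2 * tan_beta**2)

# The contribution is nearly unsuppressed at low tan(beta) but strongly
# damped at high tan(beta):
low = charged_higgs_suppression(5.0)    # ~0.94
high = charged_higgs_suppression(40.0)  # 0.20
```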
Figure~\ref{fig:rbtn_mh1tb} shows the ratio of the branching ratio $B(B_u\to \tau\nu)$ to
its SM value, $R_{B\tau\nu}$. In the left frame with $\rho=1$, we see
two connected bands of the experimentally
allowed 1-$\sigma$ region, $0.62 < R_{B\tau\nu} <1.38$. If we consider the
2-$\sigma$ limit, only the upper-left region with
$M_{H_1}\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 95$ GeV and $\tan\beta \raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 35$
is not allowed. For larger $\rho=10$, the
allowed region becomes narrower.
In Fig.~\ref{fig:three_mh1tb}, we show the region satisfying the
experimental constraints from $B(B_s \to \mu^+\mu^-)$ (95 \%),
$B(B \to X_s\gamma)$ (2 $\sigma$), and $R_{B\tau\nu}$ (1 $\sigma$). First we observe
that there is no region that satisfies the $B_s \to \mu^+\mu^-$ and
$B \to X_s\gamma$ constraints simultaneously for both $\rho=1$ and $10$.
If one neglects the constraint
from $B(B_s \to \mu^+\mu^-)$, only the high-$\tan\beta$ region would remain.
Taking account of the $B_u\to \tau\nu$ constraint, the region with
$\tan\beta\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 36$ and $M_{H_1}\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 80$ GeV is allowed when $\rho=1$.
On the other hand, neglecting the constraint from $B(B \to X_s\gamma)$, the allowed region is
constrained in the parameter space with $\tan\beta\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 20$ and $M_{H_1}\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 10$ GeV
for $\rho=1$. For $\rho=10$, the $B \to X_s\gamma$ constraint is relaxed but those
from $B(B_s \to \mu^+\mu^-)$ and $R_{B\tau\nu}$ become more stringent.
Finally, in Fig.~\ref{fig:bdtt_dmb}, we show
the region allowed experimentally by the measurement $B(B_d\to
\tau^+\tau^-) < 4.1 \times 10^{-3}$ (90 \%)~\cite{BDTT} (upper frames) and the regions
where the SUSY contribution is smaller than the measured values of
$B_s^0$-$\bar{B}_s^0$ mass difference~\cite{Evans:2007hq} (middle frames)
and $B_d^0$-$\bar{B}_d^0$ mass difference~\cite{PDG} (lower frames).
We see that the $B(B_d \to \tau^+\tau^-)$ constraint has the least impact on
these parameter planes, whereas the impacts of the $B_s^0$-$\bar{B}_s^0$
and $B_d^0$-$\bar{B}_d^0$ mass differences are similar.
These examples illustrate the possible interplays between the different
$B$-meson observables, and how they may vary significantly with the values of
the CP-violating phases. {\tt CPsuperH2.0} provides a unique tool for combining
these constraints and pursuing their implications for other observables.
In the future, the {\tt CPsuperH2.0} treatment of these important $B$-meson observables
will be supplemented by the implementation of calculations
of other flavour observables, including the $K$ sector.
\section{Summary and Outlook}\label{sec:summary}
We have presented in this paper a description of the new features of
the Fortran code {\tt CPsuperH2.0}. In addition to improved
calculations of the Higgs-boson pole masses with a more complete
treatment of threshold effects in self-energies and Yukawa couplings,
the {\em complete} $4\times 4$ ($2\times 2$) neutral (charged)
Higgs-boson propagator matrices with the Goldstone-Higgs mixing
effects have been consistently implemented. Specifically, the neutral
Higgs-boson propagator matrix constitutes a necessary ingredient for
the studies of a system of strongly-mixed Higgs bosons at colliders
together with the center-of-mass dependent Higgs-boson couplings to
gluons and photons. It also provides the improved Higgs-boson
couplings to tau leptons, $b$ quarks, and two photons. The important
three-body decay $H^+ \to t^* \bar{b} \to W^+ b \bar{b}$ is included.
In order to provide a more complete, consistent tool for calculating
CP-violating observables in the MSSM, and specifically to incorporate
the important constraints coming from precision experiments at
low energies, {\tt CPsuperH2.0} has been extended to include a number
of $B$-meson observables, as well as the Higgs-mediated two-loop
contributions to EDMs of the Thallium atom, electron and muon.
The currently available $B$-meson observables are the branching ratios
of $B_s \to \mu^+ \mu^-$, $B_d \to \tau^+ \tau^-$, $B_u \to \tau \nu$,
$B \to X_s \gamma$ and the latter's CP-violating asymmetry ${\cal
A}_{\rm CP}$, and the supersymmetric contributions to the $B^0_{s,d} -
{\bar B^0_{s,d}}$ mass differences. Further low-energy observables
are to be included in future updates.
The improved Fortran code {\tt CPsuperH2.0} provides a coherent and
complete numerical framework in which one can calculate consistently
observables in both low- and high-energy experiments probing physics
beyond the SM.
\vspace{-0.2cm}
\subsection*{Acknowledgements}
\vspace{-0.3cm}
\noindent
The work of J.S.L. was supported in part by the Korea Research
Foundation and the Korean Federation of Science and Technology
Societies Grant funded by the Korea Government (MOEHRD, Basic Research
Promotion Fund) and in part by the National Science Council of Taiwan, R.O.C.
under Grant No. NSC 96-2811-M-008-068.
The work of A.P. was supported in part by the STFC
research grant PP/D000157/1.
Work at ANL is supported in part by the US DOE, Div. of HEP, Contract
DE-AC02-06CH11357. Fermilab is operated by Universities Research Association Inc.
under contract no. DE-AC02-76CH02000 with the DOE.
We thank S.Y.~Choi and M.~Drees for past
collaboration on {\tt CPsuperH}, and for discussions on this updated version.
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\centerline{\epsfig{figure=bsmm_p3.eps,height=14cm,width=14cm}}
\vspace{-0.5cm}
\caption{\it The branching ratio $B(B_s \to \mu^+\mu^-) \times 10^7$ as a function of
$\Phi_3$ for four values of $\tan\beta$: $\tan\beta=10$ (upper left), 20 (upper right),
30 (lower left), and 40 (lower right).
The CPX scenario is taken with $M_{\rm SUSY}=0.5$ TeV and $M_{H^\pm}=200$ GeV
in the convention $\Phi_\mu=0$. In each frame, the lower three lines are for the case
$\rho\equiv\rho_{\tilde{Q}}=\rho_{\tilde{U}}=\rho_{\tilde{D}}=1$ and the upper lines for
$\rho=10$ where the solid, dashed, and dash-dotted lines are for $\Phi_A=0^\circ$,
$90^\circ$, and $180^\circ$, respectively. The current 95 \% experimental upper
bound, $5.8 \times 10^{-8}$~\cite{CDF:2007kv}, is also shown as
a horizontal line in each frame. }
\label{fig:bsmm_p3}
\end{figure}
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\centerline{\epsfig{figure=bsmm_rho.eps,height=14cm,width=14cm}}
\vspace{-0.5cm}
\caption{\it The branching ratio $B(B_s \to \mu^+\mu^-) \times 10^7$ as a function of the
common hierarchy factor $\rho\equiv\rho_{\tilde{Q}}=\rho_{\tilde{U}}=\rho_{\tilde{D}}$
for four values of $\tan\beta$: $\tan\beta=10$ (upper left), 20 (upper right),
30 (lower left), and 40 (lower right).
The CPX scenario is taken with $M_{\rm SUSY}=0.5$ TeV and $M_{H^\pm}=200$ GeV
in the convention $\Phi_\mu=0$. In each frame, the solid line is for
$(\Phi_3\,,\Phi_A)=(0^\circ\,,180^\circ)$ and the dashed one for $(90^\circ\,,90^\circ)$.
The current 95 \% experimental upper bound, $5.8 \times
10^{-8}$~\cite{CDF:2007kv}, is also shown as
a horizontal line in each frame. }
\label{fig:bsmm_rho}
\end{figure}
\vspace{-0.5cm}
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\begin{center}
\includegraphics[width=7.5cm]{bsmm_mh1tb_1.eps}
\includegraphics[width=7.5cm]{bsmm_mh1tb_10.eps}
\end{center}
\vspace{-0.5cm}
\caption{\it The branching ratio $\widehat{B}_\mu\equiv B(B_s \to \mu^+\mu^-) \times 10^7$
in the $(\tan\beta\,,M_{H_1})$ plane. The CPX scenario is taken
with $\Phi_A=\Phi_3=90^\circ$ and $M_{\rm SUSY}=0.5$ TeV for two values of
the common hierarchy factor: $\rho=1$ (left) and $10$ (right).
The unshaded region is not theoretically allowed.
The different shaded regions correspond to different ranges of
$\widehat{B}_\mu$, as shown: specifically,
$\widehat{B}_\mu < 0.58$ in the lowest (blue) low-$\tan\beta$
region, consistent with the current
upper limit at 95 \% C.L. }
\label{fig:bsmm_mh1tb}
\end{figure}
\vspace{-1.5cm}
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\begin{center}
\includegraphics[width=7.5cm]{bsg_mh1tb_1.eps}
\includegraphics[width=7.5cm]{bsg_mh1tb_10.eps}
\end{center}
\vspace{-0.5cm}
\caption{\it The branching ratio $\widehat{B}_{s\gamma}\equiv B(B \to X_s \gamma)
\times 10^4$ in the $(\tan\beta\,,M_{H_1})$ plane. The same CPX scenario with
$\Phi_A=\Phi_3=90^\circ$ is taken
as in Fig.~\ref{fig:bsmm_mh1tb}.
The different shaded regions correspond to different ranges of
$\widehat{B}_{s\gamma}$, as shown: specifically, $3.03<\widehat{B}_{s\gamma} \leq 4.07$
in the uppermost (blue) high-$\tan\beta$ region,
consistent with the current experimentally allowed
2-$\sigma$ region, $3.03<\widehat{B}_{s\gamma} \leq 4.07$~\cite{HFAG}.}
\label{fig:bsg_mh1tb}
\end{figure}
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\begin{center}
\includegraphics[width=7.5cm]{rbtn_mh1tb_1.eps}
\includegraphics[width=7.5cm]{rbtn_mh1tb_10.eps}
\end{center}
\vspace{-0.5cm}
\caption{\it The ratio $R_{B\tau\nu}$ in the $(\tan\beta\,,M_{H_1})$ plane.
The same CPX scenario with $\Phi_A=\Phi_3=90^\circ$
is taken as in Fig.~\ref{fig:bsmm_mh1tb} for two values of $\rho$:
$\rho=1$ (left) and $10$ (right).
The different shaded regions correspond to the regions allowed at the
1-$\sigma$ and 2-$\sigma$ levels
by the recent BELLE and BABAR results: $R_{B\tau\nu}^{\rm
EXP}=1.0\pm 0.38$~\cite{Btaunu,MCPMFV}. In the right frame, specifically,
the 2-$\sigma$ excluded
regions are shown as $R_{B\tau\nu} > 1.76$ (in the high-$\tan\beta$ region)
and $R_{B\tau\nu} \leq 0.24$ (in the middle-$\tan\beta$ region). }
\label{fig:rbtn_mh1tb}
\end{figure}
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\begin{center}
\includegraphics[width=7.5cm]{three_mh1tb_1.eps}
\includegraphics[width=7.5cm]{three_mh1tb_10.eps}
\end{center}
\vspace{-0.5cm}
\caption{\it The experimental constraints from
$B(B_s \to \mu^+\mu^-)$ (95 \%), $B(B \to X_s\gamma)$ (2 $\sigma$), and $R_{B\tau\nu}$
(1 $\sigma$) in the $(\tan\beta\,,M_{H_1})$ plane for two values of $\rho$. The same CPX
scenario with $\Phi_A=\Phi_3=90^\circ$ is taken as in Fig.~\ref{fig:bsmm_mh1tb}.
}
\label{fig:three_mh1tb}
\end{figure}
\begin{figure}[htb]
\hspace{ 0.0cm}
\vspace{-0.5cm}
\centerline{\epsfig{figure=bdtt_dmb_mh1tb.eps,height=14cm,width=14cm}}
\vspace{-0.5cm}
\caption{\it The region allowed experimentally by the measurement $B(B_d\to
\tau^+\tau^-) < 4.1 \times 10^{-3}$ (90 \%)~\cite{BDTT} (upper frames) and the regions
where the SUSY contribution is smaller than the measured values of
$B_s^0$-$\bar{B}_s^0$ mass difference~\cite{Evans:2007hq} (middle frames)
and $B_d^0$-$\bar{B}_d^0$ mass difference~\cite{PDG} (lower frames), in the
$(\tan\beta\,,M_{H_1})$ plane. The left three
frames are for $\rho=1$ and the right ones for $\rho=10$. The same CPX
scenario with $\Phi_A=\Phi_3=90^\circ$ is taken as in Fig.~\ref{fig:bsmm_mh1tb}. }
\label{fig:bdtt_dmb}
\end{figure}
\newpage
\def\theequation{\Alph{section}.\arabic{equation}}
\begin{appendix}
\setcounter{equation}{0}
\section{List of changes}
Here we summarize the improved features
introduced in {\tt CPsuperH2.0} compared to the prior version
of {\tt CPsuperH}.
\begin{itemize}
\item New common blocks:
\begin{itemize}
\item {\tt COMMON /HC\_RAUX/ RAUX\_H(NAUX$=$999)}, see Table~\ref{tab:raux}
\item {\tt COMMON /HC\_CAUX/ CAUX\_H(NAUX$=$999)}, see Table~\ref{tab:caux}
\end{itemize}
\item Extended arrays for input parameters:
\begin{itemize}
\item {\tt SMPARA\_H(NSMIN$=$19)}, see Table~\ref{tab:smpara}
\item {\tt SSPARA\_H(NSSIN$=$26)}, see Table~\ref{tab:sspara}
\end{itemize}
\item New names for improved {\tt FORTRAN} files:
\begin{itemize}
\item {\tt cpsuperh.f} $~\longrightarrow ~$ {\tt cpsuperh2.f}
\item {\tt fillpara.f} $~\longrightarrow ~$ {\tt fillpara2.f}
\item {\tt fillhiggs.f} $\,\longrightarrow ~$ {\tt fillhiggs2.f}
\item {\tt fillcoupl.f} $\,\longrightarrow ~$ {\tt fillcoupl2.f}
\item {\tt fillgambr.f} $\,\longrightarrow ~$ {\tt fillgambr2.f}
\end{itemize}
\item New {\tt FORTRAN} files:
\begin{itemize}
\item {\tt filldhpg.f} is to calculate the full propagator matrices
$D^{H^0\,,H^\pm}(\hat{s})$ and
the $\hat{s}$-dependent couplings $S^{g\,,\gamma}_i(\sqrt{\hat{s}})$
and $P^{g\,,\gamma}_i(\sqrt{\hat{s}})$.
\item {\tt higgsedm.f} is to calculate Higgs-mediated two-loop EDMs of Thallium, electron, and muon.
\item {\tt fillbobs.f} is to calculate the $B$-meson observables:
$B(B_s\to \mu\mu)$, $B(B_d\to \tau\tau)$,
$\Delta M_{B_d}^{\rm SUSY}$, $\Delta M_{B_s}^{\rm SUSY}$,
$R_{B\tau\nu}$, $B(B\to X_s \gamma)$, and ${\cal A}_{\rm CP}(B\to X_s\gamma)$.
\end{itemize}
\item New flags:
\begin{itemize}
\item ${\tt IFLAG\_H(12)=0-5}$: Set the level of improvement in the calculation of the
Higgs-boson pole masses.
\item ${\tt IFLAG\_H(13)=1}$: Not to include the off-diagonal absorptive parts in the
propagator matrices $D^{H^0\,,H^\pm}(\hat{s})$.
\item ${\tt IFLAG\_H(14)=1}$: Print out the elements of the
full propagator matrices $D^{H^0\,,H^\pm}(\hat{s})$ and
the $\hat{s}$-dependent couplings $S^{g\,,\gamma}_i(\sqrt{\hat{s}})$
and $P^{g\,,\gamma}_i(\sqrt{\hat{s}})$.
\item ${\tt IFLAG\_H(15)=1}$: Print out EDMs.
\item ${\tt IFLAG\_H(16)=1}$: Print out $B$-meson observables.
\item ${\tt IFLAG\_H(17)=1}$: Print out $B \to X_s\,\gamma$ details.
\item ${\tt IFLAG\_H(57)=1}$: This is an error message that appears
when one of the magnitudes of the complex SUSY input parameters is negative.
\item ${\tt IFLAG\_H(60)=1}$: This is an error message that appears
when the iterative method for the neutral Higgs-boson pole masses fails.
\end{itemize}
\end{itemize}
\section{Goldstone-boson couplings to third-generation fermions and sfermions}
\label{sec:Goldstone}
Here we present the Goldstone-(s)fermion-(s)fermion couplings in the {\tt CPsuperH}
convention.
\begin{itemize}
\item[$\bullet$] \underline{$G^0$-$\bar{f}$-$f$}
\begin{eqnarray}
{\cal L}_{G^0\bar{f}f}=-\sum_{f=t,b,\tau}\frac{g\,m_f}{2M_W}\,G^0\,\bar{f}
\left(i\, g^P_{G^0\bar{f}f}\, \gamma_5\right)f\,,
\end{eqnarray}
where
\begin{equation}
g^P_{G^0\bar{t}t}=-1\,, \ \ \
g^P_{G^0\bar{b}b}=g^P_{G^0\bar{\tau}\tau}=+1\,.
\end{equation}
\item[$\bullet$] \underline{$G^\pm$-$\bar{f}$-$f^\prime$}
\begin{eqnarray}
{\cal L}_{G^\pm\bar{f}f^\prime}& =& \frac{g}{\sqrt{2} M_W}\, \hskip -0.2cm
\sum_{(f_{\uparrow},f_{\downarrow})=(t,b),(\nu,\tau)} \hskip -0.3cm
G^+\, \bar{f}_{\uparrow}\,\Big(\, m_{f_{\uparrow}}\,
P_L\ -\ m_{f_{\downarrow}}\, P_R\, \Big)\, f_{\downarrow}\ +\ {\rm h.c.} \\
&=&
-g_{tb}\,
G^+\,\bar{t}\,(g^S_{G^+\bar{t}b}+ig^P_{G^+\bar{t}b}\gamma_5)\,b
-g_{\nu_\tau\tau}\,
G^+\,\bar{\nu}_\tau\,(g^S_{G^+\bar{\nu}_\tau\tau}+ig^P_{G^+\bar{\nu}_\tau\tau}\gamma_5)\,\tau
\ +\ {\rm h.c.}\,,\nonumber
\end{eqnarray}
where
\begin{eqnarray}
&&g_{tb}=-\frac{g\,m_t}{\sqrt{2} M_W}\,, \ \ \
g^S_{G^+\bar{t}b}=\frac{1-m_b/m_t}{2}\,, \ \ \
g^P_{G^+\bar{t}b}=i\,\frac{1+m_b/m_t}{2}\,; \nonumber \\
&&g_{\nu_\tau\tau}=-\frac{g\,m_\tau}{\sqrt{2} M_W}\,, \ \
g^S_{G^+\bar{\nu}_\tau\tau}=-\frac{1}{2}\,, \hspace{1.50cm}
g^P_{G^+\bar{\nu}_\tau\tau}=i\,\frac{1}{2}\,.
\end{eqnarray}
\item[$\bullet$] \underline{$G^0$-$\tilde{f}^*$-$\tilde{f}$}
\begin{equation}
{\cal L}_{G^0\tilde{f}\tilde{f}}=v\sum_{f=t,b,\tau}\,g_{G^0\tilde{f}^*_i\tilde{f}_j}
(G^0\,\tilde{f}^*_i\,\tilde{f}_j)\,,
\end{equation}
where
\begin{equation}
v\,g_{G^0\tilde{f}^*_i\tilde{f}_j}
=\left(\Gamma^{G^0\tilde{f}^*\tilde{f}}\right)_{\alpha\beta}
U^{\tilde{f}*}_{\alpha i} U^{\tilde{f}}_{\beta j}\,.
\end{equation}
The couplings in the weak-interaction basis are given by
\begin{eqnarray}
\Gamma^{G^0\tilde{t}^*\tilde{t}} &=& \frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
0 & i\,h_t^*(s_\beta A_t^*-c_\beta \mu) \\
-i\,h_t(s_\beta A_t-c_\beta \mu^*) & 0
\end{array} \right)\,,
\nonumber \\
\Gamma^{G^0\tilde{b}^*\tilde{b}} &=& \frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
0 & -i\,h_b^*(c_\beta A_b^*-s_\beta \mu) \\
i\,h_b(c_\beta A_b-s_\beta \mu^*) & 0
\end{array} \right)\,,
\nonumber \\
\Gamma^{G^0\tilde{\tau}^*\tilde{\tau}} &=& \frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
0 & -i\,h_\tau^*(c_\beta A_\tau^*-s_\beta \mu) \\
i\,h_\tau(c_\beta A_\tau-s_\beta \mu^*) & 0
\end{array} \right)\,.
\end{eqnarray}
\item[$\bullet$] \underline{$G^\pm$-$\tilde{f}^*$-$\tilde{f}^\prime$}
\begin{equation}
{\cal L}_{G^\pm\tilde{f}\tilde{f'}}=v\,g_{G^+\tilde{t}^*_i\tilde{b}_j}
(G^+\,\tilde{t}^*_i\,\tilde{b}_j)\,
+\,v\,g_{G^+\tilde{\nu}_\tau^*\tilde{\tau}_i}
(G^+\,\tilde{\nu}_\tau^*\,\tilde{\tau}_i)\,+{\rm h.c.}\,,
\end{equation}
where
\begin{equation}
v\,g_{G^+\tilde{t}^*_i\tilde{b}_j}
=\left(\Gamma^{G^+\tilde{t}^*\tilde{b}}\right)_{\alpha\beta}
U^{\tilde{t}*}_{\alpha i} U^{\tilde{b}}_{\beta j} \ \ \ \ {\rm and} \ \ \ \
v\,g_{G^+\tilde{\nu}^*_\tau\tilde{\tau}_i}
=\Gamma^{G^+\tilde{\nu}^*_\tau\tilde{\tau}_\alpha}\,
U^{\tilde{\tau}}_{\alpha i}\,.
\end{equation}
The couplings in the weak-interaction basis are given by
\begin{eqnarray}
\Gamma^{G^+\tilde{t}^*\tilde{b}}\ &=&\ \left(
\begin{array}{cc} \frac{1}{\sqrt{2}}\,(|h_u|^2 s_\beta^2 - |h_d|^2 c_\beta^2)\,v \,
+\,\frac{1}{2\sqrt{2}}\,g^2c_{2\beta}\,v
&- h_d^*\, ( c_\beta A^*_d - s_\beta \mu )\\
h_u\,( s_\beta A_u - c_\beta \mu^*) &
0 \end{array}\right)\,,
\nonumber\\
\Gamma^{G^+\tilde{\nu}_\tau^*\tilde{\tau}_L}\ &=& \
-\frac{1}{\sqrt{2}}\,|h_\tau|^2c_\beta^2\,v\,+\,\frac{1}{2\sqrt{2}}\,g^2c_{2\beta}\,v\,,
\nonumber \\
\Gamma^{G^+\tilde{\nu}_\tau^*\tilde{\tau}_R}\ &=& \
-h^*_\tau \left(c_\beta A^*_\tau-s_\beta \mu\right)\,.
\end{eqnarray}
\end{itemize}
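As a simple numerical cross-check (a hedged Python sketch with placeholder input values, not {\tt CPsuperH} code), one can verify that the weak-basis coupling matrix $\Gamma^{G^0\tilde{t}^*\tilde{t}}$ given above is Hermitian, as required for the Lagrangian to be real:

```python
import cmath
import math

# Hedged sketch (placeholder inputs, not CPsuperH code): check that the
# weak-basis coupling matrix Gamma^{G0 stop*-stop} defined above is
# Hermitian, i.e. its (2,1) entry is the conjugate of its (1,2) entry.
def gamma_G0_stop(h_t, A_t, mu, tan_beta):
    sb = tan_beta / math.sqrt(1.0 + tan_beta**2)   # sin(beta)
    cb = 1.0 / math.sqrt(1.0 + tan_beta**2)        # cos(beta)
    upper = 1j * h_t.conjugate() * (sb * A_t.conjugate() - cb * mu)
    lower = -1j * h_t * (sb * A_t - cb * mu.conjugate())
    return [[0.0, upper / math.sqrt(2.0)],
            [lower / math.sqrt(2.0), 0.0]]

# CPX-like placeholder inputs: |A_t| = 1 TeV with phase 90 deg, mu = 2 TeV
G = gamma_G0_stop(h_t=1.0 + 0.0j,
                  A_t=1000.0 * cmath.exp(1j * math.pi / 2.0),
                  mu=2000.0 + 0.0j, tan_beta=5.0)
```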
\setcounter{equation}{0}
\section{Sample new outputs}
Here we show the new outputs of {\tt CPsuperH2.0} for the CPX scenario
with $\tan\beta=5$, $M_{H^\pm}=300$ GeV, $M_{\rm SUSY}=500$ GeV, and $\Phi_A=\Phi_3=90^\circ$.
\begin{itemize}
\item ${\tt IFLAG\_H(1)} =1$: In the new version, we are using $m_b(m_t^{\rm pole})=3.155$ GeV
and $m_c(m_t^{\rm pole})=0.735$ GeV as defaults. Note also that the list of the SM
and SUSY input parameters is
extended to include the CKM matrix and the diagonal sfermion mass matrices.
\\
{\tt
~---------------------------------------------------------\\
$~~$Standard~Model~Parameters~~in~/HC\_SMPARA/\\
~---------------------------------------------------------\\
$~~$AEM\_H~~~~=~0.7812E-02~:~alpha\_em(MZ)\\
$~~$ASMZ\_H~~~=~0.1185E+00~:~alpha\_s(MZ)\\
$~~$MZ\_H~~~~~=~0.9119E+02~:~Z~boson~mass~in~GeV\\
$~~$SW\_H~~~~~=~0.4808E+00~:~sinTheta\_W\\
$~~$ME\_H~~~~~=~0.5000E-03~:~electron~mass~in~GeV\\
$~~$MMU\_H~~~~=~0.1065E+00~:~muon~mass~in~GeV\\
$~~$MTAU\_H~~~=~0.1777E+01~:~tau~mass~in~GeV\\
$~~$MDMT\_H~~~=~0.4000E-02~:~d-quark~mass~at~M\_t\^{}pole~in~GeV\\
$~~$MSMT\_H~~~=~0.9000E-01~:~s-quark~mass~at~M\_t\^{}pole~in~GeV\\
$~~$MBMT\_H~~~=~0.3155E+01~:~b-quark~mass~at~M\_t\^{}pole~in~GeV\\
$~~$MUMT\_H~~~=~0.2000E-02~:~u-quark~mass~at~M\_t\^{}pole~in~GeV\\
$~~$MCMT\_H~~~=~0.7350E+00~:~c-quark~mass~at~M\_t\^{}pole~in~GeV\\
$~~$MTPOLE\_H~=~0.1743E+03~:~t-quark~pole~mass~in~GeV\\
$~~$GAMW\_H~~~=~0.2118E+01~:~Gam\_W~in~GeV\\
$~~$GAMZ\_H~~~=~0.2495E+01~:~Gam\_Z~in~GeV\\
$~~$EEM\_H~~~~=~0.3133E+00~:~e~=~(4*pi*alpha\_em)\^{}1/2\\
$~~$ASMT\_H~~~=~0.1084E+00~:~alpha\_s(M\_t\^{}pole)\\
$~~$CW\_H~~~~~=~0.8768E+00~:~cosTheta\_W\\
$~~$TW\_H~~~~~=~0.5483E+00~:~tanTheta\_W\\
$~~$MW\_H~~~~~=~0.7996E+02~:~W~boson~mass~MW~=~MZ*CW\\
$~~$GW\_H~~~~~=~0.6517E+00~:~SU(2)~gauge~coupling~~gw=e/s\_W\\
$~~$GP\_H~~~~~=~0.3573E+00~:~U(1)\_Y~gauge~coupling~gp=e/c\_W\\
$~~$V\_H~~~~~~=~0.2454E+03~:~V~=~2~MW~/~gw\\
$~~$GF\_H~~~~~=~0.1174E-04~:~GF=sqrt(2)*gw\^{}2/8~MW\^{}2~in~GeV\^{}-2\\
$~~$MTMT\_H~~~=~0.1666E+03~:~t-quark~mass~at~M\_t\^{}pole~in~GeV\\
~---------------------------------------------------------\\
$~~$CKM~Matrix~:\\
$~~$|V\_ud|~~~=~|(0.9738E+00~0.0000E+00)|~=~0.9738E+00\\
$~~$|V\_us|~~~=~|(0.2272E+00~0.0000E+00)|~=~0.2272E+00\\
$~~$|V\_ub|~~~=~|(0.2174E-02~-.3349E-02)|~=~0.3993E-02\\
$~~$|V\_cd|~~~=~|(-.2271E+00~-.1377E-03)|~=~0.2271E+00\\
$~~$|V\_cs|~~~=~|(0.9730E+00~-.3213E-04)|~=~0.9730E+00\\
$~~$|V\_cb|~~~=~|(0.4222E-01~0.0000E+00)|~=~0.4222E-01\\
$~~$|V\_td|~~~=~|(0.7478E-02~-.3259E-02)|~=~0.8157E-02\\
$~~$|V\_ts|~~~=~|(-.4161E-01~-.7602E-03)|~=~0.4162E-01\\
$~~$|V\_tb|~~~=~|(0.9991E+00~0.0000E+00)|~=~0.9991E+00\\
~---------------------------------------------------------\\
$~~$Real~SUSY~Parameters~~in~/HC\_RSUSYPARA/\\
~---------------------------------------------------------\\
$~~$TB\_H~~~~~=~0.5000E+01~:~tan(beta)\\
$~~$CB\_H~~~~~=~0.1961E+00~:~cos(beta)\\
$~~$SB\_H~~~~~=~0.9806E+00~:~sin(beta)\\
$~~$MQ3\_H~~~~=~0.5000E+03~:~M\_tilde{Q\_3}~in~GeV\\
$~~$MU3\_H~~~~=~0.5000E+03~:~M\_tilde{U\_3}~in~GeV\\
$~~$MD3\_H~~~~=~0.5000E+03~:~M\_tilde{D\_3}~in~GeV\\
$~~$ML3\_H~~~~=~0.5000E+03~:~M\_tilde{L\_3}~in~GeV\\
$~~$ME3\_H~~~~=~0.5000E+03~:~M\_tilde{E\_3}~in~GeV\\
~---------------------------------------------------------\\
$~~$Complex~SUSY~Parameters~~in~/HC\_CSUSYPARA/\\
~---------------------------------------------------------\\
$~~$|MU\_H|~~~~~=~0.2000E+04:Mag.~of~MU~parameter~in~GeV\\
$~~$|M1\_H|~~~~~=~0.5000E+02:Mag.~of~M1~parameter~in~GeV\\
$~~$|M2\_H|~~~~~=~0.1000E+03:Mag.~of~M2~parameter~in~GeV\\
$~~$|M3\_H|~~~~~=~0.1000E+04:Mag.~of~M3~parameter~in~GeV\\
$~~$|AT\_H|~~~~~=~0.1000E+04:Mag.~of~AT~parameter~in~GeV\\
$~~$|AB\_H|~~~~~=~0.1000E+04:Mag.~of~AB~parameter~in~GeV\\
$~~$|ATAU\_H|~~~=~0.1000E+04:Mag.~of~ATAU~parameter~in~GeV\\
$~~$ARG(MU\_H)~~=~0.0000E+00:Arg.~of~MU~parameter~in~Degree\\
$~~$ARG(M1\_H)~~=~0.0000E+00:Arg.~of~M1~parameter~in~Degree\\
$~~$ARG(M2\_H)~~=~0.0000E+00:Arg.~of~M2~parameter~in~Degree\\
$~~$ARG(M3\_H)~~=~0.9000E+02:Arg.~of~M3~parameter~in~Degree\\
$~~$ARG(AT\_H)~~=~0.9000E+02:Arg.~of~AT~parameter~in~Degree\\
$~~$ARG(AB\_H)~~=~0.9000E+02:Arg.~of~AB~parameter~in~Degree\\
$~~$ARG(ATAU\_H)=~0.9000E+02:Arg.~of~ATAU~parameter~in~Degree\\
~---------------------------------------------------------\\
$~~$Diagonal~Sfermion~Mass~Matrices~[GeV]~(Not~squared)~:\\
$~~$M\_Q~=~0.5000E+03~x~Diag(0.1000E+01~0.1000E+01~0.1000E+01)\\
$~~$M\_U~=~0.5000E+03~x~Diag(0.1000E+01~0.1000E+01~0.1000E+01)\\
$~~$M\_D~=~0.5000E+03~x~Diag(0.1000E+01~0.1000E+01~0.1000E+01)\\
$~~$M\_L~=~0.5000E+03~x~Diag(0.1000E+01~0.1000E+01~0.1000E+01)\\
$~~$M\_E~=~0.5000E+03~x~Diag(0.1000E+01~0.1000E+01~0.1000E+01)\\
~---------------------------------------------------------\\
$~~$Charged~Higgs~boson~pole~mass~:~0.3000E+03~GeV\\
~---------------------------------------------------------\\
}
\item ${\tt IFLAG\_H(2)}=1$: The masses and mixing matrix of the neutral Higgs bosons change
due to the improvements in their calculation and the new input for the $b$-quark mass.\\
{\tt
~---------------------------------------------------------\\
$~~$Masses~and~Mixing~Matrix~of~Higgs~bosons~:\\
$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$HMASS\_H(I)~and~OMIX\_H(A,I)\\
~---------------------------------------------------------\\
$~~$H1~~Pole~Mass~~~~~~~~~~~=~0.1193E+03~GeV\\
$~~$H2~~Pole~Mass~~~~~~~~~~~=~0.2718E+03~GeV\\
$~~$H3~~Pole~Mass~~~~~~~~~~~=~0.2983E+03~GeV\\
$~~$Charged~Higgs~Pole~Mass~=~0.3000E+03~GeV~[SSPARA\_H(2)]\\
$~~~~~~~~~~~~~~~~~~~~~~~~~~$[H1]~~~~~~~~[H2]~~~~~~~~[H3]\\
$~~~~~~~~~~~~$[phi\_1]~/~0.2457E+00~~0.3360E+00~~0.9093E+00~~$\backslash$\\
$~~$O(IA,IH)=~[phi\_2]~|~0.9693E+00~~-.7551E-01~~-.2340E+00~~|\\
$~~~~~~~~~~~~$[~~a~~]~$\backslash$~-.9973E-02~~0.9388E+00~~-.3442E+00~~/\\
---------------------------------------------------------\\
}
\item ${\tt IFLAG\_H(14)}=1$: The elements of the propagator matrices
$D^{H^0\,,H^\pm}(\hat{s})$
and the $\hat{s}$-dependent couplings of the neutral Higgs bosons
to two photons, $S^{\gamma}_i(\sqrt{\hat{s}})$ and $P^{\gamma}_i(\sqrt{\hat{s}})$,
and two gluons, $S^{g}_i(\sqrt{\hat{s}})$ and $P^{g}_i(\sqrt{\hat{s}})$, taking
$\sqrt{\hat{s}}=M_{H_2}$. The couplings are compared to their values at the Higgs-boson
pole masses:
$S^{\gamma}_i(\sqrt{\hat{s}}=M_{\tt IH})={\tt NHC\_H(88,IH)}$,
$P^{\gamma}_i(\sqrt{\hat{s}}=M_{\tt IH})={\tt NHC\_H(89,IH)}$,
$S^g_i(\sqrt{\hat{s}}=M_{\tt IH})={\tt NHC\_H(84,IH)}$,
$P^g_i(\sqrt{\hat{s}}=M_{\tt IH})={\tt NHC\_H(85,IH)}$.
\\
{\tt
~---------------------------------------------------------\\
$~~$DNH4~at~sqrt{s}~=~0.2718E+03~GeV\\
~---------------------------------------------------------\\
$~~$DNH4[H1,H1]:~|(0.1238E+01~0.2290E-01)|~=~0.1238E+01\\
$~~$DNH4[H2,H2]:~|(0.5542E-01~-.1611E+04)|~=~0.1611E+04\\
$~~$DNH4[H3,H3]:~|(-.4876E+01~-.2128E-01)|~=~0.4876E+01\\
$~~$DNH4[H1,H2]:~|(-.1607E+00~-.2973E-02)|~=~0.1608E+00\\
$~~$DNH4[H1,H3]:~|(-.5956E-05~0.2606E-03)|~=~0.2606E-03\\
$~~$DNH4[H2,H3]:~|(-.4893E-01~-.2377E-03)|~=~0.4893E-01\\
$~~$DNH4[G0,H1]:~|(0.3403E-06~-.1825E-04)|~=~0.1825E-04\\
$~~$DNH4[G0,H2]:~|(0.1872E+00~0.2446E-05)|~=~0.1872E+00\\
$~~$DNH4[G0,H3]:~|(-.2222E-06~0.5181E-04)|~=~0.5182E-04\\
$~~$DNH4[G0,G0]:~|(0.1000E+01~-.2365E-05)|~=~0.1000E+01\\
~---------------------------------------------------------\\
$~~$DCH2~at~sqrt{s}~=~0.2718E+03~GeV\\
~---------------------------------------------------------\\
$~~$DCH2[H+,H+]:~|(-.4576E+01~-.2790E-01)|~=~0.4576E+01\\
$~~$DCH2[H+,G+]:~|(-.1256E-03~0.2294E-01)|~=~0.2294E-01\\
$~~$DCH2[G+,H+]:~|(-.1256E-03~0.2294E-01)|~=~0.2294E-01\\
$~~$DCH2[G+,G+]:~|(0.1000E+01~-.6202E-03)|~=~0.1000E+01\\
~---------------------------------------------------------\\
$~~$Comparisons~of~the~H-photon-photon~couplings~at~MH\^{}pole\\
$~~$and~those~at~sqrt\{s\}~=~0.2718E+03~GeV\\
~---------------------------------------------------------\\
$~~~~~~~~~~~~~~~~~~$S~couplings~~~~~~~~~~~~~P~couplings\\
$~~$H1PP(M):~(-.6615E+01~0.6386E-01)~(0.1303E-01~0.7314E-03)\\
$~~$H1PP(S):~(-.3180E+01~-.6078E+01)~(0.1779E-01~0.2017E-02)\\
$~~$H2PP(M):~(-.9852E+00~0.3333E-01)~(-.6867E+00~-.2221E+00)\\
$~~$H2PP(S):~(-.9852E+00~0.3333E-01)~(-.6867E+00~-.2221E+00)\\
$~~$H3PP(M):~(-.4272E+00~0.2509E+00)~(0.5178E+00~0.7028E-01)\\
$~~$H3PP(S):~(-.3695E+00~0.2852E+00)~(0.4567E+00~0.7475E-01)\\
~---------------------------------------------------------\\
$~~$Comparisons~of~the~H-glue-glue~couplings~at~MH\^{}pole\\
$~~$and~those~at~sqrt\{s\}~=~0.2718E+03~GeV\\
~---------------------------------------------------------\\
$~~~~~~~~~~~~~~~~~~$S~couplings~~~~~~~~~~~~~P~couplings\\
$~~$H1GG(M):~(0.5792E+00~0.4164E-01)~(0.5316E-02~-.6809E-03)\\
$~~$H1GG(S):~(0.7358E+00~0.8932E-02)~(0.6510E-02~-.1457E-03)\\
$~~$H2GG(M):~(-.3557E+00~0.2591E-02)~(-.1970E+00~-.3456E-01)\\
$~~$H2GG(S):~(-.3557E+00~0.2591E-02)~(-.1970E+00~-.3456E-01)\\
$~~$H3GG(M):~(-.2240E+00~0.2860E-01)~(0.1855E+00~0.2231E-02)\\
$~~$H3GG(S):~(-.2150E+00~0.3413E-01)~(0.1585E+00~0.2662E-02)\\
~---------------------------------------------------------\\
}
\item ${\tt IFLAG\_H(15)}=1$: The Higgs-mediated two-loop Thallium, electron, and muon EDMs. For
the Thallium case, the two main contributions from the electron EDM and the CP-odd
electron-nucleon interaction are shown separately.\\
{\tt
~---------------------------------------------------------\\
$~~~~~~~~~~~~~~~$Higgs-mediated~two-loop~EDMs\\
$~~~~~~~$Phi\_3~=~0.9000E+02\^{}o~and~Phi\_At~=~0.9000E+02\^{}o\\
~---------------------------------------------------------\\
$~~$Thallium[10\^{}-24~ecm]:~-.2612E+01\\
$~~~~~~~~~~~~~~~~~~~~~~~$[-.2568E+01~from~electron~EDM]\\
$~~~~~~~~~~~~~~~~~~~~~~~$[-.4467E-01~from~C\_S\,~~~~~~EDM]\\
$~~$Electron[10\^{}-26~ecm]:~0.4389E+00\\
$~~$Muon[10\^{}-24~ecm]~~~~:~0.8997E+00\\
~---------------------------------------------------------\\
}
\item ${\tt IFLAG\_H(16)}=1$: The $B$-meson observables.\\
{\tt
~---------------------------------------------------------\\
$~~~~~~~~~~~~~~~~~~~~~~$B~Observables\\
~---------------------------------------------------------\\
$~~$B(B\_s~->~mu~~mu~)~~~x~10\^{}7~=~0.3710E-01\\
$~~$B(B~~~->~X\_s~gamma)~x~10\^{}4~=~0.4396E+01\\
$~~$B(B\_u~->~tau~nu)/B(SM)~~~~~=~0.9854E+00\\
$~~$B(B\_d~->~tau~tau)~~~x~10\^{}7~=~0.2294E+00\\
$~~$ACP(B~->~X\_s~gamma)~x~10\^{}2~=~-.7954E-01~[\%]\\
$~~$Delta~M~[B\_d]~(SUSY)~~~~~~~=~0.6659E-04~[1/ps]\\
$~~$Delta~M~[B\_s]~(SUSY)~~~~~~~=~0.1982E-01~[1/ps]\\
~---------------------------------------------------------\\
}
\item ${\tt IFLAG\_H(17)}=1$: The details of the $B \to X_s \gamma$ calculation. As a default,
we use $m_c(\mu_c=m_c^{\rm pole})$ to
capture a part of NNLO corrections \cite{b2sg-nnlo}. The case when only the
charged-Higgs contribution is added to the SM prediction is also shown.
\\
{\tt
~---------------------------------------------------------\\
$~~~~~~~~~~~~~~~~~~~~~~$B~->~X\_s~gamma\\
$~~$delta~and~E\_gamma\^{}cut~[GeV]:~0.3333E+00~~0.1601E+01\\
~---------------------------------------------------------\\
$~~$b-q~masses~[GeV]~~(pole,~~~~~~~@mb\^{}pole,~~~@mt\^{}pole):\\
$~~~~~~~~~~~~~~~~~~~~~$0.4802E+01~~0.4415E+01~~0.3155E+01\\
$~~$c-q~masses~[GeV]~~(pole,~~~~~~~@mc\^{}pole,~~~@mb\^{}pole):\\
$~~~~~~~~~~~~~~~~~~~~~$0.1415E+01~~0.1250E+01~~0.1029E+01\\
$~~$mu\_b~and~mu\_c~~[GeV]~~~~~~~:~0.4802E+01~~0.1415E+01\\
~---------------------------------------------------------\\
$~~$BR~~x~10\^{}4:~0.4396E+01~(SM+Charged~Higgs+Chargino)\\
$~~~~~~~~~~~~~$[0.4471E+01~(SM+Charged~Higgs)]\\
$~~~~~~~~~~~~~$[0.3351E+01~(SM)]\\
$~~$ACP~x~10\^{}2:~-.7954E-01~\%\\
~---------------------------------------------------------\\
}
\end{itemize}
\end{appendix}
\newpage
\section{Introduction}
Cluster mergers are the most energetic events in the Universe. They are the natural way of forming rich clusters of galaxies within cold dark matter scenarios, which imply a bottom-up hierarchy of structure formation \citep{Press74,Sarazin02}. The merging process generates important perturbations in the intra-cluster medium such as shocks, bulk flows and turbulence in the hot gas, which considerably affect the properties of the non-thermal components of galaxy clusters. As a matter of fact it is nowadays well established that the spectacular cluster-scale radio emission in the form of halos and relics originates by merging processes \citep[i.e.,][and references therein]{Bruggen12,Brunetti14,Botteon16}.
While the morphology of radio galaxies is strongly affected by the environment, as witnessed by the presence of wide--angle tails (WAT) and narrow--angle tails (NAT) in the majority of galaxy clusters and groups \citep[see][for a review on the topic]{Feretti02},
the role of cluster formation processes on the statistical radio properties of the galaxy population is still unclear.
The most powerful tool to investigate the influence of the environment on the radio properties of galaxies is the Radio Luminosity Function (RLF), which gives the probability for a galaxy to develop a radio source above a given radio power threshold. The comparison of RLFs for galaxies in different environments and at different redshifts provides clues on the role of the environment and of the cosmological evolution.
To date, results of these studies are however inconclusive.
The few clusters studied in detail so far in the local Universe provide conflicting results both for the radio nuclear activity and for the starburst population of galaxies \citep{Owen99,Dwarakanath99,Venturi00,Venturi01,Miller03,Giacintucci04}, suggesting that several parameters might play a relevant role.
The cosmological evolution of the RLF is a matter of debate, too.
Studies of clusters at intermediate redshift ($0.3 < z < 0.8$) provide different results for different samples
\citep{Stocke99,Branchesi06,Gralla11} when compared to the local radio luminosity function of cluster galaxies \citep{Ledlow96}.
The only solid results are the dependence of the RLF on the optical magnitude of the host galaxy, at least up to $z = 0.3$ \citep{Mauch07}, and the remarkably different behaviour of the Brightest Cluster Galaxies (BCGs) in relaxed and merging clusters in the GMRT Radio Halo cluster sample \citep{Kale15} and in the HIFLUGCS cluster sample at lower redshift \citep{Mittal09}. These studies show that the fraction of brightest cluster galaxies with radio emission is much higher in relaxed clusters, at least up to $z = 0.4$, and increases with increasing cool-core strength.
During the MeerKAT-16 commissioning stage, we observed the two galaxy clusters Abell 1300 (A\,1300) and MACS\,J1931.8--2634. While the main goal of the observations was to test the telescope performance and new calibration and imaging procedures, the galaxy clusters were selected with the scientific aim to address the role of cluster mergers in shaping the radio properties of cluster radio galaxies.
The two clusters are at similar redshifts ($z \sim 0.3$) and have similar mass, but they differ in their dynamical properties: A\,1300 is considered a post--merger, while MACS\,J1931.8--2634 is classified as a relaxed system.
The two clusters were observed at 1.28~GHz with the MeerKAT array, and the radio observations were complemented with optical {\it Subaru}-SuprimeCam images for a comparative study of the radio galaxy population in two different environments, which we carried out comparing their radio source counts and their RLFs.
In this paper we present the results of our study. The layout is as follows: Section~\ref{sec:obz} describes the observations and data reduction; the optical data are described in Section~\ref{sec:optical}; radio catalogues, source counts, optical IDs and RLFs are shown in Section~\ref{sec:results} and conclusions are offered in Section~\ref{sec:discussion}.
Throughout the paper we assume $H_0 = 70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$, which give scales of 4.5~kpc~arcsec$^{-1}$ and 4.9~kpc~arcsec$^{-1}$ for A\,1300 and MACS\,J1931.8--2634 respectively.
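The quoted angular scales follow directly from these cosmological parameters. The sketch below, a pure-Python numerical integration (no astropy assumed; the function name and step count are our own choices), recovers values close to the quoted 4.5 and 4.9~kpc~arcsec$^{-1}$:

```python
import math

# Flat LambdaCDM angular-scale calculator (H0 = 70 km/s/Mpc, Om = 0.3, OL = 0.7),
# reproducing the kpc/arcsec scales quoted for the two clusters.

C_KMS = 299792.458  # speed of light [km/s]

def kpc_per_arcsec(z, h0=70.0, om=0.3, ol=0.7, n=10000):
    """Proper transverse scale [kpc/arcsec] at redshift z (flat cosmology)."""
    ez = lambda zz: math.sqrt(om * (1.0 + zz) ** 3 + ol)
    # comoving distance D_C = (c/H0) * Int_0^z dz'/E(z')  [Mpc], trapezoid rule
    dz = z / n
    integral = 0.5 * (1.0 / ez(0.0) + 1.0 / ez(z))
    for i in range(1, n):
        integral += 1.0 / ez(i * dz)
    d_c = (C_KMS / h0) * integral * dz
    d_a = d_c / (1.0 + z)                             # angular-diameter distance [Mpc]
    return d_a * 1000.0 * math.pi / (180.0 * 3600.0)  # kpc per arcsec

print(round(kpc_per_arcsec(0.31), 2))   # A1300: ~4.5-4.6 kpc/arcsec
print(round(kpc_per_arcsec(0.35), 2))   # MACS J1931.8-2634: ~4.9 kpc/arcsec
```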
\subsection{A~1300}
The galaxy cluster A\,1300 is reported as a $z = 0.31$, richness class 1 object by \citet{Abell89}. It was first surveyed in the X-ray band during the ROSAT All Sky Survey \citep{Pierre94}, where it is identified with RXJ1131.9-1955. It has an X-ray luminosity of $L_x \sim 1.7 \times 10^{45}$~erg~s$^{-1}$ in the $0.1-2.4$~keV band and a mass M$\sim 1.3 \times 10^{15}$~M$_{\odot}$ \citep{Lemonon97}. Its virial radius is 1.53~Mpc, corresponding to 5.7$^{\prime}$ \citep{Ziparo12}.
Optical \citep{Pierre97} and X-ray \citep{Lemonon97} observations suggest that A\,1300 is in a post-merging phase. It is estimated that a major merger occurred about 3~Gyr ago, with further accretion taking place along the cosmic web filaments, leading to an increase of the cluster mass of up to 60\% in the next $\sim$Gyr \citep{Ziparo12}. \cite{Ziparo12} quote 987~km~s$^{-1}$ as the rest-frame velocity dispersion, which they also use to estimate a dynamical mass of $M_{200} \approx 1.1 \times 10^{15} M_{\odot}$.
The cluster hosts a giant radio halo and a relic located in the south--western periphery of the cluster and a number of extended radio galaxies in the central regions \citep{Reid99,Venturi13}.
A\,1300 is among the targets of the Merging Cluster Collaboration (MCC)\footnote{http://www.mergingclustercollaboration.org/}, and of the Galaxy Cluster at VirCam Survey (GCAV)\footnote{https://www.eso.org/sci/observing/PublicSurveys/docs/GCAV}, an Infra-red, 560 hrs, ESO Public Survey (PI Nonino M.) in the Y, J and Ks bands, whose aim is to explore galaxy evolution over a large variety of environments.
\subsection{MACS~J1931.8--2634}
This massive ($M_{200} = 1.74\times 10^{15}M_{\odot}$, \cite{Umetsu14}), $z = 0.35$, cool-core galaxy cluster \citep{Ebeling10} is part of the Massive Cluster Survey sample \citep[MACS, see][and references therein]{Ebeling07} and has an X-ray luminosity of $L_x \sim 2.2 \times 10^{45}$~erg~s$^{-1}$ in the $0.1-2.4$~keV band. It hosts one of the most X-ray luminous cool cores discovered so far, with an equivalent mass cooling rate within the central 50~$h^{-1}_{70}$~kpc of $\sim 700$~M$_{\odot}$~yr$^{-1}$ \citep{Ehlert11}. Its virial radius is 1.8~Mpc, corresponding to 6.1~arcmin \citep{Santos16}.
\\
\cite{Santos16} report a weak-lensing mass of $M_{200} \approx 0.99 \times 10^{15}$~M$_{\odot}$ \citep[see also][]{Merten15}.
For comparison with A\,1300, from the mass we estimate a velocity dispersion following \citet{Carlberg97} \citep[using the same cosmology as][]{Ziparo12}, resulting in $\sigma \approx 900$ km~s$^{-1}$, i.e. similar to A\,1300. The two clusters are thus quite similar in mass, meaning, via the mass-richness scaling relation \citep{Andreon10,Melchior17}, that they have similar richness.
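As a back-of-the-envelope consistency check on the quoted $r_{200} \sim 1.8$~Mpc and $\sim 6$~arcmin, one can invert the definition of $r_{200}$ (the radius enclosing a mean density of $200\,\rho_{\rm crit}(z)$) for the weak-lensing mass. This is only a sketch with our own constants and function name, not the lensing analysis of the cited papers:

```python
import math

# r_200 from M_200 at redshift z for the paper's flat cosmology:
# mean density inside r_200 is 200 * rho_crit(z). Constants are SI.

G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg
MPC = 3.0857e22        # m

def r200_mpc(m200_msun, z, h0=70.0, om=0.3, ol=0.7):
    h0_si = h0 * 1000.0 / MPC                        # H0 in s^-1
    rho_crit0 = 3.0 * h0_si ** 2 / (8.0 * math.pi * G)
    ez2 = om * (1.0 + z) ** 3 + ol                   # E(z)^2
    rho = 200.0 * rho_crit0 * ez2                    # mean density inside r_200
    r = (3.0 * m200_msun * MSUN / (4.0 * math.pi * rho)) ** (1.0 / 3.0)
    return r / MPC

r = r200_mpc(0.99e15, 0.35)                # weak-lensing mass from Santos et al.
theta_arcmin = r * 1000.0 / 4.9 / 60.0     # using the 4.9 kpc/arcsec scale
print(round(r, 2), round(theta_arcmin, 1)) # ~1.8 Mpc, ~6.2 arcmin
```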
The BCG is surrounded by X-ray cavities created by an outburst from the central source \citep{Allen04} and is undergoing intense star formation \citep{Fogarty15,Donahue15,Santos16}. Moreover,
it is embedded in extended radio emission whose nature, i.e. radio lobes or mini--halo, is still a matter of debate \citep{Giacintucci04,Giacintucci17}.
The cluster hosts a Narrow-Angle Tailed (NAT) radio galaxy, located at $\sim 200$~kpc linear projected distance south of the BCG. MACS\,J1931.8-2634 is part of the Cluster Lensing and Supernovae Survey with Hubble sample \citep[CLASH,][]{Postman12}.
\section{MeerKAT observations and data reduction}\label{sec:obz}
The MeerKAT radio telescope\footnote{https://www.sarao.ac.za} is a precursor for the Square Kilometre Array mid-frequency telescope \citep{Jonas16,Camilo18,Mauch20}. The full array consists of 64 antennas with 13.5--m diameter dishes which operate with an effective bandwidth from 580 to 3500~MHz split into (at present) UHF, L--band and S--band receivers.
The array configuration consists of a 1--km inner core containing $\approx70\%$ of the dishes and a distribution of antennas reaching a maximum baseline length of 8~km.
The instrumental field of view spans roughly from $0.5^\circ$ to $1^\circ$ at the -20~dB point of the voltage beam, at the low and high ends of the L-band bandwidth \citep[][]{Jonas16,Mauch20}. This wide field of view enables us to image far beyond the virial radii of the two clusters (i.e., $\sim 1.5$~Mpc) for our statistical analysis.
The observations presented in this paper were performed during the Array Release 1.5 construction phase, i.e. when only 16 antennas were available. Details of the observations are reported in Table~\ref{tab:Details_observations}.
The data were taken at L--band, with a central frequency of 1.283~GHz ($\lambda = 0.23$~m) and a total bandwidth of 856~MHz. The signal correlation was carried out using 4096~channels in total, each $\sim$209~kHz wide, with a 2~s integration time.
Even though the observations of the two clusters were close in time, the array configuration differed, and this is reflected in the $uv$--coverage, shown in Fig.~\ref{fig:uv_coverage} for both targets. The combination of the different $uv$-coverage and more severe RFI excision led to a sparser sampling of the $uv$ plane and a lower angular resolution for A\,1300
(see also Table~\ref{tab:Details_observations}).
\begin{table*}
\caption[Details on the radio observations]{Details of the radio observations. From left to right: (1) observing date, (2) target name, (3) and (4) J2000 coordinates, (5) central observing frequency ($\nu_c$), (6) bandwidth, (7) integration time, (8) full width half maximum and (9) position angle of the restoring beam, (10) image noise.}
\label{tab:Details_observations}
\centering
\begin{footnotesize}
\begin{center}
\begin{tabular}{c c c c c c c c c c}
\hline
\\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\
Obs. date& name& RA$_{J2000}$& DEC$_{J2000}$& $\nu_c$& BW& int. time& FWHM& PA& rms \\
& & h m s & $^\circ$ ' "& MHz& MHz& hr& arcsec~$\times$~arcsec & $\deg$& mJy~beam$^{-1}$ \\
\hline
\\
April 2018& A\,1300& 11:31:54.4& -19:55:42& 1283& 856& 2.4& 12 x 5& 113& 0.04 \\
\\
May 2018& MACS\,J1931.8--2634& 19:31:49.6& -26:34:34& 1283& 856& 2.0& 5 x 3& 136& 0.04 \\
\hline
\\
\end{tabular}
\end{center}
\end{footnotesize}
\end{table*}
\begin{figure}
\centering
\includegraphics[scale=0.4]{A1300_uv_coverage.jpg}
\includegraphics[scale=0.4]{MACSJ_uv_coverage.jpg}
\caption[The uv coverage]{A\,1300 (top panel) and MACS\,J1931.8--2634 (bottom panel) $uv$--coverages over the entire bandwidth. Only one out of every ten points in time and one out of every 40 points in frequency are plotted. The two colours show the symmetric $uv$--points obtained from the conjugate visibilities. Note that the different $uv$--coverage for the two clusters results in a different synthesized beam (Table~\ref{tab:Details_observations}).}
\label{fig:uv_coverage}
\end{figure}
The data reduction was carried out using the NRAO \texttt{Common Astronomy Software Application (CASA)} package \citep{mcmullin2007casa} for the a priori calibration and a combination of \texttt{CubiCal} \citep{Kenyon18} and \texttt{WSClean} \citep{Offringa14} for the self-calibration. Moreover, we used \texttt{DDFacet} \citep{Tasse18} to obtain the primary beam corrected images. The packages were pipelined together by means of the containerized \texttt{Stimela} scripting platform \citep{makhathini_phdthesis}. To ensure self consistency between the two fields, we adopted the same strategy for the bandpass calibration, imaging and self-calibration.
The complex bandpass was derived from the observation of PKS~B1934--638 \citep{reynolds1994revised}.
RFI mitigation was accomplished using a combination of a static mask of known L-band interferences including band rolloffs, and residual autoflagging using \texttt{AOFLAGGER} \citep{Offringa12,Offringa13}. Multiple rounds of flagging and calibration were performed to remove residual narrow-band RFI.
The complex bandpass was applied to the target, and the data were averaged by a factor of two in frequency (obtaining a 416~kHz channel width), leaving 2048 channels for the direction-independent self-calibration.
The final effective bandwidth after RFI removal (which excludes the band edges) is about 750~MHz.
Self-calibration was then performed with a combination of non-parametric time-variable gains using CubiCal and sky models synthesized using the wide-field, wide-band WSClean imager.
The initial model images were deconvolved down to a 0.2~mJy threshold ($5 \sigma$).
The self-calibration comprised four iterations with phase-only solutions; the solution interval was progressively reduced after inspecting the solutions at each step. Each imaging step in the self-calibration cycle was carried out with a Briggs robust weighting of $-2$ \citep{Briggs95} to achieve the maximum resolution needed to fulfill our scientific goals.
Deconvolution was performed using a combination of automatic masking, cycle-variant local RMS thresholding and manual masking to limit artifacts around bright sources. The final images were obtained with a last deconvolution run without the manual mask. The images shown in this paper are not primary-beam corrected.
Primary beam corrected images were generated using the DDFacet package \citep{Tasse18} and used for the analysis reported in Section~\ref{sec:results}.
We imaged out to an extent of $\sim 1^\circ$ in diameter using $19 \times 19$ facets for which the rms noise is within 80\% of the central noise level at the top of the bandwidth.
The antenna response used in this patch-wise correction is smoothed from holographic measurements carried out at L--band \citep{asad2019primary}.
\section{Optical data}\label{sec:optical}
We used \textit{Subaru}-SuprimeCam images to identify optical counterparts of the MeerKAT radio sources.
The Suprime stacked images for MACS\,J1931.8--2634 in the B, V, Rc, Ic, z filters were retrieved from the STScI CLASH data products\footnote{https://archive.stsci.edu/prepds/clash/}. On the other hand, the images for A\,1300 were created from our own data reduction of the raw \textit{Subaru}-SuprimeCam images in the $g'$, $r'$ bands retrieved from the SMOKA-Subaru Archive\footnote{https://smoka.nao.ac.jp/index.jsp, http://mergingclustercollaboration.org/}
obtained for the Merging Cluster Collaboration, and from the ESO Archive (WFI images B, V, Rc, Programme 084.A-9001).
We used the same methods adopted to create the B, V, Rc, Ic, z Suprime images for MACS\,J1931.8-2634 \citep[e.g.,][]{Nonino09}. Both data sets were photometrically calibrated using matched point-like sources from Pan-STARRS \citep{Chambers16}, accounting for colour terms. Details of the optical observations are given in Tables~\ref{tab:MACSJ_optical_obs} and \ref{tab:A1300_optical} for MACS\,J1931.8--2634 and A\,1300 respectively. All magnitudes are in the AB system.
\begin{table}
\centering
\caption[MACSJ1931.8-2634 optical]{MACSJ1931.8--2634 optical observations.
}
\label{tab:MACSJ_optical_obs}
\begin{footnotesize}
\begin{center}
\begin{tabular}{ccccc}
\hline
\\
(1) & (2) & (3) & (4) & (5) \\
Observation date & Filter & exposure & FWHM & depth(*) \\
& & (s) & (arcsec) \\
\hline
\\
2006 \& 2012& B& 2640& 1.20& 25.98 \\
\hline
\\
2006& V& 1275& 0.88& 25.33 \\
\hline
\\
2006 \& 2012& Rc& 4560& 0.81& 25.18 \\
\hline
\\
2006& Ic& 1800& 0.92& 24.7 \\
\hline
\\
2012& z& 1950& 0.76& 24.42 \\
\hline
\end{tabular}
\end{center}
\end{footnotesize}
{\scriptsize (*) depth: AB magnitude 5-sigma in 2~arcsec-diameter aperture.}
\end{table}
For each cluster, the galaxy catalogues were then generated, and the magnitudes were corrected for galactic extinction according to \cite{Schlafly11}.
The filter sets for the two clusters were not the same; in particular, the throughputs of the two $r$ bands are slightly different: using SED fitting of galaxies which are optical counterparts of radio sources
(see Section~\ref{sec:optical_identifications}), the difference $r'$--Rc ranges from $\approx 0.1$ to $\approx 0.16$~mag for blue and red galaxies respectively at $z\approx 0.30-0.35$, i.e. at the cluster redshifts.
\begin{table}
\centering
\caption[A1300 optical]{A1300 optical observations}\label{tab:A1300_optical}
\begin{footnotesize}
\begin{center}
\begin{tabular}{ccccc}
\hline
\\
(1) & (2) & (3) & (4) & (5) \\
Observation date& Filter& exposure& FWHM & depth* \\
& & (s) & (arcsec)& \\
\hline
\\
2014& $g'$ & 720& 1.06& 25.69 \\
\hline
\\
2014& $r'$ & 2880& 0.94& 26.17 \\
\hline
\end{tabular}
\end{center}
\end{footnotesize}
{\scriptsize (*) depth: AB magnitude 5-sigma in 2~arcsec-diameter aperture.}\end{table}
\section{Results and Discussion}
\label{sec:results}
The $1^\circ \times 1^\circ$ MeerKAT images are shown in Fig.~\ref{fig:A1300_field} and \ref{fig:MACSJ1931_field}.
An average rms noise of 40~$\mu$Jy~beam$^{-1}$ is achieved on both targets.
The image quality is generally good, although it is affected by some residual stripes, likely due to low-level RFI that was not entirely removed despite the multiple rounds of flagging. A couple of bright, off--axis sources are affected by calibration errors that would require direction dependent calibration; as they do not impact the goals of the current analysis, we leave this to future work.
Despite the limited angular resolution of our images and the sparse $uv$--coverage, the giant radio halo and the radio relic in A\,1300 are visible in the inner region of the field shown in Fig.~\ref{fig:A1300_field}. A few strong sources are distributed over the whole field of view, which is otherwise populated by faint radio sources.
Fig.~\ref{fig:MACSJ1931_field} shows that the field of MACS\,J1931.8--2634 is dominated by faint radio sources, too.
Radio-optical overlays of the central $5.5$~arcmin~$\times$~5.5~arcmin region are displayed in Fig.~\ref{fig:overlays} for both clusters.
The A\,1300 radio galaxies (Fig.~\ref{fig:overlays}, left panel) are labeled according to \citet{Reid99}.
At the resolution and sensitivity of our images the BCG is associated with a compact radio source, while A2 is most likely a tailed radio galaxy. Considering the slightly different observing frequencies, the flux densities of the sources labeled in Fig.~\ref{fig:overlays} (left panel) are consistent with those in \citet{Reid99}
within the errors. The relic is very clearly imaged, and its size is consistent with the 325~MHz image in \citet{Venturi13}.
The central region of MACS\,J1931.8--2634 (Fig.~\ref{fig:overlays}, right panel) is dominated by the radio emission associated with the BCG and the NAT galaxy. At the angular resolution and sensitivity of our image the morphology of the BCG is fully consistent with the 1.4 GHz VLA--B image reported in \citet{Giacintucci14}, i.e. no further emission is detected surrounding the BCG. The total flux density of the radio emission in our image is $53 \pm 1 $~mJy, slightly lower than reported at 1.4~GHz by \citet{Giacintucci14}, i.e., $S_{1.4 \, {\rm GHz}} = 62 \pm 3$~mJy \citep[see also][]{Ehlert11}.
The overall morphology and extent of the NAT is very similar to that shown in \citet{Giacintucci14}.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{A1300.png}
\caption[A1300 field]{Gray scale image of the galaxy cluster A\,1300 and its field at 1.28~GHz. The green dashed circle has a $\sim$1~deg diameter and represents the HPBW of the MeerKAT primary beam. This is the area used to extract the source catalogue. The blue dashed circle marks the inner 12~arcmin radius (i.e., 0.125~deg$^2$ area), corresponding to about two virial radii. The angular resolution is 12~arcsec~$\times$~5~arcsec. Units are Jy~beam$^{-1}$.}
\label{fig:A1300_field}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{MACSJ1931.png}
\caption[MACSJ1931 field]{Same as Fig.~\ref{fig:A1300_field}, but for the MACS\,J1931.8--2634 galaxy cluster and its field. The angular resolution is 5~arcsec~$\times$~3~arcsec.}
\label{fig:MACSJ1931_field}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.35]{A1300_zoomin_200923.jpg}
\includegraphics[scale=0.35]{MACSJ_zoomin_200923.jpg}
\caption[The radio-optical overlays]{Left panel: A\,1300 radio contours (red) overlaid on the gray scale Infra-Red band image (K$_s$ band from GCAV). Radio contours start at 3$\sigma$ ($1 \sigma = 0.04$~mJy~beam$^{-1}$) and are spaced by factors of 2. The first negative contour (-3$\sigma$ level) is drawn in blue. Galaxies are labeled following \citet{Reid99}. Right panel: same as left panel but for MACS\,J1931.8-2634. Radio contours are overlaid on the optical Suprime image (Ic filter). The restoring beam for each image is shown as a green ellipse in the bottom left corner in both panels.}
\label{fig:overlays}
\end{figure*}
\subsection{Source extraction}
For both clusters we extracted the radio sources using the \texttt{PyBDSF} \citep{Mohan15} package from the images not corrected for the primary beam, where the noise is fairly flat over the whole field. \texttt{PyBDSF} first estimates a noise map by calculating the rms noise over a 160--pixel box sliding by a 50--pixel step. Following \cite{Williams16}, the box was taken about two times smaller around bright sources to account for the possible local noise increase due to calibration artefacts.
The sources were extracted by first identifying islands of contiguous emission above a given threshold, and then decomposing islands into Gaussian components. A threshold of five and four times the local rms noise was used to define sources and island boundaries respectively. The flux density of the sources was then corrected for primary beam attenuation.
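The island-based extraction logic can be illustrated with a toy flood-fill implementation. The sketch below mimics the 4$\sigma$ island / 5$\sigma$ peak thresholds described above on a small pixel grid; it is of course not \texttt{PyBDSF} itself (which additionally decomposes islands into Gaussian components):

```python
from collections import deque

# Toy two-step source finder: flag pixels above isl_sigma*rms as island
# candidates, keep only islands whose peak exceeds peak_sigma*rms.

def find_islands(image, rms, isl_sigma=4.0, peak_sigma=5.0):
    """Return a list of islands, each a list of (row, col) pixels."""
    ny, nx = len(image), len(image[0])
    seen = [[False] * nx for _ in range(ny)]
    islands = []
    for y in range(ny):
        for x in range(nx):
            if seen[y][x] or image[y][x] < isl_sigma * rms:
                continue
            # breadth-first flood fill over 4-connected pixels above threshold
            island, queue = [], deque([(y, x)])
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                island.append((cy, cx))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny2, nx2 = cy + dy, cx + dx
                    if 0 <= ny2 < ny and 0 <= nx2 < nx and not seen[ny2][nx2] \
                            and image[ny2][nx2] >= isl_sigma * rms:
                        seen[ny2][nx2] = True
                        queue.append((ny2, nx2))
            # a valid source island must contain at least one >= peak_sigma pixel
            if max(image[py][px] for py, px in island) >= peak_sigma * rms:
                islands.append(island)
    return islands
```

With the 0.04~mJy~beam$^{-1}$ noise of our maps, a two-pixel blob peaking at 0.30~mJy survives the 5$\sigma$ peak cut, while an isolated 0.17~mJy pixel (above 4$\sigma$ but below 5$\sigma$) is rejected.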
We detected 107 and 162 radio sources within the inner 30~arcmin in the field of A\,1300 and MACS\,J1931.8--2634 respectively above the threshold of 5$\sigma$ (0.2~mJy) in the images not corrected for the primary beam attenuation. Most of the sources detected in both fields are unresolved or barely resolved. We assume that a radio source is resolved when its deconvolved major and minor axes, as given by PyBDSF, are larger than the restoring beam of the image.
The radio source catalogues are reported in Tables~\ref{tab:A1300_Radio_Catalog} and \ref{tab:MACSJ_Radio_Catalog}.
We tested the accuracy of our flux density calibration by comparing our catalogue with the NVSS catalogue \citep{Condon98}.
We first produced MeerKAT images matching the NVSS 45~arcsec angular resolution, and used them to fit for the flux density for those sources that we considered isolated (i.e. not blended with other sources) and compact (to account for the different angular resolution and $uv$-coverage of MeerKAT--16 and NVSS) within 30~arcmin from the pointing centre. We finally scaled the
MeerKAT flux density to 1.4~GHz (NVSS observing frequency) assuming a spectral index\footnote{We used the convention $S_\nu \propto \nu^{-\alpha}$ where $S_\nu$ is the flux density at the frequency $\nu$.} $\alpha = 0.7$ for all the sources.
This procedure ensures a one-to-one match between the two catalogues. The result of our comparison is reported in Figure~\ref{fig:nvss_S14}. Although there is some scatter below 10~mJy, most likely due to the higher sensitivity of NVSS to extended emission, the agreement between the two measurements is generally good: the rms of the relative difference is $\sim 16$~per~cent for sources brighter than 8~mJy at 1.28~GHz.
We consider this number as an indication of the accuracy of our absolute flux density scale. The errors on the flux density measurements reported in Table~\ref{tab:A1300_Radio_Catalog} and \ref{tab:MACSJ_Radio_Catalog} are the PyBDSF fit errors that do not include the uncertainty on the flux density scale.
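The frequency scaling used in the NVSS comparison above is a one-line operation: for $\alpha = 0.7$, the 1.283 to 1.4~GHz correction factor is $(1.4/1.283)^{-0.7} \approx 0.94$. A minimal sketch (the function name is ours):

```python
# Scale a 1.283 GHz MeerKAT flux density to the 1.4 GHz NVSS frequency with the
# convention S_nu ~ nu^(-alpha) used in the text; alpha = 0.7 assumed for all sources.

def scale_flux(s_mjy, nu_from_ghz=1.283, nu_to_ghz=1.4, alpha=0.7):
    return s_mjy * (nu_to_ghz / nu_from_ghz) ** (-alpha)

# a 10 mJy source at 1.283 GHz corresponds to ~9.41 mJy at 1.4 GHz
print(round(scale_flux(10.0), 2))
```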
\begin{figure}
\centering
\includegraphics[scale=0.55]{comparison_NVSS_GB_scaled}
\caption[The NVSS comparison]{Comparison between the NVSS flux density $S_N$ and the MeerKAT flux density $S_M$ scaled at 1.4 GHz assuming a spectral index $\alpha=0.7$ for sources common to both cluster catalogues within a 30~arcmin radius from the image centre.}
\label{fig:nvss_S14}
\end{figure}
\subsection{Radio Source Counts}
As a first step to test possible differences in the radio source population, after primary beam correction we derived the differential radio source counts in a circular area of radius $r = 12$~arcmin.
We chose the same area (0.125~deg$^2$), which corresponds to about two virial radii in both clusters (blue dashed circle in Fig.~\ref{fig:A1300_field} and ~\ref{fig:MACSJ1931_field}). We expect that the cluster dynamics may not impact the radio emission of individual galaxies beyond this distance.
To account for the different angular resolution of the final images of the two cluster fields (see Table~\ref{tab:Details_observations}) and ensure a homogeneous analysis, we generated an image of the MACS\,J1931.8--2634 field with a 12~arcsec~$\times$~5~arcsec restoring beam, and used this latter image in the following steps.
To derive the contribution of the two clusters to the differential source counts we estimated the background source counts from the annular region between the blue and green circle (30$^{\prime}$) in Fig.~\ref{fig:A1300_field} and in
the $12^{\prime\prime}\times5^{\prime\prime}$ image of MACS\,J1931.8--2634, which corresponds to a 0.66~deg$^2$ area.
Fig.~\ref{fig:MACSJ_counts} and \ref{fig:A1300_counts} show the differential source counts $N$ for the 0.125~deg$^2$ area centered on the two clusters and the background (0.66~deg$^2$) respectively. The background counts were not subtracted from the 0.125~deg$^2$ central area. The source counts have been normalized to 1 square degree in both figures.
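The normalisation of the counts can be sketched as follows: fluxes are binned and each bin is divided by the survey area in deg$^2$. The bin edges and flux values below are illustrative only, not the actual catalogues:

```python
# Minimal differential source-count builder: bin flux densities and normalise
# to counts per square degree, as done for the 0.125 and 0.66 deg^2 areas.

def differential_counts(fluxes_mjy, area_deg2, edges_mjy):
    counts = [0] * (len(edges_mjy) - 1)
    for s in fluxes_mjy:
        for i in range(len(edges_mjy) - 1):
            if edges_mjy[i] <= s < edges_mjy[i + 1]:
                counts[i] += 1
                break
    return [c / area_deg2 for c in counts]   # sources per deg^2 per bin

edges = [0.2, 0.5, 1.0, 2.0, 5.0]            # mJy, illustrative bins
n = differential_counts([0.3, 0.4, 0.7, 1.5, 3.0, 0.25], 0.125, edges)
print(n)   # [24.0, 8.0, 8.0, 8.0]
```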
The source count distributions tend to decline below $\sim 0.5$~mJy, corresponding to $10\sigma$ (dashed line in Fig.~\ref{fig:MACSJ_counts} and \ref{fig:A1300_counts}). We interpret such a decline as an indication of the completeness limit of our samples; it is partly due to the primary beam attenuation, which is $\sim 20$~per~cent at 30~arcmin and negligible at 12~arcmin.
Since our analysis is carried out on images with the same angular resolution, at the same frequencies and, therefore, with the same primary beam, this implies that the completeness is the same for both cases. For the purpose of our relative comparison, the estimate and correction of completeness is therefore not necessary.
We performed a two-sample Kolmogorov-Smirnov (KS) test between A\,1300 and MACS\,J1931.8--2634 using the source catalogues extracted in their central areas (12~arcmin radius). The test yielded a $p$-value of 0.05, implying that we can reject the null hypothesis that the radio source catalogues are drawn from the same distribution at the 95~per~cent significance level.
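For reference, the two-sample KS statistic is the maximum distance between the two empirical cumulative distribution functions. Below is a minimal pure-Python version with the standard asymptotic $p$-value series (an approximation; the actual analysis may have used exact small-sample corrections, and the samples here are illustrative):

```python
import math

# Two-sample Kolmogorov-Smirnov sketch: D = sup |F_a - F_b| over the merged
# sorted samples, with the usual asymptotic p-value series.

def ks_2samp(a, b):
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    ia = ib = 0
    d = 0.0
    while ia < na and ib < nb:
        x = min(a[ia], b[ib])
        while ia < na and a[ia] == x:   # advance through ties together
            ia += 1
        while ib < nb and b[ib] == x:
            ib += 1
        d = max(d, abs(ia / na - ib / nb))
    en = math.sqrt(na * nb / (na + nb))
    lam = (en + 0.12 + 0.11 / en) * d
    if lam <= 0.0:
        return d, 1.0
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, max(0.0, min(1.0, p))

d, p = ks_2samp([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
print(d)   # 1.0: completely separated samples give the maximum statistic
```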
The two distributions for the background source counts are in very good agreement with each other, except for the first two bins at low flux densities.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{source_counts_rebin_200929_paper.pdf}
\caption{Differential radio source counts in A\,1300 (black) and MACS\,J1931.8-2634 (red) at 1.28~GHz in the central 0.125~deg$^2$ area, corresponding to two virial radii (blue dashed circle in Fig.~\ref{fig:A1300_field} and \ref{fig:MACSJ1931_field}).}
\label{fig:MACSJ_counts}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{background_counts_200929_paper.pdf}
\caption{Same as Figure~\ref{fig:MACSJ_counts}, but for the background (the 0.66~deg$^2$ area between the blue and the green circles in Figure~\ref{fig:A1300_field} and \ref{fig:MACSJ1931_field}).}
\label{fig:A1300_counts}
\end{figure}
\subsection{Optical identifications}
\label{sec:optical_identifications}
To search for optical counterparts we used the Rc band for MACS\,J1931.8--2634 and the $r'$ band for A\,1300, as the image quality is better in these bands and both filters lie redward of the 4000~\AA~break at the cluster redshifts.
We carried out the optical identifications of radio sources in our catalogues within about two virial radii by cross-correlating their positions with the catalogues derived from the $\it{Subaru}$-SuprimeCam images (see Section 3).
As the primary beam correction is negligible out to this distance from the pointing centre, our optical identifications are complete at the 0.2~mJy flux density level.
The optical position error is $\sigma_o \sim 0.02$~arcsec. We estimated the radio position error $\sigma_r$ as \citep{Prandoni00}:
\begin{equation}
\sigma_r = \frac{\theta_s}{2 \, {\rm SNR}},
\end{equation}
where $\theta_s$ is the synthesized beam size (Table~\ref{tab:Details_observations}) and SNR is the signal-to-noise ratio. For the faintest sources
(${\rm SNR} = 5$), $\sigma_r = 1.2$~arcsec and 0.5~arcsec for A\,1300 and MACS\,J1931.8--2634 respectively.
The matching criterion was based on the $R$ parameter defined as \citep{Giacintucci04}:
\begin{equation}
R^2= \frac{\Delta^2_{r-o}}{\sigma^2_r + \sigma^2_o} \, ,
\end{equation}
where $\Delta_{r-o}$ is the radio--optical position offset. We considered as reliable identifications all the nearest-neighbour matches with $R \leq 3$.
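The matching numbers quoted above can be checked directly; the sketch below encodes the two relations above with the $\sigma_o = 0.02$~arcsec optical error and the $R \leq 3$ acceptance cut (function names are ours):

```python
import math

# Radio position error sigma_r = theta_s / (2 * SNR) and matching statistic
# R = offset / sqrt(sigma_r^2 + sigma_o^2); matches are kept when R <= 3.

def sigma_radio(theta_s_arcsec, snr):
    return theta_s_arcsec / (2.0 * snr)

def r_statistic(offset_arcsec, sigma_r, sigma_o=0.02):
    return offset_arcsec / math.sqrt(sigma_r ** 2 + sigma_o ** 2)

# faintest (SNR = 5) sources: 1.2" for the 12" beam (A1300), 0.5" for the 5" beam
print(sigma_radio(12.0, 5.0), sigma_radio(5.0, 5.0))   # 1.2 0.5
# a 1" offset to a faint A1300 source is a comfortable match (R < 1)
print(r_statistic(1.0, sigma_radio(12.0, 5.0)) <= 3.0)   # True
```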
After further visual inspection, we were left with 26 optical counterparts in the A\,1300 cluster field ($\sim 24$~per~cent of the total number of radio sources), and 25 optical counterparts in the MACS\,J1931.8--2634 cluster field ($\sim 15$~per~cent of the total number of radio sources).
The information on the optical counterparts was complemented with the redshift data to identify cluster members.
\\
In A\,1300, the redshift information was taken from \citet{Ziparo12}\footnote{ESO GO Large Programme, PI B{\"o}hringer, ID number 169.A--0595}. We retrieved and re-analyzed the VIMOS spectroscopic data in \citet{Ziparo12}, limiting the analysis to slits including optical counterparts of radio sources and to targets brighter than $r' = 20.4$ (see Table~\ref{tab:Details_clusters_members}). Combining our re-analysis with information from NED, we found 14 spectroscopic redshifts, nine of which are consistent with being cluster members. Among these nine galaxies, four are new findings from our re-analysis.
For MACS\,J1931.8--2634, the redshift information was obtained from \citet[private
communication]{Rosati14}\footnote{CLASH VLT-ESO Large Programme, PI Rosati, ID number 186.A-0798}.
We found eight spectroscopic redshifts, six of which are consistent with being cluster members.
Even though we did not perform a full pseudo-phase-space analysis, using the rest-frame velocity offset $\Delta (v - v_{cluster})$ and the projected distance from the BCG, assumed to be the center of the cluster, we find that in both clusters the optical counterparts with spectroscopic redshift lie within the phase-space region which envelops the cluster members \citep[Figure 4 in][]{Ziparo12}. Since the two clusters have similar mass, their phase-space diagrams are similar, as confirmed by a preliminary analysis of the yet unpublished CLASH-VLT data of MACS\,J1931.8-2634.
This confirms that the radio galaxies with spectroscopic redshift counterparts in the range 0.29-0.31 and 0.34-0.36 for A\,1300 and MACS\,J1931.8--2634 respectively are most likely cluster members.
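The membership argument can be made quantitative with the rest-frame velocity offset $\Delta v = c(z - z_{\rm cl})/(1 + z_{\rm cl})$. Taking $z_{\rm cl} = 0.30$ for A\,1300 as an illustrative mid-point of the quoted member range (an assumption; the catalogued cluster redshift is $z \approx 0.31$), the edges of the 0.29--0.31 range correspond to offsets well within $3\sigma$ of the 987~km~s$^{-1}$ dispersion:

```python
# Rest-frame velocity offset used in the membership argument:
# Delta v = c * (z - z_cl) / (1 + z_cl). z_cl = 0.30 is illustrative here.

C_KMS = 299792.458  # speed of light [km/s]

def rest_frame_velocity(z, z_cl):
    return C_KMS * (z - z_cl) / (1.0 + z_cl)

for z in (0.29, 0.31):
    print(round(rest_frame_velocity(z, 0.30)))   # about -2306 and +2306 km/s
```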
The full list of optical identifications is given in Table~\ref{tab:MACSJ_MKT_crossID} and Table~\ref{tab:A1300_MKT_crossID}, and the radio-optical details of the cluster members
are given in Table~\ref{tab:Details_clusters_members}.
Although beyond the scope of the current paper, it is worth noting that, using SED fitting \citep[MAGPHYS,][]{daCunha08}, in both A\,1300 and MACS\,J1931.8--2634 the cluster radio galaxies split into quiescent and star-forming populations according to their specific star formation rate, which reflects the blue and red colours in the colour--magnitude diagrams (not shown here).
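The $k$-corrected powers listed in the following table can be approximately reproduced from the flux densities and redshifts. The sketch below assumes the convention $P = 4\pi D_L^2 S_\nu (1+z)^{\alpha-1}$ with $\alpha = 0.7$; since the exact $k$-correction convention is not stated in the text, the result is expected to match the tabulated $\log P$ only to within $\sim$0.1~dex:

```python
import math

# k-corrected radio power for the paper's flat cosmology (pure-Python
# luminosity distance; convention S_nu ~ nu^(-alpha), alpha = 0.7 assumed).

C_KMS = 299792.458
MPC_M = 3.0857e22   # metres per Mpc

def lum_dist_mpc(z, h0=70.0, om=0.3, ol=0.7, n=5000):
    ez = lambda zz: math.sqrt(om * (1.0 + zz) ** 3 + ol)
    dz = z / n
    integral = 0.5 * (1.0 / ez(0.0) + 1.0 / ez(z)) \
        + sum(1.0 / ez(i * dz) for i in range(1, n))
    return (C_KMS / h0) * integral * dz * (1.0 + z)

def log_power(s_mjy, z, alpha=0.7):
    dl_m = lum_dist_mpc(z) * MPC_M
    s_si = s_mjy * 1e-29                 # mJy -> W m^-2 Hz^-1
    p = 4.0 * math.pi * dl_m ** 2 * s_si * (1.0 + z) ** (alpha - 1.0)
    return math.log10(p)

# J1931-2637: S = 0.39 mJy at z = 0.342; the table lists log P = 23.22
print(round(log_power(0.39, 0.342), 2))   # ~23.1 with this convention
```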
\begin{table*}
\caption[Clusters members]{Details of the clusters members. Left to right: (1) radio source name; (2) and (3) J2000 right ascension and declination; (4) flux density at 1.28~GHz $S_M$; (5) indication on source size (extended or not); (6) $k$-corrected radio power $\rm{P_{1.28~GHz}}$; (7) and (8) J2000 right ascension and declination of the optical counterpart; (9), (10) and (11) magnitudes and corresponding colours; (12) spectroscopic redshift $z_{\rm sp}$.}
\label{tab:Details_clusters_members}
\centering
\begin{footnotesize}
\begin{center}
\begin{tabular}{lr cr cr cr cr cr cr cr cr cr cr cr}
\hline
\\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) &(9) & (10) & (11) & (12) \\
NAME& RA$_{J2000}$& DEC$_{J2000}$& $S_M$ & resolved & log$\rm{P_{1.28~GHz}}$ & RA$_{opt}$& DEC$_{opt}$& B& Rc & (B-Rc)& $z_{\rm sp}$&\\
& h m s& $^\circ$ ' "& mJy& & W~Hz$^{-1}$& hms& $^\circ$ ' "& & & &\\
\hline
J1931-2637 & 19:31:46.60 & -26:37:31.7 & $0.39 \pm 0.04$ & N & 23.22 & 19:31:46.53 & -26:37:31.0 & 21.95 & 19.67 & 2.28 & 0.342\\
J1931-2634 & 19:31:49.58 & -26:34:32.7 & $45.40 \pm 0.80$ & Y--BCG & 25.31 & 19:31:49.63 & -26:34:32.6 & 18.84 & 18.14 & 0.70 & 0.352\\
J1931-2635 & 19:31:50.02 & -26:35:17.2 & $145.20 \pm 0.70$ & Y--NAT & 25.81 & 19:31:50.00 & -26:35:17.1 & 21.50 & 19.20 & 2.30 & 0.351\\
J1931-2630b & 19:31:54.86 & -26:30:57.7 & $0.88 \pm 0.10$ & N & 23.60 & 19:31:54.86 & -26:30:57.0 & 21.66 & 19.46 & 2.20 & 0.351\\
J1931-2645 & 19:31:58.33 & -26:45:34.4 & $0.25 \pm 0.05$ & N & 22.98 & 19:31:58.32 & -26:45:34.4 & 20.85 & 19.33 & 1.52 & 0.359\\
J1932-2625 & 19:32:07.42 & -26:25:16.3 & $0.25 \pm 0.05$ & N & 22.98 & 19:32:07.45 & -26:25:16.5 & 20.52 & 19.22 & 1.30 & 0.349\\
\hline
\hline
\\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) &(9) & (10) & (11) & (12) \\
NAME & RA$_{J2000}$ & DEC$_{J2000}$ & $S_M$ & resolved & log$\rm{P_{1.28~GHz}}$ & RA$_{opt}$ & DEC$_{opt}$& $g'$& $r'$ & $(g'-r')$ & $z_{\rm sp}$&\\
& h m s & $^\circ$ ' " & mJy & & W~Hz$^{-1}$ & hms & $^\circ$ ' "& & & & &\\
\hline
J1131-2000 & 11:31:13.74 & -20:00:20.5 & $1.69 \pm 0.17$ & N & 23.73 & 11:31:13.73 & -20:00:21.5 & 21.35 & 19.95 & 1.40 & 0.303\\
J1131-1954 & 11:31:46.79 & -19:54:52.5 & $0.25 \pm 0.05$ & N (B2) & 22.89 & 11:31:47.13 & -19:54:52.7 & 21.05 & 19.63 & 1.41 & 0.302\\
J1131-1949 & 11:31:48.62 & -19:49:01.7 & $0.25 \pm 0.05$ & N & 22.89 & 11:31:48.56 & -19:49:02.2 & 21.23 & 20.21 & 1.02 & 0.302\\
J1131-1958 & 11:31:49.50 & -19:58:07.9 & $0.39 \pm 0.05$ & N & 23.08 & 11:31:49.53 & -19:58:07.6 & 20.87 & 19.74 & 1.13 & 0.295\\
J1131-1953b & 11:31:54.30 & -19:53:53.5 & $37.80 \pm 0.40$ & Y--TAIL (A2) & 25.03 & 11:31:54.27 & -19:53:50.8 & 20.86 & 19.41 & 1.45 & 0.305\\
J1131-1955 & 11:31:54.34 & -19:55:39.0 & $12.10 \pm 0.20$ & Y--BCG (A1)& 24.53 & 11:31:54.18 & -19:55:39.8 & 20.09 & 18.62 & 1.46 & 0.307\\
J1131-1952b & 11:31:54.85 & -19:52:07.7 & $2.30 \pm 0.20$ & Y (A3) & 23.80 & 11:31:54.95 & -19:52:10.2 & 20.70 & 19.24 & 1.45 & 0.303\\
J1132-1954 & 11:32:02.77 & -19:54:09.0 & $1.02 \pm 0.10$ & Y (12)& 23.38 & 11:32:02.70 & -19:54:13.5 & 21.23 & 19.78 & 1.44 & 0.306\\
J1132-1952 & 11:32:04.16 & -19:52:11.1 & $0.25 \pm 0.05$ & N & 22.89 & 11:32:04.39 & -19:52:12.4 & 21.14 & 20.39 & 0.75 & 0.302\\
\hline
\end{tabular}
\end{center}
\end{footnotesize}
\end{table*}
\subsection{The Radio Luminosity Function}
The RLF is a solid statistical tool to investigate the radio properties of a galaxy population \citep{Ledlow96,Venturi00,Giacintucci04,Branchesi06}.
To further explore if and how the dynamical properties of galaxy clusters affect the statistical properties of the radio galaxy population, we derived the RLF in both clusters. In particular, the influence of the environment should be reflected in the shape of the RLF or in its amplitude, or both.
Consistent with the radio source counts, our analysis has been carried out within about two virial radii (12~arcmin) in both clusters.
The 1.28~GHz $k$-corrected
radio power of the cluster radio galaxies spans $22.89 < \log{\rm P_{1.28~GHz}~(W/Hz)} < 25.03$ in A\,1300 and $22.98 < \log{\rm P_{1.28~GHz}~(W/Hz)} < 25.81$ in MACS\,J1931.8--2634 (see Table~\ref{tab:Details_clusters_members}).
The redshift of the individual galaxies was used to evaluate the $k$ correction.
We divided this interval into bins of $\Delta \log P = 0.4$ in A\,1300. Due to the smaller number of sources, we chose a bin width of $\Delta \log P = 0.6$ for MACS\,J1931.8--2634.
\\
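For reference, the $k$-corrected powers in Table~\ref{tab:Details_clusters_members} can be approximately reproduced (to $\sim 0.1$~dex) with the sketch below; the flat $\Lambda$CDM parameters ($H_0 = 70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m = 0.3$) and the spectral index $\alpha = 0.7$ (with $S_\nu \propto \nu^{-\alpha}$) are our assumptions, as the text does not state them.

```python
import math

def lum_dist_mpc(z, h0=70.0, om=0.3, ol=0.7, n=2000):
    # Luminosity distance in Mpc for a flat LCDM cosmology, via
    # trapezoidal integration of the comoving-distance integral.
    c = 299792.458  # km/s
    dz = z / n
    inv_e = [1.0 / math.sqrt(om * (1 + i * dz) ** 3 + ol)
             for i in range(n + 1)]
    dc = (c / h0) * dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return (1 + z) * dc

def log_p_1p28ghz(s_mjy, z, alpha=0.7):
    # k-corrected monochromatic radio power (W/Hz), assuming the
    # convention S_nu ~ nu^-alpha: P = 4 pi D_L^2 S (1+z)^(alpha-1)
    dl_m = lum_dist_mpc(z) * 3.086e22   # Mpc -> m
    s_si = s_mjy * 1e-29                # mJy -> W m^-2 Hz^-1
    return math.log10(4 * math.pi * dl_m ** 2 * s_si
                      * (1 + z) ** (alpha - 1))
```

For example, `log_p_1p28ghz(0.39, 0.342)` returns $\sim 23.1$, close to the tabulated 23.22 for J1931-2637; the residual depends on the adopted cosmology and spectral index.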
To ensure optical completeness in the normalization of the radio galaxy distribution we selected
all the galaxies in the magnitude range $18.6 < m_{r'} < 20.3$ for A\,1300 and those in the magnitude range
$18.1 < m_{\rm Rc} < 19.7$ for MACS\,J1931.8--2634. The negligible impact of the different filters for the two clusters is discussed in Section~\ref{sec:optical}. The faint limits in $r'$
and Rc are approximately 5 magnitudes brighter than the depth of the stacked images, which ensures completeness of the optical samples in both clusters.
We used the $r'$ and Rc filters because they are redder than the 4000~\AA \ break for both clusters and have similar system transmission, as opposed to the {\em g}, B and V filters.
We obtained 261 objects for A\,1300, 70 of which have spectroscopic redshift in the range $0.29 < z_{\rm sp} < 0.31$, and 1181 objects for MACS\,J1931.8--2634, 220 of which have spectroscopic redshift in the $0.34 < z_{\rm sp} < 0.36$ range. Each radio power bin has been normalized by 70 and 220 for A\,1300 and MACS\,J1931.8--2634 respectively, as listed in Table~\ref{tab:MACSJ_MKT_RLF}, which reports the radio power interval (Col. 1) and the values of the differential and integral RLF (Col. 2 and 3 respectively) for both clusters. MACS\,J1931.8--2634 is reported in the upper part of the table, A\,1300 in the lower part.
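The integral RLF values in Table~\ref{tab:MACSJ_MKT_RLF} amount to a reverse cumulative sum of the per-bin counts divided by the number of optically selected members; a minimal sketch of ours, with the counts copied from the table:

```python
def integral_rlf(counts, n_members):
    # Cumulative (integral) RLF: for each power bin, the fraction of
    # members with radio power at or above that bin's lower edge.
    total = 0
    out = []
    for c in reversed(counts):
        total += c
        out.append(total / n_members)
    return list(reversed(out))

# Per-bin counts from Table: A1300 (normalized by 70 members) and
# MACS J1931.8-2634 (normalized by 220 members).
rlf_a1300 = integral_rlf([4, 1, 2, 0, 1, 1], 70)
rlf_macs = integral_rlf([3, 1, 0, 1, 1], 220)
```

The output reproduces the tabulated integral RLFs to within rounding.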
The two integral radio luminosity functions are shown in Fig.~\ref{fig:RLF}.
The integral RLF has a similar slope in both clusters, with the A\,1300 one lying systematically above that of MACS\,J1931.8--2634. Unfortunately,
the samples are too small to perform a KS test, so we integrated the RLFs in a single power bin over the full radio power range.
In order to account for the different radio power detection limit in the two clusters ($\log{\rm P_{1.28~GHz}} = 22.89$ and 22.98 for A\,1300 and MACS\,J1931.8--2634 respectively), we removed the three objects with log${\rm P_{1.28~GHz}} = 22.89$ in A\,1300.
We obtained $0.027 \pm 0.011$ for MACS\,J1931.8--2634 and $0.09 \pm 0.04$ for A\,1300, leading to a ratio of $3.3 \pm 1.9$ between the two integral RLFs.
These results indicate that the evidence for enhanced radio emission in A\,1300 is not statistically significant ($1.2 \sigma$).
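The quoted ratio and its uncertainty follow from standard first-order propagation of uncorrelated errors; as a minimal check (the helper is ours):

```python
import math

def ratio_with_error(a, sa, b, sb):
    # r = a/b with sigma_r = r * sqrt((sa/a)^2 + (sb/b)^2),
    # assuming uncorrelated Gaussian errors on a and b.
    r = a / b
    return r, r * math.sqrt((sa / a) ** 2 + (sb / b) ** 2)

# Integrated RLF values quoted in the text: A1300 over MACS J1931.8-2634
r, sr = ratio_with_error(0.09, 0.04, 0.027, 0.011)
significance = (r - 1.0) / sr  # deviation of the ratio from unity
```

This gives $r \simeq 3.3 \pm 2.0$, consistent with the quoted $3.3 \pm 1.9$, and a deviation from unity of $\simeq 1.2\sigma$.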
\begin{table}
\caption[MACSJ1931-2634 radio integral LF]{RLFs of the MACS\,J1931.8-2634 (top part) and A\,1300 (bottom part) clusters. From left to right: radio power (logarithmic) interval, differential RLF (normalized by the number of cluster optical galaxies) and cumulative RLF.}\label{tab:MACSJ_MKT_RLF}
\begin{footnotesize}
\begin{center}
\begin{tabular}{cr cr cr}
\hline
(1) & (2) & (3) \\
$\Delta$log${\rm P_{1.28~GHz}}$ & differential RLF & integral RLF &\\
\hline
22.95-23.55 & 3/220 & 0.0271\\
23.55-24.15 & 1/220 & 0.0135\\
24.15-24.75 & 0/220 & 0.009\\
24.75-25.35 & 1/220 & 0.009\\
25.35-25.95 & 1/220 & 0.0045\\
\hline
22.81-23.21 & 4/70 & 0.1284\\
23.21-23.61 & 1/70 & 0.0713\\
23.61-24.01 & 2/70 & 0.0571\\
24.01-24.41 & 0/70 & 0.0285\\
24.41-24.81 & 1/70 & 0.0285\\
24.81-25.21 & 1/70 & 0.0142\\
\hline
\end{tabular}
\end{center}
\end{footnotesize}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{rlf_plot_200929.pdf}
\caption{Integral RLFs of the A\,1300 (black) and MACS\,J1931.8-2634 (red) clusters (see text for details).}
\label{fig:RLF}
\end{figure}
\section{Summary and future work}\label{sec:discussion}
In this paper we presented new 1.28~GHz MeerKAT observations of the two galaxy clusters A\,1300 and MACS\,J1931.8--2634, carried out as part of the MeerKAT early science programme, when only 16 antennas were available for observing. The observations served to test calibration and imaging pipelines for MeerKAT. As a scientific goal, we selected a merging (A\,1300) and a relaxed (MACS\,J1931.8--2634) cluster of similar mass, both located at $z \sim 0.3$ in order to isolate effects due to cosmological evolution, for a comparative study of the properties of the radio galaxy population in different environments.
A\,1300 is a merging cluster, with a well-studied giant radio halo and a relic, as often found in merging clusters; MACS\,J1931.8--2634 is a relaxed cluster hosting one of the most X--ray luminous cool cores, and extended emission of unclear nature surrounding the radio AGN associated with the BCG.
The angular resolution and $uv$--coverage of our observations are very well suited to perform a comparative study of the statistical properties of the radio galaxy population, which we carried out by means of the radio source counts and the radio luminosity function.
We extracted a radio source catalogue for each cluster down to a 0.2~mJy threshold (corresponding to 5$\sigma$) and out to about two virial radii (corresponding to a 12~arcmin radius), which we consider the extent of the possible effects of cluster dynamics on the radio galaxy population. We further extracted another catalogue from the outskirts of each cluster, in an annulus of 12~$ < r <$~30~arcmin, representative of the background distribution.
We performed a two-sample KS test of the radio source counts in the two clusters. We rejected the hypothesis that the cluster catalogues come from the same distribution, though only at the 95~per~cent significance level.
To further investigate if and how the cluster dynamics affects the radio emission of cluster galaxies, we complemented our radio catalogues with optical data in order to derive the radio-optical luminosity function for each cluster. In particular, we used {\em Subaru} SuprimeCam data for A\,1300 in the $g^{\prime}$ and $r^{\prime}$ bands and the CLASH full filter coverage for MACS\,J1931.8--2634. These datasets are $\approx 5$~magnitudes deeper than the faint limits used for the radio-optical luminosity function.
The optical data were complemented with redshift information from the literature, yielding the identification of 9 cluster radio galaxies out of 70 cluster members in A\,1300 and 6 cluster radio galaxies out of 220 cluster members in MACS\,J1931.8--2634. These radio galaxies were used to compute the differential and integral radio luminosity functions, which span the power range $22.81 < \log\rm{P_{1.28~GHz}}~(W/Hz) < 25.95$.
We found that the integral RLF in A\,1300 lies systematically above the MACS\,J1931.8--2634 one. After averaging over the whole common power interval, the ratio of the two RLFs is $3.3 \pm 1.9$, implying that the probability of hosting radio emission is 3.3 times higher in A\,1300. Due to the small sample, however, this result is not statistically significant ($1.2\sigma$ level), suggesting that any role of cluster mergers as a trigger of radio emission in the cluster galaxy population is marginal.
The analysis presented in this work, though limited by the small statistics, aligns with previous studies in the field \citep[e.g.,][]{Ledlow96,Venturi00,Giacintucci04}, and the role of cluster
dynamics on the global statistical properties of radio galaxies remains unclear.
Future, more sensitive observations with the full MeerKAT array will increase the sample statistics and allow a more robust confirmation of the findings presented here.
\section*{Acknowledgements}
The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. This work is based on research supported in part by the National Research Foundation of South Africa (Grant Number 103424). We acknowledge the support from the Ministero degli Affari Esteri e della Cooperazione Internazionale, Direzione Generale per la Promozione del Sistema Paese, Progetto di Grande Rilevanza ZA18GR02. Based in part on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan. Basic research in radio astronomy at the Naval Research Laboratory is supported by 6.1 Base funding. RK acknowledges the support of the Department of Atomic Energy, Government of India, under project no. 12-R\&D-TFR-5.02-0700.
\section*{Data availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section*{Acknowledgements}
We appreciate discussions with the participants of
the VII Workshop on Geometric Correspondences of Gauge Theories,
especially the explanations and comments by
M.Bershtein, G.Bonelli, A.Grassi, A.Tanzini, Ya.Yamada and Y.Zenkevich.
This work was performed at the Institute for Information Transmission Problems
with the financial support
of the Russian Science Foundation (Grant No.14-50-00150).
\section{Full example of N-body simulation kernel using OpenCHK}
\label{sec:appendixA}
\begin{figure}[h]
\begin{lstlisting}[escapechar=$]
void solve_nbody(particles_block_t * local,
particles_block_t * tmp,
force_block_t * forces,
const int n_blocks,
const int timesteps,
const float time_interval)
{
int rank, rank_size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &rank_size);
int t = 0;
// Load local and t vars, if any.
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$#pragma chk load ([n_blocks] local, t)
for (; t < timesteps; t++) {
#pragma oss task inout([n_blocks] local)
{
// Store local and t vars, using t as id.
// Each checkpoint done must be at level 4.
// Checkpoint every 10 iterations.
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$#pragma chk store ([n_blocks] local, t) id (t) level (4) \
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ kind (CHK_FULL) if (t % 10 == 0)
}
particles_block_t * remote = local;
for(int i=0; i < rank_size; i++){
#pragma oss task in([n_blocks] local,[n_blocks] remote) \
inout([n_blocks] forces)
calculate_forces(forces, local, remote, n_blocks);
#pragma oss task in([n_blocks] remote) \
out([n_blocks] tmp)
exchange_particles(remote, tmp, n_blocks, rank, rank_size, i, t);
remote=tmp;
}
#pragma oss task inout([n_blocks] local) inout([n_blocks] forces)
update_particles(n_blocks, local, forces, time_interval);
}
#pragma oss taskwait
}
\end{lstlisting}
\caption{Full example of N-body simulation kernel using OpenCHK.}
\end{figure}
\pagebreak
\section{Full example of N-body simulation kernel using FTI}
\label{sec:appendixB}
\begin{figure}[h]
\begin{lstlisting}[escapechar=$]
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$#include <fti.h>
void solve_nbody(particles_block_t * local,
particles_block_t * tmp,
force_block_t * forces,
const int n_blocks,
const int timesteps,
const float time_interval)
{
int rank, rank_size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &rank_size);
int t = 0;
// Create a new FTI data type
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$FTIT_type ckptInfo;
// Initialize the new FTI data type
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$FTI_InitType(&ckptInfo, n_blocks*sizeof(particles_block_t));
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$FTI_Protect(0, &t, sizeof(int), FTI_INTG);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$FTI_Protect(1, local, 1, ckptInfo);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$if( FTI_Status() ) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ FTI_Recover();
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$}
for (; t < timesteps; t++) {
#pragma oss task inout([n_blocks] local)
{
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$if( t % 10 == 0 ) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int res = FTI_Checkpoint( t, 4 );
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if( res != FTI_DONE ) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ printf( "FTI internal error." );
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ MPI_Abort( MPI_COMM_WORLD, -1 );
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ }
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$}
}
particles_block_t * remote = local;
for(int i=0; i < rank_size; i++){
#pragma oss task in([n_blocks] local,[n_blocks] remote) \
inout([n_blocks] forces)
calculate_forces(forces, local, remote, n_blocks);
#pragma oss task in([n_blocks] remote) \
out([n_blocks] tmp)
exchange_particles(remote, tmp, n_blocks, rank, rank_size, i, t);
remote=tmp;
}
#pragma oss task inout([n_blocks] local) inout([n_blocks] forces)
update_particles(n_blocks, local, forces, time_interval);
}
#pragma oss taskwait
}
\end{lstlisting}
\caption{Full example of N-body simulation kernel using FTI.}
\end{figure}
\pagebreak
\section{Full example of N-body simulation kernel using VeloC}
\label{sec:appendixC}
\begin{figure}[h]
\begin{lstlisting}[escapechar=$]
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$#include <veloc.h>
void solve_nbody(particles_block_t * local,
particles_block_t * tmp,
force_block_t * forces,
const int n_blocks,
const int timesteps,
const float time_interval)
{
int rank, rank_size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &rank_size);
int t = 0;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$VELOC_Mem_protect(0, &t, 1, sizeof(int));
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$VELOC_Mem_protect(1, local, n_blocks, sizeof(particles_block_t));
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$int restarted_version = VELOC_Restart_test("nbody", t);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$if(restarted_version != VELOC_FAILURE) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ assert(VELOC_Restart("nbody", restarted_version) == VELOC_SUCCESS);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$}
for (; t < timesteps; t++) {
#pragma oss task inout([n_blocks] local)
{
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$if( t % 10 == 0 ) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int res = VELOC_Checkpoint("nbody", t);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if( res != VELOC_SUCCESS ) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ printf( "VELOC internal error." );
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ MPI_Abort( MPI_COMM_WORLD, -1 );
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ }
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$}
}
particles_block_t * remote = local;
for(int i=0; i < rank_size; i++){
#pragma oss task in([n_blocks] local,[n_blocks] remote) \
inout([n_blocks] forces)
calculate_forces(forces, local, remote, n_blocks);
#pragma oss task in([n_blocks] remote) \
out([n_blocks] tmp)
exchange_particles(remote, tmp, n_blocks, rank, rank_size, i, t);
remote=tmp;
}
#pragma oss task inout([n_blocks] local) inout([n_blocks] forces)
update_particles(n_blocks, local, forces, time_interval);
}
#pragma oss taskwait
}
\end{lstlisting}
\caption{Full example of N-body simulation kernel using VeloC.}
\end{figure}
\pagebreak
\section{Full example of N-body simulation kernel using SCR}
\label{sec:appendixD}
\begin{figure}[h]
\begin{lstlisting}[escapechar=$]
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$#include <scr.h>
void solve_nbody_cp(const int n_blocks,
const int rank,
particles_block_t const* __restrict__ local,
const int timestep);
int solve_nbody_rt(const int n_blocks,
const int rank,
particles_block_t* __restrict__ local,
int *timestep);
void solve_nbody(particles_block_t * local,
particles_block_t * tmp,
force_block_t * forces,
const int n_blocks,
const int timesteps,
const float time_interval)
{
int rank, rank_size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &rank_size);
int t = 0;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$solve_nbody_rt(n_blocks, rank, local, &t);
for (; t < timesteps; t++) {
#pragma oss task inout([n_blocks] local)
{
if( t % 10 == 0 ) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$solve_nbody_cp(n_blocks, rank, local, t);
}
}
particles_block_t * remote = local;
for(int i=0; i < rank_size; i++){
#pragma oss task in([n_blocks] local,[n_blocks] remote) \
inout([n_blocks] forces)
calculate_forces(forces, local, remote, n_blocks);
#pragma oss task in([n_blocks] remote) \
out([n_blocks] tmp)
exchange_particles(remote, tmp, n_blocks, rank, rank_size, i, t);
remote=tmp;
}
#pragma oss task inout([n_blocks] local) inout([n_blocks] forces)
update_particles(n_blocks, local, forces, time_interval);
}
#pragma oss taskwait
}
\end{lstlisting}
\caption{First part of full example of N-body simulation kernel using SCR.}
\end{figure}
\begin{figure}[h]
\begin{lstlisting}[escapechar=$]
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$void solve_nbody_cp(const int n_blocks,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ const int rank,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ particles_block_t const* __restrict__ local,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ const int timestep) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int status;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int saved_data = 0;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int saved_data_2 = 0;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int i=0;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int res;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ char name[256];
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ char path[SCR_MAX_FILENAME];
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int perform_checkpoint;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ SCR_Need_checkpoint(&perform_checkpoint);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if (perform_checkpoint == 1) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ res = SCR_Start_checkpoint();
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if(res != SCR_SUCCESS)
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ assert(0 && "SCR failed starting a checkpoint.");
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ const char * scr_prefix = getenv("SCR_PREFIX");
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ sprintf(name,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ // Get backup file path
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if(SCR_Route_file(name, path)==SCR_SUCCESS){
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int fd = open (path,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ O_WRONLY | O_CREAT | O_TRUNC,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ S_IRUSR | S_IRGRP | S_IROTH | S_IWUSR | S_IWGRP | S_IWOTH);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ // Open, write and close file
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ assert(fd >= 0);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ saved_data = write(fd, &timestep, sizeof(int));
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ saved_data_2 = write(fd, local, sizeof(particles_block_t)*n_blocks);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ assert(close(fd)==0);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ }
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int is_valid = (saved_data+saved_data_2) == (sizeof(particles_block_t)*n_blocks + sizeof(int));
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ SCR_Complete_checkpoint(is_valid);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ }
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$}
\end{lstlisting}
\caption{Second part of full example of N-body simulation kernel using SCR.}
\end{figure}
\pagebreak
\begin{figure}[h]
\begin{lstlisting}[escapechar=$]
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$int solve_nbody_rt(const int n_blocks,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ const int rank,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ particles_block_t* __restrict__ local,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int *current_timestep) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int status = 0, found_cp = 0, temp_tstep, num_read, num_read_particles, size;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ MPI_Comm_size(MPI_COMM_WORLD, &size);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ char name[256], path[SCR_MAX_FILENAME], cp_name[SCR_MAX_FILENAME];
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ sprintf(name, "solve_nbody-n
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int res = SCR_Have_restart(&found_cp, path);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if(res != SCR_SUCCESS)
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ assert(0 && "SCR failed when checking for available restarts.");
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if(!found_cp)
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ return -1;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ found_cp = 0;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if(SCR_Start_restart(cp_name) != SCR_SUCCESS)
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ assert(0 && "SCR failed starting a restart.");
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if (SCR_Route_file(name, path) == SCR_SUCCESS) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int fd = open (path,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ O_RDWR | O_CREAT,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ S_IRUSR | S_IRGRP | S_IROTH | S_IWUSR | S_IWGRP | S_IWOTH);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ assert(fd >= 0);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ num_read = read(fd, &temp_tstep, sizeof(int));
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ void *tmp = (void *) mmap(NULL,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ n_blocks*sizeof(particles_block_t)+sizeof(int),
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ PROT_WRITE|PROT_READ, MAP_SHARED, fd, 0);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ assert(close(fd)==0);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if(num_read == sizeof(int) && tmp != MAP_FAILED) {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ found_cp = 1;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ char * aux = (char *) tmp + sizeof(int);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ memcpy( local, aux, n_blocks*sizeof(particles_block_t ));
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ munmap( tmp, n_blocks*sizeof(particles_block_t) );
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ } else {
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ status = -1;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ }
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ }
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ // Follows in next figure.
\end{lstlisting}
\caption{Third part of full example of N-body simulation kernel using SCR.}
\end{figure}
\begin{figure}
\begin{lstlisting}[escapechar=$]
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if(SCR_Complete_restart(found_cp) != SCR_SUCCESS)
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ status = -5;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ /* determine whether all tasks successfully read their checkpoint file */
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int all_found_checkpoint = 0;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ MPI_Allreduce(&found_cp, &all_found_checkpoint, 1, MPI_INT, MPI_LAND,
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ MPI_COMM_WORLD);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if (!all_found_checkpoint)
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ status = -2;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ /* check that everyone is at the same timestep */
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ int timestep_and, timestep_or;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ MPI_Allreduce(current_timestep, &timestep_and, 1, MPI_INT, MPI_BAND, MPI_COMM_WORLD);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ MPI_Allreduce(current_timestep, &timestep_or, 1, MPI_INT, MPI_BOR, MPI_COMM_WORLD);
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if (timestep_and != timestep_or)
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ status = -3;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ if(status == 0)
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ *current_timestep = temp_tstep;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$ return status;
$\makebox[0pt][l]{\color{cyan}\rule[-4pt]{0.85\linewidth}{14pt}}$}
\end{lstlisting}
\caption{Last part of full example of N-body simulation kernel using SCR.}
\end{figure}
\pagebreak
\section{Introduction}
Given that exascale systems are expected to be composed of a \highlightForReview{large} number of
components, the mean time between failures (MTBF) will drastically \highlightForReview{decrease} from
the order of days in petascale systems~\cite{reed2004} to the order of hours
in exascale ones~\cite{cappello2014toward}. It is expected that exascale systems
present deep and complex memory/storage hierarchies, hindering the ability
of non-expert users to exploit \highlightForReview{them optimally}. Considering \highlightForReview{these}
facts, the \highlightForReview{high-performance computing} (HPC) research community
is focusing on resilience and fault tolerance \highlightForReview{to} mitigate the impact
of system faults \highlightForReview{more easily. Accordingly,} several
libraries and tools \highlightForReview{are being developed that} leverage low-level details and system nuances to
exploit exascale systems optimally, regardless of \highlightForReview{a} user's expertise. Depending
on the errors \highlightForReview{those systems} address, \highlightForReview{they} can be application-specific~\cite{leo},
algorithm-specific~\cite{du2012algorithm}, or more generic solutions~\cite{blcr}.
One technique that is increasing \highlightForReview{in} popularity is application-level
checkpoint\slash restart \highlightForReview{(CR)}. The main reason is its efficiency in terms of space
and performance compared to other fault-tolerance techniques. \highlightForReview{However}, current
approaches offering application-level \highlightForReview{CR} require considerable effort from the user. Users are in charge of
identifying the application state, serializing and deserializing data for
checkpoints or recovery, and modifying the program flow to check whether the
execution is a restart. Additionally, moving from one system to \highlightForReview{another} may require \highlightForReview{rewriting} the code, at least for tuning.
Among the wide variety of tools and libraries available, three stand out
due to their support for multi-level checkpointing: Fault
Tolerance Interface (FTI)~\cite{bautista2011fti}, Scalable Checkpoint and Restart
(SCR)~\cite{scr}, and Very Low Overhead Checkpointing System (VeloC)~\cite{veloc}. These state-of-the-art
libraries also provide optimized I/O capabilities \highlightForReview{and} several redundancy
schemes. Each of them \highlightForReview{provides} its own flexible application programming interface
(API). Nonetheless, compared to other techniques, such as
transparent \highlightForReview{CR} (which requires no participation from the
user but introduces high overhead), they still demand
significant work \highlightForReview{from} the user.
In this paper, we contribute to the aforementioned set of libraries and tools
with an application-level \highlightForReview{CR} mechanism based on compiler
directives. Using compiler directives, we enhance portability and
programmability. Our solution, like FTI, SCR, and VeloC, supports fail-stop
errors (i.e., process abortions or hardware failures) and soft errors\highlightForReview{, although} undetected errors are not tolerated.
We present the \textbf{OpenCHK} programming model~\cite{openchk} for C/C++ and
Fortran. Our model is based on compiler directives such as \highlightForReview{in}
OpenMP \cite{openmp}. The sentinel used to recognize the directives and
clauses of the OpenCHK model is \texttt{chk}. \highlightForReview{Currently,} the model supports several
clauses and directives, \highlightForReview{which} are detailed in Section~\ref{sec:new_semantics},
providing users the ability to:
\begin{itemize}
\item Initialize and finalize the \highlightForReview{CR} environment in \highlightForReview{an easy and
portable} way.
\item Easily indicate the data to be protected.
\item Specify checkpoint conditions (e.g., frequency).
\item Set checkpoint properties, such as \highlightForReview{identifiers and levels.}
\item Select among different kinds of checkpoints
(full/differential).
\item Avoid the requirement of modifying the natural program flow \highlightForReview{to} check whether the current execution is a restart.
\end{itemize}
In this paper, we extend our initial work~\cite{maronas2017checkpoint} on
pragma-based \highlightForReview{CR} techniques in several directions. First, we
formalize our solution as the OpenCHK programming model. \highlightForReview{Second}, we add new
directives and clauses that \highlightForReview{increase} the expressiveness and \highlightForReview{introduce} new
features to the model, such as fault-tolerance-dedicated threads, differential
checkpoints, and support for the HDF5 format. \highlightForReview{Third, we extend our}
implementation with a new backend library (VeloC) and a useful functionality
called self-iterative data expressions. Finally, we extend the
evaluation to a wider set of production-level scientific applications. More
precisely, we add two new applications\highlightForReview{, xPic and LULESH,} and a new benchmark\highlightForReview{,
Heat 2D simulation}. Additionally, the whole evaluation, including the
benchmarks and applications evaluated in our previous work \highlightForReview{and} the ones
introduced in this work, has been conducted in a new and larger platform,
different \highlightForReview{from} the one used in our previous work. The results of our
evaluation reveal that our solution \highlightForReview{can} maintain efficiency and
robustness, as \highlightForReview{in the} current approaches, while enhancing portability and usability.
The rest of the paper is structured as follows. Section~\ref{sec:background}
contains an introduction to \highlightForReview{the} FTI/SCR/VeloC libraries, as well as the motivations
behind this work. Section~\ref{sec:related} reviews the most relevant related
work. Section~\ref{sec:model} specifies the model semantics and functionalities, and Section~\ref{sec:implementation} provides details of the
implementation. Section~\ref{sec:evaluation} consists of an
evaluation and discussion of our work. Finally, Section~\ref{sec:conclusion}
summarizes the work done and provides some concluding remarks. Future work
proposals are presented in Section~\ref{sec:future}.
\section{Background and Motivations}
\label{sec:background}
As introduced in the previous section, exascale systems threaten to
jeopardize the successful completion of large HPC
applications. Therefore, fault-tolerance systems become crucial to mitigate the
impact of errors and guarantee the completion of applications. There \highlightForReview{are} many
HPC applications performing large simulations based on iterative algorithms,
usually requiring long execution times to \highlightForReview{complete successfully}. Such long
execution times make them more likely to experience system faults. In fact,
given the increased likelihood of system faults in the exascale era, it is
possible to find applications requiring more execution time to finish than the
MTBF of the system. Figure~\ref{fig:mtbfBTappTime} illustrates \highlightForReview{a} scenario where
an application takes more than three times the \highlightForReview{system's} MTBF to complete. Thus,
the application is very unlikely to complete. In this kind of \highlightForReview{scenario},
techniques providing resilience, \highlightForReview{like CR},
become essential.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/mtbfBTappTime.pdf}
\caption{\highlightForReview{Long-running applications are unlikely to complete} when MTBF is too small.}
\label{fig:mtbfBTappTime}
\end{figure}
\highlightForReview{CR} is a widely used technique for saving the full state of
an application in such a way that if an error occurs it can be restored,
allowing the execution to continue from that point instead of from
\highlightForReview{the beginning}. Section~\ref{sec:related} outlines several approaches to provide this
functionality. Our proposal focuses on providing persistent \highlightForReview{CR}. Persistent approaches store the data in a persistent way, usually in
\highlightForReview{a parallel file system (PFS)}. If a node fails and cannot be
recovered, the checkpoints are still accessible.
There \highlightForReview{are} several approaches \highlightForReview{that provide CR}, ranging from
ad-hoc solutions where developers face low-level details \highlightForReview{(such as I/O
operations)}, to libraries offering APIs for abstracting users from such low-level
details and nuances. However, those tools still involve some difficulties, such
as poor portability and complex APIs.
\highlightForReview{Regarding} complexity, current approaches require users to (1)
serialize/deserialize data, and (2) modify the natural program flow to check
\highlightForReview{if} a previous checkpoint is available for restart or \highlightForReview{if} the application must
run from scratch. Serialization/deserialization is the \highlightForReview{process} of translating
data structures or objects into a format that can be \highlightForReview{stored efficiently}. For
instance, pointers cannot be stored directly.
Instead, \highlightForReview{their} contents must \highlightForReview{first be} retrieved and then stored. In a restart,
the contents must be read from disk and then \highlightForReview{assigned to} the corresponding pointers.
Regarding the flow, current \highlightForReview{CR} approaches require
users \highlightForReview{to check for checkpoint existence explicitly} and, if any \highlightForReview{exist}, explicitly
ask to recover data. Figure~\ref{fig:complex} shows a very brief example of
these responsibilities. A more complete and real example
can be seen in the appendices, where a full example of a real application is
included using several state-of-the-art libraries, such as FTI, SCR, and VeloC.
Furthermore, some approaches force users to deal directly with I/O, as in
Appendix~\ref{sec:appendixD}.
\begin{figure}
\centering
\begin{lstlisting}
int **data;
for(int i = 0; i < N; i++){
data[i] = (int *) malloc(i*sizeof(int));
}
// Modifying program flow
if(restart_available()) {
// Deserializing after restart
int *restarted_data = nullptr;
size_t restart_size = 0;
int id = -1;
for(int i = 0; i < N; i++) {
restart(&restarted_data, &restart_size, &id);
assert(id == i);
assert(restart_size == i);
memcpy(data[i], restarted_data, restart_size);
}
}
else {
init_data(data, N);
}
// Serializing for checkpoint
int *cp_data = nullptr;
size_t cp_size = 0;
for(int i = 0; i < N; i++) {
cp_data = data[i];
cp_size = i;
checkpoint(cp_data, cp_size, i /* id */);
}
\end{lstlisting}
\vspace*{-3mm}
\caption{Snippet of pseudocode showing a brief example of the
serialization/deserialization process and the modification in
program flow}
\vspace*{-4mm}
\label{fig:complex}
\end{figure}
\highlightForReview{Second}, the proliferation of several \highlightForReview{CR} libraries with
different interfaces hinders portability between systems. As there is no
standard software stack for \highlightForReview{CR}, it is possible that
different systems offer different \highlightForReview{CR} software. Thus,
writing \highlightForReview{CR} code using native APIs may cause portability
issues when moving to a different system.
In such a situation, \highlightForReview{a} user's options are constrained to \highlightForReview{the following}: (a)
rewriting \highlightForReview{the} code using the interface of the library available in the new
system or (b) installing the original library. Nevertheless, \highlightForReview{the} installation of
the \highlightForReview{CR} libraries requires deep knowledge of the storage
hierarchy for adequately tuning the installation to maximize
performance. Additionally, it may require special permissions that common users
do not have. In any case, both options are \highlightForReview{costly} and non-trivial,
affecting portability.
Using a directive-based approach, the aforementioned problems disappear. The
model takes several of the user responsibilities (i.e.,
data serialization/deserialization and \highlightForReview{checking} whether a restart is possible) and
\highlightForReview{the users} only \highlightForReview{need} to specify which data must be checkpointed/restored
in a simple way, maximizing programmability. Our solution adds a new
abstraction layer \highlightForReview{with a
unique interface} that enables us to leverage several backend libraries, thereby enhancing portability. The enhancement of portability
comes from enabling developers to use the already tuned installation present in
every system without changing any code. This approach
enables users to focus on applications, thereby increasing
productivity and portability.
\section{Related work}
\label{sec:related}
We describe different \highlightForReview{CR} approaches
focusing on persistent solutions. We discuss different kinds of checkpointing \highlightForReview{and} examine some checkpointing tools, such as BLCR, FTI, SCR,
and VeloC.
The \highlightForReview{CR} technique consists in regularly storing application
data and restoring it in case of error, thereby benefiting from previous work
rather than restarting from scratch. For addressing soft errors, data can be
saved in memory (non-persistent), whereas for hard faults, data must be stored
persistently in storage.
\highlightForReview{CR} approaches can be organized using several criteria:
application-level or system-level, according to where it is implemented;
persistent or diskless, depending on the \highlightForReview{method} of storing data; and coordinated
or non-coordinated, \highlightForReview{according} to whether process coordination is required
to \highlightForReview{create the checkpoints.}
In coordinated checkpointing, the processes must coordinate to take their
checkpoints, building a global state. In other words, all the processes must \highlightForReview{create}
checkpoints \highlightForReview{simultaneously}. \highlightForReview{This simplifies the recovery process}
because there is no problem with possible rollback propagations. Additionally,
coordinated mechanisms only need one checkpoint for a successful recovery,
reducing storage overhead. Non-coordinated checkpointing, in contrast, allows
processes to \highlightForReview{create} checkpoints at any moment. This is a great advantage because
checkpoints can be \highlightForReview{created} when it is most convenient\highlightForReview{, but,} on a
restart, a \highlightForReview{globally} consistent state must be built \highlightForReview{by }searching the whole set of
saved checkpoints. Therefore, non-coordinated checkpointing may be affected by
rollback propagation, ending up resuming from the beginning of the execution.
Thus, overhead grows both in terms of performance and especially storage
space, \highlightForReview{because} each process must keep several checkpoints. CoCheck
\cite{stellner1996cocheck} is an example of coordinated checkpointing, while
\cite{bronevetsky2003automated} is non-coordinated.
With the objective of removing the main source of overhead, diskless
checkpointing~\cite{plank1998diskless}, \cite{zheng2004ftc} eliminated
stable storage from checkpointing. However, non-persistent approaches are less
resilient than their persistent counterparts, and they \highlightForReview{cannot} tolerate
complete system failures such as power outages. Furthermore, \highlightForReview{they increase}
memory, processor, and network overhead.
\highlightForReview{There are several} persistent checkpointing solutions, providing either
system-level or application-level checkpointing. The strongest point of
system-level approaches, such as~\cite{duell2002requirements},
\cite{roman2002survey}, \cite{duell2005design}, \cite{sankaran2005lam}, or
\cite{blcr} is the transparency: no changes in the application code are required.
However, \highlightForReview{this comes at the cost of} higher overhead in performance and disk
space compared to application-level solutions.
There are some solutions that are halfway between system-level and
application-level, such as \highlightForReview{that} proposed by Bronevetsky et al.
\cite{bronevetsky2004application} for shared memory environments. The authors
present it as an application-level approach, but the user cannot decide which
data must be checkpointed nor the frequency of the checkpoints. In fact, the
user can only place some calls to a given method indicating that a checkpoint
must be taken. Then, the system saves the heap, call stack, and local
and global variables. Given the low degree of freedom provided to the user, it
cannot be considered a pure application-level solution. Apart from that, most
applications do not need to save all the data stored by this approach \highlightForReview{for a successful restart, but only a
subset of it}. Thus, overhead is
increased both in performance and disk space usage.
\highlightForReview{There are also a variety of solutions} at application-level~\cite{young1974first},
\cite{duda1983effects}, \cite{plank2001processor}, \cite{daly2006higher}. Some provide single-level checkpointing, while a few provide multi-level
checkpointing. Applications that store all their checkpoints in the PFS may
introduce \highlightForReview{a large amount} of overhead~\cite{roadrunnertechreport2007},
\cite{schroeder2007}, \cite{oldfield2007modeling}, \cite{reed2004},
\cite{schroeder2010large}. Given the gap between the CPU and I/O performance,
multi-level checkpointing~\cite{gelenbe1976model}, \cite{vaidya1994case} becomes
essential for reducing overhead. The key is using \highlightForReview{different---and faster---components}
than PFS, such as RAM disks, local node storage, or SSDs to write the
checkpoints, and moving those checkpoints only when necessary, asynchronously
and transparently. FTI~\cite{bautista2011fti},
SCR~\cite{scr} and VeloC~\cite{veloc} are multi-level \highlightForReview{CR} solutions.
Those libraries overlap in their multi-level character and their multiple
redundancy schemes, like partner checkpoints and erasure codes. However, they
differ in the way these schemes are applied. In FTI and VeloC, the cluster
topology is detected automatically and the appropriate partner nodes for the
redundancy schemes are selected by the library. In contrast, SCR allows a
slightly more flexible setup. Besides the standard groups NODE and WORLD, users
or system administrators may define additional groups (e.g., all nodes that
share a common power supply). This can be used to increase the likelihood of
successful recoveries from the various redundancy levels.
VeloC was started as a project to combine FTI and SCR into
a single framework. On the one hand, \highlightForReview{it offers} a \textit{memory-based mode}
\highlightForReview{that is} very similar to FTI. On the other hand, there is a
\textit{file-based mode} that behaves much \highlightForReview{like} SCR. However, VeloC is still
missing some features that FTI or SCR support, e.g., different checkpointing
\highlightForReview{types} (i.e., full checkpoint, differential checkpoint, etc.).
The libraries come with a distinct set of features. For instance, FTI supports
differential checkpointing\highlightForReview{, and SCR can
interact} with the running execution and halt the execution \highlightForReview{either} immediately,
after a certain checkpoint, or at a specific time. The different set of features
and the assumption that different clusters will provide different
\highlightForReview{CR} libraries suggest a common interface for these
libraries which can be used inside HPC applications to run on different systems
without changing the code. Such an interface is proposed in this paper.
\section{OpenCHK Model}
\label{sec:model}
In this section, we detail the specification of the OpenCHK programming model,
including the directives and clauses supported \highlightForReview{and} the functionalities
provided. The aim of our model is to provide a standard way of coding
\highlightForReview{CR}. We offer a new level of abstraction that hides
implementation details from users, enabling them to focus on the application.
The approach presented in this paper is similar to the one used in some
programming models, such as OpenMP, to exploit parallelism in shared-memory
environments. The rest of this section is structured as follows: \highlightForReview{first}, we
present the directives and clauses of the OpenCHK model, and then we detail the
functionalities offered by the model.
\subsection{Directives and Clauses}
\label{sec:new_semantics}
The model supports four directives. Some of these may also be annotated with
clauses that modify their \highlightForReview{semantics}. Details on both directives
and clauses are provided below.
\begin{enumerate}[label={}]
\item []\textbf{Directives}
\item []\textbf{\texttt{init [clauses]}}: The init directive defines the
initialization of a checkpoint context. A checkpoint context is
necessary to use the other directives. It accepts the clause:
\begin{itemize}
\item \textbf{\texttt{comm(comm-expr)}}: comm-expr becomes the MPI
communicator that should be used by the user in the checkpoint
context that is being created. This clause is mandatory.
\end{itemize}
\item []\textbf{\texttt{load(data-expr-list) [clauses]}}: This directive
triggers a load of the data expressed inside the parentheses. The load
directive accepts the clause:
\begin{itemize}
\item \textbf{\texttt{if(bool-expr)}}: The if clause is used as a
switch-off mechanism: the load will be ignored if the bool-expr
evaluates to false.
\end{itemize}
\item []\textbf{\texttt{store(data-expr-list) [clauses]}}: The store
directive may request the library to save the specified data. It
accepts the clauses:
\begin{itemize}
\item \textbf{\texttt{if(bool-expr)}}: The if
clause is used as a switch-off mechanism: the store will be ignored
if the bool-expr evaluates to false. This clause is useful for
specifying the desired checkpoint frequency.
\item \textbf{\texttt{id(integer-expr)}}: Assigns an identifier to the
checkpoint. This clause is mandatory for the store directive. The
id of a checkpoint is helpful for later identification of an
application's progress upon failure. For instance, if users are
checkpointing steps in a loop, and the id is the loop induction
variable, users can easily infer which step was last checkpointed.
However, only the last successful checkpoint is kept.
\item \textbf{\texttt{level(integer-expr)}}: Selects the checkpoint level
which is associated with where the data is stored (e.g., local node
storage, \highlightForReview{PFS}, etc.) and the redundancy schemes
applied. This clause is mandatory for the store directive. Users
must \highlightForReview{consider} that the backend libraries provide a different
number of levels and different behaviors for equivalent levels.
This is the only parameter that must be tuned
depending on the underlying backend library. In a future release, we
plan to make this clause optional and rely on the parameters
passed in the configuration file.
\item \textbf{\texttt{kind(kind-expr)}}: Selects the checkpoint kind.
Currently, two kinds are supported. They are \texttt{CHK\_FULL},
which performs a full checkpoint; and \texttt{CHK\_DIFF}, which
performs a differential checkpoint.
\end{itemize}
\item []\textbf{\texttt{shutdown}}: Closes a checkpoint context.
\end{enumerate}
Figures~\ref{fig:init_shutdown},~\ref{fig:load}, and~\ref{fig:store} show how the
directives and clauses are used in C/C++ and Fortran. \highlightForReview{Specifically},
figure~\ref{fig:init_shutdown} \highlightForReview{shows} how to initialize and shut down a
checkpoint context; figure~\ref{fig:load} shows how to load several types
of data, ranging from \highlightForReview{simple scalars to 2-dimensional arrays}, including
contiguous and non-contiguous regions; and figure~\ref{fig:store} \highlightForReview{is} the
same as the previous figure but for storing instead of loading. However, as it is a
store, we must assign an identifier and a level, as shown in the figure.
\begin{figure}
\centering
\begin{lstlisting}
// C/C++ syntax
#pragma chk init comm(mpi_communicator)
#pragma chk shutdown
// Fortran syntax
!$chk init comm(mpi_communicator)
!$chk shutdown
\end{lstlisting}
\vspace*{-3mm}
\caption{Snippet of code using init and shutdown directives in C/C++ and Fortran}
\vspace*{-4mm}
\label{fig:init_shutdown}
\end{figure}
\begin{figure}
\centering
\begin{lstlisting}
// Load a) scalar;
// b) all array elements from 0 to size-1;
// c) array2 elements from 2 to 4;
// d) 2dArray elements from 2 to 4
// of all the rows from 0 to n-1
// C/C++ syntax
#pragma chk load(scalar, array[0;size], \
array2[2:4], 2dArray[0;n][2:4]) \
[if(cond)]
// Fortran syntax
!$chk load(scalar, array(0:size-1), &
& array2(2:4), 2dArray(0:n-1)(2:4)) &
& [if(cond)]
\end{lstlisting}
\vspace*{-3mm}
\caption{Snippet of code using load directive in C/C++ and Fortran}
\vspace*{-4mm}
\label{fig:load}
\end{figure}
\begin{figure}
\centering
\begin{lstlisting}
// Store a) scalar;
// b) all array elements from 0 to size-1;
// c) array2 elements from 2 to 4;
// d) 2dArray elements from 2 to 4
// of all the rows from 0 to n-1
// C/C++ syntax
#pragma chk store(scalar, array[0;size], \
array2[2:4], 2dArray[0;n][2:4]) \
kind(CHK_FULL/CHK_DIFF) id(0) \
level(1) [if(cond)]
// Fortran syntax
!$chk store(scalar, array(0:size-1), &
& array2(2:4), 2dArray(0:n-1)(2:4)) &
& kind(CHK_FULL/CHK_DIFF) id(0) &
& level(1) [if(cond)]
\end{lstlisting}
\vspace*{-3mm}
\caption{Snippet of code using store directive in C/C++ and Fortran}
\vspace*{-4mm}
\label{fig:store}
\end{figure}
To see a full example, see Appendix~\ref{sec:appendixA}.
\subsection{Functionalities}
\highlightForReview{Our model is intended to standardize} a common interface for the
different \highlightForReview{existing CR} solutions, \highlightForReview{and} we aim to provide the
same functionalities that \highlightForReview{they all} offer but in a generic way.
In what follows, we explain the main functionalities supported in the OpenCHK
model, and how they fit in the \highlightForReview{currently supported backend libraries.}
\subsubsection{Basic Checkpoint\slash Restart}
As a basic functionality, OpenCHK supports checkpoint and recovery of
user-defined application data using the multi-level redundancy schemes of the
backend libraries. Users can define the levels and their respective
checkpoint frequency as desired. Currently, OpenCHK provides a coordinated
\highlightForReview{CR} mechanism because the backend
libraries only support coordinated \highlightForReview{CR}. However, given its
flexibility, OpenCHK could provide non-coordinated \highlightForReview{CR} if
required.
\subsubsection{CP-dedicated Threads}
\label{subsec:cp-dedicated}
This functionality consists of spawning a thread per node that is devoted only
to \highlightForReview{CR} work. By doing so, work related to \highlightForReview{CR} can be \highlightForReview{conducted} in parallel with the application work. This feature may be
useful when running on hybrid CPU-GPU systems or systems using coprocessors
where the main part is executed on GPU/coprocessors and
the CPUs are idle. The idle CPU time can be used to perform \highlightForReview{CR} tasks, relieving the GPU/coprocessors of doing such
tasks and focusing on the actual application work. Overall, resources are better
\highlightForReview{used} in this way, and we can achieve performance gains.
This may increase the memory pressure in some scenarios, but this
effect can be mitigated using local node storage (SSD, NVMe).
\begin{figure}[htbp]
\centerline{\includegraphics[width=\linewidth]{figures/cp-dedicated-crop.pdf}}
\caption{Comparison between \highlightForReview{a} CP-dedicated threads scheme and a traditional
scheme.}
\label{fig:cp-dedicated-threads}
\end{figure}
Figure~\ref{fig:cp-dedicated-threads} shows a comparison of an application using
the traditional scheme and the same application using this CP-dedicated thread
scheme. A CP-dedicated thread performs all the tasks related to
fault tolerance, while the GPUs can devote their resources to the application.
Up to now, this feature of the model \highlightForReview{is only supported by}
FTI and VeloC.
\subsubsection{Differential Checkpointing}
\label{subsec:diff-cp}
Differential checkpointing is a method that decreases the I/O load of
consecutive checkpoints by updating only those data blocks that have changed
since the last checkpoint was \highlightForReview{created}. Differential checkpointing has also
been \highlightForReview{called} incremental checkpointing~\cite{pbkl:95:lib}.
\highlightForReview{For our purposes}, incremental checkpointing is a different technique
that consists in building a checkpoint \highlightForReview{in} pieces through several separate write
operations that are performed as soon as the data is ready. We plan to support
incremental checkpointing in the future. More information on incremental
checkpointing can be found in Section~\ref{sec:future}. For a more detailed
explanation of our terminology, and why we prefer these terms, please refer
to~\cite{BautistaKellerDcp2019}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=\linewidth]{figures/dcp_overhead_reduction.png}}
\caption{\highlightForReview{Overhead reduction} with differential
checkpoint for a certain scenario (2400 processes write
1 GB per process to the PFS). $n_d$ corresponds to the ratio of dirty data
blocks to protected data blocks.}
\label{fig:dcp_overhead}
\end{figure}
Differential checkpointing \highlightForReview{uses} a user-defined block size to evaluate
which sections of the checkpoint have changed. This granularity is important
and offers a trade-off: smaller block sizes can capture with higher precision
small changes in the dataset, which allows the backend to only update small
sections and avoid having to rewrite data that has not changed. However,
performing hash calculations on many small blocks and performing many small
writes to the storage can \highlightForReview{reduce} performance. On the other hand, large
blocks are more suited \highlightForReview{to} file system performance and lead to a reduction in
the number of hashes to be calculated, but they also lead to more unchanged data
having to be rewritten. Besides the block size, the performance of differential
checkpointing also depends on the application itself. Applications that update
entire datasets at every iteration are not well suited for differential
checkpointing. Applications in which only parts of the dataset change
might benefit more from differential \highlightForReview{checkpointing}.
For instance, it has recently been demonstrated with FTI~\cite{BautistaKellerDcp2019}
that applications updating less than 95\% of their protected data
between two consecutive checkpoints can reduce their
checkpoint overhead using differential checkpointing. In other words, a
reduction in I/O size by as little as 5\% already \highlightForReview{shows} significant benefits
through differential checkpointing. The overhead reduction
depends linearly on the update rate, and the slope of this linear
relationship is determined by the ratio between the I/O rate and the
cost of hashing. If the hashing is expensive
and the I/O rate is low, the overhead reduction will be low even for low update
rates, whereas, if the hashing is cheap and the I/O rate high, even high update
rates could obtain a benefit. It was demonstrated for \highlightForReview{the} LULESH and xPic
applications that differential checkpointing can reduce the checkpointing overhead
by 35\% and 62\%, respectively~\cite{BautistaKellerDcp2019}.
Given the model presented in~\cite{BautistaKellerDcp2019}, we can
estimate the performance benefits for a certain scenario.
Figure~\ref{fig:dcp_overhead} shows the behavior of the overhead
depending on the differential data ratio, $n_d$, corresponding to the
ratio of dirty data blocks to protected data blocks. The \highlightForReview{x-axis}
represents the value of the differential data ratio, while the \highlightForReview{y-axis}
represents the increment of time that differential checkpointing
introduces with respect to a common full checkpoint. Thus, positive
values are additional overhead introduced by differential checkpointing
with respect to a common full checkpoint, while negative values represent
a benefit relative to a common full checkpoint. The presented scenario comprises 2400
processes that write 1 GB per process to the PFS. The
time needed to complete a full checkpoint on all ranks is about 88
seconds. The threshold is at around 95\%, which is the
point where differential checkpointing and common full checkpointing
introduce the same amount of overhead. If the differential data ratio is
above this threshold, differential checkpointing introduces a penalty of
up to 5 \highlightForReview{s} compared to a common full checkpoint. In other
words, if more than 95\% of the data must be checkpointed, differential
checkpointing introduces more overhead than a common full checkpoint.
Nevertheless, if the differential data ratio is below the threshold,
there is a benefit. The overhead reduction is about 9 seconds for every 10\%
of data that we do not have to write due to differential checkpointing.
Thus, differential checkpointing becomes very quickly beneficial for
updates below 95\%. \highlightForReview{Of} all the checkpointing libraries studied in this work, the only backend supporting this functionality is
FTI.
\subsubsection{HDF5 support}
\label{subsec:hdf5}
HDF5~\cite{hdf5} allows \highlightForReview{the structuring of} datasets inside groups and the ordering of
groups hierarchically, \highlightForReview{as in} a file system. The dimensionality of the
datasets can also be stored inside the file. HDF5 provides vast functionality
\highlightForReview{to} archive scientific data inside a file \highlightForReview{in} persistent storage. In
addition to this, HDF5 is optimized for both sequential and parallel I/O.
Our model allows checkpoints to be stored using the HDF5 file format. The
protected datasets that serve for the successful restart are written in this
format so that users can use any tool that is capable of interacting with HDF5
files for scientific analyses. Thanks to this feature, resiliency and
scientific data analysis can be merged into one single I/O operation. Interacting with HDF5 files can be relatively complex and not always
intuitive. Therefore, this feature to support HDF5 files enhances the
flexibility of OpenCHK.
Figure~\ref{fig:hdf5ex} shows an example of the structure of a checkpoint file
in the HDF5 format. The file contains three protected variables: Dataset\_0 and
Dataset\_1, which are scalars, and Dataset\_2, which is an array. Figure
\ref{fig:hdf5-pseudo} provides two different codes able to obtain a checkpoint
like the one in Figure~\ref{fig:hdf5ex}. The first one uses OpenCHK while the
second \highlightForReview{uses} the native HDF5 API. Using OpenCHK, users just
need to use the store directive, indicating the data to be stored along with an
id and a level. Using the native HDF5 API, they \highlightForReview{must} create a dataspace per
variable, indicating the size and shape; create a dataset per variable
indicating the data elements, layout, and some other information necessary to
write, read, and interpret the data; write the data, and, finally, close the
datasets and dataspaces. Looking at the codes, it is possible to see how OpenCHK
reduces the complexity \highlightForReview{compared to} the native implementation.
\begin{figure}
\centering
\begin{lstlisting}[numbers=none]
GROUP "/" {
DATASET "Dataset_0" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): 1
}
}
DATASET "Dataset_1" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): 1
}
}
DATASET "Dataset_2" {
DATATYPE H5T_STD_I8LE
DATASPACE SIMPLE { ( 22279025 ) / ( 22279025 ) }
DATA {
(0): 50, 50, 32, 115, 101, 114, 105, 97, 108, 105,
...
(22279021): 0, 0, 0, 0
}
}
}
\end{lstlisting}
\vspace*{-3mm}
\caption{HDF5 checkpoint file structure using OpenCHK.
The protected data consists of two scalars and one array.}
\vspace*{-4mm}
\label{fig:hdf5ex}
\end{figure}
\begin{figure}[h!]
\centering
\lstinputlisting[frame=tb]{figures/hdf5-pseudo.c}
\vspace*{-3mm}
\caption{Code snippet to produce an HDF5 file with a structure similar to
the one shown in Figure~\ref{fig:hdf5ex}, using OpenCHK (top) and
native HDF5 routines (bottom).}
\vspace*{-4mm}
\label{fig:hdf5-pseudo}
\end{figure}
\section{Implementation Details}
\label{sec:implementation}
This section provides some insight into the implementation details of our
proposed solution. We implement the model on top of
the Mercurium C/C++ and Fortran source-to-source compiler~\cite{mercurium} and
the Transparent Checkpoint Library (TCL)~\cite{tcl} intermediate library.
Currently, we support FTI, SCR, and VeloC as backend libraries.
In the following, we detail the architecture of our implementation, the changes
made at the Mercurium compiler level, and the implementation of TCL.
\subsection{Architecture}
We have designed an implementation based on three components: a compiler
(Mercurium) that translates directives and clauses into calls to an
intermediate library, an intermediate library (TCL) which \highlightForReview{oversees}
forwarding the user-requested actions to the adequate backend library, and
several backend libraries. Figure~\ref{fig:3layer} shows our three-layer
architecture. This approach allows us to extend
the model to support new features \highlightForReview{as} the backend libraries evolve. \newline
\begin{figure*}[h]
\centering
\includegraphics[width=.7\linewidth]{figures/3layer-crop.pdf}
\caption{Three-layer architecture.}
\label{fig:3layer}
\end{figure*}
\subsection{Mercurium}
\label{subsec:mcxx}
For supporting the OpenCHK model, Mercurium \highlightForReview{must} process the OpenCHK
directives and clauses to enable the application-level \highlightForReview{CR} functionalities. One of the main duties of Mercurium is processing
these directives and clauses \highlightForReview{to} transform them into calls to TCL. In the following,
we detail the compiler transformations performed for each of the directives and
clauses.
\begin{itemize}
\item \textbf{\texttt{chk init [clauses]}}: The compiler triggers the initialization
of TCL. Clauses accepted:
\begin{itemize}
\item \textbf{\texttt{comm(comm-expr)}}: Mercurium passes comm-expr,
which is a pointer, to TCL, \highlightForReview{which} sets the MPI communicator
that the user should \highlightForReview{use} in the checkpoint context.
\end{itemize}
\item \textbf{\texttt{chk load(data-expr-list) [clauses]}}: The compiler
introduces a call to TCL when it finds this directive, informing \highlightForReview{it} of the
start of a restart. Then, Mercurium performs several calls registering
the data to be restarted. \highlightForReview{Accordingly}, the compiler must also send some
information about the data to be restored, such as the sizes and the
pointers, for overwriting the current data with the recovered \highlightForReview{data}. The
compiler extracts all this information from the data specified and its
own knowledge of the program symbols. When all the data is registered,
Mercurium calls a TCL method that performs the restart. This is
part of the deserialization process that \highlightForReview{would otherwise} be done by the user. Additionally, this directive implies a transparent way of
checking whether a restart exists, \highlightForReview{which would also be done by
a user modifying the program flow in other approaches.}
Clauses accepted:
\begin{itemize}
\item \textbf{\texttt{if(bool-expr)}}: The calls to TCL are only
effective when the condition expressed in this clause is true. This
means that none of the calls are done if the condition is not satisfied.
\end{itemize}
\item \textbf{\texttt{chk store(data-expr-list) [clauses]}}:
The compiler performs steps very similar to those
of the load directive: it takes care of part
of the data serialization, \highlightForReview{equivalent} to the deserialization \highlightForReview{conducted} for the load directive. The only difference is that the action being
performed is a checkpoint instead of a restart, \highlightForReview{meaning} some additional
information must be passed. In the first call, the one that notifies a
checkpoint is starting, Mercurium adds the kind, id, and level of the
checkpoint. This information is extracted from the following clauses:
\begin{itemize}
\item \textbf{\texttt{if(bool-expr)}}: The calls to TCL are only
effective when the condition expressed in this clause is true.
Thus, no calls are \highlightForReview{executed} if the condition is not satisfied.
\item \textbf{\texttt{id(integer-expr)}}: The checkpoint that is being
performed has the identifier set in this clause. It is mandatory to
specify an identifier.
\item \textbf{\texttt{level(integer-expr)}}: The checkpoint that is being
performed will be written at the specified level. It is mandatory to
specify a level.
\item \textbf{\texttt{kind(kind-expr)}}: Chooses between a full and a
differential checkpoint, which are the \highlightForReview{currently
supported options}. The default value is full.
\end{itemize}
\item \textbf{\texttt{chk shutdown}}: The compiler triggers the finalization
of TCL.
\end{itemize}
The order specified in the
load/store clauses \highlightForReview{is critical}. The compiler forwards the data to TCL in the very same order
set by the user when writing the load/store clauses. Thus, if the order of the
data in loads and stores does not match, data may be recovered into the wrong
variables \highlightForReview{upon} restart.
Also, it is important to mention that the three back-end
libraries are capable of deciding the checkpoint level automatically based
on the configuration file. However, our current implementation in OpenCHK does
require the user to supply the checkpoint level. This is a small limitation that will
be lifted in future versions of OpenCHK.
A further duty of Mercurium, which is crucial for improving code
programmability, is extracting the information that must be passed to TCL from the
annotations made by the user. In most cases, backend libraries need the base
address and the size of data to perform \highlightForReview{CR}. For non-array
expressions, this is just the size of the type of the data. However, for more
complex data structures, \highlightForReview{like} array expressions, we may need additional
information, such as the accessed bounds of each dimension, the size of each
dimension, and the base element type of the data structure.
All the aforementioned information required by the backend libraries is only
retrievable by a tool with a full understanding of the supported programming
languages---C, C++, and Fortran---such as the Mercurium compiler. \highlightForReview{Additionally},
extracting it by hand is an error-prone task. Thus, automating \highlightForReview{it}
minimizes the possibility of error while reducing debugging time. Apart from
that, it prevents developers from writing boilerplate code.
A further functionality added by Mercurium is \textit{self-iterative data
expressions}. Figure~\ref{fig:multideps} shows an example of this. This is a
kind of \texttt{for} loop inside the load/store clauses, \highlightForReview{which} allows iterating
over data structures instead of writing the data one by one, simplifying users'
work. Self-iterative data expressions are useful in
scenarios where \highlightForReview{many} elements of a data structure must be
checkpointed/loaded and users must write them \highlightForReview{manually}. In \highlightForReview{these} cases, annotating
the data that must be checkpointed/restored becomes a tedious task. \highlightForReview{It also}
becomes error-prone, because writing \highlightForReview{the same code several times}, changing only a
few characters, may introduce errors that are difficult to find. Self-iterative
data expressions enable users to perform this work in a much easier way.
\begin{figure}
\centering
\begin{lstlisting}
// Self iterative data expression
#pragma chk store({data[i], i=0;4})
// Equivalent
#pragma chk store(data[0], data[1], data[2], data[3])
\end{lstlisting}
\vspace*{-3mm}
\caption{Snippet of code using self-iterative data expressions.}
\vspace*{-4mm}
\label{fig:multideps}
\end{figure}
To adequately point out the importance of the compiler in the OpenCHK model, we
have performed an analysis of the code complexity before and after the compiler
transformation. For that purpose, we used the Lizard~\cite{lizard}
tool to compute the cyclomatic complexity (CC) metric and
SLOCCount~\cite{sloccount} to calculate the development effort estimate (DEE).
Table~\ref{tab:complexity} shows the CC for a 2D heat simulation code
performing \highlightForReview{CR} using OpenCHK before (BT) and after (AT)
compiler transformation. Moreover, it also presents the CC for the same code
when using native FTI, SCR, and VeloC to perform \highlightForReview{CR}. As
can be seen, OpenCHK remains the simplest before the compiler transformation
in both metrics. After the compiler transformation, its CC grows \highlightForReview{by} 1 point while
its DEE more than \highlightForReview{doubles}, making the transformed code the most complex version in terms of DEE.
The CC is higher for the native libraries because this metric is affected by the
number of different paths that a program can take. For instance, each additional
\highlightForReview{\texttt{if}} increases the CC. The DEE, in contrast, is affected by the size of the code.
In that case, it is important to highlight that Mercurium generates very verbose
code during the transformation, so the code size \highlightForReview{quickly becomes large}, affecting
the DEE.
\begin{table}[]
\centering
\caption{Cyclomatic complexity of a 2D heat simulation using OpenCHK (before
and after compiler transformation), FTI, SCR, and VeloC.}
\label{tab:complexity}
\begin{tabular}{@{}llllll@{}}
\toprule
& \textbf{BT} & \textbf{AT} & \textbf{FTI} & \textbf{SCR} & \textbf{VeloC} \\ \midrule
\textbf{CC} & 10.0 & 11.0 & 15.0 & 36.0 & 13.0 \\
\textbf{DEE (Person-Months)} & 0.34 & 0.76 & 0.36 & 0.51 & 0.35 \\ \bottomrule
\end{tabular}
\vspace*{-4mm}
\end{table}
\subsection{TCL}
\highlightForReview{To} maximize the portability of our approach, TCL \highlightForReview{must} process the
information passed by Mercurium and forward it to the adequate backend library.
This way, we enable users to write code agnostic from the backend library while
allowing them to use any of the supported backend libraries. TCL is responsible
for adequately formatting the information for each backend library and calling
the appropriate methods to perform the user-requested actions,
depending on the backend library chosen by the user.
\highlightForReview{Additionally}, this library, in collaboration with the Mercurium compiler,
serializes and \highlightForReview{deserializes} the data. The serialization and deserialization
process is tedious and error-prone for users. Using our mechanism, users
can perform it with little or no effort.
\highlightForReview{Additionally}, TCL prevents users from modifying the natural program flow to check
whether a restart has to be done. The library does it transparently:
if a checkpoint is available and a restart can be done, it
recovers the data. Consequently, codes are cleaner and more readable.
\highlightForReview{The mechanism we propose is easily extensible}. Our first implementation of TCL provided support for
only FTI and SCR. Now VeloC has been added as a third backend library to TCL, and
the addition of \highlightForReview{others} would be straightforward.
\section{Evaluation}
\label{sec:evaluation}
\highlightForReview{In this section we compare} the performance of our approach
with natively using the backend libraries that we use in our model.
The structure of this section is as follows: \highlightForReview{first}, we describe the
methodology \highlightForReview{used}; then, we detail the environment in which the experiments
were conducted, as well as the benchmarks and applications used. Finally, we
provide the evaluation and \highlightForReview{discuss} the results.
\subsection{Methodology}
\label{subsec:methodology}
\highlightForReview{First, the objectives of our evaluation are}
(1) demonstrating that our approach \highlightForReview{does} not add significant overhead compared to
using any of the supported backend libraries \highlightForReview{natively}, and (2) showing the
improvement \highlightForReview{that our mechanism provides in terms of code productivity.}
For (1), we \highlightForReview{designed} an experiment in which we launch
a first run of an application/benchmark, with an injected fault. Then, we
restart the application/benchmark from the last checkpoint until
successful completion. The whole process, from the first
run until successful completion, including the recovery, is
timed. We take measurements \highlightForReview{both} using OpenCHK with a given backend library and using the same backend library \highlightForReview{natively}. Then, the OpenCHK time is divided by
the native library time. \highlightForReview{To} demonstrate that no significant overhead is
introduced, the resulting quotient should be 1 or close to 1.
Regarding (2), as there is no standard metric for measuring \highlightForReview{programmability}, we
have decided to consider the number of \highlightForReview{source} lines of code (SLOC) required to express
the \highlightForReview{CR} functionality. Thus, we compare the number of code
lines required using native APIs with the number of code lines required with
OpenCHK.
We will use the following nomenclature for our experiments:
\begin{itemize}
\item \texttt{FTI/SCR/VeloC}. This version is an implementation performing
application-level \highlightForReview{CR} directly using the APIs
provided by FTI, SCR or VeloC.
\item \texttt{OpenCHK}. In this version, the application-level \highlightForReview{CR is conducted} by the mechanism proposed in this work.
\end{itemize}
\highlightForReview{We obtained the results of all our experiments}
by averaging the execution times of 5 different runs per version.
The executions of the evaluation \highlightForReview{were} run with 50 MPI processes, whenever it was
possible. Nevertheless, there are some
applications/benchmarks that \highlightForReview{constrain} the number of processes to be used. For
each of those applications, we specify the number of processes used. We
consider 50 MPI processes an adequate scale for our experiments. Given that the
possible sources of overhead in our approach are constant rather than dependent
on the number of nodes, nothing suggests the possibility of scalability problems
in larger experiments.
Some of the benchmarks/applications contain intra-node parallelism. In those,
the number of threads per process is 48, whereas, for the rest, \highlightForReview{it} is 1.
The runs performed for our experiments took about 10 minutes. \highlightForReview{This} means that the
whole process, from the first run until successful completion, including the
restart, took about 10 minutes. Regarding the checkpoint frequency, we forced
one checkpoint per minute, resulting in a total of ten checkpoints per run. The
frequency is expressed in terms of iterations, so that we checkpoint data every
10\% of the iterations of the program. This is a high checkpoint frequency, \highlightForReview{which was} selected on purpose to stress the checkpointing mechanisms and ease the
performance comparison between the different evaluated approaches. Coarser
checkpoint frequencies should result in even lower overheads.
\highlightForReview{Regarding} the faults, all of \highlightForReview{them} were deterministically injected
at 90\% of the application progress. The faults introduced are exceptions that
cause process abortion, and the degree of progress \highlightForReview{was} arbitrarily chosen.
The reason for an evaluation in the presence of faults is the possibility of
measuring not only the overhead introduced in checkpointing but also in the
restart process.
\subsection{Environment and applications}
In this subsection, we detail the environment \highlightForReview{where} the experiments \highlightForReview{were} run, as well
as the applications and benchmarks that we used to evaluate our approach.
The experiments were carried out on \highlightForReview{a} machine with the configuration given in
Table~\ref{tab:mn}. More details about the hardware can be found
in~\cite{mn_arch}.
\begin{table}[]
\centering
\caption{Machine architecture}
\label{tab:mn}
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Component} & \textbf{Details} \\ \midrule
Nodes & 3456 \\
CPU & 2x Intel Xeon Platinum 8160 2.1 GHz \\
Network & 100 Gbit Intel Omni-Path Full-Fat Tree \\
& \& 10 Gbit Ethernet \\
Memory & 3240x 96 GB/node \& 216x 384 GB/node (Total: 384.75 TB) \\
Local storage & 240 GB Intel s3520 SSD \\
File system & $\approx$14 PB GPFS disk storage \\
OS & Linux-SuSe Enterprise Server 12 SP2 \\ \bottomrule
\end{tabular}
\vspace*{-1mm}
\end{table}
The software used for our experiments, along with their versions, can be seen
in Table~\ref{tab:software}.
\begin{table}[]
\centering
\caption{Software and versions used to perform the experiments}
\label{tab:software}
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Software} & \textbf{Version} \\ \midrule
Transparent Checkpoint Library & 1.0 \\
Mercurium source-to-source compiler & 2.3.0 \\
gcc and gfortran & 7.2.0 \\
icc and ifort & 17.0.4 \\
Intel MPI & 2017.3.196 \\
SCR & 1.2.2 \\
FTI & d54a9e0\footnotemark \\ \bottomrule
\end{tabular}
\vspace*{-4mm}
\end{table}
\footnotetext{Hash of the commit used in the experiments.}
In the following, we provide a brief explanation of the applications and benchmarks
used in the evaluation. The size of the applications \highlightForReview{ranges} from $\approx$ 500
to $\approx$ 15000 lines of code. Note that there are 7 applications using FTI,
5 applications using SCR, and 2 applications using VeloC. This is because we did
not have the reference \highlightForReview{versions} (native FTI/SCR/VeloC) of all the applications
to compare against.
\begin{description}
\item \textbf{BT-MZ}~\cite{bt-mz}: BT-MZ, extracted from the NAS Parallel
Benchmarks, is a pseudo application that solves problems derived from CFD
\highlightForReview{using} a block tri-diagonal solver. This implementation contains
OpenMP+MPI.
\item \textbf{Duct}~\cite{duct}: This pure MPI application, from \highlightForReview{the} CFD domain,
performs a large eddy simulation of turbulent flow in a square \highlightForReview{duct}.
\item \textbf{GERShWIN}~\cite{gershwin}: The GERShWIN application \highlightForReview{was}
developed by INRIA under the umbrella of the DEEP-ER project. It studies
human exposure to electromagnetic fields. To do so, it solves a system of
Maxwell equations. The implementation, which contains OpenMP+MPI, presents
some restrictions regarding the number of processes to run. Thus, it must
be run with 48 nodes rather than 50.
\item \textbf{Heat}: This pure MPI benchmark performs a \highlightForReview{2D heat} transfer
simulation.
\item \textbf{LULESH2.0}~\cite{LULESH2:changes}: \highlightForReview{This is a} C++ OpenMP+MPI sample
application from Lawrence Livermore National Laboratory that models the
propagation of a Sedov blast wave. The problem is formulated using a
\highlightForReview{\mbox{three-dimensional}} unstructured mesh.
\item \textbf{N-Body}: This benchmark, which simulates a dynamical system of
particles, uses OpenMP+MPI. Its implementation \highlightForReview{constrains} the number of
processes to run, so only 32 nodes \highlightForReview{were} used.
\item \textbf{SPECFEM3D}: The SPECFEM3D application simulates seismic wave
propagation using a Galerkin spectral element method. Its implementation
relies on OpenMP+MPI, and presents some restrictions that force us to use
only 32 nodes.
\item \textbf{Stream}~\cite{stream}: Extracted from the HPC Challenge
Benchmarks, the Stream benchmark measures the sustainable bandwidth and the
corresponding computation rate for \highlightForReview{a} simple vector kernel. It is implemented
using OpenMP+MPI.
\item \textbf{TurboRVB}~\cite{turborvb}: This pure MPI application \highlightForReview{was} also
developed under the umbrella of the DEEP-ER project, at SISSA. The
goal of this application is to understand high-temperature superconductivity
\highlightForReview{through} Quantum Monte Carlo simulations.
\item \textbf{xPic}: \highlightForReview{This is a} C++ OpenMP+MPI HPC application derived from
iPic3D~\cite{MARKIDIS20101509}. It is designed for large-scale
production runs. xPic simulates space plasma using a \highlightForReview{three-dimensional}
parallel code.
\end{description}
\subsection{Evaluation and discussion}
\label{subsec:discussion}
As stated previously, our evaluation covers two different aspects. We want to demonstrate that our model introduces no significant
overhead compared to using the native backend \highlightForReview{libraries} directly\highlightForReview{, and we want to} evaluate the programmability of our model.
Regarding the first aspect of the evaluation, Figure~\ref{fig:overhead} shows
three different charts, one for each backend library. Each of the charts shows
the different applications and benchmarks on the \highlightForReview{x-axis}, while the
y-axis shows the overhead calculated as \highlightForReview{previously described}.
The leftmost chart, which corresponds to FTI, shows
that the differences between OpenCHK and native FTI are always $<$ 2\%. The
worst case, TurboRVB, has a difference of 1.63\%, while the rest are $<$ 1\%.
Moreover, the differences are always within the standard deviation of the runs,
which \highlightForReview{range from} $\approx$0.15\% to $\approx$2.6\%\highlightForReview{, except for TurboRVB,}
which is $\approx$4.6\%, so we conclude that negligible overhead is
introduced by OpenCHK \highlightForReview{compared to} native FTI.
The \highlightForReview{center chart}, corresponding to SCR, shows differences \highlightForReview{that are} always
$<$ 0.5\%, except for the GERShWIN application, at 1.48\%. However,
this value fits within the standard deviation (1.49\%), while the rest \highlightForReview{also remain} within their respective standard deviation values. Therefore, the overhead introduced by OpenCHK is negligible \highlightForReview{compared to} native SCR.
Finally, the right-most chart, which presents results for VeloC, exhibits
differences of $<$ 0.5\%. Furthermore, these values are within their
corresponding standard \highlightForReview{deviations}. Consequently, no
significant overhead is introduced by OpenCHK \highlightForReview{compared to} native VeloC.
Therefore, we can conclude that no significant overhead is introduced at all by the OpenCHK model when compared \highlightForReview{to} its native counterparts.
These results \highlightForReview{align with} the results published in our previous
work~\cite{maronas2017checkpoint}, \highlightForReview{which} we are extending \highlightForReview{in this work}. The evaluation of
our previous work was done in MareNostrum 3, the available machine at the
\highlightForReview{time} of that work. Now, MareNostrum has been upgraded to its fourth version,
and we have relaunched the whole application set, obtaining similar
results. It is important to highlight the differences between the total number
of cores: 64 (nodes) * 16 (CPUs per node) in MareNostrum 3, and 50 (nodes) * 48
(CPUs per node) in MareNostrum 4; \highlightForReview{the new version has more than 2x the number of processors}. However, the results
remain unchanged, demonstrating \highlightForReview{the} good scalability \highlightForReview{of} our approach.
\begin{figure*}
\begin{tikzpicture}[baseline]
\begin{axis}[
height=0.4\textwidth,
title={OpenCHK/FTI},
ybar,
bar width=0.5cm,
ylabel={Overhead},
ymin=0.98, ymax=1.02,
x=1.0cm,
symbolic x coords={DUCT,HEAT,LULESH,NBODY,SPECFEM3D,TURBORVB,XPIC},
xtick=data,
xticklabel style={rotate=90,anchor=east,align=center},
nodes near coords,
nodes near coords style={/pgf/number format/.cd,precision=4},
nodes near coords align={vertical},
enlarge x limits={abs=0.5cm},
]
\addplot
coordinates {(DUCT, 0.9977) (HEAT, 0.9924) (LULESH, 1.0082) (NBODY, 0.9987) (SPECFEM3D, 0.9968) (TURBORVB, 1.0163) (XPIC, 0.9937)};
\end{axis}
\end{tikzpicture}
~
\begin{tikzpicture}[baseline]
\begin{axis}[
height=0.4\textwidth,
title={OpenCHK/SCR},
ybar,
bar width=0.5cm,
ymin=0.98, ymax=1.02,
yticklabels={},
x=1.0cm,
symbolic x coords={BT-MZ,HEAT,GERSHWIN,NBODY,STREAM},
xtick=data,
xticklabel style={rotate=90,anchor=east,align=center},
nodes near coords,
nodes near coords style={/pgf/number format/.cd,precision=4},
nodes near coords align={vertical},
enlarge x limits={abs=0.5cm},
]
\addplot
coordinates {(BT-MZ, 1.0036) (GERSHWIN, 0.9852) (HEAT, 1.0021) (NBODY, 1.0017) (STREAM, 1.0021)};
\end{axis}
\end{tikzpicture}
~
\begin{tikzpicture}[baseline]
\begin{axis}[
height=0.4\textwidth,
title={OpenCHK/VeloC},
ybar,
bar width=0.5cm,
ymin=0.98, ymax=1.02,
yticklabels={},
x=1.0cm,
symbolic x coords={HEAT,NBODY},
xtick=data,
xticklabel style={rotate=90,anchor=east,align=center},
nodes near coords,
nodes near coords style={/pgf/number format/.cd,precision=4},
nodes near coords align={vertical},
enlarge x limits={abs=0.5cm},
]
\addplot
coordinates {(HEAT, 1.0034) (NBODY, 0.9996)};
\end{axis}
\end{tikzpicture}
\caption{Overhead introduced by \texttt{OpenCHK} \highlightForReview{compared to} using native FTI/SCR/VeloC}
\label{fig:overhead}
\end{figure*}
\highlightForReview{Now that we have demonstrated} that our approach introduces negligible overhead, we wish to focus
on the most important point of our approach: the programmability. We base our analysis on the SLOC metric, which stands for
\textit{source lines of code}. We measure it using SLOCCount~\cite{sloccount}.
\highlightForReview{To} make this measurement, we \highlightForReview{selected} the lines of code needed to implement each of the
different versions; the results are shown in
Tables~\ref{tab:codelines_fti},~\ref{tab:codelines_scr},
and~\ref{tab:codelines_veloc} for FTI, SCR, and VeloC, respectively. Here it is
possible to see the lines of code required to write application-level
\highlightForReview{CR} in FTI, SCR, VeloC, and OpenCHK. The code we evaluated provides the same functionality \highlightForReview{between OpenCHK and the native versions}, but may not
be 100\% equivalent. This fact can be \highlightForReview{seen} by checking the full example of
code provided in the appendices. Moreover, native implementations include error
handling, while OpenCHK manages errors inside the TCL library.
As can be observed in Table~\ref{tab:codelines_fti}, our approach \highlightForReview{can}
drastically reduce the number of lines required to perform application-level
\highlightForReview{CR}. On average, the number of lines
required by OpenCHK \highlightForReview{is} around 30\% of the lines required by FTI to
provide the same functionality.
\begin{table}[]
\centering
\caption{Number of lines of code required to perform application-level
CR using FTI and OpenCHK.}
\label{tab:codelines_fti}
\begin{tabular}{@{}llll@{}}
\toprule
& \textbf{FTI} & \textbf{OpenCHK} & \textbf{OpenCHK/FTI} \\ \midrule
\textbf{DUCT} & 31 & 5 & 0.1613 \\
\textbf{HEAT} & 15 & 5 & 0.3333 \\
\textbf{LULESH} & 12 & 5 & 0.4167 \\
\textbf{NBODY} & 25 & 5 & 0.2 \\
\textbf{SPECFEM3D} & 28 & 6 & 0.2143 \\
\textbf{TURBORVB} & 80 & 6 & 0.075 \\
\textbf{XPIC} & 8 & 5 & 0.625 \\ \midrule
\textbf{AVERAGE} & & & 0.2894 \\ \bottomrule
\end{tabular}
\end{table}
The comparison with SCR shows even better results in terms of programmability.
Table~\ref{tab:codelines_scr} shows that OpenCHK \highlightForReview{can} express
\highlightForReview{CR} in \highlightForReview{as little as} 3\% of the lines used by SCR, allowing a
\highlightForReview{CR} mechanism to be expressed in five lines while SCR needs 165 lines
for the same purpose. The code lines needed by OpenCHK to provide the same
functionality \highlightForReview{as} SCR represents, on average, only about 6\% of those
required by SCR.
Note that these results are for new applications that do not contain output I/O
or checkpointing. For those applications that already contain code to perform
output I/O or checkpointing, our results are inflated, \highlightForReview{because} the I/O code already
exists and does not need to be added. For example, if we
assume the output code already exists, SCR involves \highlightForReview{only} 40 additional lines of code
\highlightForReview{for} the NBODY benchmark. In
general, for legacy codes and applications that already have an I/O method
implemented, using SCR and/or VeloC can be more beneficial than using
a new interface because it leverages existing code; however, for freshly developed
applications, or to use features of different backend checkpoint libraries, OpenCHK
provides an easy way to access those libraries through a simple interface.
\begin{table}[]
\centering
\caption{Number of lines of code required to perform application-level
CR using SCR and OpenCHK.}
\label{tab:codelines_scr}
\begin{tabular}{@{}llll@{}}
\toprule
& \textbf{SCR} & \textbf{OpenCHK} & \textbf{OpenCHK/SCR} \\ \midrule
\textbf{BT-MZ} & 118 & 12 & 0.1017 \\
\textbf{GERSHWIN} & 200 & 8 & 0.04 \\
\textbf{HEAT} & 78 & 5 & 0.0641 \\
\textbf{NBODY} & 109 & 5 & 0.0459 \\
\textbf{STREAM} & 165 & 5 & 0.0303 \\ \midrule
\textbf{AVERAGE} & & & 0.0564 \\ \bottomrule
\end{tabular}
\vspace*{-4mm}
\end{table}
As we are using VeloC in memory-based mode, the comparison results are very
similar to FTI. This is because VeloC memory-based mode is much \highlightForReview{like} FTI.
Therefore, similarly to FTI, we only need around 36\% \highlightForReview{of} the lines required by
VeloC to express \highlightForReview{CR}. If we were using VeloC in
file-based mode, which is very similar to SCR, the comparison would be \highlightForReview{more like} that with SCR.
\begin{table}[]
\centering
\caption{Number of lines of code required to perform application-level CR using VeloC and OpenCHK.}
\label{tab:codelines_veloc}
\begin{tabular}{@{}llll@{}}
\toprule
& \textbf{VeloC} & \textbf{OpenCHK} & \textbf{OpenCHK/VeloC} \\ \midrule
\textbf{HEAT} & 10 & 5 & 0.5 \\
\textbf{NBODY} & 23 & 5 & 0.2174 \\
\textbf{AVERAGE} & & & 0.3587 \\ \bottomrule
\end{tabular}
\vspace*{-4mm}
\end{table}
In general, OpenCHK usually needs only five lines to express \highlightForReview{the entire
CR} code: two lines for initialization (one for creating
the MPI communicator to be passed to TCL, and one for the init directive),
one line for the load (unless there are \highlightForReview{many} variables), one line for
the store (again, unless there are \highlightForReview{many} variables), and, finally, one line
for \highlightForReview{the} shutdown directive. An additional important \highlightForReview{feature} is that OpenCHK prevents
users from modifying the natural program flow to check whether an execution is
a restart or not. Overall, in light of the results, we can conclude that
programmability is enhanced.
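The five-line pattern described above can be made concrete with a small sketch. This is an illustrative example, not code from the paper: the exact clause spellings (\texttt{comm}, \texttt{id}, \texttt{level}) and the array-section notation are assumptions based on the directive and clause names mentioned in the text, and a plain C compiler simply ignores the unknown pragmas, so without an OpenCHK-aware compiler this is just the bare computation.

```c
/* Advance a toy iterative solver for `steps` iterations, protected by the
   five OpenCHK lines discussed in the text. Clause spellings (comm, id,
   level) and the array-section syntax are assumed for illustration only. */
int run_solver(double *data, int n, int steps) {
    int t = 0;

    /* (1) init: a real MPI code would first create the communicator
       to be passed to TCL (the other "line" of the pattern). */
    #pragma chk init comm(MPI_COMM_WORLD)

    for (int i = 0; i < n; i++) data[i] = 0.0;

    /* (2) load: restores t and data if a checkpoint exists; a no-op on
       a fresh run, so no explicit "is this a restart?" branch is needed. */
    #pragma chk load(t, data[0:n])

    while (t < steps) {
        for (int i = 0; i < n; i++) data[i] += 1.0;
        t++;
        /* (3) store: protect the state, tagged with an id and a level. */
        #pragma chk store(t, data[0:n]) id(t) level(1)
    }

    /* (4) shutdown: release the chosen backend (FTI, SCR, or VeloC). */
    #pragma chk shutdown
    return t;
}
```

Because the backend is selected at runtime, the same source would, in principle, run unchanged on systems providing FTI, SCR, or VeloC.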
Finally, portability is also improved with our solution.
Users can use their OpenCHK applications with \highlightForReview{any} of the three backends
supported. Consequently, moving from a system with one backend (e.g., FTI) to a
system with a different backend (e.g., SCR or VeloC) requires no changes in the
source code. Otherwise, if native APIs are used, the code
related to \highlightForReview{CR must} be completely rewritten.
\section{Conclusions}
\label{sec:conclusion}
Throughout this paper, \highlightForReview{we have detailed} the extension of a directive-based approach
designed for providing application-level \highlightForReview{CR}: the OpenCHK
programming model. The model includes the new \texttt{\#pragma chk} directive
family, composed of several directives and clauses. \highlightForReview{They allow users
to} specify data to be checkpointed persistently, along
with other details, such as checkpoint frequency, checkpoint identifier, or
checkpoint level. Additionally, the model enables users to recover data from an
existent checkpoint, in the case of a restart after failure, continuing the
execution from the recovered state rather than from \highlightForReview{the beginning.}
The directive-based approach presented in this paper eases the use of
application-level \highlightForReview{CR}. The solution proposed (1) minimizes
the modifications required in the source code to perform \highlightForReview{CR}, (2) transfers the responsibility of serializing and
\highlightForReview{deserializing} data required by traditional approaches from
the user side to the model side, and (3) prevents users from modifying the
natural program flow to check whether data can be recovered from a checkpoint.
Our solution incorporates state-of-the-art \highlightForReview{CR} libraries
(FTI, SCR, and VeloC) \highlightForReview{to} maximize resiliency and performance,
benefiting from their advanced redundancy schemes and their highly optimized
I/O operations. \highlightForReview{Additionally}, the OpenCHK model enables users to \highlightForReview{employ} any of
the backend libraries without modifying a single line of source code, thereby
enhancing the portability of applications. The OpenCHK model can be combined
with other programming models, such as OpenMP, MPI, and other similar programming
models like OmpSs, as well as combinations of them.
Furthermore, the OpenCHK model not only supports the basic
functionality but also advanced functionalities: CP-dedicated threads to reduce
checkpointing overhead in some architectures, differential checkpoints to store
only the blocks of data that have been modified, saving time and space, and
support for HDF5 to allow merging \highlightForReview{CR} with data analytics.
Moreover, given its nature, OpenCHK is easily extensible so that new features
implemented in any of the backends can be added to the model. Our contribution
consists not only of the model, but also of an implementation. Our
implementation provides robust features to help users increase their
productivity, \highlightForReview{such as} self-iterative data expressions, \highlightForReview{which} are useful when dealing with
arrays to avoid tedious and error-prone tasks.
Our evaluation, consisting \highlightForReview{of} several benchmarks and production-level scientific
applications, showed (1) no significant overhead compared to using the native
APIs of state-of-the-art solutions such as FTI, SCR, and VeloC, and (2) a significant
reduction of the required number of source code lines to perform
application-level \highlightForReview{CR}. On average, OpenCHK needs only 29\%,
6\%, and 36\% of the code lines required by FTI, SCR, and VeloC,
respectively, to perform the same functionalities. Finally, we enhanced
portability, enabling users to choose among FTI, SCR, or VeloC at runtime,
with no changes in the source code.
\section{Future work}
\label{sec:future}
As future work, we would like to integrate incremental checkpoints \highlightForReview{into OpenCHK}. This
is a technique where \highlightForReview{a} checkpoint is not fully written \highlightForReview{at} one time, but
incrementally built in several separated write operations. An example of this
is an N-body simulation dealing with particle positions, velocities, and forces.
Each one of these is calculated at \highlightForReview{a} different time, starting \highlightForReview{with} the forces, then
the velocities, and finally the positions. When the forces have been updated,
they can be written in the checkpoint, possibly \highlightForReview{while the}
velocities are being calculated. Then, when the velocities have been updated,
they can be written in the checkpoint, and finally the same is done with the positions.
Overall, all the variables are checkpointed, but the write operations
are separated in time, to decrease storage congestion and maximize
parallelization.
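A purely hypothetical sketch of how such incremental stores might be expressed is shown below. The \texttt{part} and \texttt{commit} clauses are invented here for illustration only and are not part of the current OpenCHK model; a plain C compiler ignores the unknown pragmas, so the snippet merely performs the toy physics.

```c
/* One step of a toy N-body loop with the checkpoint built incrementally:
   each array is written as soon as it has been updated, rather than all
   three in a single store at the end of the step. The clauses `part()`
   and `commit` are hypothetical and not defined by OpenCHK today. */
void nbody_step(double *forces, double *vel, double *pos, int n, double dt) {
    for (int i = 0; i < n; i++) forces[i] = -pos[i];   /* toy force law */
    /* forces are final for this step: write them to the checkpoint,
       possibly overlapping with the velocity update below. */
    #pragma chk store(forces[0:n]) part(1)

    for (int i = 0; i < n; i++) vel[i] += dt * forces[i];
    #pragma chk store(vel[0:n]) part(2)

    for (int i = 0; i < n; i++) pos[i] += dt * vel[i];
    /* last piece of the state: commit the whole incremental checkpoint */
    #pragma chk store(pos[0:n]) part(3) commit
}
```

Spreading the three writes over the step would decrease storage congestion, as described above, while the commit point keeps the checkpoint consistent.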
Another idea is decoupling the actual operation (load/store) and the data
registration. Currently, the model does \highlightForReview{these together because} the data is
registered in the load/store clauses. However, it may become a problem when
dealing with C++ classes due to the visibility of some members in different
contexts. Therefore, allowing registration and actual load/store separately
would help in some specific cases.
Finally, we plan to add GPU checkpointing to the model \highlightForReview{to} accelerate
fault-tolerance tasks and \highlightForReview{better exploit} the resources of heterogeneous
systems.
\section*{Acknowledgment}
This work is supported by the Spanish Ministerio de Ciencia, Innovaci\'{o}n y
Universidades (TIN2015-65316-P) and by the Generalitat de Catalunya
(2014-SGR-1051). This project received funding from the European
Union's Seventh Framework Programme (FP7/2007-2013) and the Horizon 2020
(H2020) funding framework under grant agreement no. H2020-FETHPC-754304
(DEEP-EST). The present publication reflects only the authors' views. The
European Commission is not liable for any use that might be made of the
information contained herein. We would also like to
thank the reviewers for their feedback and contributions to this work.
\bibliographystyle{IEEEtran}
\section{Introduction}
Since Diakonov {\it et al.} predicted the mass and width of the
pentaquark baryon $\Theta^+$~\cite{Diakonov:1997mm}, there has been a
great deal of work to clarify its existence and properties. Although
various experiments have reported the existence of $\Theta^+$ after
the first observation by the LEPS collaboration~\cite{Nakano:2003qx},
the situation is not yet settled, primarily due to the relatively
low statistics of the low-energy experiments. Furthermore, in almost
all high-energy experiments, the $\Theta^+$ has not been seen (see,
for example, recent reviews~\cite{Hicks:2005gp,Schumacher:2005wu,
Hicks:2005jf} for the compilation of the experimental results).
Recently, the CLAS collaboration has reported null results for finding
the $\Theta^+$ in the reactions
$\gamma p \to \bar K^0 K^+ n$~\cite{Battaglieri:2005er},
$\gamma d \to p K^- K^+ n$~\cite{McKinnon:2006zv}
and
$\gamma d \to \Lambda n K^+$~\cite{Niccolai:2006td}.
The upper limits of the cross sections for producing the
$\Theta^+$ were estimated to be, for instance,
$\sigma(\gamma p \to \bar K^0 \Theta^+) \sim 0.8$ nb and
$\sigma(\gamma n \to K^- \Theta^+) \sim 3$ nb.
Though these experiments had high statistics, their results do not
immediately imply the absence of the $\Theta^+$,
because the updated positive evidence also seems rather convincing.
The LEPS collaboration observes a peak for the $\Theta^+$ in the reaction
$\gamma d \to \Lambda(1520) n K^+$~\cite{nakano_jlab} when
the $\Lambda(1520)$ is detected in the forward angle region.
DIANA reported further evidence in
the reaction $K^+ n\to K^0 p$ on a neutron bound in the Xenon
nucleus~\cite{Barmin:2006we}.
The statistical significance of the
DIANA measurement is $4.3\sim 7.3\,\sigma$.
Moreover,
the KEK-PS E522 experiment has reported a measurement of
the $\Theta^+$ via the reaction
$\pi^- p\to K^- X$~\cite{Miwa:2006if},
although the statistical significance is not large enough.
Experimentally, the two similar experiments from CLAS and LEPS
are not in contradiction, since they measure different regions;
CLAS detects final particles in the region where the scattering
angle is not small, while the LEPS observes the forward angle region,
and their measuring regions have little overlap.
Theoretically, it was suggested that
the production rate of
the $\Theta^+$ from the proton target is considerably
suppressed as compared to the case of the neutron target,
if the spin of $\Theta^+$ is 3/2~\cite{Nam:2005jz}.
Furthermore, in this case, the cross section for the neutron target,
which is larger than that for the proton, is strongly forward peaking.
These may explain the different observations of the CLAS and LEPS.
Interestingly, a similar suppression is found in the
$\Lambda(1520)$-photoproduction~\cite{Nam:2005uq}, though in this case
the suppression takes place for the neutron target.
Therefore, it should be fair to say that the existence of
the $\Theta^+$ is not yet excluded.
Motivated by the previous work~\cite{Nam:2005uq}, we continue to
investigate the $\Theta^+$-photoproduction with the vector kaon $K^*$,
based on the effective Lagrangian approach with phenomenological form
factors. Here, we consider the cases with $J^P=3/2^{\pm}$ and
$J^P=1/2^+$ for the $\Theta^+$ baryon. Scalar meson
$\kappa(800,0^+)$-exchange is also taken into account, in addition to
pseudoscalar $K$- and vector $K^*$-exchanges. We note that
$\kappa$-exchange in the $t$-channel does not appear in the $\gamma
N\to \bar{K}\Theta^+$ reaction process because the $\gamma\kappa K$
coupling is not allowed~\cite{Nam:2005jz}, whereas $\kappa$-exchange
is possible in the present reaction process owing to the existence of
the $\gamma\kappa K^*$ coupling. The role of $\kappa$ may be
interesting if it is dominated by a tetraquark component which has
been suggested to have a strong coupling to exotic
baryons~\cite{Roy:2003hk}.
One of the interesting features of the present reaction process is that
there are two polarizations in the initial and final states: the
polarizations of the incident photon and the outgoing $K^*$. By making
a proper combination of these two polarizations, one can determine
which meson exchange in the $t$-channel dominates the reaction
process.
The outline of the present work is sketched as follows: In Section 2,
we define the effective Lagrangians for the $\gamma N\to
\bar{K^*}\Theta^+(3/2^{\pm})$ reaction and calculate the invariant
amplitudes with phenomenological form factors. The numerical
results are given and discussed for the $\Theta^+(3/2^{\pm})$ and
$\Theta^+(1/2^+)$ in Section 3. Section 4 is devoted to the discussion
on reaction analysis via the photon and $K^*$ polarizations. We
summarize our results and draw
conclusions in the final Section.
\section{Formalism}
\begin{figure}[t]
\resizebox{12cm}{6cm}{\includegraphics{paper14f0nn.eps}}
\caption{Born diagrams calculated in the effective Lagrangian
approach. $P/V/S$ in the $t$-channel stand for the pseudoscalar kaon,
vector kaon and scalar $\kappa$-exchanges, respectively.}
\label{fig0}
\end{figure}
We investigate the reaction $\gamma N\to \bar{K}^*\Theta^+$ at the
tree level, i.e. in the Born approximation. The relevant Feynman
diagrams are drawn in Fig.~\ref{fig0}, where we define
the four momenta of the particles involved in the process. For
convenience, we will denote the spin $3/2$ and $1/2$ $\Theta^+$ with the
subscripts $3$ and $1$, respectively.
The effective Lagrangians pertinent to the present work are given as
follows. First, we consider the vertices of the photon-meson-meson couplings:
\begin{eqnarray}
\mathcal{L}_{\gamma
KK^{*}}&=&g_{\gamma
KK^{*}}\epsilon_{\mu\nu\sigma\rho}(\partial^{\mu}A^{\nu})
(\partial^{\sigma}K)K^{*\rho}\,+{\rm
h.c.},\\
\mathcal{L}_{\gamma
K^*K^*}&=&ie[K^{*\dagger}_{\nu}(\partial_{\mu}K^{*\nu})-
K^{*}_{\nu}(\partial_{\mu}K^{*\dagger\nu})]A^{\mu},
\end{eqnarray}
where $K$, $K^*$ and $A^{\mu}$ denote the pseudoscalar kaon,
vector kaon and photon fields, respectively. We employ the effective
Lagrangian taken from
Refs.~\cite{Liu:2003rh,Liu:2003zi,Oh:2003kw,Janssen:2001wk}. Note
that, in
order to maintain gauge invariance of the reaction amplitudes, we
introduce a vector-meson exchange model using the $\gamma K^*K^*$
vertex as shown in Eq.~(2), which was suggested by
Refs.~\cite{Clark:1970xr,Clark:1977fy}. This vertex represents a
three-vector-particle coupling, which manifests the non-Abelian
nature of the gauge fields.
The baryon electromagnetic couplings for the nucleon and the spin $3/2$
and $1/2$ $\Theta^+$ are defined as follows:
\begin{eqnarray}
\mathcal{L}_{\gamma NN}
&=&-e\bar{N}\left[\Slash{A}+\frac{\kappa_N}{4M_{N}}
\sigma_{\mu\nu}F^{\mu\nu}\right]N\,+{\rm
h.c.},\\
\mathcal{L}_{\gamma
\Theta_1\Theta_1}&=&-e\bar{\Theta}_1\left[\Slash{A}+
\frac{\kappa_{\Theta}}{4M_{\Theta}}\sigma_{\mu\nu}
F^{\mu\nu}\right]\Theta_1\,+{\rm
h.c.},\\
\mathcal{L}_{\gamma
\Theta_3\Theta_3}&=&-e\bar{\Theta}^{\mu}_3g_{\mu\nu}\left[\Slash{A}+
\frac{\kappa_{\Theta}}{4M_{\Theta}}\sigma_{\sigma\rho}
F^{\sigma\rho}\right]\Theta^{\nu}_3\,+{\rm
h.c.},
\end{eqnarray}
where $N$, $\Theta^{\mu}_3$ and $\Theta_1$ stand for the nucleon, the spin
$3/2$
Rarita-Schwinger (RS) $\Theta^+$~\cite{Read:ye} and spin $1/2$
$\Theta^+$, respectively. The same structures of the Lagrangians are
used for the nucleon and spin $1/2$ $\Theta^+$ (Eqs.~(3,4)). Following
Ref.~\cite{gourdin}, we
construct the effective Lagrangian for the electromagnetic coupling of
the spin $3/2$ $\Theta^+$ in Eq.~(5). Here, differently from
Ref.~\cite{gourdin}, since the electric
quadrupole ($E2$) and magnetic octupole ($M3$) form factors are
expected to be small compared to the charge and magnetic
dipole form factors of spin $3/2$ baryons, we consider only the
$E0$ and $M1$ electromagnetic interactions.
possible structures of the electromagnetic
couplings, it
is worth mentioning that, as indicated in
Ref.~\cite{Read:ye},
the electromagnetic coupling for the spin $3/2$ $\Theta^+$ can be
reconstructed equivalently with the terms such as
$\bar{\Theta}^{\mu}F_{\mu\nu}\Theta^{\nu}$ and others.
The $K(K^*)N\Theta$ vertices for the spin $3/2$ and 1/2 $\Theta^+$ baryons
are defined as follows~\cite{Nam:2005uq,Machleidt:1987hj}.
\begin{eqnarray}
\mathcal{L}_{KN\Theta_3}&=&\frac{g_{KN\Theta_3}}{M_{K}}\bar{\Theta}^{\mu}_3
\partial_{\mu}K{\Gamma_5}N\,+{\rm
h.c.},\\
\mathcal{L}_{KN\Theta_1}&=&ig_{KN\Theta_1}
\bar{\Theta}_1\Gamma_5\gamma_5KN+{\rm h.c.},\\
\mathcal{L}_{K^{*}N\Theta_3}&=&-\frac{ig_{K^{*}N\Theta_3}}
{M_{K^*}}\bar{\Theta}_{3,\mu}\gamma_{\nu}F^{\mu\nu}_{K^*}\Gamma_5\gamma_5N+{\rm
h.c.},\\
\mathcal{L}_{K^*N\Theta_1}&=&g^V_{K^*N\Theta_1}
\bar{\Theta}_1\gamma_{\mu}\Gamma_5K^{*\mu}N-\frac{g^T_{K^*N\Theta_1}}{2(M_{\Theta}+M_N)}\bar{\Theta}_1\Gamma_5\sigma_{\mu\nu}F^{\mu\nu}_{K^*}N+{\rm
h.c.},
\end{eqnarray}
where $\Gamma_5$ denotes ${\bf 1}_{4\times4}$ for the
positive-parity and $\gamma_5$ for the negative-parity
$\Theta^+$, respectively, for both cases of the spin $3/2$ and
spin $1/2$. $F^{\mu\nu}_{K^*}$ stands for
$\partial^{\mu}K^{*\nu}-\partial^{\nu}K^{*\mu}$. As
for the spin $1/2$ $\Theta^+$, we consider only the pseudoscalar
coupling scheme for the $KN\Theta$ vertex due to the approximate
equivalence between the pseudoscalar and pseudovector
schemes~\cite{Nam:2003uf}. On the
contrary, only pseudovector (derivative) coupling is possible for the
case of the
spin $3/2$ due to the constraint
$\gamma_{\mu}\Theta^{\mu}=0$. Concerning the $K^*N\Theta$ vertex of
Eqs.~(8,9), we
consider the Lagrangian structures which are minimally necessary
for maintaining gauge invariance when we construct the
reaction amplitudes. Note that, as for the
spin $1/2$ case, we have the vector and tensor terms in
the Lagrangian of Eq.~(9). Here, we use the value of
$g^T_{K^*N\Theta_1}=|g^V_{K^*N\Theta_1}|$ as a trial since no
experimental data are available now. However, the strength of
$g^T_{K^*N\Theta}$ can be estimated from the recent
calculations of the transition magnetic moment of $\gamma
N_8N^*_{\bar{10}}$, where $\kappa_{\gamma N_8N^*_{\bar{10}}}$ was found
to be $0\sim0.5$~\cite{Choi:2005ki,Kim:2005gz,Azimov:2005jj}. Here,
$N^*_{\bar{10}}$
is a nucleon partner of the antidecuplet pentaquark. Assuming the
vector dominance and flavor SU(3) symmetry, we expect that the ratio
$|g^T_{K^*N\Theta}/g^V_{K^*N\Theta}|$ is less than unity. Thus, our
choice of $g^T_{K^*N\Theta_1}=|g^V_{K^*N\Theta_1}|$ can be regarded as
almost an upper bound.
Finally, we introduce the photon coupling in the $K^*N\Theta$
vertex by minimal
substitution, $\partial_{\mu}\to\partial_{\mu}+i\hat{Q}A_{\mu}$ where
$\hat{Q}$ is the charge matrix acting upon the matter fields.
\begin{eqnarray}
\mathcal{L}_{\gamma K^{*}N\Theta_3}&=&\frac{eg_{K^{*}N\Theta_3}}
{M_{K^*}}\bar{\Theta}^{\mu}_3\gamma^{\nu}[A_{\mu}
K^{*}_{\nu}-A_{\nu}K^{*}_{\mu}]\Gamma_5\gamma_5N+{\rm
h.c.},\\
\mathcal{L}_{\gamma
K^{*}N\Theta_1}&=&-\frac{ieg^T_{K^*N\Theta_1}}{2(M_{\Theta}+M_N)}\bar{\Theta}_1\Gamma_5\sigma_{\mu\nu}(A^{\mu}K^{*\nu}-A^{\nu}K^{*\mu})N+{\rm
h.c.}.
\end{eqnarray}
These interaction vertices are related to the Feynman diagram of the
contact term shown in Fig.~\ref{fig0}. We note that the same
interactions of Eqs.~(10,11) are obtained from the non-Abelian terms
of the covariant field tensor
$\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu}-i[V_{\mu},V_{\nu}]$ where
$V_{\mu}$ is an SU(3) vector meson field, and using the vector dominance.
\begin{table}[t]
\begin{tabular}{|c|c|cc|cccccc|}
\hline
&$\kappa_N$&&$g_{\gamma KK^*}$&&$g_{KN\Theta_3}$&$g^V_{K^*N\Theta_3}$
&$g_{KN\Theta_1}$&$g^V_{K^*N\Theta_1}$&$g^T_{K^*N\Theta_1}$\\
\hline
$n$&$-$1.91&Neutral&0.388/GeV&$\pi(\Theta)=+1$&0.53&0.91=0.53$\sqrt{3}$&1&$\sqrt{3}$&$\sqrt{3}$\\
$p$&1.79&Charged&0.254/GeV&$\pi(\Theta)=-1$&4.22&2.0&$-$&$-$&$-$\\
\hline
\end{tabular}
\caption{Parameters of the couplings used in the numerical calculations}
\label{table1}
\end{table}
In Table~\ref{table1}, we
list the parameters (electromagnetic and strong couplings) used
in the numerical calculations. The nucleon magnetic moments $\kappa_N$ and
the $\gamma K^*K$ coupling
constants are taken from experiments~\cite{Eidelman:2004wy}. For
$g_{KN\Theta}$, we assume
$\Gamma_{\Theta\to KN}=1$ MeV and $M_{\Theta}=1540$ MeV for both
spin $3/2$ and 1/2~\cite{Eidelman:2004wy}. For $g^V_{K^*N\Theta}$, we
assume the estimation in
the quark model $g^V_{K^*N\Theta}=\sqrt{3}g_{KN\Theta}$ for the
positive parity $\Theta^+$~\cite{Close:2004tp} whereas we used the results of
Ref.~\cite{Hosaka:2004bn} for $\Theta(3/2^-)$. As for the value of the
anomalous magnetic moment
of $\Theta^+$, we
set it to be unity for both spins as a trial. We will
show later that the dependence on $\kappa_{\Theta}$ is negligible,
since the $u$-channel contributions turn out to be very small. Since we
verified that the sign of $g^V_{K^*N\Theta}$ does not much influence
the results, as shown in the previous work~\cite{Nam:2005jz}, we will
consider only the plus sign. The case of $\Theta(1/2^-)$ is not studied, since
we verified that it behaves very similarly to that of $\Theta(1/2^+)$,
the only obvious difference being a magnitude smaller
by a factor of about ten~\cite{Nam:2003uf}. We note that in
the present work, we
do not consider nucleon resonance ($N^*$) contributions. In other
words, we only
take into account the minimally possible reaction diagrams as shown in
Fig.~\ref{fig0}.
Thus,
the reaction amplitudes for spin $3/2$ ($\mathcal{M}_3$) and
$1/2$ ($\mathcal{M}_1$) can be written as follows. We have also checked
that the amplitudes calculated
from the Lagrangians satisfy the Ward-Takahashi identity with the form
factors included.
\begin{eqnarray}
i\mathcal{M}_{3,s}&=&-\frac{ieg_{K^*N\Theta}}{M_{K^*}}
\bar{u}(p_2)[(k_{2}\cdot\epsilon_{\Theta})\Slash{\epsilon}_{K^*}
-(\epsilon_{\Theta}\cdot\epsilon_{K^*})\Slash{k}_{2}]\Gamma_5
\gamma_5\frac{(\Slash{p}_1+M_N)F_c+\Slash{k}_1F_s}{q^2_s-M^2_N}\Slash{k}_1u(p_1)\nonumber\\&-&\frac{ie\kappa_Ng_{K^*N\Theta}}{2M_NM_{K^*}}
\bar{u}(p_2)[(k_{2}\cdot\epsilon_{\Theta})\Slash{\epsilon}_{K^*}
-(\epsilon_{\Theta}\cdot\epsilon_{K^*})\Slash{k}_{2}]\Gamma_5
\gamma_5\frac{(\Slash{q}_s+M_N)F_s}{q^2_s-M^2_2}
\Slash{\epsilon}_{\gamma}\Slash{k}_1u(p_1),\nonumber\\
i\mathcal{M}_{3,u}&=&-\frac{ieg_{K^*N\Theta}}{M_{K^*}}\bar{u}(p_2)
\Slash{\epsilon}_{\gamma}\frac{(\Slash{p}_2+M_{\Theta})F_c+
\Slash{k}_1F_u}{q^2_u-M^2_{\Theta}}[(k_{2}\cdot\epsilon_{\Theta})\Slash{\epsilon}_{K^*}-(\epsilon_{\Theta}\cdot\epsilon_{K^*})\Slash{k}_2]\Gamma_5\gamma_5u(p_1)\nonumber\\&-&
\frac{ie\kappa_{\Theta}g_{K^*N\Theta}}{2M_{\Theta}M_{K^*}}\bar{u}(p_2)
\Slash{\epsilon}_{\gamma}\Slash{k}_1\frac{(\Slash{q}_u+M_{\Theta})F_u}{q^2_u-M^2_{\Theta}}[(k_{2}\cdot\epsilon_{\Theta})\Slash{\epsilon}_{K^*}-(\epsilon_{\Theta}\cdot\epsilon_{K^*})\Slash{k}_2]\Gamma_5\gamma_5u(p_1),\nonumber\\
i\mathcal{M}_{3,t(P)}&=&-\frac{g_{\gamma KK^*}g_{KN\Theta}}{M_K}
\frac{\bar{u}(p_2)\Gamma_5u(p_1)}{q^2_t-M^2_K}
\left[(\epsilon_{\Theta}\cdot q_t)\epsilon_{\mu\nu\sigma\rho}
k^{\mu}_1\epsilon^{\nu}_{\gamma}q^{\sigma}_t\epsilon^{\rho}_{K^*}\right]F_t,\nonumber\\
i\mathcal{M}_{3,t(V)}&=&-\frac{ieg_{K^*N\Theta}}{M_{K^*}}\bar{u}(p_2)
\frac{2\epsilon_{\gamma}\cdot k_2}{q^2_t-M^2_{k^*}}[(q_{t}\cdot
\epsilon_{\Theta})\Slash{\epsilon}_{K^*}-(\epsilon_{\Theta}\cdot
\epsilon_{K^*})\Slash{q}_t]\Gamma_5\gamma_5u(p_1)F_c,\nonumber\\
i\mathcal{M}_{3,c}&=&-\frac{ieg_{K^*N\Theta}}{M_{K^*}}
\bar{u}(p_2)[(\epsilon_{\gamma}\cdot\epsilon_{\Theta})
\Slash{\epsilon}_{K^*}-(\epsilon_{\Theta}\cdot\epsilon_{K^*})
\Slash{\epsilon}_{\gamma}]\Gamma_5\gamma_5u(p_1)F_c.\nonumber\\
\label{amplitudes3}
\end{eqnarray}
\begin{eqnarray}
i\mathcal{M}_{1,s}&=&ieg^V_{K^*N\Theta_1}\bar{u}(p_2)
\Slash{\epsilon}_{K^*}\Gamma_5\frac{(\Slash{p}_1+M_N)F_c+
\Slash{k}_1F_c}{q^2_s-M^2_N}\Slash{\epsilon}_{\gamma}u(p_2)\nonumber\\&+&\frac{ie\kappa_Ng^V_{K^*N\Theta_1}}{2M_N}\bar{u}(p_2)
\Slash{\epsilon}_{K^*}\Gamma_5\frac{(\Slash{q}_s+M_N)F_s}{q^2_s-M^2_N}\Slash{k}_1\Slash{\epsilon}_{\gamma}u(p_2)\nonumber\\&+&\frac{ieg^T_{K^*N\Theta_1}}{2(M_{\Theta}+M_N)}\bar{u}(p_2)\Gamma_5{(\Slash{k}_2\Slash{\epsilon}_{K^*}-\Slash{\epsilon}_{K^*}\Slash{k}_2)}\frac{(\Slash{p}_1+M_N)F_c+\Slash{k}_1F_s}{q^2_s-M^2_N}\Slash{\epsilon}_{\gamma}u(p_2)\nonumber\\&-&
\frac{ie\kappa_Ng^T_{K^*N\Theta_1}}{4M_N(M_{\Theta}+M_N)}\bar{u}(p_2)\Gamma_5(\Slash{k}_2\Slash{\epsilon}_{K^*}-\Slash{\epsilon}_{K^*}\Slash{k}_2)\frac{\Slash{p}_1+\Slash{k}_1+M_N}{q^2_s-M^2_N}\Slash{\epsilon}_{\gamma}\Slash{k}_1u(p_2)F_s,\nonumber\\
i\mathcal{M}_{1,u}&=&ieg^V_{K^*N\Theta_1}\bar{u}(p_2)\Slash{\epsilon}_{\gamma}\frac{(\Slash{p}_s+M_{\Theta})F_c-\Slash{k}_1F_u}{q^2_u-M^2_{\Theta}}\Slash{\epsilon}_{K^*}\Gamma_5u(p_1)\nonumber\\&+&\frac{ie\kappa_{\Theta}g^T_{K^*N\Theta_1}}{4M_{\Theta}(M_{\Theta}+M_N)}\bar{u}(p_2)\Slash{k}_1\Slash{\epsilon}_{\gamma}\frac{(\Slash{q}_u+M_{\Theta})F_s}{q^2_u-M^2_{\Theta}}\Slash{\epsilon}_{K^*}\Gamma_5u(p_1)\nonumber\\&+&\frac{ieg^T_{K^*N\Theta_1}}{2(M_{\Theta}+M_N)}\bar{u}(p_2)\Slash{\epsilon}_{\gamma}\frac{(\Slash{p}_2+M_{\Theta})F_c-\Slash{k}_1F_u}{q^2_u-M^2_{\Theta}}\Gamma_5{(\Slash{k}_2\Slash{\epsilon}_{K^*}-\Slash{\epsilon}_{K^*}\Slash{k}_2)}u(p_2)\nonumber\\&-&
\frac{ie\kappa_{\Theta}g^T_{K^*N\Theta_1}}{4M_{\Theta}(M_{\Theta}+M_N)}\bar{u}(p_2)\Slash{k}_1\Slash{\epsilon}_{\gamma}\frac{\Slash{p}_2-\Slash{k}_1+M_{\Theta}}{q^2_u-M^2_{\Theta}}\Gamma_5(\Slash{k}_2\Slash{\epsilon}_{K^*}-\Slash{\epsilon}_{K^*}\Slash{k}_2)u(p_2)F_u,\nonumber\\
\mathcal{M}_{1,t(P)}&=&g_{KN\Theta_1}g_{\gamma
KK^*}\frac{\bar{u}(p_1)\Gamma_5\gamma_5
u(p_1)}{q^2_t-M^2_K}\epsilon_{\mu\nu\rho\sigma}k^{\mu}_1
\epsilon^{\nu}_{\gamma}\epsilon^{\rho}_{K^*}q^{\sigma}_tF_t\nonumber,\nonumber\\
\mathcal{M}_{1,t(V)}&=&-2ieg_{K^*N\Theta_1}\bar{u}(p_1)\frac{k_2\cdot\epsilon_{\gamma}
\Slash{\epsilon}_{K^*}\Gamma_5}{q^2_t-M^2_{K^*}}u(p_1)F_c\nonumber\\&+&\frac{ie\kappa_Ng^T_{K^*N\Theta_1}}{M_{\Theta}+M_N}\bar{u}(p_2)\Gamma_5(\Slash{q}_t\Slash{\epsilon}_{K^*}-\Slash{\epsilon}_{K^*}\Slash{q}_t)\frac{k_2\cdot\epsilon_{\gamma}}{q^2_t-M^2_{K^*}}F_cu(p_1),\nonumber\\
\mathcal{M}_{1,c}&=&\frac{ieg^T_{K^*N\Theta_1}}{2(M_{\Theta}+M_N)}\bar{u}(p_2)\Gamma_5(\Slash{\epsilon}_{\gamma}\Slash{\epsilon}_{K^*}-\Slash{\epsilon}_{K^*}\Slash{\epsilon}_{\gamma})u(p_1).
\label{amplitudes1}
\end{eqnarray}
The subscripts $s$, $u$, $t(P)$, $t(V)$, and $c$ of $\mathcal{M}$
indicate $s$-,
$u$-, pseudoscalar $K$-exchange, vector $K^*$-exchange and the contact term,
respectively. $q_s=p_1+k_1$, $q_t=k_1-k_2$
and $q_u=p_1-k_2$ are the momentum transfers for each kinematical
channel. The Mandelstam variables $s$, $t$, and $u$ are defined in a standard
way: $s=q^2_s$, $u=q^2_u$ and $t=q^2_t$. For spin $3/2$
$\Theta^+$, we need to take into account
$\mathcal{M}_{s,E,M}$, $\mathcal{M}_{u,E,M}$ and $\mathcal{M}_{t(P)}$ for the
proton target and $\mathcal{M}_{s,M}$, $\mathcal{M}_{u,E,M}$,
$\mathcal{M}_{t(P)}$, $\mathcal{M}_{t(V)}$ and $\mathcal{M}_c$ for
the neutron one, where $E$ and $M$ stand for the terms including electric
(proportional to $e$) and magnetic (proportional to
$e\kappa_{N,\Theta}$) interactions. $\epsilon_{\gamma}$ and
$\epsilon_{K^*}$ are the
polarization vectors of the photon and the vector
kaon, respectively. $\epsilon_{\Theta}$ is the spin-1 component of
the Rarita-Schwinger field for the $\Theta^+$~\cite{Nam:2005uq}. We
approximate the spin $3/2$ RS propagator by that of a spin $1/2$ baryon. It was
shown that this simplification worked qualitatively well in the low-energy
regions~\cite{Nam:2005uq}. The
evaluation of the invariant amplitudes for the
spin $1/2$ is also performed similarly to that of spin $3/2$.
In the present work, we also take into
account scalar meson
$\kappa(800,0^+)$-exchange in addition to $K$- and
$K^*$-exchange. The relevant effective Lagrangians are defined as
follows:
\begin{eqnarray}
\mathcal{L}_{\gamma\kappa K^*}&=&g_{\gamma\kappa
K^*}F_{\mu\nu}F^{\mu\nu}_{K^*}\kappa,\nonumber\\
\mathcal{L}_{\kappa N\Theta_3}&=&\frac{g_{\kappa N\Theta_3}}
{M_{\kappa}}\bar{\Theta}^{\mu}_3(\partial_{\mu}\kappa)
\Gamma_5\gamma_5N,\nonumber\\
\mathcal{L}_{\kappa N\Theta_1}&=&ig_{\kappa
N\Theta_1}\bar{\Theta}_1\Gamma_5\kappa N,
\label{kappa}
\end{eqnarray}
where $\kappa$ indicates the scalar meson field with its
physical mass $\sim 800$ MeV~\cite{Eidelman:2004wy}. Since there is no
information on the coupling constants
$g_{\gamma\kappa K^*}$ and $g_{\kappa N\Theta_{3}}$, we will estimate
them for both the spin $3/2$ and $1/2$ $\Theta^+$ as follows, as a trial:
\begin{eqnarray}
g_{\gamma\kappa K^*}=|g_{\gamma KK^*}|\,\,\, {\rm and}\,\,\,g_{\kappa N\Theta_{3}}=|g_{KN\Theta_{3}}|.\nonumber
\end{eqnarray}
We note that the signs of these coupling constants are unknown
and cannot be estimated by flavor SU(3) symmetry. However, we
verified that the signs of these coupling constants do not make
significant differences in the numerical results. Hence, we
consider only plus
signs for the coupling constants. The reaction
amplitudes for $\kappa$-exchange (t(S)) can be written as follows:
\begin{eqnarray}
i\mathcal{M}_{3,t(S)}&=&-\frac{2g_{\gamma\kappa K^*}g_{\kappa
N\Theta_3}}{M_{\kappa}} \frac{\bar{u}(p_2)\Gamma_5\gamma_5u(p_1)}
{q^2_t-M^2_{\kappa}}[\epsilon_{\Theta}\cdot q_t][(k_1\cdot
k_2)(\epsilon_{\gamma}\cdot\epsilon_{K^*})-(\epsilon_{\gamma}\cdot k_2)(\epsilon_{K^*}
\cdot k_1)]F_{\kappa},\nonumber\\
i\mathcal{M}_{1,\kappa}&=&-2ig_{\gamma\kappa
K^*}g_{\kappa N\Theta_1}\frac{\bar{u}(p_2)\Gamma_5u(p_1)}{q^2_t-M^2_{\kappa}}[(k_1\cdot
k_2)(\epsilon_{\gamma}\cdot\epsilon_{K^*})-(\epsilon_{\gamma}\cdot k_2)(\epsilon_{K^*}\cdot
k_1)].
\label{amplitudes}
\end{eqnarray}
As shown in Eqs.~(\ref{amplitudes3}), (\ref{amplitudes1}) and
(\ref{amplitudes}), we employ the four-dimensional form
factors~\cite{Nam:2005uq} defined as follows:
\begin{eqnarray}
F_{x}(q^2)&=&\frac{\Lambda^4}{\Lambda^4+(x-M^2_x)^2},\,\,\,
x=s,t,u,\nonumber\\
F_{c}&=&F_u+F_{t(V)}-F_uF_{t(V)}\,\,\,{\rm for\,\,neutron},\nonumber\\
F_{c}&=&F_s+F_u-F_sF_u\,\,\,{\rm for\,\,proton},
\label{formfactor}
\end{eqnarray}
where $M_x$ is the mass of the exchanged particle in the $x$-channel. We
verified that the inclusion of the form factor maintains the gauge
invariance. We make use of the
cutoff value $\Lambda=750$ MeV as in
Refs.~\cite{Nam:2005jz,Nam:2005uq}.
\section{Numerical results}
We present in this section the numerical results of the total and
differential cross sections, asymmetries, and momentum transfer
$t$-dependences for the neutron and proton targets. Here, the
asymmetry is defined as follows:
\begin{eqnarray}
{\rm Asymmetry}=\frac{\left(\frac{d\sigma}{d\Omega}\right)_{\perp}
-\left(\frac{d\sigma}{d\Omega}\right)_{\parallel}}{
\left(\frac{d\sigma}{d\Omega}\right)_{\perp}
+\left(\frac{d\sigma}{d\Omega}\right)_{\parallel}}.
\label{asym}
\end{eqnarray}
The notations $\parallel$ and $\perp$ in Eq.~(\ref{asym}) stand for
the photon polarizations which are parallel and perpendicular to the
reaction plane, respectively.
In Fig.~\ref{fig1}, we show various contributions to the total
cross sections for each kinematical channel separately as
functions of photon energy in the laboratory frame ($E^{\rm
lab}_{\gamma}$). The upper two panels represent the results for
the $\Theta^+(3/2^+)$, where we see that the contact and
pseudoscalar $K$-exchange terms are the main contributions for the neutron
target, whereas the $K$-exchange term dominates the reaction for
the proton one. Since the $\gamma K^*K$
coupling constants for the proton and neutron targets differ by
$g_{\gamma K^0\bar{K}^{*0}}/g_{\gamma K^+K^{*-}}\sim 1.5$, we obtain
the
contribution of $K$-exchange to the total cross sections for the proton
target about two times larger than the neutron one. Being
different from $\Theta^+(3/2^+)$, $\kappa$- and
$K$-exchanges govern the reaction for the $\Theta^+(3/2^-)$ as
demonstrated in the lower two panels. The total cross sections of
$K$-exchange for $\Theta^+(3/2^-)$ becomes much larger than those of
$\Theta^+(3/2^+)$ due to the $d$-wave coupling for the $KN\Theta_3$
vertex. The large contribution of
$\kappa$-exchange can be understood by that we assumed larger coupling
constants $g_{\kappa N\Theta}$ and $g_{\kappa\gamma K^*}$ for
$\Theta^+(3/2^-)$ than those of $\Theta^+(3/2^+)$. However, even if
we ignore $\kappa$-exchange, the qualitative tendency
$\sigma_{3/2^+}<\sigma_{3/2^-}$ will not be altered, since
$K$-exchange is more dominant than the contributions from the
$\kappa$-exchange. Moreover, although the total cross sections for the
neutron and proton targets differ in magnitude by a factor of about
two or three, this difference is much smaller than that found for
$\Lambda^*$-photoproduction associated with the pseudoscalar kaon in
the previous work~\cite{Nam:2005jz}.
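The factor-of-two difference quoted above follows from simple amplitude scaling; as a back-of-the-envelope check (assuming the $K$-exchange amplitude is simply proportional to the $\gamma K^*K$ coupling):

```python
# The K-exchange amplitude is linear in the gamma-K-K* coupling, so the
# cross section scales with its square. With the coupling ratio ~1.5
# between the neutral and charged channels quoted above:

ratio_coupling = 1.5
ratio_cross_section = ratio_coupling**2
print(ratio_cross_section)  # 2.25, i.e. "about two times larger"
```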
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f1nn.eps}
\includegraphics[width=7cm]{paper14f2nn.eps}
\end{tabular}
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f3nn.eps}
\includegraphics[width=7cm]{paper14f4nn.eps}
\end{tabular}
\caption{Various contributions to the total cross sections from
different kinematical channels. The labels are defined by
s ($s$-channel), u ($u$-channel), t(P) (pseudoscalar kaon exchange in
$t$-channel), t(V) (vector kaon exchange in
$t$-channel), t(S) (scalar $\kappa$ exchange in
$t$-channel) and c (contact term). We
show the four different cases, i.e. $\Theta^+(3/2^+)$ from the neutron
(upper-left) and
proton (upper-right) targets, and $\Theta^+(3/2^-)$ from the neutron
(lower-left) and proton (lower-right) ones.}
\label{fig1}
\end{figure}
In Fig.~\ref{fig2} we show the total (upper-left) and differential
(upper-right) cross sections, the asymmetry (lower-left) due to the
different photon polarizations, and the momentum transfer
$t$-dependence (lower-right) for $\Theta^+(3/2^+)$. The total cross
sections from the neutron (solid line) and proton (dashed line)
targets are not very much different; the proton case is slightly
larger due to the ratio $g_{\gamma K^0\bar{K}^{*0}}/g_{\gamma
K^+K^{*-}}\sim 1.5$. The differential cross sections
are calculated at two
different photon energies, i.e. $E^{\rm lab}_{\gamma}=3.0$ GeV (thin
curves) and $3.5$ GeV (thick curves). The angle $\theta$ denotes the one
between the incident photon and outgoing $K^*$ in the center of
mass frame. It is clearly shown that the differential cross section
in the forward direction is strongly enhanced; it is mainly due to
$K$-exchange. We also find that $\kappa$-exchange
increases the differential cross section in the forward direction. The
asymmetry behaves similarly in general for the proton and neutron
targets as shown in the lower-left panel of Fig.~\ref{fig2}. The sign
of the asymmetry is negative when $K$-exchange dominates the process. The
momentum transfer $t$-dependences are drawn in
the lower-right panel. The $t$-dependences show again the strong
enhancement in forward scattering. Also, we verified that the
dependence on the coupling constants $g_{\gamma\kappa K^*}$ and
$g_{\kappa N\Theta_{3}}$ is not significant, since the contribution of
$\kappa$-exchange (t(S)) is small, as shown in the upper-left panel of
Fig.~\ref{fig1}. Even
when we use $g_{\gamma\kappa K^*}=2|g_{\gamma KK^*}|$ and
$g_{\kappa N\Theta_{3}}=2|g_{KN\Theta_{3}}|$, the magnitudes of the
total cross sections change by only $25\%$ or less. Furthermore, other
observables are not changed much by this
choice.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f5nn.eps}
\includegraphics[width=7cm]{paper14f6nn.eps}
\end{tabular}
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f7nn.eps}
\includegraphics[width=7cm]{paper14f8nn.eps}
\end{tabular}
\caption{The total
(upper-left) and differential (upper-right) cross
sections, the asymmetry (lower-left), and the momentum transfer
$t$-dependence (lower-right) for $\Theta^+(3/2^+)$. The solid and
dashed curves represent the
results from the neutron and proton targets, respectively. Thin curves
denote those calculated at $E^{\rm lab}_{\gamma}=3.0$ GeV, while
thick ones stand for those at $E^{\rm lab}_{\gamma}=3.5$ GeV.}
\label{fig2}
\end{figure}
Now, we turn to the results for the $\Theta^+(3/2^-)$ depicted in
Fig.~\ref{fig3}. The total cross sections turn out to be a few
tens of times larger than those for the $\Theta^+(3/2^+)$. The angular
distributions (differential cross sections
and the momentum transfer $t$-dependence) are rather similar to those
for $\Theta^+(3/2^+)$, since the contributions of $K$- and
$\kappa$-exchanges enhance the forward scattering. However, the
asymmetries are distinguished clearly from the case of the
$\Theta^+(3/2^+)$. The asymmetries for the $\Theta^+(3/2^-)$
production are in general positive when $\kappa$-exchange dominates.
However, if $\kappa$-exchange
is switched off, the asymmetries become similar to those
for the $\Theta^+(3/2^+)$ production, with negative sign due to
$K$-exchange dominance, which indicates that
$\kappa$-exchange plays a key role in distinguishing $\Theta^+(3/2^-)$
from the positive-parity one.
We note, however, that the dependence on
the couplings of the scalar $\kappa$ cannot be ignored in the
negative-parity case, in contrast to the positive-parity one.
This can be easily verified from the curves
shown in the lower-left panel of Fig.~\ref{fig1}, in which
$\kappa$-exchange in the $t$-channel, t(S), is the dominant
contribution. Thus, the choice $g_{\gamma\kappa K^*}=2|g_{\gamma
KK^*}|$ and $g_{\kappa N\Theta_{3}}=2|g_{KN\Theta_{3}}|$ enhances the
magnitudes of the total cross sections by a factor of more than
$\sim10$. For instance, we obtain $\sim 420$ nb at
$E_{\gamma}=3.0$ GeV for the $\Theta(3/2^-)$-photoproduction from the
neutron target. Despite the strong dependence on these coupling
constants, the angular distributions are not much affected and show
the strong forward
enhancement. The asymmetry defined in Eq.~(\ref{asym}) then becomes
positive over the whole angular range for both the neutron and proton
targets, with a shape similar to that shown in the lower-left panel of
Fig.~\ref{fig2}.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f9nn.eps}
\includegraphics[width=7cm]{paper14f10nn.eps}
\end{tabular}
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f11nn.eps}
\includegraphics[width=7cm]{paper14f12nn.eps}
\end{tabular}
\caption{The total
(upper-left) and differential (upper-right) cross
sections, the asymmetry (lower-left), and the momentum transfer
$t$-dependence (lower-right) for $\Theta^+(3/2^-)$. The solid and
dashed curves represent the
results from the neutron and proton targets, respectively. Thin curves
denote those calculated at $E^{\rm lab}_{\gamma}=3.0$ GeV, while
thick ones stand for those at $E^{\rm lab}_{\gamma}=3.5$ GeV.
}
\label{fig3}
\end{figure}
We now compare the results of spin-$1/2$ $\Theta^+$-photoproduction
with those for the spin-$3/2$ $\Theta^+$ in Fig.~\ref{fig4}. Here, we
consider only the case
of the positive-parity $\Theta^+$, since the cross sections for the
negative-parity one are in general about ten times smaller than those
for the positive-parity $\Theta^+$ (see, for example,
Ref.~\cite{Nam:2003uf}). However, we note that
the contribution of $\kappa$-exchange was not considered in the former
studies~\cite{Nam:2003uf}. The total cross sections are of the order
of a few nanobarns, similar to but slightly larger than those of
$\Theta^+(3/2^+)$. We also observe that the angular distribution is
enhanced strongly in the forward direction. The sign of the asymmetry
depends on the type of the target; for the proton target it is positive
while for the neutron one negative. We have checked that the contribution from
the tensor terms proportional to $g^T_{K^*N\Theta_1}$ makes the cross
sections larger only by $\sim10\%$ (see Eq.~(9)) when
$g^T_{K^*N\Theta_1}=|g^V_{K^*N\Theta_1}|$. It also turns
out that the effects
from the tensor terms on the angular
distribution and asymmetry are negligible. However, a rather
strong dependence on
the coupling constants $g_{\gamma\kappa K^*}$ and $g_{\kappa
N\Theta_{3}}$ is again observed, as in the case of
$\Theta(3/2^-)$. In particular, the asymmetry then becomes positive
over the whole angular range, with peaks at $\sim70^{\circ}$ for both
the neutron and proton targets.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f13nn.eps}
\includegraphics[width=7cm]{paper14f14nn.eps}
\end{tabular}
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f15nn.eps}
\includegraphics[width=7cm]{paper14f16nn.eps}
\end{tabular}
\caption{The total
(upper-left) and differential (upper-right) cross
sections, the asymmetry (lower-left) and the momentum transfer
$t$-dependence (lower-right) for $\Theta^+(1/2^+)$. The solid and
dashed lines represent the
results from the neutron and proton targets, respectively. Thin lines
denote the results calculated at $E^{\rm lab}_{\gamma}=3.0$ GeV, while
thick lines those at $E^{\rm lab}_{\gamma}=3.5$ GeV.}
\label{fig4}
\end{figure}
\section{Reaction analysis via the photon and $K^*$ polarizations}
Last but not least, we discuss the analysis of the polarizations of
the photon and the vector $K^*$ meson. Since the $K^*$ meson can
decay into the pseudoscalar kaon and pion, it is possible to
determine the polarization state of $K^*$ by the measured azimuthal
distribution of the kaon and pion. By doing this, we can tell what
meson exchange in the present reaction plays a dominant role.
A similar analysis can be extended to the production of other
spin-$3/2$ as well as spin-$1/2$ baryons.
For this purpose, we first fix the photon polarization to be
perpendicular to the reaction plane. Then, as clearly shown in
Eq.~(\ref{amplitudes1}), the $K^*$-exchange contribution disappears,
since it is proportional to $k_2\cdot\epsilon_{\gamma}$ in which $k_2$ and
$\epsilon_{\gamma}$ denote the outgoing $K^*$ momentum and photon
polarization vector, respectively. Now, let us set the polarization
vector of $K^*$, $\epsilon_{K^*}$ to be parallel to the direction of
$\epsilon_{\gamma}$. In this case, examining the
$\epsilon_{\mu\nu\sigma\rho}$ structure of $K$-exchange in
Eq.~(\ref{amplitudes1}), one can easily see that the contribution of
$K$-exchange vanishes. Thus, as shown in the
panels on the left side of Fig.~\ref{fig5}, only $\kappa$-exchange
survives for both the positive-parity (upper panel of
Fig.~\ref{fig5}) and negative-parity (lower panel of Fig.~\ref{fig5})
$\Theta^+$. We also observe that $\kappa$-exchange dominates
the reaction even when we include all channels, as depicted by the
curve labeled ``Total'' in Fig.~\ref{fig5}. However, we note that
the strength of the $\kappa$-exchange contribution depends on the
unknown
$\kappa N\Theta$ and $\gamma\kappa K^*$ coupling constants.
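The vanishing of the $K$-exchange term for parallel polarizations follows from the antisymmetry of $\epsilon_{\mu\nu\sigma\rho}$ alone, independently of the kinematics. A small numerical check with numpy (the four-vectors are arbitrary illustrative values, not kinematics of the actual reaction):

```python
import numpy as np
from itertools import permutations

# Build the rank-4 Levi-Civita tensor.
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    sign, p = 1, list(perm)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    eps[perm] = sign

def contract(a, b, c, d):
    """epsilon_{mu nu sigma rho} a^mu b^nu c^sigma d^rho"""
    return np.einsum('mnsr,m,n,s,r', eps, a, b, c, d)

eps_gamma = np.array([0.0, 0.0, 1.0, 0.0])  # photon polarization
k1 = np.array([1.0, 0.0, 0.0, 1.0])         # illustrative momenta
k2 = np.array([1.2, 0.3, 0.0, 1.1])

# epsilon_K* parallel to epsilon_gamma: the contraction vanishes by
# antisymmetry (numerically ~0), killing the K-exchange contribution.
print(contract(eps_gamma, eps_gamma, k1, k2))

# epsilon_K* perpendicular to epsilon_gamma: generally nonzero.
eps_kstar = np.array([0.0, 1.0, 0.0, 0.0])
print(contract(eps_gamma, eps_kstar, k1, k2))
```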
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f17nn.eps}
\includegraphics[width=7cm]{paper14f18nn.eps}
\end{tabular}
\begin{tabular}{cc}
\includegraphics[width=7cm]{paper14f19nn.eps}
\includegraphics[width=7cm]{paper14f20nn.eps}
\end{tabular}
\caption{Differential cross sections when the photon and $K^*$ are
polarized in parallel (left) and perpendicular (right) to each other. We
consider the states of $J^P=3/2^+$ (upper panels) and $3/2^-$ (lower
panels).}
\label{fig5}
\end{figure}
We now proceed to examine the case when the two polarization vectors are
perpendicular to each other. As in the parallel case, the photon
polarization vector is fixed to be perpendicular to the reaction plane
so that $K^*$-exchange can be eliminated. The corresponding results
are shown in the right side of Fig.~\ref{fig5}. The amplitude of
$\kappa$-exchange turns out to be zero, because the term in the bracket
of Eq.~(\ref{amplitudes}) vanishes. Therefore, the contribution
comes only from pseudoscalar $K$-exchange. Experimentally, the
comparison of the two polarization combinations,
$\epsilon_{\gamma}\perp\epsilon_{K^*}$ and $\epsilon_{\gamma}\parallel\epsilon_{K^*}$,
provides information on the strengths of the $KN\Theta$ and $\kappa
N\Theta$ coupling constants.
The bump, or the increase in the differential cross sections for
$\theta\gtrsim 60^{\circ}$ shown on the right
side of Fig.~\ref{fig5}, is mainly due to the contact-term
contribution. The total contributions do not differ much from the
cases with the $K$-exchange contribution only. Interestingly, the
results for the two different parities of $\Theta^+$ are rather
similar to each other except for the overall magnitudes, since the
polarization dependence arises only from the structure of the $\gamma
K^*M(K,K^*,\kappa)$ coupling, and not from that of the $MN\Theta^+$
one, which carries the information on the parity of $\Theta^+$.
The polarization analysis of the photon and vector $K^*$ sheds light on
determining which meson exchange is dominant in the present
reaction. Though we do not show the results for
$\Theta^+(1/2^+)$-photoproduction explicitly here, we verified that a
similar
conclusion holds. We note that this analysis may also be of
great use in determining which meson is the most prominent in
general $\gamma N\to M(1^-)B$ reactions, since the
method discussed here is based only on the structure of the
photon-meson-meson vertices, and not of the vertices including baryons.
\section{Summary and Conclusion}
We have investigated the photoproduction of the exotic pentaquark
baryon $\Theta^+$ via the reaction process $\gamma N\to
\bar{K}^*\Theta^+$, assuming that $\Theta^+$ has spin $3/2$. The
effective Lagrangian approach was employed with phenomenological form
factors~\cite{Nam:2005jz,Nam:2005uq}. We used the coupling
constant for the $K^*N\Theta(3/2)$ vertex estimated from
the constituent quark model. We also considered scalar meson
$\kappa(800,0^+)$-exchange. We assumed
the following relations for the coupling constants as a trial:
$g_{\gamma\kappa K^*}=g_{\gamma KK^*}$ and $g_{\kappa N\Theta}=g_{KN\Theta}$.
The main results of the present work are summarized in
Table~\ref{table2}.
In the present work, we did not find a large difference
between the total cross sections from the neutron and proton targets,
in contrast to the conclusion of the previous work on $\gamma
N\to \bar{K}\Theta^+(3/2)$~\cite{Nam:2005jz}. The reason lies in the
fact that the contact term in the present case does not provide a
large contribution to the cross sections compared to the other
meson-exchange contributions. These differences between the
$\Theta^+$-photoproductions with the pseudoscalar $K$ and with the
vector $K^*$ can be useful to determine the spin quantum number of the
$\Theta^+$ baryon. We estimated the total cross
sections for the present reaction qualitatively as follows:
$\sigma_{3/2^+}\sim 1.5$ nb and $\sigma_{3/2^-}\sim 50$
nb for the energy region $E_{\rm th}\lesssim E^{\rm lab}_{\gamma}\lesssim
3.5$ GeV for both the neutron and proton targets. We notice that
there is the model dependence due to the coupling constants of
$\kappa$-exchange, in particular, in the case of $\Theta^+(3/2^-)$.
However, the tendency
$\sigma_{\Theta^+(3/2^+)}<\sigma_{\Theta^+(3/2^-)}$ is rather stable,
since pseudoscalar $K$-exchange, which depends only weakly on the
model parameters, is the
dominant contribution in the present reaction.
In angular distributions, we observed a large enhancement in the
forward region due to the $t$-channel dominance ($K$- and
$\kappa$-exchanges) for both the
spin $1/2$ and spin $3/2$ cases. From these
observations, we expect that in the laboratory frame, there must be
even stronger forward enhancement for the outgoing $K^*$. The asymmetry
shows a relatively clear difference between the positive and negative
parities of the $\Theta^{+}(3/2)$, with one caveat: the strengths of
the coupling constants
$g_{\gamma\kappa K^*}$ and $g_{\kappa N\Theta}$ must be known. We also compared the
present results to those from the reaction with the $\Theta^+(1/2^+)$.
Finally, an analysis was proposed to determine which meson exchange is
dominant in the $t$-channel, with the photon and $K^*$ polarizations being
explicitly considered. It was observed that scalar meson
$\kappa$-exchange only survives when the polarizations of the photon
and $K^*$ are parallel. On the contrary, when these
polarizations are perpendicular to each other, pseudoscalar
$K$-exchange turns out to be dominant. This analysis may be applied to
a general reaction $\gamma N \to M(1^-)B$.
We note that the coupling constants $g_{\gamma \kappa K^*}$ and $g_{\kappa
N\Theta}$, which are important in the present investigation, are not
well known. In particular, the asymmetry is much affected by different
choices of these coupling constants for the cases of $\Theta(3/2^-)$
and $\Theta(1/2^+)$, whereas the cross sections change only in overall
magnitude.
Considering the rather small values shown in Table~\ref{table2}, it might be
difficult to observe a clear peak from the present reaction process at
existing experimental facilities. However, since we once again observed
a strong forward-scattering enhancement, which could be measured most
appropriately by LEPS, a suitably designed experimental setup may
collect sizable statistics for a signal of the $\Theta^+$ in the
present reaction process.
\begin{table}[t]
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$J^P$&\multicolumn{2}{c|}{$3/2^+$}&
\multicolumn{2}{c|}{$3/2^-$}&
\multicolumn{2}{c|}{$1/2^+$}\\
\hline
Target&$n$&$p$&$n$&$p$&$n$&$p$\\
\hline
$\sigma$ at $E_{\gamma}^{\rm lab}=3.0$ GeV&$\sim2.5$ nb&$\sim3.2$
nb&$\sim 40$ nb&$\sim90$ nb&$\sim4$ nb& $\sim5.5$ nb\\
\hline
\end{tabular}
\caption{Main results of the $\Theta^+$-photoproduction via $\gamma N\to
\bar{K}^*\Theta^+$.}
\label{table2}
\end{table}
\section*{Acknowledgment}
We are very grateful to J.~K.~Ahn, V.~Koubarovski, T.~Nakano, and
T.~Hyodo for fruitful discussions. The work of S.I.N. has been
supported in part by the scholarship from the Ministry of Education,
Culture, Science and Technology of Japan. The work of A.H. is
supported in part by the Grant for Scientific Research ((C)
No.16540252) from the Ministry of Education, Culture, Science and Technology of
Japan. The work of H.C.K. and S.I.N. was supported by the Korea
Research Foundation Grant funded by the Korean Government (MOEHRD)
(KRF-2005-202-C00102).
\section{Introduction}
The spectrum of baryon resonances is expected to be very rich and
considerably more complex than the mesonic excitation spectrum. Yet,
experimentally, the number of known light-quark mesons exceeds by
far the number of known baryon resonances \cite{Amsler:2008zzb}.
In quark models \cite{Capstick:bm,Glozman:1997ag,Loring:2001kx},
most high-mass baryon resonances are only weakly coupled to $N\pi$
\cite{Capstick:1992th}, and can thus not be seen in elastic $\pi
N$ scattering experiments. For inelastic reactions like $\pi N\to
\eta N$, $\pi N\to K\Lambda$, $\pi N\to K\Sigma$, data with a
polarized target are missing, and data on differential cross
sections have low statistics and are often inconsistent when
different experiments are compared. States weakly coupled to the
$\pi N$ channel may thus have escaped identification. The situation
was aggravated by a recent analysis of a large body of $\pi N$
elastic and charge exchange scattering data in which many of the
less established nucleon and $\Delta$ resonances were not
confirmed~\cite{Arndt:2006bf}.
Other interpretations of the baryon spectrum exist as well. Very
popular are diquark models \cite{Anselmino:1992vg,Kirchbach:2001de,%
Jaffe:2003sg,Jaffe:2004ph} in which one quark-pair is frozen and in
which the number of predicted states decreases. Further, we mention
approaches based on chiral Lagrangians in which low-lying baryon
resonances are generated dynamically. In many cases, these
calculations offer a consistent description of resonance properties
and scattering data (see e.g. \cite{Kaiser:1995eg}) but so far, they
do not give a survey of all resonances to be expected. Often
discussed is the conjecture that chiral symmetry might be restored
in high-mass meson and baryon resonances
\cite{Glozman:2003bt,Glozman:2007ek}. The conjecture gives an
attractive interpretation of one experimental observation, that
resonances show up as parity doublets or even higher multiplets. It
fails to predict at which mass resonances should be found and,
experimentally, all meson and baryon resonances on the leading Regge
trajectory have no parity partner. The AdS/QCD model describes QCD
in terms of a dual gravitational theory
\cite{deTeramond:2005su,Brodsky:2007hb}; with some phenomenological
adjustments, it is surprisingly successful in predicting the baryon
mass spectrum and the number of expected states
\cite{Forkel:2007cm,Forkel:2008un}. It also predicts where parity
multiplets should occur and where not. Reviews of baryon
spectroscopy can be found in
\cite{Hey:1982aj,Capstick:2000qj,Klempt:2009pi}.
A decision which of the above approaches provides the most accurate
representation of Nature requires a better experimental knowledge of
the excitation spectrum. The limitations of the current data base on
light-quark baryons are the stimulus for experiments studying baryon
resonances in photoproduction of complex final states where the $\pi
N$ channel can be avoided in both the initial and the final state.
The analysis of multibody final states including fermions is
complex; in order to compare different approaches and to identify
possible problems, a common meeting ground is needed. This is
provided by the simplest photoproduction reactions, by $\gamma p\to
p\pi^0$ and $\gamma p\to n\pi^+$. In this paper, we give masses,
widths, photocouplings, and $N\pi$ decay branching ratios for the
most important contributing resonances and compare our pion
photoproduction and helicity amplitudes to those obtained by SAID
\cite{SAID}, MAID \cite{MAID} and within the Gie\ss en model
\cite{Penner:2002md,Shklyar:2006xw}. The fits are based on a large
number of data sets and include data with multibody final states.
The study thus shows to what extent multibody final states are
compatible with the
best-studied $N\pi$ system.\\[-4ex]
\section{\label{Data}Data used in the fits}
A large number of reactions is used in the truly coupled-channel
fits presented here. The data cover elastic $\pi N$ scattering as
well as inelastic reactions, they cover differential cross sections
and single and double polarization variables. Reactions with
multi-body final states are included exploiting an event-based
likelihood method.
Different data sets often have a very different statistical power.
Weights $w_i$ are introduced to force the fit to take into account
highly significant but low-statistics data, e.g. beam asymmetries.
Without these weights, polarization data often have too small an
impact on the fit result. The weight of a newly introduced data set
is increased when the fit is visually unacceptable, or decreased
until first discrepancies between data and fit become apparent.
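The role of the weights $w_i$ can be sketched as follows; the numbers and the (purely multiplicative) weighting convention are illustrative assumptions, not the actual prescription of the fit code:

```python
# Sketch of per-data-set weights in a combined chi-square: each set's
# contribution is rescaled so that small but highly significant sets
# (e.g. beam asymmetries) are not swamped by large-statistics data.
# Numbers and the multiplicative convention are illustrative only.

def weighted_chi2(datasets):
    """datasets: iterable of (chi2_of_set, n_points, weight)."""
    return sum(w * chi2 for chi2, _, w in datasets)

cross_sections = (2000.0, 1692, 1.5)  # large-statistics set
asymmetry_data = (90.0, 50, 70.0)     # small, high-significance set

total = weighted_chi2([cross_sections, asymmetry_data])
# With w = 70 the 50-point set contributes 6300, comparable to the
# 3000 of the 1692-point set, so the fit cannot ignore it.
print(total)  # 9300.0
```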
\subsection{Elastic \boldmath$\pi N\to \pi N$ scattering}
In the analysis presented here, data on elastic $\pi N$ scattering
(charge exchange is implicitly included) are not used directly.
Instead, we rely on the detailed work of the George Washington
Center for Nuclear Studies~\cite{Arndt:2006bf} and use, for energies
up to 2.2\,GeV, their scattering amplitudes.\\[-4ex]
\subsection{The reaction \boldmath$\pi^- p\to \eta n$}
The inelastic $\pi^- p$ scattering process leading to the $n\eta$
final state has been reported by several experiments
\cite{Deinet:1969cd,Richards:1970cy,Brown:1979ii,Prakhov:2005qb} and
\cite{Debenham:1975bj,Crouch:1980vw}. Above 1.8\,GeV, large
discrepancies between the data \cite{Brown:1979ii,Crouch:1980vw}
show up. The data from \cite{Debenham:1975bj} cover extreme backward
angles and are partly incompatible with all other results. A
critical discussion of the available data can be found
in~\cite{Durand:2008es}. We use here the data from
\cite{Richards:1970cy,Prakhov:2005qb} (see Table
\ref{piN_data_table}) which show better consistency.
\begin{table}[t]
\caption{\label{piN_data_table}Pion induced reactions fitted in the
coupled-channel analysis and $\chi^2$ contributions.}
\begin{center}\begin{tabular}{ccccc}
\hline\hline\\[-2ex]
$\pi N \rightarrow \pi N$& Wave & $N_{\rm data}$ &$w_i$ &$\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2ex]
\cite{Arndt:2006bf}& $S_{11}$ & 104 & 30 & 1.81 \\
& $S_{31}$ & 112 & 20 & 2.27 \\
& $P_{11}$ & 112 & 20 & 2.49 \\
& $P_{31}$ & 104 & 20 & 2.01\\
& $P_{13}$ & 112 & 10 & 1.90 \\
& $P_{33}$ & 120 & 10 & 2.53\\
& $D_{13}$ & 96 & 10 & 2.16\\
& $D_{33}$ & 108 & 12 &2.56 \\
& $D_{15}$ & 96 & 20 & 3.37\\
& $F_{35}$ & 62 & 20 & 1.32\\
& $F_{37}$ & 72 & 10 & 2.86\\[0.5ex]\hline \hline\\[-2ex]
$\pi^- p \rightarrow \eta n$ & Observ. & $N_{\rm data}$&$w_i$ & $\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2.3ex]
\cite{Richards:1970cy}& $d\sigma/d\Omega$ & 70 & 10 &1.96 \\
\cite{Prakhov:2005qb}& $d\sigma/d\Omega$ & 84 & 30 & 2.67 \\[0.5ex]\hline\\[-2.3ex]
\hline
\end{tabular}\vspace{-5mm}\end{center}
\end{table}
\begin{table}
\caption{\label{3BodyReactions}Reactions leading to 3-body final
states are included in event-based likelihood fits. The
$\chi^2/N_{\rm bin}$ values are calculated from selected Dalitz
plots (see text for details). References to the data are given in
the text.}
\begin{center}\begin{tabular}{lccccc}
\hline\hline\\[-2ex]
\multicolumn{2}{c}{$d\sigma/d\Omega(\pi^-p \rightarrow \pi^0\pi^0
n)$} & $N_{\rm data}$ &$w_i$& $-\ln L$\\[1ex]\hline\\[-2ex]
T=373 MeV && 5248 & 10& -1025\\
T=472 MeV && 10641& 5& -2685\\
T=551 MeV &\cite{Prakhov:2004zv}& 41172 & 2.5& -7322\\
T=655 MeV && 63514 & 2& -15647\\
T=691 MeV && 30030 & 3.5& -8256\\
T=733 MeV && 29948 & 4& -7534\\[0.4ex]\hline\\[-2.1ex]
$d\sigma/d\Omega(\gamma p \rightarrow \pi^0\pi^0 p)$ &\cite{Thoma:2007bm,Sarantsev:2007bk}& 110601 & 4 & -27568\\
$d\sigma/d\Omega(\gamma p \rightarrow \pi^0\eta p)$
&\cite{Weinheimer:2003ng,Horn:2007pp,Horn:2008qv}& 17468 & 8 & -5587
\\ \hline\hline\\[-2ex]
\multicolumn{2}{c}{$d\sigma/d\Omega(\pi^-p \rightarrow \pi^0\pi^0
n)$} & $N_{\rm bin}$ & ~& $\chi^2/N_{\rm bin}$\\[1ex]\hline\\[-2ex]
T=373 MeV && 471 & ~& 1.24\\
T=472 MeV && 478& ~& 1.30\\
T=551 MeV &\cite{Prakhov:2004zv}& 514 & ~& 1.56\\
T=655 MeV && 518 & ~& 1.31\\
T=691 MeV && 502 & ~ & 1.19\\
T=733 MeV && 501 & ~& 1.53\\[0.4ex]\hline\\[-2.1ex]
$d\sigma/d\Omega(\gamma p \rightarrow \pi^0\pi^0 p)$ &\cite{Thoma:2007bm,Sarantsev:2007bk}& 769 & ~ & 1.59\\
$d\sigma/d\Omega(\gamma p \rightarrow \pi^0\eta p)$
&\cite{Weinheimer:2003ng,Horn:2007pp,Horn:2008qv}& 1119 & ~ & 1.04
\\
\hline\hline\\[-2ex]
\multicolumn{2}{c}{} & $N_{\rm data}$ &$w_i$& $\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2ex]
$\Sigma(\gamma p \rightarrow \pi^0\pi^0 p)$ &\cite{Assafiri_03}
& 128 & 35 & 0.96\\
$\Sigma(\gamma p \rightarrow \pi^0\eta p)$ &\cite{Gutz:2008zz}& 180
& 15 & 2.37\\
$E(\gamma p \rightarrow \pi^0\pi^0 p)$
&\cite{Ahrens_07} & 16 & 35 & 1.91
\\\hline\hline
\end{tabular}\vspace{-2mm}\end{center}
\end{table}
\subsection{The reaction \boldmath $\pi^- p \rightarrow \pi^0\pi^0 n$}
In the low-energy region, up to $\sim 1.5$\,GeV in mass, very
precise data from BNL are available \cite{Prakhov:2004zv}. These
data are included in an event-based likelihood fit (see Table
\ref{3BodyReactions}). The likelihood values have no direct
significance; only likelihood differences can be related to
probability changes when particular contributions are removed from
the fit. To demonstrate the quality of the description, we have
constructed, for every initial-pion energy, $m^2_{n\pi^0}$
versus $m^2_{n\pi^0}$ Dalitz plots for data and for Monte Carlo
events with $40\times 40$ bins. The Monte Carlo events were weighted
with the squared amplitude from our final PWA solution. The data and
weighted Monte Carlo Dalitz plots were compared; in
Table~\ref{3BodyReactions} the $\chi^2/N_{\rm bin}$ is given as well as
the number of bins with a nonzero number of Monte Carlo events.
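The Dalitz-plot comparison described above can be sketched as follows; the event samples, bin ranges, and normalization are synthetic placeholders for the real data and the PWA-weighted Monte Carlo:

```python
import numpy as np

# Bin data and weighted Monte Carlo events into a 40x40 Dalitz grid and
# form a chi-square over bins with nonzero Monte Carlo content. The
# event samples and bin ranges below are synthetic placeholders.

rng = np.random.default_rng(1)
data = rng.normal(1.8, 0.3, size=(40000, 2))     # (m2_a, m2_b) pairs
mc = rng.normal(1.8, 0.3, size=(200000, 2))
weights = np.full(len(mc), len(data) / len(mc))  # mimic PWA weighting

edges = np.linspace(0.5, 3.5, 41)                # 40 bins per axis
h_data, _, _ = np.histogram2d(data[:, 0], data[:, 1], bins=[edges, edges])
h_mc, _, _ = np.histogram2d(mc[:, 0], mc[:, 1], bins=[edges, edges],
                            weights=weights)

mask = h_mc > 0                                  # bins with MC events
chi2 = np.sum((h_data[mask] - h_mc[mask])**2 / np.maximum(h_data[mask], 1))
n_bin = int(mask.sum())
print(chi2 / n_bin, n_bin)
```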
\begin{table}[t]
\caption{\label{chisquare}Observables from $\pi$ and $\eta$
photoproduction fitted in the coupled-channel analysis and $\chi^2$
contributions. For pion production, free normalization factors and
additional systematic errors were introduced to allow for data
variation beyond statistical expectations (see text).}
\begin{center}\begin{tabular}{lcccc}
\hline\hline\\[-2ex]
$\gamma p \rightarrow \pi^0 p$ & Observ. & $N_{\rm data}$&$w_i$ & $\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2ex]
\cite{Fuchs:1996ja} (TAPS@MAMI)& $d\sigma/d\Omega$ & 1692 & 1.5&1.25 \\
\cite{Ahrens:2002gu,Ahrens:2004pf} (GDH A2)& $d\sigma/d\Omega$ & 164 & 7&1.34 \\
\cite{Bartalini:2005wx} (GRAAL)& $d\sigma/d\Omega$ & 861 & 2&1.46 \\
\cite{Bartholomy:2004uz,vanPee:2007tw} (CB-ELSA)& $d\sigma/d\Omega$ & 1106 & 3.5&1.34 \\
\cite{Dugger:2007bt} (CLAS)& $d\sigma/d\Omega$ & 592 & 5 &2.11 \\
\cite{Bartalini:2005wx,Barbiellini:1970qu,Gorbenko:1974sz,Gorbenko:1978re,Belyaev:1983xf,%
Blanpied:1992nn,Beck:1997ew,Adamian:2000yi,Blanpied:2001ae}& $\Sigma$ &1492& 3 &3.26\\
\cite{Gorbenko:1974sz,Gorbenko:1978re,Belyaev:1983xf,Booth:1976es,Feller:1976ta,%
Gorbenko:1977rd,Herr:1977vx,Fukushima:1977xj,Bussey:1979wt,Agababian:1989kd,%
Asaturian:1986bj,Bock:1998rk,Maloy:1961qy}& $T$&389& 6 &3.71\\
\cite{Gorbenko:1974sz,%
Gorbenko:1978re,Belyaev:1983xf,Maloy:1961qy,Gorbenko:1975pz,Kato:1979br,Bratashevsky:1980dk,%
Bratashevsky:1986xz}& $P$&607& 3 &3.23\\
\cite{Bussey:1979wr,Ahrens:2005zq} & $G$&75& 5 &1.50\\
\cite{Bussey:1979wr} & $H$&71& 5 &1.26\\
\cite{Ahrens:2002gu,Ahrens:2004pf} & $E$&140& 7 &1.23\\
\cite{Bratashevsky:1980dk,Avakyan:1991pj}& $O_x$&7& 10 &1.77\\
\cite{Bratashevsky:1980dk,Avakyan:1991pj}& $O_z$&7& 10 &0.46\\\hline\\[-2.3ex]
\hline\\[-2ex]
$\gamma p \rightarrow \pi^+ n$ & Observ. & $N_{\rm data}$&$w_i$ & $\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2ex]
\cite{Ecklund:1967zz,Betourne:1968bd,Bouquet:1971cv,Fujii:1971qe,%
Ekstrand:1972rt,Fujii:1976jg,Arai:1977kb,Durwen:1980mq,Althoff:1983te,%
Heise:1988ag,Buechler:1994jg,Dannhausen:2001yz,Ahrens:2006gp}& $d\sigma/d\Omega$ & 1583 & 2 &1.64 \\
\cite{Ahrens:2004pf,Ahrens:2006gp} (GDH A2)& $d\sigma/d\Omega$ & 408 & 14 &0.61 \\
\cite{Dugger:2009pn} (CLAS)& $d\sigma/d\Omega$ & 484 & 4 &1.80 \\
\cite{Blanpied:2001ae,Taylor:1960dn,Smith:1963zza,Alspector:1972pw,Knies:1974zx,%
Ganenko:1976rf,Bussey:1979ju,Getman:1981qt,Hampe:1980jb,Beck:1999ge,%
Ajaka:2000rj,Bocquet:2001ny}& $\Sigma$ &899 & 3 &3.48\\
\cite{Bussey:1979ju,Getman:1981qt,Althoff:1973kb,Arai:1973xs,Feller:1974qf,Althoff:1975kt,Genzel:1975tx,%
Althoff:1976gq,Althoff:1977ef,Fukushima:1977xh,Getman:1980pw,%
Fujii:1981kx,Dutz:1996uc}& $T$&661 & 3 &3.21\\
\cite{Bussey:1979ju,Getman:1981qt,Egawa:1981uj}& $P$&252 & 3 &2.90\\
\cite{Ahrens:2005zq,Bussey:1980fb,Belyaev:1985sp} & $G$&86 & 3 &5.64\\
\cite{Bussey:1980fb,Belyaev:1985sp,Belyaev:1986va} & $H$&128& 3& 3.90\\
\cite{Ahrens:2004pf,Ahrens:2006gp} & $E$&231& 14 & 1.55\\\hline\\[-2.3ex]
\hline\\[-2ex]
$\gamma p \rightarrow \eta p$ & Observ. & $N_{\rm data}$&$w_i$ & $\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2ex]
\cite{Krusche:nv}& $d\sigma/d\Omega$ &100 & 7 &2.16 \\
\cite{Crede:04,Bartholomy:2007zz}& $d\sigma/d\Omega$ &680 & 40 &1.47 \\
\cite{Ajaka:1998zi}& $\Sigma$ &51 & 10 &2.26 \\
\cite{Bartalini:2007fg} & $\Sigma$ &100& 15 &2.02\\
\cite{Bock:1998rk} & $T$ &50& 70 &1.48\\
\hline\\[-2ex]
\hline\\[-2ex]
\end{tabular}\vspace{-2mm}\end{center}
\end{table}
\subsection{Photoproduction of single neutral pions off protons}
References to the data on the reaction $\gamma p\to p\pi^0$ and
their $\chi^2$ contributions are collected in Table~\ref{chisquare}.
For the differential cross section, we use only the most recent
data, reported by TAPS@MAMI \cite{Fuchs:1996ja}, GDH-A2
\cite{Ahrens:2002gu,Ahrens:2004pf}, GRAAL \cite{Bartalini:2005wx},
CB-ELSA \cite{Bartholomy:2004uz,vanPee:2007tw}, and CLAS
\cite{Dugger:2007bt} which cover a wide range of energies and
angles. A large variety of older data exist which cover only a
limited fraction of the energy and angular range. These data provide
significant information on polarization observables.
\begin{figure}
\centerline{
\epsfig{file=pi0_cb_cl_gdh_compare.eps,width=0.45\textwidth,height=0.3\textwidth}
} \caption{The CB-ELSA, CLAS and GDH differential cross section on
$\gamma p\to \pi^0 p$ in the region 1500 MeV. In the Crystal Barrel
data, a common systematic error due to uncertainties in the
reconstruction efficiency is included. The curve represents
the ``first" fit without normalization (see text). The GDH data are
introduced with a large weight.}
\label{fig:pi0_cb_cl_gdh}
\end{figure}
The differential cross sections reported by the different
collaborations exhibit small but significant systematic
discrepancies practically in all mass regions; due to the small
statistical errors these are easily recognized. We show the
systematic deviations by comparing the data with a curve
representing the ``first" fit to all data, without normalization
factors.
In Fig.~\ref{fig:pi0_cb_cl_gdh} the differential cross section from
GDH-A2, CLAS and CB-ELSA are shown and compared to the preliminary
fit for the 1495-1530\,MeV mass range. The GDH data systematically
exceed the CLAS data while the CB-ELSA data fall between
these two measurements. The GDH-A2 data have a larger statistical
error than the CLAS data; we introduced them into the fit with
larger weight since they provide important information about the
difference between helicity 3/2 and 1/2 cross sections.
In the mass region 1600-1750 MeV, there are notable discrepancies
between GRAAL and CLAS data (here the CB-ELSA data fall again
between GRAAL and CLAS results). As an example, the mass region
around 1670 MeV is shown for the three data sets in
Fig.~\ref{fig:diff_pi0}. The curve corresponds to the fit where the
CLAS data are taken with statistical errors only and dominate the
solution.
\begin{figure}
\centerline{
\epsfig{file=pi0_cb_cl_gr_compare.eps,width=0.45\textwidth} }
\caption{Comparison of three data sets on the $\gamma p\to \pi^0 p$
differential cross section in the region 1670 MeV. The curve
represents the ``first" fit without normalization (see text).}
\label{fig:diff_pi0}
\end{figure}
At higher energies, only the CB-ELSA and CLAS data are available.
There are two clear discrepancies between these data sets. The first
one is located in the 1900\,MeV mass region where the CB-ELSA data
systematically exceed the CLAS data in the backward hemisphere (see
Fig.~\ref{fig:pi0_cb_cl_1900}, top). A second discrepancy shows up
above $W=2100$ MeV in the very forward angular range. Here the
corresponding CB-ELSA points are systematically lower than those
from CLAS. At $W$=2300-2400\,MeV, the two data sets are fully
consistent (see Fig.~\ref{fig:pi0_cb_cl_1900}, bottom).
\begin{figure}
\begin{center}
\epsfig{file=pi0_cb_cl_compare_3a.eps,width=0.45\textwidth,height=0.28\textwidth}\\
\epsfig{file=pi0_cb_cl_compare_3b.eps,width=0.45\textwidth,height=0.28\textwidth}
\vspace{-2mm}\end{center} \caption{The CB-ELSA and CLAS differential cross
section on $\gamma p\to \pi^0 p$ in the region of 1900 (top) and
2350 (bottom) MeV. The curve represents the ``first" fit without
normalization (see text).\vspace{-2mm}}
\label{fig:pi0_cb_cl_1900}
\end{figure}
A fraction of the discrepancies can be attributed to normalization. Most
experiments give explicit normalization errors which were not taken
into account in the ``first" fit. We then allowed for a free
normalization factor for the $\pi^0$ differential cross section,
which is determined in the fit to 1.00 (TAPS@MAMI), 1.01 (GDH-A2),
0.99 (GRAAL and CB-ELSA) and 0.95 (CLAS).
The data from the four experiments are still not statistically
compatible; at least one experiment must have additional
unrecognized systematic errors. Of course, we do not know which
one. We assume that all four experiments have systematic
errors which were not recognized. These were estimated from the
variance of the experimental results in preset bins of energy and
angle (in $\cos\theta$). For this purpose, differential cross
sections were calculated, by interpolation, for these bins. From the
variance we estimated systematic errors which increase linearly from
1\% at $W=1400$\,MeV to 9\% at $W=2450$\,MeV. These systematic
errors were added to all four data sets. With the increased
systematic errors, the data are compatible and the $\chi^2$ of a fit
reflects the quality of a fit and not the inconsistency between
different data sets. These errors are used in the fits only. In the
figures, the data and their errors are shown as quoted in the
original papers.
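The linear growth of the estimated additional systematic error with $W$ can be sketched as follows (a minimal illustration; clamping the error outside the $1400$--$2450$\,MeV range is our assumption):

```python
def added_systematic_error(W_MeV):
    """Fractional systematic error added to the cross-section data:
    1% at W = 1400 MeV rising linearly to 9% at W = 2450 MeV.
    Clamping outside this range is an assumption for illustration."""
    frac = (W_MeV - 1400.0) / (2450.0 - 1400.0)
    frac = min(max(frac, 0.0), 1.0)   # stay within the fitted range
    return 0.01 + frac * (0.09 - 0.01)
```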
The beam asymmetry $\Sigma$ has been determined in a number of
experiments
\cite{Bartalini:2005wx,Barbiellini:1970qu,Gorbenko:1974sz,Gorbenko:1978re,Belyaev:1983xf,%
Blanpied:1992nn,Beck:1997ew,Adamian:2000yi,Blanpied:2001ae}, as well
as the target asymmetry $T$
\cite{Gorbenko:1974sz,Gorbenko:1978re,Belyaev:1983xf,Booth:1976es,Feller:1976ta,%
Gorbenko:1977rd,Herr:1977vx,Fukushima:1977xj,Bussey:1979wt,Agababian:1989kd,%
Asaturian:1986bj,Bock:1998rk}, and
the polarization $P$ of the recoiling proton \cite{Gorbenko:1974sz,%
Gorbenko:1978re,Belyaev:1983xf,Maloy:1961qy,Gorbenko:1975pz,Kato:1979br,Bratashevsky:1980dk,%
Bratashevsky:1986xz}. Few data exist from experiments with polarized
photons and polarized target or from measurements of the recoil
polarization. Data on $O_x$ and $O_z$ can be found in
\cite{Bratashevsky:1980dk,Avakyan:1991pj}, on $G$ in
\cite{Bussey:1979wr,Ahrens:2005zq}, and on $H$ in
\cite{Bussey:1979wr}. Data on the helicity difference $\sigma_{3/2}-
\sigma_{1/2}$ were published in \cite{Ahrens:2002gu,Ahrens:2004pf};
in Tables \ref{3BodyReactions} and \ref{chisquare} we quote $E$
which is defined as $(\sigma_{3/2}- \sigma_{1/2})/(\sigma_{3/2}+
\sigma_{1/2})$. These data are included in the fits. Their
statistical errors are mostly large, the systematic errors are likely
small. Hence we retain the original errors.
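The quoted helicity asymmetry follows directly from the two helicity cross sections; a one-line sketch using the definition given above:

```python
def helicity_E(sigma_3_2, sigma_1_2):
    """E = (sigma_3/2 - sigma_1/2) / (sigma_3/2 + sigma_1/2),
    as defined in the text for the quoted tables."""
    return (sigma_3_2 - sigma_1_2) / (sigma_3_2 + sigma_1_2)
```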
\subsection{The reaction \boldmath$\gamma p \rightarrow \pi^+ n$}
Total cross sections were reported in
\cite{Ecklund:1967zz,Betourne:1968bd,Bouquet:1971cv,Fujii:1971qe,%
Ekstrand:1972rt,Fujii:1976jg,Arai:1977kb,Durwen:1980mq,Althoff:1983te,%
Heise:1988ag,Zenz:1988ah,Buechler:1994jg,Dannhausen:2001yz,Ahrens:2006gp,%
Dugger:2009pn}. Again, some discrepancies show up, at energies above
1600\,MeV and in the forward region, between the new CLAS data and
former measurements. An example of such discrepancies is given in
Fig.~\ref{fig:cl_said_pipn}. Normalization factors for the different
data were introduced which are determined to be in the range from
0.96 to 1.03. A consistent description was achieved by adding a
systematic error which increases linearly from 1\% at $W=1400$\,MeV
to 9\% at $W=2450$\,MeV.
\begin{figure}
\centerline{
\epsfig{file=piplusn_cb_cl_compare.eps,width=0.43\textwidth,height=0.33\textwidth}
} \caption{Older data from different experiments (top) and CLAS
differential cross section on $\gamma p\to \pi^+ n$ in the region
1630 MeV. The curve represents the ``first" fit without
normalization (see text). The consistency is excellent.}
\label{fig:cl_said_pipn}
\end{figure}
The beam asymmetry $\Sigma$ was determined in
\cite{Blanpied:2001ae,Taylor:1960dn,Smith:1963zza,Alspector:1972pw,Knies:1974zx,%
Ganenko:1976rf,Bussey:1979ju,Getman:1981qt,Hampe:1980jb,Beck:1999ge,%
Ajaka:2000rj,Bocquet:2001ny}, the target asymmetry $T$ in
\cite{Bussey:1979ju,Getman:1981qt,Althoff:1973kb,Arai:1973xs,Feller:1974qf,Althoff:1975kt,Genzel:1975tx,%
Althoff:1976gq,Althoff:1977ef,Fukushima:1977xh,Getman:1980pw,%
Fujii:1981kx,Dutz:1996uc}, the neutron recoil polarization can be
found in \cite{Bussey:1979ju,Getman:1981qt,Egawa:1981uj}. A few data
from double polarization are available: on $G$
\cite{Ahrens:2005zq,Bussey:1980fb,Belyaev:1985sp}, $H$
\cite{Bussey:1980fb,Belyaev:1985sp,Belyaev:1986va}, and on the
helicity difference $\sigma_{3/2}- \sigma_{1/2}$
\cite{Ahrens:2004pf,Ahrens:2006gp}. The data are fitted with the
errors as given in the respective papers.
\subsection{Photoproduction of \boldmath$\eta$ mesons off protons}
For photoproduction of $\eta$ mesons, differential cross sections
\cite{Krusche:nv,Dugger:2002ft,Crede:04,Bartholomy:2007zz,Bartalini:2007fg}
and the related beam asymmetry $\Sigma$
\cite{Ajaka:1998zi,Bartalini:2007fg,Elsner:2007hm} are the only
quantities which have been measured so far. Double polarization
observables are presently studied intensively at several
laboratories but so far, no results have been published. The recent
high-statistics measurements on $\gamma p\to p\eta$
\cite{Williams:2009yj,Crede:2009zz} are not yet included in the fits
presented here.
\begin{table}[t]
\caption{\label{chisquare1}Hyperon photoproduction observables
fitted in the coupled-channel analysis and $\chi^2$ contributions.}
\begin{center}\begin{tabular}{lcccc}
\hline\hline\\[-2ex]
$\gamma p \rightarrow K^+ \Lambda$ & Observ. & $N_{\rm data}$&$w_i$ & $\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2ex]
\cite{Bradford:2005pt} & $d\sigma/d\Omega$&1377 & 4 &1.81 \\
\cite{Zegers:2003ux}& $\Sigma$ &45& 10 & 1.65\\
\cite{Lleres:2007tx}& $\Sigma$ &66& 5 & 1.53\\
\cite{McNabb:2003nf} & $P$&202 & 6.5 &2.03\\
\cite{Lleres:2007tx} & $P$&66 & 3 &1.26\\
\cite{Lleres:2008em}& $T$&66 & 15 &1.26\\
\cite{Bradford:2006ba} & $C_x$&160 &11 &1.23\\
\cite{Bradford:2006ba}& $C_z$&160 & 11 &1.41\\
\cite{Lleres:2008em}& $O_x$&66 & 12 &1.30\\
\cite{Lleres:2008em}& $O_z$&66 & 15 &1.54\\\hline\\[-2.3ex]
\hline\\[-2ex]
$\gamma p \rightarrow K^+ \Sigma$ & Observ. & $N_{\rm data}$&$w_i$ & $\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2ex]
\cite{Bradford:2005pt}& $d\sigma/d\Omega$ & 1280& 2.5 &2.06 \\
\cite{Zegers:2003ux} & $\Sigma$ &45& 10 &1.11\\
\cite{Lleres:2007tx} & $\Sigma$ &42& 5 &0.90\\
\cite{McNabb:2003nf}& $P$&95 & 6 &1.45\\
\cite{Bradford:2006ba}& $C_x$&94 & 7 &2.20\\
\cite{Bradford:2006ba}& $C_z$&94 & 7 &2.00\\\hline\\[-2.3ex]
\hline\\[-2ex]
$\gamma p \rightarrow K^0 \Sigma^+$ & Obsv. & $N_{\rm data}$&$w_i$ & $\chi^2/N_{\rm data}$\\[1ex]\hline\\[-2ex]
\cite{McNabb:2003nf} & $d\sigma/d\Omega$ & 48 &2.3 &3.76 \\
\cite{Lawall:2005np} & $d\sigma/d\Omega$ & 160 &5 &0.98 \\
\cite{Castelijns:2007qt} & $d\sigma/d\Omega$ & 72 &5 &0.82 \\
\cite{Castelijns:2007qt}& $P$&72 & 20 &0.61\\
\hline\hline
\end{tabular}\end{center}
\end{table}
\subsection{The reactions \boldmath$\gamma p\to K^+\Lambda, K^+\Sigma^0$ and $K^0\Sigma^+$}
Data on hyperon photoproduction used in the present fits are
collected in Table~\ref{chisquare1}. We use the differential cross
sections for $\gamma p\to K^+\Lambda$ and $K^+\Sigma^0$ from CLAS
\cite{Bradford:2005pt}. As shown in \cite{Sarantsev:2005tg}, the
Saphir data \cite{Glander:2003jw} on differential cross sections are
approximately compatible with the CLAS data when an energy-dependent
normalization factor is introduced. The beam asymmetry was measured
at SPring-8 \cite{Zegers:2003ux} and GRAAL \cite{Lleres:2007tx}; the
$\Lambda$ polarization was deduced in \cite{Lleres:2007tx} and
\cite{McNabb:2003nf}. Target asymmetry $T$ and $O_x$ and $O_z$ for
$\gamma p\to K^+\Lambda$ were reported in \cite{Lleres:2008em}. In
\cite{Bradford:2006ba}, CLAS data on the spin transfer coefficients
$C_x$ and $C_z$ were presented for both $\gamma p\to K^+\Lambda$
and $\gamma p\to K^+\Sigma^0$.
Differential cross sections on the reaction $\gamma p \rightarrow
K^0 \Sigma^+$ were measured by CLAS \cite{McNabb:2003nf}, Saphir
\cite{Lawall:2005np} and CB-ELSA/TAPS \cite{Castelijns:2007qt}. For
the latter data we include the determination of the $P$ polarization
derived from an analysis of the $\Sigma^+$ decay.
\subsection{The reactions \boldmath$\gamma p\to p\pi^0\pi^0$ and $\gamma p\to p\pi^0\eta$}
The two reactions $\gamma p\to p\pi^0\pi^0$
\cite{Thoma:2007bm,Sarantsev:2007bk} and $\gamma p\to p\pi^0\eta$
\cite{Weinheimer:2003ng,Horn:2007pp,Horn:2008qv} are included event
by event using an extended likelihood method. The quality of the fit
can be judged from the description of Dalitz plots. For the $\gamma
p\to p\pi^0\pi^0$ reaction we constructed Dalitz plots in
$m^2_{p\pi^0}$ versus $m^2_{p\pi^0}$ with $20\times 20$ bins, for
the four 100 MeV $\gamma p$ invariant mass intervals from 1350 to
1750 MeV. For the $\gamma p\to p\eta\pi^0$ reaction, $m^2_{p\pi^0}$
versus $m^2_{p\eta}$ Dalitz plots were constructed for seven 100 MeV
$\gamma p$ invariant mass intervals from 1700 to 2400 MeV. The
number of bins with nonzero Monte Carlo events and $\chi^2/N_{bin}$
are given in Table~\ref{3BodyReactions}. The beam asymmetries
\cite{Assafiri_03,Gutz:2008zz} and the helicity dependence $E$
\cite{Ahrens_07} are included in the fit in the form of histograms.
Both these reactions have been studied intensively, see
\cite{Braghieri_95,Haerter_97,Zabrodin_97,Zabrodin_99,Wolf_00,%
Kleber_00,Langgaertner_01,Ripani_03,Ahrens_03,Kotulla_04,%
Ahrens_05,Strauch_05,Ajaka_07,Krambrich:2009te} for the first and
\cite{Nakabayashi:2006ut,Ajaka:2008zz,Kashevarov:2009ww} for the
latter reaction. For these data only selected histograms are
available; they are not included in our fits.
\section{Partial wave amplitudes}
A general expression for the decomposition of the two-particle
scattering amplitude $A(s,t)$ into partial wave amplitudes
$A^{\beta\beta'}_n(s)$ which describe production, propagation and
decay of a two-particle system with fixed total angular momentum
$J$, parity and (if conserved) $C$-parity can be written as:
\begin{eqnarray}
A(s,t)&\!=&\!\!\sum\limits_{\beta\beta' n}\!\! A^{\beta\beta'}_n(s)
Q^{(\beta) \dagger}_{\mu_1\ldots\mu_n}(k)
F^{\mu_1\ldots\mu_n}_{\nu_1\ldots\nu_n}
Q^{(\beta')}_{\nu_1\ldots\nu_n}(q)~~~~
\label{decomp_0}
\end{eqnarray}
where $k_i$ are initial and $q_i$ are final particle momenta,
$s=(k_1+k_2)^2=(q_1+q_2)^2=P^2$, $t=(k_1-q_1)^2=(k_2-q_2)^2$,
$k=(k_1-k_2)/2$, $q=(q_1-q_2)/2$ and $n=J$ for a boson system and
$n=J-1/2$ for a fermion one. The vertices
$Q^{(\beta')}_{\nu_1\ldots\nu_n}$ and
$Q^{(\beta)\dagger}_{\mu_1\ldots\mu_n}$ ('$\dagger$' stands for
hermitian conjugation) describe the transition of the system into
the initial- and final-state particles, and depend on the total and
relative momenta. The indices $\beta$ and $\beta'$ list quantum
numbers of the production and decay amplitudes, e.g. isospin, spin
and orbital angular momenta. The tensor
$F^{\mu_1\ldots\mu_n}_{\nu_1\ldots\nu_n}$ depends only on the total
momentum $P$ and describes the tensor structure of the partial wave.
It is often called a projection operator. The formalism for
construction of vertices for meson-baryon partial waves and
projection operators is given in
\cite{Anisovich:2004zz,Anisovich:2006bc}. For convenience we provide
key formulas for projection operators and vertices in Appendix A.
In the case of resonance production, the total amplitude $A(s,t)$
can be expanded into a sum of partial wave amplitudes multiplied by
vertices, see eq.~(\ref{decomp_0}). Here the partial wave amplitudes
$A^{\beta\beta'}_n(s)$ provide the energy dependence of the
resonance, which can be parameterized, for example, as an $N/D$
amplitude, as a K-matrix or, in the simplest case, as a Breit-Wigner
amplitude \cite{Anisovich:2008zz}. For non-resonant contributions,
like $t$ and $u$ channel exchanges, the situation is different. In
many partial wave analyses (including the present one) these
contributions are simply added to the resonant part of the total
amplitude and the sum is used to fit the experimental data. However,
one needs to know the contribution of $t$ and $u$-exchanges in every
partial wave if the final partial wave amplitudes are to be compared
with results from other analyses. This decomposition is also
required when rescattering between non-resonant and resonant parts
of the amplitude should be taken into account. For the non-resonant
contributions used in the energy-dependent fits, one therefore has to
solve an inverse problem: extracting the partial wave amplitudes from
the total amplitude.
This problem can be solved by using the orthogonality condition for
partial wave operators. Multiplying the total amplitude from eq.
(\ref{decomp_0}) with initial and final projection operators and
vertices and integrating over solid angle of the initial and final
momenta we obtain
\begin{eqnarray}
\label{decomp_2}
&&F^{\tau_1\ldots\tau_n}_{\mu_1\ldots\mu_n}\int
\frac{d\Omega_k}{4\pi}\frac{d\Omega_q}{4\pi}
Q^{(\alpha)}_{\mu_1\ldots\mu_n}(k) A(s,t)
Q^{(\alpha')}_{\nu_1\ldots\nu_n}(q)
F^{\nu_1\ldots\nu_n}_{\eta_1\ldots\eta_n}
\nonumber \\ &&
= (-1)^n
F^{\tau_1\ldots\tau_n}_{\eta_1\ldots\eta_n}
\sum \limits_{\beta\beta'}
A^{\beta\beta'}_n(s)W^{\alpha\beta}_n(k^2_\perp)
W^{\beta'\alpha'}_n(q^2_\perp)\,,
\end{eqnarray}
where $k^2_\perp$ and $q^2_\perp$ are squared relative momenta
orthogonal to the total momentum of the system $P$ (see Appendix A).
The factor $W_n^{\alpha\beta}$ corresponds to the on-shell one-loop
amplitude for transition between two vertices
$Q^{(\beta)}_{\mu_1\ldots\mu_n}$. It can be calculated as
\begin{eqnarray}
W^{\alpha\beta}_n(k^2_\perp)\!&=&\!
\frac{F^{\alpha_1\ldots\alpha_n}_{\mu_1\ldots\mu_n}}{\xi_n}
\!\!\int\!\!
\frac{d\Omega_k}{4\pi}Q^{(\alpha)}_{\mu_1\ldots\mu_{n}}(k)
Q^{(\beta)}_{\nu_1\ldots\nu_{n}}(k)F_{\alpha_1\ldots\alpha_n}^{\nu_1\ldots\nu_n}
\nonumber \\
\xi_n&=&(-1)^nF^{\nu_1\ldots\nu_n}_{\mu_1\ldots\mu_n}g_{\mu_1\nu_1}\ldots
g_{\mu_n\nu_n}\,.
\label{wn}
\end{eqnarray}
For meson-nucleon and $\gamma N$ vertices, the $W^{\alpha\beta}_n$
were calculated in \cite{Anisovich:2006bc}. For convenience we
provide the corresponding expressions in Appendix B and expressions
for partial wave amplitudes for photoproduction of a single meson
are given in Appendix C.
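As a spinless toy analogue of this projection (an illustration only; the vertices and projection operators of eq.~(\ref{decomp_2}) are omitted), partial wave amplitudes can be recovered from a total amplitude via Legendre orthogonality:

```python
import numpy as np

# Toy total amplitude A(cos t) = sum_l (2l+1) a_l P_l(cos t); the
# orthogonality of the P_l plays the role of eq. (decomp_2).
a_true = np.array([0.3 + 0.1j, -0.2 + 0.4j, 0.05 - 0.02j])   # l = 0, 1, 2
x, w = np.polynomial.legendre.leggauss(16)                    # quadrature nodes
A = sum((2*l + 1) * a_true[l] * np.polynomial.legendre.Legendre.basis(l)(x)
        for l in range(3))

def project(l):
    """a_l = (1/2) * integral over [-1, 1] of A(x) P_l(x) dx."""
    return 0.5 * np.sum(w * A * np.polynomial.legendre.Legendre.basis(l)(x))
```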
\subsection{Parameterization of the partial wave amplitudes}
In the present analysis, the partial waves at low energies are
described in the framework of a K-matrix/P-vector approach.
High-mass resonances (above 2.2 GeV) are described by relativistic
multi-channel Breit-Wigner amplitudes. In the case of
photoproduction reactions, the regge\-ized t- and u-channel
amplitudes were added to the resonant part. Then the multipoles were
calculated by solving eq.~(\ref{decomp_2}).
\subsubsection{Pion induced reactions in K-matrix approach}
The multi-channel amplitude is given by the matrix $\hat
\textbf{A}(s)$ where the matrix element ${A}_{ab}(s)$ defines the
transition amplitude from state 'a' to state 'b'. In
eq.~(\ref{decomp_0}) this amplitude is denoted as
$A^{\beta\beta'}_n(s)$ to emphasize the different spin-parity
contributions. Now we will use the notation ${A}_{ab}(s)$ which
identifies the initial and the final channels, e.g. $\gamma N$, $\pi
N$, $\eta N$, $K\Lambda$, $\pi \Delta$, and omit the indices
representing the partial wave. Scattering between different channels
is taken into account explicitly in the K-matrix; the amplitude is
given by
\begin{eqnarray}
\mathbf{\hat A}(s) \;=\; \mathbf{\hat K}\;(\mathbf{\hat I}\;-\;i
\mathbf{\hat \rho \hat K})^{-1}\,, \label{k_matrix}
\end{eqnarray}
where $\mathbf{\hat K}$ is the K-matrix, $\mathbf{\hat I}$ is the
unit matrix, and $\mathbf{\hat \rho}$ is the diagonal matrix of the
corresponding phase space. For two-particle states (for example $\pi
N$), the phase space is calculated as a simple loop diagram (see
\cite{Anisovich:2006bc}). For $J=L+1/2$, the so-called '+' states,
the phase space is equal to
\begin{eqnarray}
\label{psr_plus}
\rho_+(s)=\frac{\alpha_L}{2L+1} \frac{2|\vec
k|^{2L+1}}{\sqrt{s}}\frac{k_{10}+m_N}{2m_N} \frac
{F(k^2)}{B(L,r,k^2)}
\end{eqnarray}
and for '-' states with $J=L-1/2$, the phase space is given by
\begin{eqnarray}
\label{psr_minus}
\rho_{-}(s)=\frac{\alpha_L}{L} \frac{2|\vec
k|^{2L+1}}{\sqrt{s}}\frac{k_{10}+m_N}{2m_N} \frac
{F(k^2)}{B(L,r,k^2)}
\end{eqnarray}
where $s$ is the total energy squared, $k$ is the relative momentum
between baryon and meson, $\vec k$ its three-vector component,
$k_{10}$ is the energy of the baryon (with mass $m_N$) calculated in
the c.m.s. of the reaction. $J$ is the total, $L$ the orbital
angular momentum of the baryon-plus-meson system, and the
coefficient $\alpha_L$ is equal to:
\begin{eqnarray}
\alpha_L=\prod\limits_{n=1}^L\frac{2n-1}{n}\,.
\end{eqnarray}
The phase volume is regularized at large energies by a standard
Blatt-Weisskopf form $B(L,r,k^2)$ with $r=0.8$\,fm, and a
form-factor $F(k^2)$ of the type
\begin{eqnarray}
F(k^2)=\frac{\Lambda+0.5}{\Lambda+k^2}\quad{\rm or}\quad
F(k^2)=\frac{\Lambda+2.5}{\Lambda+s}\,.
\end{eqnarray}
Fits with both parameterizations yield nearly identical results. The
parameter $\Lambda$ was taken from our previous analysis
\cite{Sarantsev:2005tg,Anisovich:2005tf} and fixed to 1.5 for the
first parameterization and 3.0 for the second one. The exact
formulas for the three-body phase volume are given in
\cite{Anisovich:2006bc}.
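The '+'-state phase space of eq.~(\ref{psr_plus}) can be sketched numerically (GeV units, $\pi N$ masses as an example; the explicit Blatt-Weisskopf form used below, $B=1$ for $L=0$ and $B=1+k^2r^2$ for $L=1$, is an assumed standard choice, and the first form factor $F(k^2)$ with $\Lambda=1.5$ is taken from the text):

```python
import math

def alpha_L(L):
    """alpha_L = prod_{n=1}^{L} (2n-1)/n from the text (alpha_0 = 1)."""
    out = 1.0
    for n in range(1, L + 1):
        out *= (2 * n - 1) / n
    return out

def rho_plus(s, m_N=0.938, m_pi=0.138, L=1, Lam=1.5):
    """Sketch of rho_+(s), eq. (psr_plus), for a '+' state (J = L + 1/2).
    Only L <= 1 Blatt-Weisskopf factors are implemented; toy values."""
    k2 = (s - (m_N + m_pi)**2) * (s - (m_N - m_pi)**2) / (4.0 * s)
    k = math.sqrt(k2)
    k10 = math.sqrt(m_N**2 + k2)          # baryon energy in the c.m.s.
    r = 0.8 / 0.1973                      # r = 0.8 fm converted to GeV^-1
    B = 1.0 if L == 0 else 1.0 + k2 * r * r   # assumed standard form
    F = (Lam + 0.5) / (Lam + k2)              # first form factor of the text
    return (alpha_L(L) / (2 * L + 1) * 2.0 * k**(2 * L + 1) / math.sqrt(s)
            * (k10 + m_N) / (2.0 * m_N) * F / B)
```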
The K-matrix $\mathbf{\hat K}$ is parameterized as follows:
\begin{eqnarray}
K_{ab}\;=\;\sum_\alpha \frac{g_a^{(\alpha)} g_b^{(\alpha)}} {M^2_\alpha - s} \;+\; f_{ab},
\label{Kmat}
\end{eqnarray}
where $M_\alpha$ and $g_a^{(\alpha)}$ are the mass and the coupling
constant of the resonance $\alpha$, and where $f_{ab}$ describes a
direct (non-resonant) transition from the initial state $a$ to the
final state $b$, e.g. $\pi N\to\Lambda K$.
For most partial waves it is sufficient to assume that $f_{ab}$ are
constants. The $S_{11}$ and $S_{31}$ waves require a slightly
more complicated structure; we use
\begin{eqnarray}
f_{ab} =\frac{f_{ab}^{(1)}+f_{ab}^{(2)}\sqrt s}{s-s_0^{ab}}\,.
\end{eqnarray}
Here the $f_{ab}^{(i)}$ and $s_0^{ab}$ are constants
which are determined in the fits. In the case of the $S_{11}$ wave,
this more flexible parameterization is required to describe $\pi
N\to N\pi$, $\pi N\to N\eta$, and $\eta N\to N\eta$ transitions. Let
us note that this form is similar to the one used by SAID
\cite{Arndt:2006bf}.
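Equations~(\ref{k_matrix}) and (\ref{Kmat}) can be sketched numerically for a toy two-channel case (all pole positions, couplings and phase-space values below are invented for illustration); the multichannel unitarity relation ${\rm Im}\,\hat A = \hat A^*\hat\rho\hat A$, valid for real symmetric $\hat K$ and real $\hat\rho$, serves as a check:

```python
import numpy as np

def k_matrix_amplitude(s, poles, f_bg, rho):
    """A = K (I - i rho K)^{-1} with K_ab = sum_a g_a g_b/(M^2 - s) + f_ab.
    `poles` is a list of (M_alpha, g_vector); `rho` holds the (real)
    channel phase-space factors at this s.  Toy inputs only."""
    K = np.array(f_bg, dtype=complex)
    for M, g in poles:
        g = np.asarray(g, dtype=complex)
        K += np.outer(g, g) / (M**2 - s)
    n = len(rho)
    return K @ np.linalg.inv(np.eye(n) - 1j * np.diag(rho) @ K)

# Toy two-channel example: one K-matrix pole plus constant background.
A = k_matrix_amplitude(2.1,
                       poles=[(1.6, [0.9, 0.5])],
                       f_bg=[[0.1, 0.0], [0.0, 0.1]],
                       rho=[0.4, 0.3])
```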
\subsubsection{The photoproduction amplitude}
The photoproduction amplitude can be written in the P-vector
approach \cite{Chung:1995dx}. The P-vector amplitude for the initial
state '$a$' photoproduction is then given by
\begin{eqnarray}
A_a \;=\; \hat P_b\;(\hat I\;-\;i\hat \rho \hat K)^{-1}_{ba}\,.
\end{eqnarray}
The production vector $\mathbf{\hat P}$ is parameterized as:
\begin{eqnarray}
P_{b}\;=\;\sum_\alpha \frac{ g_{\gamma \rm N}^{(\alpha)} g_b^{(\alpha)}}{M^2_\alpha - s} \;+\;
\tilde f_{b}
\label{Pvect}
\end{eqnarray}
where $g_{\gamma\rm N}^{(\alpha)}$ are the photo-couplings of the
resonance $\alpha$ and where non-resonant production of a final
state $b$ is described by contributions $\tilde f_{b}$. In general,
these are functions of $s$ but mostly, a constant $\tilde f_{b}$ is
sufficient.
The P-vector approach is based on the idea that a channel with a
weak coupling can be omitted from the K-matrix. Indeed, adding to
the K-matrix the $\gamma N$ channel would not change the properties
of the amplitude. Due to its weak coupling, the $\gamma N$
interaction needs to be taken into account only once; this is done in the
form of a P-vector. Loops due to virtual decays of a resonance into
$N\gamma$ and back into the resonance can be neglected safely. A
similar approach can be used to describe decay modes with weak
couplings. The amplitude for the transition into such a channel can
be written as D-vector amplitude,
\begin{eqnarray}
\label{amplitude} A_a \;=\; \hat D_a + [\hat K (\hat I\;-\;i\hat
\rho \hat K)^{-1}\,\hat \rho ]_{ab} \hat D_{b}\;,
\end{eqnarray}
where the parameterization of the
D-vector is similar to the parameterization of the P-vector:
\begin{eqnarray}
D_{b}\;=\;\sum_\alpha \frac{g_b^{(\alpha)}g_{f}^{(\alpha)} }{M^2_\alpha - s} \;+\;
\tilde d_{b}\,.
\label{Dvect}
\end{eqnarray}
Here $g_{f}^{(\alpha)}$ is the coupling of a resonance to the final
state and $\tilde d_{b}$ is a non-resonant production from the
K-matrix-channel $b$ to the final state. As in the case of the
P-vector approach, channels with weak couplings can be taken into
account only in their final decay, and are not taken into account in
the rescattering. Let us note that if the final state is already
included as one of K-matrix channels, the amplitude
(\ref{amplitude}) reproduces the K-matrix amplitude
(\ref{k_matrix}).
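The P-vector construction can be sketched in the same toy spirit (all couplings and phase-space values invented for illustration); note that the photo-coupling enters only through $\hat P$, never through $\hat K$:

```python
import numpy as np

def p_vector_amplitude(s, poles, g_gamma, f_bg, rho, tilde_f):
    """Sketch of A_a = P_b (I - i rho K)^{-1}_{ba}, eqs. (k_matrix),
    (Kmat) and (Pvect).  `poles` = list of (M_alpha, g_vector) for the
    hadronic channels, `g_gamma` = photo-couplings g_gammaN^(alpha),
    `tilde_f` = non-resonant production constants.  Toy inputs only."""
    K = np.array(f_bg, dtype=complex)
    P = np.array(tilde_f, dtype=complex)
    for (M, g), gg in zip(poles, g_gamma):
        g = np.asarray(g, dtype=complex)
        K += np.outer(g, g) / (M**2 - s)
        P += gg * g / (M**2 - s)
    n = len(rho)
    return P @ np.linalg.inv(np.eye(n) - 1j * np.diag(rho) @ K)
```

In the single-channel limit this reduces to $A = P/(1-i\rho K)$, so for real $P$ and $K$ the photoproduction amplitude carries the phase of the hadronic K-matrix amplitude, as expected from Watson's theorem.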
In cases where both initial and final coupling constants are weak,
we use an approximation which we call PD-vector. In this case the
amplitude is given by
\begin{eqnarray}
A_{ab} \;=\; \hat G_{ab} + \hat P_{a}
(\hat I\;-\;i\hat \rho \hat K)^{-1}\,\hat \rho \hat D_{b}\;,
\end{eqnarray}
where $\hat G_{ab}$ corresponds to a tree diagram for the transition
from state '$a$' to state '$b$':
\begin{eqnarray}
G_{ab}\;=\;\sum_\alpha \frac{g_a^{(\alpha)}g_{b}^{(\alpha)} }{M^2_\alpha - s} \;+\;
\tilde h_{ab}\,.
\label{PDvect}
\end{eqnarray}
Here $g_{i}^{(\alpha)}$ is the production coupling of the resonance.
For photoproduction, $g_{a}^{(\alpha)}=g_{\gamma\rm N}^{(\alpha)}$
holds true, and $\tilde h_{ab}$ is the direct non-resonant
transition from the initial to the different final channels.
\subsection {Reggeized meson exchange amplitudes}
At high energies, angular distributions of photo-produced mesons
exhibit clear peaks in the forward direction. These peaks originate
from meson exchanges in the t-channel. Their contributions are
parameterized as $\pi$, $\rho(\omega)$, $\rm K$ or $\rm K^*$
exchanges.
The most straightforward parameterization of particle exchange
amplitudes is the exchange of Regge trajectories. The invariant part
of the t-channel exchange amplitude can be written as
\cite{Anisovich:2008zz}
\begin{eqnarray}
T(s,t)=g_1(t)g_2(t) R(\pm,\nu,t)\,\;\;\;\; \nu=\frac 12 (s-u).
\end{eqnarray}
Here, $g_i$ are vertex functions, and $R(+,\nu,t)$ and $R(-,\nu,t)$
are Reggeon propagators for exchanges with positive and negative
signature. Exchanges of $\pi$ and $\rm K$ have positive, $\rho$,
$\omega$ and $\rm K^*$ exchanges have negative signature.
The $\rho$ trajectory has a negative signature and the corresponding
propagator is equal to
\begin{eqnarray}
R_\rho(-,\nu,t)=\frac{ie^{-i\frac{\pi}{2}\alpha_\rho(t)}} {\cos
(\frac{\pi}{2}\alpha_\rho(t)) \Gamma \left (\frac
{\alpha_\rho(t)}{2} +\frac 12\right )} \left
(\frac{\nu}{\nu_0}\right )^{\alpha_\rho(t)} \ ,\quad
\end{eqnarray}
where $\alpha_{\rho}(t)=0.50+0.85t$. The $\omega$ trajectory is
identical to the $\rho$ trajectory. The expressions for other
Reggeon propagators used in the fit are given in Appendix D.
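The $\rho$-Reggeon propagator above can be evaluated directly; a sketch with $\nu$ and $t$ in GeV$^2$ (the scale $\nu_0=1$\,GeV$^2$ is our assumption, and the expression is valid away from the $\Gamma$-function poles):

```python
import cmath, math

def regge_rho(nu, t, nu0=1.0):
    """R_rho(-,nu,t) as given in the text, with the linear trajectory
    alpha_rho(t) = 0.50 + 0.85 t.  nu0 = 1 GeV^2 is an assumed scale."""
    a = 0.50 + 0.85 * t
    return (1j * cmath.exp(-0.5j * math.pi * a)
            / (math.cos(0.5 * math.pi * a) * math.gamma(0.5 * a + 0.5))
            * (nu / nu0) ** a)
```

The zeros of $1/\Gamma(\alpha_\rho/2+1/2)$ cancel the poles of $1/\cos(\pi\alpha_\rho/2)$ at negative odd $\alpha_\rho$, so the propagator stays finite along the physical $t<0$ region.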
\section{Partial wave analysis}
\subsection{Fit of the $\pi^0 p$ and $\pi^+n$ photoproduction reactions}
The new CLAS data on the $\gamma p\to \pi^0p$ reaction are compared
to our fit in Fig.~\ref{fig:pi0_clas}. The $\chi^2$ contributions of
this fit from the various channels are given in
Tables~\ref{piN_data_table}\,-\,\ref{chisquare1}. We remind the
reader that we estimated additional systematic errors for the
$\gamma p\to \pi^0p$ and $\gamma p\to \pi^+n$ differential cross
sections; these additional errors are not shown in
Fig.~\ref{fig:pi0_clas}.
Some systematic deviations between data and fit can be recognized in
the mass region below 1800\,MeV. These are mostly the result of
discrepancies between the data. In Fig.~\ref{fig:pi0_cb} we show for
comparison some CB-ELSA data, and in Fig.~\ref{fig:pi0_graal} some
GRAAL data, for invariant masses which are close to the CLAS values.
At higher energies the solution describes the new CLAS data very
well; the CB-ELSA data are also described with rather good accuracy.
The new data on the $\gamma p\to \pi^+n$ and the fit curve are shown
in Fig.~\ref{fig:pipn_clas}. Here, the total normalization factors
resolve the discrepancies at masses below 1600 MeV rather well. The
description of the earlier data on the $\gamma p\to \pi^+n$ in this
mass region is shown in Fig.~\ref{fig:pipn_said}.
\begin{figure}[pt]
\centerline{
\epsfig{file=pi0_cla7_pap.eps,width=0.5\textwidth,height=0.66\textheight}
} \caption{CLAS data on the differential cross section for $\gamma
p\to \pi^0p$ with the current solution. Only statistical errors for the
CLAS data are shown.}\vspace{4mm}
\label{fig:pi0_clas}
\centerline{
\epsfig{file=pi0_3200_pap.eps,width=0.50\textwidth,height=0.22\textheight}
} \caption{CB-ELSA data on the $\gamma p\to \pi^0p$ differential
cross section for energies below 1.8 GeV. Only the errors quoted by
the CB-ELSA collaboration are shown.\vspace{3mm}}
\label{fig:pi0_cb}
\end{figure}
\begin{figure}
\centerline{
\hspace{5mm}\epsfig{file=pi0_xsgr_pap.eps,width=0.48\textwidth} }
\caption{GRAAL data on the $\gamma p\to \pi^0p$ differential cross
section at energies close to the CLAS values. Only the errors quoted
by the GRAAL collaboration are shown.\vspace{5mm}}
\label{fig:pi0_graal}
\centerline{
\epsfig{file=piplusn_cl09_pap.eps,width=0.50\textwidth,height=0.6\textheight}
} \caption{CLAS data on the $\gamma p\to \pi^+n$ differential cross
section with the current solution. \vspace{4mm}}
\label{fig:pipn_clas}
\end{figure}
\subsection{Photoproduction multipoles}
We now turn to a discussion of the partial wave amplitudes. It
should be stressed that the amplitudes we give for $\gamma p \to
p\pi^0$ and $\gamma p \to n\pi^+$ are constrained by a large number
of other reactions. This is particularly important in the vicinity
of thresholds. Of course, the elastic $\pi N$ scattering amplitude
and the pion photoproduction amplitude are influenced by the opening
of new channels, and the couplings to the new channels can be estimated from
their effect on the scattering and photoproduction amplitudes. But
this is rather indirect, and it is desirable to take the inelastic
channels into account directly.
The multipoles for $\pi^0$ photoproduction are shown in
Fig.~\ref{fig:pi0_mult_re} in comparison to the SAID SP09K2700
\cite{SAID} and
\begin{figure}[pt]
\centerline{
\epsfig{file=piplusn_3200_pap_4.eps,width=0.50\textwidth} }
\caption{The $\gamma p\to \pi^+n$ differential cross section with
the current solution. The data are taken from
\cite{Bouquet:1971cv,Ekstrand:1972rt,Fujii:1976jg,Arai:1977kb,Durwen:1980mq,Althoff:1983te,Heise:1988ag,Dannhausen:2001yz}.}
\label{fig:pipn_said}
\end{figure}
MAID 2007 \cite{MAID} solutions, those for $\gamma p\to \pi^+n$ in
Fig.~\ref{fig:pipn_mult_re}. The errors cover a large number of fits
which differ mostly by the parameterization of the $2\pi N$ channel
at masses above 1.8\,GeV. For convenience of the reader, we list in
Table \ref{waves} the lowest photoproduction multipoles and the
corresponding partial waves.
\begin{figure*}[pt]
\centerline{
\epsfig{file=mult_pi0p_stu_re.eps,width=0.46\textwidth,height=0.45\textheight}~~
\epsfig{file=mult_pi0p_stu_im.eps,width=0.46\textwidth,height=0.45\textheight}~~
} \caption{\label{fig:pi0_mult_re} The real (two left-hand columns)
and imaginary (two right-hand columns) part of multipoles for the
$\pi^0$ photoproduction. The errors are systematic and cover a large
number of fits (see the text). The dashed curves correspond to the
SAID solution SP09K2700 \cite{SAID} and the dotted curves to the
MAID solution 2007 \cite{MAID}.} \centerline{
\epsfig{file=mult_pipn_stu_re.eps,width=0.46\textwidth,height=0.45\textheight}~~
\epsfig{file=mult_pipn_stu_im.eps,width=0.46\textwidth,height=0.45\textheight}~~
} \caption{\label{fig:pipn_mult_re} The real (two left-hand columns)
and imaginary (two right-hand columns) part of multipoles for the
$\gamma p\to \pi^+n$ reaction. The errors are systematic and cover a
large number of fits (see the text). The dashed curves correspond to
the SAID solution SP09K2700 \cite{SAID} and the dotted curves to the
MAID solution 2007 \cite{MAID}.}
\end{figure*}
\begin{figure*}[pt]
\centerline{
\epsfig{file=mult_12_re.eps,width=0.46\textwidth,height=0.45\textheight}~~
\epsfig{file=mult_12_im.eps,width=0.46\textwidth,height=0.45\textheight}~~
} \caption{\label{fig:iso12_mult_re} The real (two left-hand columns)
and imaginary (two right-hand columns) parts of the isospin-1/2
photoproduction multipoles. The errors are systematic and cover a
large number of fits (see the text). The dashed curves correspond to
the SAID solution SP09K2700 \cite{SAID} and the dotted curves to the
MAID solution 2007 \cite{MAID}.} \centerline{
\epsfig{file=mult_32_re.eps,width=0.46\textwidth,height=0.45\textheight}~~
\epsfig{file=mult_32_im.eps,width=0.46\textwidth,height=0.45\textheight}~~
} \caption{\label{fig:iso32_mult_re} The real (two left-hand columns)
and imaginary (two right-hand columns) parts of the isospin-3/2
photoproduction multipoles. The errors are systematic and cover a
large number of fits (see the text). The dashed curves correspond to
the SAID solution SP09K2700 \cite{SAID} and the dotted curves to the
MAID solution 2007 \cite{MAID}.}
\end{figure*}
\begin{table}[pb]
\caption{\label{waves}Photoproduction multipoles and partial waves.
In general, two multipoles lead to one spin-parity wave.}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccccc}
\hline\hline \multicolumn{2}{c}{Multipoles} &
\multicolumn{2}{c}{Partial waves}&$J^P$\\\hline
$E_0^+$& - & $S_{11}$ & $S_{31}$&$1/2^-$\\
- & $M_1^-$ & $P_{11}$ & $P_{31}$&$1/2^+$\\
$E_1^+$& $M_1^+$ & $P_{13}$ & $P_{33}$&$3/2^+$\\
$E_2^-$& $M_2^-$ & $D_{13}$ & $D_{33}$&$3/2^-$\\
$E_2^+$& $M_2^+$ & $D_{15}$ & $D_{35}$&$5/2^-$\\
$E_3^-$& $M_3^-$ & $F_{15}$ & $F_{35}$&$5/2^+$\\
\hline\hline
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{center}
\end{table}
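The counting behind Table~\ref{waves} is simple angular-momentum arithmetic: a multipole $E_{l\pm}$ or $M_{l\pm}$, with $l$ the pion orbital angular momentum, feeds the $\pi N$ partial wave with $J=l\pm 1/2$ in both isospin states. A small illustrative helper (hypothetical, not part of the analysis code) encodes this:

```python
# Illustrative helper (not part of the analysis code) encoding the
# counting of Table `waves': the multipole E_{l+-} or M_{l+-}, with l
# the pion orbital angular momentum, feeds the pi N partial wave with
# J = l +- 1/2, in both isospin states I = 1/2 and I = 3/2.
def partial_waves(multipole: str):
    l, sign = int(multipole[1]), multipole[2]
    J2 = 2 * l + (1 if sign == '+' else -1)       # 2J
    letter = 'SPDFGH'[l]                          # spectroscopic letter for L = l
    return f"{letter}1{J2}", f"{letter}3{J2}"     # L_{2I 2J} for I = 1/2, 3/2

# partial_waves('E0+') -> ('S11', 'S31'); partial_waves('M2-') -> ('D13', 'D33')
```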
Most amplitudes derived within the SAID, MAID, or BnGa approach
yield consistent results, at least qualitatively. The best agreement
is found for the $M_1^+$ amplitude, which describes the spin-flip
transition from the proton to the $\Delta$ resonance and its
excitations. The $\Delta$ resonance is
fully elastic, hence the agreement in the low-mass region is not
unexpected. Even the small $E_1^+$ multipoles are not inconsistent.
Some multipoles which we discuss next show significant differences
between the different approaches. The $E_0^+$ multipole has a
similar structure in all three approaches but shows significant
differences in detail. In the BnGa solution, the electric dipole
transition $E_0^+$ exceeds the other solutions in the threshold
regions, likely due to a larger role of the subthreshold $\Lambda
K^+$ amplitude. The differences are even larger for the $M_1^-$
multipole; this is perhaps not unexpected in view of the notorious
difficulties with the $1/2^+$ partial wave. Surprisingly, the
multipoles for $\gamma p\to n\pi^+$ are much more consistent.
The differences in the $E_2^-$ and $M_2^-$ multipoles can be assigned to
additional $\Delta_{3/2^-}(1940)$ and $\Delta_{3/2^-}(2260)$
resonances introduced to fit data on $\gamma p\to p\pi^0\eta$
\cite{Horn:2007pp,Horn:2008qv}. Significantly different are the
multipoles leading to $5/2^-$ states. In our fits, the $E_2^+$ and
$M_2^+$ multipoles include an additional resonance $N_{5/2^-}(2060)$
\cite{Anisovich:2005tf}. We note that the (dominant) resonance
contributions are compatible with the Watson theorem.
\subsection{Properties of contributing resonances}
A large number of resonances is identified in the fits. Some have a
strong coupling to pion photoproduction; for others, the product of
squared photocoupling constant and $N\pi$ decay branching ratio is
small and they contribute mostly to inelastic channels; their
properties will be discussed elsewhere. These latter resonances are
listed in Table~\ref{allb}. They do help to improve the fit to pion
photoproduction but their helicity amplitudes are not well defined,
and photo-couplings and decay branching ratios of these states can
be varied within large limits without significant $\chi^2$
deterioration. All these solutions were included in the error
estimation procedure.
The pole position of the states, photo-couplings and $\pi N$
branching ratios for the states contributing strongly to pion
photoproduction are given in Table~\ref{resonances}. The inclusion
of the new CLAS data noticeably stabilized the solution and
reduced most of the errors. The pole positions are determined by
finding zeros of the real and the imaginary part of the denominator
in the partial wave amplitudes. Thus two lines are defined in the
complex energy plane. Their crossing point defines the pole
position. The coupling constants, including the helicity amplitudes,
are calculated as residues of the P-vector/K-matrix amplitude at the
pole position and are given together with their phases. The $\pi N$
branching ratios are calculated as squared residue-couplings,
multiplied by the phase volume taken at the Breit-Wigner resonance
mass. We note that nucleon-meson or nucleon-photon couplings are
defined at the pole position of a resonance, and are complex
numbers. In Table~\ref{resonances} we give the (complex)
photon-couplings at the pole positions; their analogues, the
helicity amplitudes $A_{1/2}$ and $A_{3/2}$ are defined for
Breit-Wigner amplitudes, not for more general formalisms. The method
by which we derive Breit-Wigner parameters and helicity amplitudes is
discussed below. For the $P_{13}(1720)$ and $D_{33}(1700)$
resonances, the Breit-Wigner width is much larger than one might
expect from the pole position. In the $N\pi$ channel, the visible
width is much closer to this expectation. The effect is known from
the $a_0(980)$, which has a visible width of about 50\,MeV in the
$\pi\eta$ mass distribution but a much larger width in the $K\bar K$
mass distribution, because of the rapidly
opening $K\bar K$ phase space. In the $P_{13}(1720)$ and
$D_{33}(1700)$ case, the phase space for $N\pi\pi$ 3-body decays
grows rapidly with increasing mass.
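The pole-search procedure described above can be illustrated on a toy amplitude. The sketch below assumes a hypothetical single-channel Breit-Wigner with constant width (the actual fit uses the full P-vector/K-matrix amplitude); all numbers are illustrative, $\Delta$-like values:

```python
import cmath

# Toy illustration of the pole search: a hypothetical single-channel
# amplitude T(s) = g^2/D(s) with constant-width denominator
# D(s) = M^2 - s - i*M*Gamma (Delta-like numbers, in GeV; the real
# analysis uses the full P-vector/K-matrix amplitude instead).
M, Gamma, g2 = 1.232, 0.118, 1.0

def D(s):
    return M**2 - s - 1j * M * Gamma

# Find the common zero of Re(D) and Im(D) in the complex s-plane
# by a Newton iteration with a numerical derivative.
s, h = complex(M**2), 1e-8
for _ in range(50):
    dD = (D(s + h) - D(s)) / h
    s -= D(s) / dD

W = cmath.sqrt(s)            # pole position in the W = sqrt(s) plane
residue = -g2 / dD           # residue of T(s) = g^2/D(s) at the pole
print(f"Re(pole) = {1e3*W.real:.0f} MeV, -2Im(pole) = {-2e3*W.imag:.0f} MeV")
```

For these toy numbers the pole lands close to the nominal mass; with the energy-dependent width of the real amplitude the pole need not coincide with the Breit-Wigner mass, which is exactly the $P_{33}(1232)$ and $P_{13}(1720)$ situation in Table~\ref{resonances}.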
\begin{table}[pt]
\caption{\label{allb}Baryon resonances included in the fit which
contribute little to photoproduction of pions.}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{cccccc}
\hline \hline \hspace{-2mm}
\hspace{-5mm}&\hspace{-4mm}Mass\hspace{-4mm}&\hspace{-7mm}
Width&&\hspace{-4mm}Mass\hspace{-4mm}&\hspace{-7mm} Width\\\hline
\hspace{-1mm}$P_{11}(1860)$\hspace{-3mm}&\hspace{-3mm}1900$\pm
30$\hspace{-3mm}&\hspace{-3mm}300$\pm$40\hspace{-3mm}&\hspace{-2mm}
$P_{13}(1900)$\hspace{-3mm}&\hspace{-3mm}1960$\pm$30\hspace{-3mm}&\hspace{-3mm}185$\pm$40\\
\hspace{-1mm}$D_{13}(1700)$\hspace{-3mm}&\hspace{-3mm}1730$\pm$40\hspace{-3mm}&\hspace{-3mm}310$\pm$60\hspace{-3mm}&\hspace{-3mm}
$D_{13}(1875)$\hspace{-3mm}&\hspace{-3mm}1870$\pm$25\hspace{-3mm}&\hspace{-3mm}150$\pm$40\\
\hspace{-1mm}$P_{33}(1600)$\hspace{-3mm}&\hspace{-3mm}1640$\pm$40\hspace{-3mm}&\hspace{-3mm}480$\pm$100\hspace{-3mm}&\hspace{-3mm}
$P_{33}(1920)$\hspace{-3mm}&\hspace{-3mm}1950$\pm$40\hspace{-3mm}&\hspace{-3mm}330$\pm$50\\
\hspace{-1mm}$F_{15}(2000)$\hspace{-3mm}&\hspace{-3mm}1910$\pm$50\hspace{-3mm}&\hspace{-3mm}360$\pm$80\hspace{-3mm}&\hspace{-3mm}
$D_{15}(2070)$\hspace{-3mm}&\hspace{-3mm}2065$\pm$25\hspace{-3mm}&\hspace{-3mm}340$\pm$40\\
\hspace{-1mm}$D_{33}(1940)$\hspace{-3mm}&\hspace{-3mm}1995$\pm$40\hspace{-3mm}&\hspace{-3mm}360$\pm$50\hspace{-3mm}&\hspace{-3mm}
$D_{13}(2170)$\hspace{-3mm}&\hspace{-3mm}2160$\pm$35\hspace{-3mm}&\hspace{-3mm}370$\pm$50\\
\hspace{-1mm}$D_{33}(2360)$\hspace{-3mm}&\hspace{-3mm}2360$\pm$50\hspace{-3mm}&\hspace{-3mm}480$\pm$80&
\hspace{-3mm}
$S_{31}(1900)$\hspace{-3mm}&\hspace{-3mm}1955$\pm$30\hspace{-3mm}&\hspace{-3mm}335$\pm$40\\
\hline\hline
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{center}
\end{table}
\begin{table}[pt]
\caption{\label{resonances} Pole position (in MeV), photo-couplings
calculated as residues at the pole (in GeV$^{-1/2}$$\times$$10^{3}$, phases in
degrees) and corresponding Breit-Wigner parameters for states
contributing strongly to pion photoproduction (branching ratios
in percent). The PDG values are
given in parentheses. For $F_{35}(1905)$ two solutions are given.}
\begin{footnotesize}
\renewcommand{\arraystretch}{1.00}
\begin{center}
\begin{tabular}{lcc}
\hline\hline
State & $S_{11}(1535)$ & $S_{11}(1650)$
\\\hline
Re(pole) & $1510\!\pm\!25$ ($1510\!\pm\!20$)& $1670\!\pm\!35$ ($1655\!\pm\!15$)\\
-2Im(pole) & $140\!\pm\!30$ ($170\!\pm\!80$) & $170\!\pm\!40$ ($165\!\pm\!15$) \\
$A^{1/2}(\gamma p)$ & $90\!\pm\!25\,/0^o\!\pm\!45^o$ & $65\!\pm\!30$\,/$28^o\pm 15^o$ \\
$M_{BW}$ & $1535\!\pm\!20$ ($1535\!\pm\!10$)& $1680\!\pm\!40$ ($1658\!\pm\!12$) \\
$\Gamma_{BW}$ & $170\!\pm\!35$ ($150\!\pm\!25$) & $170\!\pm\!45$ ($165\!\pm\!20$) \\
$\Gamma_{\pi N}/\Gamma$ & $35\pm 15$ ($45\pm 10$) & $50\pm
25$ ($78\pm 18$) \\ \hline State &
$P_{11}(1440)$ & $S_{31}(1620)$
\\\hline
Re(pole) & $1370\!\pm\!4$ ($1365\!\pm\!15$) &$1596\!\pm\!7$ ($1600\!\pm\!10$) \\
-2Im(pole) & $193\!\pm\!7$ ($~190\!\pm\!30$) &~$130\!\pm\!10$ ($~118\!\pm\!3$) \\
$A^{1/2}(\gamma p)$ & -$48\!\pm\!12$\,/-$58^o\!\pm\!20^o$ &$62\!\pm\!10$\,/-$0^o\!\pm\!20^o$ \\
$M_{BW}$ & $1440\!\pm\!12$ ($1445\!\pm\!25$)& $1625\!\pm\!10$ ($1630\!\pm\!30$) \\
$\Gamma_{BW}$ & $335\!\pm\!50$($325\!\pm\!125$)& $148\!\pm\!15$ ($143\!\pm\!8$) \\
$\Gamma_{\pi N}/\Gamma$ & $60\pm 6$ ($65\pm 15$) & $23\pm 5$ ($25\pm 5$) \\
\hline State & $P_{11}(1710)$ &
$P_{33}(1232)$ \\\hline
Re(pole) &$1708\!\pm\!18$ ($1720\!\pm\!50$) &$1211\!\pm\!1$ ($1210\!\pm\!1$) \\
-2Im(pole) &$200\!\pm\!20$ ($230\!\pm\!150$) &$100\!\pm\!2$ ($~100\!\pm\!2$) \\
$A^{1/2}(\gamma p)$ &$24\!\pm\!8$\,/-$20^o\!\pm\!60^o$ &-$136\!\pm\!5$\,/-$17^o\!\pm\!5^o$ \\
$A^{3/2}(\gamma p)$ & ~ &-$267\!\pm\!8$\,/-$3^o\!\pm\!3^o$ \\
$M_{BW}$ &$1725\!\pm\!25$ ($1710\!\pm\!30$) &$1230\!\pm\!2$ ($1232\!\pm\!1$) \\
$\Gamma_{BW}$ &$200\!\pm\!35$ ($150\!\pm\!100$) &$112\!\pm\!4$ ($118\!\pm\!2$) \\
$\Gamma_{\pi N}/\Gamma$ &$12\pm 6$ ($15\pm 5$) & $100$ ($100$) \\
\hline State & $P_{13}(1720)$ &
$D_{33}(1700)$ \\\hline
Re(pole) &$1660\!\pm\!35$ ($1675\!\pm\!15$) &$1650\!\pm\!30$ ($1650\!\pm\!30$)\\
-2Im(pole) &$360\!\pm\!80$ ($190\!\pm\!85$) &$275\!\pm\!35$ ($200\!\pm\!40$) \\
$A^{1/2}(\gamma p)$ &$140\!\pm\!50$\,/-$35^o\!\pm\!25^o$&$160\!\pm\!45$\,/$35^o\!\pm\!12^o$\\
$A^{3/2}(\gamma p)$ &$110\!\pm\!50$\,/$10^o\!\pm\!35^o$&$165\!\pm\!40$\,/$40^o\!\pm\!18^o$\\
$M_{BW}$ & $1770\!\pm\!100$ ($1725\!\pm\!25$) &$1780\!\pm\!40$ ($1710\!\pm\!40$) \\
$\Gamma_{BW}$ &$650\!\pm\!120$ ($225\!\pm\!75$) & $580\!\pm\!120$ ($300\!\pm\!100$) \\
$\Gamma_{\pi N}/\Gamma$ & $14\pm 5$ ($15\pm 5$) &$16\pm 7$ ($15\pm 5$) \\
\hline State & $D_{13}(1520)$ & $F_{35}(1905)$ (sol.1)\\
\hline
Re(pole) &$1512\!\pm\!3$ ($1510\!\pm\!5$) &$1800\!\pm\!15$ ($1830\!\pm\!5$) \\
-2Im(pole) &$110\!\pm\!6$ ($112\!\pm\!7$) &$300\!\pm\!20$ ($282\!\pm\!18$) \\
$A^{1/2}(\gamma p)$ &-$30\!\pm\!6$\,/$15^o\!\pm\!10^o$&$28\!\pm\!10$\,/-$35^o\!\pm\!15^o$ \\
$A^{3/2}(\gamma p)$ &$130\!\pm\!6$\,/$6^o\!\pm\!5^o$ &-$42\!\pm\!12$\,/-$25^o\!\pm\!15^o$ \\
$M_{BW}$ &$1524\!\pm\!4$ ($1520\!\pm\!5$) & $1890\!\pm\!25$ ($1890\!\pm\!25$) \\
$\Gamma_{BW}$ &$117\!\pm\!6$ ($112\!\pm\!13$) & $335\!\pm\!30$ ($335\!\pm\!65$) \\
$\Gamma_{\pi N}/\Gamma$ & $57\pm 5$ ($60\pm 5$) & $12\pm 3$ ($12\pm 3$) \\
\hline State & $D_{15}(1675)$ &
$F_{35}(1905)$ (sol.2) \\\hline
Re(pole) &$1650\!\pm\!5$ ($1660\!\pm\!5$) &$1805\!\pm\!15$ ($1830\!\pm\!5$) \\
-2Im(pole) &$143\!\pm\!7$ ($~138\!\pm\!12$) &$310\!\pm\!20$ ($282\!\pm\!18$) \\
$A^{1/2}(\gamma p)$ &$20\!\pm\!4$\,/-$6^o\!\pm\!6^o$ &$47\!\pm\!10$\,/-$30^o\!\pm\!12^o$ \\
$A^{3/2}(\gamma p)$ &$24\!\pm\!8$\,/-$6^o\!\pm\!6^o$ & $0\!\pm\!3$ \\
$M_{BW}$ &$1678\!\pm\!5$ ($1675\!\pm\!5$) & $1850\!\pm\!20$ ($1890\!\pm\!25$) \\
$\Gamma_{BW}$ &$177\!\pm\!15$ ($148\!\pm\!18$) & $345\!\pm\!30$ ($335\!\pm\!65$) \\
$\Gamma_{\pi N}/\Gamma$ & $37\pm 5$ ($40\pm 5$) & $12\pm 3$ ($12\pm 3$) \\
\hline State &$F_{15}(1680)$ &
$F_{37}(1950)$ \\\hline
Re(pole) &$1672\!\pm\!4$ ($1673\!\pm\!8$) & $1882\!\pm\!8$ ($1880\!\pm\!10$) \\
-2Im(pole) &$114\!\pm\!12$ ($133\!\pm\!12$) & $262\!\pm\!12$ ($240\!\pm\!20$) \\
$A^{1/2}(\gamma p)$ &-$12\!\pm\!6$\,/-$45^o\!\pm\!30^o$&-$81\!\pm\!8$\,/-$15^o\!\pm\!12^o$ \\
$A^{3/2}(\gamma p)$ &$130\!\pm\!8$\,/$0^o\!\pm\!10^o$ &-$93\!\pm\!8$\,/-$15^o\!\pm\!15^o$ \\
$M_{BW}$ &$1685\!\pm\!5$ ($1685\!\pm\!5$) & $1928\!\pm\!8$ ($1933\!\pm\!18$) \\
$\Gamma_{BW}$ &$117\!\pm\!12$ ($130\!\pm\!10$) & $290\!\pm\!14$($285\!\pm\!50$) \\
$\Gamma_{\pi N}/\Gamma$ & $66\pm 8$ ($68\pm 3$) & $44\pm 8$ ($40\pm 5$) \\
\hline\hline
\end{tabular}
\end{center}
\end{footnotesize}
\renewcommand{\arraystretch}{1.0}
\end{table}
Within the quoted errors, the results from the new solution are
mostly compatible with those published in \cite{Thoma:2007bm}. The
largest changes to our previous solution are observed for the
photo-couplings of the $S_{31}(1620)$ resonance (which was (130$\pm$
50)\,GeV$^{-1/2}$$\times$$10^{3}$ in \cite{Thoma:2007bm}), and for
the small helicity component $A_{1/2}$ of the $D_{13}(1520)$ (which
was (7.0$\pm$1.5)\,GeV$^{-1/2}$$\times$$10^{3}$). The changes are
largely due to the inclusion of additional polarization data for
$\gamma p\to \pi^0 p$ and $\gamma p \to \pi^+ n$.
The new data also require $P_{11}(1710)$. In
\cite{Sarantsev:2007bk}, this resonance improved the description of
the data slightly but we were not forced to introduce it. In the
present fit, there are three resonances above the nucleon in the
$P_{11}$ wave: the Roper resonance $P_{11}(1440)$, the
$P_{11}(1710)$, and the newly proposed $P_{11}(1860)$.
We found two minima for the photo-couplings of the $F_{35}(1905)$
state. Both solutions are given in Table~\ref{resonances}
($5^{\rm th}$ and $6^{\rm th}$ row, $2^{\rm nd}$ column). The first
solution corresponds well to the PDG average values, while the
second solution has an almost vanishing helicity-3/2 coupling. Both
solutions reproduce the single-meson photoproduction data with the same
quality; however, the second solution provides a better likelihood
for the $\gamma p\to \pi^0\eta p$ reaction. The analysis of the
high-energy data on two-pion photoproduction will help to define
which solution is the physical one. Also the forthcoming double
polarization data will provide new and important constraints on the
amplitudes.
The pole structure of two resonances is found to be ambiguous. The
pole of the $S_{11}(1650)$ resonance is located between the $\Lambda
K$ and $\Sigma K$ thresholds. In one set of acceptable solutions we
found large couplings of this state to these channels and a
complicated pole structure of two or even more poles close to the
two thresholds. We included such pole positions and corresponding
residues in the errors given in Table~\ref{resonances}. The
forthcoming analysis of the data on $\pi p\to K\Lambda$ should
help to define the $S_{11}$ pole structure more accurately; however,
new data on $\pi N\to K\Lambda$ and $\pi N\to K\Sigma$ would be
extremely valuable. The fit of two-pion photoproduction data
suggests a substantial coupling of $P_{13}(1720)$ to the
$D_{13}(1520)\pi$ channel. Thus we observe here a double pole
structure near the $D_{13}(1520)\pi$ threshold which results in
rather large errors and in difficulties in identifying the
corresponding Breit-Wigner parameters. We believe that the
forthcoming polarization data on two-pion photoproduction will
significantly improve the accuracy of the determination of the
$P_{13}(1720)$ pole structure. A more precise definition of the
$P_{13}(1720)$ pole with its $N\pi\pi$ couplings
\begin{table*}[pt]
\caption{\label{helicities}Helicity amplitudes $A_{1/2}$ and
$A_{3/2}$ for $N^\ast$ and $\Delta^\ast$ from this work, from SAID08
~\protect\cite{SAID}, from MAID07~\protect\cite{MAID}, from the
Gie\ss en model \protect\cite{Penner:2002md,Shklyar:2006xw} and
estimates from Ref.~\protect\cite{Amsler:2008zzb}.} \vspace{2mm}
\renewcommand{\arraystretch}{1.2}
\begin{center}\begin{tabular}{rrrrrrrrrrr} \hline\hline Resonance &
\multicolumn{5}{c}{$A_{1/2}$ (GeV$^{-1/2}$$\times$$10^{3}$)} &
\multicolumn{5}{c}{$A_{3/2}$ (GeV$^{-1/2}$$\times$$10^{3}$)} \\
& BnGa09 &\hspace{-2mm} FA08\hspace{2mm} &MAID07\hspace{-2mm}&\hspace{-2mm}Gie\ss en\hspace{-4mm}&\hspace{-2mm}PDG\hspace{2mm}&\hspace{-4mm}BnGa09\hspace{4mm}& \hspace{-3mm} FA08 \hspace{3mm} &\hspace{-2mm}MAID07\hspace{-2mm}&\hspace{-2mm}Gie\ss en\hspace{-2mm}&\hspace{-2mm} PDG \\
\hline\hline
$S_{11}(1535)$ &$90\!\pm\!15$ & 100.9$\pm$3.0 &\hspace{-4mm} 66\hspace{4mm} &\hspace{-2mm}95 \hspace{2mm} & 90$\pm$30 && && &\\
$S_{11}(1650)$ &$60\!\pm\!20$ & 9.0$\pm$9.1 &\hspace{-4mm} 33\hspace{4mm} &\hspace{-2mm}57 \hspace{2mm} & 53$\pm$16 && && &\\
$P_{11}(1440)$ &-$52\!\pm\!10$& $-$56.4$\pm$1.7 &\hspace{-4mm} $-$61\hspace{4mm} &\hspace{-2mm}-84 \hspace{2mm} & $-$65$\pm$4 && && &\\
$P_{11}(1710)$ &$25\!\pm\!10$&& &\hspace{-2mm}-50 \hspace{2mm} & $9\!\pm\!22$ && && &\\
$P_{13}(1720)$ &$130\!\pm\!50$& 90.5$\pm$3.3 &\hspace{-4mm} 73\hspace{4mm} &\hspace{-2mm}-65 \hspace{2mm} & 18$\pm$30 &$100\!\pm\!50$& $-$36.0$\pm$3.9 & \hspace{-2mm} $-$11 \hspace{2mm} & \hspace{-2mm} 35 \hspace{2mm} & $-$19$\pm$20 \\
$D_{13}(1520)$ &-$32\!\pm\!6$ & $-$26$\pm$1.5 &\hspace{-4mm} $-$27\hspace{4mm} &\hspace{-2mm}-15 \hspace{2mm} & $-$24$\pm$9 &$138\!\pm\!8$& 141.2$\pm$1.7 & \hspace{-2mm} 161 \hspace{2mm} & \hspace{-2mm} 146 \hspace{2mm} & 166$\pm$5 \\
$D_{15}(1675)$ &$21\!\pm\!4$ & 14.9$\pm$2.1 &\hspace{-4mm} 15\hspace{4mm} &\hspace{-3mm}9 \hspace{3mm} & 19$\pm$8 &$24\!\pm\!8$& 18.4$\pm$2.1 & \hspace{-2mm} 22 \hspace{2mm} & \hspace{-2mm} 21 \hspace{2mm} & 15$\pm$9 \\
$F_{15}(1680)$ &-$12\!\pm\!6$ & $-$17.6$\pm$1.5 &\hspace{-4mm} $-$25\hspace{4mm} &\hspace{-3mm}3 \hspace{3mm} & $-$15$\pm$6 &$136\!\pm\!12$& 134.2$\pm$1.6 & \hspace{-2mm} 134 \hspace{2mm} & \hspace{-2mm} 116 \hspace{2mm} & 133$\pm$12 \\
\hline
$S_{31}(1620)$ &$63\!\pm\!12$& 47.2$\pm$2.3 &\hspace{-4mm} 66\hspace{4mm} &\hspace{-2mm} -50\hspace{2mm} & 27$\pm$11 && & &\\
$P_{33}(1232)$ &-$136\!\pm\!5$& $-$139.6$\pm$1.8 &\hspace{-4mm} $-$140\hspace{4mm} &\hspace{-2mm} -128\hspace{2mm} & $-$135$\pm$6 &-$267\!\pm\!8$ & $-$258.9$\pm$2.3& \hspace{-2mm} $-$265 \hspace{2mm} & \hspace{-2mm} -247 \hspace{2mm} & $-$250$\pm$8 \\
$D_{33}(1700)$ &$160\!\pm\!45$& 118.3$\pm$3.3 &\hspace{-4mm} 226\hspace{4mm} &\hspace{-2mm} 96\hspace{2mm} & 104$\pm$15&$160\!\pm\!40$& 110.0$\pm$3.5& \hspace{-2mm} 210 \hspace{2mm} & \hspace{-2mm} 154 \hspace{2mm} & 85$\pm$22 \\
$F_{35}(1905)$ &$28\!\pm\!12$& 11.4$\pm$8.0 &\hspace{-4mm} 18\hspace{4mm} &\hspace{-2mm} \hspace{2mm} & 26$\pm$11&-$42\!\pm\!15$& $-$51.0$\pm$8.0& \hspace{-2mm} $-$28 \hspace{2mm} && $-$45$\pm$20 \\
or: &($48\!\pm\!12$)& & && &($0\!\pm\!3$)& & & \\
$F_{37}(1950)$ &-$83\!\pm\!8$& $-$71.5$\pm$1.8 &\hspace{-4mm} $-$94\hspace{4mm} &\hspace{-2mm} \hspace{2mm} & $-$76$\pm$12&-$92\!\pm\!8$& -$96\!\pm\!8$ & \hspace{-2mm} $-$121 \hspace{2mm} && $-$97$\pm$10 \\
\hline\hline
\end{tabular}\end{center}
\end{table*}\noindent
may also help to resolve the discrepancies in the determination of
its $A_{3/2}$ helicity amplitude when different analyses are
compared.
The Review of Particle Properties lists Breit-Wigner parameters and
real helicity amplitudes. In the K-matrix approach, a photo-produced
resonance is described by a P-vector (eq.~\ref{Pvect}). Even if the
photo-coupling constant $g_{\gamma N}^{(\alpha)}$ is real, the
photo-coupling at the resonance position, calculated as the residue of
the amplitude $P_b$ at the pole, will in general be complex. To
allow for a comparison with other determinations, we define helicity
amplitudes by the following procedure: a Breit-Wigner amplitude is
constructed with an adjustable mass and a width which is
parameterized as a sum of all partial widths, $\sum \rho_i g_i^2$.
The total width is scaled by one parameter. This scaling
parameter as well as the Breit-Wigner mass are adjusted to reproduce
the pole position of the P-vector amplitude. The Breit-Wigner
helicity amplitudes $A_{1/2}$ and $A_{3/2}$ are defined by the
condition that the residues of the Breit-Wigner photoproduction
amplitude reproduce the magnitude of the original residues of the
P-vector/K-matrix amplitude.
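A minimal numerical sketch of this matching procedure, under strong simplifying assumptions (a single decay channel with two-body phase space, a constant coupling, and an invented target pole; the actual fit uses the full multichannel amplitude):

```python
import cmath

# Minimal sketch of the matching procedure described above, under
# strong simplifying assumptions: a single decay channel (N pi) with
# two-body phase space rho(s), width Gamma(s) = c*rho(s), and an
# invented target pole. The Breit-Wigner mass M and the width scale c
# are adjusted until the denominator vanishes at the target pole.
m_pi, m_N = 0.138, 0.938                      # GeV
target = (1.660 - 0.180j) ** 2                # hypothetical pole in s = W^2

def rho(s):                                   # two-body phase space, continued to complex s
    return cmath.sqrt((s - (m_N + m_pi) ** 2) * (s - (m_N - m_pi) ** 2)) / s

def D(M, c):                                  # Breit-Wigner denominator at the target pole
    return M ** 2 - target - 1j * cmath.sqrt(target) * c * rho(target)

M, c, h = 1.7, 0.3, 1e-7                      # start values, step for derivatives
for _ in range(40):                           # 2-d Newton iteration on (M, c)
    f = D(M, c)
    dM = (D(M + h, c) - f) / h                # dD/dM
    dc = (D(M, c + h) - f) / h                # dD/dc
    det = dM.real * dc.imag - dc.real * dM.imag
    M -= (f.real * dc.imag - f.imag * dc.real) / det
    c -= (f.imag * dM.real - f.real * dM.imag) / det

Gamma_BW = c * rho(M ** 2).real               # visible width at the Breit-Wigner mass
print(f"M_BW = {M*1e3:.0f} MeV, Gamma_BW = {Gamma_BW*1e3:.0f} MeV")
```

For this invented pole at $W=(1660-i\,180)$\,MeV the procedure returns a Breit-Wigner mass above the real part of the pole, qualitatively like the $P_{13}(1720)$ case discussed above.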
In Table \ref{helicities} we compare our results on $A_{1/2}$ and
$A_{3/2}$ for $N^\ast$ and $\Delta^\ast$ with previous
determinations of these quantities: the values obtained
by SAID, MAID, and the Gie\ss en model, and the values listed by
the PDG \cite{Amsler:2008zzb}. First we notice that our errors are
much larger than those given by FA08; MAID and Gie\ss en do not give
any errors. We believe that the FA08 systematic errors are
underestimated: the impact of variations in the couplings to
inelastic channels can hardly be tested using only reactions with
$N\pi$ in the final state. The errors we quote are not statistical
errors; those are small. Our errors are derived from a large number
of fits, changing the number of resonances, switching couplings to
inelastic channels on and off, and using different start values for
the fits.
For most resonances, reasonable consistency between the different
analyses is found. In particular the helicity amplitudes for
photoproduction of the Roper resonance from SAID, MAID, and BnGa are
fully consistent (the Gie\ss en result is a bit higher) even though
mass, width, and $N\pi$ decay branching fractions differ somewhat.
BnGa and SAID, e.g., find,
respectively,\\[-3ex]
\begin{center}
\renewcommand{\arraystretch}{1.2}\begin{tabular}{cccc}
& M (MeV)& $\Gamma$ (MeV)&$\Gamma_{N\pi}/\Gamma_{\rm
tot}$\\\hline
BnGa & $1440\pm 12$&$335\pm 50$&$0.60\pm 0.06$\\
FA08& 1485& 284 & 0.79\\\hline
\end{tabular}\end{center}
\noindent In our analysis, the Roper resonance is fully constrained:
from three of the four reactions, $\pi N$ elastic scattering,
$\gamma p\to N\pi$, $\pi^-p\to p\pi^0\pi^0$, and $\gamma p\to
p\pi^0\pi^0$, the amplitude for the fourth reaction can be predicted.
Hence we are particularly confident that these results are correct.
We comment briefly on further differences. The PDG result for the
$A_{1/2}$ amplitude of (53$\pm$16)\,GeV$^{-1/2}$$\times$$10^{3}$ for
producing $S_{11}(1650)$ was driven by the 1995 VPI result
(69$\pm$5)\,GeV$^{-1/2}$$\times$$10^{3}$ \cite{Arndt:1995ak} and by
the small value (22$\pm$7)\,GeV$^{-1/2}$$\times$$10^{3}$ obtained in
\cite{Dugger:2007bt}. The most recent FA08 analysis gives
(9.0$\pm$9.1)\,GeV$^{-1/2}$$\times$$10^{3}$, a value which is much
smaller and which is not confirmed here; we find
(60$\pm$20)\,GeV$^{-1/2}$ $\times$$10^{3}$, in close agreement with
the Gie\ss en result. Part of the discrepancy with FA08 is certainly
due to the $S_{11}(1650)$ branching ratio to the $\pi N$ channel; in
FA08 this is fixed to be 100\% while we find (50$\pm$25)\%. Of
course, photoproduction defines only the product of the helicity and
$\pi N$ couplings.
Possibly related are the differences in the helicity amplitudes for
$P_{13}(1720)$. Our value for $A_{1/2}$ is compatible with the new
FA08 analysis and in conflict with the value quoted by the PDG.
Our value for the $A_{3/2}$ helicity amplitude for $P_{13}(1720)$
production is incompatible with all other determinations, even in
the sign. Also the Gie\ss en results are at variance with the
other determinations. Clearly, more data are required to resolve
this discrepancy; the results from double polarization experiments
carried out at present in different laboratories will very likely be
decisive.
There is the possibility that the discrepancies in the properties
of $P_{13}(1720)$ have a physical origin. In \cite{Ripani:2002ss} it
was found that data on the reaction $e p \to e' p \pi^+\pi^-$ could
be described only when resonance parameters were drastically changed
with respect to published results, or when a new resonance in the
$P_{13}$ wave was introduced. Apparently, the $P_{13}(1720)$
properties are different in $N\pi$ and in $N\pi\pi$; this might be a
hint for the presence of a close-by state in the same partial wave.
\section{Summary}
We have presented results from a partial wave analysis on a large
variety of different reactions, from $\pi N$ elastic scattering to
photoproduction of multibody final states. The main emphasis of this
paper was the determination of the electric and magnetic
multipoles leading to the production of neutral or charged pions in
photo-induced reactions off protons. The multipoles are mostly
consistent with previous analyses but a few significant
discrepancies call for clarifications. The analysis provides masses,
widths, and helicity amplitudes for several known resonances. Masses,
widths, and $\pi N$ partial decay widths of all resonances
agree very well with established values. Only the photocoupling of
the $P_{13}(1720)$ resonance differs markedly from the PDG values and
from the values found in a recent analysis of the CLAS
collaboration. This discrepancy may be a further hint for the
conjecture \cite{Ripani:2002ss} that the $P_{13}(1720)$ resonance
may have a more complicated structure than usually assumed.
\subsection*{Acknowledgements}
We would like to thank the members of SFB/TR16 for continuous
encouragement. We acknowledge financial support from the Deutsche
Forschungsgemeinschaft (DFG) within the SFB/TR16 and from the
Forschungszentrum J\"ulich within the FFE program. The collaboration
with St. Petersburg received funds from DFG and the Russian
Foundation for Basic Research.
\section*{Appendix A: The structure of the fermion propagator}
We consider scattering of two particles with momenta $k_1$ and $k_2$
in the initial and $q_1$ and $q_2$ in the final state. There are
three independent momenta. It is convenient to choose the total
four-momentum of the system $P=k_1+k_2=q_1+q_2$ and two relative
momenta $k^\perp_\mu$ and $q^\perp_\mu$ which are orthogonal to the
total momentum:
\begin{eqnarray}
k^\perp_\mu&=&\frac 12 (k_1-k_2)_\nu g_{\mu\nu}^\perp,\qquad
q^\perp_\mu=\frac 12 (q_1-q_2)_\nu g_{\mu\nu}^\perp, \nonumber \\
&&g^\perp_{\mu\nu}=g_{\mu\nu}-\frac{P_\mu
P_\nu}{P^2}.
\end{eqnarray}
The tensor $F^{\mu_1\ldots\mu_n}_{\nu_1\ldots\nu_n}$ depends only on
the total momentum $P$ ($s=P^2$) and describes the tensor structure
of the partial wave. It can be calculated as a product of two
polarization tensors $\Psi^{\alpha}_{\nu_1\ldots\nu_n}$ summed over
possible polarizations:
\begin{eqnarray}
F^{\mu_1\ldots\mu_n}_{\nu_1\ldots\nu_n}=\sum\limits_\alpha
\Psi^{\alpha *}_{\mu_1\ldots\mu_n}
\Psi^{\alpha}_{\nu_1\ldots\nu_n}\,.
\end{eqnarray}
For every set of indices, $F^{\mu_1\ldots\mu_n}_{\nu_1\ldots\nu_n}$
satisfies the properties of the polarization tensor: it is
symmetric under permutation of any two indices, traceless, and for
$n>0$ orthogonal to the total momentum of the system. It is usually
normalized by the condition
\begin{eqnarray}
F^{\mu_1\ldots\mu_n}_{\nu_1\ldots
\nu_n}F^{\nu_1\ldots\nu_n}_{\xi_1\ldots\xi_n}=
(-1)^nF^{\mu_1\ldots\mu_n}_{\xi_1\ldots\xi_n}
\end{eqnarray}
and is often called a projection operator: its convolution with
another tensor by one set of indices results in a tensor which obeys
the symmetry properties of the corresponding partial wave.
In the case of a fermionic system,
$F^{\mu_1\ldots\mu_n}_{\nu_1\ldots\nu_n}$ can be written in the form
\begin{eqnarray}
F^{\mu_1\ldots\mu_n}_{\nu_1\ldots\nu_n}\!=\!(-1)^n
\frac{\sqrt{s}\!+\!\hat P}{2\sqrt{s}}
O^{\mu_1\ldots\mu_n}_{\xi_1\ldots \xi_n}
T^{\xi_1\ldots\xi_n}_{\beta_1\ldots \beta_n} O^{\beta_1\ldots
\beta_n}_{\nu_1\ldots\nu_n}\,.
\label{fp}
\end{eqnarray}
Here, $(\sqrt s+\hat P)$ corresponds to the numerator of a
propagator describing a particle with $J=1/2$ and $n\!=\!J\!-\!1/2$
($\sqrt s\!=\!M$ for a stable particle). We define
\begin{eqnarray}
T^{\xi_1\ldots\xi_n}_{\beta_1\ldots \beta_n}&=& \frac{n+1}{2n\!+\!1}
\big( g_{\xi_1\beta_1}\!-\! \frac{n}{n\!+\!1}\sigma_{\xi_1\beta_1}
\big) \prod\limits_{i=2}^{n}g_{\xi_i\beta_i},
\\ \nonumber \\
\sigma_{\alpha_i\alpha_j}&=&\frac 12
(\gamma_{\alpha_i}\gamma_{\alpha_j}-
\gamma_{\alpha_j}\gamma_{\alpha_i}).
\label{t1}
\end{eqnarray}
We introduced the factor $1/(2\sqrt s)$ in the propagator, which
removes the divergence of this function at large energies. For a
stable particle this means that the bispinors are normalized as follows:
\begin{eqnarray}
\bar u(k_N) u(k_N)\!=\!1\;,\;\;
\sum\limits_{polarizations}\!\!\!\!\!\! u(k_N)\bar u(k_N)
\!=\!\frac{m\!+\!\hat k_N}{2m}\;.
\label{bisp_norm}
\end{eqnarray}
Here and below, $\hat k\equiv\gamma_\mu k_\mu$.
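This normalization can be verified numerically; the sketch below uses the standard Dirac representation and an arbitrary made-up on-shell momentum (illustrative only, not part of the paper's formalism):

```python
import numpy as np

# Numerical check (illustrative) of the bispinor normalization above in
# the standard Dirac representation, for an arbitrary on-shell momentum.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
zero = np.zeros((2, 2), dtype=complex)
g0 = np.block([[s0, zero], [zero, -s0]])
gammas = [g0] + [np.block([[zero, si], [-si, zero]]) for si in (sx, sy, sz)]

m, kvec = 0.938, np.array([0.3, -0.2, 0.4])           # GeV, made-up momentum
E = np.sqrt(m**2 + kvec @ kvec)
k_low = np.array([E, *(-kvec)])                        # k_mu = g_{mu nu} k^nu

# u(k) with normalization ubar u = 1
N = np.sqrt((E + m) / (2 * m))
sk = kvec[0] * sx + kvec[1] * sy + kvec[2] * sz
chis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
us = [N * np.concatenate([chi, sk @ chi / (E + m)]) for chi in chis]

khat = sum(g * kl for g, kl in zip(gammas, k_low))     # gamma_mu k_mu
proj = sum(np.outer(u, u.conj() @ g0) for u in us)     # sum over polarizations

for u in us:
    assert np.isclose(u.conj() @ g0 @ u, 1.0)          # ubar u = 1
assert np.allclose(proj, (m * np.eye(4) + khat) / (2 * m))
```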
The boson projection operator $O^{\mu_1\ldots\mu_n}_{\nu_1\ldots
\nu_n}$ has the following properties:
\begin{eqnarray}
&&P_{\mu_i}O^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n}
=P_{\nu_j}O^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n}=0\;, \nonumber \\
&&g_{\mu_i\mu_j}O^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n}
=g_{\nu_i\nu_j}O^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n}=0\;, \nonumber \\
&&O^{\mu_1\ldots\mu_n}_{\alpha_1\ldots \alpha_n} O^{\alpha_1\ldots
\alpha_n}_{\nu_1\ldots \nu_n}= (-1)^n
O^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n} \;.
\label{O_proper}
\end{eqnarray}
For the lowest states,
\begin{eqnarray}
O\!&=&\! 1\ ,\qquad O^\mu_\nu\!=\!g_{\mu\nu}^\perp\ , \nonumber \\
O^{\mu_1\mu_2}_{\nu_1\nu_2}\!&=&\! \frac 12 \left (
g_{\mu_1\nu_1}^\perp g_{\mu_2\nu_2}^\perp \!+\!
g_{\mu_1\nu_2}^\perp g_{\mu_2\nu_1}^\perp \!- \!\frac 23
g_{\mu_1\mu_2}^\perp g_{\nu_1\nu_2}^\perp \right )\!.~~~
\end{eqnarray}
For higher states, the operator can be calculated using the
recurrence relation:
\begin{eqnarray} &&O^{\mu_1\ldots\mu_n}_{\nu_1\ldots
\nu_n}=\frac{1}{n^2} \bigg (
\sum\limits_{i,j=1}^{n}g^\perp_{\mu_i\nu_j}
O^{\mu_1\ldots\mu_{i-1}\mu_{i+1}\ldots\mu_n}_{\nu_1\ldots
\nu_{j-1}\nu_{j+1}\ldots\nu_n}
\nonumber \\
&& -\frac{4}{(2n-1)(2n-3)}
\nonumber \\ && \times\sum\limits_{i<j\atop k<m}^{n}
g^\perp_{\mu_i\mu_j}g^\perp_{\nu_k\nu_m}
O^{\mu_1\ldots\mu_{i-1}\mu_{i+1}\ldots\mu_{j-1}\mu_{j+1}\ldots\mu_n}_
{\nu_1\ldots\nu_{k-1}\nu_{k+1}\ldots\nu_{m-1}\nu_{m+1}\ldots\nu_n}
\bigg )\,.
\end{eqnarray}
The tensor $F^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n}$ has all
orthogonality properties of the tensor
$O^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n}$ plus orthogonality to the
$\gamma$-matrix:
\begin{eqnarray}
\gamma_{\mu_i}F^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n}
=F^{\mu_1\ldots\mu_n}_{\nu_1\ldots \nu_n}\gamma_{\nu_j}=0\;.
\end{eqnarray}
The pseudoscalar meson-nucleon vertices for the partial wave with
spin $J$ have the form:
\begin{eqnarray}
\label{piN_vertex}
Q^{(+)}_{\mu_1\ldots\mu_n}&=&
X^{(n)}_{\mu_1\ldots\mu_n}(q^\perp)u(q_1)\,, \nonumber \\
Q^{(-)}_{\mu_1\ldots\mu_n}&=& i\gamma_5 \gamma_\nu
X^{(n+1)}_{\nu\mu_1\ldots\mu_{n}}(q^\perp)u(q_1) \, ,
\end{eqnarray}
where $n=J\!-\!1/2$ and $u(q_1)$ is the bispinor of the baryon. The
'+' and '-' indices label two sets of partial waves with the
relation between the orbital angular momentum $L$ and the total spin $J$ given by
$J=L+1/2$ ('+' partial waves) and $J=L-1/2$ ('-' partial waves). The
'+' set of vertices describes the partial waves with
$J^P=\frac{1}{2}^-$, $\frac{3}{2}^+$, $\frac{5}{2}^-\ldots$ and the
second set $J^P=\frac{1}{2}^+$, $\frac{3}{2}^-$,
$\frac{5}{2}^+\ldots$.
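For orientation, the quantum-number counting of the two sets (parity $P=(-1)^{L+1}$ of the $\pi N$ state, $J=L\pm1/2$) can be sketched with a small hypothetical helper:

```python
# Quantum-number counting for the '+' and '-' vertex sets: for pion
# orbital angular momentum L, the pi N state has parity P = (-1)^{L+1},
# and J = L + 1/2 for the '+' set, J = L - 1/2 for the '-' set.
# (Illustrative helper, not part of the analysis code.)
def jp(L: int, vertex_set: str) -> str:
    J2 = 2 * L + (1 if vertex_set == '+' else -1)   # 2J
    parity = '-' if L % 2 == 0 else '+'             # P = (-1)^{L+1}
    return f"{J2}/2^{parity}"

# '+' set: 1/2^-, 3/2^+, 5/2^-, ...   '-' set: 1/2^+, 3/2^-, 5/2^+, ...
```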
In the case of virtual photons there are, for every partial wave
with $J>1/2$, three $\gamma^* N$ vertices; for real photons, only
two of them are independent \cite{Anisovich:2006bc}. In the $LS$
formalism these vertices correspond to spin $1/2$ and $3/2$ of the
photon-nucleon system. For '+' states the vertices are (following
the ordering in \cite{Anisovich:2006bc}):
\begin{eqnarray}
\label{vf_plus}
Q^{(1+)}_{\alpha_1\ldots\alpha_n}&=& \bar u(k_1)\gamma^\perp_\mu
i\gamma_5
X^{(n)}_{\alpha_1\ldots\alpha_n}(k^\perp)\varepsilon_\mu\,, \nonumber \\
Q^{(3+)}_{\alpha_1\ldots\alpha_n}&=&\bar u(k_1) \gamma_\nu i
\gamma_5 X^{(n)}_{\nu\alpha_1\ldots\alpha_{n-1}}(k^\perp)
g^\perp_{\mu\alpha_n}\varepsilon_\mu \,,
\end{eqnarray}
where $\bar u(k_1)$ is the bispinor of the initial nucleon and
$\varepsilon_\mu$ is the photon polarization vector.
For '-' states we have:
\begin{eqnarray}
\label{vf_minus}
Q^{(1-)}_{\alpha_1\ldots\alpha_n}&=&\bar u(k_1)
\gamma_\xi\gamma^\perp_\mu\varepsilon_\mu
X^{(n+1)}_{\xi\alpha_1\ldots\alpha_{n}}(k^\perp) \;, \nonumber \\
Q^{(3-)}_{\alpha_1\ldots\alpha_{n}}&=&\bar u(k_1)
X^{(n-1)}_{\alpha_2\ldots\alpha_{n}}(k^\perp)
g^\perp_{\alpha_1\mu}\varepsilon_\mu \;.
\end{eqnarray}
The orbital angular momentum operators for $L \le 3 $ are:
\begin{eqnarray}
X^{(0)}&=&1\ , \qquad X^{(1)}_\mu=k^\perp_\mu\ , \qquad
\nonumber \\
X^{(2)}_{\mu_1 \mu_2}&=&\frac32\left(k^\perp_{\mu_1}
k^\perp_{\mu_2}-\frac13\, k^2_\perp g^\perp_{\mu_1\mu_2}\right),
\nonumber \\
X^{(3)}_{\mu_1\mu_2\mu_3}&=&\frac52\Big[k^\perp_{\mu_1} k^\perp_{\mu_2 }
k^\perp_{\mu_3}
\nonumber \\
&-&
\frac{k^2_\perp}{5}
\left(g^\perp_{\mu_1\mu_2}k^\perp_{\mu_3}+g^\perp_{\mu_1\mu_3}k^\perp_{\mu_2}+
g^\perp_{\mu_2\mu_3}k^\perp_{\mu_1}
\right)\Big]\,.~~~
\end{eqnarray}
The operator $X^{(n+1)}_{\nu\mu_1\ldots\mu_n}$
can be written as a series of products of metric tensors and
relative momentum vectors. The first term is proportional to the
product of relative momentum vectors $k^\perp_\mu$; the other terms
correspond to the substitution of pairs of vectors by a metric
tensor with the corresponding indices:
\begin{eqnarray}
&&X^{(n+1)}_{\nu\mu_1\ldots\mu_n}(k^\perp) =\alpha_{n+1} \bigg [
k^\perp_{\nu}k^\perp_{\mu_1}k^\perp_{\mu_2}k^\perp_{\mu_3}
\ldots k^\perp_{\mu_n}-
\frac{k^2_\perp}{2n\!+\!1}
\nonumber \\
&&\times\bigg(\sum\limits_{i=1}^n
g^\perp_{\nu\mu_i}\prod\limits_{j\ne i} k^\perp_{\mu_j}+
\sum\limits_{i<j}^n
g^\perp_{\mu_i\mu_j}k^\perp_{\nu}
\!\!\prod\limits_{m\ne i\ne j}\!\!
k^\perp_{\mu_m}+\ldots \bigg )
\nonumber \\
&&+\frac{k^4_\perp}{(2n\!+\!1)(2n\!-\!1)}\bigg(
\sum\limits_{i,j<m}^n
g^\perp_{\nu\mu_i}g^\perp_{\mu_j\mu_m}
\!\!\prod\limits_{l\ne i\ne j\ne m}\!\! k^\perp_{\mu_l}
\nonumber \\
&&+\sum\limits_{i<k,j<m}^n
g^\perp_{\mu_i\mu_k}g^\perp_{\mu_j\mu_m} k^\perp_{\nu}
\prod\limits_{^{l\ne i\ne k}_{\ne j\ne m}} k^\perp_{\mu_l}+
\ldots\bigg)+\ldots\bigg ].~~~~~~~~
\label{x-direct}
\end{eqnarray}
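As a concreteness check, the lowest operators can be constructed numerically. The sketch below is an illustration only: it replaces the perpendicular metric tensor $g^\perp_{\mu\nu}$ of the covariant formalism by a Euclidean $3\times 3$ metric (an assumption made purely for demonstration), so that the symmetry and tracelessness of $X^{(2)}$ and $X^{(3)}$ can be verified directly.

```python
import numpy as np

def X1(k):
    # X^(1)_mu = k_mu
    return np.asarray(k, dtype=float)

def X2(k):
    # X^(2)_{mu1 mu2} = 3/2 (k_mu1 k_mu2 - 1/3 k^2 g_{mu1 mu2})
    k = np.asarray(k, dtype=float)
    g = np.eye(3)
    return 1.5 * (np.outer(k, k) - (k @ k) * g / 3.0)

def X3(k):
    # X^(3)_{mu1 mu2 mu3}: rank-3, totally symmetric and traceless
    k = np.asarray(k, dtype=float)
    g = np.eye(3)
    kkk = np.einsum('i,j,l->ijl', k, k, k)
    sym = (np.einsum('ij,l->ijl', g, k)
           + np.einsum('il,j->ijl', g, k)
           + np.einsum('jl,i->ijl', g, k))
    return 2.5 * (kkk - (k @ k) / 5.0 * sym)
```

Contracting any pair of indices gives zero, which is the defining property that makes $X^{(n)}$ a projector onto pure orbital angular momentum $L=n$.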
\section*{Appendix B: Contribution of the loop diagrams}
For the $\pi N$ vertices we have \cite{Anisovich:2006bc}:
\begin{eqnarray}
W^{(+)}_n&=&(-1)^n \frac{\alpha_n}{2n+1}|\vec k|^{2n}
\frac{m_N+k_{10}}{2m_N}\;,
\nonumber \\
W^{(-)}_n&=&(-1)^n \frac{\alpha_{n+1}}{n+1} |\vec k|^{2n+2}
\frac{m_N+k_{10}}{2m_N}\;,
\label{w_piN_minus}
\end{eqnarray}
\begin{figure}
\centerline{\epsfig{file=t-ch_8_sc.eps,width=8cm}} \caption{Diagram
representation of eq. (\ref{decomp_2})}
\label{fig::scalor_proj}
\end{figure}
and for the $\gamma N$ vertices (in the case of photoproduction):
\begin{eqnarray}
W^{(11+)}_n&=&(-1)^n2\frac{\alpha_n}{2n+1}|\vec k|^{2n}
\frac{m_N\!+\!k_{10}}{2m_N}\;,
\nonumber \\
W^{(33+)}_n&=&(-1)^n\frac{\alpha_n}{2n+1}\frac{(n+1)}{n} |\vec
k|^{2n}\frac{m_N\!+\!k_{10}}{2m_N}\;,
\nonumber \\
W^{(13+)}_n&=&(-1)^n\frac{\alpha_n}{2n+1}|\vec
k|^{2n}\frac{m_N\!+\!k_{10}}{2m_N}\,
\label{w_plus}
\end{eqnarray}
for the '+' states and
\begin{eqnarray}
W^{(11-)}_n&=&(-1)^n\frac{2\alpha_{n+1}}{n+1}|\vec k|^{2n+2}
\frac{m_N\!+\!k_{10}}{2m_N}\;,
\nonumber \\
W^{(33-)}_n&=&(-1)^n\frac{\alpha_{n-1}(n+1)}{(2n\!+\!1)(2n\!-\!1)}
|\vec k|^{2n-2}\frac{m_N\!+\!k_{10}}{2m_N}\;, \nonumber \\
W^{(13-)}_n&=&(-1)^n\frac{\alpha_{n-1}}{n+1}|\vec k|^{2n}
\frac{m_N\!+\!k_{10}}{2m_N}\,
\label{w_minus}
\end{eqnarray}
for the '-' states.
The $\gamma N$ vertices in this representation
are not orthogonal to each other, and to extract partial waves one
needs to solve a $2\times2$ system of linear equations.
\section*{Appendix C: Single meson photoproduction amplitude}
The general structure of the single-meson photoproduction amplitude
in the c.m.s. of the reaction is given by
\begin{eqnarray}
J_\mu\!=\! i {\mathcal F_1}
\sigma_\mu\! +&&\!\!{\mathcal F_2} (\vec \sigma \vec q)
\frac{\varepsilon_{\mu i j} \sigma_i k_j}{|\vec k| |\vec q|} \!+\!i
{\mathcal F_3} \frac{(\vec \sigma \vec k)}{|\vec k| |\vec q|} q_\mu
\!+\!i {\mathcal F_4} \frac{(\vec \sigma \vec q)}{\vec q^2} q_\mu\,,
\nonumber \\ &&A=\omega^*J_\mu\varepsilon_\mu \omega'\,,
\label{amp_t_gammaN_cms}
\end{eqnarray}
where $\vec q$ is the momentum of the nucleon in the $\pi N$ channel
and $\vec k$ the momentum of the nucleon in the $\gamma N$ channel,
both calculated in the c.m.s. of the reaction. The $\sigma_i$ are Pauli
matrices and $\omega$, $\omega'$ are non-relativistic spinors of the
initial and final states, respectively.
If ${\mathcal F_i}$ are known, e.g. from the $t$ or $u$ channel
exchange amplitudes calculated in the c.m.s. of the reaction, the
partial wave amplitudes can be obtained as
\begin{eqnarray}
A^{(i\pm)}_n&=&\int\limits_{-1}^{1} \frac{dz}{2}{\mathcal
F_m}D^{(i\pm)}_m\,,
\end{eqnarray}
where $z$ is the cosine of the angle between initial and final
relative momenta and vectors $D^{(i\pm)}$ are equal to
\begin{eqnarray}
D^{(1+)}&=&\frac{1}{\kappa_n\alpha_n}
\left(P_n,-P_{n+1},0,\frac{(1\!-\!z^2)
P'_{n+1}}{(n\!+\!1)(n\!+\!2)}\right)\ ,
\nonumber \\
D^{(2+)}&=&\frac{1-z^2}{\kappa_n\alpha_n}
\left(0,0,\frac{P'_n}{(n\!+\!1)}, \frac{n
P'_{n+1}}{(n\!+\!1)(n\!+\!2)}\right )\ ,
\nonumber \\
D^{(1-)}&=&-\frac{n\!+\!1}{\kappa_{n+1}\alpha_{n+1}}
\left(-P_{n+1},P_n,\frac{(1\!-\!z^2)P'_{n+1}}{(n\!+\!1)(n\!+\!2)},0\right)\ ,
\nonumber \\
D^{(2-)}&=&-\frac{1-z^2}{\kappa_{n-1}\alpha_{n-1}|\vec
k|^2}\left (0,0,\frac{nP'_{n+1}}{(n\!+\!2)}, P'_n\right ).
\label{amp_dec}
\end{eqnarray}
Here $P_n=P_n(z)$ are Legendre polynomials and $P'_n=dP_n(z)/dz$.
Using the multipole decomposition of the $A^{(i\pm)}_n$ amplitudes
given in \cite{Anisovich:2004zz} one can obtain the standard
expression for the projection of the total amplitude into multipoles.
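The projection integrals above can be evaluated with Gauss-Legendre quadrature. The following sketch is an illustrative assumption rather than the production code of the fit: it projects a scalar function of $z$ onto a single Legendre polynomial $P_n(z)$, whereas in the amplitudes the integrand is the contraction ${\mathcal F}_m D^{(i\pm)}_m$.

```python
import numpy as np

def Pn(n, z):
    # Legendre polynomial P_n(z) via numpy's Legendre-series evaluator
    return np.polynomial.legendre.legval(z, [0.0] * n + [1.0])

def project(F, n, npts=64):
    """A_n = int_{-1}^{1} (dz/2) F(z) P_n(z), by Gauss-Legendre quadrature."""
    z, w = np.polynomial.legendre.leggauss(npts)
    return 0.5 * np.sum(w * F(z) * Pn(n, z))
```

Projecting $P_n$ onto itself gives $1/(2n+1)$, which fixes the normalization conventions, while distinct orders are orthogonal.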
\section*{Appendix D: Reggeon propagator parametrization }
In this section we give the expressions for Reggeon propagators used in the fit.
The propagator for pion exchange has the form
\begin{eqnarray}
R_{\pi}(+,\nu,t)=\frac{e^{-i\frac{\pi}{2}\alpha_{\pi}(t)}}
{\sin (\frac{\pi}{2}\alpha_{\pi}(t)) \Gamma \left (\frac{
\alpha_{\pi}(t)}{2} +1\right )}
\left (\frac{\nu}{\nu_0}\right )^{\alpha_{\pi}(t)}\;,\quad
\end{eqnarray}
where $\alpha_{\pi}(t)=-0.014+0.72t$ defines the pion
trajectory and $\nu_0$ is a normalization factor (which can be taken
to be 1). The $\Gamma$-function is introduced in the denominator to
eliminate the additional poles at $t<0$. The propagator for kaon
exchange is given by
\begin{eqnarray}
R_{\rm K}(+,\nu,t)=\frac{e^{-i\frac{\pi}{2}\alpha_{\rm K}(t)}}
{\sin (\frac{\pi}{2}\alpha_{\rm K}(t)) \Gamma \left (\frac{
\alpha_{\rm K}(t)}{2} +1\right )}
\left (\frac{\nu}{\nu_0}\right )^{\alpha_{\rm K}(t)}\;,\quad
\end{eqnarray}
where $\alpha_{\rm K}(t)=-0.25+0.85t$.
The propagator
for $\rm K^*$ exchange is identical to the $\rho$ exchange propagator but has $\alpha_{\rm
K^*}(t)=0.32+0.85t$.
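For concreteness, the propagators of this appendix can be coded directly. The sketch below is a minimal illustration using the stated pion trajectory $\alpha_\pi(t)=-0.014+0.72t$ and $\nu_0=1$; the function and variable names are ours, and overall normalization conventions should be taken from the fit itself.

```python
import cmath
import math

def regge_propagator(nu, t, alpha0, alpha_slope, nu0=1.0):
    """R(+,nu,t) = exp(-i pi a/2) / [sin(pi a/2) Gamma(a/2 + 1)] (nu/nu0)^a
    with the linear trajectory a(t) = alpha0 + alpha_slope * t.
    The Gamma function in the denominator removes the extra poles at t < 0."""
    a = alpha0 + alpha_slope * t
    signature = cmath.exp(-1j * math.pi * a / 2.0)
    denom = math.sin(math.pi * a / 2.0) * math.gamma(a / 2.0 + 1.0)
    return signature / denom * (nu / nu0) ** a

# pion trajectory alpha_pi(t) = -0.014 + 0.72 t, evaluated off the pole
R_pi = regge_propagator(nu=5.0, t=-0.3, alpha0=-0.014, alpha_slope=0.72)
```

The $\Gamma$-function cancels the poles of $1/\sin(\pi\alpha/2)$ at negative even $\alpha$, so the remaining poles sit at $\alpha(t)=0,2,4,\ldots$; for this trajectory $\alpha_\pi(t)=0$ lies near $t\approx m_\pi^2$.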
\section{Introduction}
In 1973 Yoichiro Nambu \cite{nambu} proposed a generalization of classical
Hamiltonian mechanics, using ternary and higher-order brackets ($n$-ary
brackets or multibrackets). During the past two decades Nambu's proposal has
been the subject of many investigations \cite{flato1} - \cite{gau}, and the
continuing interest in this topic stems from the recognition of the
potential physical richness and the mathematical beauty of ternary and higher
algebraic systems. Recently such an algebraic structure has been analyzed
and reformulated by Takhtajan \cite{tacht1} -- \cite{tacht3} in an invariant
geometrical form. He proposed the notion of a Nambu-Lie ``gebra'', which is a
generalization of Lie algebras to the ternary (in general $n$-ary) case. The
ternary algebra or ``gebra'' is a linear space in which the conditions of
antisymmetry and a generalized Jacobi identity are fulfilled.
In this letter we investigate some new aspects of the Nambu proposal,
connected with the Hopf algebra concept. That is, a ternary (and $n$-ary)
Hopf structure is introduced as a generalization of the usual Hopf algebra,
and some examples are presented.
Let us begin with a definition of a ternary (associative) algebra.
\begin{definition}
. A ternary algebra with unit over a commutative ring {\bf C} is a vector
space $A$ together with a way of multiplying three elements $a,b,c\in A$
\begin{equation}
m:A\otimes A\otimes A=A^{\otimes 3}\rightarrow
A,\;\;\;such\;\;that\;\;\;m(a\otimes b\otimes c)=abc
\end{equation}
The unit element in $A$ is thus defined:
\begin{equation}
m(1\otimes 1\otimes a)=m(1\otimes a\otimes 1)=m(a\otimes 1\otimes 1)=a;
\end{equation}
and the 3-associativity means
\begin{equation}
(abc)de=a(bcd)e=ab(cde).
\end{equation}
\end{definition}
\begin{definition}
. A 3- associator can be defined by
\begin{equation}
I^{(3)}=2(abc)de-a(bcd)e-ab(cde),
\end{equation}
\end{definition}
Let us give examples of an associative and non-associative 3-algebras.
\begin{example}
{\sc . }{\em (Associative 3-algebra.) } Let $A=\{a,b,c...;m\}$ be a set of
linear operators on a Hilbert space, such that $m$ is the usual associative
product, then $I^{(3)}=0$.
\end{example}
\begin{example}
. {\em (Non-associative 3-algebra. )} Consider $A=\{a,b,c...;m\}$ a set of
analytic functions defined on ${\cal R}^3$, such that
\[
m(a\otimes b\otimes c){({\bf x)}}=\frac{\partial a{({\bf x)}}}{\partial x_1}%
\frac{\partial b{({\bf x)}}}{\partial x_2}\frac{\partial c{({\bf x)}}}{%
\partial x_3},
\]
where ${\bf x}=(x_1,x_2,x_3)$. In this case, $I^{(3)}\ne 0$. Indeed,
denoting
\[
\partial _i=\frac \partial {\partial x_i},\,\,\,\,\,\,\,\,\,\partial _{ij}=%
\frac{\partial ^2}{\partial x_i\partial x_j},\,\,\,\,\,\,\,\,\,\,\,%
\,i,j=1,2,3,
\]
we obtain the following nonzero value for the 3-associator
\begin{eqnarray*}
I^{(3)} &=&2(\partial _1a\,\,\partial _{12}b+\partial _{11}a\,\,\partial
_2b)\;\partial _3c\,\,\partial _2d\;\partial _3e \\
&&\ +\partial _1a\;\partial _2b\;\partial _{13}c\;\partial _2d\;\partial _3e
\\
&&\ -\partial _1a\;\partial _2b\;\partial _1c\;\partial _{23}d\;\partial _3e
\\
&&\ -\partial _1a\;\partial _1b\;\partial _2c\;\partial _{23}d\;\partial _3e
\\
&&\ -\partial _1a\;\partial _{12}b\;\partial _2c\;\partial _3d\;\partial _3e
\\
&&\ -\partial _1a\;\partial _1b\;\partial _{22}c\;\partial _3d\;\partial _3e
\\
&&\ -\partial _1a\;\partial _2b\;\partial _1c\;\partial _2d\;\partial _{33}e.
\end{eqnarray*}
\end{example}
\begin{definition}
. The unit map is defined (as usual) by
\[
i:{\bf C}\rightarrow A,\qquad i(\lambda )=\lambda 1;\quad 1\in A.
\]
\end{definition}
In a commutative diagrammatic representation the associativity of a ternary
algebra $A$ can be represented in the following way
\[
\begin{array}{c}
\,A\otimes {\bf C}\otimes {\bf C}\ ^{\underrightarrow{\,id\otimes \iota
\otimes i\,}}A\otimes \,\,A\otimes A \\
\,\simeq \downarrow
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\downarrow _m
\\
A\,\,\,\,\,\,\,\,\,_{\overrightarrow{\,\,\,\,\,\,\,\,\,\,\,\,\,\,id\,\,\,\,}%
}\,\,\,\,\,\,\,\,A
\end{array}
\,\,\,\,\,\,\,\,\,\,\,\,,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\begin{array}{c}
\,\,{\bf C}\otimes A\otimes {\bf C}^{\underrightarrow{\,\iota \otimes
id\otimes i}}\,\,\,\,A\otimes A\otimes A \\
\,_{\simeq }\downarrow
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\downarrow _m
\\
A\,\,\,\,\,\,\,\,_{\overrightarrow{\,\,\,\,\,\,\,id\,\,\,\,\,\,\,\,\,\,\,\,\,%
}}\,\,\,\,\,\,\,A
\end{array}
\]
\[
\begin{array}{c}
{\bf C}\otimes {\bf C}\otimes A\,\,\,^{\underrightarrow{i\otimes i\otimes id}%
}\,\,\,\,\,\,\,A\otimes A\,\otimes A\,\,\,\,\,\,\,\,\, \\
\simeq \downarrow
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,%
\downarrow _m \\
A\,\,\,\,\,\,\,\,\,\,\,_{\overrightarrow{\,\,\,\,\,\,\,\,\,\,\,id\,\,\,\,\,%
\,\,\,\,\,\,\,\,\,}\,\,\,\,\,\,\,\,\,\,\,}A
\end{array}
\]
On the other hand, the associativity of the multiplication is characterized
by
\[
m(m\otimes id\otimes id)=m(id\otimes m\otimes id)=m(id\otimes id\otimes m).
\]
\begin{definition}
. The Nambu product (a 3-commutator) is defined by $\pi =\pi ^{+}-\pi ^{-}$
, where
\[
\pi ^{+}:A^{\otimes 3}\rightarrow A^{\otimes 3},\quad
\]
\begin{equation}
\pi ^{+}(a,b,c)=a\otimes b\otimes c+b\otimes c\otimes a+c\otimes a\otimes b;
\end{equation}
\[
\pi ^{-}:A^{\otimes 3}\rightarrow A^{\otimes 3},\quad
\]
\[
\pi ^{-}(a,b,c)=c\otimes b\otimes a+a\otimes c\otimes b+b\otimes a\otimes c.
\]
$\pi ^{+}(\pi ^{-})$ is the sum over all terms with even (odd) permutation
of $a,b$ and $c\in A.$
\end{definition}
Using this definition of the Nambu product, a generalization of the concept
of an abelian algebra (an abelian 3-algebra) is given by the square
representing 3-commutativity.
\[
\begin{array}{c}
\,\,\,\,\,\,\,\,A^{\otimes 3\,\,}\,\,\,\,\,\,\,^{\underrightarrow{%
\,\,\,\,\,\,\,\,\pi ^{+}\,\,\,\,\,\,}}\,\,\,\,\,\,\,A^{\otimes 3} \\
_{\pi ^{-}}\downarrow
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\downarrow _m
\\
A^{\otimes 3}\,\,\,\,\,\,_{\overrightarrow{\,\,\,\,\,\,\,\,\,\,m\,\,\,\,\,\,%
\,\,\,}}\,\,\,\,\,\,\,\,A
\end{array}
\]
Notice that Definitions 1-4 can immediately be generalized for the $n$-ary
case. Besides, for the particular case $n=2$, we have
\begin{eqnarray*}
m\circ \pi ^{+}(a\otimes b) &=&ab, \\
m\circ \pi ^{-}(a\otimes b) &=&ba.
\end{eqnarray*}
In this (binary) situation, $\pi ^{+}$ plays the role of the identity map ($%
id$), whilst $\pi ^{-}$ corresponds to the flip operator ($\tau $).
\begin{example}
{\sc .} {\em ( Abelian 3-algebra.) } Consider the set of functions from
Example 2, such that now
\[
m(a\otimes b\otimes c){({\bf x)}}=a({\bf x})\,\,b({\bf x})\,\,c({\bf x}).
\]
In this case, the 3-commutator $\pi =0$.
\end{example}
\begin{example}
{\sc . }{\em (Noncommutative 3-algebra.)} Consider $A$ as given in
Example 1. Then we can introduce the Nambu ternary bracket $[\cdot ,\cdot
,\cdot ]=m\circ \pi $ by the linear operator
\[
\left[ a,b,c\right] =abc+bca+cab-cba-acb-bac,
\]
which satisfies the properties: \\
\end{example}
{\em (alternation law)}\\
\begin{equation}
\lbrack a,b,c]=[b,c,a]=[c,a,b]=-[a,c,b]=-[c,b,a]=-[b,a,c], \label{alt}
\end{equation}
{\em (derivation law) }
\begin{equation}
\lbrack a,b,cd]=c[a,b,d]+[a,b,c]d, \label{der}
\end{equation}
{\em ( generalized Jacobi identity)}
\begin{equation}
\lbrack g,h[a,b,c]]=[[g,h,a],b,c]+[a,[g,h,b],c]+[a,b,[g,h,c]]. \label{jac}
\end{equation}
Such a generalized Jacobi identity has been analyzed by several authors
\cite{rugg, flato4, tacht1, ad1} in different contexts, and it was called
the {\it fundamental identity} in Ref.~\cite{tacht1}.
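The alternation law can be checked numerically for the matrix realization of the ternary commutator above. The sketch below is an illustrative test with random matrices (not part of the original argument); it also verifies the decomposition $[a,b,c]=a[b,c]+b[c,a]+c[a,b]$ into ordinary binary commutators, which follows from expanding the six terms.

```python
import numpy as np

def bracket(a, b, c):
    # Nambu 3-commutator [a,b,c] = abc + bca + cab - cba - acb - bac
    return (a @ b @ c + b @ c @ a + c @ a @ b
            - c @ b @ a - a @ c @ b - b @ a @ c)

def comm(x, y):
    # ordinary binary commutator [x, y] = xy - yx
    return x @ y - y @ x

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((4, 4)) for _ in range(3))
```

The cyclic and anticyclic symmetries hold identically, term by term, for any associative product.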
\begin{example}
. (Commutative 3-algebra.) Consider $A$ as given by Example 2 (in this case
$m$ is a nonassociative product). Then we can introduce the Nambu bracket
$\left\{ \cdot ,\cdot ,\cdot \right\} =m\circ \pi $ by $\left\{ a,b,c\right\}
=\varepsilon ^{ijk}\,\partial _ia\,\,\partial _jb\,\,\partial _kc$. A basis
for this (classical) Nambu bracket is then $x_1,x_2,x_3$, such
that $\left\{ x_1,x_2,x_3\right\} =1$.
\end{example}
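The classical Nambu bracket of this example can be checked numerically. The sketch below is an illustration with hand-coded gradients (the function and variable names are our assumptions): it evaluates $\{a,b,c\}=\varepsilon^{ijk}\,\partial_i a\,\partial_j b\,\partial_k c$ at a point and confirms $\{x_1,x_2,x_3\}=1$ together with total antisymmetry.

```python
import numpy as np

# Levi-Civita tensor eps_{ijk} in three dimensions
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[k, j, i] = -1.0  # odd permutations

def nambu(grad_a, grad_b, grad_c, x):
    """{a,b,c}(x) = eps_{ijk} (d_i a)(d_j b)(d_k c), gradients supplied."""
    return np.einsum('ijk,i,j,k->', eps,
                     grad_a(x), grad_b(x), grad_c(x))

# gradients of the coordinate functions x1, x2, x3
g1 = lambda x: np.array([1.0, 0.0, 0.0])
g2 = lambda x: np.array([0.0, 1.0, 0.0])
g3 = lambda x: np.array([0.0, 0.0, 1.0])
```

Any repeated argument gives zero, as required by the alternation law.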
We use the definition of 3-algebra, given in terms of commutative diagrams,
to explore the structure of dual coalgebra, and so to introduce a
generalization of Hopf algebra \cite{ad2, madore}. To do this, we proceed as
is usually done with the concept of coalgebra: a 3-coproduct is defined by
inverting the arrows in the definition of the 3-associative algebra.
Therefore, we obtain the definition of 3-comultiplication $\Delta $ and
3-counit $\epsilon $,
\[
\Delta :A\rightarrow A^{\otimes 3},
\]
\[
\epsilon :A\rightarrow {\bf C,}
\]
such that the following diagrams commute
\[
\begin{array}{c}
\,A\otimes {\bf C}\otimes {\bf C}\ ^{\underleftarrow{id\otimes \epsilon
\otimes \epsilon }}A^{\otimes 3} \\
\,_{\simeq }\uparrow
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\uparrow
_\Delta \\
A\,\,\,\,\,\,_{\overleftarrow{\,\,\,\,\,\,\,\,\,id\,\,\,\,\,\,\,\,\,\,\,\,}%
}\,\,\,\,\,\,\,\,A
\end{array}
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\begin{array}{c}
\,\,{\bf C}\otimes A\otimes {\bf C}^{\underleftarrow{\epsilon \otimes
id\otimes \epsilon }}\,\,\,A^{\otimes 3} \\
_{\simeq }\uparrow
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\uparrow
_\Delta \\
A\,\,\,\,_{\,\overleftarrow{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,id\,\,\,\,\,\,\,\,%
\,\,\,\,\,\,}}\,\,\,\,\,\,\,\,A
\end{array}
\]
\[
\begin{array}{c}
{\bf C}\otimes {\bf C}\otimes A\,\,^{\underleftarrow{\epsilon \otimes
\epsilon \otimes id}}\,\,\,\,\,\,A^{\otimes 3}\,\,\,\,\,\,\,\,\, \\
\,_{\simeq }\uparrow
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,%
\uparrow _\Delta \\
A\,\,\,\,\,_{\overleftarrow{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,id\,\,\,\,\,\,\,\,%
\,\,\,\,}}\,\,\,\,\,\,\,\,\,\,A
\end{array}
\]
The 3-coassociativity is defined by
\[
(\Delta \otimes id\otimes id)\Delta =(id\otimes \Delta \otimes id)\Delta
=(id\otimes id\otimes \Delta )\Delta ,
\]
and 3-cocommutativity is expressed by square
\[
\begin{array}{c}
\,\,\,\,\,\,\,\,\,A^{\otimes 3\,\,}\,\,\,\,\,\,^{\underleftarrow{%
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\pi ^{+}\,\,\,\,\,}}\,\,\,\
\,\,\,\,\,\,\,\,A^{\otimes 3} \\
_{\pi ^{-}}\uparrow
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,%
\uparrow _\Delta \\
A^{\otimes 3}\,\,\,_{\,\,\overleftarrow{\,\,\,\,\,\,\,\,\,\,\,\,\,\Delta
\,\,\,\,\,\,\,\,\,}}\,\,\,\,\,A
\end{array}
\]
A 3-algebra and a 3-coalgebra give rise to a generalization of bialgebras
(that is, a 3-bialgebra). In order to define a Hopf 3-algebra, a
generalization of antipode should be introduced. A natural 3-antipode $S$
can be defined as follows,
\begin{eqnarray}
m\circ (S\otimes id\otimes id)\circ \Delta &=&i\circ \epsilon ,
\label{anti1} \\
m\circ (id\otimes S\otimes id)\circ \Delta &=&i\circ \epsilon ,
\label{anti2} \\
m\circ (id\otimes id\otimes S)\circ \Delta &=&i\circ \epsilon \,.
\label{anti3}
\end{eqnarray}
For 4-bialgebras, a 4-antipode can be introduced by
\begin{eqnarray}
m\circ (S\otimes id\otimes S\otimes id)\circ \Delta &=&i\circ \epsilon ,
\label{an1} \\
m\circ (id\otimes S\otimes S\otimes id)\circ \Delta &=&i\circ \epsilon ,
\label{an2} \\
m\circ (S\otimes id\otimes id\otimes S)\circ \Delta &=&i\circ \epsilon ,
\label{an3} \\
m\circ (id\otimes S\otimes id\otimes S)\circ \Delta &=&i\circ \epsilon .
\label{an4}
\end{eqnarray}
All such relations are supposed to be satisfied simultaneously.
\begin{example}
{\sc .} {\em ( 3-bialgebra from Nambu-Lie algebra.)} Considering $A$ in
Example 4, a 3-bialgebra can be derived, if we introduce a 3-coproduct by
\end{example}
\[
\Delta a=a\otimes a\otimes a\equiv a^{\otimes 3}.
\]
Indeed, in this case $\Delta $ respects the algebraic relation, since
\[
\lbrack \Delta a,\Delta b,\Delta c]=\Delta [a,b,c].
\]
Notice that if we try to introduce a 3-coproduct by
\begin{equation}
\Delta a=a\otimes 1\otimes 1+1\otimes a\otimes 1+1\otimes 1\otimes a,
\end{equation}
the algebra structure cannot be respected, because $\Delta [a,b,c]\ne
[\Delta a,\Delta b,\Delta c]$. This result shows that we can obtain a
bialgebra structure attached to some (if any) universal enveloping 3-algebra
of a Nambu-Lie algebra, but a Hopf 3-algebra cannot be trivially introduced
in this case.
\begin{example}
{\sc .} {({\em $SL(n,{\bf C})$)}} Consider the group $G=SL(n,{\bf C})$,
where an element $x=(a_j^i)\in G$ has unit determinant, $\det x=1$. It is
well known that the algebra generated by the functions $a_j^i(x)$, with
2-coproduct defined by $\Delta {a^i}_j=\sum_k{a^i}_k\otimes {a^k}_j$ is a
Hopf algebra (see, for example, \cite{madore}), with counit given by
\begin{equation}
\epsilon ({a^i}_j)=\delta {^i}_j \label{counit11}
\end{equation}
and the antipode given by inverse matrix
\begin{equation}
S({a^i}_j)={(a^{-1})^i}_j. \label{ant6}
\end{equation}
In order to study a generalization of this Hopf algebra, we can consider two
cases, 3- and 4-algebras. First, a 3-coproduct $\Delta $ can be defined as
follows
\begin{equation}
\Delta a_j^i(x,y,z)=a_j^i(xyz)=\sum_{k,l}a_k^i(x)a_l^k(y)a_j^l(z)
\end{equation}
\[
=\sum_{k,l}a_k^i\otimes a_l^k\otimes a_j^l(x,y,z),
\]
or
\begin{equation}
\Delta a_j^i=\sum_{k,l}a_k^i\otimes a_l^k\otimes a_j^l.
\end{equation}
Therefore, a 3-bialgebra can be derived, using the usual counit, Eq.(\ref
{counit11}). It is an easy matter to see that the usual antipode, Eq.(\ref
{ant6}), cannot be used to define a satisfactory 3-antipode as given by
Eqs.(\ref{anti1})--(\ref{anti3}).
For the 4-coproduct, however, using
\[
\Delta a_j^i=\sum_{k,l,m}a_k^i\otimes a_l^k\otimes a_m^l\otimes a_j^m,
\]
together with Eqs.(\ref{an1})--(\ref{an4}), where $S$ is being given by Eq.(%
\ref{ant6}), we get a Hopf 4-algebra.
\end{example}
In short, we have presented here a generalization of the concepts of
associative and commutative algebra. Exploring the notion of duality, and
proceeding in parallel with the usual (binary) approach, the concept of an
$n$-bialgebra could be introduced. In particular, we have presented examples
of $n$-bialgebras, such as the one derived from the Nambu-Lie algebra and the
one associated with the $SL(n,{\bf C})$ group. It has also been indicated how
to introduce the concept of 3- and 4-antipodes, in order to obtain a
generalization of the Hopf algebra. As an example, a Hopf 4-algebra attached
to $SL(n,{\bf C})$ was derived. It would be interesting to study the
connection of a 3-Hopf algebra with a universal enveloping algebra of a
3-noncommutative algebra. These aspects will be studied in more detail
elsewhere.
{\bf Acknowledgments}
We are grateful to A. Matos Neto and J. David M. Vianna for helpful
discussions. One of us (A.E.S.) thanks CNPq (a Brazilian agency for
research) for financial support.
\section{Introduction}
\label{sec:intro}
The interactive gesture generation task aims to control the gestures of a virtual character with a user control signal. Many works have addressed the problem of synthesizing the gestures of an avatar along with a speech modality \cite{alexanderson2020style,ahuja-etal-2020-gestures}. These methods enabled the capture and synthesis of natural co-speech gestures of a virtual character. \cite{kucherenko2020gesticulator} used speech and text jointly as inputs to their proposed model to generate gestures, and reported that the multimodal aspect of their method helps to capture the sentence semantics and to output natural and diverse gestures. \cite{Yoon2020Speech} encoded these modalities along with the speaker identity, since expressive behavior highly depends on the speaker.
\par \noindent
\\
Nevertheless, motion synthesis from a non-verbal audio input such as laughter is a complex task where no a priori semantic information is available with the audio signal to help with understanding the overall context. However, laughter constitutes an important part of social interaction \cite{mckeown2015relationship}, where the smiling and laughing expression of an interlocutor induces a mimicry effect on each partner \cite{el2019smile}. The growing interest in virtual environments has led to the development of virtual social agents. The immersive quality of a virtual world is partly induced by the naturalness of the motion of virtual characters. Human-avatar social interaction is an active research topic in the computer vision community, and rendering natural motion is a crucial task to enhance the social aspect of the avatar \cite{garau2003impact}. Co-laughter gesture synthesis is thus a relevant task in human-computer interaction, where it can be exploited in various use cases such as video game development \cite{mancini2013laugh} or in a medical context, e.g., to enhance the social skills of children with autism spectrum disorder \cite{didehbani2016virtual}.
\par \noindent
\\
The work presented in this paper falls within a wider project aiming at generating co-laughter motion corresponding to the audio given at its input using generative deep neural networks. We present here first analysis results on the relationship between body movements (excluding facial expressions) and several aspects of laughter. These analyses help us gain a better understanding of our data and thus organize their use to build the previously mentioned generative system.
The motion data is not extracted from motion capture sensors but is estimated from the recorded RGB videos directly. Neural networks are powerful tools for learning complex relationships between given modalities within a database. Thus, the proposed analysis allows us to identify whether correlations between laughter, its intensity and the associated movement are significant within a given dataset. If this dataset does not exhibit a high correlation between laughter and body motion, it may be a challenging dataset to train neural networks that synthesize body motion from audio laughter.
\par \noindent
\\
This paper is organized as follows: Section \ref{sec:sota} reviews the state of the art on the analysis of the relationship between multiple laughter modalities and on co-laughter motion synthesis methods. Section \ref{sec:exp} explains the experimental protocol and Section \ref{sec:res} analyzes the experimental results. Section \ref{sec:futur} discusses the limitations of this work and proposes some improvements.
\section{Related Work}
\label{sec:sota}
To focus on the synthesis task, it is useful to understand and measure the relationship between laughter as an audio signal and the gesture performed during that laughter. \cite{6681455} found a significant contrast in the captured motions between different types of laughter (hilarious, social, and non-laughter) and claimed that motion features analysis helped with the classification of laughter type. \cite{7298420} showed that full-body motion features are sufficient to detect laughter occurrences. \cite{mancini2013laugh} pointed out the periodic pattern of the shoulder motion while laughing in the dataset \textit{Multimodal Multiperson Corpus of Laughter in Interaction} \cite{10.1007/978-3-319-02714-2_16}. \cite{ishi2019analysis} focused on laughter intensity to reveal that the degree of smiling face and the occurrences of the front, back, up, and down motions are proportional to the laughter intensity.
\par \noindent
\\
\cite{dilorenzo2008laughing} proposed a physics-based model to synthesize the torso deformation induced by the airflow while laughing. \cite{niewiadomski2014rhythmic} performed a harmonic analysis of laughter body motions to extract relevant rhythmic features for the generation of body movements. \cite{ding2017audio} synthesized upper-body gestures from the laughter audio signal based on captured or defined co-laughter motion correlations. Their approach is based on a statistical framework for head and torso motion and on a rule-based method for shoulder motion, due to the limitations of their dataset. \cite{ishi2019analysis} generated co-speech and laughter motion (eyelids, face, hand and upper body) on physical android robots. The works presented above relied on recorded motion capture datasets of people laughing in multiple contexts. \cite{jokinen2016body} analyzed videos of social interactions and pointed out the synchrony of body movements with laughter. Similarly, this research aims to identify body motion relationships with laughter from RGB videos and audio signals. However, \cite{jokinen2016body} only estimated bounding boxes around the limbs of the participants.
\par \noindent
\\
This work proposes an analysis of the relationship between low-level motion features extracted from RGB videos, i.e. the Cartesian position of each joint, the laughter intensity, and audio features in the context of a dyadic conversation. This relational study aims to identify any significant correlation between the positions of the joints and the laughter audio signal and intensity. Two approaches regarding the laughter audio signal are tested and further explained in Section \ref{subsubsec:audio}: first, the audio signal is decomposed into a set of low-level physical features; second, the audio signals are embedded into a latent space by the baseline speech-oriented model \textit{Wav2vec 2.0} \cite{DBLP:journals/corr/abs-2006-11477}.
Finally, the relationship between the 2D Cartesian positions of the skeleton and laughter intensity is established and described in Section \ref{subsubsec:intensity}.
\section{Experiments}
\label{sec:exp}
\subsection{Dataset}
In our experiments, we used the dataset \textit{Naturalistic Dyadic Conversation on Moral Emotions} (\textit{NDC-ME}) \cite{heron2018dyadic}. It consists of a collection of dyadic conversations focusing on moral emotions through speaker-listener interactions. In contrast to the \textit{IFADV} Corpus \cite{van2008ifadv} and the \textit{Cardiff Conversation Database} \cite{aubrey2013cardiff}, the whole upper body of the participants is visible in the videos and their motion is not constrained by any object. 21 pairs of participants have been recorded while interacting together without following a fixed scenario. The audio and videos have been captured separately. The emotions and the intensity of the expressed emotion of each participant during the recording have been labeled using the annotation tool \textit{ELAN} \cite{elan} and are available here \footnote{\href{https://zenodo.org/record/3820510}{https://zenodo.org/record/3820510}}. The annotation rules follow the protocol \footnote{This protocol is available \href{https://www.researchgate.net/publication/341371010_supplementary_materialpdf}{here}} used by \cite{el2019smile}. The laughter clips are also labeled into 3 categories regarding their intensity: low, medium, and high. At the time of writing, only 7 pairs had been annotated. Following these annotations, the audio and videos in which laughter occurs are extracted from the initial dataset. 186 videos are kept, including 10 male and 4 female speakers, for a total duration of 199.33 seconds. Then, the 2D Cartesian positions of the skeleton joints are extracted from the RGB videos using \textit{OpenPose} \cite{DBLP:journals/corr/abs-1812-08008}. The skeleton consists of 8 joints representing the upper body of the subject. A frame sample with an estimated skeleton as well as the upper body structure is shown in Figure \ref{fig:openposed}.
\begin{figure}
\centering
\includegraphics[scale=0.15]{images/openposed.jpg}
\includegraphics[scale=0.25]{images/Skeleton.png}
\caption{Top: sample of a video with the estimated skeleton and face landmarks. Since this work only focuses on the body skeleton, the face landmarks are ignored. Bottom: structure of the upper body skeleton.}
\label{fig:openposed}
\end{figure}
\subsection{Experimental setup}
\label{sec:exo}
This part describes the experimental protocol to identify the correlation between the laughter modalities in \textit{NDC-ME} dataset.
\par \noindent
\\
Joint movement signals are represented as time series $s$ where $s_{j}^{i} = p_{j}^{i} - \bar{p_{j}}$, with $ p_{j}^{i} $ the Cartesian position of joint $j$ at frame $i$ and $ \bar{p_{j}}$ the mean position of joint $j$. Thus, $s_j$ describes the temporal fluctuations of the position of joint $j$ around its mean position. The horizontal and vertical components of the motion signal of joint $j$ are respectively noted $x_j$ and $y_j$. In this work, we consider horizontal and vertical movements separately for the sake of simplicity, although it would be interesting to consider them jointly. The correlations on shoulders, elbows and wrists are computed separately for the right and left body parts, and we further report the average value.
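The construction of the fluctuation series and the correlation measure can be sketched as follows. This is illustrative code only: the array layout of shape (frames, joints, 2) and the function names are our assumptions, not part of the original pipeline.

```python
import numpy as np

def fluctuation(p):
    """s_j^i = p_j^i - mean_i p_j^i: fluctuation of each joint position
    around its temporal mean. p has shape (frames, joints, 2)."""
    return p - p.mean(axis=0, keepdims=True)

def pearson(a, b):
    # Pearson correlation coefficient between two equal-length 1-D series
    return np.corrcoef(a, b)[0, 1]
```

With this layout, the horizontal and vertical components $x_j$ and $y_j$ are `s[:, j, 0]` and `s[:, j, 1]`.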
\subsubsection{Body movement and audio features}
\label{subsubsec:audio}
We want to analyze the correlation between the audio signal and the body movement. For the audio signal, we extracted two sets of features per 20 ms frame: one that includes 19 well-known low-level features from the speech analysis domain (3 LPC coefficients, 13 MFCCs and 3 LPCCs), and one that includes the 512 embedded outputs of the \textit{Wav2vec 2.0} model. For each subset of features, we computed the Pearson correlation coefficient between $(x_j,y_j)$ and the time series of audio features.
\subsubsection{Body movements and laughter intensity}
\label{subsubsec:intensity}
Firstly, the following features were extracted for each horizontal and vertical joint movement signal $(x_j,y_j)$: in the time domain, the power $P$, maximum amplitude value $max$, mean value $\mu$, and standard deviation $\sigma$; in the frequency domain, the maximum value of the Fourier transform $max(FT)$, the mean of the Fourier transform $\mu(FT)$, and the peak frequency $f_\mathit{pk}=argmax(FT)$. Since laughter videos vary in length, the Fourier transform curves were linearly interpolated over 248 uniform samples between 0 and the Nyquist frequency $f_\mathit{Nyquist}$. The upper 10\% of the frequency range was excluded when finding the peak frequency in order to discard high-frequency noise ($f_\mathit{pk} < 0.9 f_\mathit{Nyquist}$). The correlation between these extracted joint movement features and laughter intensity is then analyzed.
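The extraction protocol above can be sketched as follows; the frame rate fs is an assumed placeholder, since the paper does not state it here.

```python
import numpy as np

def motion_features(s, fs=25.0, n_samples=248):
    """Time- and frequency-domain features of one motion signal,
    following the protocol described above."""
    s = np.asarray(s, float)
    feats = {
        "P": float(np.mean(s**2)),          # power
        "max": float(np.max(np.abs(s))),    # maximum amplitude
        "mu": float(np.mean(s)),            # mean
        "sigma": float(np.std(s)),          # standard deviation
    }
    ft = np.abs(np.fft.rfft(s))             # one-sided FT magnitude
    freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
    nyquist = fs / 2.0
    # Linear interpolation onto 248 uniform samples in [0, f_Nyquist]
    grid = np.linspace(0.0, nyquist, n_samples)
    ft_i = np.interp(grid, freqs, ft)
    feats["max_FT"] = float(ft_i.max())
    feats["mu_FT"] = float(ft_i.mean())
    # Peak frequency, excluding the upper 10% of the range
    keep = grid < 0.9 * nyquist
    feats["f_pk"] = float(grid[keep][np.argmax(ft_i[keep])])
    return feats

# A 3 Hz sine sampled at 25 fps should peak near 3 Hz:
t = np.arange(200) / 25.0
feats = motion_features(np.sin(2 * np.pi * 3.0 * t))
```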
\begin{table*}
\centering
\begin{center}
\begin{tabular}{ | p{1.1cm} || p{6.55cm} || p{6.55cm} |}
\hline
\textbf{Feature} &
\centering \textbf{Horizontal Movement}&
\centering \textbf{Vertical Movement}
\end{tabular}
\begin{tabular}{ | p{1.1cm} || p{0.8cm} | p{1.0cm} | p{1.3cm} | p{0.9cm}| p{0.8cm}|| p{0.8cm} | p{1.0cm} | p{1.3cm} | p{0.9cm}| p{0.8cm}|}
\hline
\textbf{} &
\textbf{Head} &
\textbf{Thorax} &
\textbf{Shoulders} &
\textbf{Elbows} &
\textbf{Wrists} &
\textbf{Head} &
\textbf{Thorax} &
\textbf{Shoulders} &
\textbf{Elbows} &
\textbf{Wrists}
\\
\hline
LPC &
0.03& 0.02& 0.05& 0.03 & 0.02
& 0.04 & 0.05 & 0.07 & 0.02 & 0.03 \\
\hline
MFCCs &
-0.03& -0.01& 0.01& -0.01 & 0.01
& -0.08 & -0.06 & -0.06 & -0.04 & -0.01 \\
\hline
LPCCs &
0.05& 0.03& 0.04& -0.01 & -0.01
& 0.05 & 0.07 & 0.07 & -0.02 & 0.08 \\
\hline
W2V&
\textbf{0.09}& \textbf{0.08}& \textbf{0.07}& \textbf{0.08} & \textbf{0.09}
& \textbf{0.11} & \textbf{0.09} & \textbf{0.09} & \textbf{0.10} & \textbf{0.09} \\
\hline
\end{tabular}
\caption{Maximum average correlation between an audio feature and a joint with respect to its movement direction.}
\label{table:table6}
\end{center}
\end{table*}
\begin{table*}
\centering
\begin{center}
\begin{tabular}{ | p{1.1cm} || p{6.55cm} || p{6.55cm} |}
\hline
\textbf{Feature} &
\centering \textbf{Horizontal Movement}&
\centering \textbf{Vertical Movement}
\end{tabular}
\begin{tabular}{ | p{1.1cm} || p{0.8cm} | p{1.0cm} | p{1.3cm} | p{0.9cm}| p{0.8cm}|| p{0.8cm} | p{1.0cm} | p{1.3cm} | p{0.9cm}| p{0.8cm}|}
\hline
\textbf{} &
\textbf{Head} &
\textbf{Thorax} &
\textbf{Shoulders} &
\textbf{Elbows} &
\textbf{Wrists} &
\textbf{Head} &
\textbf{Thorax} &
\textbf{Shoulders} &
\textbf{Elbows} &
\textbf{Wrists}
\\
\hline
\hline
max &
0.09& 0.25& 0.30& 0.22& 0.26
& 0.26& \textbf{0.39}& \textbf{0.25}& 0.25& 0.20 \\
\hline
P &
0.08& 0.09& 0.18& 0.10& 0.13
& 0.29& 0.25& 0.16& 0.10 & 0.10 \\
\hline
$\mu$ &
0.10& 0.05& 0.06& 0.02& 0.08
& -0.17& -0.19& -0.14& -0.16& -0.15 \\
\hline
$\sigma$ &
0.16& 0.23& 0.28& 0.23& 0.26
& 0.35& 0.31& 0.21& 0.27& 0.20 \\
\hline
$\mu(FT)$ &
0.13& 0.26& 0.30& 0.24& 0.26
& 0.28& 0.37& 0.23& 0.25& 0.18 \\
\hline
max(FT) &
0.23& \textbf{0.32}& \textbf{0.36}& \textbf{0.32}& \textbf{0.34}
& \textbf{0.36}& 0.33& 0.24& \textbf{0.32}& \textbf{0.21} \\
\hline
fpk &
\textbf{-0.29}& -0.22& -0.20& -0.2& -0.22
& -0.22& -0.21& -0.12& -0.20& -0.12 \\
\hline
\end{tabular}
\caption{Correlation between laughter intensity and joint movement features. The power $P$, maximum amplitude value $max$, mean value $\mu$, and standard deviation $\sigma$ are computed from the horizontal and vertical motion signals in the time domain. In the frequency domain, the motion features are the maximum value of the Fourier transform $max(FT)$, the mean of the Fourier transform $\mu(FT)$, and the peak frequency $f_{pk}$. The correlation is bounded between -1 and 1. A higher absolute value indicates a stronger correlation, and 0 indicates no correlation in the data.}
\label{table:table5e}
\end{center}
\end{table*}
\section{Results}
\label{sec:res}
This section presents the results of the correlation analysis between body movements, audio features, and laughter intensity.
\subsection{Body movements and audio features}
Table \ref{table:table6} shows the maximum average correlation between an audio feature and a joint movement. The reported values indicate a weak correlation between the evolution of a joint's position and the evolution of the audio features. However, using embedded features rather than interpretable ones increases the correlation across all joints.
\subsection{Body movements and laughter intensity}
The correlation between the extracted features and laughter intensity is shown in Table \ref{table:table5e}. Since the $max(FT)$ feature has the highest correlation, we visualized the distribution of $max(FT)$ under multiple laughter intensities in Figure \ref{fig:maxft}. The visualization of $max(FT)$, like that of the other extracted features, results in overlapping boxplots. Hence, we conclude that none of the extracted features alone is sufficient to identify the laughter intensity. However, statistically speaking, the mean value of the distribution (the orange line in Figure \ref{fig:maxft}) increases with laughter intensity.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{images/Fig31x.png}
\caption{The maximum of the Fourier transform of a joint movement signal $max(FT(p_j))$ under multiple laughter intensities. Each row represents a joint and each column represents a direction of movement (horizontal/vertical). Each subplot has 3 boxplots (low laughter intensity at 0, medium at 1, and high at 2). The orange line in a boxplot represents the mean.}
\label{fig:maxft}
\end{figure}
\section{Discussion and Challenges}
\label{sec:futur}
The results presented in Section \ref{sec:res} indicate that, in the \textit{NDC-ME} dataset, body movements and audio features seem to be weakly correlated. Further investigation and processing are needed to draw more robust conclusions. Thus, with the current analysis, this dataset appears challenging for a co-laughter gesture synthesis task. However, we identified some aspects of the dataset that might impact our results: in some files, the speaker's speech overlaps with the listener's laughter, and we suspect that this influenced the experimental results in Section \ref{sec:res}. These files need to be removed from the dataset in future work to obtain more accurate results. One suggestion is to apply channel source separation methods to the audio to distinguish the laughter or speech of each participant and obtain a better audio representation (more suitable features). Furthermore, the laughter intensity was subjectively annotated by a single annotator, and having a low number of annotators makes the data distribution more sensitive to human error. We suggest increasing the number of annotators and, e.g., averaging the annotations to reduce this impact. Moreover, since the dataset has not been fully annotated yet, it contains a relatively small number of laughter examples. In future work, we would also like to extract correlations from acoustic audio features such as pitch or loudness, and it would be interesting to take into account other modalities such as the type of laughter and the context of the interactions. Finally, in this work we focus on body movement, but face landmarks are available from the \textit{OpenPose} estimation, as shown in Figure \ref{fig:openposed}. The relationship between these landmarks and the laughter intensity and laughter audio features can be established in further investigation.
\section{Conclusion}
\label{sec:ccl}
This work proposes a method to analyze the relationship between laughter, its intensity, and body movement in recorded dyadic conversations. In contrast with previous works, the gestures are extracted from RGB videos using a baseline pose estimation method. First, this work highlights a correlation of around 30\% between laughter intensity and motion features, where the maximum amplitude of the Fourier transform leads to the highest correlation value. Moreover, the correlation analysis between interpretable and high-level audio features and body movements does not yield significant correlation values. This work highlights some of the limitations of the \textit{NDC-ME} dataset that we need to take into account in the context of training deep generative models for body motion generation from a laughter audio signal. This analysis opens the way to creating datasets suited to building multimodal models that generate the motion of virtual agents from the audio cue.
\section{Acknowledgements}
This work was supported by Service Public de Wallonie Recherche under grant n° 2010235 - \textit{ARIAC} by \textit{DIGITALWALLONIA4.AI}
\section{Bibliographical References}\label{reference}
\bibliographystyle{lrec2022-bib}
\section{Introduction}
\label{intro}
Dempster-Shafer evidence theory (DSET) \cite{dempster2008upper,shafer1976mathematical}, as a generalization of probability theory (PT), expresses information by interval probabilities. For an $n$-element mutually exclusive set, a probability distribution expresses its information by $n$ probabilities, whereas in DSET, $2^n$ mass functions on focal elements form a basic probability assignment (BPA) to express the information. A BPA utilizes higher-dimensional data than a probability distribution, so it can express more information. Relying on this advantage, DSET is widely applied in information fusion \cite{Xiong2021InformationSciences,yang2013discounted,pan2020association}, data de-combination \cite{fan2021combination}, reasoning \cite{liao2020deng}, and reliability evaluation \cite{gao2021NET}. Because the power set counts combinations \cite{Song2021powerset}, subsets built from permutations are not considered. Smarandache and Dezert \cite{smarandache2006advances} extended $2^n$ to $U^n$ to propose the Dezert-Smarandache theory (DSmT), which can express more generalized information than DSET, and Xiao \cite{Xiao2020CEQD,xiao2021caftr} extended BPA to the complex number field to predict interference effects in a more proper way.
In PT, Shannon entropy \cite{shannon2001mathematical} expresses the uncertainty of a probability distribution, but how to measure the total uncertainty of a BPA is still an open issue. A BPA can be seen as carrying two kinds of uncertainty: discord and non-specificity \cite{jousselme2006measuring}. Discord represents the conflict between elements in the framework, and non-specificity, as the difference between a BPA and a probability distribution, represents the uncertainty of the distribution. To facilitate understanding, we divide common BPA measurement methods into $3$ types.
\begin{description}
\item[\textbf{Local measurement: measure a certain characteristic of BPAs}] Hohle \cite{hohle1982entropy} and Yager \cite{yager1983entropy} respectively utilized the belief function and the plausibility function to calculate the confusion and dissonance of BPAs. Hartley entropy was proposed to express the non-specificity of BPA in \cite{higashi1982measures}. Klir \textit{et al.} measured the discord of BPA in \cite{klir1990uncertainty}. Harmanec \textit{et al.} \cite{AU} proposed a method to measure the aggregate uncertainty (AU) of BPA. Jousselme \textit{et al.} \cite{jousselme2006measuring} substituted the pignistic probability transformation into Shannon entropy to propose the ambiguity measure (AM). We proposed the belief eXtropy to measure the negation degree \cite{zhou2021eXtropy}.
\item[\textbf{Splitting method: measure uncertainty after splitting the mass functions}] Pal \cite{pal1993uncertainty} first utilized splitting to divide the mass function of a focal element with $n$ elements into $n$ parts and then substituted them into Shannon entropy. Based on the above, Deng \cite{Deng2020ScienceChina,Deng2020InformationVolume} split the mass functions over their power sets, which can represent more uncertainty, and Abell{\'a}n \textit{et al.} evaluated Deng entropy and its extensions in \cite{abellan2017analyzing,moral2020critique}. These two methods satisfy non-negativity, monotonicity, probability consistency, and additivity. However, their maximum entropy distribution is not the vacuous BPA, which is counter-intuitive.
\item[\textbf{Belief functions: measure uncertainty based on belief functions}] Due to the limitations of mass functions in expressing information, many measurements use belief functions to express uncertainty. Wang and Song \cite{wang2018uncertainty} utilized the elements' belief (Bel) functions and plausibility (Pl) functions to measure discord and non-specificity respectively (hereinafter referred to as the SU measurement). Jirou{\v{s}}ek and Shenoy combined the Pl function and Hartley entropy to propose a new entropy (hereinafter referred to as JS entropy) \cite{jirouvsek2018new}, and they also proposed a decomposable entropy based on the commonality (q) function \cite{jirouvsek2020properties}. Yang and Han \cite{yang2016new} proposed a novel uncertainty measure based on the distance between elements' Bel functions and Pl functions.
\end{description}
There are in total $10$ requirements for total uncertainty measurement (UM) methods of BPA in \cite{klir2013uncertainty,abellan2008requirements}. Although some of them are controversial, together they can evaluate UM methods comprehensively.
The elements in the framework are mutually exclusive, so in the process of decision-making, how to transform a BPA into a probability distribution is significant. Pignistic probability transformation (PPT) is used in the decision layer of the transferable belief model (TBM) \cite{smets2005decision}; it distributes the mass functions of multi-element focal elements equally under the principle of keeping maximum uncertainty. Cobb and Shenoy \cite{cobb2006plausibility} proposed the plausibility transformation method (PTM) based on the elements' Pl functions, which has Dempster combination rule consistency. In addition, there are many other probability transformation methods \cite{CHEN2021104438}, and Han \textit{et al.} evaluated them in \cite{han2015evaluation}. Probability transformation can also be regarded as a loss of non-specificity. Previous methods only give the results of the transformation and do not describe the process of generating the probability. Therefore, their reasonability can only be evaluated from the results, which is not comprehensive enough.
In this paper, we propose a possible PPT generation process based on fractals, and based on this process, we propose a new belief entropy called fractal-based belief (FB) entropy to measure the total uncertainty of a BPA. After evaluation and comparison, we show that FB entropy meets the requirements in numerical calculation and has a corresponding physical model. The contributions of this paper are summarized as follows: (1) We are the first to use the fractal idea to simulate the process of probability transformation. (2) FB entropy not only measures the uncertainty of a BPA reasonably but also has a corresponding physical model, which is superior to existing belief entropies. (3) We do not deliberately consider discord and non-specificity when defining FB entropy, but we can separate the two parts of uncertainty based on the fractal result. For different BPAs, the proportions of the two parts are different, which is more intuitive. In general, the structure of this paper is as follows:
\begin{description}
\item[$\bullet$]Section \ref{preliminaries} introduces the basic concepts of DSET, common probability transformation methods, and classical uncertainty measurements of BPA.
\item[$\bullet$]In Section \ref{process}, we simulate the process of PPT based on the fractal idea and give it a possible explanation.
\item[$\bullet$]Section \ref{fbentropy1} is the core of the paper. According to the process of PPT, we propose FB entropy. After evaluating its properties, we prove that FB entropy measures a BPA rationally.
\item[$\bullet$]Some unique advantages of FB entropy are shown in Section \ref{fbentropy2} by comparison with common methods.
\end{description}
\section{Preliminaries}
\label{preliminaries}
\subsection{Dempster-Shafer evidence theory}
\begin{definition}[BPA]\label{bpa}\cite{dempster2008upper}
For a finite set $\Theta$ with $n$ elements, it can be written as $\Theta=\{\theta_{1},\dots,\theta_{n}\}$, which is called a discernment framework in DSET. The mass functions of elements in $2^\Theta$ can be written as $\mathbb{B}(2^{\Theta})=\{m(\varnothing),m(\theta_{1}),\dots, m(\theta_{n}), m(\theta_{1}\theta_{2}),\dots, m(\theta_{1}\dots\theta_{n})\}$, and $m(F_i)$ satisfies
\begin{equation}
m(\varnothing)=0;~~~~~~~\sum_{F_{i}\in 2^\Theta} m(F_{i})=1;~~~~~~m(F_{i})\geqslant 0,
\end{equation}
where $\mathbb{B}(2^{\Theta})$ is basic probability assignment (BPA), and $\{F_i\}$ is called focal element.
\end{definition}
This paper only discusses normalized BPA, so $m(\varnothing)=0$. In addition to mass functions, belief functions also can store the information of BPA.
\begin{definition}[Belief functions]\label{beliefinterval}\cite{shafer1976mathematical}
For an $n$-element discernment framework $\Theta$, with its BPA $\mathbb{B}(2^{\Theta})$, the belief (Bel) function, plausibility (Pl) function, and commonality (q) function of focal elements are defined as
\begin{equation}
\begin{aligned}
&Bel(F_{i})=\sum_{G_{i}\subseteq F_{i}}m(G_{i})=1-Pl(\overline{F_{i}}),\\
&Pl(F_{i})=\sum_{G_{i}\cap F_{i} \neq \varnothing~and~G_{i}\subseteq \Theta}m(G_{i})=1-Bel(\overline{F_{i}}),\\
&q(F_{i})=\sum_{F_i\subseteq G_i}m(G_i).
\end{aligned}
\end{equation}
It is obvious that $Bel(F_{i})\leqslant Pl(F_{i})$, and the belief interval of focal element $F_{i}$ is $[Bel(F_{i}),~Pl(F_{i})]$.
\end{definition}
The two representations in Definitions \ref{bpa} and \ref{beliefinterval} are usually used to calculate the uncertainty of information in DSET. Next, some common probability transformation methods are shown.
\subsection{Common probability transformation methods}
\begin{definition}[PPT] \cite{smets2005decision}
\label{PPT}
For an $n$-element discernment framework $\Theta$ with BPA $\mathbb{B}(2^{\Theta})$, the pignistic probability transformation (PPT) $BetP(\theta_i)$ is defined as:
\begin{equation}\label{PPTe}
BetP(\theta_i)=\sum_{\theta_{i} \in F_{i}~and~F_{i}\in 2^\Theta} \frac{m(F_i)}{|F_{i}|} ,
\end{equation}
where $|F_{i}|$ is the cardinality of focal element $F_{i}$.
\end{definition}
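As a minimal sketch of Equation \ref{PPTe}, a BPA can be represented as a dictionary from frozensets to masses (an illustrative convention, not from the paper):

```python
def pignistic(bpa):
    """Pignistic probability transformation: each focal element's mass
    is shared equally among its elements (BetP)."""
    betp = {}
    for focal, mass in bpa.items():
        for theta in focal:
            betp[theta] = betp.get(theta, 0.0) + mass / len(focal)
    return betp

bpa = {frozenset("a"): 0.2, frozenset("b"): 0.4, frozenset("ab"): 0.4}
betp = pignistic(bpa)
# BetP(a) = 0.2 + 0.4/2 = 0.4 and BetP(b) = 0.4 + 0.4/2 = 0.6
```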
\begin{definition}[PTM]\cite{cobb2006plausibility}
\label{PPF}
For an $n$-element discernment framework $\Theta$ with BPA $\mathbb{B}(2^{\Theta})$, the plausibility transformation method (PTM) $PnPl(\theta_i)$ is defined as:
\begin{equation}
PnPl(\theta_i)=\frac{Pl(\theta_{i})}{\sum_{j=1}^{n}Pl(\theta_{j})}
\end{equation}
\end{definition}
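The plausibility transformation can be sketched in the same dictionary-based style (an assumed representation, for illustration only):

```python
def plausibility_transform(bpa, frame):
    """PTM: normalise the singletons' plausibilities, Pl(theta) being the
    total mass of focal elements containing theta."""
    pl = {t: sum(m for f, m in bpa.items() if t in f) for t in frame}
    total = sum(pl.values())
    return {t: pl[t] / total for t in frame}

bpa = {frozenset("a"): 0.2, frozenset("b"): 0.4, frozenset("ab"): 0.4}
pnpl = plausibility_transform(bpa, "ab")
# Pl(a) = 0.6 and Pl(b) = 0.8, so PnPl(a) = 3/7 and PnPl(b) = 4/7
```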
Besides PTM, other probability transformation methods are specializations of PPT, i.e., the support of $m(\theta_i)$ plus the support degree of multi-element focal elements for $\theta_i$. Though PTM does not satisfy the upper and lower probability rule, it is the only method that satisfies Dempster combination rule consistency \cite{han2015evaluation}.
\subsection {Classical uncertainty measurements (UM) of BPA}
\begin{definition}[UM]\label{um}
For a discernment framework $\Theta=\{\theta_{1},\theta_{2},\dots,\theta_{n}\}$, let its BPA be $\mathbb{B}(2^{\Theta})$, its PPT be $P_{\mathbb{B}}(\theta_{i})$, and its PTM be $Pl\_ P_{m}(\theta_{i})$. Common uncertainty measurements of BPA and their maximum distributions are shown in Table \ref{d1t1}.
\newgeometry{left=2cm, right=2cm, top=2cm, bottom=2cm}
\begin{table*}[htbp!]\footnotesize
\centering
\begin{adjustbox}{angle=90}
\begin{tabular}{ccccc}
\Xhline{1.4pt}
Methods & Expression &\tabincell{c}{Maximum distribution}&Maximum& Remark \\
\Xhline{1.4pt}
\tabincell{l}{Ambiguity measure\cite{jousselme2006measuring}}&\tabincell{l}{$H_{j}=-\sum_{i=1}^{n}P_{\mathbb{B}}^{\Theta}(\theta_{i})log(P_{\mathbb{B}}^{\Theta}(\theta_{i}))$}&$P_{\mathbb{B}}^{\Theta}(\theta_{i})=\frac{1}{|\Theta|}$&$\log (|\Theta|)$&\tabincell{l}{Elements;\\Cardinality;\\Mass function}\\
\hline
Confusion measurement \cite{hohle1982entropy}& $C_{H}=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log Bel(F_{i})$&$m({\theta_{i}})=\frac{1}{|\Theta|}$&$\log (|\Theta|)$& \tabincell{l}{Mass function;\\Bel function} \\
\hline
Dissonance measurement \cite{yager1983entropy}&$E_Y=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log Pl(F_{i})$& \tabincell{l}{$m(F_i)=\frac{1}{K}, \forall 1 \rightarrow K,$\\$ \{F_1\}\cap \cdots \cap \{F_K\}= \varnothing $}& $\log (|\Theta|)$ &\tabincell{l}{Mass function;\\Pl function}\\
\hline
Hartley entropy \cite{higashi1982measures}& $E_{H}=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log |F_{i}|$ &$m(\Theta)=1$&$\log (|\Theta|)$& \tabincell{l}{Mass function; \\Cardinality} \\
\hline
Discord measurement \cite{klir1990uncertainty}& \tabincell{l}{$S_{KP}=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i}) \log \sum_{G_{i}\in 2^{\Theta}}m(G_{i})\frac{|F_{i}\cap G_{i}|}{|G_{i}|}$} &$m({\theta_{i}})=\frac{1}{|\Theta|}$&$\log (|\Theta|)$& \tabincell{l}{Mass function;\\ Cardinality} \\
\hline
\tabincell{l}{Aggregate uncertainty \\(AU) measurement \cite{AU}}& $argmax_{\mathcal{P}}[-\sum_{i=1}^{n}p(\theta_{i})\log p(\theta_{i})]$&$BetP(\theta_{i})=\frac{1}{|\Theta|}$&$\log (|\Theta|)$& \tabincell{l}{Mass function}\\
\hline
\tabincell{l}{Pal \textit{et al.}'s entropy \cite{pal1993uncertainty}}&$E_{p}=-\sum_{F_{i}\in 2^{\Theta}} m(F_{i}) \log \frac{m(F_{i})}{{|F_{i}|}}$&$m(F_i)=\frac{|F_i|}{|\Theta|\cdot2^{|\Theta|-1}}$&$\log (|\Theta|\cdot 2^{|\Theta|-1})$&\tabincell{l}{Mass function;\\Cardinality}\\
\hline
Deng entropy\cite{Deng2020ScienceChina}&$E_{d}=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log \frac{m(F_{i})}{2^{|F_{i}|}-1}$&\tabincell{l}{$m(A)=\frac{2^{|F_{i}|}-1}{\sum_{F_{i}\in 2^{\Theta}} 2^{|F_{i}|}-1}$}&$\log (3^{|\Theta|}-2^{|\Theta|})$&\tabincell{l}{Mass function;\\power set}\\
\hline
SU measurement \cite{wang2018uncertainty}& \tabincell{l}{$SU=\sum_{\theta_i\in\Theta}[-\frac{Pl(\theta_{i})+Bel(\theta_{i})}{2}\log\frac{Pl(\theta_{i})+Bel(\theta_{i})}{2}+Pl(\theta_{i})-Bel(\theta_{i})]$}& \tabincell{l}{$Bel(\theta_{i})=0; $\\$Pl(\theta_{i})=1$}&$|\Theta|$&\tabincell{l}{Bel function;\\Pl function;\\Elements}\\
\hline
JS entropy \cite{jirouvsek2018new}&\tabincell{l}{$H_{JS}=\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log(|F_{i}|)-\sum_{i=1}^{n}Pl\_ P_{m}(\theta_{i})\log Pl\_ P_{m}(\theta_{i})$}&$m(\Theta)=1$&$2\log (|\Theta|)$&\tabincell{l}{PMT;\\Hartley entropy;\\Elements}\\
\hline
Decomposable entropy \cite{jirouvsek2020properties}&$H_q=\sum_{F_i\in 2^\Theta}(-1)^{|F_i|}q(F_i)\log q(F_i)$&NaN&NaN&\tabincell{l}{q function;\\Focal element}\\
\hline
Yang and Han's method \cite{yang2016new}&$TU^l=1-\frac{1}{n}\cdot\sqrt{3}\sum_{\theta_i\in\Theta}d^l([Bel(\theta_i),Pl(\theta_i)],[0,1])$&$m(\Theta)=1$&$1$&\tabincell{l}{Bel function;\\Pl function;\\Distnace}\\
\Xhline{1.4pt}
\end{tabular}
\end{adjustbox}
\caption{Classical and novel uncertainty measurements of BPA. Among all these entropies, only JS entropy attains its maximum at the intuitive maximum-uncertainty distribution.}
\label{d1t1}
\end{table*}
\restoregeometry
\end{definition}
\section{The process of probability transformation based on fractal} \label{process}
\subsection{Simulating probability transformation from fractal perspective}
Even though a BPA can express more information by assigning mass to multi-element focal elements, in reality all we observe are probability distributions. So how to reasonably transform a BPA into a probability distribution is the key to combining BPA with practical applications. PPT serves as the decision-making layer in the TBM and has wide applications. We propose a process for PPT based on fractals, assuming that the result of PPT is generated under the action of time. For the $2$-element discernment framework $X=\{A,B\}$, the process of a BPA transforming into a probability distribution is shown in Figure \ref{f1}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{f1.png}
\caption{The left part of Figure \ref{f1} shows that the mass function of $AB$ is split to $A$ and $B$ in different ways over time. When $n\rightarrow \infty$, the BPA transforms into a probability distribution. The right part shows one step of the transformation within a unit time, so we can think of the transformation process as $AB$ continuously splitting itself into its own power set $\{A, B, AB\}$.}
\label{f1}
\end{figure}
Self-similarity is a basic property of fractal theory, that is, in the process of fractal, the whole and part are similar. In order to show this property more clearly, we use Figure \ref{f2} to show the process of splitting.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{f2.png}
\caption{The small triangle formed by the new split $AB^{n-1}$ and the large triangle of the entire split satisfy self-similarity.}
\label{f2}
\end{figure}
The entropy change over time due to the fractal geometry can be assimilated to information growth through scale refinement. Wen \textit{et al.} also proved this point in their work on information dimension \cite{wen2021invited,Gao2021Information}. As shown in Figures \ref{f1} and \ref{f2}, as the number of splittings increases, before the newly generated $A$ and $B$ are fused with the original $A$ and $B$, the overall belief entropy increases, which conforms to the idea above. The new BPA generated after the fusion means the system gains new knowledge (the splitting method of the original BPA), so the overall belief entropy remains unchanged or decreases, which also conforms to information theory.
\subsection{The process of PPT}
For a given BPA, when the probability transformation proceeds without receiving outside knowledge, PPT is the most intuitive choice, and it uses an even splitting method to ensure the largest uncertainty of the information. According to Equation \ref{PPTe} and Figure \ref{f1}, the transformation in each unit time allocates equal mass to subsets with the same cardinality, and the probability obtained at the end of the iteration must be the PPT. Example \ref{ee1} shows the differences between different allocations.
\begin{example}
\label{ee1}
Given a discernment framework $X=\{a,b,c\}$ with BPA $m(X)=1$, the change of the mass functions after each splitting is as follows,
\begin{equation}
\begin{aligned}
&m^{n}(F_{i})=m^{n-1}(F_{i})+\frac{1}{p}m^{n-1}(G_{i})+\frac{1}{q}m^{n-1}(H_{i}) \\
&m^{n}(G_{i})=(1-\frac{2}{p})m^{n-1}(G_{i})+\frac{1}{q}m^{n-1}(H_{i})\\
&m^{n}(H_{i})=(1-\frac{6}{q})m^{n-1}(H_{i}),
\end{aligned}
\end{equation}
where $p\geqslant 3$ and $q\geqslant 7$, $|F_{i}|=1,~|G_{i}|=2,~|H_{i}|=3$, and $m^{n}(A)$ denotes the mass function of $A$ after $n$ splittings. Because a $3$-element discernment framework has three $2$-element focal elements and one $3$-element focal element, $p$ and $q$ satisfy $p+4=q$. When $p$ and $q$ are given different values, the change of the mass functions during the splitting process is shown in Figure \ref{nf1}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{nf1.png}
\caption{Regardless of the values of $p$ and $q$, as the number of splits increases, BPA $m(X)=1$ is eventually transformed into a uniformly distributed probability $m(a)=m(b)=m(c)=\frac{1}{3}$, which is the same as the result of PPT.}
\label{nf1}
\end{figure}
Hartley entropy \cite{higashi1982measures} represents the non-specificity part of the uncertainty in a BPA. When Hartley entropy is $0$, the BPA degenerates into a probability distribution. So, as shown in Figure \ref{fh1}, in the process of a BPA transforming into a probability distribution, the Hartley entropy of the BPA gradually decreases from its maximum value to $0$.
\begin{figure}[hbtp!]
\centering
\includegraphics[width=0.95\textwidth]{fh1.png}
\caption{The trend of Hartley entropy in Example \ref{ee1}.}
\label{fh1}
\end{figure}
\end{example}
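The splitting process can also be simulated numerically. The sketch below iterates the uniform rule used later in Section \ref{fbentropy1} (every focal element spreads its mass equally over its non-empty subsets, itself included); the singleton masses indeed approach the PPT result $\frac{1}{3}$:

```python
from itertools import combinations

def split_once(bpa):
    """One unit-time step: each focal element G splits its mass equally
    over its 2^|G| - 1 non-empty subsets (itself included)."""
    out = {}
    for focal, mass in bpa.items():
        share = mass / (2 ** len(focal) - 1)
        for r in range(1, len(focal) + 1):
            for sub in combinations(sorted(focal), r):
                key = frozenset(sub)
                out[key] = out.get(key, 0.0) + share
    return out

bpa = {frozenset("abc"): 1.0}
for _ in range(60):
    bpa = split_once(bpa)
# Multi-element masses vanish and each singleton tends to 1/3, i.e. the PPT
```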
According to the above description, the PPT process is shown in detail; the transformation process differs for different scales of unit-time division, but the result always maintains a uniform allocation over all elements.
\subsection{Discussion the process of probability transformation}
Besides PPT, other methods can also be written as fractal processes if they satisfy the upper and lower probability requirement. But those methods only give the calculation of the results, and result-oriented inference cannot accurately simulate the transformation process, so we do not discuss them here. Although PTM cannot be written in the form of a split mass function, it can be seen as a continuous fusion with the uniform focal element assignment $m(F_i)=\frac{1}{2^{|\Theta|}-1}$. Example \ref{eppf} shows the transformation process of PTM in Definition \ref{PPF}.
\begin{example}\label{eppf}
Given a discernment framework $X=\{a,b\}$, and its BPA is $\mathbb{B}(2^{\Theta})=\{m(a)=0.2,m(b)=0.4,m(ab)=0.4\}$. Based on Definition \ref{PPF}, the PTM of $\mathbb{B}(2^{\Theta})$ is $\{PnPl(a)=\frac{3}{7},PnPl(b)=\frac{4}{7}\}$. If we continually use another BPA $\mathbb{B}(2^{X})=\{m(a)=\frac{1}{3},m(b)=\frac{1}{3},m(ab)=\frac{1}{3}\}$ to fuse $\mathbb{B}(2^{\Theta})$ by the Dempster combination rule, the results are shown in Figure \ref{fppf}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{fppf.png}
\caption{With the continuous fusion $\mathbb{B}(2^{X})$ in Example \ref{eppf}, $\mathbb{B}(2^{\Theta})$ eventually transform into $PnPl$.}
\label{fppf}
\end{figure}
\end{example}
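To check this numerically, the sketch below implements Dempster's rule and fuses the uniform assignment repeatedly; the result converges to the PnPl values of Example \ref{eppf}:

```python
def dempster(b1, b2):
    """Dempster's rule of combination for two BPAs over the same frame."""
    combined, conflict = {}, 0.0
    for f1, m1 in b1.items():
        for f2, m2 in b2.items():
            inter = f1 & f2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + m1 * m2
            else:
                conflict += m1 * m2
    return {f: m / (1.0 - conflict) for f, m in combined.items()}

bpa = {frozenset("a"): 0.2, frozenset("b"): 0.4, frozenset("ab"): 0.4}
uniform = {frozenset("a"): 1/3, frozenset("b"): 1/3, frozenset("ab"): 1/3}
for _ in range(50):
    bpa = dempster(bpa, uniform)
# m(a) -> 3/7, m(b) -> 4/7 and m(ab) -> 0, matching PnPl
```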
In this section, we present the implementation process of the existing main probability transformation methods. For the newly proposed probability transformation methods, the rationality can be verified according to the process ideas given in this section. More importantly, for BPA, its uncertainty can be measured by using the intermediate quantity of its transformation process. The specific method will be given in next section.
\section{Fractal-based belief entropy}\label{fbentropy1}
A new belief entropy called fractal-based belief (FB) entropy is proposed in this section based on the process of PPT. It can not only measure the uncertainty of BPAs, but its maximum entropy distribution also corresponds to an actual physical problem. In Example \ref{ee1}, when $p$ and $q$ take different values, the evolution speed of PPT per unit time differs. To better express the concept of ``uniformity'', we rule that each focal element is split equally into its power set per unit time.
\subsection{Fractal-based belief entropy}
\begin{definition}[FB entropy]\label{bfentropy}
For a discernment framework $\Theta=\{\theta_{1},\theta_{2},\dots,\theta_{n}\}$, its BPA is $\mathbb{B}(2^{\Theta})$, and the fractal-based (FB) entropy of $\mathbb{B}(2^{\Theta})$ is defined as
\begin{equation}\label{bfentropye}
E_{FB}=-\sum_{F_{i}\subseteq \Theta(F_{i}\in 2^{\Theta})} m_{F}(F_{i})\log m_{F}(F_{i}),
\end{equation}
where
\begin{equation}\label{mfe}
m_{F}(F_{i})=\frac{m(F_{i})}{2^{|F_{i}|}-1}+\sum_{F_{i}\subseteq G_{i}\cap|F_{i}|<|G_{i}|}\frac{m(G_{i})}{2^{|G_{i}|}-1}.
\end{equation}
The new set $\mathbb{B}_{F}(2^{\Theta})$ composed by $m_{F}(F_{i})$ is called fractal-based basic probability assignment (FBBPA).
\end{definition}
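A sketch of Definition \ref{bfentropy}, again with the dictionary-of-frozensets convention; base 2 is used here for illustration, matching the physical model of Example \ref{e2}. For the vacuous BPA on a $2$-element frame the FBBPA is uniform over the three non-empty subsets, so $E_{FB}=\log_2 3$:

```python
from itertools import combinations
from math import log2

def fbbpa(bpa):
    """FBBPA: every focal element G_i contributes m(G_i)/(2^|G_i| - 1)
    to each of its non-empty subsets, itself included."""
    out = {}
    for focal, mass in bpa.items():
        share = mass / (2 ** len(focal) - 1)
        for r in range(1, len(focal) + 1):
            for sub in combinations(sorted(focal), r):
                key = frozenset(sub)
                out[key] = out.get(key, 0.0) + share
    return out

def fb_entropy(bpa):
    """FB entropy: Shannon entropy (base 2) of the FBBPA."""
    return -sum(m * log2(m) for m in fbbpa(bpa).values() if m > 0)

e2 = fb_entropy({frozenset("ab"): 1.0})    # log2(2^2 - 1)
e3 = fb_entropy({frozenset("abc"): 1.0})   # log2(2^3 - 1)
```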
By observing Equation \ref{mfe}, we find that $\mathbb{B}_{F}(2^{\Theta})$ is obtained by one unit-time step of the PPT process. For $m(\Theta)=1$, intuitively the most uncertain BPA, after one unit-time splitting, $\mathbb{B}_{F}(2^{\Theta})$ is a uniform distribution over $2^{\Theta}$, which coincides with the maximum entropy distribution of Shannon entropy. So FBBPA is neither a BPA nor a probability distribution, but describes the characteristics of a BPA from the perspective of probability.
\subsection{The Maximum FB entropy and its physical meaning}
\begin{definition}[Maximum FB entropy]\label{maxbfentropy}
For a discernment framework $\Theta=\{\theta_{1},\dots,\theta_{n}\}$, its BPA is $\mathbb{B}(2^{\Theta})$. The maximum fractal-based belief entropy $E_{FB}^{\uparrow}(\Theta)$ is
\begin{equation}\label{maxbfentropy}
\begin{aligned}
E_{FB}^{\uparrow}=\log (2^{n}-1),
\end{aligned}
\end{equation}
when $m(\Theta)=1.$
\end{definition}
\textbf{\emph{Proof.}} Let
\begin{equation}
E=-\sum_{F_{i}\in2^\Theta}m_{F}(F_{i})\log m_{F}(F_{i}),
\end{equation}
and according to Equation \ref{mfe}, it is obvious that $\sum_{A\subseteq \Theta}m_{F}(A)=1$. So the Lagrange function can be denoted as
\begin{equation}
E_{0}=-\sum_{F_{i}\in2^\Theta}m_{F}(F_{i})\log m_{F}(F_{i})+\lambda (\sum_{F_{i}\subseteq \Theta}m_{F}(F_{i})-1),
\end{equation}
and calculate its gradient
\begin{equation}
\frac{\partial E_{0}}{\partial m_{F}(F_{i})}=-\log m_{F}(F_{i})-\frac{1}{\ln a}+\lambda=0.
\end{equation}
For all $F_{i}\subseteq \Theta$
\begin{equation}
\log m_{F}(F_{i})=-\frac{1}{\ln a}+\lambda=k,
\end{equation}
so when $m_{F}(F_{i})=\frac{1}{2^{|\Theta|}-1}$ and $m(\Theta)=1$, $E_{FB}(\Theta)$ reaches the maximum.
\qed
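The maximum can also be checked numerically: since the FBBPA is a distribution over at most $2^n-1$ nonempty subsets, no BPA can exceed $\log_2(2^n-1)$, and the vacuous BPA attains it. A small randomized check on a $3$-element frame (the helper names and the random sampling are illustrative):

```python
import random
from itertools import combinations
from math import log2

def fb_entropy(m):
    mf = {}
    for F, mass in m.items():
        w = mass / (2 ** len(F) - 1)
        for r in range(1, len(F) + 1):
            for S in combinations(sorted(F), r):
                mf[frozenset(S)] = mf.get(frozenset(S), 0.0) + w
    return -sum(p * log2(p) for p in mf.values() if p > 0)

theta = ('a', 'b', 'c')
subsets = [frozenset(c) for r in range(1, 4) for c in combinations(theta, r)]
bound = log2(2 ** 3 - 1)          # E_FB of the vacuous BPA, log2(7)

random.seed(0)
worst = 0.0
for _ in range(1000):
    w = [random.random() for _ in subsets]
    m = {S: x / sum(w) for S, x in zip(subsets, w)}
    worst = max(worst, fb_entropy(m))
# every random BPA stays at or below the bound attained by m(Theta)=1
```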
The maximum Shannon entropy, called the information volume, and its probability distribution can solve real physical problems in practical applications. As a generalization of Shannon entropy, the maximum FB entropy also has a corresponding physical model in reality, which is shown in Example \ref{e2}.
\begin{example}[Physical model of maximum FB entropy]\label{e2}
Assume there are $64$ teams participating in a competition. The only information source is the organizer, and we can ask him whether some teams are champions. Our goal is to find all champions.
\begin{description}
\item[\textbf{Q}:] How many times inquiring can we find the champion at least?
\item[\textbf{Case1}:]We know the number of champions is $1$.
\item[\textbf{Case2}:]We don’t know the exact number of champions.
\end{description}
Figure \ref{FB_PHYSICAL} shows the difference between the $2$ Cases.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{FB_PHYSICAL.png}
\caption{The difference between $2$ Cases.}
\label{FB_PHYSICAL}
\end{figure}
Case $1$ can be written as a uniform probability distribution with $64$ basic events $\{p(1)=\cdots=p(64)=\frac{1}{64}\}$. The Shannon entropy with base $2$ is $\log_2 64=6$, so we can find the champion with only $6$ inquiries. But for Case $2$, we are not sure about the number of champions, so all elements of the power set of the $64$-team frame have equal probability to win the championship. It can be written as $\{p(1)=\frac{1}{2^{64}-1},p(2)=\frac{1}{2^{64}-1},\cdots,p(1\cdots64)=\frac{1}{2^{64}-1}\}$, which also corresponds to the FBBPA of the maximum FB entropy $\{m_F(1)=\frac{1}{2^{64}-1},m_F(2)=\frac{1}{2^{64}-1},\cdots,m_F(1\cdots64)=\frac{1}{2^{64}-1}\}$, so it corresponds to the BPA $m(1\cdots64)=1$ and FB entropy $E_{FB}=\log_2 (2^{64}-1)\approx64$. The number of inquiries in Case $2$ is $64$, which means that we can only find all the champions by inquiring about all teams.
\end{example}
Example \ref{e2} illustrates that FB entropy is a generalization of Shannon entropy in the physical model of maximum entropy.
\subsection{Evaluation of FB entropy}
According to the $10$ requirements for total uncertainty measurements of BPA in \cite{klir2013uncertainty,abellan2008requirements}, we evaluate the properties of FB entropy to prove its advantages. Among them, $\heartsuit$ means that FB entropy satisfies the proposition, and $\spadesuit$ means that FB entropy does not satisfy it. For the unsatisfied propositions, we give explanations and prove the rationality of FB entropy. Under an $n$-element discernment framework $\Theta$ with BPA $\mathbb{B}(2^{\Theta})$, we evaluate the FB entropy in Propositions \ref{p1}--\ref{p10}.
\begin{proposition}[Probabilistic consistency ($\heartsuit$)]\label{p1}
When $m(F_i)=0$ for all $|F_i|>1$, the total uncertainty measurement should degenerate into the Shannon entropy.
\end{proposition}
\textbf{\emph{Proof.}} When $\mathbb{B}(2^{\Theta})$ satisfies $\sum_{\theta_i\in\Theta}m(\theta_i)=1$, substitute it into Equations \ref{bfentropye} and \ref{mfe}:
\begin{equation}
E_{FB}=-\sum_{F_{i} \in 2^\Theta} m_{F}(F_{i})\log m_{F}(F_{i}) = -\sum_{i=1}^{n}m(\theta_{i})\log m(\theta_{i})=H(\mathbb{B}).
\end{equation}
So the FB entropy satisfies the Proposition \ref{p1}.
\qed
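This degeneration is easy to verify numerically: for a Bayesian BPA every focal element is a singleton, so the fractal split ($2^1-1=1$ subset) leaves the masses unchanged. The helpers below are illustrative:

```python
from itertools import combinations
from math import log2

def fbbpa(m):
    mf = {}
    for F, mass in m.items():
        w = mass / (2 ** len(F) - 1)
        for r in range(1, len(F) + 1):
            for S in combinations(sorted(F), r):
                mf[frozenset(S)] = mf.get(frozenset(S), 0.0) + w
    return mf

def fb_entropy(m):
    return -sum(p * log2(p) for p in fbbpa(m).values() if p > 0)

# Bayesian BPA: the FBBPA equals the BPA, so FB entropy reduces to Shannon entropy
bayes = {frozenset({'a'}): 0.3, frozenset({'b'}): 0.7}
shannon = -(0.3 * log2(0.3) + 0.7 * log2(0.7))
```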
\begin{proposition}[Set consistency ($\spadesuit$)]\label{p2}
The total uncertainty measurement of the vacuous BPA ($m(\Theta)=1$) should equal the Hartley entropy, $TU=E_H=\log |\Theta|$.
\end{proposition}
\textbf{\emph{Proof.}} For the vacuous BPA, $E_{FB}=\log (2^{|\Theta|}-1) \neq \log |\Theta| = E_H.$
So the FB entropy doesn't satisfy Proposition \ref{p2}.
\textbf{\emph{Explanation.}} The uncertainty of a probability distribution is caused by the discord between basic events. The maximum Shannon entropy probability distribution is uniform, which means assigning the same support degree to all events. In DSET, a BPA can not only reflect the discord between elements, but also contain the uncertainty of the distribution itself. In Example \ref{e2}, a BPA under an $n$-element discernment framework and a probability distribution over a random variable with $2^n-1$ events can express the same information, which shows that a BPA can express more information than a probability distribution of the same dimension. So it is rational for the maximum belief entropy to be larger than the maximum Shannon entropy. The maximum values of some common total uncertainty measurements and of Shannon entropy are shown in Figure \ref{f5} to show this property more intuitively.
\begin{figure}
\begin{minipage}[htbp]{0.5\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f5.jpg}
\caption{The maximum uncertainty of common total uncertainty measurements is larger than the maximum Shannon entropy, so the requirement of set consistency is not reasonable.}
\label{f5}
\end{minipage}
\begin{minipage}[htbp]{0.5\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f6.jpg}
\caption{With the uncertainty increasing, the FB entropy, JS entropy and SU measurement are also increasing.}
\label{f6}
\end{minipage}
\end{figure}
\qed
\begin{proposition}[Monotonicity($\heartsuit$)]\label{p6}
If BPAs $\mathbb{B}(2^{\Theta})$ and $\mathbb{B}(2^{\Omega})$ have the following relationship: $\mathbb{B}(2^{\Theta}) \subseteq \mathbb{B}(2^{\Omega})$, their total uncertainty measurements should satisfy $UM(\mathbb{B}(2^{\Theta})) \leq UM(\mathbb{B}(2^{\Omega}))$.
\end{proposition}
\textbf{\emph{Proof.}}
Luo \textit{et al.} \cite{Luo2019matrix} propose a widely used method of BPA negation, and the direction of negation is the direction of ignorance. For a discernment framework $\Theta=\{\theta_{1},\theta_{2}\}$, its process of negation over $10$ iterations is shown in Table \ref{t2}, and the trends of JS entropy, SU measurement, Deng entropy and FB entropy are shown in Figure \ref{f6}. According to their trends, we can find that JS entropy, SU measurement and FB entropy are continuously rising, which illustrates that they satisfy Proposition \ref{p6}. But for Deng entropy, its maximum entropy distribution is not the vacuous BPA, so it does not satisfy monotonicity in this case.
\begin{table}[htbp!]
\caption{Intuitively, as the number of negations increases, $m(\Theta)$ gradually increases, and the uncertainty expressed by the BPA becomes larger}
\label{t2}
\begin{center}
\begin{tabular}{c|cccccccccc}
\Xhline{1.4pt}
Times& $1$& $2$& $3$& $4$& $5$& $6$& $7$& $8$& $9$& $10$ \\
\Xhline{1.4pt}
$m(x_{1}) $ & $0.6000$& $0.0500$& $0.1500$& $0.0125$& $0.0375$& $0.0031$& $0.0094$& $0.0008$& $0.0023$& $0.0002$\\
$m(x_{2})$ & $0.1000$& $0.3000$& $0.0250$& $0.0750$& $0.0063$ & $0.0187$& $0.0016$& $0.0047$& $0.0004$& $0.0012$ \\
$m(x_{1}x_{2})$& $0.3000$& $0.6500$& $0.8250$& $0.9125$& $0.9562$ & $0.9781$& $0.9891$& $0.9945$& $0.9973$& $0.9986$ \\
\Xhline{1.4pt}
\end{tabular}
\end{center}
\end{table}
\qed
\begin{proposition}[Range($\spadesuit$)]\label{p3}
The range of a total uncertainty measurement should be $[0,\log |\Theta|]$.
\end{proposition}
\textbf{\emph{Proof.}}
If $m(\theta_{i})=1$, the FB entropy reaches the minimum $0$, so $E_{FB}^{\downarrow}=0$. In Definition \ref{maxbfentropy}, we have proven that $E_{FB}^{\uparrow}=\log(2^{n}-1)$. Based on the above, the range of FB entropy is $[0,\log (2^n-1)]$, which doesn't satisfy Proposition \ref{p3}.
The explanation of Proposition \ref{p3} is similar to Proposition \ref{p2}.
\qed
\begin{proposition}[Additivity($\heartsuit$)]\label{p4}
Suppose $X$, $Y$ and $Z$ are $3$ discernment frameworks. Among them, $X$ and $Y$ are independent, and $Z=X\times Y$. The total uncertainty measurement should satisfy
\begin{equation}
TUM(Z)=TUM(X)+TUM(Y),
\end{equation}
where $TUM$ is a general term for total uncertainty measurements.
\end{proposition}
\textbf{\emph{Proof.}} Joint BPA has different definitions according to whether $m(\varnothing)=0$. Smets \cite{smets1993belief} defined the generalized Bayesian theorem under the condition $m(\varnothing)\neq 0$, and for the joint frame $\Psi=\Theta \times \Omega$, the number of mass functions satisfies $2^{\Psi}=2^{\Theta + \Omega}$. But in this paper, we consider BPAs to be normalized, so the number of joint mass functions for $X \times Y$ is $(2^{|X|}-1)(2^{|Y|}-1)$. Figure \ref{f61} uses a specific case to show the difference between the $2$ definitions intuitively.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.85\textwidth]{FB_add.png}
\caption{Suppose $2$ discernment frameworks $X=\{x_1,x_2\}$ and $Y=\{y_1,y_2\}$; using Smets' joint frame we can get a new frame $Z=\{z_{11},z_{12},z_{21},z_{22}\}$. For normalized BPAs, the product of the original BPAs cannot cover all focal elements under the new frame, so the mass functions of the yellow parts are always $0$, which conflicts with the definition of a joint probability distribution. So we define the mass functions of the joint BPA to be generated by the product of the original BPAs (blue part).}
\label{f61}
\end{figure}
According to the above description, the number of mass functions of the joint BPA is less than the size of the power set under the joint framework. According to Definition \ref{bfentropy}, the calculation method of the FBBPA in this case no longer assigns mass functions to the whole power set, but assigns them to subsets which have an inclusion relationship under the current frame.
For joint frame $Z =X\times Y $, we define joint BPA $m^{Z}$ and joint FBBPA $m_{F}^Z$ as follows:
\begin{equation}\label{mze}
\begin{aligned}
&m^Z(z_{ij})=m(x_i)\times m(y_j);\\
&m^Z(z_{ij}z_{im})=m(x_i)\times m(y_j y_m);\\
&m^Z(z_{ij}z_{im}z_{nj}z_{nm})=m(x_i x_n) \times m(y_j y_m);
\end{aligned}
\end{equation}
\begin{equation}\label{mzfe}
\begin{aligned}
\forall F_i \subseteq Z,\quad m_{F}^Z(F_i)=\sum_{F_i\subseteq K_i;\,G_i\times H_i =K_i}\frac{m^Z(K_i)}{(2^{|G_i|}-1)(2^{|H_i|}-1)}.
\end{aligned}
\end{equation}
For $C\subseteq Z$, $A\subseteq X$ and $B \subseteq Y$ satisfying $A\times B=C$, according to Equations \ref{mze} and \ref{mzfe},
\begin{equation}
\begin{aligned}
m^{Z}_F(C)&=\sum_{C \subseteq K_i;\,G_i\times H_i =K_i;\,A\subseteq G_i;\,B\subseteq H_i }\frac{m^Z(K_i)}{(2^{|G_i|}-1)(2^{|H_i|}-1)}\\
&=\sum_{C \subseteq K_i;\,G_i\times H_i =K_i;\,A\subseteq G_i;\,B\subseteq H_i }\frac{m(G_i)\times m(H_i)}{(2^{|G_i|}-1)(2^{|H_i|}-1)}\\
&=\Big(\sum_{A\subseteq G_i}\frac{m(G_i)}{2^{|G_i|}-1}\Big)\times\Big(\sum_{B\subseteq H_i}\frac{m(H_i)}{2^{|H_i|}-1}\Big)\\
&=m_F(A)\times m_F(B).
\end{aligned}
\end{equation}
We know that Shannon entropy satisfies additivity, and since the consistency between the joint FBBPA and the product of the original FBBPAs has been proved, it is easy to conclude that FB entropy satisfies additivity; Example \ref{eee} shows the calculation process. So the FB entropy satisfies Proposition \ref{p4}.
\begin{example}\label{eee}
For two independent BPAs under $2$-element frames $X$ and $Y$, $\mathbb{B}(2^X)=\{m(x_1)=m(x_2)=\frac{1}{5},m(x_1 x_2)=\frac{3}{5}\}$ and $\mathbb{B}(2^Y)=\{m(y_1)=\frac{1}{10},m(y_2)=\frac{3}{5},m(y_1 y_2)=\frac{3}{10}\}$, so the joint BPA is
\begin{equation}
\begin{aligned}
&\mathbb{B}(2^Z)=\mathbb{B}(2^X)\times \mathbb{B}(2^Y)=\{m(z_{11})=\frac{1}{50},m(z_{12})=\frac{6}{50},m(z_{21})=\frac{1}{50},m(z_{22})=\frac{6}{50},\\
&m(z_{11} z_{21})=\frac{3}{50},m(z_{12} z_{22})=\frac{18}{50},m(z_{11} z_{12})=\frac{3}{50},m(z_{21} z_{22})=\frac{3}{50},m(z_{11} z_{12} z_{21} z_{22})=\frac{9}{50}\}.
\end{aligned}
\end{equation}
According to Equation \ref{mzfe}, the joint FBBPA is
\begin{equation}
\begin{aligned}
&\mathbb{B}_F(2^Z)=\mathbb{B}_F(2^X)\times \mathbb{B}_F(2^Y)=\{m_F(z_{11})=\frac{4}{50},m_F(z_{12})=\frac{14}{50},m_F(z_{21})=\frac{4}{50},m_F(z_{22})=\frac{14}{50},\\
&m_F(z_{11} z_{21})=\frac{2}{50},m_F(z_{12} z_{22})=\frac{7}{50},m_F(z_{11} z_{12})=\frac{2}{50},m_F(z_{21} z_{22})=\frac{2}{50},m_F(z_{11} z_{12} z_{21} z_{22})=\frac{1}{50}\}.
\end{aligned}
\end{equation}
So the FB entropy of $\mathbb{B}(2^Z)$, $\mathbb{B}(2^X)$ and $\mathbb{B}(2^Y)$ satisfy that $E_{FB}(Z)=2.6787=1.5219+1.1568=E_{FB}(X)+E_{FB}(Y)$.
\end{example}
\qed
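Example \ref{eee} can be reproduced numerically. The sketch below builds the joint FBBPA by splitting each $m(G_i)m(H_i)$ over its product subsets, as in the proof; the Python set encoding and helper names are our own:

```python
from itertools import combinations, product
from math import log2

def nonempty_subsets(F):
    F = sorted(F)
    return [frozenset(c) for r in range(1, len(F) + 1) for c in combinations(F, r)]

def fbbpa(m):
    mf = {}
    for F, mass in m.items():
        w = mass / (2 ** len(F) - 1)
        for S in nonempty_subsets(F):
            mf[S] = mf.get(S, 0.0) + w
    return mf

def H(d):                                   # Shannon entropy, base 2
    return -sum(p * log2(p) for p in d.values() if p > 0)

mX = {frozenset({'x1'}): 0.2, frozenset({'x2'}): 0.2, frozenset({'x1', 'x2'}): 0.6}
mY = {frozenset({'y1'}): 0.1, frozenset({'y2'}): 0.6, frozenset({'y1', 'y2'}): 0.3}

# joint FBBPA: split m(G)*m(H) over the (2^|G|-1)(2^|H|-1) product subsets G' x H'
mfZ = {}
for G, mg in mX.items():
    for Hset, mh in mY.items():
        w = mg * mh / ((2 ** len(G) - 1) * (2 ** len(Hset) - 1))
        for Gp in nonempty_subsets(G):
            for Hp in nonempty_subsets(Hset):
                K = frozenset(product(Gp, Hp))
                mfZ[K] = mfZ.get(K, 0.0) + w

EX, EY, EZ = H(fbbpa(mX)), H(fbbpa(mY)), H(mfZ)
print(EX, EY, EZ)   # ≈ 1.5219, 1.1568, 2.6787
```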
\begin{proposition}[Subadditivity($\heartsuit$)]\label{p5}
Suppose $X$, $Y$ and $Z$ are discernment frameworks with $Z=X\times Y$. The total uncertainty measurements should satisfy
\begin{equation}
TUM(Z) \leq TUM(X)+TUM(Y).
\end{equation}
\end{proposition}
\textbf{\emph{Proof.}}
\begin{description}
\item[\textbf{Case1:}] If the BPAs of $X$ and $Y$ are independent, $E_{FB}(Z) = E_{FB}(X)+E_{FB}(Y)$ has been proven in Proposition \ref{p4}.
\item[\textbf{Case2:}] If the BPAs of $X$ and $Y$ are not independent, when they are combined into a joint BPA, they obtain information from each other's BPA, which can reduce the uncertainty of the joint BPA. So $E_{FB}(Z) < E_{FB}(X)+E_{FB}(Y)$ .
\end{description}
According to the $2$ Cases, we can prove that the FB entropy satisfies Proposition \ref{p5}.
\qed
\begin{proposition}[RB1($\heartsuit$)]\label{p7}
The calculation process of total uncertainty measurement cannot be too complicated.
\end{proposition}
\textbf{\emph{Proof.}}
For an $n$-element discernment framework, the computational complexities of Equations \ref{bfentropye} and \ref{mfe} are $\mathcal{O}(2^n)$ and $\mathcal{O}(n2^n)$ respectively. According to Table \ref{d1t1}, the computational complexity of JS entropy and SU is similar to that of FB entropy, and Yang and Han's method has higher complexity than them. Therefore, the complexity of FB entropy is within an acceptable range and satisfies Proposition \ref{p7}.
\qed
\begin{proposition}[RB2($\heartsuit$)]\label{p8}
The total uncertainty measurement can be divided into two parts that measure the discord and non-specificity respectively.
\end{proposition}
\textbf{\emph{Proof.}}
Different from other methods (JS entropy, SU measurement and Deng entropy), the discord and non-specificity of FB entropy cannot be separated from its expression directly, but these two measurements can be obtained in a more reasonable way. We define FB entropy from the process of PPT, so when the BPA is transformed by PPT, the assignment expresses discord only. Based on the above, the discord $E^{\mathcal{D}}_{FB}$ and non-specificity $E^{\mathcal{N}}_{FB}$ of $E_{FB}$ are defined as follows:
\begin{equation}
\begin{aligned}
&E^{\mathcal{D}}_{FB}=H(BetP)=H_j=-\sum_{\theta_i\in\Theta}BetP(\theta_{i})\log BetP(\theta_{i});\\
&E^{\mathcal{N}}_{FB}=E_{FB}-E^{\mathcal{D}}_{FB}=\sum_{\theta_i\in\Theta}BetP(\theta_{i})\log BetP(\theta_{i})-\sum_{F_i\in2^\Theta} m_{F}(F_i)\log m_{F}(F_i).
\end{aligned}
\end{equation}
The relationship of discord and non-specificity is shown in Figure \ref{f71}. Based on the above, the FB entropy satisfies Proposition \ref{p8}.
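The decomposition can be sketched as follows, using the BPA of Example \ref{eee} on frame $X$; the helper names are illustrative:

```python
from itertools import combinations
from math import log2

def fb_entropy(m):
    mf = {}
    for F, mass in m.items():
        w = mass / (2 ** len(F) - 1)
        for r in range(1, len(F) + 1):
            for S in combinations(sorted(F), r):
                mf[frozenset(S)] = mf.get(frozenset(S), 0.0) + w
    return -sum(p * log2(p) for p in mf.values() if p > 0)

def betp(m):
    """Pignistic transformation: m(F) is shared equally among the elements of F."""
    p = {}
    for F, mass in m.items():
        for x in F:
            p[x] = p.get(x, 0.0) + mass / len(F)
    return p

m = {frozenset({'a'}): 0.2, frozenset({'b'}): 0.2, frozenset({'a', 'b'}): 0.6}
discord = -sum(q * log2(q) for q in betp(m).values() if q > 0)   # H(BetP)
nonspec = fb_entropy(m) - discord
print(discord, nonspec)   # 1.0 and ≈ 0.5219
```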
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{f71.jpg}
\caption{The relationship of FB entropy and its discord $\&$ non-specificity measure}
\label{f71}
\end{figure}
\qed
\begin{proposition}[RB3($\heartsuit$)]\label{p9}
Total uncertainty measurement must be sensitive to changes in BPA.
\end{proposition}
\textbf{\emph{Proof.}}
Since the change from BPA to FBBPA is reversible, any change to the BPA corresponds to a change of the FBBPA. For different FBBPAs, Shannon entropy has been proved to be a sensitive measurement, so for any BPA, FB entropy is also sensitive to its changes. So FB entropy satisfies Proposition \ref{p9}. We use Example \ref{e3} to show the results intuitively.
\qed
\begin{example}\label{e3}
For discernment framework $X=\{x_1,x_2\}$, $m(x_1)$ and $m(x_2)$ change from $0$ to $1$ and satisfy $m(x_1)+m(x_2)\leqslant 1$.
Figures \ref{f74} and \ref{f75} show the change trends of discord and non-specificity, and Figures \ref{f72} and \ref{f73} show their relationship and top view.
\begin{figure}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f74.jpg}
\caption{Discord change trend in Example \ref{e3}}
\label{f74}
\end{minipage}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f75.jpg}
\caption{Non-specificity change trend in Example \ref{e3}}
\label{f75}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f73.jpg}
\caption{The relationship of TUM in Example \ref{e3}}
\label{f72}
\end{minipage}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f72.jpg}
\caption{The top view of TUM in Example \ref{e3}}
\label{f73}
\end{minipage}
\end{figure}
\end{example}
\begin{proposition}[RB4($\heartsuit$)]\label{p10}
The proposed method should have a corresponding model in theories more general than evidence theory.
\end{proposition}
\textbf{\emph{Proof.}}
FB entropy distributes mass functions uniformly to the power sets of their focal elements, because DSET uses the power set $2^n$ to express information. As a generalization of DSET, the DSmT \cite{smarandache2006advances} proposed by Dezert and Smarandache extends the power set $2^n$ to $U^n$. According to this idea, FB entropy can also measure the uncertainty of an assignment in DSmT, and only needs to uniformly distribute the mass functions to $U^n$ subsets. So FB entropy satisfies Proposition \ref{p10}.
\qed
In this section, the $10$ requirements evaluate the general properties of FB entropy and prove its rationality. In particular, in terms of additivity, no previous total uncertainty measurement could complete the additivity verification on the basis of a joint BPA. In the rest of the paper, we further show the advantages of FB entropy.
\section{Advantages of FB entropy}\label{fbentropy2}
We make an intuitive comparison through several examples to show the advantages of FB entropy, which are not available in previous methods.
\subsection{View from combination rules: combination interval consistency}
The combination rule of Dempster (CRD) \cite{dempster2008upper} and the disjunctive combination rule (DCR) \cite{smets1993belief} are the most widely used combination rules for normalized BPAs. For BPAs $\mathbb{B}_1(2^\Theta)$ and $\mathbb{B}_2(2^\Theta)$ under discernment framework $\Theta$, the CRD $\mathbb{B}_{1\oplus2}$ and DCR $\mathbb{B}_{1\circledtiny{$\cup$}2}$ are
\begin{equation}\label{crde}
m_{1\oplus 2}(F_i)=
\begin{cases}
K^{-1}\cdot \sum_{G_i\subseteq\Theta,H_i\subseteq\Theta}m_1(G_i)m_2(H_i)& \text{$G_i \cap H_i=F_i$}\\
0& \text{$F_i$ = $\varnothing$}
\end{cases},
\end{equation}
\begin{equation}
m_{1\circledtiny{$\cup$}2}(F_i)=\sum_{G_i\subseteq\Theta,H_i\subseteq\Theta,G_i \cup H_i=F_i}m_1(G_i)m_2(H_i),
\end{equation}
where $K=\sum_{G_i\cap H_i\neq\varnothing}m_1(G_i)m_2(H_i)$ is the normalization factor.
CRD in DSET corresponds to the cross product in probability theory. The Shannon entropy of the probability distribution after a cross product is smaller than that of all original distributions, which means that the uncertainty of the distribution is reduced after receiving new information. Intuitively, the uncertainty of the BPA after combination by CRD should also be reduced, so its total uncertainty measurement should satisfy $TUM(\mathbb{B}_{1\oplus2}) \leqslant \min\{TUM(\mathbb{B}_1),TUM(\mathbb{B}_2)\}$. DCR is a conservative combination rule. It assigns the mass functions of conflicting evidence to their union, which is bound to cause more uncertainty. Therefore, the total uncertainty measurement of the BPA after using DCR should be larger and satisfy $TUM(\mathbb{B}_{1\circledtiny{$\cup$}2}) \geqslant \max\{TUM(\mathbb{B}_1),TUM(\mathbb{B}_2)\}$. So the total uncertainty measurement should satisfy combination interval consistency, i.e., $\{TUM(\mathbb{B}_1),TUM(\mathbb{B}_2)\}\in[TUM(\mathbb{B}_{1\oplus2}),TUM(\mathbb{B}_{1\circledtiny{$\cup$}2})]$.

For $2$ BPAs $\mathbb{B}_1=\{m(a)=m(b)=\frac{1-\frac{i}{1000}}{2},m(ab)=\frac{i}{1000}\}$ and $\mathbb{B}_2=\{m(a)=0.1,m(b)=0.7,m(ab)=0.2\}$, when $\frac{i}{1000}$ increases from $0$ to $1$, Figures \ref{FB_combin} and \ref{Ed_combin} show the change trends of FB entropy and Deng entropy. From the figures, it can be concluded that FB entropy meets combination interval consistency but Deng entropy does not. So for this property, the measurement effect of FB entropy is better than that of Deng entropy.
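The interval can be checked for a single point of the family (we pick $i=500$, so $\mathbb{B}_1=\{m(a)=m(b)=0.25,m(ab)=0.5\}$); the helper names are illustrative:

```python
from itertools import combinations
from math import log2

def fb_entropy(m):
    mf = {}
    for F, mass in m.items():
        w = mass / (2 ** len(F) - 1)
        for r in range(1, len(F) + 1):
            for S in combinations(sorted(F), r):
                mf[frozenset(S)] = mf.get(frozenset(S), 0.0) + w
    return -sum(p * log2(p) for p in mf.values() if p > 0)

def crd(m1, m2):
    """Dempster's rule: conjunctive combination with conflict renormalization."""
    out, conflict = {}, 0.0
    for G, a in m1.items():
        for H, b in m2.items():
            I = G & H
            if I:
                out[I] = out.get(I, 0.0) + a * b
            else:
                conflict += a * b
    return {F: v / (1.0 - conflict) for F, v in out.items()}

def dcr(m1, m2):
    """Disjunctive rule: mass goes to the union of focal elements."""
    out = {}
    for G, a in m1.items():
        for H, b in m2.items():
            out[G | H] = out.get(G | H, 0.0) + a * b
    return out

B1 = {frozenset({'a'}): 0.25, frozenset({'b'}): 0.25, frozenset({'a', 'b'}): 0.5}
B2 = {frozenset({'a'}): 0.1, frozenset({'b'}): 0.7, frozenset({'a', 'b'}): 0.2}

lo, hi = fb_entropy(crd(B1, B2)), fb_entropy(dcr(B1, B2))
# combination interval consistency: both originals fall inside [lo, hi]
```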
\begin{figure}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.8\textwidth]{FB_combin.png}
\caption{The change trend of FB entropy}
\label{FB_combin}
\end{minipage}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{Ed_combin.png}
\caption{The change trend of Deng entropy}
\label{Ed_combin}
\end{minipage}
\end{figure}
\subsection{View from non-specificity: more rational measurement}
Non-specificity is a peculiar property of DSET, so analyzing its uncertainty reasonably is significant. Besides the most well-known Hartley entropy \cite{higashi1982measures}, Yang \textit{et al.} \cite{yang2016non} utilized belief intervals to measure non-specificity. In addition, common total uncertainty measurements can separate out non-specificity, as shown in Table \ref{tnon}. We evaluate these methods separately from qualitative and quantitative aspects.
\begin{table}[htbp!]
\caption{Non-specificity of common total uncertainty measurements}
\label{tnon}
\begin{center}
\small
\begin{tabular}{c|cccc}
\Xhline{1.4pt}
Methods& \tabincell{c}{JS entropy~\&\\Pal \textit{et al.}'s entropy} & SU measurement & Deng entropy &FB entropy \\
\hline
Non-specificity&$\sum_{F_i\in 2^\Theta}m(F_i)\log |F_i|$&$\sum_{\theta_i\in\Theta}(Pl(\theta_i)-Bel(\theta_i))$&$\sum_{F_i\in 2^\Theta}m(F_i)\log (2^{|F_i|}-1)$&\tabincell{c}{$\sum_{\theta_i\in\Theta}BetP(\theta_{i})\log BetP(\theta_{i})$\\$-\sum_{F_i\in2^\Theta} m_{F}(F_i)\log m_{F}(F_i)$}\\
\Xhline{1.4pt}
\end{tabular}
\end{center}
\end{table}
\textbf{Qualitative analysis:} SU measurement, JS entropy and other derivative methods \cite{Xue2021Interval} calculate the uncertainty of discord and non-specificity respectively and then add them up. In this way, discord and non-specificity are measured separately, which can reflect the relative uncertainty between different pieces of evidence. But for a single BPA, we cannot know the proportion of discord and non-specificity in its total uncertainty. So logically speaking, these two parts should be derived from the total uncertainty measurement, instead of using these $2$ parts to form the measurement. From this point of view, Pal \textit{et al.}'s entropy, Deng entropy and FB entropy are more reasonable.
\textbf{Quantitative analysis:} Consider a $4$-element discernment framework $X=\{a,b,c,d\}$ with $2$ pieces of evidence $\mathbb{B}_1(2^X)=\{m(ab)=m(bc)=m(cd)=m(ad)=\frac{1}{4}\}$ and $\mathbb{B}_2(2^X)=\{m(ab)=m(bc)=m(cd)=m(ad)=m(ac)=m(bd)=\frac{1}{6}\}$. As the non-specificity values in Table \ref{tnone} show, only the results of $\mathbb{B}_1(2^X)$ and $\mathbb{B}_2(2^X)$ measured by FB entropy are different. Although the belief intervals of their elements are all $[0,\frac{1}{2}]$, the probability ranges they can cover are not the same. For example, $\mathbb{B}_2(2^X)$ can produce the probability distribution $\{p(a)=\frac{1}{2},p(b)=\frac{1}{3},p(c)=\frac{1}{6}\}$, but $\mathbb{B}_1(2^X)$ can only reach $\{p(a)=\frac{1}{2},p(b)=\frac{1}{4},p(c)=\frac{1}{4}\}$. Only FB entropy can express this kind of difference, which proves its advantage in this aspect.
\begin{table}[htbp!]
\caption{Non-specificity of $\mathbb{B}_1(2^X)$ and $\mathbb{B}_2(2^X)$ }
\label{tnone}
\begin{center}
\small
\begin{tabular}{c|cccc}
\Xhline{1.4pt}
Methods& \tabincell{c}{JS entropy~\&\\Pal \textit{et al.}'s entropy} & SU measurement & Deng entropy &FB entropy \\
\hline
$\mathbb{B}_1(2^X)$ &$1$&$2$&$1.5850$&$2.8554$\\
\hline
$\mathbb{B}_2(2^X)$ &$1$&$2$&$1.5850$&$3.1133$\\
\Xhline{1.4pt}
\end{tabular}
\end{center}
\end{table}
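The first three columns of Table \ref{tnone} can be reproduced directly, which makes the indistinguishability of $\mathbb{B}_1(2^X)$ and $\mathbb{B}_2(2^X)$ under those measures explicit; the helper names are ours:

```python
from math import log2

B1 = {frozenset(s): 0.25 for s in ('ab', 'bc', 'cd', 'ad')}
B2 = {frozenset(s): 1 / 6 for s in ('ab', 'bc', 'cd', 'ad', 'ac', 'bd')}
theta = set('abcd')

def hartley_ns(m):                 # JS entropy / Pal et al.: sum m(F) log2|F|
    return sum(v * log2(len(F)) for F, v in m.items())

def deng_ns(m):                    # Deng entropy: sum m(F) log2(2^|F|-1)
    return sum(v * log2(2 ** len(F) - 1) for F, v in m.items())

def su_ns(m, theta):               # SU measurement: sum of belief-interval widths
    total = 0.0
    for x in theta:
        bel = sum(v for F, v in m.items() if F <= {x})
        pl = sum(v for F, v in m.items() if x in F)
        total += pl - bel
    return total

# all three coincide on B1 and B2, so none of them can tell the two BPAs apart
print(hartley_ns(B1), hartley_ns(B2))   # 1.0 1.0
print(su_ns(B1, theta), su_ns(B2, theta))   # 2.0 2.0
```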
\subsection{View from physical model: Stronger ability to express information than Shannon entropy}
In Example \ref{e2}, we used a physical model to show the difference between the maximum FB entropy and the maximum Shannon entropy. It was proved that the information volume of the maximum FB entropy under an $n$-element discernment framework is equivalent to the information volume of the maximum Shannon entropy over a random variable with $2^n-1$ events. Expanding the cases in Example \ref{e2} further demonstrates the advantages of FB entropy.
\begin{description}
\item[\textbf{Q}:] How many times inquiring can we find the champion at least?
\item[\textbf{Case1}:]We don’t know the exact number of champions.
\item[\textbf{Case2}:]We don't know the exact number of champions, but we know that the champions are in a certain half.
\item[\textbf{Case3}:]We don't know the exact number of champions, but we know that the champions are in a quarter of the population.
\item[\textbf{Case4}:]We don't know the exact number of champions, but we know that the champions are in a $\frac{1}{64}$ of the population.
\end{description}
Among them, the BPA and probability distribution of Case $1$ and Case $4$ have been shown in Example \ref{e2}, and Case $4$ can also be described as knowing there is only one champion. Case $2$ and Case $3$ cannot be expressed by a single probability distribution, but the BPAs $\mathbb{B}_{Case~2}=\{m(1\cdots 32)=m(33\cdots 64)=\frac{1}{2}\}$ and $\mathbb{B}_{Case~3}=\{m(1\cdots 16)=m(17\cdots 32)=m(33\cdots 48)=m(49\cdots 64)=\frac{1}{4}\}$ can express them. The FB entropy of Case $2$ is $E_{FB}(\mathbb{B}_{Case~2})\approx 33$, and in reality, we also need $33$ inquiries to find all champions: first, we inquire once to find which half contains all champions, and then we find the champions by inquiring about all $32$ teams in that half. For Case $3$, $E_{FB}(\mathbb{B}_{Case~3})\approx 18$, which is also consistent with the physical model. Based on the above, we can express the relationship of the $4$ Cases in Figure \ref{ff}, which unifies the process of BPA degenerating to a probability distribution and of FB entropy degenerating to Shannon entropy. From the superiority of BPA compared to a probability distribution, the superiority of FB entropy compared to Shannon entropy is inferred. In this respect, FB entropy is better than all existing belief entropies.
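The inquiry counts follow from a closed form: when the BPA puts mass $\frac{1}{k}$ on each of $k$ disjoint focal sets of size $s$, the FBBPA is uniform over $k(2^s-1)$ subsets. A short sketch (the function name is illustrative):

```python
from math import log2

def fb_entropy_equal_blocks(k, s):
    """E_FB for a BPA with mass 1/k on each of k disjoint focal sets of size s.
    Each block splits uniformly over its 2^s - 1 nonempty subsets, so the FBBPA
    is uniform over k*(2^s - 1) sets and E_FB = log2(k * (2^s - 1))."""
    return log2(k * (2 ** s - 1))

print(round(fb_entropy_equal_blocks(1, 64)))   # Case 1: 64 inquiries
print(round(fb_entropy_equal_blocks(2, 32)))   # Case 2: 33 inquiries
print(round(fb_entropy_equal_blocks(4, 16)))   # Case 3: 18 inquiries
print(round(fb_entropy_equal_blocks(64, 1)))   # Case 4: 6 inquiries (Shannon)
```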
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.85\textwidth]{FB_phsical2.png}
\caption{The relationship of $4$ Cases.}
\label{ff}
\end{figure}
\section{Conclusion}
This paper utilizes fractals to simulate the process of pignistic probability transformation, which shows the process of information loss in the transformation more intuitively. Based on this process, we propose the fractal-based belief entropy to measure the BPA's total uncertainty. After evaluation against the $10$ requirements, we prove that FB entropy can reasonably measure the uncertainty of BPA. In addition, we prove its superiority from $3$ aspects: combination rules, non-specificity and the physical model. Based on the above, the contributions of the paper are summarized as follows:
\begin{itemize}
\item[$\bullet$] \textbf{Process of probability transformation:} We consider probability transformation as a process, and propose a possible transformation process for PPT and PMT. This idea provides a new perspective for evaluating probability transformations, so that result orientation is no longer the only evaluation criterion.
\item[$\bullet$] \textbf{Total uncertainty measurement of BPA:} Based on fractals, we propose the FBBPA and substitute it into Shannon entropy to define the FB entropy. After evaluation, FB entropy can not only measure the total uncertainty of BPA reasonably, but also satisfies additivity, which realizes the correspondence with Shannon entropy.
\item[$\bullet$] \textbf{Combination rule interval:} Based on the CRD and the DCR, we propose combination interval consistency and prove that FB entropy is better than Deng entropy in this property.
\item[$\bullet$] \textbf{Discord and non-specificity measurement:} Since PPT is the end point of the fractal method, we substitute it into Shannon entropy as the discord measurement. Through qualitative and quantitative analysis, we prove that FB entropy is superior to all previous uncertainty measurement methods in this aspect.
\item[$\bullet$] \textbf{Physical model consistency:} As a generalization of Shannon entropy, FB entropy can not only degenerate into Shannon entropy when the input is a probability distribution, but also corresponds to Shannon entropy in the physical model of maximum entropy.
\end{itemize}
In future research, this work can be further extended from three directions. (1) In terms of probability transformation, more probability transformation methods can be simulated based on the proposed process model. (2) In terms of uncertainty measurement, this fractal-based measurement method can be applied to more uncertainty theories. (3) In DSET, FB entropy can be applied to solve practical problems such as information fusion, decision making and fault diagnosis.
\section*{Declaration of interests}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section*{Acknowledgment}
The work is partially supported by National Natural Science Foundation of China (Grant No. 61973332), JSPS Invitational Fellowships for Research in Japan (Short-term). Thanks to the reviewers' valuable comments, which significantly improved the quality of the paper. Thanks to colleagues in the Information Fusion and Intelligent System Laboratory for their help and support.
\section{Introduction}
\label{intro}
Dempster-Shafer evidence theory (DSET) \cite{dempster2008upper,shafer1976mathematical}, as a generalization of probability theory (PT), expresses information by interval probabilities. For an $n$-element mutually exclusive set, a probability distribution expresses its information by $n$ probabilities, while in DSET, $2^n$ mass functions on focal elements form the basic probability assignment (BPA) to express information. BPA utilizes higher-dimensional data than a probability distribution, so it has the ability to express more information. Relying on the above advantages, DSET is widely applied in information fusion \cite{Xiong2021InformationSciences,yang2013discounted,pan2020association}, data de-combination \cite{fan2021combination}, reasoning \cite{liao2020deng}, and reliability evaluation \cite{gao2021NET}. Because the power set corresponds to combinations \cite{Song2021powerset}, subsets formed by permutations are not considered. Smarandache and Dezert \cite{smarandache2006advances} extended $2^n$ to $U^n$ to propose the Dezert-Smarandache theory (DSmT), which can express more generalized information than DSET, and Xiao \cite{Xiao2020CEQD,xiao2021caftr} extended BPA to the complex number field to predict interference effects in a more proper way.
In PT, Shannon entropy \cite{shannon2001mathematical} can express the uncertainty of a probability distribution, but how to measure the total uncertainty of BPA is still an open issue. BPA can be seen as formed by two properties: discord and non-specificity \cite{jousselme2006measuring}. Discord represents the conflict between elements in the framework, and non-specificity, as a difference between BPA and probability distribution, represents the uncertainty of the distribution itself. To facilitate understanding, we divide common BPA measurement methods into $3$ types.
\begin{description}
\item[\textbf{Local measurement: measure a certain characteristic of BPAs}] Hohle \cite{hohle1982entropy} and Yager \cite{yager1983entropy} utilized the belief function and the plausibility function, respectively, to calculate the confusion and dissonance of BPAs. Hartley entropy was proposed to express the non-specificity of a BPA in \cite{higashi1982measures}. Klir \textit{et al.} measured the discord of a BPA in \cite{klir1990uncertainty}. Harmanec \textit{et al.} \cite{AU} proposed a method to measure the aggregate uncertainty (AU) of a BPA. Jousselme \textit{et al.} \cite{jousselme2006measuring} substituted the pignistic probability transformation into Shannon entropy to propose the ambiguity measure (AM). We proposed the belief eXtropy to measure the negation degree \cite{zhou2021eXtropy}.
\item[\textbf{Splitting method: measure uncertainty after splitting the mass functions}] Pal \cite{pal1993uncertainty} first divided the mass function of each focal element with $n$ elements into $n$ parts and then substituted them into Shannon entropy. Building on this, Deng \cite{Deng2020ScienceChina,Deng2020InformationVolume} split the mass functions over their power sets, which can represent more uncertainty, and Abell{\'a}n \textit{et al.} evaluated Deng entropy and its extensions in \cite{abellan2017analyzing,moral2020critique}. These two methods satisfy non-negativity, monotonicity, probability consistency, and additivity. However, their maximum entropy distribution is not the vacuous BPA, which is counter-intuitive.
\item[\textbf{Belief functions: measure uncertainty based on belief functions}] Due to the limitations of the BPA itself in expressing information, many measurements use belief functions to express its uncertainty. Wang and Song \cite{wang2018uncertainty} utilized elements' belief (Bel) functions and plausibility (Pl) functions to measure discord and non-specificity, respectively (hereinafter the SU measurement). Jirou{\v{s}}ek and Shenoy combined the Pl function with Hartley entropy to propose a new entropy (hereinafter JS entropy) \cite{jirouvsek2018new}, and they also proposed a decomposable entropy based on the commonality (q) function \cite{jirouvsek2020properties}. Yang and Han \cite{yang2016new} proposed a novel uncertainty measure based on the distance between elements' Bel functions and Pl functions.
\end{description}
There are ten requirements in total for total uncertainty measurement (UM) methods of BPAs in \cite{klir2013uncertainty,abellan2008requirements}. Although some of them are controversial, they can evaluate UM methods comprehensively.
The elements in the framework are mutually exclusive, so in decision-making, how to transform a BPA into a probability distribution is significant. The pignistic probability transformation (PPT) is used in the decision layer of the transferable belief model (TBM) \cite{smets2005decision}; it distributes the mass functions of multi-element focal elements equally, under the principle of keeping the maximum uncertainty. Cobb and Shenoy \cite{cobb2006plausibility} proposed the plausibility transformation method (PTM) based on the elements' Pl functions, which has Dempster combination rule consistency. There are many other probability transformation methods \cite{CHEN2021104438}, and Han \textit{et al.} evaluated them in \cite{han2015evaluation}. Probability transformation can also be regarded as a loss of non-specificity. Previous methods only gave the results of the transformation and did not describe the process of generating the probability; therefore, their reasonability can only be evaluated from the results, which is not comprehensive enough.
In this paper, we propose a possible PPT generation process based on fractals, and based on this process, we propose a new belief entropy called fractal-based belief (FB) entropy to measure the total uncertainty of a BPA. After evaluation and comparison, we show that FB entropy meets the requirements in numerical calculation and has a corresponding physical model. The contributions of this paper are summarized as follows: (1) we first use the fractal idea to simulate the process of probability transformation; (2) FB entropy not only measures the uncertainty of a BPA reasonably but also has a corresponding physical model, which is superior to existing belief entropies; (3) we do not deliberately separate discord and non-specificity when defining FB entropy, yet the two parts of uncertainty can be separated from the fractal result, and their proportions differ for different BPAs, which is more intuitive. The structure of this paper is as follows:
\begin{description}
\item[$\bullet$]Section \ref{preliminaries} introduces the basic concepts of DSET, common probability transformation methods, and classical uncertainty measurements of BPAs.
\item[$\bullet$]In Section \ref{process}, we simulate the process of PPT based on the fractal idea and give it a possible explanation.
\item[$\bullet$]Section \ref{fbentropy1} is the core of the paper. According to the process of PPT, we propose FB entropy. After evaluating its properties, we prove that FB entropy measures a BPA rationally.
\item[$\bullet$]Some unique advantages of FB entropy are shown in Section \ref{fbentropy2} by comparison with common methods.
\end{description}
\section{Preliminaries}
\label{preliminaries}
\subsection{Dempster-Shafer evidence theory}
\begin{definition}[BPA]\label{bpa}\cite{dempster2008upper}
For a finite set $\Theta$ with $n$ elements, it can be written as $\Theta=\{\theta_{1},\dots,\theta_{n}\}$, which is called a discernment framework in DSET. The mass functions of elements in $2^\Theta$ can be written as $\mathbb{B}(2^{\Theta})=\{m(\varnothing),m(\theta_{1}),\dots, m(\theta_{n}), m(\theta_{1}\theta_{2}),\dots, m(\theta_{1}\dots\theta_{n})\}$, and $m(F_i)$ satisfies
\begin{equation}
m(\varnothing)=0;~~~~~~~\sum_{F_{i}\in 2^\Theta} m(F_{i})=1;~~~~~~m(F_{i})\geqslant 0,
\end{equation}
where $\mathbb{B}(2^{\Theta})$ is the basic probability assignment (BPA), and each $F_i$ is called a focal element.
\end{definition}
This paper only discusses normalized BPAs, so $m(\varnothing)=0$. In addition to mass functions, belief functions can also store the information of a BPA.
\begin{definition}[Belief functions]\label{beliefinterval}\cite{shafer1976mathematical}
For an $n$-element discernment framework $\Theta$, with its BPA $\mathbb{B}(2^{\Theta})$, the belief (Bel) function, plausibility (Pl) function, and commonality (q) function of focal elements are defined as
\begin{equation}
\begin{aligned}
&Bel(F_{i})=\sum_{G_{i}\subseteq F_{i}}m(G_{i})=1-Pl(\overline{F_{i}}),\\
&Pl(F_{i})=\sum_{G_{i}\cap F_{i} \neq \varnothing~and~G_{i}\subseteq \Theta}m(G_{i})=1-Bel(\overline{F_{i}}),\\
&q(F_{i})=\sum_{F_i\subseteq G_i}m(G_i).
\end{aligned}
\end{equation}
It is obvious that $Bel(F_{i})\leqslant Pl(F_{i})$, and the belief interval of focal element $F_{i}$ is $[Bel(F_{i}),~Pl(F_{i})]$.
\end{definition}
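For concreteness, the three functions in Definition \ref{beliefinterval} can be computed directly. The following is a minimal Python sketch (our own illustration, not part of the original theory), with focal elements represented as frozensets:

```python
# Sketch of Bel, Pl, and q from Definition 2. Focal elements are
# frozensets; a BPA is a dict mapping focal elements to masses.
def bel(bpa, F):
    # Bel(F): total mass of focal elements contained in F.
    return sum(m for G, m in bpa.items() if G <= F)

def pl(bpa, F):
    # Pl(F): total mass of focal elements intersecting F.
    return sum(m for G, m in bpa.items() if G & F)

def q(bpa, F):
    # q(F): total mass of focal elements containing F.
    return sum(m for G, m in bpa.items() if F <= G)

# Illustrative BPA on the frame {a, b}: m(a)=0.2, m(b)=0.4, m(ab)=0.4.
bpa = {frozenset('a'): 0.2, frozenset('b'): 0.4, frozenset('ab'): 0.4}
```

On this BPA, $Bel(a)=0.2$ and $Pl(a)=0.6$, and the duality $Pl(F)=1-Bel(\overline{F})$ can be verified directly.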
The functions in Definitions \ref{bpa} and \ref{beliefinterval} are usually used to calculate the uncertainty of information in DSET. Next, some common probability transformation methods are shown.
\subsection{Common probability transformation methods}
\begin{definition}[PPT] \cite{smets2005decision}
\label{PPT}
For an $n$-element discernment framework $\Theta$ with BPA $\mathbb{B}(2^{\Theta})$, the pignistic probability transformation (PPT), denoted $BetP(\theta_i)$, is defined as
\begin{equation}\label{PPTe}
BetP(\theta_i)=\sum_{\theta_{i} \in F_{i}~and~F_{i}\in 2^\Theta} \frac{m(F_i)}{|F_{i}|} ,
\end{equation}
where $|F_{i}|$ is the cardinality of focal element $F_{i}$.
\end{definition}
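Equation \ref{PPTe} admits an equally direct sketch (again our own illustration, not the paper's code): each focal element spreads its mass evenly over its elements.

```python
def betp(bpa):
    # Pignistic probability transformation: each focal element F shares
    # its mass m(F) equally among its |F| elements.
    p = {}
    for F, m in bpa.items():
        for theta in F:
            p[theta] = p.get(theta, 0.0) + m / len(F)
    return p

# Illustrative BPA on {a, b}: m(a)=0.2, m(b)=0.4, m(ab)=0.4.
bpa = {frozenset('a'): 0.2, frozenset('b'): 0.4, frozenset('ab'): 0.4}
p = betp(bpa)   # BetP(a) = 0.2 + 0.4/2 = 0.4, BetP(b) = 0.6
```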
\begin{definition}[PTM]\cite{cobb2006plausibility}
\label{PPF}
For an $n$-element discernment framework $\Theta$ with BPA $\mathbb{B}(2^{\Theta})$, the plausibility transformation method (PTM), denoted $PnPl(\theta_i)$, is defined as
\begin{equation}
PnPl(\theta_i)=\frac{Pl(\theta_{i})}{\sum_{j=1}^{n}Pl(\theta_{j})}.
\end{equation}
\end{definition}
Besides PTM, other probability transformation methods are specializations of PPT, i.e., the support of $m(\theta_i)$ plus the support degree of multi-element focal elements for $\theta_i$. Though PTM does not satisfy the upper and lower probability requirement, it is the only method that satisfies Dempster combination rule consistency \cite{han2015evaluation}.
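The PTM of Definition \ref{PPF} can be sketched by normalizing the singleton Pl values (our own illustration, with an assumed frame $\{a,b\}$):

```python
def pnpl(bpa, frame):
    # Plausibility transformation: Pl of each singleton, normalized.
    pl1 = {t: sum(m for F, m in bpa.items() if t in F) for t in frame}
    total = sum(pl1.values())
    return {t: v / total for t, v in pl1.items()}

# Illustrative BPA on {a, b}: m(a)=0.2, m(b)=0.4, m(ab)=0.4.
bpa = {frozenset('a'): 0.2, frozenset('b'): 0.4, frozenset('ab'): 0.4}
p = pnpl(bpa, 'ab')   # Pl(a)=0.6, Pl(b)=0.8 -> PnPl(a)=3/7, PnPl(b)=4/7
```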
\subsection {Classical uncertainty measurements (UM) of BPA}
\begin{definition}[UM]\label{um}
For a discernment framework $\Theta=\{\theta_{1},\theta_{2},\dots,\theta_{n}\}$ with BPA $\mathbb{B}(2^{\Theta})$, its PPT is $ P_{\mathbb{B}}(\theta_{i})$ and its PTM is $Pl\_ P_{m}(\theta_{i})$. Common uncertainty measurements of BPAs and their maximum entropy distributions are shown in Table \ref{d1t1}.
\newgeometry{left=2cm, right=2cm, top=2cm, bottom=2cm}
\begin{table*}[htbp!]\footnotesize
\centering
\begin{adjustbox}{angle=90}
\begin{tabular}{ccccc}
\Xhline{1.4pt}
Methods & Expression &\tabincell{c}{Maximum distribution}&Maximum& Remark \\
\Xhline{1.4pt}
\tabincell{l}{Ambiguity measure\cite{jousselme2006measuring}}&\tabincell{l}{$H_{j}=-\sum_{i=1}^{n}P_{\mathbb{B}}^{\Theta}(\theta_{i})log(P_{\mathbb{B}}^{\Theta}(\theta_{i}))$}&$P_{\mathbb{B}}^{\Theta}(\theta_{i})=\frac{1}{|\Theta|}$&$\log (|\Theta|)$&\tabincell{l}{Elements;\\Cardinality;\\Mass function}\\
\hline
Confusion measurement \cite{hohle1982entropy}& $C_{H}=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log Bel(F_{i})$&$m({\theta_{i}})=\frac{1}{|\Theta|}$&$\log (|\Theta|)$& \tabincell{l}{Mass function;\\Bel function} \\
\hline
Dissonance measurement \cite{yager1983entropy}&$E_Y=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log Pl(F_{i})$& \tabincell{l}{$m(F_i)=\frac{1}{K}, \forall 1 \rightarrow K,$\\$ \{F_1\}\cap \cdots \cap \{F_K\}= \varnothing $}& $\log (|\Theta|)$ &\tabincell{l}{Mass function;\\Pl function}\\
\hline
Hartley entropy \cite{higashi1982measures}& $E_{H}=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log |F_{i}|$ &$m(\Theta)=1$&$\log (|\Theta|)$& \tabincell{l}{Mass function; \\Cardinality} \\
\hline
Discord measurement \cite{klir1990uncertainty}& \tabincell{l}{$S_{KP}=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i}) \log \sum_{G_{i}\in 2^{\Theta}}m(G_{i})\frac{|F_{i}\cap G_{i}|}{|G_{i}|}$} &$m({\theta_{i}})=\frac{1}{|\Theta|}$&$\log (|\Theta|)$& \tabincell{l}{Mass function;\\ Cardinality} \\
\hline
\tabincell{l}{Aggregate uncertainty \\(AU) measurement \cite{AU}}& $argmax_{\mathcal{P}}[-\sum_{i=1}^{n}p(\theta_{i})\log p(\theta_{i})]$&$BetP(\theta_{i})=\frac{1}{|\Theta|}$&$\log (|\Theta|)$& \tabincell{l}{Mass function}\\
\hline
\tabincell{l}{Pal \textit{et al.}'s entropy \cite{pal1993uncertainty}}&$E_{p}=-\sum_{F_{i}\in 2^{\Theta}} m(F_{i}) \log \frac{m(F_{i})}{{|F_{i}|}}$&$m(F_i)=\frac{|F_i|}{|\Theta|\cdot2^{|\Theta|-1}}$&$\log (|\Theta|\cdot 2^{|\Theta|-1})$&\tabincell{l}{Mass function;\\Cardinality}\\
\hline
Deng entropy\cite{Deng2020ScienceChina}&$E_{d}=-\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log \frac{m(F_{i})}{2^{|F_{i}|}-1}$&\tabincell{l}{$m(F_{i})=\frac{2^{|F_{i}|}-1}{\sum_{G_{i}\in 2^{\Theta}} (2^{|G_{i}|}-1)}$}&$\log (3^{|\Theta|}-2^{|\Theta|})$&\tabincell{l}{Mass function;\\Power set}\\
\hline
SU measurement \cite{wang2018uncertainty}& \tabincell{l}{$SU=\sum_{\theta_i\in\Theta}[-\frac{Pl(\theta_{i})+Bel(\theta_{i})}{2}\log\frac{Pl(\theta_{i})+Bel(\theta_{i})}{2}+Pl(\theta_{i})-Bel(\theta_{i})]$}& \tabincell{l}{$Bel(\theta_{i})=0; $\\$Pl(\theta_{i})=1$}&$|\Theta|$&\tabincell{l}{Bel function;\\Pl function;\\Elements}\\
\hline
JS entropy \cite{jirouvsek2018new}&\tabincell{l}{$H_{JS}=\sum_{F_{i}\in 2^{\Theta}}m(F_{i})\log(|F_{i}|)-\sum_{i=1}^{n}Pl\_ P_{m}(\theta_{i})\log Pl\_ P_{m}(\theta_{i})$}&$m(\Theta)=1$&$2\log (|\Theta|)$&\tabincell{l}{PTM;\\Hartley entropy;\\Elements}\\
\hline
Decomposable entropy \cite{jirouvsek2020properties}&$H_q=\sum_{F_i\in 2^\Theta}(-1)^{|F_i|}q(F_i)\log q(F_i)$&NaN&NaN&\tabincell{l}{q function;\\Focal element}\\
\hline
Yang and Han's method \cite{yang2016new}&$TU^l=1-\frac{1}{n}\cdot\sqrt{3}\sum_{\theta_i\in\Theta}d^l([Bel(\theta_i),Pl(\theta_i)],[0,1])$&$m(\Theta)=1$&$1$&\tabincell{l}{Bel function;\\Pl function;\\Distance}\\
\Xhline{1.4pt}
\end{tabular}
\end{adjustbox}
\caption{Classical and novel uncertainty measurements of BPAs. We can find that among all these entropies, only JS entropy satisfies the intuitive maximum entropy distribution.}
\label{d1t1}
\end{table*}
\restoregeometry
\end{definition}
\section{The process of probability transformation based on fractal} \label{process}
\subsection{Simulating probability transformation from fractal perspective}
Even though a BPA can express more information by assigning mass to multi-element focal elements, in reality, all we observe are probability distributions. So how to reasonably transform a BPA into a probability distribution is the key to combining BPAs with practical applications. PPT serves as the decision-making layer in the TBM and has wide applications. We propose a process for the PPT according to fractals, assuming that the result of PPT is generated under the action of time. For the $2$-element discernment framework $X=\{A,B\}$, the process of the BPA transforming into a probability distribution is shown in Figure \ref{f1}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{f1.png}
\caption{The left part of Figure \ref{f1} shows that the mass function of $AB$ is split to $A$ and $B$ in different ways over time. When $n\rightarrow \infty$, the BPA transforms into a probability distribution. The right part shows one step of the transformation in a unit time, so the process of transformation can be viewed as $AB$ continuously splitting itself into its own power set $\{A, B, AB\}$.}
\label{f1}
\end{figure}
Self-similarity is a basic property of fractal theory: in the process of a fractal, the whole and its parts are similar. To show this property more clearly, Figure \ref{f2} illustrates the splitting process.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{f2.png}
\caption{The small triangle formed by the new split $AB^{n-1}$ and the large triangle of the entire split satisfy self-similarity.}
\label{f2}
\end{figure}
The entropy change over time due to the fractal geometry corresponds to information growth through scale refinement. Wen \textit{et al.} also demonstrated this point in their work on the information dimension \cite{wen2021invited,Gao2021Information}. As shown in Figures \ref{f1} and \ref{f2}, as the number of splittings increases, before the newly generated $A$ and $B$ are fused with the original $A$ and $B$, the overall belief entropy increases, which conforms to the idea above. The new BPA generated after the fusion means the system gains new knowledge (the splitting method of the original BPA), and the overall belief entropy remains unchanged or decreases, which also conforms to information theory.
\subsection{The process of PPT}
For a given BPA, when the probability transformation proceeds without outside knowledge, the PPT is the most intuitive: it uses an even splitting method to preserve the largest uncertainty of the information. According to Equation \ref{PPTe} and Figure \ref{f1}, the transformation in each unit time allocates equal mass to subsets of the same cardinality, and the probability obtained at the end of the iteration must be the PPT. Example \ref{ee1} shows the differences between different allocations.
\begin{example}
\label{ee1}
Given a discernment framework $X=\{a,b,c\}$ with BPA $m(X)=1$, the changes of the mass functions after each splitting are as follows,
\begin{equation}
\begin{aligned}
&m^{n}(F_{i})=m^{n-1}(F_{i})+\frac{1}{p}m^{n-1}(G_{i})+\frac{1}{q}m^{n-1}(H_{i}) \\
&m^{n}(G_{i})=(1-\frac{2}{p})m^{n-1}(G_{i})+\frac{1}{q}m^{n-1}(H_{i})\\
&m^{n}(H_{i})=(1-\frac{6}{q})m^{n-1}(H_{i}),
\end{aligned}
\end{equation}
where $p\geqslant 3$, $q\geqslant 7$, $|F_{i}|=1,~|G_{i}|=2,~|H_{i}|=3$, and $m^{n}(F_{i})$ denotes the mass function of $F_{i}$ after $n$ splittings. Because a $3$-element discernment framework has three $2$-element focal elements and one $3$-element focal element, $p$ and $q$ satisfy $p+4=q$. When $p$ and $q$ take different values, the change of the mass functions with the splitting process is shown in Figure \ref{nf1}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{nf1.png}
\caption{Regardless of the values of $p$ and $q$, as the number of splits increases, the BPA $m(X)=1$ is eventually transformed into the uniformly distributed probability $m(a)=m(b)=m(c)=\frac{1}{3}$, which is the same as the result of PPT.}
\label{nf1}
\end{figure}
Hartley entropy \cite{higashi1982measures} represents the non-specificity in a BPA. When Hartley entropy is $0$, the BPA degenerates into a probability distribution. As shown in Figure \ref{fh1}, in the process of the BPA transforming into a probability distribution, the Hartley entropy of the BPA gradually decreases from its maximum value to $0$.
\begin{figure}[hbtp!]
\centering
\includegraphics[width=0.95\textwidth]{fh1.png}
\caption{The trend of Hartley entropy in Example \ref{ee1}.}
\label{fh1}
\end{figure}
\end{example}
According to the above description, the PPT process has been shown in detail; the transformation process differs for different scales of unit-time division, but the result always maintains a uniform allocation over the elements.
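The even-splitting process can be simulated numerically. In the sketch below (our own illustration, not code from the paper), each focal element repeatedly distributes its mass equally over its $2^{|F|}-1$ nonempty subsets, itself included; iterating this unit-time step drives the mass onto the singletons and converges to the PPT.

```python
from itertools import combinations

def nonempty_subsets(F):
    # All 2^|F| - 1 nonempty subsets of a focal element F.
    return [frozenset(c) for r in range(1, len(F) + 1)
            for c in combinations(sorted(F), r)]

def split_once(bpa):
    # One unit-time step: every focal element shares its mass equally
    # among its nonempty subsets (including itself).
    out = {}
    for F, m in bpa.items():
        subs = nonempty_subsets(F)
        for S in subs:
            out[S] = out.get(S, 0.0) + m / len(subs)
    return out

bpa = {frozenset('abc'): 1.0}   # vacuous BPA m(X)=1 on X={a,b,c}
for _ in range(60):             # iterate the unit-time split
    bpa = split_once(bpa)
# Mass on multi-element focal elements vanishes; each singleton
# approaches 1/3, matching the PPT of m(X)=1.
```

One step of this map is exactly the even allocation discussed above, and each step preserves the pignistic probabilities of the singletons, which is why the limit is the PPT.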
\subsection{Discussion of the process of probability transformation}
Besides the PPT, other methods can also be written as a fractal process if they satisfy the upper and lower probability requirement. However, other methods only give the calculation of the results, and result-oriented inference cannot accurately simulate the transformation process, so we do not discuss them here. Although PTM cannot be written in the form of a split mass function, it can be seen as a continuous fusion with the uniform focal element assignment $m(F_i)=\frac{1}{2^{|\Theta|}-1}$. Example \ref{eppf} shows the transformation process of PTM in Definition \ref{PPF}.
\begin{example}\label{eppf}
Given a discernment framework $X=\{a,b\}$ with BPA $\mathbb{B}(2^{X})=\{m(a)=0.2,m(b)=0.4,m(ab)=0.4\}$. Based on Definition \ref{PPF}, the PTM of $\mathbb{B}(2^{X})$ is $\{PnPl(a)=\frac{3}{7},PnPl(b)=\frac{4}{7}\}$. If we continually fuse $\mathbb{B}(2^{X})$ with the uniform BPA $\{m(a)=\frac{1}{3},m(b)=\frac{1}{3},m(ab)=\frac{1}{3}\}$ by the Dempster combination rule, the results are shown in Figure \ref{fppf}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{fppf.png}
\caption{With the continuous fusion of the uniform BPA in Example \ref{eppf}, $\mathbb{B}(2^{X})$ eventually transforms into $PnPl$.}
\label{fppf}
\end{figure}
\end{example}
In this section, we presented the implementation process of the existing main probability transformation methods. For newly proposed probability transformation methods, rationality can be verified according to the process ideas given in this section. More importantly, the uncertainty of a BPA can be measured by using the intermediate quantity of its transformation process. The specific method is given in the next section.
\section{Fractal-based belief entropy}\label{fbentropy1}
A new belief entropy called fractal-based belief (FB) entropy is proposed based on the process of PPT in this section. It can not only measure the uncertainty of BPAs, but its maximum entropy distribution also corresponds to an actual physical problem. In Example \ref{ee1}, when $p$ and $q$ take different values, the evolution speed of PPT per unit time differs. To better express the concept of ``uniformity'', we stipulate that each focal element is split equally into its power set per unit time.
\subsection{Fractal-based belief entropy}
\begin{definition}[FB entropy]\label{bfentropy}
For a discernment framework $\Theta=\{\theta_{1},\theta_{2},\dots,\theta_{n}\}$, its BPA is $\mathbb{B}(2^{\Theta})$, and the fractal-based (FB) entropy of $\mathbb{B}(2^{\Theta})$ is defined as
\begin{equation}\label{bfentropye}
E_{FB}=-\sum_{F_{i}\in 2^{\Theta}} m_{F}(F_{i})\log m_{F}(F_{i}),
\end{equation}
where
\begin{equation}\label{mfe}
m_{F}(F_{i})=\frac{m(F_{i})}{2^{|F_{i}|}-1}+\sum_{F_{i}\subsetneq G_{i}}\frac{m(G_{i})}{2^{|G_{i}|}-1}.
\end{equation}
The new set $\mathbb{B}_{F}(2^{\Theta})$ composed by $m_{F}(F_{i})$ is called fractal-based basic probability assignment (FBBPA).
\end{definition}
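Definition \ref{bfentropy} can be computed directly; a minimal Python sketch (our own illustration, not the paper's code):

```python
from itertools import combinations
from math import log2

def fbbpa(bpa):
    # Every focal element G assigns m(G)/(2^|G|-1) to each of its
    # nonempty subsets, including G itself (one unit-time even split).
    out = {}
    for G, m in bpa.items():
        share = m / (2 ** len(G) - 1)
        for r in range(1, len(G) + 1):
            for c in combinations(sorted(G), r):
                S = frozenset(c)
                out[S] = out.get(S, 0.0) + share
    return out

def fb_entropy(bpa):
    # FB entropy is the Shannon entropy (base 2) of the FBBPA.
    return -sum(v * log2(v) for v in fbbpa(bpa).values() if v > 0)

vacuous = {frozenset('ab'): 1.0}   # vacuous BPA on a 2-element frame
e = fb_entropy(vacuous)            # log2(2^2 - 1) = log2(3)
```

For the vacuous BPA on a 2-element frame, the FBBPA is uniform over the three nonempty subsets and the FB entropy equals $\log_2 3$.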
By observing Equation \ref{mfe}, we can find that $\mathbb{B}_{F}(2^{\Theta})$ is obtained by one unit-time transformation of PPT. For $m(\Theta)=1$, intuitively the most uncertain BPA, after one unit-time splitting, $\mathbb{B}_{F}(2^{\Theta})$ is a uniform distribution over $2^{\Theta}\setminus\{\varnothing\}$, which is the same as the maximum entropy distribution of Shannon entropy. So the FBBPA is neither a BPA nor a probability distribution, but describes the characteristics of a BPA from the perspective of probability.
\subsection{The Maximum FB entropy and its physical meaning}
\begin{definition}[Maximum FB entropy]\label{maxbfentropy}
For a discernment framework $\Theta=\{\theta_{1},\dots,\theta_{n}\}$ with BPA $\mathbb{B}(2^{\Theta})$, the maximum fractal-based belief entropy $E_{FB}^{\uparrow}(\Theta)$ is
\begin{equation}\label{maxbfentropye}
\begin{aligned}
E_{FB}^{\uparrow}=\log (2^{n}-1),
\end{aligned}
\end{equation}
when $m(\Theta)=1.$
\end{definition}
\textbf{\emph{Proof.}} Let
\begin{equation}
E=-\sum_{F_{i}\in2^\Theta}m_{F}(F_{i})\log m_{F}(F_{i}),
\end{equation}
and according to Equation \ref{mfe}, it is obvious that $\sum_{F_{i}\subseteq \Theta}m_{F}(F_{i})=1$. So the Lagrange function can be denoted as
\begin{equation}
E_{0}=-\sum_{F_{i}\in2^\Theta}m_{F}(F_{i})\log m_{F}(F_{i})+\lambda (\sum_{F_{i}\subseteq \Theta}m_{F}(F_{i})-1),
\end{equation}
and calculate its gradient
\begin{equation}
\frac{\partial E_{0}}{\partial m_{F}(F_{i})}=-\log m_{F}(F_{i})-\frac{1}{\ln a}+\lambda=0.
\end{equation}
For all $F_{i}\subseteq \Theta$
\begin{equation}
\log m_{F}(F_{i})=-\frac{1}{\ln a}+\lambda=k,
\end{equation}
so $E_{FB}$ reaches its maximum when $m_{F}(F_{i})=\frac{1}{2^{|\Theta|}-1}$ for all $F_{i}$, i.e., when $m(\Theta)=1$.
\qed
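The maximum can also be checked numerically. The sketch below (our own illustration, restricted to a 2-element frame with the closed-form FBBPA) samples random normalized BPAs and compares their FB entropy with that of the vacuous BPA.

```python
import random
from math import log2

random.seed(0)

def fb_entropy_2frame(ma, mb, mab):
    # Closed-form FBBPA on {a, b}: m_F(a) = m(a) + m(ab)/3,
    # m_F(b) = m(b) + m(ab)/3, m_F(ab) = m(ab)/3.
    vals = [ma + mab / 3, mb + mab / 3, mab / 3]
    return -sum(v * log2(v) for v in vals if v > 0)

emax = fb_entropy_2frame(0.0, 0.0, 1.0)   # vacuous BPA: log2(3)
# Random normalized BPAs never exceed the vacuous BPA's FB entropy.
best = 0.0
for _ in range(1000):
    x, y = sorted((random.random(), random.random()))
    best = max(best, fb_entropy_2frame(x, y - x, 1 - y))
```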
The maximum Shannon entropy, called the information volume, and its probability distribution can solve real physical problems in practical applications. As a generalization of Shannon entropy, the maximum FB entropy also has a corresponding physical model in reality, as shown in Example \ref{e2}.
\begin{example}[Physical model of maximum FB entropy]\label{e2}
Assume $64$ teams participate in a competition. The only information source is the organizer, whom we can ask whether certain teams are champions. Our goal is to find all champions.
\begin{description}
\item[\textbf{Q}:] At least how many inquiries are needed to find all champions?
\item[\textbf{Case1}:]We know the number of champions is $1$.
\item[\textbf{Case2}:]We don’t know the exact number of champions.
\end{description}
Figure \ref{FB_PHYSICAL} shows the difference between the $2$ cases.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{FB_PHYSICAL.png}
\caption{The difference between $2$ Cases.}
\label{FB_PHYSICAL}
\end{figure}
Case $1$ can be written as a uniform probability distribution with $64$ basic events $\{p(1)=\cdots=p(64)=\frac{1}{64}\}$. The Shannon entropy with base $2$ is $\log_2 64=6$, so only $6$ inquiries are needed to find the champion. But in Case $2$, we are not sure about the number of champions, so every nonempty subset of the $64$-team frame has equal probability of being the set of champions. It can be written as $\{p(1)=\frac{1}{2^{64}-1},p(2)=\frac{1}{2^{64}-1},\cdots,p(1\cdots64)=\frac{1}{2^{64}-1}\}$, which corresponds to the FBBPA of the maximum FB entropy $\{m_F(1)=\frac{1}{2^{64}-1},m_F(2)=\frac{1}{2^{64}-1},\cdots,m_F(1\cdots64)=\frac{1}{2^{64}-1}\}$, i.e., to the BPA $m(1\cdots64)=1$ with FB entropy $E_{FB}=\log_2 (2^{64}-1)\approx64$. The number of inquiries in Case $2$ is $64$, which means we can only find all the champions by inquiring about every team.
\end{example}
Example \ref{e2} illustrates that FB entropy is a generalization of Shannon entropy in the physical model of maximum entropy.
\subsection{Evaluation FB entropy}
According to the ten requirements for total uncertainty measurements of BPAs in \cite{klir2013uncertainty,abellan2008requirements}, we evaluate the properties of FB entropy to show its advantages. Here, $\heartsuit$ means that FB entropy satisfies the proposition, and $\spadesuit$ means that it does not. For the unsatisfied propositions, we give explanations and argue for the rationality of FB entropy. Under an $n$-element discernment framework $\Theta$ with BPA $\mathbb{B}(2^{\Theta})$, we evaluate FB entropy in Propositions \ref{p1}--\ref{p10}.
\begin{proposition}[Probabilistic consistency ($\heartsuit$)]\label{p1}
When $m(F_i)=0$ for all $|F_i|>1$, the total uncertainty measurement should degenerate into Shannon entropy.
\end{proposition}
\textbf{\emph{Proof.}} When $\mathbb{B}(2^{\Theta})$ satisfies $\sum_{\theta_i\in\Theta}m(\theta_i)=1$, substituting it into Equations \ref{bfentropye} and \ref{mfe} gives:
\begin{equation}
E_{FB}=-\sum_{F_{i} \in 2^\Theta} m_{F}(F_{i})\log m_{F}(F_{i}) = -\sum_{i=1}^{n}m(\theta_{i})\log m(\theta_{i})=H(\mathbb{B}).
\end{equation}
So the FB entropy satisfies the Proposition \ref{p1}.
\qed
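A quick numeric confirmation of Proposition \ref{p1} (a sketch, not the paper's code): for a Bayesian BPA, the split of Equation \ref{mfe} leaves the mass functions unchanged, so FB entropy coincides with Shannon entropy.

```python
from itertools import combinations
from math import log2

def fb_entropy(bpa):
    # FBBPA (even split over nonempty subsets) followed by Shannon entropy.
    out = {}
    for G, m in bpa.items():
        share = m / (2 ** len(G) - 1)
        for r in range(1, len(G) + 1):
            for c in combinations(sorted(G), r):
                out[frozenset(c)] = out.get(frozenset(c), 0.0) + share
    return -sum(v * log2(v) for v in out.values() if v > 0)

# Bayesian BPA (mass only on singletons): the split changes nothing.
bayes = {frozenset('a'): 0.5, frozenset('b'): 0.3, frozenset('c'): 0.2}
shannon = -sum(p * log2(p) for p in (0.5, 0.3, 0.2))
```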
\begin{proposition}[Set consistency($\spadesuit$)]\label{p2}
The total uncertainty measurement of the vacuous BPA ($m(\Theta)=1$) should equal the Hartley entropy, $TU=E_H=\log |\Theta|$.
\end{proposition}
\textbf{\emph{Proof.}} For the vacuous BPA, $E_{FB}=\log (2^{|\Theta|}-1) \neq \log |\Theta| = E_H$, so FB entropy does not satisfy Proposition \ref{p2}.
\textbf{\emph{Explanation.}} The uncertainty of a probability distribution is caused by the discord between basic events. The maximum Shannon entropy distribution is uniform, assigning the same support degree to all events. In DSET, a BPA not only reflects the discord between elements but also contains the uncertainty of the distribution itself. In Example \ref{e2}, a BPA under an $n$-element discernment framework and a probability distribution over a random variable with $2^n-1$ outcomes can express the same information, which shows that a BPA can express more information than a probability distribution of the same dimension. So it is rational for the maximum belief entropy to be larger than the maximum Shannon entropy. Some common maximum total uncertainty measurements and the maximum Shannon entropy are shown in Figure \ref{f5} to illustrate this property more intuitively.
\begin{figure}
\begin{minipage}[htbp]{0.5\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f5.jpg}
\caption{The maximum uncertainties of common total uncertainty measurements are larger than the maximum Shannon entropy, so the requirement of set consistency is not reasonable.}
\label{f5}
\end{minipage}
\begin{minipage}[htbp]{0.5\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f6.jpg}
\caption{As the uncertainty increases, FB entropy, JS entropy and the SU measurement also increase.}
\label{f6}
\end{minipage}
\end{figure}
\qed
\begin{proposition}[Monotonicity($\heartsuit$)]\label{p6}
If BPA $\mathbb{B}(2^{\Theta})$ and $\mathbb{B}(2^{\Omega})$ have following relationship: $\mathbb{B}(2^{\Theta}) \subseteq \mathbb{B}(2^{\Omega})$, the total uncertainty measurements of them should satisfy $UM(\mathbb{B}(2^{\Theta})) \leq UM(\mathbb{B}(2^{\Omega}))$.
\end{proposition}
\textbf{\emph{Proof.}}
Luo \textit{et al.} \cite{Luo2019matrix} proposed a widely used negation of BPAs, and the direction of negation is the direction of ignorance. For a discernment framework $\Theta=\{\theta_{1},\theta_{2}\}$, the process of negation over $10$ steps is shown in Table \ref{t2}, and the trends of JS entropy, the SU measurement, Deng entropy and FB entropy are shown in Figure \ref{f6}. According to their trends, JS entropy, the SU measurement and FB entropy rise continuously, which illustrates that they satisfy Proposition \ref{p6}. But the maximum entropy distribution of Deng entropy is not the vacuous BPA, so Deng entropy does not satisfy monotonicity in this case.
\begin{table}[htbp!]
\caption{Intuitively, as the number of negations increases, $m(\Theta)$ gradually increases, and the uncertainty expressed by the BPA becomes larger}
\label{t2}
\begin{center}
\begin{tabular}{c|cccccccccc}
\Xhline{1.4pt}
Times& $1$& $2$& $3$& $4$& $5$& $6$& $7$& $8$& $9$& $10$ \\
\Xhline{1.4pt}
$m(x_{1}) $ & $0.6000$& $0.0500$& $0.1500$& $0.0125$& $0.0375$& $0.0031$& $0.0094$& $0.0008$& $0.0023$& $0.0002$\\
$m(x_{2})$ & $0.1000$& $0.3000$& $0.0250$& $0.0750$& $0.0063$ & $0.0187$& $0.0016$& $0.0047$& $0.0004$& $0.0012$ \\
$m(x_{1}x_{2})$& $0.3000$& $0.6500$& $0.8250$& $0.9125$& $0.9562$ & $0.9781$& $0.9891$& $0.9945$& $0.9973$& $0.9986$ \\
\Xhline{1.4pt}
\end{tabular}
\end{center}
\end{table}
\qed
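The monotonicity trend of Figure \ref{f6} can be reproduced for FB entropy directly from the values in Table \ref{t2}; a sketch using the closed-form FBBPA on a 2-element frame (our own illustration):

```python
from math import log2

def fb_entropy_2frame(m1, m2, m12):
    # Closed-form FBBPA on a 2-element frame {x1, x2}.
    vals = [m1 + m12 / 3, m2 + m12 / 3, m12 / 3]
    return -sum(v * log2(v) for v in vals if v > 0)

# (m(x1), m(x2), m(x1 x2)) for the first five rows of Table t2.
steps = [(0.6, 0.1, 0.3), (0.05, 0.3, 0.65), (0.15, 0.025, 0.825),
         (0.0125, 0.075, 0.9125), (0.0375, 0.0063, 0.9562)]
entropies = [fb_entropy_2frame(*s) for s in steps]
# FB entropy rises monotonically as mass moves toward m(x1 x2).
```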
\begin{proposition}[Range($\spadesuit$)]\label{p3}
The range of a total uncertainty measurement should be $[0,\log (|\Theta|)]$.
\end{proposition}
\textbf{\emph{Proof.}}
If $m(\theta_{i})=1$, FB entropy reaches its minimum $0$, so $E_{FB}^{\downarrow}=0$. In Definition \ref{maxbfentropy}, we have proven that $E_{FB}^{\uparrow}=\log(2^{n}-1)$. Based on the above, the range of FB entropy is $[0,\log (2^n-1)]$, which does not satisfy Proposition \ref{p3}.
The explanation of Proposition \ref{p3} is similar to that of Proposition \ref{p2}.
\qed
\begin{proposition}[Additivity($\heartsuit$)]\label{p4}
Suppose $X$, $Y$ and $Z$ are three discernment frameworks, where $X$ and $Y$ are independent and $Z=X\times Y$. The total uncertainty measurement should satisfy
\begin{equation}
TUM(Z)=TUM(X)+TUM(Y),
\end{equation}
where $TUM$ is a general term for total uncertainty measurements.
\end{proposition}
\textbf{\emph{Proof.}} A joint BPA has different definitions according to whether $m(\varnothing)=0$. Smets \cite{smets1993belief} defined the generalized Bayesian theorem under the condition $m(\varnothing)\neq 0$, where for the joint frame $\Psi=\Theta \times \Omega$ the number of mass functions is $|2^{\Psi}|=2^{|\Theta| \times |\Omega|}$. But in this paper, BPAs are normalized, so the number of joint mass functions for $X \times Y$ is $(2^{|X|}-1)(2^{|Y|}-1)$. Figure \ref{f61} uses a specific case to show the difference between the two definitions intuitively.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.85\textwidth]{FB_add.png}
\caption{Suppose $2$ discernment frameworks $X=\{x_1,x_2\}$ and $Y=\{y_1,y_2\}$; using Smets' joint frame, we get a new frame $Z=\{z_{11},z_{12},z_{21},z_{22}\}$. For normalized BPAs, the product of the original BPAs cannot cover all focal elements under the new frame, so the mass functions of the yellow parts are always $0$, which conflicts with the definition of a joint probability distribution. So we define the mass functions of the joint BPA as generated by the product of the original BPAs (blue part).}
\label{f61}
\end{figure}
According to the above description, the number of mass functions of the joint BPA is less than the size of the power set of the joint framework. According to Definition \ref{bfentropy}, the FBBPA is then computed not by assigning mass functions over the full power set, but by assigning them to the subsets that have an inclusion relationship under the current frame.
For joint frame $Z =X\times Y $, we define joint BPA $m^{Z}$ and joint FBBPA $m_{F}^Z$ as follows:
\begin{equation}\label{mze}
\begin{aligned}
&m^Z(z_{ij})=m(x_i)\times m(y_j);\\
&m^Z(z_{ij}z_{im})=m(x_i)\times m(y_j y_m);\\
&m^Z(z_{ij}z_{im}z_{nj}z_{nm})=m(x_i x_n) \times m(y_j y_m);
\end{aligned}
\end{equation}
\begin{equation}\label{mzfe}
\begin{aligned}
\forall F_i=A_i\times B_i \subseteq Z,\quad m_{F}^Z(F_i)=\sum_{F_i\subseteq K_i;~G_i\times H_i =K_i}\frac{m^Z(K_i)}{(2^{|G_i|}-1)(2^{|H_i|}-1)}.
\end{aligned}
\end{equation}
For $C\subseteq Z$, $A\subseteq X$ and $B \subseteq Y$ with $A\times B=C$, according to Equations \ref{mze} and \ref{mzfe},
\begin{equation}
\begin{aligned}
m^{Z}_F(C)&=\sum_{C \subseteq K_i;\,G_i\times H_i =K_i;\,A\subseteq G_i;\,B\subseteq H_i }\frac{m^Z(K_i)}{(2^{|G_i|}-1)(2^{|H_i|}-1)}\\
&=\sum_{A\subseteq G_i;\,B\subseteq H_i }\frac{m(G_i)\times m(H_i)}{(2^{|G_i|}-1)(2^{|H_i|}-1)}\\
&=\left(\sum_{A\subseteq G_i}\frac{m(G_i)}{2^{|G_i|}-1}\right)\times\left(\sum_{B\subseteq H_i}\frac{m(H_i)}{2^{|H_i|}-1}\right)\\
&=m_F(A)\times m_F(B).
\end{aligned}
\end{equation}
Shannon entropy satisfies additivity, and the consistency of the joint FBBPA with the product of the original FBBPAs has been proved above, so it follows that FB entropy satisfies additivity; Example \ref{eee} shows the calculation process. Hence FB entropy satisfies Proposition \ref{p4}.
\begin{example}\label{eee}
For two independent BPAs under $2$-element frames $X$ and $Y$, $\mathbb{B}(2^X)=\{m(x_1)=m(x_2)=\frac{1}{5},m(x_1 x_2)=\frac{3}{5}\}$ and $\mathbb{B}(2^Y)=\{m(y_1)=\frac{1}{10},m(y_2)=\frac{3}{5},m(y_1 y_2)=\frac{3}{10}\}$, so the joint BPA is
\begin{equation}
\begin{aligned}
&\mathbb{B}(2^Z)=\mathbb{B}(2^X)\times \mathbb{B}(2^Y)=\{m(z_{11})=\frac{1}{50},m(z_{12})=\frac{6}{50},m(z_{21})=\frac{1}{50},m(z_{22})=\frac{6}{50},\\
&m(z_{11} z_{21})=\frac{3}{50},m(z_{12} z_{22})=\frac{18}{50},m(z_{11} z_{12})=\frac{3}{50},m(z_{21} z_{22})=\frac{3}{50},m(z_{11} z_{12} z_{21} z_{22})=\frac{9}{50}\}.
\end{aligned}
\end{equation}
According to Equation \ref{mzfe}, the joint FBBPA is
\begin{equation}
\begin{aligned}
&\mathbb{B}_F(2^Z)=\mathbb{B}_F(2^X)\times \mathbb{B}_F(2^Y)=\{m_F(z_{11})=\frac{4}{50},m_F(z_{12})=\frac{14}{50},m_F(z_{21})=\frac{4}{50},m_F(z_{22})=\frac{14}{50},\\
&m_F(z_{11} z_{21})=\frac{2}{50},m_F(z_{12} z_{22})=\frac{7}{50},m_F(z_{11} z_{12})=\frac{2}{50},m_F(z_{21} z_{22})=\frac{2}{50},m_F(z_{11} z_{12} z_{21} z_{22})=\frac{1}{50}\}.
\end{aligned}
\end{equation}
So the FB entropies of $\mathbb{B}(2^Z)$, $\mathbb{B}(2^X)$ and $\mathbb{B}(2^Y)$ satisfy $E_{FB}(Z)=2.6787=1.5219+1.1568=E_{FB}(X)+E_{FB}(Y)$.
\end{example}
\qed
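The additivity computation in Example \ref{eee} can also be checked numerically. The following Python sketch (illustrative only, not part of the formal development; the element names are arbitrary) implements the FBBPA split, the joint FBBPA of Equation \ref{mzfe}, and FB entropy:

```python
from itertools import combinations
from math import log2

def nonempty_subsets(F):
    return [frozenset(c) for r in range(1, len(F) + 1)
            for c in combinations(sorted(F), r)]

def fbbpa(m):
    # each focal element's mass is split equally among its
    # 2^|F| - 1 non-empty subsets (the focal element itself included)
    out = {}
    for F, mass in m.items():
        for A in nonempty_subsets(F):
            out[A] = out.get(A, 0.0) + mass / (2 ** len(F) - 1)
    return out

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def joint_fbbpa(mX, mY):
    # Eq. (mzfe): m(G x H) is split among product subsets A x B,
    # each receiving m(G)m(H) / ((2^|G|-1)(2^|H|-1))
    out = {}
    for G, pG in mX.items():
        for H, pH in mY.items():
            w = pG * pH / ((2 ** len(G) - 1) * (2 ** len(H) - 1))
            for A in nonempty_subsets(G):
                for B in nonempty_subsets(H):
                    out[(A, B)] = out.get((A, B), 0.0) + w
    return out

mX = {frozenset('a'): 0.2, frozenset('b'): 0.2, frozenset('ab'): 0.6}
mY = {frozenset('c'): 0.1, frozenset('d'): 0.6, frozenset('cd'): 0.3}
E_X = entropy(fbbpa(mX))            # ~1.5219
E_Y = entropy(fbbpa(mY))            # ~1.1568
E_Z = entropy(joint_fbbpa(mX, mY))  # ~2.6787
```

Running it reproduces $E_{FB}(X)\approx 1.5219$, $E_{FB}(Y)\approx 1.1568$, and $E_{FB}(Z)=E_{FB}(X)+E_{FB}(Y)\approx 2.6787$, in agreement with Example \ref{eee}.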
\begin{proposition}[Subadditivity($\heartsuit$)]\label{p5}
Suppose $X$, $Y$ and $Z$ are discernment frameworks with $Z=X\times Y$. The total uncertainty measurements should satisfy
\begin{equation}
TUM(Z) \leq TUM(X)+TUM(Y).
\end{equation}
\end{proposition}
\textbf{\emph{Proof.}}
\begin{description}
\item[\textbf{Case 1:}] If the BPAs of $X$ and $Y$ are independent, $E_{FB}(Z) = E_{FB}(X)+E_{FB}(Y)$ has been proven in Proposition \ref{p4}.
\item[\textbf{Case 2:}] If the BPAs of $X$ and $Y$ are not independent, then when they are combined into a joint BPA, each obtains information from the other's BPA, which reduces the uncertainty of the joint BPA. So $E_{FB}(Z) < E_{FB}(X)+E_{FB}(Y)$.
\end{description}
According to the $2$ cases, FB entropy satisfies Proposition \ref{p5}.
\qed
\begin{proposition}[RB1($\heartsuit$)]\label{p7}
The calculation process of a total uncertainty measurement should not be overly complicated.
\end{proposition}
\textbf{\emph{Proof.}}
For an $n$-element discernment framework, the computational complexities of Equations \ref{bfentropye} and \ref{mfe} are $\mathcal{O}(2^n)$ and $\mathcal{O}(n2^n)$, respectively. According to Table \ref{d1t1}, the computational complexity of JS entropy and the SU measurement is similar to that of FB entropy, and Yang and Han's method has a higher complexity. Therefore, the complexity of FB entropy is within an acceptable range, and FB entropy satisfies Proposition \ref{p7}.
\qed
\begin{proposition}[RB2($\heartsuit$)]\label{p8}
The total uncertainty measurement can be divided into two parts that measure the discord and the non-specificity respectively.
\end{proposition}
\textbf{\emph{Proof.}}
Different from other methods (JS entropy, the SU measurement and Deng entropy), the discord and non-specificity of FB entropy cannot be separated directly from its expression, but these two measurements can be obtained in a more reasonable way. FB entropy is defined from the process of PPT, so when the BPA is transformed to the PPT, the assignment expresses discord only. Based on the above, the discord $E^{\mathcal{D}}_{FB}$ and non-specificity $E^{\mathcal{N}}_{FB}$ of $E_{FB}$ are defined as follows:
\begin{equation}
\begin{aligned}
&E^{\mathcal{D}}_{FB}=H(BetP)=H_j=-\sum_{\theta_i\in\Theta}BetP(\theta_{i})\log BetP(\theta_{i});\\
&E^{\mathcal{N}}_{FB}=E_{FB}-E^{\mathcal{D}}_{FB}=\sum_{\theta_i\in\Theta}BetP(\theta_{i})\log BetP(\theta_{i})-\sum_{F_i\in2^\Theta} m_{F}(F_i)\log m_{F}(F_i).
\end{aligned}
\end{equation}
The relationship of FB entropy to its discord and non-specificity is shown in Figure \ref{f71}. Based on the above, FB entropy satisfies Proposition \ref{p8}.
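As a numerical illustration (a sketch under the definitions above, reusing the $2$-element BPA of Example \ref{eee} as input), the split into discord and non-specificity can be computed as follows:

```python
from itertools import combinations
from math import log2

def shannon(p):
    return -sum(v * log2(v) for v in p.values() if v > 0)

def fbbpa(m):
    # fractal split: each focal element's mass is shared equally
    # among its 2^|F| - 1 non-empty subsets
    out = {}
    for F, mass in m.items():
        for r in range(1, len(F) + 1):
            for c in combinations(sorted(F), r):
                A = frozenset(c)
                out[A] = out.get(A, 0.0) + mass / (2 ** len(F) - 1)
    return out

def betp(m):
    # pignistic transformation: each focal element's mass is shared
    # equally among its singletons
    out = {}
    for F, mass in m.items():
        for theta in F:
            out[theta] = out.get(theta, 0.0) + mass / len(F)
    return out

m = {frozenset('a'): 0.2, frozenset('b'): 0.2, frozenset('ab'): 0.6}
E_FB = shannon(fbbpa(m))  # total uncertainty
E_D = shannon(betp(m))    # discord = H(BetP)
E_N = E_FB - E_D          # non-specificity
```

For this BPA, $E^{\mathcal{D}}_{FB}=1$ and $E^{\mathcal{N}}_{FB}\approx 0.522$.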
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{f71.jpg}
\caption{The relationship of FB entropy and its discord $\&$ non-specificity measure}
\label{f71}
\end{figure}
\qed
\begin{proposition}[RB3($\heartsuit$)]\label{p9}
The total uncertainty measurement must be sensitive to changes in the BPA.
\end{proposition}
\textbf{\emph{Proof.}}
Since the change from BPA to FBBPA is reversible, any change to the BPA is equivalent to a change of the FBBPA. For different FBBPAs, Shannon entropy has been proved to be a sensitive measurement, so for any BPA, FB entropy is also sensitive to its changes. Hence FB entropy satisfies Proposition \ref{p9}. Example \ref{e3} shows the results intuitively.
\qed
\begin{example}\label{e3}
For discernment framework $X=\{x_1,x_2\}$, $m(x_1)$ and $m(x_2)$ change from $0$ to $1$ and satisfy $m(x_1)+m(x_2)\leqslant 1$.
Figures \ref{f74} and \ref{f75} show the change trends of discord and non-specificity, and Figures \ref{f72} and \ref{f73} show their relationship and its top view.
\begin{figure}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f74.jpg}
\caption{Discord change trend in Example \ref{e3}}
\label{f74}
\end{minipage}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f75.jpg}
\caption{Non-specificity change trend in Example \ref{e3}}
\label{f75}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f73.jpg}
\caption{The relationship of TUM in Example \ref{e3}}
\label{f72}
\end{minipage}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{f72.jpg}
\caption{The top view of TUM in Example \ref{e3}}
\label{f73}
\end{minipage}
\end{figure}
\end{example}
\begin{proposition}[RB4($\heartsuit$)]\label{p10}
The proposed method should have a corresponding model when it meets a theory more generalized than evidence theory.
\end{proposition}
\textbf{\emph{Proof.}}
FB entropy distributes mass functions uniformly to the power sets of their focal elements, because DSET uses the power set $2^n$ to express information. As a generalization of DSET, the DSmT \cite{smarandache2006advances} proposed by Dezert and Smarandache extends the power set $2^n$ to $U^n$. Following this idea, FB entropy can also measure the uncertainty of assignments in DSmT; it only needs to distribute the mass functions uniformly to the $U^n$ subsets. So FB entropy satisfies Proposition \ref{p10}.
\qed
In this section, $10$ requirements are used to evaluate the general properties of FB entropy and prove its rationality. In particular, in terms of additivity, no previous total uncertainty measurement could complete the additivity verification on the basis of a joint BPA. In the rest of the paper, we further show the advantages of FB entropy.
\section{Advantages of FB entropy}\label{fbentropy2}
We make an intuitive comparison through several examples to show advantages of FB entropy that are not available in previous methods.
\subsection{View from combination rules: combination interval consistency}
The combination rule of Dempster (CRD) \cite{dempster2008upper} and the disjunctive combination rule (DCR) \cite{smets1993belief} are the most widely used combination rules for normalized BPAs. For BPAs $\mathbb{B}_1(2^\Theta)$ and $\mathbb{B}_2(2^\Theta)$ under the discernment framework $\Theta$, the CRD $\mathbb{B}_{1\oplus2}$ and the DCR $\mathbb{B}_{1\circledtiny{$\cup$}2}$ are
\begin{equation}\label{crde}
m_{1\oplus 2}(F_i)=
\begin{cases}
K^{-1}\cdot \sum_{G_i\subseteq\Theta,H_i\subseteq\Theta,G_i \cap H_i=F_i}m_1(G_i)m_2(H_i)& \text{$F_i \neq \varnothing$}\\
0& \text{$F_i = \varnothing$}
\end{cases},
\end{equation}
\begin{equation}
m_{1\circledtiny{$\cup$}2}(F_i)=\sum_{G_i\subseteq\Theta,H_i\subseteq\Theta,G_i \cup H_i=F_i}m_1(G_i)m_2(H_i),
\end{equation}
where $K=\sum_{H_i\cap G_i\neq\varnothing}m_1(G_i)m_2(H_i)$ is the total non-conflicting mass.
The CRD in DSET corresponds to the cross product in probability theory. The Shannon entropy of the probability distribution after the cross product is smaller than that of each original distribution, which means that the uncertainty of the distribution is reduced after receiving new information. Intuitively, the uncertainty of the BPA combined by the CRD should also be reduced, so its total uncertainty measurement should satisfy $TUM(\mathbb{B}_{1\oplus2}) \leqslant \min\{TUM(\mathbb{B}_1),TUM(\mathbb{B}_2)\}$. The DCR is a conservative combination rule. It assigns the mass functions of conflicting evidence to their union, which is bound to cause more uncertainty. Therefore, the total uncertainty measurement of the BPA combined by the DCR should be larger and satisfy $TUM(\mathbb{B}_{1\circledtiny{$\cup$}2}) \geqslant \max\{TUM(\mathbb{B}_1),TUM(\mathbb{B}_2)\}$. So the total uncertainty measurement should satisfy the combination interval consistency, i.e., $TUM(\mathbb{B}_1),TUM(\mathbb{B}_2)\in[TUM(\mathbb{B}_{1\oplus2}),TUM(\mathbb{B}_{1\circledtiny{$\cup$}2})]$. For the $2$ BPAs $\mathbb{B}_1=\{m(a)=m(b)=\frac{1-\frac{i}{1000}}{2},m(ab)=\frac{i}{1000}\}$ and $\mathbb{B}_2=\{m(a)=0.1,m(b)=0.7,m(ab)=0.2\}$, when $i$ ranges from $0$ to $1000$, Figures \ref{FB_combin} and \ref{Ed_combin} show the change trends of FB entropy and Deng entropy. From the figures, it can be concluded that FB entropy meets the combination interval consistency but Deng entropy does not. So for this property, the measurement effect of FB entropy is better than that of Deng entropy.
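The interval check can be reproduced for a single setting, e.g., $i=500$ (so $m_1(a)=m_1(b)=0.25$, $m_1(ab)=0.5$). The sketch below (illustrative only) implements the CRD and DCR on set-valued focal elements and verifies the combination interval consistency of FB entropy:

```python
from itertools import combinations
from math import log2

def fb_entropy(m):
    # FBBPA: split each focal element's mass equally among its
    # 2^|F| - 1 non-empty subsets, then take Shannon entropy
    f = {}
    for F, mass in m.items():
        for r in range(1, len(F) + 1):
            for c in combinations(sorted(F), r):
                A = frozenset(c)
                f[A] = f.get(A, 0.0) + mass / (2 ** len(F) - 1)
    return -sum(p * log2(p) for p in f.values() if p > 0)

def combine(m1, m2, op):
    out = {}
    for G, p in m1.items():
        for H, q in m2.items():
            F = op(G, H)
            out[F] = out.get(F, 0.0) + p * q
    conflict = out.pop(frozenset(), 0.0)
    if conflict:  # Dempster's rule renormalizes the conflict away
        out = {F: p / (1.0 - conflict) for F, p in out.items()}
    return out

crd = lambda m1, m2: combine(m1, m2, frozenset.intersection)
dcr = lambda m1, m2: combine(m1, m2, frozenset.union)

m1 = {frozenset('a'): 0.25, frozenset('b'): 0.25, frozenset('ab'): 0.5}
m2 = {frozenset('a'): 0.1, frozenset('b'): 0.7, frozenset('ab'): 0.2}
E_1, E_2 = fb_entropy(m1), fb_entropy(m2)
E_crd = fb_entropy(crd(m1, m2))
E_dcr = fb_entropy(dcr(m1, m2))
```

For this pair, $E_{FB}(\mathbb{B}_{1\oplus2}) \leqslant \min\{E_{FB}(\mathbb{B}_1),E_{FB}(\mathbb{B}_2)\}$ and $E_{FB}(\mathbb{B}_{1\circledtiny{$\cup$}2}) \geqslant \max\{E_{FB}(\mathbb{B}_1),E_{FB}(\mathbb{B}_2)\}$, as claimed.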
\begin{figure}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.8\textwidth]{FB_combin.png}
\caption{The change trend of FB entropy}
\label{FB_combin}
\end{minipage}
\begin{minipage}[htbp]{0.48\linewidth}
\centering
\includegraphics[width=0.98\textwidth]{Ed_combin.png}
\caption{The change trend of Deng entropy}
\label{Ed_combin}
\end{minipage}
\end{figure}
\subsection{View from non-specificity: more rational measurement}
Non-specificity is a peculiar property of DSET, so analyzing its uncertainty reasonably is significant. Besides the well-known Hartley entropy \cite{higashi1982measures}, Yang \textit{et al.} \cite{yang2016non} utilized the belief interval to measure non-specificity. In addition, common total uncertainty measurements can separate a non-specificity part, as shown in Table \ref{tnon}. We evaluate these methods from qualitative and quantitative aspects separately.
\begin{table}[htbp!]
\caption{Non-specificity of common total uncertainty measurements}
\label{tnon}
\begin{center}
\small
\begin{tabular}{c|cccc}
\Xhline{1.4pt}
Methods& \tabincell{c}{JS entropy~\&\\Pal \textit{et al.}'s entropy} & SU measurement & Deng entropy &FB entropy \\
\hline
Non-specificity&$\sum_{F_i\in 2^\Theta}m(F_i)\log |F_i|$&$\sum_{\theta_i\in\Theta}(Pl(\theta_i)-Bel(\theta_i))$&$\sum_{F_i\in 2^\Theta}m(F_i)\log (2^{|F_i|}-1)$&\tabincell{c}{$\sum_{\theta_i\in\Theta}BetP(\theta_{i})\log BetP(\theta_{i})$\\$-\sum_{F_i\in2^\Theta} m_{F}(F_i)\log m_{F}(F_i)$}\\
\Xhline{1.4pt}
\end{tabular}
\end{center}
\end{table}
\textbf{Qualitative analysis:} The SU measurement, JS entropy and other derivative methods \cite{Xue2021Interval} calculate the uncertainty of discord and non-specificity separately and then add them up. In this way, discord and non-specificity are measured independently, which can reflect the relative uncertainty between different pieces of evidence. But for a single BPA, we cannot know the proportions of discord and non-specificity in its total uncertainty. Logically, these two parts should be divided from the total uncertainty measurement, instead of using these $2$ parts to compose the measurement. From this point of view, Pal \textit{et al.}'s entropy, Deng entropy and FB entropy are more reasonable.
\textbf{Quantitative analysis:} Consider a $4$-element discernment framework $X=\{a,b,c,d\}$ with $2$ pieces of evidence $\mathbb{B}_1(2^X)=\{m(ab)=m(bc)=m(cd)=m(ad)=\frac{1}{4}\}$ and $\mathbb{B}_2(2^X)=\{m(ab)=m(bc)=m(cd)=m(ad)=m(ac)=m(bd)=\frac{1}{6}\}$. Their non-specificities are shown in Table \ref{tnone}; only the results of $\mathbb{B}_1(2^X)$ and $\mathbb{B}_2(2^X)$ measured by FB entropy are different. Although the belief intervals of their elements are all $[0,\frac{1}{2}]$, the probability ranges they can cover are not the same. For example, $\mathbb{B}_2(2^X)$ can produce the probability distribution $\{p(a)=\frac{1}{2},p(b)=\frac{1}{3},p(c)=\frac{1}{6}\}$, but $\mathbb{B}_1(2^X)$ can only reach $\{p(a)=\frac{1}{2},p(b)=\frac{1}{4},p(c)=\frac{1}{4}\}$. Only FB entropy can express this kind of difference, which proves its advantage in this aspect.
\begin{table}[htbp!]
\caption{Non-specificity of $\mathbb{B}_1(2^X)$ and $\mathbb{B}_2(2^X)$ }
\label{tnone}
\begin{center}
\small
\begin{tabular}{c|cccc}
\Xhline{1.4pt}
Methods& \tabincell{c}{JS entropy~\&\\Pal \textit{et al.}'s entropy} & SU measurement & Deng entropy &FB entropy \\
\hline
$\mathbb{B}_1(2^X)$ &$1$&$2$&$1.5850$&$2.8554$\\
\hline
$\mathbb{B}_2(2^X)$ &$1$&$2$&$1.5850$&$3.1133$\\
\Xhline{1.4pt}
\end{tabular}
\end{center}
\end{table}
\subsection{View from physical model: Stronger ability to express information than Shannon entropy}
In Example \ref{e2}, we used a physical model to show the difference between the maximum FB entropy and the maximum Shannon entropy. It was proved that the information volume of the maximum FB entropy under an $n$-element discernment framework is equivalent to the information volume of the maximum Shannon entropy of a random variable with $2^n-1$ events. Expanding the cases in Example \ref{e2} further demonstrates the advantages of FB entropy.
\begin{description}
\item[\textbf{Q}:] At least how many inquiries are needed to find all the champions?
\item[\textbf{Case1}:]We don't know the exact number of champions.
\item[\textbf{Case2}:]We don't know the exact number of champions, but we know that the champions are in a certain half.
\item[\textbf{Case3}:]We don't know the exact number of champions, but we know that the champions are in a quarter of the population.
\item[\textbf{Case4}:]We don't know the exact number of champions, but we know that the champions are in a $\frac{1}{64}$ of the population.
\end{description}
Among them, the BPAs and probability distributions of Case $1$ and Case $4$ have been shown in Example \ref{e2}, and Case $4$ can also be described as knowing that there is only one champion. Cases $2$ and $3$ cannot be expressed by a single probability distribution, but the BPAs $\mathbb{B}_{Case~2}=\{m(1\cdots 32)=m(33\cdots 64)=\frac{1}{2}\}$ and $\mathbb{B}_{Case~3}=\{m(1\cdots 16)=m(17\cdots 32)=m(33\cdots 48)=m(49\cdots 64)=\frac{1}{4}\}$ can express them. The FB entropy of Case $2$ is $E_{FB}(\mathbb{B}_{Case~2})\approx 33$, and in reality we also need $33$ inquiries to find all champions: first, we inquire once to find which half contains all champions, and then we find the champions by inquiring about all $32$ people in that half. For Case $3$, $E_{FB}(\mathbb{B}_{Case~3})\approx 18$, which is also consistent with the physical model. Based on the above, we can express the relationship of the $4$ cases in Figure \ref{ff}, which unifies the degeneration of BPA to probability distribution and of FB entropy to Shannon entropy. From the superiority of BPA over probability distribution, the superiority of FB entropy over Shannon entropy is inferred. In this respect, FB entropy is better than all existing belief entropies.
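For BPAs of this block form, the FBBPA is uniform over $k(2^s-1)$ subsets, where $k$ is the number of equal blocks and $s$ their size, so $E_{FB}=\log_2\bigl(k(2^s-1)\bigr)$. A short sketch (illustrative only) confirms the inquiry counts quoted above:

```python
from math import log2

def fb_entropy_equal_blocks(k, s):
    """FB entropy of a BPA assigning mass 1/k to each of k disjoint
    focal elements of size s (the Cases above have k*s = 64 people).
    Each block's mass spreads uniformly over its 2^s - 1 non-empty
    subsets, so the FBBPA is uniform over k*(2^s - 1) sets."""
    return log2(k * (2 ** s - 1))

E_case2 = fb_entropy_equal_blocks(2, 32)   # ~33 inquiries
E_case3 = fb_entropy_equal_blocks(4, 16)   # ~18 inquiries
```

Case $1$ corresponds to $(k,s)=(1,64)$ with $E_{FB}\approx 64$, and Case $4$ to $(k,s)=(64,1)$ with $E_{FB}=6$, the Shannon maximum.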
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.85\textwidth]{FB_phsical2.png}
\caption{The relationship of $4$ Cases.}
\label{ff}
\end{figure}
\section{Conclusion}
This paper utilizes fractals to simulate the process of the pignistic probability transformation, which shows the information lost in the transformation more intuitively. Based on this process, we propose the fractal-based belief entropy to measure the BPA's total uncertainty. After evaluating the $10$ requirements, we prove that FB entropy can reasonably measure the uncertainty of a BPA. In addition, we prove its superiority from $3$ aspects: combination rules, non-specificity and a physical model. Based on the above, the contributions of the paper are summarized as follows:
\begin{itemize}
\item[$\bullet$] [\textbf{Process of probability transformation:}] We consider probability transformation as a process and propose a possible transformation process for PPT and PMT. This idea provides a new perspective for evaluating probability transformations, so that result orientation is no longer the only evaluation criterion.
\item[$\bullet$] [\textbf{Total uncertainty measurement of BPA:}] Based on fractals, we propose the FBBPA and substitute it into Shannon entropy to define FB entropy. After evaluation, FB entropy can not only measure the total uncertainty of a BPA reasonably, but also satisfies additivity, which realizes the correspondence with Shannon entropy.
\item[$\bullet$] [\textbf{Combination rule interval:}] Based on the CRD and the DCR, we propose the combination interval consistency and prove that FB entropy is better than Deng entropy with respect to this property.
\item[$\bullet$] [\textbf{Discord and non-specificity measurement:}] Since PPT is the end point of the fractal method, we substitute it into Shannon entropy as the discord measurement. Through qualitative and quantitative analysis, we prove that FB entropy is superior to all previous uncertainty measurement methods in this aspect.
\item[$\bullet$] [\textbf{Physical model consistency: }] As a generalization of Shannon entropy, FB entropy not only degenerates into Shannon entropy when the input is a probability distribution, but also corresponds to Shannon entropy in the physical model of maximum entropy.
\end{itemize}
In future research, this work can be further extended in three directions. (1) In terms of probability transformation, more probability transformation methods can be simulated based on the proposed process model. (2) In terms of uncertainty measurement, this fractal-based measurement method can be applied to more uncertainty theories. (3) In DSET, FB entropy can be applied to solve practical problems such as information fusion, decision making and fault diagnosis.
\section*{Declaration of interests}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section*{Acknowledgment}
The work is partially supported by the National Natural Science Foundation of China (Grant No. 61973332) and by JSPS Invitational Fellowships for Research in Japan (Short-term). Thanks to the reviewers' valuable comments, which significantly improved the quality of the paper. Thanks to the colleagues in the Information Fusion and Intelligent System Laboratory for their help and support.
\section{Introduction}
The skin friction associated with a turbulent boundary layer
constitutes about \SI{50}{\percent} of the total drag of an airplane.
Owing to its importance, passive or
active skin-friction reduction means have widely been investigated \citep{Fan2016book}.
Promising strategies include riblets
\citep{walsh_optimization_1984}, compliant surfaces \citep{Luhar2016jot}, spanwise wall oscillations and
similar variations
\citep{jung_suppression_1992,quadrio_streamwise-travelling_2009}, and
spanwise traveling waves of spanwise forcing
\citep{du_suppressing_2000} or wall-normal deflection
\citep{Klumpp2011ftc,Albers2019,Albers2019Arxiv}.
To determine the optimal actuation settings, the parameter
space is typically scanned by performing a large number of numerical
simulations, which is very costly and sometimes untractable,
e.g., for high Reynolds number or large actuation wavelength. This
reliance on numerical simulations is partially due to the fact that
experiments for many of these actuation concepts are either currently
unrealizable or are limited by design to a small actuation range. In
this study, a methodology is developed to model sparse flow response
data to spanwise traveling surface waves using a machine learning
regression algorithm for interpolation and a ridgeline modeling for
extrapolation and optimization, which reduces the necessity for a
large parametric study. Investigations on the boundary layer response
sensitivities show a self-similar response behavior starting at a
certain wavelength.
Actuation employing a spanwise pressure gradient has been shown to
attenuate the boundary layer low-speed streaks and reduce turbulence
production. This principle was first put into practice by a spanwise
wall oscillation for turbulent channel flows
\citep{jung_suppression_1992,touber_near-wall_2012} and for turbulent
boundary layers \citep{lardeau_streamwise_2013}. The actuation
generates a thin spanwise Stokes layer and reduces the wall-shear
stress. A more efficient variant of the spanwise actuation is the
streamwise traveling wave of spanwise forcing
\citep{quadrio_streamwise-travelling_2009}, where a maximum drag
reduction of \SI{48}{\percent} can be achieved for turbulent channel
flows. Using the same actuation technique,
\cite{gatti_reynolds-number_2016} performed a comprehensive study of
4020 direct numerical simulations of a channel flow with varying
wavenumber, amplitude, frequency, and Reynolds number. Such a large
parameter study using high-fidelity simulations is unusual and
computationally very expensive.
The results showed that, for channel flows, the
drag reduction at higher Reynolds numbers can
be estimated using the vertical shift of the logarithmic velocity
profile.
Another actuation variant employs the transversal traveling wave. A
first implementation was conducted for a channel flow by
\cite{du_suppressing_2000}, where the wave effect was generated with a
Lorentz force. To enable real-life applications, more recent studies
proposed a similar traveling wave effect by means of surface
deformation. This approach has been experimentally tested for a
turbulent boundary layer by \cite{li_parametric_2018}, where the
surface was deflected using electromagnetic actuators. They achieved
a drag reduction of \SI{4.5}{\percent}. Spanwise traveling
transversal surface waves have also been numerically simulated for a
turbulent boundary layer over a flat plate \citep{Albers2019Arxiv}, with a maximum
drag reduction of \SI{26}{\percent}, and
over a wing section \citep{Albers2019}, where the pressure varies
in the streamwise direction. For the wing flow, the total drag
was reduced by $7.5\,\%$ and a slight lift increase was also achieved.
Drag reduction optimization in a rich actuation space constitutes a
challenge. In experimental setups, many degrees of freedom are fixed
by design, whereas high-fidelity high Reynolds number
simulations are able to explore a rich spectrum of actuation settings.
However, high-fidelity numerical computations are costly and thus limited to a
small number of control laws.
Surrogate models are computationally cheap models that approximate the
behavior of complex systems, based on a limited number of data. Surrogate
models are typically used for optimization
\citep{Forrester2009pas,Yondo2018pas} and for visualization and design space
analysis \citep{Holden2004phd}. There exist many approaches and algorithms.
The Response Surface Methodology (RSM) is one of the earliest approaches
\citep{box_experimental_1951}. The models from RSM are often
polynomials up to second order
\citep{Sevant2000joa,madsen_response_2000,ahn_response_2001,karami_experimental_2016},
which can not represent highly non-linear or multi-modal design landscapes.
Radial basis function (RBF) interpolation is a technique based on a weighted
sum of radial basis functions \citep{Broomhead1988}. Various types of basis
functions can be used to accommodate the response surface complexity
\citep{Forrester2009pas}. RBF surrogate models have successfully been used to
optimize groundwater remediation design \citep{Akhtar2016jgo}.
Another widely used technique is
kriging, which is a kernel-based probabilistic approach \citep{Matheron1963eg}.
Kriging models can yield high predictive power \citep{Forrester2009pas}
and have been used, for instance, to optimize a $2D$ airfoil geometry \citep{Jeong2005joa}.
Other computationally more expensive algorithms include support vector
regression (SVR) \citep{Drucker1997anips}, and artificial neural network
(ANN), first developed by \cite{Mcculloch1943bmp}.
SVR is a kernel-based regression technique, which tolerates prediction errors
within a user-defined interval. More details about SVR are given in section
\ref{Sec:SVR}.
SVR has been shown to outperform RSM, kriging, and RBF for a test bed of 26 complex
engineering functions \citep{Clarke2005jmd}, and has successfully been used
to optimize railway wind barriers \citep{Huoyue2017smo}.
ANNs are non-linear regression models inspired by biological neural networks.
They have been used to accurately predict the drag reduction in oil pipelines
\citep{zabihi_artificial_2019}.
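To make the surrogate idea concrete, the following Python sketch (a toy one-dimensional example with made-up sample points, not the model used in this study) builds an RBF interpolant with a Gaussian kernel, $\phi(r)=\exp(-(\varepsilon r)^2)$, by solving the interpolation system $\Phi w = y$:

```python
import math

def rbf_interpolant(xs, ys, eps=1.0):
    # Gaussian radial basis function
    phi = lambda r: math.exp(-(eps * r) ** 2)
    n = len(xs)
    # interpolation matrix Phi[i][j] = phi(|x_i - x_j|)
    A = [[phi(abs(xs[i] - xs[j])) for j in range(n)] for i in range(n)]
    b = list(ys)
    # naive Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * w[c] for c in range(r + 1, n))
        w[r] = (b[r] - s) / A[r][r]
    # the surrogate: a weighted sum of radial basis functions
    return lambda x: sum(w[j] * phi(abs(x - xs[j])) for j in range(n))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 0.0, -1.0]
f = rbf_interpolant(xs, ys)
```

By construction, the surrogate reproduces the training data exactly and provides cheap evaluations between the samples; in practice, library implementations with regularization and kernel selection would be used instead.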
A common shortcoming of data-driven surrogate models is their rapidly
diminishing accuracy outside the training parameter range.
This limitation is a strong disadvantage for the investigated boundary layer
application. Initial analyses have shown a higher drag reduction trend
leading beyond the training parameter space, where simulations become
increasingly less affordable.
In this study, we present a new modeling methodology capable of extrapolating drag
reduction beyond the parameter range.
The starting point is a
sparse set of 71 large-eddy simulations (LES) of a turbulent boundary layer
actuated by spanwise traveling transversal waves.
Our approach consists of two
steps: First, a surrogate model is built using SVR to interpolate the drag
reduction in the training parameter space. Then, extrapolation is enabled through the
identification of a ridgeline in the drag reduction response.
The model is used to analyze the actuation sensitivities
and to infer higher drag reduction and the corresponding
actuation settings.
The paper is structured as follows. The numerical method of the
high-fidelity simulations
and the computational setup of the flat plate undergoing transversal
spanwise traveling waves are defined in \cref{Sec:ComputationalSetup}. The
modeling approach is described in \cref{Sec:Methodology} for a simple
problem, before being applied to the actuated boundary layer data
in \cref{Sec:Results}. Finally, conclusions are presented in
\cref{Sec:Conclusions}.
\section{Numerical setup}
\label{Sec:ComputationalSetup}
In this section,
the open-loop actuation study of wall turbulence drag reduction is recapitulated.
In particular, the investigated actuation parameters are enumerated.
Section \ref{Subsec:Configuration} describes the configuration,
while section \ref{Sec:NumericalMethod} details
the employed large-eddy simulation (LES) solver.
\subsection{Configuration}
\label{Subsec:Configuration}
The fluid flow is described in a Cartesian frame of reference where
the streamwise, wall-normal, and spanwise coordinates are denoted by
$\mathbf{x} = (x,y,z)^T$ and the velocity components by $\mathbf{u} =
(u,v,w)^T$. The Mach
number is set to $M = 0.1$ such that nearly incompressible
flow is considered. An illustration of the rectangular physical domain is
shown in \cref{fig::grid}. A momentum thickness of $\theta = 1$ at
$x_0$ is achieved such that the momentum thickness based Reynolds
number is $Re_\theta = 1000$ at $x_0$. The domain length and height in the
streamwise and wall-normal direction are $L_x = 190\,\theta$ and $L_y
= 105\,\theta$. In the spanwise direction, different domain widths $L_z
\in [21.65\,\theta, 108.25\,\theta]$ are used to simulate different
actuation wavelengths.
At the domain inlet, a synthetic turbulence generation method
is applied to generate a natural turbulent boundary layer flow after a
transition length of 2-4 boundary layer thicknesses
\citep{Roidl2013}. Characteristic boundary conditions are used at the
domain exit and a no-slip wall boundary condition is enforced at
the lower domain boundary for the unactuated and actuated wall. The
wall actuation is prescribed by the space- and time-dependent
function
\begin{equation}
y_{\text{wall}}^+(z^+,t^+) =
A^+ \cos\left( \frac{2\pi}{\lambda^+}z^+ - \frac{2\pi}{T^+}t^+ \right)
\label{Eqn:Actuation}
\end{equation}
in the interval $-5 \leq x/\theta \leq 140$.
The quantities $\lambda^+$, $T^+$, and $A^+$ denote the wavelength, period, and
amplitude in inner coordinates, i.e., the parameters are scaled by the viscosity
$\nu$ and the friction velocity of the unactuated reference case
$u^n_{\tau}$.
In the areas just upstream
and downstream of the wave actuation region, a spatial transition from
the flat plate to the actuated plate and vice versa is used \citep{Albers2019Arxiv}.
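For illustration, the wall deflection of \cref{Eqn:Actuation} can be evaluated as in the sketch below; the parameter values are placeholders from the studied ranges, not a specific LES case:

```python
from math import cos, pi

def wall_deflection(z_plus, t_plus, A=40.0, lam=1000.0, T=50.0):
    # Spanwise traveling transversal wave in inner units:
    # y+ = A+ cos(2 pi z+/lambda+ - 2 pi t+/T+)
    return A * cos(2 * pi * z_plus / lam - 2 * pi * t_plus / T)
```

The deflection is $\lambda^+$-periodic in $z^+$ and travels in the spanwise direction with phase speed $\lambda^+/T^+$.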
In total, 71 actuation configurations with wavelength $\lambda^+ \in [500,3000]$, period
$T^+ \in [20,120]$, and amplitude $A^+ \in [10,78]$ are simulated.
Two additional validation simulations are performed for $\lambda^+=5000$,
$T^+=44$, and $A^+=92$ and for $\lambda^+=5000$, $T^+=44$, and $A^+=99$.
All operating conditions and the corresponding drag reductions are listed in
appendix \ref{Sec:OCLES}.
The current setup is identical to that in \cite{Ishar2019}. However, a
considerably larger parameter set is computed in this study, containing also larger
wavelengths.
The physical domain is discretized by a structured block-type mesh
with a resolution of $\Delta x^+ = 12.0$ in the streamwise and $\Delta
z^+ = 4.0$ in the spanwise direction. In the wall-normal direction, a
resolution of $\Delta y^+|_{\mathrm{wall}} = 1.0$ at the wall is used
with gradual coarsening away from the wall. Depending on the
domain width, the meshes consist of $24$ to $120$ million cells.
The simulation procedure is as follows. First, the reference
simulations for all domain widths are run for $t u_\infty / \theta =
650$ convective time units. Then, the actuated simulations are
initialized by the solution from the unactuated reference case and
the temporal transition from the flat plate to the actuated wall is
initiated. When a converged state of the friction drag is obtained, statistics
are collected for $t u_\infty / \theta = 1250$ convective times.
The drag coefficient $c_d$ is computed by integrating the wall-shear
stress over the streamwise interval $50 \leq x/\theta \leq 100$ and
over the entire spanwise width of the domain, i.e., the colored surface
in \cref{fig::grid}
\begin{align*}
c_d &= \frac{2}{\rho_\infty u_\infty^2 A_{\mathrm{ref}}} \int_{A_\mathrm{surf}} \tau_w \mathbf{n}\cdot\mathbf{e}_y dA~.
\end{align*}
The quantities $\mathbf{n}$ and $\mathbf{e}_y$ denote the unit normal
vector of the surface and the unit vector in the $y$-direction; the
reference surface is $A_\mathrm{ref} = 1$. The
drag reduction is defined as
\begin{align*}
J=\Delta c_d = \frac{c_{d}^{u} - c_{d}^{a}}{c_{d}^{u}}
\end{align*}
where the superscripts $u$ and $a$ refer to the unactuated
reference and actuated cases.
\begin{figure}
\begin{center}
\begin{tikzpicture}[x={(0.939cm,-0.34cm)}, y={(0cm,1cm)}, z={(0.939cm,0.34cm)}]\
\draw [->,>=stealth] (0,0,-1.5) -- (0,0,-2.0) node [left]{$z$};
\draw [->,>=stealth] (0,0,-1.5) -- (0,0.5,-1.5) node [above]{$y$};
\draw [->,>=stealth] (0,0,-1.5) -- (0.5,0,-1.5) node [right]{$x$};
\draw [<->,>=stealth] (0,0,-0.5) -- (7,0,-0.5) node [pos=.5,below=1.0]{$L_x$};
\draw [<->,>=stealth] (0,0,-0.5) -- (0,2,-0.5) node [pos=.5,left]{$L_y$};
\draw [<->,>=stealth] (7.5,0,0) -- (7.5,0,2) node [pos=.5,below=2.0]{$L_z$};
\draw (0,0,0) -- (1,0,0) sin (1.5,0.05,0) cos (2,0.1,0) -- (5,0.1,0) .. controls (5.3,0.1,0) and (5.7,-0.1,0) .. (7,0.0,0) -- (7,2,0) -- (7,2,0) -- (0,2,0) -- cycle;
\draw (0,0,2) -- (1,0,2) sin (1.5,0.05,2) cos (2,0.1,2) -- (5,0.1,2) .. controls (5.3,0.1,2) and (5.7,-0.1,2) .. (7,0.0,2) -- (7,2,2) -- (7,2,2) -- (0,2,2) -- cycle;
\draw[color=red,fill=red, fill opacity=0.2, domain=0:2, variable=\z] (2.5,0.1,0) -- plot (2.5,{0.1*sin((1.5707+2*3.14159*\z) r)},\z) -- (4.5,0.1,2) -- plot (4.5,{0.1*sin((1.5707+2*3.14159*\z) r)},2.0-\z) -- (4.5,0.1,0) -- (2.5,0.1,0) -- cycle;
\draw (7,0,0) -- (7,0,2);
\draw (0,0,0) -- (0,0,2);
\draw (0,2,0) -- (0,2,2);
\draw (7,2,0) -- (7,2,2);
\draw[opacity=0.5, variable=\z, samples at={0,0.05,...,2.05}]
plot (5, {0.1*sin((1.5707+2*3.14159*\z) r)}, \z);
\draw[opacity=0.5, variable=\z, domain=0:2]
plot (2, {0.1*sin((1.5707+2*3.14159*\z) r)}, \z);
\draw (5,0.1,0) -- (5,0.25,0);
\draw (5,0.1,1) -- (5,0.25,1);
\draw[<->] (5,0.2,0) -- (5,0.2,1) node [pos=.5,sloped,above] {$\lambda$};
\node (x0) at (1.5,0,-1.3) {$x_0$};
\draw[->,>=stealth] (x0) -- (1.5,0,-0.6);
\node (inflow) at (-1,2,1) {Inflow};
\node (flat) at (1, 0,1) {\small{Wall (flat)}};
\node (wave) at (3.3, 0,1) {\small{Wall (wave)}};
\draw[<->,>=stealth] (3,1.8,0) -- (3,1.8,-0.2) -- (3,2.2,-0.2) -- node[pos=.5,sloped,above] {Periodic BC} (3,2.2,2.2) -- (3,1.8,2.2) -- (3,1.8,2.0);
\draw[color=blue, dashed] (0,0,1) .. controls (0.5,0.2,1) .. (0.5,2,1) -- (0,2,1) -- (0,0,1);
\draw[color=blue, ->,>=stealth] (0,0.2,1) -- (0.3,0.2,1);
\draw[color=blue, ->,>=stealth] (0,0.4,1) -- (0.45,0.4,1);
\draw[color=blue, ->,>=stealth] (0,0.6,1) -- (0.5,0.6,1);
\draw[color=blue, ->,>=stealth] (0,0.8,1) -- (0.5,0.8,1);
\draw[color=blue, ->,>=stealth] (0,1.0,1) -- (0.5,1.0,1);
\draw[color=blue, ->,>=stealth] (0,1.2,1) -- (0.5,1.2,1);
\draw[color=blue, ->,>=stealth] (0,1.4,1) -- (0.5,1.4,1);
\draw[color=blue, ->,>=stealth] (0,1.6,1) -- (0.5,1.6,1);
\draw[color=blue, ->,>=stealth] (0,1.8,1) -- (0.5,1.8,1);
\end{tikzpicture}
\caption{Overview of the physical domain of the actuated turbulent
boundary layer flow, where $L_x, L_y,$ and $L_z$ are the domain
dimensions in the Cartesian directions, $\lambda$ is
the wavelength of the spanwise traveling wave, and $x_0$ marks the
actuation onset. The shaded red surface $A_\mathrm{surf}$ marks the integration area of
the wall-shear stress $\tau_w$.}
\label{fig::grid}
\end{center}
\end{figure}
\subsection{Numerical method}
\label{Sec:NumericalMethod}
The actuated flat plate turbulent boundary layer flow is governed by
the unsteady compressible Navier-Stokes equations in the arbitrary
Lagrangian-Eulerian formulation for time-dependent domains. A
second-order accurate finite-volume approximation of the governing
equations is used in which the convective fluxes are computed by the
advection upstream splitting method (AUSM) and time integration is
performed via a 5-stage Runge-Kutta scheme. The smallest dissipative
scales are implicitly modeled through the numerical dissipation of the
AUSM scheme. This monotonically integrated large-eddy simulation
approach \citep{Boris1992} is capable of accurately capturing all
physics of the resolved scales \citep{Meinke2002}. Further details on
the numerical method can be found in \cite{Albers2019} and
\cite{Ishar2019}.
\section{Methodology}
\label{Sec:Methodology}
In this section,
we propose a data-driven response surface methodology
for interpolation and extrapolation.
The methodology is developed to handle the observed relative drag reduction sensitivities $J=\Delta c_d$.
We note that $J$ is \emph{positive} for \emph{reduced} drag.
Initial analyses indicate that, for the spanwise traveling wave, the drag reduction
$\Delta c_d$ features a single global maximum $(T_r^+,A_r^+)$ with respect to the
actuation period $T^+$ and amplitude $A^+$ in every $\lambda^+$ plane.
The curve of $(\lambda^+,T^+,A^+)$
connecting all these $\lambda^+$-dependent $\Delta c_d$ maxima is the
\emph{ridgeline}, denoted by the subscript $r$.
The optimal period and amplitude $(T_r^+,A_r^+)$ increase with $\lambda^+$ beyond the
currently simulated parameter range. Such response behavior is challenging for
optimization and requires specially developed tools.
\Cref{Sec:Example} introduces an analytical example
which features similar topology to the drag reduction response distribution.
The machine learning algorithm used to interpolate the response within the
parameter range is detailed in \cref{Sec:SVR}.
In \cref{Sec:Procedure}, a novel data-driven modeling approach is proposed
and exemplified for the analytical example.
\subsection{Analytical response surface}
\label{Sec:Example}
To sharpen our data-driven tools, we start with an analytical response function
$J(\bm{p})$ that behaves qualitatively similarly to the drag reduction problem.
In the parameter vector $\bm{p}= (p,q,s)$, $p$ mimics the wavelength, $q$ the
period, and $s$ the amplitude. The analytical example
\begin{equation}
J(\bm{p}) = \underbrace{\tanh (1+p)}_{=:G(p)} \>
\underbrace{
\exp \left[ -\left (1-2q/\sqrt{p} \right )^2
-\left (1-2s/\sqrt{p} \right )^2
\right]}_{=:F_{p} (q,s)}
\label{Eqn:Example}
\end{equation}
is investigated in the domain
\begin{equation}
\Omega: = [0,1] \times [0,1] \times [0,1].
\label{Eqn:Domain}
\end{equation}
The function $J$ factors into a monotonically increasing term $G(p)$
and a unimodal term $F_{p} (q,s)$ with a single maximum in $(q,s)$.
At a given $p$,
$J$ assumes the maximum
$J_{p} := \tanh(1+p)$
on the ridgeline
$q_r = \frac{1}{2}\sqrt{p}$,
$s_r = \frac{1}{2}\sqrt{p}$.
The ridgeline marks the location $(q_r,s_r)$ of the maximum of $J$
for constant $p$.
The global maximum $J_{\rm max} = \tanh (2)$
in the domain $\Omega$ is at the boundary
$\bm{p}_{\rm max} = (1,0.5,0.5)$.
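A few lines of code suffice to probe the analytical example. The sketch below (purely illustrative, not part of the study's toolchain) evaluates $J$ and checks that the ridgeline $q_r=s_r=\tfrac{1}{2}\sqrt{p}$ indeed carries the per-plane maximum $\tanh(1+p)$:

```python
import numpy as np

def J(p, q, s):
    """Analytical response G(p) * F_p(q, s) of the example above."""
    G = np.tanh(1.0 + p)
    F = np.exp(-(1.0 - 2.0*q/np.sqrt(p))**2 - (1.0 - 2.0*s/np.sqrt(p))**2)
    return G * F

def ridgeline(p):
    """Per-plane maximizer (q_r, s_r) and ridge value J_p = tanh(1 + p)."""
    qr = sr = 0.5 * np.sqrt(p)
    return qr, sr, np.tanh(1.0 + p)
```

Sampling $(q,s)$ on a grid in any plane $p=\text{const}$ confirms that no value exceeds $J_p$, since $F_p \le 1$ with equality only on the ridgeline.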
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{IsoSurfacesToySystem.pdf}
\end{center}
\caption{Analytical response surface of equation \eqref{Eqn:Example}.
Also shown is the ridgeline (black) in the interpolated (solid line) and the extrapolated (dotted line) regimes.
The red lines denote lines of steepest ascent seeded at various domain locations.}
\label{Fig:IsoExample}
\end{figure}
These trends are observed in \cref{Fig:IsoExample},
which illustrates the analytical response surface as iso-surfaces,
the ridgeline (black), and the lines of steepest ascent (red).
The lines of steepest ascent provide a direct indication of the response sensitivities
and point in the direction of the global optima.
The lines of steepest ascent are simply streamlines of the gradient field $\nabla J= (\frac{\partial J}{\partial p}, \frac{\partial J}{\partial q}, \frac{\partial J}{\partial s})$ seeded from various points.
Evidently, larger $J$ values are obtained outside
the domain $\Omega$ on the ridgeline.
Following this curve is a good extrapolation strategy
for testing new and better parameters.
The extrapolation to parameters outside the domain,
$p>p_{\rm max}$,
is facilitated by the self-similar structure
of this particular response function.
The response $J$ can be parameterized by a $p$-dependent function multiplying a properly scaled $(q, s)$-dependent function
\begin{equation}
J (p,q,s) = J_r (p) \> F \left( q^*, s^* \right)
= J_r (p) \> F \left( \frac{q}{q_r}, \frac{s}{s_r} \right)
\label{Eqn:Toy:Response}
\end{equation}
where
\begin{subequations}
\begin{eqnarray}
J_r (p) &=& \tanh(1+p), \quad q_r = \frac{1}{2} \sqrt{p}, \quad s_r =\frac{1}{2} \sqrt{p}
\\ F(q^*, s^* ) &=& \exp \left[ -(1-q^*)^2 -(1-s^* )^2 \right].
\label{Eqn:SelfSimilarity}
\end{eqnarray}
\end{subequations}
Thus, knowing $J(p,q,s)$ in a plane $p=\text{const} \le p_{\rm max}$
allows extrapolating the response $J$
for all $p > p_{\rm max}$ via equation \eqref{Eqn:Toy:Response}.
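This self-similar extrapolation can be verified numerically. In the sketch below (illustrative only), the shape function $F$ is sampled in a reference plane $p_0 \le p_{\rm max}$ and reused, together with the ridgeline formulas, to predict $J$ at a $p$ outside the domain:

```python
import numpy as np

def J(p, q, s):
    """Analytical response of the example above."""
    return np.tanh(1.0 + p) * np.exp(-(1.0 - 2.0*q/np.sqrt(p))**2
                                     - (1.0 - 2.0*s/np.sqrt(p))**2)

def predict_from_plane(p0, p, q, s):
    """Extrapolate J(p, q, s) for p > p_max using only data in the plane p0:
    J = J_r(p) * F(q/q_r(p), s/s_r(p)), with F read off at plane p0."""
    qr0 = sr0 = 0.5 * np.sqrt(p0)       # ridgeline in the reference plane
    jr0 = np.tanh(1.0 + p0)
    qr = sr = 0.5 * np.sqrt(p)          # modeled ridgeline at the target p
    jr = np.tanh(1.0 + p)
    # evaluate the shape function F(q*, s*) via the reference plane
    F = J(p0, (q / qr) * qr0, (s / sr) * sr0) / jr0
    return jr * F
```

For this response function the reconstruction is exact, because $F$ depends on $(q,s)$ only through the scaled variables $q/q_r$ and $s/s_r$.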
\subsection{Support vector regression}
\label{Sec:SVR}
In this study,
the analytical response formula is obtained from sparse data points
by support vector regression (SVR) \citep{Cortes1995ml, Drucker1997anips}.
SVR belongs to the family of supervised-learning algorithms
that learn from $M$ observations a mapping
between $N$ features or inputs
$\bm{x}_m=[x^1_m,x^2_m,\ldots,x^N_m]$ and the corresponding response $y_m$, $m=1,
\ldots, M$. In the application presented in section \ref{Sec:Results}, the features are the
wavelength $\lambda^+$, period $T^+$, and amplitude $A^+$ and the output is the
relative drag reduction $\Delta c_d$.
Following good practices of machine learning \citep{Burkov2019book},
the inputs for the response formula are centered features which are normalized to unit variance.
This normalization gives every feature a similar weight in interpolation.
In this study,
the normalization is particularly important
as the ranges of investigated wavelengths and periods
differ by more than one order of magnitude.
SVR yields a regression model $\hat{J}(\bm{x})$
smoothly interpolating from data points $(\bm{x}_m,y_m)$, $m=1,\ldots,M$,
employing a Gaussian kernel $K \left( \bm{x},\bm{x}_m \right)$
and optimized weights $\omega_m$:
\begin{equation}
\hat{J}(\bm{x}) = \mu + \sum\limits_{m=1}^M\omega_m K (\bm{x},\bm{x}_m)
= \mu + \bm{\omega}^\mathrm{T} \bm{K} (\bm{x} ).
\label{Eqn:ReponseModel}
\end{equation}
Here, $\mu$ is a constant to which $\hat{J}$ converges far away from the data points,
$\bm{\omega}^\mathrm{T}=[\omega_1,\omega_2,\ldots,\omega_M]$ denotes the weight vector,
and
$\bm{K}^\mathrm{T}=[K(\bm{x},\bm{x}_1),K(\bm{x},\bm{x}_2),\ldots,K(\bm{x},\bm{x}_M)]$
comprises the Gaussian kernel functions.
Calibrating the response model \eqref{Eqn:ReponseModel}
for $\hat{J} ( \bm{x}_m) = y_m$, $m=1,\ldots,M$,
leads to $M$ linear equations for the $M$ weights $\omega_m$.
Under generic conditions,
such a linear system can be solved
and the formula will exactly reproduce the input data.
Yet, this vanishing in-sample error
may come at the price of overfitting.
Noise may then be fitted as if it were part of the signal,
leading to unphysical model complexity.
The overfitted model may amplify noise outside the training data,
implying a large generalization error or, equivalently, a large out-of-sample error.
To account for noise and new data points,
an error of $\varepsilon$ is tolerated,
i.e., predictions with $|\hat{J}(\bm{x}_m) - y_m| < \varepsilon$ are accepted.
The generalization error of the formula
is reduced by avoiding unnecessary complexity,
e.g., by replacing two kernels centered at nearly coincident collocation points with a single one.
Complexity is characterized and penalized by the vector norm $\Vert \bm{\omega} \Vert^2$.
This leads to the regularized optimization problem
\begin{equation}
\begin{aligned}
\min \quad & \frac{1}{2} \left\Vert \bm{\omega} \right\Vert^2 \\
\text{subject to} \quad & \left \vert y_m - \bm{\omega}^\mathrm{T}\bm{K}(\bm{x}_m) - \mu \right \vert \leq \varepsilon.\\
\end{aligned}
\label{Eq:SVR:Opt1}
\end{equation}
However, weights $\bm{\omega}$ which satisfy the $\varepsilon$-constraint
at all points $(\bm{x}_m,y_m) $ might not exist,
particularly for the validation data.
This constraint is relaxed by introducing so-called
\emph{slack variables} $\xi^+_m$ in case $\hat{J}(\bm{x}_m) - y_m > \varepsilon$
and $\xi^-_m$ if $y_m - \hat{J}(\bm{x}_m) > \varepsilon$.
The slack variables extend the permissible $\varepsilon$-interval for ${\hat J} ( \bm{x}_m ) $
to $\left [ y_m-\varepsilon-\xi_m^-, y_m+\varepsilon + \xi_m^+ \right]$.
Now, the relaxed regularized optimization problem becomes
\begin{equation}
\begin{aligned}
\min \quad & \frac{1}{2}\Vert \bm{\omega} \Vert^2 + C \frac{1}{M}\sum\limits_{m=1}^M(\xi_m^+ +\xi_m^-)\\
\text{subject to} \quad & y_m - \bm{\omega}^\mathrm{T}\bm{K}(\bm{x}_m) - \mu \leq \varepsilon + \xi_m^-\\
& \bm{\omega}^\mathrm{T}\bm{K}(\bm{x}_m) + \mu - y_m \leq \varepsilon + \xi_m^+\\
& \xi_m^+, \quad \xi_m^-\geq 0.\\
\end{aligned}
\label{Eq:SVR:Opt2}
\end{equation}
The tradeoff between model complexity
and errors beyond the $\varepsilon$ limit
is controlled by the penalty parameter $C$.
The extreme choice $C=0$ leads to unpenalized, arbitrarily large slack variables
and minimal complexity since the minimization solely focuses on
$\frac{1}{2}\Vert \bm{\omega} \Vert^2$.
In other words, $\hat J \equiv \mu$ is a constant function.
For sufficiently large $C$, the accuracy of the response model is optimized
at the price of maximum complexity.
The value of $\varepsilon$ is set to the data noise level, if available. Note
that too large a value will decrease the prediction accuracy.
The interpolation is performed with the radial basis function
\begin{equation}
K(\bm{x},\bm{x'}) = \exp{\left(-\frac{|\bm{x}-\bm{x'}|^2}{\sigma^2}\right)}.
\end{equation}
The reader is referred to \cite{Forrester2009pas}
for more details on the SVR formulation and the solution of the constrained
optimization problem (\ref{Eq:SVR:Opt2}).
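The ingredients described above, i.e., feature standardization, a Gaussian kernel, the $\varepsilon$-insensitive loss, and the penalty $C$, map onto a few lines of scikit-learn. The study does not state its software stack, so the following is only a hypothetical sketch, trained here on the analytical example instead of the LES data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.3, 1.0, size=(200, 3))   # stand-ins for (lambda+, T+, A+)
p, q, s = X[:, 0], X[:, 1], X[:, 2]
y = np.tanh(1 + p) * np.exp(-(1 - 2*q/np.sqrt(p))**2
                            - (1 - 2*s/np.sqrt(p))**2)

# StandardScaler centers the features and scales them to unit variance;
# SVR uses the radial basis function kernel, the epsilon-insensitive loss,
# and the complexity/accuracy tradeoff parameter C
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=100.0, epsilon=0.01))
model.fit(X, y)
```

The pipeline object then provides smooth predictions `model.predict(X_new)` at arbitrary parameter combinations; the hyperparameters $C$, $\varepsilon$, and the kernel width would in practice be tuned by cross-validation.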
\subsection{Data-driven response surface}
\label{Sec:Procedure}
The analytical example serves as a prelude to our data-driven approach
for the actuated turbulent boundary layer.
The approach consists of the following steps:
\begin{description}
\item[Step 1 ] We consider $M$ computed response function values $J_m$
for parameter points $\bm{p}_m$, $m=1,\ldots,M$,
that cover the parameter range of interest $\Omega$ well.
Each parameter may need to be centered and scaled to unit variance
for the regression problem.
\item[Step 2 ] Interpolate all function values in the domain
with an accurate and smooth machine learning regression.
\item[Step 3 ] Apply a gradient search technique for several initial conditions.
If the corresponding steepest ascent curves converge to a point inside the domain,
the purpose of a response surface model is served.
\item[Step 4 ] Identify the ridgeline coordinates ($q_r(p), s_r(p)$) and response $J_r(p)$
leading out of the domain $\Omega$, and model them using simple functions.
This simple model can now be used to extrapolate the ridgeline outside $\Omega$
towards the global response optimum.
Note that the choice of the parameter parametrizing the ridgeline (here: $p$)
is problem-dependent.
\item[Step 5 ] In some cases, like in the example \eqref{Eqn:Example},
the response function $J$ exhibits self-similar behavior and can be expressed
as a $p$-dependent function multiplying a scaled $(q,s)$-dependent function:
%
\begin{equation*}
\hat{J}(p,q,s) = J_r (p) \> \> F\left ( \frac{q}{q_r(p)}, \frac{s}{s_r(p)} \right),
\end{equation*}
%
where $J_r$ is the ridgeline response, and $F$ is a shape function with the
maximum $F(1,1)=1$.
%
In this case, the shape function and extrapolated ridgeline
can be used to predict the response $\hat{J}$ to parameter inputs away from the ridgeline.
\end{description}
Note that the parameters $p$, $q$, and $s$ used in this analytical example
correspond to the wavelength $\lambda^+$, period $T^+$, and amplitude $A^+$ for
the boundary layer application.
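Steps 2 and 3 condense into a short gradient-search routine. The sketch below is illustrative, with the analytical example standing in for the fitted regression model: it traces lines of steepest ascent by Euler integration of a finite-difference gradient. A line that stalls inside $\Omega$ indicates an interior optimum, while a line that reaches the boundary points toward the ridgeline exit.

```python
import numpy as np

def J(p, q, s):
    """Analytical response standing in for the regression model J-hat."""
    return np.tanh(1.0 + p) * np.exp(-(1.0 - 2.0*q/np.sqrt(p))**2
                                     - (1.0 - 2.0*s/np.sqrt(p))**2)

def steepest_ascent(f, x0, step=0.01, n_steps=300, h=1e-6):
    """Trace a line of steepest ascent of f by explicit Euler steps along
    the normalized central-difference gradient, seeded at x0."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        grad = np.array([(f(*(x + h*e)) - f(*(x - h*e))) / (2.0*h)
                         for e in np.eye(x.size)])
        norm = np.linalg.norm(grad)
        if norm < 1e-8:                 # stationary point: interior optimum
            break
        x = x + step * grad / norm      # unit-speed ascent step
        path.append(x.copy())
    return np.array(path)
```

Seeding `steepest_ascent(J, (0.5, 0.3, 0.3))` at several locations and checking whether the end points lie inside the domain bounds reproduces the decision in Steps 3 and 4.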
\section{Results}
\label{Sec:Results}
The previous section discussed the modeling methodology of response functions,
which assumes the optimal value at the boundary of the explored parameter space.
This section applies the approach to drag reduction for an actuated boundary layer
with spanwise traveling surface waves.
We follow the steps outlined in section \ref{Sec:Methodology}.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{IsoSurfacesBL.pdf}
\end{center}
\caption{
The drag reduction model $\widehat{\Delta c}_d (\lambda^+,T^+,A^+)$.
The gray surfaces represent three drag reduction levels: 20\%, 23\%, and 25\%.
The ridgeline (black) is displayed in the interpolated (solid line)
and the extrapolated (dotted line) regimes.
The ridgeline leaves the investigated domain at point $A$
and predicts the optimal drag reduction for $\lambda^+ \le 5000$ at point
$B$ ($\lambda^+=5000$, $T^+=44$, and $A^+=99$).
The red curves denote lines of steepest ascent seeded at various domain locations.
The contour distributions at the top, ($a$)--($c$),
represent scaled drag reductions (shape functions)
corresponding to the blue-framed $T^+$--$A^+$ rectangles
in the three-dimensional plot. }
\label{Fig:ML:Streamlines}
\end{figure}
The process begins with the interpolation of the sparse parameter space using
support vector regression (SVR).
As presented in section \ref{Sec:ComputationalSetup},
the investigated parameter space spanned by $\lambda^+${}, $T^+${}, and $A^+${} is large,
and a dense coverage is computationally infeasible.
The SVR algorithm is chosen for its prediction accuracy and its
smooth response distribution (see appendix \ref{Sec:MLModel} for details).
The algorithm is trained on a subset of \SI{80}{\percent} of the dataset,
whilst the remaining \SI{20}{\percent} is used for testing the prediction performance.
This separation of training and testing data reduces the risk of overfitting.
The algorithm hyperparameters are tuned using 3-fold cross-validation.
In this study, the SVR model yields $R^2=0.93$, the definition of which is given
in appendix \ref{Sec:MLModel}.
This value indicates excellent prediction accuracy.
Using the SVR model, we interpolate the parameter space
with drag coefficient predictions.
With the parameter space densely populated, it is now possible to compute and to
visualize the streamlines of the gradient field and the ridgeline.
It is shown in figure \ref{Fig:ML:Streamlines} (d) that the streamlines
(red) and the ridgeline (solid black line) terminate at the domain
boundary in the $T^+$-$\lambda^+$ plane at the exit point $A$ ($\lambda^+=1875$, $T^+=44$, and $A^+=78$).
This indicates that the optimal drag reduction lies outside the current range.
Along the ridgeline, the relative drag reduction increases from $\Delta
c_d=\SI{7.0}{\percent}$ at $\lambda^+=500$ to $\Delta c_d=\SI{22.5}{\percent}$
at the ridgeline exit point $A$.
\Cref{Fig:PM:Extrapolation} shows the projection of the ridgeline onto the $\lambda^+${}--$T^+${} and $\lambda^+${}--$A^+${} planes.
Starting at $\lambda^+\approx1000$, $T_r^+$ rapidly asymptotes toward 44.
In other words, the optimum wave period remains nearly constant at $T^+=44$, even with increasing
wavelength and amplitude. Similarly, $A_r^+$ shows an
asymptotic behavior with higher $\lambda^+$, albeit at a
slower rate.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{Extrapolation.pdf}
\end{center}
\caption{
Projection of the ridgeline onto the $\lambda^+$-$T^+$ and $\lambda^+$-$A^+$ plane,
as well as the drag reduction along the ridge as function of $\lambda^+$.
%
The solid lines are interpolated with SVR, whereas the dotted lines are obtained
by equations (\ref{Eq:PM:MLSACoordinates}) (for
$T_r^+$ and $A_r^+$), and (\ref{Eq:Tomiyama:LinearFit}) (for $\Delta
c_{d,r}$). Points $A$ and $B$
are the same as those in figure \ref{Fig:ML:Streamlines}.
The vertical grey dashed line separates the interpolation and extrapolation
regions.
}
\label{Fig:PM:Extrapolation}
\end{figure}
This asymptotic behavior of the ridgeline starting at $\lambda^+=1000$ is easily modeled as
\begin{align}
T_r^+ & = 44 - 46721 \exp(-0.0128 \lambda^+) \nonumber \\
A_r^+ & = 100 - 113 \exp(-0.0009 \lambda^+)\>.
\label{Eq:PM:MLSACoordinates}
\end{align}
The fitted curves are presented in \cref{Fig:PM:Extrapolation} by dotted lines
and show good agreement with the reference lines over the common range
($1000\leq\lambda^+\leq 1875$).
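Fits of the form $a - b\exp(-c\lambda^+)$ can be obtained with a standard nonlinear least-squares routine. The sketch below is illustrative: the data are synthetic samples of the fitted $A_r^+$ curve rather than the SVR-extracted ridgeline, and the initial guess is hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(lam, a, b, c):
    """Asymptotic ridgeline model, e.g. A_r^+ = a - b * exp(-c * lambda^+)."""
    return a - b * np.exp(-c * lam)

# synthetic samples standing in for the SVR-extracted ridgeline amplitude
lam = np.linspace(1000.0, 1875.0, 30)
a_r_data = saturating(lam, 100.0, 113.0, 0.0009)

# the initial guess p0 keeps the exponential well-scaled for the optimizer
popt, _ = curve_fit(saturating, lam, a_r_data, p0=(90.0, 100.0, 0.001))
```

A reasonable initial guess matters here: with $\lambda^+ \sim 10^3$ the exponent $c$ is small, and a poorly scaled start can stall the Levenberg-Marquardt iteration.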
Having established that $T_r^+$ and $A_r^+$ depend solely on $\lambda^+${} along the ridgeline, we turn our attention to drag reduction.
Similarly to $T^+${} and $A^+${}, $J_r$ also depends solely on $\lambda^+${} or, equivalently, on $T_r^+$ and $A_r^+$.
This is best expressed using a scaling proposed by
\cite{tomiyama_direct_2013}, defined as $A_r^+\sqrt{2\pi/T_r^+}$,
which is the product of the velocity amplitude of the
actuation $2\pi A_r^+ / T_r^+$ and the thickness of the Stokes layer
$\sqrt{T_r^+ / (2\pi)}$ along the ridge. Note that this scaling is originally defined for the skin-friction
coefficient, but for the considered cases, the amount of added wetted surface is
negligible and the scaling holds \citep{Albers2019Arxiv}.
The evolution of $\Delta c_{d,r}$ towards a linear behavior is illustrated in \cref{Fig:PM:Tomiyama}.
As the figure shows, the drag reduction along the ridge starts exhibiting
linearity around $\lambda^+\approx1000$, corresponding approximately to
$A_r^+\sqrt{2\pi/T_r^+}=19$.
It is worth noting that this almost perfectly linear Tomiyama and Fukagata scaling only holds along the ridgeline.
Away from the ridgeline, the scaling shows scatter.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\linewidth]{RidgeTomiyama.pdf}
\end{center}
\caption{
Drag reduction along the ridgeline as function of the Tomiyama and Fukagata scaling. The
figure shows a linear behavior starting at $\lambda^+\approx1000$.
%
The solid line is obtained from data interpolated with SVR. The dotted line is obtained
with equation (\ref{Eq:Tomiyama:LinearFit}).
Points $A$ and $B$ are the same as those in figure \ref{Fig:ML:Streamlines}.
}
\label{Fig:PM:Tomiyama}
\end{figure}
It is now straightforward to model the
relative drag reduction in the linear range, i.e., $\lambda^+\geq 1000$, as
\begin{equation}
\widehat{\Delta c}_{d,r} = 0.95A_r^+\sqrt{\frac{2\pi}{T_r^+}} -5.16.
\label{Eq:Tomiyama:LinearFit}
\end{equation}
This linear fit is shown with a dotted line in figure \ref{Fig:PM:Tomiyama} as function of the
Tomiyama and Fukagata scaling, and in figure \ref{Fig:PM:Extrapolation} as a function of
$\lambda^+$.
Note that we assume that the linear behavior continues for a finite range beyond
$\lambda^+=1875$.
Based on the optimal drag reduction behavior being only dependent on $\lambda^+$, which is
consistent with a self-similar behavior, we assume a response of the form
\begin{equation}
\hat{J} = \widehat{\Delta c}_d \left( \lambda^+, T^+, A^+ \right)=J_r(\lambda^+)\cdot F(T^*,A^*),
\label{Eq:PM:VariablesSep}
\end{equation}
where $\hat{J}_r=\widehat{\Delta c}_{d,r}$ is the ridgeline model obtained by substituting
\cref{Eq:PM:MLSACoordinates} into
\cref{Eq:Tomiyama:LinearFit}, and $T^*$ and $A^*$ are properly scaled actuation parameters.
The natural scaling choice is the maximum relative drag reduction along the ridgeline,
which yields
\begin{equation}
F\left( \frac{T^+}{T_r^+}, \frac{A^+}{A_r^+} \right) = \frac{J(\lambda^+,T^+,A^+)}{J_r(\lambda^+)}.
\label{Eq:PM:SelfSim}
\end{equation}
Hence, self-similarity is validated when $F$ becomes independent of $\lambda^+$.
This is confirmed in figures \ref{Fig:ML:Streamlines} (a), (b), and (c), where the $F$
distributions collapse starting at $\lambda^+\approx 1000$.
Note that the preceding analysis did not only examine the sensitivities of the flow response and its self-similar behavior,
but also yielded a simple powerful model of the relative drag reductions.
This self-similar drag reduction model for $\lambda^+\geq 1000$ proceeds as follows:
\begin{itemize}
\item For a given actuation setting $\lambda^+$, $T^+$, and $A^+$, compute $T_r^+$ and $A_r^+$ from
\cref{Eq:PM:MLSACoordinates}.
\item Determine the drag reduction along the ridgeline $J_r=\Delta c_{d,r}(\lambda^+)$ using \cref{Eq:Tomiyama:LinearFit}.
\item Read $F(\frac{T^+}{T_r^+}, \frac{A^+}{A_r^+} )$ from the distributions in \cref{Fig:ML:Streamlines}.
\item Deduce the relative drag reduction from $J=\Delta c_d=J_r\cdot F$.
\end{itemize}
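The steps above compose into a compact predictor. The following sketch hard-codes the ridgeline fits and the Tomiyama and Fukagata fit from the equations above; since the shape function $F$ is tabulated graphically, a Gaussian with $F(1,1)=1$ is used here purely as a hypothetical stand-in.

```python
import numpy as np

def t_r(lam):
    """Ridgeline period fit, valid for lambda+ >= 1000."""
    return 44.0 - 46721.0 * np.exp(-0.0128 * lam)

def a_r(lam):
    """Ridgeline amplitude fit, valid for lambda+ >= 1000."""
    return 100.0 - 113.0 * np.exp(-0.0009 * lam)

def j_r(lam):
    """Drag reduction [%] along the ridgeline from the linear
    Tomiyama and Fukagata fit."""
    return 0.95 * a_r(lam) * np.sqrt(2.0 * np.pi / t_r(lam)) - 5.16

def f_shape(t_star, a_star):
    """Hypothetical Gaussian stand-in for the tabulated shape function F."""
    return np.exp(-(1.0 - t_star)**2 - (1.0 - a_star)**2)

def j_hat(lam, t_plus, a_plus):
    """Self-similar model J = J_r(lambda+) * F(T+/T_r+, A+/A_r+)."""
    return j_r(lam) * f_shape(t_plus / t_r(lam), a_plus / a_r(lam))
```

On the ridgeline the stand-in reduces to $\hat{J}=J_r(\lambda^+)$, so the composed model reproduces the ridge prediction exactly; off the ridge, the true $F$ from the interpolated distributions would replace `f_shape`.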
In the interpolation regime, and for $1000\leq\lambda^+\leq3000$, this simple
model has a coefficient of determination of $R^2=0.92$, which is very close to
that of the SVR model.
In the extrapolation regime, the model is validated with two points at
$\lambda^+=5000$, which is well beyond the training range. The first validation point $B$ is situated on the ridgeline (cf. figure
\ref{Fig:ML:Streamlines}), whereas the second point $B'$ is off the ridgeline at
coordinates $\lambda^+=5000$, $T^+=44$, and $A^+=92$. For these two operating
conditions, the relative drag
reductions predicted by the model are $\widehat{\Delta
c}_d=\SI{30.45}{\percent}$ and $\widehat{\Delta c}_d=\SI{30.23}{\percent}$,
which compare favorably with those of the reference LES data of $\Delta
c_d=\SI{31.09}{\percent}$ and $\Delta
c_d=\SI{30.03}{\percent}$.
These predictions yield relative errors of \SI{2.1}{\percent} and
\SI{0.7}{\percent} for $B$ and $B'$, respectively.
The prediction accuracy for the extrapolation at $\lambda^+=5000$,
i.e., \SI{67}{\percent} beyond the maximum investigated value $\lambda^+=3000$,
is impressive.
Yet, the model (\ref{Eq:PM:VariablesSep}) should not be assumed to hold
at much larger wavelengths ($\lambda^+\to\infty$).
In this limit, the actuation approaches that of a flat plate moving up and down without height variations
in the spanwise direction.
In this scenario, the boundary layer remains
unchanged and no drag reductions can be expected.
\section{Conclusions}
\label{Sec:Conclusions}
We target improved drag reduction of an actuated turbulent boundary layer with
spanwise traveling surface waves at $Re_{\theta}=1000$.
Seventy-one large-eddy simulations are used to determine a machine-learned model
to predict drag reduction as a function of the actuation parameters: amplitude,
period, and spanwise wavelength.
The first
enabler for this formula is the support vector regression (SVR) for
smooth interpolation. For this dataset, SVR is found to be distinctly superior to many other
common regression solvers. The second enabler is a ridgeline pointing outside the
computed domain indicating further drag reduction potential at unexplored higher
wavelengths. This ridgeline is then modeled and used for extrapolation. The
results indicate a potential around \SI{31}{\percent} drag reduction with increasing wavelength,
which is denoted as point B in figure \ref{Fig:ML:Streamlines}. This result is
confirmed by an additional LES.
The corresponding period seems to asymptote toward 44
plus units while the amplitude slowly increases with wavelength. The ridgeline
parameters are consistent with the Tomiyama and Fukagata scaling. More precisely, at wavelengths
above 1000 plus units within the analyzed range, the drag reduction increases linearly with the Tomiyama
and Fukagata parameter on the ridgeline.
Surprisingly, the drag reduction formula exhibits a self-similar behavior
starting at $\lambda^+\approx 1000$. As such, drag reduction can be expressed as the
product of a factor depending only on the wavelength and a shape factor
depending on amplitude and period normalized with their ridgeline values. The
ridgeline parameters and the drag reduction values in a plane with constant
wavelength allow to extrapolate drag values for amplitudes and periods
for wavelengths above 1000 plus units. The self-similar drag
reduction formula beautifully parameterizes all investigated simulations and
allow to predict further performance potential at unexplored larger
wavelengths.
The proposed machine learning method
for the drag reduction formula can easily be applied
to other performance parametrizations from sparse data.
The strategy is
(1) to interpolate the sparse parameter space
using an accurate machine learning algorithm;
(2) to compute several steepest ascent lines and ridgelines;
(3) to search for the global optimum inside the domain;
(4) if the steepest ascent lines terminate at the boundary,
to extrapolate the ridgeline out of the domain;
(5) to test for self-similarity based on this ridgeline.
Self-similarity opens the possibility to extrapolate the performance
away from the ridgeline.
The drag reduction formula may guide future simulations
in search of larger drag reduction.
In addition, the observed self-similarity
guides and constrains future physics-based models.
The authors actively explore these avenues.
\section*{Acknowledgements}
The research was funded by the Deutsche Forschungsgemeinschaft (DFG)
in the framework of the research projects SE 2504/2-1, SCHR 309/52 and SCHR
309/68. The authors gratefully acknowledge the Gauss Centre for
Supercomputing e.V. (www.gauss-centre.eu) for funding this project
by providing computing time on the GCS Supercomputers Hazelhen at
HLRS Stuttgart and JURECA at J\"ulich Supercomputing Centre (JSC).
BRN acknowledges support from the French National Research Agency (ANR)
under grant ANR-17-ASTR-0022 (FlowCon).
\begin{appendix}
\section{Machine learning regression model}
\label{Sec:MLModel}
Drag reduction modeling as a function of the actuation
parameters for the actuated boundary layer is a challenging problem.
The complexity of the response topology motivated the use of machine
learning (ML) approaches.
For this application, ML is used to model the drag reduction $\Delta c_d$ under
varying actuation conditions ($\lambda^+$, $T^+$ and $A^+$).
ML algorithms are evaluated based on their prediction accuracy, given by the coefficient of determination $R^2$,
defined as
\begin{equation}
R^2 = 1 - \frac{\sum_i^N (\Delta c_{d,i}-\widehat{\Delta c}_{d,i})^2}{\sum_i^N (\Delta c_{d,i}-\overline{\Delta c}_d)^2},
\end{equation}
where $\Delta c_{d,i}$ are the reference computed data points, $\widehat{\Delta c}_{d,i}$ are the predicted
ones, $\overline{\Delta c}_d$ is the mean of $\Delta c_{d,i}$, and $N$ is the number of samples
in the test set. A value of $R^2=1$ denotes perfect prediction score.
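The coefficient of determination translates directly into code; a minimal sketch:

```python
import numpy as np

def r2_score(dc_true, dc_pred):
    """R^2 = 1 - sum((y - y_hat)^2) / sum((y - mean(y))^2)."""
    dc_true = np.asarray(dc_true, dtype=float)
    dc_pred = np.asarray(dc_pred, dtype=float)
    ss_res = np.sum((dc_true - dc_pred)**2)
    ss_tot = np.sum((dc_true - dc_true.mean())**2)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction yields $R^2=1$, while a model no better than the mean of the test data yields $R^2=0$.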
Besides accuracy, the model smoothness is the second criterion for the model selection.
The ML algorithm smoothness was quantified with the total variation (TV) defined as
\begin{equation}
\begin{split}
TV&=\sum\limits_{i=1}^I\sum\limits_{j=1}^J\sum\limits_{k=1}^K \biggl\{
(\widehat{\Delta c}_{d,i,j,k} - \widehat{\Delta c}_{d,i-1,j,k})^2 \\
&\qquad\quad + (\widehat{\Delta c}_{d,i,j,k} - \widehat{\Delta c}_{d,i,j-1,k})^2 +
(\widehat{\Delta c}_{d,i,j,k} - \widehat{\Delta c}_{d,i,j,k-1})^2
\biggr\}^{\!1/2},
\end{split}
\label{Eq:TotalVariation}
\end{equation}
where $I$, $J$, and $K$ are the number of discretized points in the $\lambda^+$,
$T^+$, and $A^+$ directions.
A smooth response is indicated by a lower TV value.
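This total variation is an isotropic measure of roughness of the prediction field on the $(\lambda^+, T^+, A^+)$ grid and maps onto a few array operations. A minimal sketch follows; since the sum leaves the boundary terms implicit, the first-index differences are taken as zero here, which is an assumption.

```python
import numpy as np

def total_variation(field):
    """Sum over all grid points of the Euclidean norm of the backward
    differences along the lambda+, T+, and A+ axes of a 3-D prediction
    array (boundary differences set to zero)."""
    di = np.zeros_like(field); di[1:, :, :] = np.diff(field, axis=0)
    dj = np.zeros_like(field); dj[:, 1:, :] = np.diff(field, axis=1)
    dk = np.zeros_like(field); dk[:, :, 1:] = np.diff(field, axis=2)
    return float(np.sum(np.sqrt(di**2 + dj**2 + dk**2)))
```

A constant field yields a total variation of zero, and any oscillation of the interpolant increases the value, which is why a lower TV indicates a smoother response.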
Three machine learning algorithms were benchmarked: the $k$-nearest neighbors
(kNN), random forest (RF), and support vector regression (SVR). The
hyperparameters of each algorithm were optimized using cross-validation,
yielding 5 neighbors for kNN and 300 trees for RF. Radial basis functions are
used for SVR.
The $R^2$ and TV
values for the three algorithms are summarized in \cref{Tab:R2-Smooth}.
\begin{table}
\centering
\caption{Comparison of the prediction accuracy ($R^2$) and smoothness ($TV$)
of the three tested machine learning algorithms.
}
\begin{tabular}{C{2cm}C{2cm}C{2cm}}
\hline
\hline
Algorithm& $R^2$ & $TV$ \\
\hline
kNN & 0.76 & $1.62\times10^6$ \\
RF & 0.97 & $1.60\times10^6$\\
SVR & 0.93 & $1.31\times10^6$\\
\hline
\hline
\end{tabular}
\label{Tab:R2-Smooth}
\end{table}
Based on the results, SVR offers the best compromise between smoothness and
accuracy; it is smoother than RF and more accurate than kNN. Therefore, it is selected
for this study.
\section{Operating conditions of the LES simulations}
\label{Sec:OCLES}
\begin{center}
\tablefirsthead{%
\hline
\hline
$N$ & $L_z^+$ & $\lambda^+$ & $T^+$ & $A^+$ & $\Delta c_d~[ \% ]$ & $\Delta c_f~[ \% ]$ & $\Delta A_\mathrm{surf}~[ \% ]$ \\
\hline
}
\tablehead{%
\hline
\hline
$N$ & $L_z^+$ & $\lambda^+$ & $T^+$ & $A^+$ & $\Delta c_d~[ \% ]$ & $\Delta c_f~[ \% ]$ & $\Delta A_\mathrm{surf}~[ \% ]$ \\
\hline}
\tablelasttail{\hline\hline}
\topcaption{
Actuation parameters of the turbulent boundary layer simulations, where each
setup is denoted by a case number $N$. The quantity $\lambda^+$ is the
spanwise wavelength of the traveling wave, $T^+$ is the period, and $A^+$ is
the amplitude, all given in inner units, i.e., non-dimensionalized by the
kinematic viscosity $\nu$ and the friction velocity $u_\tau$. Each block
includes setups with varying period and amplitude for a constant wavelength.
The list includes the values of the averaged relative drag reduction $\Delta
c_d$, the averaged relative skin friction reduction $\Delta c_f$, and the
relative increase of the wetted surface $\Delta A_\mathrm{surf}$.
}
\begin{supertabular}{
C{10mm}
C{10mm}
C{10mm}
C{10mm}
C{10mm}
C{10mm}
C{20mm}
C{20mm}
C{20mm}
}
1 & 1000 & 500 & 20 & 30 & 0 & 4 & 3.5 \\
2 & 1000 & 500 & 30 & 22 & 9 & 10 & 1.9 \\
3 & 1000 & 500 & 40 & 21 & 8 & 9 & 1.7 \\
4 & 1000 & 500 & 40 & 30 & 8 & 11 & 3.5 \\
5 & 1000 & 500 & 60 & 30 & 5 & 8 & 3.5 \\
6 & 1000 & 500 & 70 & 36 & 3 & 8 & 4.9 \\
7 & 1000 & 500 & 70 & 64 & -10 & 4 & 14.6 \\
8 & 1000 & 500 & 100 & 48 & -3 & 5 & 8.6 \\
9 & 1000 & 1000 & 20 & 10 & 5 & 5 & 0.1 \\
10 & 1000 & 1000 & 20 & 30 & 13 & 13 & 0.9 \\
11 & 1000 & 1000 & 20 & 50 & 0 & 3 & 2.4 \\
12 & 1000 & 1000 & 40 & 10 & 3 & 3 & 0.1 \\
13 & 1000 & 1000 & 40 & 20 & 7 & 8 & 0.4 \\
14 & 1000 & 1000 & 40 & 30 & 12 & 13 & 0.9 \\
15 & 1000 & 1000 & 40 & 40 & 15 & 16 & 1.6 \\
16 & 1000 & 1000 & 40 & 50 & 15 & 17 & 2.4 \\
17 & 1000 & 1000 & 40 & 60 & 13 & 16 & 3.5 \\
18 & 1000 & 1000 & 80 & 10 & 1 & 1 & 0.1 \\
19 & 1000 & 1000 & 80 & 20 & 3 & 4 & 0.4 \\
20 & 1000 & 1000 & 80 & 30 & 6 & 6 & 0.9 \\
21 & 1000 & 1000 & 80 & 40 & 9 & 10 & 1.6 \\
22 & 1000 & 1000 & 80 & 50 & 9 & 11 & 2.4 \\
23 & 1000 & 1000 & 80 & 60 & 9 & 12 & 3.5 \\
24 & 1000 & 1000 & 120 & 10 & 1 & 1 & 0.1 \\
25 & 1000 & 1000 & 120 & 20 & 0 & 1 & 0.4 \\
26 & 1000 & 1000 & 120 & 30 & 3 & 4 & 0.9 \\
27 & 1000 & 1000 & 120 & 40 & 3 & 5 & 1.6 \\
28 & 1000 & 1000 & 120 & 50 & 2 & 5 & 2.4 \\
29 & 1000 & 1000 & 120 & 60 & 2 & 6 & 3.5 \\
\hline
30 & 1200 & 600 & 30 & 44 & 2 & 7 & 5.1 \\
31 & 1200 & 600 & 40 & 59 & -4 & 5 & 8.9 \\
32 & 1200 & 600 & 50 & 36 & 9 & 12 & 3.5 \\
33 & 1200 & 600 & 60 & 21 & 5 & 6 & 1.2 \\
34 & 1200 & 600 & 70 & 29 & 6 & 8 & 2.3 \\
35 & 1200 & 600 & 80 & 66 & -5 & 6 & 11.0 \\
36 & 1200 & 600 & 90 & 51 & -1 & 6 & 6.8 \\
37 & 1200 & 600 & 100 & 14 & 2 & 2 & 0.5 \\
\hline
38 & 1600 & 1600 & 20 & 22 & 11 & 11 & 0.2 \\
39 & 1600 & 1600 & 40 & 34 & 14 & 14 & 0.4 \\
40 & 1600 & 1600 & 40 & 48 & 19 & 19 & 0.9 \\
41 & 1600 & 1600 & 50 & 60 & 19 & 20 & 1.4 \\
42 & 1600 & 1600 & 50 & 73 & 21 & 22 & 2.0 \\
43 & 1600 & 1600 & 60 & 27 & 8 & 8 & 0.3 \\
44 & 1600 & 1600 & 70 & 71 & 17 & 19 & 1.9 \\
45 & 1600 & 1600 & 80 & 17 & 2 & 2 & 0.1 \\
46 & 1600 & 1600 & 90 & 65 & 13 & 14 & 1.6 \\
47 & 1600 & 1600 & 100 & 40 & 8 & 8 & 0.6 \\
\hline
48 & 1800 & 900 & 30 & 49 & 10 & 12 & 2.9 \\
49 & 1800 & 900 & 40 & 63 & 7 & 12 & 4.7 \\
50 & 1800 & 900 & 50 & 22 & 7 & 7 & 0.6 \\
51 & 1800 & 900 & 50 & 44 & 12 & 14 & 2.3 \\
52 & 1800 & 900 & 70 & 28 & 7 & 8 & 0.9 \\
53 & 1800 & 900 & 80 & 17 & 3 & 4 & 0.4 \\
54 & 1800 & 900 & 80 & 60 & 6 & 9 & 4.3 \\
55 & 1800 & 900 & 90 & 39 & 6 & 7 & 1.8 \\
56 & 1800 & 1800 & 30 & 14 & 5 & 5 & 0.1 \\
57 & 1800 & 1800 & 40 & 51 & 19 & 20 & 0.8 \\
58 & 1800 & 1800 & 40 & 70 & 22 & 23 & 1.5 \\
59 & 1800 & 1800 & 50 & 59 & 20 & 21 & 1.1 \\
60 & 1800 & 1800 & 60 & 44 & 15 & 15 & 0.6 \\
61 & 1800 & 1800 & 60 & 75 & 21 & 22 & 1.7 \\
62 & 1800 & 1800 & 70 & 29 & 7 & 7 & 0.3 \\
63 & 1800 & 1800 & 80 & 36 & 9 & 9 & 0.4 \\
64 & 1800 & 1800 & 90 & 66 & 13 & 14 & 1.3 \\
65 & 1800 & 1800 & 100 & 21 & 3 & 3 & 0.1 \\
\hline
66 & 3000 & 3000 & 40 & 51 & 21 & 21 & 0.3 \\
67 & 3000 & 3000 & 50 & 78 & 26 & 26 & 0.7 \\
68 & 3000 & 3000 & 60 & 26 & 7 & 7 & 0.1 \\
69 & 3000 & 3000 & 70 & 64 & 19 & 19 & 0.4 \\
70 & 3000 & 3000 & 80 & 11 & 1 & 1 & 0.0 \\
71 & 3000 & 3000 & 90 & 66 & 16 & 16 & 0.5 \\
\hline
$B$ & 5000 & 5000 & 44 & 99 & 31 & 31 & 0.0 \\
$B'$& 5000 & 5000 & 44 & 92 & 30 & 30 & 0.0 \\
\end{supertabular}
\end{center}
\end{appendix}
\bibliographystyle{plainnat}
\section{Introduction}
The field of synthetic biology is concerned with designing and constructing biological modules, biological systems, and biological machines \cite{cameron2014brief} and can be traced back to the 1960s when logic in genetic regulation was discovered \cite{jacob1961genetic}. The key goal is to program living cells to exhibit desired functionality. Biomolecular processes are typically nonlinear, stochastic, and non-modular making the design and construction process difficult. This is where control theory meets synthetic biology to produce robust functionality \cite{del2016control}.
Thus far, the main focus of the field of synthetic biology has been on designing and building digital logic genetic circuits. A wide variety of digital computation using synthetic genetic circuits has been achieved including but not limited to switches \cite{gardner2000construction}, counters \cite{friedland2009synthetic}, logic gates \cite{singh2014recent}, classifiers \cite{didovyk2015distributed}, edge detectors \cite{tabor2009synthetic}, and most recently digital displays \cite{millacura2019parallel,shin2020programming}. The state-of-the-art software called CELLO \cite{nielsen2016genetic} automates the design of digital logic genetic circuits and has paved the way for larger genetic circuits to be developed. Moreover, network reconstruction algorithms which infer dynamic network architectures have been developed to debug failure modes in these digital logic devices \cite{yeung2015global,ward2009comparison,sontag2004inferring,hasnain2019data}. However, there are cellular mechanisms and limitations that make it unclear whether or not digital computation can scale up to perform complex computations in living cells.
Biological systems such as colonies of bacteria inherently implement hybrid-analog computing \cite{sarpeshkar2014analog}, which was a staple paradigm of computation in the 1950s and 1960s. Analog computers need to be custom designed for a specific task and are less reliable than their digital counterparts. However, the benefits of analog computing, combined with the non-von Neumann sense of computing, are twofold: i) it is customizable to address desired functionality (also a disadvantage, as noted above), and ii) it gives rise to a type of computation where memory and processing are collocated, making for a more efficient computing architecture. We want to exploit this analog, tunable nature to perform biological computation. Specifically, we aim to design an input-output function in bacteria that takes chemical inducers as input and outputs the maximum steady state concentration. The bacterial colony as a whole can be viewed as the central processing unit with memory \cite{ben2009learning}. In cellular computing, data may be thought of as being encoded by biomolecules such as DNA strands, and molecular biology tools may act on the data to perform various operations \cite{paun2005dna}.
There exists no general theory for how cells perform computation. Likewise, there is no general theory for many of the \textit{known} biomolecular mechanisms. This provides difficulty when designing and constructing synthetic biological components. Hypothesis-driven modeling is one approach to better understand these biomolecular mechanisms and how to control them \cite{yeung2017biophysical,jayanthi2013retroactivity}. A downside of hypothesis-driven modeling is that the models can be difficult to validate, often requiring iteration upon iteration of experimentation to fit model parameters. Furthermore, when the biological process occurs in new environmental conditions, it is likely that new experimental data will need to be collected for model refinement.
The above issues and more motivate the need for purely data-driven algorithms to accelerate the advancement of genetic circuit design. We frame the analog computing challenge: is it possible to harness the natural dynamics of the cell to generate a steady state input-output function? Specifically, how do we utilize time-series measurements of a biological system to design control inputs to achieve a target steady state output from living cells in a data-driven fashion?
For this, we turn to Koopman operator theory, which is a powerful tool for data-driven analysis of nonlinear dynamical systems. Researchers working in this space have shown that it is possible to identify and learn the fundamental modes of a nonlinear dynamical system from data \cite{rowley2009spectral,mezic2005spectral}. The Koopman operator is an infinite-dimensional linear operator that represents nonlinear dynamics as a dynamically equivalent linear system. The development of Dynamic Mode Decomposition (DMD) \cite{schmid2010dynamic} has led to rapid growth in the use of Koopman spectral analysis of nonlinear dynamical systems in areas such as system identification \cite{mauroy2019koopman,boddupalli2019koopman}, prediction and control \cite{korda2018linear,korda2019optimal,proctor2016dynamic,proctor2018generalizing}, and sensor placement \cite{sinha2016operator,hasnain2019optimal}. More recently, learning higher dimensional Koopman operators from data has become computationally tractable, largely due to advances in integrating machine learning and deep learning to generate efficient representations of observable bases \cite{yeung2019learning,lusch2018deep,otto2019linearly,takeishi2017learning}.
Previously, researchers have explored control strategies to reprogram the steady states of cooperative monotone dynamical systems \cite{shah2018reprogramming}, and in \cite{del2017blueprint}, a synthetic genetic feedback controller that dynamically steers the concentration of a genetic regulatory network's key transcription factors was developed to reprogram the steady state. We propose to alter the steady state of nonlinear systems with hyperbolic fixed points, i.e., to generate a genetic program whose output is a set of steady state values, by designing an optimal control policy for a Koopman model identified directly from data. Specifically, we utilize deep Dynamic Mode Decomposition (deepDMD) \cite{yeung2019learning} to learn approximate Koopman invariant subspaces of dynamical systems and use them to program the steady state. The framework can be extended to systems with other types of attractors and can be modified to solve many optimal control problems, e.g., target state and reference tracking.
In section \ref{sec:koop}, we briefly review Koopman operator theory for discrete-time nonlinear systems and the numerical approximations of the Koopman operator. Section \ref{sec:ssProgramming} describes the steady state programming framework and discusses issues that arise when dealing with mixed terms of the state and input. Lastly, we demonstrate our method on two nonlinear systems that are commonly seen in biomolecular feedback systems and synthetic biology.
\section{Koopman Operators and their finite-dimensional approximation} \label{sec:koop}
In this section we briefly review Koopman operator theory as it applies to controlled systems. See \cite{kaiser2019data} for an overview of the field. In this paper, we consider discrete-time dynamical systems to be consistent with the nature of time-series data.
\subsection{Koopman operator theory for control systems}
Consider a non-affine control discrete-time nonlinear dynamical system of the form
\begin{equation}
x_{k+1} = f(x_k,u_k),
\label{eq:stateDyn}
\end{equation}
where $x_k \in \mathbb{R}^n$ is the state of the system, $u_k \in \mathbb{R}^m$ is the control input, and $f: \mathbb{R}^n \oplus \mathbb{R}^m \rightarrow \mathbb{R}^n$ is the analytic and unknown transition mapping. The dynamics can always be decomposed in the following way
\begin{equation}
x_{k+1} = f_x(x_k) + f_{xu}(x_k,u_k) + f_u(u_k),
\label{eq:stateDynDecomp}
\end{equation}
where $f_x$, $f_{xu}$, $f_u$ are terms only in $x$, mixed terms in $x$ and $u$, and only in $u$, respectively. The Koopman operator acts on a set of observables and we consider each observable as an element of an infinite-dimensional Hilbert space $\mathcal{F}$.
The observables $\psi: \mathbb{R}^n \oplus \mathbb{R}^m \rightarrow \mathbb{R}$ are functions of the state and the input such that they can be functions of only $x$, mixed terms of both $x$ and $u$, and only of $u$. The observables can also be thought of as vector-valued functions of the state and the input such that $\psi: \mathbb{R}^n \oplus \mathbb{R}^m \rightarrow \mathbb{R}^{o}$.
Then the Koopman operator, $\mathcal{K}: \mathcal{F} \rightarrow \mathcal{F}$ acts on the Hilbert space of observables to produce linear dynamics
\begin{equation}
\mathcal{K}\psi(x_k,u_k) \triangleq \psi(f(x_k,u_k),u_{k+1})
\label{eq:koopEqgeneral}
\end{equation}
so that
\begin{equation}
\psi(x_{k+1},u_{k+1}) = \mathcal{K}\psi(x_k,u_k)
\end{equation}
which can be decomposed similarly to (\ref{eq:stateDynDecomp}) into the three components
\begin{equation}
\begin{bmatrix}\psi_x(x_{k+1}) \\ \psi_{xu}(x_{k+1},u_{k+1}) \\ \psi_u(u_{k+1}) \end{bmatrix} = \begin{bmatrix} \mathcal{K}_x & \mathcal{K}_{xu} & \mathcal{K}_u \\\mathcal{K}_{21} & \mathcal{K}_{22} & \mathcal{K}_{23} \\ \mathcal{K}_{31} & \mathcal{K}_{32} & \mathcal{K}_{33} \end{bmatrix} \begin{bmatrix}\psi_x(x_{k}) \\ \psi_{xu}(x_{k},u_{k}) \\ \psi_u(u_{k}) \end{bmatrix}
\label{eq:koopDecomp}
\end{equation}
where $\psi_x: \mathbb{R}^n \rightarrow \mathbb{R}^{n_L}$, $\psi_{xu}: \mathbb{R}^n \oplus \mathbb{R}^m \rightarrow \mathbb{R}^{M_L}$, and $\psi_u: \mathbb{R}^m \rightarrow \mathbb{R}^{m_L}$.
\begin{assumption}
\textit{We assume that the inputs $u_k$ are exogenous disturbances without state space dynamics. }
\label{ass:exogenousInputs}
\end{assumption}
The above Assumption \ref{ass:exogenousInputs} allows us to be concerned only with the first block equation in (\ref{eq:koopDecomp}), i.e.
\begin{equation}
\psi_x(x_{k+1}) = \mathcal{K}_x \psi_x(x_{k}) + \mathcal{K}_{xu}\psi_{xu}(x_{k},u_{k}) + \mathcal{K}_u\psi_u(u_{k}).
\label{eq:xkoopDecomp}
\end{equation}
We refer the readers to \cite{yeung2018koopman} for more discussion on this decomposition and a proof for the existence of Koopman operators for nonlinear systems.
\subsection{Finite dimensional approximations}
\subsubsection{Dynamic mode decomposition with control}
Dynamic mode decomposition with control (DMDc) developed in \cite{proctor2016dynamic,proctor2018generalizing} is an extension of DMD for systems with external actuation and is used to identify a finite dimensional approximation to the Koopman operator, $K$. The idea is to find the best-fit linear operators $A$ and $B$ to provide the following relationship:
\begin{equation*}
x_{k+1} = Ax_k + Bu_k
\end{equation*}
for measurement $x_k$, present control $u_k$, and future measurement $x_{k+1}$. To do this, data snapshots are collected from the dynamical system (\ref{eq:stateDyn}) of the state and input over time and formed into the following data matrices:
\begin{equation*}
\begin{aligned}
X_p &= \begin{bmatrix} x_0 & x_1 & \hdots & x_{N-1} \end{bmatrix},\\
X_f &= \begin{bmatrix} x_1 & x_2 & \hdots & x_N \end{bmatrix},\\
U_p &= \begin{bmatrix} u_0 & u_1 & \hdots & u_{N-1} \end{bmatrix},
\end{aligned}
\end{equation*}
where $N$ is the number of snapshots collected. The above matrices are known as snapshot matrices since each column represents a snapshot at time $k$ of the state of the dynamical system. If data is collected from more than one trajectory, the additional time-series can be appended as additional columns in the snapshot matrices. To identify the best-fit linear operators, the algorithm solves the following optimization problem:
\begin{equation*}
\begin{aligned}
\begin{bmatrix}A&B\end{bmatrix} &= \text{arg}\min_{A,B} \sum_{i=0}^{N-1} || x_{i+1} - Ax_i-Bu_i ||_2, \\
&= \text{arg}\min_{A,B} || X_f - AX_p - BU_p||_F, \\
& = X_f\begin{bmatrix}X_p \\ U_p\end{bmatrix}^{\dagger}
\end{aligned}
\end{equation*}
where $\dagger$ represents the Moore-Penrose pseudoinverse. Here $A$ corresponds to $K_x$ and $B$ to $K_u$ while $K_{xu}=0$. Thus DMDc is limited in its system identification capabilities as it does not account for mixed terms of $x$ and $u$ in its formulation.
If trajectories of the dynamical system without inputs are available, the state transition matrix $A$ can first be identified and treated as a constant in the learning of the control matrix $B$ from the trajectories with inputs \cite{hasnain2019data}. This disambiguates the impact of the control input on the state dynamics.
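The DMDc regression above reduces to a single pseudoinverse of the stacked snapshot matrices. A minimal NumPy sketch, using a known linear system as a stand-in data source (an assumption for illustration; in practice the snapshots come from measurements), recovers $A$ and $B$:

```python
import numpy as np

# Ground-truth linear system used only to generate snapshot data.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.0],
                   [1.0]])

N = 50
X = np.zeros((2, N + 1))
U = rng.standard_normal((1, N))      # random exogenous inputs
X[:, 0] = [1.0, -1.0]
for k in range(N):
    X[:, k + 1] = A_true @ X[:, k] + B_true @ U[:, k]

X_p, X_f, U_p = X[:, :-1], X[:, 1:], U   # snapshot matrices
Omega = np.vstack([X_p, U_p])            # stacked [X_p; U_p]
AB = X_f @ np.linalg.pinv(Omega)         # best-fit [A B] in one shot
A_id, B_id = AB[:, :2], AB[:, 2:]
```

Since the data is generated exactly by a linear system and `Omega` has full row rank, the least-squares fit reproduces the true operators to numerical precision.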
\subsubsection{Deep dynamic mode decomposition with control}
Another approach to learning Koopman invariant subspaces for controlled nonlinear systems is given in \cite{williams2016extending}, but a difficulty in this approach is that the user must choose the observables manually. We use the deep learning approach deep Dynamic Mode Decomposition (deepDMD) developed in \cite{yeung2019learning} to identify an efficient representation of the invariant subspace. The approach here is to represent the observable functions as many compositions of linear transformations and ReLU (Rectified Linear Units) functions and learn both the observables and the Koopman operator simultaneously. The neural network is tasked with solving the following optimization problem
\begin{equation*}
\begin{aligned}
&\min_{K_x,K_{xu},K_u,\theta} || \Psi_x(X_f,\theta) - K_x\Psi_x(X_p,\theta) \\& \qquad \qquad \quad - K_{xu}\Psi_{xu}(X_p,U_p,\theta) - K_u\Psi_{u}(U_p,\theta) ||_F \\& \qquad \qquad \quad
+ \lambda_1||K||_2 + \lambda_2||\theta||_1
\end{aligned}
\end{equation*}
where the parameters $\theta$ are the neural network's biases and weights and $\lambda_1$ and $\lambda_2$ are regularization parameters. The snapshot matrix $\Psi_x$ is now represented as
\begin{equation*}
\Psi_x(X_p,\theta) = \begin{bmatrix} \psi_x(x_0,\theta) & \psi_x(x_1,\theta) & \hdots & \psi_x(x_{N-1},\theta) \end{bmatrix}
\end{equation*}
and the other snapshot matrices are similarly formed. The neural network identifies a model that relates the data in observable space as
\begin{equation*}
\psi_x(x_{k+1}) = K_x\psi_x(x_k) + K_{xu}\psi_{xu}(x_k,u_k) + K_u\psi_u(u_k),
\end{equation*}
where we have dropped the dependency on $\theta$ for brevity.
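Since deepDMD's learned observables cannot be reproduced in a few lines, the sketch below illustrates the same lifted least-squares fit with a hand-picked, state-inclusive dictionary on a toy map whose lifted dynamics are exactly linear (an EDMD-style stand-in for the learned basis; the map and its parameters are assumptions for illustration, and $K_{xu}=0$ because the input enters additively):

```python
import numpy as np

# Illustrative parameters; the map is chosen so that the dictionary
# (x1, x2, x1^2) spans an exactly Koopman-invariant subspace.
mu, lam, c = 0.9, 0.5, 1.0
rng = np.random.default_rng(0)

def step(x, u):
    """Nonlinear map: x1+ = mu*x1, x2+ = lam*x2 + c*x1^2 + u."""
    return np.array([mu * x[0], lam * x[1] + c * x[0] ** 2 + u])

def lift(x):
    """State-inclusive observables psi_x(x) = (x1, x2, x1^2)."""
    return np.array([x[0], x[1], x[0] ** 2])

# Collect snapshots from several short trajectories with random inputs.
Psi_p, Psi_f, U_p = [], [], []
for _ in range(20):
    x = rng.standard_normal(2)
    for _ in range(5):
        u = rng.standard_normal()
        x_next = step(x, u)
        Psi_p.append(lift(x))
        Psi_f.append(lift(x_next))
        U_p.append(u)
        x = x_next
Psi_p, Psi_f = np.array(Psi_p).T, np.array(Psi_f).T
U_p = np.array(U_p).reshape(1, -1)

# Least-squares fit of [K_x K_u] in the lifted space.
K = Psi_f @ np.linalg.pinv(np.vstack([Psi_p, U_p]))
K_x, K_u = K[:, :3], K[:, 3:]
```

Because the dictionary is exactly invariant here, the fit recovers the lifted spectrum $\{\mu, \lambda, \mu^2\}$ and the input map $(0,1,0)^{\top}$; with a learned deepDMD basis the subspace is only approximately invariant.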
\section{Steady state programming} \label{sec:ssProgramming}
In this section, when we refer to the Koopman operator, we are referring to the finite-dimensional approximation of the actual Koopman operator. The focus of this paper is to design an input $u$ to maximize a single direction of the state at equilibrium. We emphasize that we are not solving a dynamic control policy planning problem, but that the control policy is a constant input that maximizes the equilibrium value of a single direction in state space. Furthermore, we consider nonlinear systems which have hyperbolic fixed points. At steady state we have the optimization problem
\begin{equation}
\begin{aligned}
&\min_{u \in \mathcal{U}} \quad - \hat{e}^{\top}_i x_{k,e} \\
& \quad \textrm{s.t.} \quad x_{k,e} = f(x_{k,e},u_k),
\label{eq:ssOpt}
\end{aligned}
\end{equation}
where $\mathcal{U}$ is a bounded set of inputs, $\hat{e}_i$ is the unit column vector in the $i^{th}$ direction of the state (the state we aim to maximize), and the subscript $e$ denotes equilibrium values. Since the form of $f$ is unknown, it is difficult to design an input which solves this program. Therefore, we propose to identify a Koopman invariant subspace by extending the states and the inputs into a space of observables. The identified Koopman model then allows us to formulate the optimization problem in the lifted space where the problem has some tractability.
\subsection{Steady state programming using DMD with linear control}
For a linear time-invariant system with $(I - A)$ invertible, if $u$ does not come from a bounded set $\mathcal{U}$ but from an unbounded set $\hat{\mathcal{U}}$ and the observable mapping is the identity, there is no bounded $u$ that solves optimization problem (\ref{eq:ssOpt}). To see this, under the conditions above, the optimization problem (\ref{eq:ssOpt}) becomes
\begin{equation*}
\min_{u \in \hat{\mathcal{U}}} \quad -\hat{e}^{\top}_i(I - A)^{-1}Bu_k
\end{equation*}
and note that the objective is linear in $u$. Differentiating with respect to $u$ we have
\begin{equation*}
\frac{\partial}{\partial u_k} -\hat{e}^{\top}_i(I - A)^{-1}B u_k = -\hat{e}^{\top}_i(I - A)^{-1}B
\end{equation*}
and so for any $(A,B)$, there is no bounded $u^*$ that solves the optimization problem. If $u$ comes from the bounded set $\mathcal{U}$, then the minimum value of the objective on $\mathcal{U}$ occurs on the boundary $\partial \mathcal{U}$, since a linear function attains its extrema on the boundary of a bounded set; this makes it an uninteresting case. This case corresponds to identifying a linear model using DMD with control, where the observable mapping is simply the identity and the measurement snapshots are all that is used to compute the state-transition matrix and the control matrix. For nonlinear systems that can be represented linearly by considering a linear basis, the above approach will work effectively. However, we aim to develop a method which can handle nonlinear systems that \textit{require} a basis that is a nonlinear function of the original states.
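The boundary-optimum argument can be checked numerically. A small sketch with an assumed stable $(A,B)$ pair computes the steady-state gain $\hat{e}_i^{\top}(I-A)^{-1}B$, picks the boundary input accordingly, and cross-checks the steady state by direct simulation:

```python
import numpy as np

# Assumed stable (A, B) pair for illustration.
A = np.array([[0.5, 0.2],
              [0.0, 0.7]])
B = np.array([[1.0],
              [0.5]])
e0 = np.array([1.0, 0.0])            # direction to maximize (state x_0)

# Steady-state gain e0^T (I - A)^{-1} B; the objective is linear in u.
gain = (e0 @ np.linalg.solve(np.eye(2) - A, B)).item()

# Over a bounded interval, the maximizer of a linear objective sits on
# the boundary: u* = u_max if the gain is positive, u_min otherwise.
u_min, u_max = 0.0, 10.0
u_star = u_max if gain > 0 else u_min

# Cross-check by iterating x+ = A x + B u* to (near) steady state.
x = np.zeros(2)
for _ in range(200):
    x = A @ x + B.ravel() * u_star
```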
\subsection{Steady state programming using deepDMD with nonlinear control }
Consider the case where the observable mapping is not the identity.
\begin{assumption}
\textit{We assume that the state transition Koopman operator $K_x$ has $n_L$ linearly independent eigenvectors and that none of the corresponding $n_L$ eigenvalues are equal to 1. }
\label{ass:invertibleIminusKx}
\end{assumption}
At equilibrium, (\ref{eq:xkoopDecomp}) becomes
\begin{equation}
\psi_x(x_{e}) = K_x\psi_x(x_{e}) + K_{xu}\psi_{xu}(x_{e},u_{k}) + K_u\psi_u(u_{k})
\label{eq:ssKoopEq}
\end{equation}
and under Assumption \ref{ass:invertibleIminusKx}, $(I - K_x)$ is invertible, so we have
\begin{equation*}
\psi_x(x_{e}) = (I - K_x)^{-1} \left[K_{xu}\psi_{xu}(x_{e},u_{k}) + K_u\psi_u(u_{k})\right].
\end{equation*}
For the optimization problem (\ref{eq:ssOpt}) we now have
\begin{equation}
\begin{aligned}
&\min_{\psi_u(u_k) \in \mathcal{U}_{\Psi}} \quad - \hat{e}^{\top}_i \psi_x(x_{e}) \\
& \quad \textrm{s.t.} \quad \psi_x(x_{e}) = (I - K_x)^{-1} \\ & \qquad \qquad \left[K_{xu}\psi_{xu}(x_{e},u_{k}) + K_u\psi_u(u_{k})\right]
\end{aligned}
\label{eq:ssOptKoop}
\end{equation}
stating that we want to find the input $\psi_u(u)$ that maximizes the equilibrium value of a single direction in observable space. Here $\mathcal{U}_{\Psi}$ is a bounded set of lifted inputs. If we assume away the mixed terms of $x$ and $u$ then the problem becomes immediately tractable as we have $\psi_x(x_e)$ as a function of just terms in $\psi_u(u)$. However, assuming away the mixed terms seemingly requires a strong assumption about the nonlinearities present in original dynamics. Specifically it assumes that the input affects the state dynamics through additive means \textit{only}. This is typically not the case in biological systems. Another important point to make is that this is an implicit optimization of $\psi_x(x_e)$, another layer of difficulty when dealing with the mixed terms.
Since we transform the state space coordinates to an abstract observable-space, we want to keep the interpretability of the original system intact. Therefore we make the following assumption:
\begin{assumption}
\textit{We assume state-inclusive and input-inclusive observables $\psi_{x}(x)$ and $\psi_u(u)$, respectively, i.e.}
\begin{equation}
\begin{aligned}
\psi_x(x) &= \begin{bmatrix} x \\ \varphi_x(x) \end{bmatrix}, \\
\psi_u(u) & = \begin{bmatrix} u \\ \varphi_u(u) \end{bmatrix}
\end{aligned}
\end{equation}
\textit{with $\varphi_x(x) \in \mathbb{R}^{n_L-n}$ and $\varphi_u(u) \in \mathbb{R}^{m_L -m}$.}
\end{assumption}
Again, since the mixed terms of $x$ and $u$ cause issues in solving program (\ref{eq:ssOptKoop}), we want to examine the structure of these terms when an EDMD \cite{williams2016extending} with control approach is used. A typical EDMD dictionary may consist of functions such as monomials, polynomials (Legendre and Hermite), radial basis functions, trigonometric functions, and logistic functions. It can be shown that for most of these common dictionary elements, the mixed terms are separable in $x$ and $u$. To illustrate what we want to achieve, consider an example system with state-inclusive ($n=2$ and $m=2$) observables with up to second order monomials in the state and input. Then for $\psi_x: \mathbb{R}^n \rightarrow \mathbb{R}^{n_L = 5}$ and $\psi_u: \mathbb{R}^m \rightarrow \mathbb{R}^{m_L = 5}$ we have that
\begin{equation*}
\psi_x(x) = \begin{bmatrix} x_1 \\ x_2 \\ x_1x_2 \\ x_1^2 \\ x_2^2 \end{bmatrix}, \quad
\psi_u(u) = \begin{bmatrix} u_1 \\ u_2 \\ u_1u_2 \\ u_1^2 \\ u_2^2 \end{bmatrix}
\end{equation*}
where we have left out the zeroth order terms for brevity. For this example, the subscripts on the states and inputs represent different components of the state and input, not timepoints. As we have no knowledge of the form of the dynamics $f$, we may consider forming all possible combinations of $\psi_x(x)$ and $\psi_u(u)$ and using those elements to form $\psi_{xu}(x,u)$,
\begin{equation*}
\psi_{xu}(x,u) = \begin{bmatrix} x_1\psi_u(u) \\ x_2\psi_u(u) \\ x_1x_2\psi_u(u) \\ x_1^2\psi_u(u) \\ x_2^2\psi_u(u) \end{bmatrix} \text{or} \quad \psi_{xu}(x,u) = \begin{bmatrix} u_1\psi_x(x) \\ u_2\psi_x(x) \\ u_1u_2\psi_x(x) \\ u_1^2\psi_x(x) \\ u_2^2\psi_x(x) \end{bmatrix}
\end{equation*}
with $\psi_{xu}: \mathbb{R}^n \oplus \mathbb{R}^m \rightarrow \mathbb{R}^{M_L = 25}$, such that $M_L = n_L \times m_L$. Note that $\psi_{xu}(x,u)$ can now be written as a matrix times $\psi_u(u)$ or a matrix times $\psi_x(x)$ as
\begin{equation}
\psi_{xu}(x,u) = \begin{bmatrix} D_x(x_1) \\ D_x(x_2) \\ D_x(x_1x_2) \\ D_x(x_1^2) \\ D_x(x_2^2) \end{bmatrix} \psi_u(u) = M_x \psi_u(u)
\label{eq:sepxu}
\end{equation}
or
\begin{equation}
\psi_{xu}(x,u) = \begin{bmatrix} D_u(u_1) \\ D_u(u_2) \\ D_u(u_1u_2) \\ D_u(u_1^2) \\ D_u(u_2^2) \end{bmatrix} \psi_x(x) = M_u \psi_x(x)
\label{eq:sepux}
\end{equation}
where $D_x(\cdot) \in \mathbb{R}^{n_L\times m_L}$ and $D_u(\cdot) \in \mathbb{R}^{m_L\times n_L}$ are diagonal matrices with their argument as the diagonal elements. The matrices $M_x \in \mathbb{R}^{M_L \times m_L}$ and $M_u \in \mathbb{R}^{M_L \times n_L}$ consist of terms only in $x$ and only in $u$, respectively.
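For a product dictionary like the one above, the separations (\ref{eq:sepxu}) and (\ref{eq:sepux}) can be verified numerically: with the pairwise products ordered as a Kronecker product (one convenient ordering of the monomials), $M_x$ and $M_u$ take compact forms. The values below are arbitrary stand-ins for $\psi_x(x)$ and $\psi_u(u)$:

```python
import numpy as np

psi_x = np.array([1.0, 2.0, 3.0])    # stand-in for psi_x(x), n_L = 3
psi_u = np.array([0.5, -1.0])        # stand-in for psi_u(u), m_L = 2

# All pairwise products, ordered so that element i*m_L + j is the i-th
# x-term times the j-th u-term; this is the Kronecker product, and
# M_L = n_L * m_L = 6.
psi_xu = np.kron(psi_x, psi_u)

# Separation into an x-only matrix times psi_u, and a u-only matrix
# times psi_x, mirroring M_x and M_u in the text.
M_x = np.kron(psi_x.reshape(-1, 1), np.eye(2))   # stacked psi_x[i] * I blocks
M_u = np.kron(np.eye(3), psi_u.reshape(-1, 1))   # block-diagonal psi_u columns
```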
A more general statement is that if $f(x,u)$ in (\ref{eq:stateDynDecomp}) is separable into some $G(u)h(x)$, then the mixed observable $\psi_{xu}(x,u)$ are also similarly separable. To see this, consider continuous-time dynamics $\dot{x} = f(x,u)$ for ease of presentation. If $f(x,u)$ is multiplicatively separable in $x$ and $u$, it can be written as
\begin{equation}
f(x,u) = WG(u)h(x),
\end{equation}
with the weight matrix $W \in \mathbb{R}^{n\times M_L}$, the matrix $G(u) \in \mathbb{R}^{M_L \times n_L}$ is a function of $u$, and the vector $h(x) \in \mathbb{R}^{n_L}$. From the Koopman generator \cite{budivsic2012applied}, we have that
\begin{equation}
\dot{\psi}_x(x) = \frac{\partial \psi_x(x)}{\partial x }f(x,u) = \frac{\partial \psi_x(x)}{\partial x } WG(u)h(x).
\end{equation}
Therefore, any mixed terms are entirely separable in the observable space as well. This may mean that the deep learning approach can account for the necessary mixed terms without explicitly learning the $K_{xu}$ parameters.
Using the idea that we can separate the mixed terms as in (\ref{eq:sepxu}) and (\ref{eq:sepux}), (\ref{eq:ssKoopEq}) can be written as
\begin{equation}
\psi_x(x_{e}) = K_x\psi_x(x_{e}) + K_{xu}M_u \psi_x(x_e) + K_u\psi_u(u_{k})
\label{eq:ssKoopEqsepux}
\end{equation}
or as
\begin{equation}
\psi_x(x_{e}) = K_x\psi_x(x_{e}) + K_{xu}M_x \psi_u(u_k) + K_u\psi_u(u_{k}).
\label{eq:ssKoopEqsepxu}
\end{equation}
From (\ref{eq:ssKoopEqsepux}) we have that at equilibrium,
\begin{equation*}
\psi_x(x_{e}) = (I-K_x-K_{xu}M_u)^{-1}K_u\psi_u(u_{k})
\end{equation*}
if $(I-K_x-K_{xu}M_u)$ is invertible. From (\ref{eq:ssKoopEqsepxu}) we have that at equilibrium,
\begin{equation*}
\psi_x(x_{e}) = (I-K_x)^{-1}\left(K_{xu}M_x+K_u\right)\psi_u(u_{k})
\end{equation*}
and from Assumption \ref{ass:invertibleIminusKx} $(I-K_x)$ is invertible.
Of these two options, using (\ref{eq:ssKoopEqsepxu}) to program the steady state of nonlinear systems requires only that $(I-K_x)$ be invertible, which has already been assumed. Equation (\ref{eq:ssKoopEqsepux}), on the other hand, requires that $(I-K_x-K_{xu}M_u)$ be invertible. An advantage of (\ref{eq:ssKoopEqsepux}) is that its right-hand side is entirely a function of $u$, whereas (\ref{eq:ssKoopEqsepxu}) has a right-hand side which is a function of both $u$ and $x$, requiring implicit optimization. From this analysis, we can again rewrite the steady state programming problem (\ref{eq:ssOptKoop}) as
\begin{equation}
\begin{aligned}
&\min_{\psi_u(u_k) \in \mathcal{U}_{\Psi}} \quad - \hat{e}^{\top}_i \psi_x(x_{e}) \\
& \quad \textrm{s.t.} \quad \psi_x(x_{e}) = (I-K_x-K_{xu}M_u)^{-1}K_u\psi_u(u_{k}) \\
& \quad \textrm{or s.t.} \quad \psi_x(x_{e}) = (I-K_x)^{-1}\left(K_{xu}M_x+K_u\right)\psi_u(u_{k}) \\
& \quad \textrm{or s.t.} \quad \psi_x(x_{e}) = (I-K_x)^{-1}K_u\psi_u(u_{k})
\label{eq:ssOptFinal}
\end{aligned}
\end{equation}
where the last constraint comes from setting $K_{xu}=0$ i.e. stating that $\psi_{xu}$ terms are unimportant. We consider this last case since deepDMD offers the flexibility to learn accurate Koopman invariant subspaces even without considering the mixed terms, as will be demonstrated.
This program is, in its most general form, nonlinear and will yield local optima. Neither EDMD nor a neural network approach provides structure that guarantees a global optimum, and the problem is in general NP-hard. We use the Sequential Least Squares Quadratic Programming (SLSQP) solver from the \texttt{scipy.optimize} Python package to solve program (\ref{eq:ssOptFinal}). If DMD is used for steady state programming, any convex optimization solver will suffice.
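To make the structure of program (\ref{eq:ssOptFinal}) concrete, the sketch below solves a small instance of the last constraint (the $K_{xu}=0$ case) with assumed toy operators. A dense grid search over the bounded scalar input stands in for SLSQP and shows that a lifted input $\psi_u(u)=(u,u^2)$ can yield an interior optimum, unlike the purely linear case:

```python
import numpy as np

# Toy lifted model (all values are assumptions for illustration):
# n_L = 3 state-inclusive observables, psi_u(u) = (u, u^2).
K_x = np.array([[0.6, 0.1, 0.0],
                [0.0, 0.5, 0.2],
                [0.0, 0.0, 0.4]])
K_u = np.array([[1.0, -0.1],
                [0.5,  0.0],
                [0.0,  0.3]])
e0 = np.array([1.0, 0.0, 0.0])   # maximize the first (state) direction

def lifted_steady_state(u):
    """psi_x(x_e) = (I - K_x)^{-1} K_u psi_u(u) for the K_xu = 0 case."""
    psi_u = np.array([u, u ** 2])
    return np.linalg.solve(np.eye(3) - K_x, K_u @ psi_u)

# Dense grid search over the bounded input set [0, 10] as a stand-in
# for SLSQP (in 1-D it cannot get stuck in a poor local optimum).
grid = np.linspace(0.0, 10.0, 2001)
values = np.array([e0 @ lifted_steady_state(u) for u in grid])
u_star = grid[np.argmax(values)]
```

For these operators the objective reduces to a concave quadratic in $u$, so the optimizer lands strictly inside the input bounds rather than on the boundary.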
\section{Numerical Results}
\subsection{Example: Feedforward Loop}
We consider the following nonlinear dynamical system describing a feedforward loop of five proteins (states) under the influence of two inducers (inputs):
\begin{equation} \label{eq:iffl}
\begin{aligned}
\dot{x}_0 &= \frac{k_0u_0}{1+\frac{u_1}{K_{d_4}}} - \delta_0x_0 \\
\dot{x}_1 &= \frac{k_1u_1}{1+\frac{x_0}{K_{d_1}}} - \delta_1x_1 \\
\dot{x}_2 &= k_2x_1+k_3u_0 - \delta_2x_2 \\
\dot{x}_3 &= \frac{k_4u_1}{1+\frac{x_2}{K_{d_2}}} - \delta_3x_3 \\
\dot{x}_4 &= \frac{k_5u_0}{1+\frac{x_3}{K_{d_3}}} - \delta_4x_4 .
\end{aligned}
\end{equation}
\normalsize
The states, $x$, describe the concentrations of proteins, the inputs, $u$, are inducers which activate or repress protein production, and $K_d, \delta, k$ are constant parameters. For simulation, a fourth-order Runge-Kutta scheme is used. 100 timesteps from 100 different initial conditions, each with a different input, are used for training and testing. The inputs are always step inputs in these simulations. The neural network is a dense feedforward network whose output is the observable basis spanning the approximate Koopman invariant subspace. We take the case where the mixed terms are assumed to be zero. To evaluate the accuracy of our Koopman model, we perform a multi-step prediction test. Predictions on a single test trajectory (a trajectory that the neural network has not seen) are given in Figure \ref{fig:ifflPred}. This prediction starts from an initial condition, predicts 99 steps into the future, and is compared with the actual trajectory. The error for this trajectory is $\approx 5\%$ and similar errors are computed for 49 other test trajectories. We then solve program (\ref{eq:ssOptFinal}) and obtain the optimal step input ($0 < u^* < 10$) to achieve a maximum steady state for both state $x_0$ and state $x_3$, as can be seen in Figure \ref{fig:iffl_optInput}. Each trajectory starts from the same initial condition, but only the trajectory corresponding to the black dashed line sees the optimal input; all the other trajectories see a random input in the range $0 < u < 10$. It can be seen that the optimal input computed from our framework does give the maximum steady state when applied to the system. Note that the optimal input was applied to the original nonlinear system, \textit{not} to the identified Koopman model.
\begin{figure}
\centering
\includegraphics[scale=0.3]{figures/incoherent_ff_loop_final_nstep_prediction.pdf}
\caption{99 step prediction from a new initial condition using the neural network identified Koopman model of the feedforward loop (\ref{eq:iffl}). Predictions are given by dashed lines while the true trajectory is given by dotted lines.}
\label{fig:ifflPred}
\end{figure}
\begin{figure}
\centering
\hspace*{-0.3cm}
\includegraphics[scale=0.25]{figures/iffl_optimal_input_x0x3.pdf}
\caption{Results of the optimal input computed from the steady state programming problem applied to states $x_0$ (left) and $x_3$ (right) of the feedforward loop (\ref{eq:iffl}). The trajectory corresponding to the optimal input is given in black dashed lines and the other trajectories correspond to other inputs that are suboptimal.}
\label{fig:iffl_optInput}
\end{figure}
\subsection{Example: Combinatorial promoter}
For a second example, we consider a combinatorial promoter that takes both a repressor and an activator as inputs, allowing genes to be switched on and off based on the concentrations of the two regulators \cite{del2015biomolecular}. The dynamics are given by the following set of differential equations:
\begin{equation} \label{eq:combPromoter}
\begin{aligned}
\dot{x}_0 &= -k_{1f}x_0u_0 + k_{1r}x_2 \\
\dot{x}_1 &= -k_{2f}x_1u_1 + k_{2r}x_3 -k_{4f}x_1x_4 + k_{4r}x_6 - k_{5f}x_1x_5 \\ &+ k_{5r}x_7 + 0.2x_{10} \\
\dot{x}_2 &= k_{1f}x_0u_0 - k_{1r}x_2 - k_{3f}x_2x_4 + k_{3r}x_5 - k_{6f}x_2x_6 \\ &+ k_{6r}x_7\\
\dot{x}_3 &= k_{2f}x_1u_1 - k_{2r}x_3 - 0.2x_3 \\
\dot{x}_4 &= -k_{3f}x_2x_4 + k_{3r}x_5 - k_{4f}x_1x_4 + k_{4r}x_6\\
\dot{x}_5 &= k_{3f}x_2x_4 - k_{3r}x_5 - k_{5f}x_1x_5 + k_{5r}x_7 - k_{7f}x_5x_8 \\ &+ k_{7r}x_9 + k_{8f}x_9\\
\dot{x}_6 &= -k_{6f}x_2x_6 + k_{6r}x_7 - k_{4r}x_6 + k_{4f}x_1x_4\\
\dot{x}_7 &= k_{5f}x_1x_5 - k_{5r}x_7 + k_{6f}x_2x_6 - k_{6r}x_7\\
\dot{x}_8 &= -k_{7f}x_5x_8 + (k_{7r}+ k_{8f})x_9\\
\dot{x}_9 &= k_{7f}x_5x_8 - (k_{7r}+ k_{8f})x_9\\
\dot{x}_{10} &= k_{8f}x_9 - \delta x_9^2\\
\end{aligned}
\end{equation}
\normalsize
where the inputs $u$ are the repressor and activator and are continuous, monotonically increasing functions of time. For simulation, a fourth-order Runge-Kutta scheme is used. 1000 timesteps from 80 different initial conditions, each with a different input, are used for training and testing. The neural network is the same as in Example 1. We again take the case where the mixed terms are assumed to be zero, since the deep learning approach is flexible enough to provide a model that predicts well many steps into the future. We again solve program (\ref{eq:ssOptFinal}) and obtain the optimal step input ($0.01 < u^* < 1$) that achieves a maximum steady state for both state $x_6$ and state $x_{10}$, as can be seen in Figure \ref{fig:combPromoter_optInput}. Each trajectory starts from the same initial condition, but only the trajectory corresponding to the black dashed line sees the optimal input; all the other trajectories see a random input in the range $0.01 < u < 1$. For this example of a combinatorial promoter, the learned optimal input gives the maximum steady state value for each target direction in state space. Again, we want to emphasize that the optimal input was identified using the Koopman model, but it was applied to the original nonlinear system.
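To make the steady state optimization step concrete, the following sketch shows the computation for a generic lifted linear model $z_{t+1} = Kz_t + Bu$ of the kind deepDMD identifies (with mixed terms assumed zero). The $K$ and $B$ below are random stand-ins, not the learned model, and the grid search over two inputs is a simple stand-in for solving program (\ref{eq:ssOptFinal}); because the steady state is linear in a step input, the optimum lands on a vertex of the input box.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned lifted linear model z_{t+1} = K z_t + B u.
# The true K, B come from deepDMD; here we draw a random stable pair.
n_obs, n_inputs = 6, 2
K = rng.standard_normal((n_obs, n_obs))
K /= 2.0 * np.abs(np.linalg.eigvals(K)).max()   # spectral radius 0.5 < 1
B = rng.standard_normal((n_obs, n_inputs))

def steady_state(K, B, u):
    """Fixed point of z+ = K z + B u under a constant (step) input u."""
    return np.linalg.solve(np.eye(K.shape[0]) - K, B @ u)

def optimal_step_input(K, B, c, u_lo, u_hi, n_grid=50):
    """Maximize c^T z_ss over box-constrained two-dimensional step inputs.
    Since z_ss is linear in u, a coarse grid (or a small LP) suffices,
    and the maximizer sits at a vertex of the input box."""
    grid = np.linspace(u_lo, u_hi, n_grid)
    best_u, best_val = None, -np.inf
    for u0 in grid:
        for u1 in grid:
            u = np.array([u0, u1])
            val = c @ steady_state(K, B, u)
            if val > best_val:
                best_u, best_val = u, val
    return best_u, best_val
```

Here $c$ selects the lifted observable corresponding to the state of interest (e.g., $x_6$ or $x_{10}$); the optimal $u^*$ found on the surrogate model is then applied to the original nonlinear system.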
\begin{figure}
\centering
\hspace*{-0.4cm}
\includegraphics[scale=0.26]{figures/combinatorial_promoters_optimal_input_x6x10.pdf}
\caption{Results of the optimal input computed from the steady state programming problem applied to states $x_6$ (left) and $x_{10}$ (right) of the combinatorial promoter system (\ref{eq:combPromoter}). The trajectory corresponding to the optimal input is given in black dashed lines and the other trajectories correspond to other inputs that are suboptimal.}
\label{fig:combPromoter_optInput}
\end{figure}
These numerical results demonstrate that, by harnessing the natural analog computational power of cells through a data-driven operator-theoretic framework, it is possible to generate steady state input-output functions.
\section{Conclusions}
In this paper we presented a formulation for programming the steady state of controlled nonlinear systems with hyperbolic fixed points. We used deep dynamic mode decomposition (deepDMD) with control to compute approximate Koopman invariant subspaces and Koopman operators that represent the original nonlinear system as an approximate linear system. This allowed us to pose and solve an optimization problem in which we maximize the steady state value of a single direction in state space. The formulation can be extended to handle other objectives, e.g., maximizing the ratio of two directions in state space at steady state, and can be slightly revised to solve common optimal control tasks such as reaching a target state or dynamic reference tracking. The method can also be extended to nonlinear systems with other types of attractors. We showed that mixed terms of the state and the input can pose difficulties, and we discussed a way of dealing with these terms. Finally, we demonstrated our method on two example nonlinear systems that arise commonly in biological processes and briefly discussed broader implications for biological computation.
\section*{Acknowledgements}
The authors thank Igor Mezic, Robert Egbert, Bassam Bamieh, Sai Pushpak, Sean Warnick, and Umesh Vaidya for stimulating conversations. This work was supported by a Defense Advanced Research Projects Agency (DARPA) Grant No. DEAC0576RL01830 and an Institute of Collaborative Biotechnologies Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA), the Department of Defense, or the United States Government.
\bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro}
Detecting 21 cm emission from the high redshift intergalactic medium (IGM)
will potentially revolutionize our understanding of cosmic reionization.
The 21 cm signal promises direct, three-dimensional information regarding the
state of the IGM during reionization (e.g. Scott \& Rees 1990, Madau et al. 1997,
Furlanetto et al. 2006a).
Unfortunately, experimental challenges are substantial. In particular, astrophysical
foregrounds are expected to be four to five orders of magnitude larger
than the signal from the high redshift IGM. However, {\it known} foreground
contaminants are spectrally smooth and should be distinguishable from the high redshift
21 cm signal itself (Zaldarriaga et al. 2004).
Given that we anticipate observational complications, it is important to
develop diagnostics to confirm that the detected 21 cm signal indeed originates from
the high redshift IGM.
One such approach is to measure the cross correlation between
21 cm emission and a high redshift galaxy survey
(Furlanetto \& Lidz 2007, Wyithe \& Loeb 2007).
Since most of the anticipated foregrounds come from low redshift -- primarily
galactic synchrotron -- and not
from high redshift galaxies, the mean 21 cm-galaxy cross power spectrum signal is largely
immune to foreground contamination (Furlanetto \& Lidz 2007). Detecting a
21 cm-galaxy cross correlation
should hence confirm that the detected 21 cm signal comes from
the high redshift IGM. Moreover, continuing efforts
are pushing galaxy surveys towards higher redshifts, and it is natural to consider the
information
that may be gleaned from combining galaxy and 21 cm surveys. Detecting galaxies at very high
redshift is extremely challenging (e.g. Stark et al. 2007, Bouwens et al. 2008), but
we will show here that a cross spectrum detection may already be possible with
modest extensions
to the Subaru survey (Kashikawa et al. 2006; see
also Wyithe \& Loeb 2007, Furlanetto \& Lidz 2007).
In addition to these important practical advantages, the 21 cm-galaxy cross
correlation is potentially sensitive to the size and filling factor of H II regions,
the clumpiness of the IGM, and the nature of the ionizing sources.
The 21 cm-galaxy cross correlation will also provide a more direct tracer
of the interplay
between the reionizing sources and the surrounding IGM, than the 21 cm auto power
spectrum. The
21 cm-galaxy cross correlation should hence provide a unique and powerful
probe of the Epoch of Reionization (EoR) and early structure formation. Here we follow up on earlier work
by Wyithe \& Loeb (2007) and Furlanetto \& Lidz (2007), and
focus on modeling the 21 cm-galaxy cross power spectrum, and exploring the
insights that future surveys will provide regarding the EoR.
The outline of this paper is as follows. In \S \ref{sec:cross} we establish
notation, describe the reionization simulations used in our analysis, and
examine the basic simulated signal.
We then characterize (\S \ref{sec:evol_sig}) the dependence of the cross spectrum
on redshift and
ionization fraction. In \S \ref{sec:sources} we illustrate how the
signal is sensitive to the properties of the ionizing sources. In \S \ref{sec:recombs} we
describe its variation with the abundance of Lyman-limit systems. We then examine
the signal's dependence on the way in which high redshift galaxies are selected
(\S \ref{sec:selection}), contrasting the results for Ly-$\alpha$ selected galaxies
with galaxies selected through other techniques.
In \S \ref{sec:detectability} we briefly discuss the statistical power of
future surveys to constrain reionization through measurements of the
21 cm-galaxy cross power spectrum. Finally, we summarize our main
results and conclude in \S \ref{sec:conclusions}.
Throughout we consider a $\Lambda$CDM cosmology parameterized by:
$n_s = 1$, $\sigma_8 = 0.8$,
$\Omega_m = 0.27$, $\Omega_\Lambda = 0.73$, $\Omega_b = 0.046$, and $h=0.7$
(all symbols
have their usual meanings), consistent with the WMAP constraints from
Spergel et al. (2007) and Komatsu et al. (2008).
\section{The 21 cm-galaxy cross power spectrum}
\label{sec:cross}
In this paper we focus on the cross power spectrum between the 21 cm and galactic abundance
fields. Each field is non-Gaussian and so the cross and auto power spectra
alone provide
an incomplete description of the fields' statistical properties.
However, the limited sensitivity of
first generation 21 cm surveys will prohibit detailed imaging of the 21 cm
field (McQuinn et al. 2006), and we expect these surveys to have relatively low signal
to noise for detecting higher order moments of the 21 cm field. We hence focus on the
cross power spectrum throughout.
In order to explore the information content of the 21 cm-galaxy cross power spectrum, it
is useful to decompose the signal into the sum of several contributing terms.
Throughout this work we adopt the limit that the spin temperature of
the 21 cm transition is much higher than the
CMB temperature globally (Ciardi \& Madau 2003, Pritchard \& Furlanetto 2007), $T_s \gg T_{\rm CMB}$, and we ignore peculiar velocities -- which should be a good approximation
during most of the reionization epoch (Mesinger \& Furlanetto 2007a).
With these assumptions the 21 cm-galaxy cross power spectrum can be written as:
\begin{eqnarray}
\Delta^2_{\rm 21, gal}(k) = && \tilde{\Delta}^2_{\rm 21, gal}(k)/T_0 = \avg{x_H}
[\Delta^2_{\rm x, gal}(k) \nonumber \\ && + \Delta^2_{\rm \rho, gal}(k)
+ \Delta^2_{\rm x \rho, gal}(k)].
\label{eq:p21_gal_cross}
\end{eqnarray}
In this equation $\Delta^2_{\rm 21, gal}(k)$ denotes the cross power
spectrum
between the 21 cm field and the galaxy overdensity at wavenumber
$k=|{\bbox{k}}|$. The 21 cm field at spatial
position ${\bbox{r}}$
is given by $\delta_T ({\bbox{r}}) = T_0 \avg{x_H} \left(1 + \delta_x({\bbox{r}})\right)
\left(1 + \delta_\rho({\bbox{r}})\right)$;
$T_0$ is the 21 cm brightness temperature, relative to the CMB, at
the redshift in question for a fully neutral gas element at the cosmic
mean density, and $\avg{x_H}$ is the volume-averaged hydrogenic
neutral fraction. The field $\delta_x({\bbox{r}}) = (x_H({\bbox{r}}) - \avg{x_H})/\avg{x_H}$
is the fractional fluctuation in the neutral hydrogen fraction, while
$\delta_\rho$ is the fractional gas density fluctuation.
Similarly $\delta_g({\bbox{r}}) = (n_g({\bbox{r}}) - \avg{n_g})/\avg{n_g}$ is the fractional fluctuation in galaxy abundance, where $n_g({\bbox{r}})$
specifies the co-moving number density of galaxies at spatial position
${\bbox{r}}$, and $\avg{n_g}$ denotes
the volume-averaged galactic abundance. Our notation labels the dimensionless cross
spectrum of two random fields, $a$ and $b$, by $\Delta^2_{\rm a, b}(k) =
k^3 P_{\rm a, b}(k)/(2 \pi^2)$
and $\Delta^2_{\rm x, gal}$, for example, is shorthand for the cross power
spectrum between $\delta_x$ and $\delta_g$ (and $P_{\rm a, b}$ is the usual dimensionful
cross spectrum).
We use a similar shorthand for $\Delta^2_{\rm \rho, gal}(k)$ and
$\Delta^2_{\rm x \rho, gal}(k)$. Throughout we work with the power spectrum of the dimensionless
field $\delta_T({\bbox{r}})/T_0$, which we denote by $\Delta^2_{\rm 21, gal}(k)$; it is distinguished
from the dimensionful power spectrum $\tilde{\Delta}^2_{\rm 21, gal}(k)$ by the factor of $T_0$,
as in Equation \ref{eq:p21_gal_cross}.
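As a concrete illustration of how a binned estimate of $\Delta^2_{\rm a, b}(k) = k^3 P_{\rm a, b}(k)/(2 \pi^2)$ can be formed from two fields gridded on a periodic box, a minimal FFT-based sketch follows. The FFT normalization convention and the logarithmic binning here are our own illustrative choices; the measurements in this paper come from the simulation analysis pipeline, not this code.

```python
import numpy as np

def cross_power_spectrum(field_a, field_b, box_size, n_bins=8):
    """Spherically averaged cross spectrum of two real 3D fields on a
    periodic cube of side box_size (Mpc/h), returned as the
    dimensionless Delta^2_ab(k) = k^3 P_ab(k) / (2 pi^2).
    Convention: delta_k = (V/N^3) sum delta(x) e^{-ikx},
    P_ab(k) = Re[a_k conj(b_k)] / V, i.e. cross = Re[...] * V / N^6.
    (Hermitian double-counting on the kz = 0 plane is ignored here.)"""
    n = field_a.shape[0]
    vol = box_size**3
    a_k = np.fft.rfftn(field_a)
    b_k = np.fft.rfftn(field_b)
    cross = np.real(a_k * np.conj(b_k)) * vol / n**6

    # |k| on the rfft grid, in h/Mpc
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kzz = np.meshgrid(kfreq, kfreq, kz, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kzz**2).ravel()

    # logarithmic bins in k, excluding the k = 0 mode
    kmin = 2 * np.pi / box_size
    bins = np.logspace(np.log10(kmin), np.log10(kmag.max()), n_bins + 1)
    which = np.digitize(kmag, bins)
    flat = cross.ravel()
    pk = np.empty(n_bins)
    for i in range(1, n_bins + 1):
        sel = which == i
        pk[i - 1] = flat[sel].mean() if sel.any() else np.nan
    kcen = np.sqrt(bins[:-1] * bins[1:])
    return kcen, kcen**3 * pk / (2 * np.pi**2)
```

Feeding in $\delta_T/T_0$ and $\delta_g$ on the same grid yields the binned $\Delta^2_{\rm 21, gal}(k)$; the same routine with a single field in both slots gives the auto spectra needed for the correlation coefficient.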
The individual terms contributing to the 21 cm-galaxy cross power spectrum have the
following
physical interpretations. The first term, $\Delta^2_{\rm x, gal}(k)$, represents the
cross power spectrum between the neutral hydrogen fraction and galaxy density fields. The second
term,
$\Delta^2_{\rm \rho, gal}(k)$, is the cross power spectrum between the matter
and galaxy overdensity fields. The final term, $\Delta^2_{\rm x \rho, gal}(k)$,
is a three-field term which would vanish if each contributing field were Gaussian. In fact,
we show below that this term is generally significant during reionization (see Lidz et al. 2007a
for discussion of a similar term, $\Delta^2_{\rm x \rho, \rho}(k)$,
which contributes to the 21 cm auto power spectrum).
We will sometimes refer to these terms respectively as
the {\em `x-gal'}, {\em `$\rho-gal$'}, and {\em `three-field'} terms.
Let us examine each contributing term
from our reionization simulations.
\subsection{Reionization Simulations}
\label{sec:sims}
First, we briefly describe the two types of reionization simulations used in this work. The first type
consists of the reionization simulations of McQuinn et al. (2007b).
In these simulations, radiative transfer
is treated in a post-processing stage using the code of McQuinn et al. (2007a), a refinement of
the Sokasian et al. (2001, 2003) code,
which in turn uses the adaptive ray-tracing scheme of Abel \& Wandelt (2002).
The radiative transfer calculation is performed on top of a $130$ Mpc/$h$, $1024^3$ particle dark
matter simulation run with an enhanced version of Gadget-2 (Springel 2005). The minimum resolved
halo in this simulation is $\sim 10^{10} M_\odot$, but smaller mass halos down to the atomic
cooling mass (Barkana \& Loeb 2001), $M_{\rm cool} \sim 10^8 M_\odot$, are incorporated with
the appropriate abundance and clustering as in McQuinn et al. (2007a). Ionizing sources
are placed
in simulated halos with simple prescriptions. In our fiducial model, we assume that a source's
ionizing luminosity is proportional to its host halo mass. We assume that gas directly
traces the dark matter, which should be a good approximation on the large scales
of interest here.
Second, we use an improved version of the hybrid simulation technique of Zahn et al. (2007), which
is essentially a Monte-Carlo implementation of the analytic model developed by Furlanetto et al.
(2004). This technique has the advantage of being extremely fast, while maintaining accuracy.
In comparison to the scheme described in Zahn et al. (2007), our present scheme is improved in
several ways. First, we use 2nd-order Lagrangian Perturbation Theory (2LPT) to generate realizations of the density field (Crocce et al. 2006) during reionization (as in Lidz et al. 2007b),
rather than generating Gaussian random fields. This allows us to incorporate quasi-linear effects.
Next, we use a scheme similar to that of Mesinger \& Furlanetto (2007a) to predict the
halo distribution from an initial, linear displacement field.
\subsection{Basic Simulated Signal}
\label{sec:sim_sig}
\begin{figure*}
\begin{center}
\includegraphics[width=17.5cm]{f1.ps}
\caption{Simulated maps of the density, halo, ionization, and 21 cm fields.
Each map is $130$ Mpc/$h$ on a side and is drawn from a simulation
snapshot at $z=7.32$ at which point $\avg{x_i} = 0.54$ in our model.
The density, ionization, and 21 cm maps are each $1$-cell thick ($0.25$
Mpc/$h$), while the halo field is from a $60$-cell ($15$ Mpc/$h$) wedge.
On large scales, the bright regions in the overdensity map tend to have
more halos, be ionized, and be dim in 21 cm. The correspondence between
the bright regions in the halo field, and the dim regions in the 21 cm
field, is the signal we characterize and quantify in this paper.
}
\label{fig:maps}
\end{center}
\end{figure*}
Let us now examine the main features of the simulated signal. To begin with, we consider the
McQuinn et al. (2007b) simulations, and focus on a model in which all
halos down to the atomic cooling mass contain sources with an ionizing
luminosity proportional to host halo mass. Further, we assume that
all halos above $M_{\rm g, min} = 10^{10} M_\odot$ contain galaxies
detectable by our hypothetical survey.
In what follows, this prescription for the ionizing sources and the
minimum detectable host halo mass constitutes our fiducial model. We denote
the minimum detectable host halo mass by $M_{\rm g, min}$, and the minimum host halo
mass for the ionizing sources by $M_{\rm x, min}$.
Presently we consider
a simulation snapshot at $z = 7.32$, at which point the filling factor of
ionized regions in our model is $\avg{x_i} = 0.54$.
It is illuminating to inspect the simulated fields visually before calculating
their detailed statistics. In Figure \ref{fig:maps} we show narrow
slices through
our simulated density, halo, ionization, and 21 cm fields. Here one
can clearly see that the bright regions in the halo map correspond
to dim regions in the 21 cm map, while dim regions in the halo map
correspond to bright regions in the 21 cm map. This {\em anti-correlation}
is the signal we characterize and calculate in the present paper.
As one can see from the panels of Figure \ref{fig:maps}, the
anti-correlation arises because galaxies are more abundant in large scale
overdense regions,
which hence ionize before typical regions. As a result, the overdense
regions contain less neutral hydrogen during reionization, and emit more
dimly in 21 cm than typical regions, while containing more galaxies
(see also Wyithe \& Loeb 2007).
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f2.eps}
\caption{The 21 cm-galaxy cross power spectrum and its constituent terms.
The signal is shown for our fiducial model at $\avg{x_i} = 0.54$.
{\em Top panel}: The absolute value of the 21 cm-galaxy cross power
spectrum (black solid line). The blue dashed line
shows the {\em x-gal} cross power spectrum, the
magenta long-dashed line shows the {\em $\rho-gal$} cross power spectrum,
and the green dotted line shows the {\em three-field} term. On small scales
the {\em three-field} and {\em $\rho-gal$} cross power spectra cancel each other
out rather closely. For contrast, we also show the cross power spectrum
between the neutral hydrogenic fraction and the density field (cyan dot-dashed
line).
{\em Bottom panel}: The cross correlation coefficient between the 21 cm
and galaxy fields as a function of wavenumber. The cyan dot-dashed line indicates the
cross-correlation coefficient between the neutral hydrogenic and density fields.
The red dotted line indicates
zero correlation coefficient. The sign of the signal in the top panel
can be inferred from the correlation coefficient shown here.
}
\label{fig:fid_decomp}
\end{center}
\end{figure}
In order to quantify these visual impressions, we calculate and
show the
21 cm-galaxy cross power spectrum in Figure \ref{fig:fid_decomp}. The
{\em top panel} shows the absolute value of the 21 cm-galaxy cross power spectrum,
as well as the individual terms of Equation (\ref{eq:p21_gal_cross}). The {\em bottom
panel} shows the cross correlation coefficient between the two fields,
$r(k) = P_{\rm 21, gal}(k)/\left[P_{\rm 21}(k) P_{\rm gal}(k)\right]^{1/2}$. In
estimating the cross-correlation coefficient here and throughout this paper, we
subtract shot-noise from the galaxy
power spectrum (before calculating $r(k)$) assuming that it is Poisson -- i.e., we assume
$P_{\rm shot} = 1/n_{\rm gal}$, where
$n_{\rm gal}$ is the abundance of halos above $M_{\rm g, min}$.
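The shot-noise correction just described can be written compactly. The following sketch (our own; spectra assumed in $({\rm Mpc}/h)^3$ units) forms $r(k)$ from binned auto and cross spectra with the Poisson term $P_{\rm shot} = 1/n_{\rm gal}$ removed from the galaxy auto spectrum.

```python
import numpy as np

def cross_corr_coeff(p_cross, p_21, p_gal, n_gal):
    """r(k) = P_21,gal / sqrt[ P_21 (P_gal - P_shot) ], with the
    Poisson shot noise P_shot = 1/n_gal subtracted from the galaxy
    auto spectrum before forming the coefficient.  All spectra are
    binned arrays over k; n_gal is the comoving number density of
    galaxies above the survey threshold M_g,min."""
    p_gal_corr = np.asarray(p_gal, dtype=float) - 1.0 / n_gal
    with np.errstate(invalid="ignore", divide="ignore"):
        denom = np.sqrt(np.asarray(p_21, dtype=float) * p_gal_corr)
        r = np.asarray(p_cross, dtype=float) / denom
    # bins where shot noise dominates (P_gal <= 1/n_gal) are undefined
    return np.where(denom > 0, r, np.nan)
```

A perfectly anti-correlated pair of fields returns $r(k) = -1$; shot-noise-dominated bins come back as NaN rather than a spurious value.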
The figure reveals several interesting features of the signal. On large scales
the 21 cm field is anti-correlated with the galaxy field. As explained and
visualized in Figure \ref{fig:maps}, this occurs because galaxies form
first, and ionize their surroundings, in overdense regions.
On the other hand, on small scales the 21 cm and galaxy fields are
roughly uncorrelated. We can understand this by examining the small-scale behavior
of the constituent terms, as shown in the {\em top panel}. The cross power spectrum
between neutral hydrogen fraction and galactic density ($\Delta^2_{\rm x, gal}(k)$, the
{\em x-gal} term) turns over on small scales, as
indicated by the blue-dashed line. This behavior is naturally similar to that of the
density-ionization cross power spectrum, which turns over on scales smaller than
the size of the H II regions during
reionization (Furlanetto et al. 2004, Zahn et al. 2007). The correlations die off on sub-bubble
scales because the
entire interior of each H II region is highly ionized, irrespective of the interior
density and galaxy fields.
For comparison, we additionally plot the cross
power spectrum between neutral hydrogen fraction and matter density. This resembles
the cross power spectrum between neutral hydrogen fraction and galactic density, but
it turns over on slightly smaller scales. As we explore further in \S \ref{sec:sources} and
\S \ref{sec:min_mass},
the turnover is on smaller scales owing to ionized bubbles around low mass halos, which
host galaxies below the detection threshold of our hypothetical galaxy survey.
The cross power spectrum
between the density field and the galaxy field is shown by the long-dashed magenta
line. Note the very strong clustering of these rare galaxies: the cross power spectrum
has an amplitude of unity, $\Delta^2_{\rm \rho, gal}(k) \sim 1$, on a scale of
$k \sim 1.8 h$ Mpc$^{-1}$. On the same scale the amplitude of the matter power
spectrum, $\Delta^2_{\rm \rho, \rho}(k)$, is a factor of $\gtrsim 7$ smaller. Hence
even though dark matter clustering is quasi-linear on relevant scales, the clustering
of detectable host-halos may be quite non-linear on the same scales.
Finally, let us
examine the {\em three-field} term, $\Delta^2_{\rm x \rho, gal}(k)$. This term is negative
in our calculations and appears to closely cancel out the {\em $\rho-gal$} cross power
spectrum on small scales. Owing to this cancellation, the shape of the 21 cm-galaxy
cross power spectrum closely mimics that of the {\em x-gal} cross
power spectrum.
The 21 cm-galaxy cross correlation may then offer a relatively direct tracer of bubble
growth during reionization: it traces the {\em x-gal} term, which turns over
on scales smaller than that of the H II regions around the minimum mass detectable galaxies.
We examine this further in \S \ref{sec:evol_sig}, but we first pause to
consider the {\em three-field} term more closely.
In order to understand why the {\em three field} and {\em $\rho-gal$} terms
cancel each other on small scales, it
is helpful to combine the two terms into a single one, and consider the two-point
correlation function rather than the power spectrum. Here we use similar reasoning
to that of Lidz et al. (2007a); see their \S 3.2.
The two terms are combined as:
\begin{eqnarray}
\Delta^2_{\rm \rho, gal}(k) + \Delta^2_{\rm x \rho, gal}(k) \propto
\rm{F.T.}\left[\avg{x(1) \delta_\rho (1) n_g(2)}\right].
\label{eq:combine_cancel}
\end{eqnarray}
Here $1$ and $2$ indicate spatial positions ${\bbox{x}}_1$ and ${\bbox{x}}_2$, respectively,
while $\rm{F.T.}$ refers to a Fourier transform. This equation follows from
expanding $x(1)$ on the right-hand side of the equation as
$x(1) = \avg{x} (1 + \delta_x(1))$, and using
$\avg{\left[1+\delta_x(1)\right] \delta_\rho(1) n_g(2)} =
\avg{\delta_\rho(1) n_g(2)} + \avg{\delta_x(1) \delta_\rho(1) n_g(2)} $.
We can write the above two-point function as:
\begin{eqnarray}
\avg{x(1) \delta_\rho(1) n_g(2)} = \int dx(1) d\delta_\rho(1) dn_g(2) \times \nonumber \\
x(1) \delta_\rho(1) n_g(2) P\left[x(1),\delta_\rho(1) | n_g(2)\right]
P\left[n_g(2)\right].
\label{eq:twop}
\end{eqnarray}
Provided we consider separations much smaller than the size of the H II regions,
a pair of points $(1)$ and $(2)$ will mostly be either each within the same ionized
bubble, or both outside an ionized bubble. If each point is within a bubble then the
pixel at position $(1)$ is ionized, $x(1)=0$, and this gives no
contribution to the two-point function of Equation (\ref{eq:twop}). On the other hand,
spatial points outside of bubbles do not contain detectable galaxies in this
model (although see \S \ref{sec:sources} for alternate cases), $n_g(2)=0$, and again
yield vanishing contributions to the two-point function of Equation (\ref{eq:twop}).
The two-point function of Equation (\ref{eq:twop}) must hence vanish on small scales in
our fiducial scenario, which explains the cancellation between the {\em $\rho-gal$} and
{\em three-field} terms seen in Figure \ref{fig:fid_decomp}. The ionization field
is a `mask' that surrounds {\em each}
galaxy in this model, eliminating the two-point function of Equation \ref{eq:twop} on
small scales.
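The masking argument can be checked with a toy numerical example: in a periodic 1D model where every galaxy sits at the center of a fully ionized bubble of radius $R$, the correlator $\avg{x(1) \delta_\rho(1) n_g(2)}$ vanishes identically for separations smaller than $R$. The setup below is purely illustrative (random density, hand-placed bubbles), not drawn from the simulations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D fields on a periodic grid: galaxies at random cells, each
# surrounded by a fully ionized bubble of radius R cells.
n_cells, n_gal, R = 4096, 40, 30
pos = rng.choice(n_cells, size=n_gal, replace=False)
n_g = np.zeros(n_cells)
n_g[pos] = 1.0

x_hi = np.ones(n_cells)                   # neutral fraction field
for p in pos:                             # carve out the H II bubbles
    x_hi[np.arange(p - R, p + R + 1) % n_cells] = 0.0

delta_rho = rng.standard_normal(n_cells)  # stand-in density field

def two_point(sep):
    """< x(1) delta_rho(1) n_g(2) > at separation sep (periodic)."""
    return np.mean(x_hi * delta_rho * np.roll(n_g, -sep))

# Every point within R of a galaxy is ionized, so the correlator is
# exactly zero for all separations sep <= R: the ionization field
# acts as a mask around each galaxy.
assert all(two_point(s) == 0.0 for s in range(R + 1))
```

On scales larger than the bubbles the correlator is no longer forced to vanish, which is why the cancellation between the {\em $\rho-gal$} and {\em three-field} terms holds only on sub-bubble scales.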
\subsection{Hybrid Simulations}
\label{sec:hybrid}
In addition to the full radiative transfer simulations of McQuinn et al. (2007a, 2007b),
we perform some calculations with the rapid hybrid scheme of
Zahn et al. (2007) and Mesinger \& Furlanetto (2007a). We use hybrid simulations with
two different boxsizes in this
work: one has $L_{\rm box} = 70$ Mpc/$h$, while the other
has $L_{\rm box} = 130$ Mpc/$h$. The density, halo, ionization, and 21 cm fields in each
simulation are tabulated on $512^3$ grid cells. In order to locate the halos using the
scheme of Mesinger \& Furlanetto (2007a), we
employ a grid of $1200^3$ cells in each simulation.
The smaller box calculation has higher resolution -- resolving halos down to
$M_{\rm min} \sim 10^8 M_\odot$ -- allowing us to accurately identify
halos with mass around the atomic cooling mass.
The larger
box has coarser mass resolution, $M_{\rm min} \sim 10^9 M_\odot$, but
better captures the small-$k$ 21 cm-galaxy cross spectrum. We refer the reader
to Zahn et al. (2007) and Mesinger \& Furlanetto (2007a) for a detailed description and
tests of the hybrid scheme. Here we briefly show that estimates of the 21 cm-galaxy
cross power spectrum from our hybrid simulations agree well with those from the full
radiative transfer simulations.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f3.eps}
\caption{Comparison between the hybrid and simulated 21 cm-galaxy cross
spectrum. {\em Top panel}: The absolute value of the 21 cm-galaxy cross power spectrum.
{\em Bottom panel}: The cross-correlation coefficient between the two fields. In each panel,
the red dotted line shows the results from the full radiative transfer simulation. The black
solid line shows results from cross-correlating the hybrid 21 cm field with the hybrid halo
field. The blue dashed line shows the cross-correlation between the hybrid 21 cm field, and
the halo field from the N-body simulation.
}
\label{fig:rco_21cm_gal_hf}
\end{center}
\end{figure}
In order to do this comparison, we use the initial conditions from the McQuinn et al. (2007b)
N-body simulation, and generate the halo field and ionization field using our hybrid scheme.
For the purposes of this comparison, in each of our hybrid and radiative transfer calculations,
we include ionizing sources only in halos that are
well resolved by the N-body simulation, with $M_{\rm x, min} \geq 8 \times 10^9 M_\odot$. That
is,
here we do not add low mass halos into the radiative transfer simulation with the appropriate
statistical properties as in McQuinn et al. (2007a) and other sections of this paper. We limit
our comparison to masses above $8 \times 10^9 M_\odot$
because these are the halos directly resolved in our N-body simulation, before small mass
halos are included statistically.
We cross-correlate the resulting 21 cm field with all halos above our fiducial choice of
$M_{\rm g, min} = 10^{10} M_\odot$.
The results of this comparison are shown in Figure \ref{fig:rco_21cm_gal_hf}, for outputs
with $\avg{x_i}$ just below $0.5$.
The agreement
between the hybrid and full radiative transfer calculations is quite good. In order to check how
much of the small difference between the two calculations comes from differences in the 21 cm
field, and how much from differences in the halo fields, we cross-correlate the hybrid
21 cm field with the simulated halo field (blue dashed lines).
Differences in the simulated and hybrid halo fields seem to be important on small scales, while
differences between the 21 cm fields in the two calculations lead to most of the difference
on large scales.
Regardless, the hybrid calculations agree well with the
full radiative transfer ones, and provide a useful means to estimate the 21 cm-galaxy cross
spectrum rapidly.
\section{Redshift Evolution of 21 cm-galaxy cross power spectrum}
\label{sec:evol_sig}
Now that we have introduced our simulation tools and
understand the basic 21 cm-galaxy cross power spectrum signal,
let us examine its dependence on redshift and ionization fraction.
How does the signal evolve as the filling factor of H II regions, and their characteristic
size, increase? To address this, we calculate the 21 cm-galaxy cross power spectrum
from our radiative transfer simulations, considering a wide range of redshifts in
order to span most of the reionization epoch. We start by adopting our fiducial
model with $M_{\rm g, min} = 10^{10} M_\odot$ at each redshift for simplicity.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f4.eps}
\caption{Redshift evolution of the 21 cm-galaxy cross power spectrum
in our fiducial model. {\em Top panel}: The absolute value
of the 21 cm-galaxy cross power spectrum at different ionization
fractions/redshifts. The redshifts and ionization fractions shown
are $(\avg{x_i}, z) = (0.02, 11.46); (0.15, 8.76);
(0.21, 8.34); (0.54, 7.32); (0.82, 6.90);$ and $(0.96, 6.77)$.
{\em Bottom panel}: The cross-correlation coefficient between the 21 cm
and galaxy fields as a function of wavenumber.
}
\label{fig:cross_v_z}
\end{center}
\end{figure}
The results of this calculation are shown in Figure \ref{fig:cross_v_z}.
At early times, near the very beginning of the reionization process
($\avg{x_i} = 0.02$), the galaxy and 21 cm fields are
{\em positively} correlated on large scales. At this stage, galaxies are
extremely rare objects and are only just starting to ionize their surroundings.
The galaxies turn on first in large scale overdense regions, which contain
more matter and initially more neutral hydrogen than large scale underdense
regions. These regions
hence glow more brightly in 21 cm emission and lead to a positive
21 cm-galaxy cross-correlation on large scales, as shown by the black solid
line in Figure \ref{fig:cross_v_z}.
The galaxies
quickly ionize their overdense surroundings which consequently dim in
21 cm emission. On the other hand, the large scale underdense regions are
still mostly free of galaxies and roughly maintain their initial 21 cm brightness
temperature. This leads to a brief period where the 21 cm-galaxy cross
correlation has low amplitude on large scales, as large scale overdense
regions dim in 21 cm emission and roughly equilibrate in brightness temperature
with large scale underdense regions
(Furlanetto et al. 2004, Wyithe \& Morales 2007; Lidz et al. 2007b
discuss a similar low-amplitude
epoch for the 21 cm power spectrum).
In our fiducial model, this `equilibration phase' occurs
when $\avg{x_i} \sim 0.15$, as shown by the red dotted line in the figure. This equilibration
epoch is relatively brief; the two fields quickly become anti-correlated on large scales.
A caveat to this discussion is that our calculations assume that the spin temperature
of the 21 cm transition is globally much larger than the CMB temperature. This approximation
will be inaccurate early in the reionization process (Pritchard \& Furlanetto 2007,
Pritchard \& Loeb 2008), and spin
temperature fluctuations may
complicate the cross-correlation signal close to the equilibration phase.
Modeling these fluctuations
is beyond the scope of the present paper, but they may modify our results at very early times, perhaps
when $\avg{x_i} \lesssim 0.1$ (Pritchard \& Furlanetto 2007).
More robust, and detectable in the near future, are our results during the bulk of
the reionization process, at which point the 21 cm and galaxy fields are anti-correlated.
Once the anti-correlation is established, its scale dependence varies with redshift and
ionization fraction.
This behavior is shown in the green short-dashed, blue long-dashed, cyan dot-dashed and
magenta dash-dotted lines which span model ionization fractions of
$\avg{x_i} = 0.21$ -- $0.96$.
As discussed in \S \ref{sec:cross}, this anti-correlation reflects the fact that galaxies
turn on first in
overdense environments and ionize their surroundings.
As the ionized fraction increases, and the H II regions grow, the cross-correlation turns over
on progressively larger scales.
This illustrates that the 21 cm-galaxy cross power spectrum
provides a relatively direct probe of bubble growth during reionization.
We pause here to mention one slight caveat regarding our modeling of the small scale
cross spectrum. The galactic
sources will themselves contain neutral hydrogen, a feature which is not properly included in our
calculations. (This leads to a 21 cm signal {\em after} reionization; see Wyithe \& Loeb 2008, Chang
et al. 2008.) This neglected contribution should cause the cross spectrum to
become positive on small
scales. Since the signal
from the diffuse IGM is much stronger on relevant scales than this galactic contribution (see
Lidz et al. 2007b, their \S 2.3, for an estimate), we do not expect this
to confuse the determination of the bubble-size induced turnover.
As remarked previously, the precise turnover scale depends on the minimum host halo mass
of the galaxies observed by our hypothetical survey. Roughly speaking, the turnover
is set by the size of ionized regions around detectable galaxy hosts, and is insensitive
to the size of ionized regions around fainter galaxies (see \S \ref{sec:min_mass}).
In practice, the minimum detectable host mass -- which impacts the turnover scale --
may vary with redshift in a complicated
way, depending on the flux-limit of the survey and the correlation between luminosity
and halo mass. This will make the evolution of the 21 cm-galaxy cross spectrum more
complicated than the illustrative results of Figure \ref{fig:cross_v_z}, which are at fixed
minimum host mass. In a flux-limited survey, the turnover scale will generally evolve
{\em less} strongly with time: in this case, one detects only massive galaxies at early
times, which tend to reside in larger bubbles than average.
In order to disentangle the impact of varying minimum host mass and that
of varying bubble size and ionization fraction, one could cross-correlate the 21 cm
signal with galaxies of varying luminosity and
use the galaxy auto spectrum and luminosity function to help understand
the correlation between galaxy luminosity and host halo mass. As we show
in \S \ref{sec:min_mass},
measuring
the turnover scale as a function of galaxy luminosity allows one to determine
the characteristic size of ionized bubbles as a function of luminosity.
The 21 cm auto spectrum itself evolves as the filling factor of H II regions
increases. Lidz et al. (2007b) explored how one might use the redshift evolution of the auto
spectrum to constrain the evolution of the H II region filling factor. The redshift
evolution of the cross spectrum, as considered here, would provide a complementary and
essentially independent means for constraining H II region growth. Ultimately, combining
the two measurements should provide a cross check on each measurement
and increase constraining power. More important, the cross spectrum provides a much
more direct indicator of characteristic bubble size than the auto spectrum. By measuring the
cross spectrum
in different galaxy luminosity bins, one can additionally determine how the bubble size depends
on galaxy luminosity, information which is not obtainable from the 21 cm auto spectrum alone.
Note that the ionized regions form under the collective influence of many individual galaxies,
but one still expects a statistical trend of bubble size with galaxy luminosity: more luminous
galaxies tend to live in more massive halos, which inhabit larger overdensities, and are typically
surrounded by larger ionized regions. Measuring the turnover in the cross spectrum for different
galaxy luminosity bins offers a unique means of quantifying this trend.
We will discuss the statistical power of several future surveys to detect the
cross spectrum evolution in \S \ref{sec:detectability}.
\section{Dependence on Ionizing Source Properties}
\label{sec:sources}
Now that we understand the basic features of the 21 cm-galaxy cross spectrum, we consider
variations around our fiducial model parameters.
First, let us examine how the signal depends on the properties
of the ionizing sources. Precisely which sources of light produce most of the photons
that reionize the IGM is highly uncertain. This depends on many poorly constrained quantities
such as the efficiency of star formation as a function of galaxy mass, the high redshift
stellar initial mass function (IMF),
the fraction of ionizing
photons that escape host galaxies to ionize the IGM and its dependence on host mass, the
degree to which photoionization and supernova feedback suppress star formation in low mass halos,
and other factors (e.g. Furlanetto et al. 2006a). A promising route to constrain some of these
uncertain parameters is to study
the differing impact these sources have on the surrounding IGM. Put simply, the IGM may provide
a valuable laboratory for studying the first luminous sources.
In this section we show that measurements of the 21 cm-galaxy cross spectrum may help constrain
ionizing source properties.
To explore this, let us start with a simple model and vary two of our model parameters.
First, we vary $M_{\rm x, min}$, the minimum mass of halos that host sources contributing to
reionization. Next, we vary $M_{\rm g, min}$, the
minimum host mass of galaxies detectable by our hypothetical galaxy survey. We explore the
impact of varying these parameters using $70$ Mpc/$h$ hybrid simulations, each normalized to
$\avg{x_i} = 0.5$ at $z = 6.9$.
To begin with, we fix the parameter $M_{\rm g, min}$ at $10^{10} M_\odot/h$ and
consider $M_{\rm x, min} = 10^8 M_\odot/h$,
$10^9 M_\odot/h$,
and $10^{10} M_\odot/h$ respectively.\footnote{Note that we generally quote masses in units
of $M_\odot$, but here and in \S \ref{sec:min_mass} (owing to imperfect planning) we use $M_\odot/h$
units, and so the choice of
$M_{\rm g, min} = 10^{10} M_\odot/h$ is slightly different than our fiducial choice of
$M_{\rm g, min} = 10^{10} M_\odot$.}
These are clearly simplified models, but they
suffice to illustrate the basic sensitivity of the signal to ionizing source properties.
These models should approximate
scenarios in which photo-heating (Thoul \& Weinberg 1996, Navarro \& Steinmetz 1997,
Dijkstra et al. 2004) or supernova feedback (e.g. Springel \& Hernquist 2003) limit the
efficiency of star-formation in small mass halos and diminish the contribution of
these halos to reionization.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f5.eps}
\caption{The 21 cm-galaxy cross power spectrum for models of varying $M_{\rm x, min}$.
In each model curve the efficiency of the ionizing sources is adjusted to yield
$\avg{x_i} \sim 0.5$ at $z=6.9$, and the minimum detectable galaxy host has a mass
of $M_{\rm g, min} = 10^{10} M_\odot/h$. {\em Top panel}: The absolute value of the
21 cm-galaxy cross power spectrum. {\em Bottom panel}: The correlation coefficient
between the galaxy and 21 cm fields in each case. The dependence of the cross power
spectrum on the host halo mass of the ionizing sources is rather mild.
}
\label{fig:rco_xmin}
\ec
\end{figure}
Varying $M_{\rm x, min}$ across the range shown in Figure \ref{fig:rco_xmin}
only weakly
influences the 21 cm-galaxy cross
spectrum. On large scales, the amplitude increases slightly with
increasing $M_{\rm x, min}$,
since the bias of the ionized regions is larger for larger values of
$M_{\rm x, min}$.
Note, however, that the small-scale turnover occurs at very similar scales
for each $M_{\rm x, min}$. This occurs because, in each case shown here, the minimum
detectable galaxy mass is larger than (or equal to) $M_{\rm x, min}$.
The cross-correlation is mostly
insensitive to the bubble sizes around smaller mass, undetectable sources. As alluded
to earlier, the {\em turnover in the cross spectrum depends on the bubble sizes around
galaxies above the minimum mass detectable by our hypothetical galaxy survey}, and is
mostly insensitive to the bubble sizes around lower mass hosts. Note that the auto spectra
of the ionization and 21 cm fields {\em do} depend on $M_{\rm x, min}$ (Furlanetto et al. 2006b,
McQuinn et al. 2007a, Lidz et al. 2007b) -- models with larger $M_{\rm x, min}$ have larger
bubbles (on average) at a given $\avg{x_i}$. However, it appears that the bubble sizes around high mass
galaxies (with $M \gtrsim M_{\rm g, min}$) change only slightly with increasing $M_{\rm x, min}$,
and hence the turnover scale in the cross spectrum is insensitive to $M_{\rm x, min}$. We have
verified this explicitly by calculating the average ionization as a function of distance
around halos with $M=M_{\rm g, min}$, for each of $M_{\rm x, min} = 10^8 M_\odot/h$ and
$M_{\rm x, min} = 10^{10} M_\odot/h$. The ionization profiles around the massive halos are very
similar in these models, supporting our interpretation.
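The verification described above can be scripted directly. Below is a minimal sketch of such a stacked ionization-profile measurement on a periodic grid; the function name and gridding choices are ours, not the paper's code:

```python
import numpy as np

def mean_ionization_profile(x_ion, halo_positions, box_size, r_max, n_bins=20):
    """Average ionized fraction versus comoving distance from a set of halos,
    stacked over the halo sample, on a periodic grid."""
    n = x_ion.shape[0]
    cell = box_size / n
    edges = np.linspace(0.0, r_max, n_bins + 1)
    prof = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    # Precompute grid offsets of every cell within r_max of a halo's cell.
    m = int(np.ceil(r_max / cell))
    off = np.arange(-m, m + 1)
    dx, dy, dz = np.meshgrid(off, off, off, indexing="ij")
    r = cell * np.sqrt(dx**2 + dy**2 + dz**2)
    sel = r <= r_max
    dx, dy, dz, r = dx[sel], dy[sel], dz[sel], r[sel]
    which = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
    for pos in halo_positions:
        i, j, k = np.floor(np.asarray(pos) / cell).astype(int) % n
        vals = x_ion[(i + dx) % n, (j + dy) % n, (k + dz) % n]
        np.add.at(prof, which, vals)
        np.add.at(counts, which, 1.0)
    return 0.5 * (edges[1:] + edges[:-1]), prof / np.maximum(counts, 1.0)
```

Comparing such profiles around halos with $M = M_{\rm g, min}$ for different $M_{\rm x, min}$ reproduces the check described in the text.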
Another possibility is that ionizing photons escape efficiently only from high mass
galaxies,
and that low mass sources do not contribute to reionizing the IGM. This possibility
is, in fact, suggested by the recent
escape fraction simulations of Gnedin et al. (2008).
Even if low mass galaxies have a
negligible
escape fraction, they may still form stars efficiently and
be detectable at wavelengths longward of the hydrogen ionization edge.
This scenario
produces an interesting signature in the 21 cm-galaxy cross power spectrum, provided
one has
a galaxy survey capable of detecting the, presumably faint, sources in these low mass
halos.
In order to explore this, we fix the minimum host halo mass of sources contributing to
the reionization of the IGM at $M_{\rm x, min} = 10^{10} M_\odot/h$ and calculate the
21 cm-galaxy cross spectrum with a galaxy survey probing sources in host halo masses
larger than each of $M_{\rm g, min} = 10^8, 10^9,$ and $10^{10} M_\odot/h$.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f6.eps}
\caption{21 cm-galaxy cross power spectra for models where sources in high
mass halos produce most of the ionizing photons. In each model, the minimum
host halo mass of sources that allow ionizing photons to escape into the IGM
is $M_{\rm x, min} = 10^{10} M_\odot/h$. {\em Top panel}: The absolute value
of the 21 cm-galaxy cross spectrum for galaxy surveys with minimum detectable
host halo masses of $M_{\rm g, min} = 10^8, 10^9$ and $10^{10} M_\odot/h$. {\em Bottom panel}: The
correlation coefficient between the galaxy and 21 cm fields in each case. The cross spectrum
and correlation coefficient turn positive on small scales for cases in which the galaxy
survey detects sources with mass below the minimum mass ionizing source.
}
\label{fig:rco_gmin}
\ec
\end{figure}
The results of this calculation are shown in Figure \ref{fig:rco_gmin}. On large scales,
the amplitude of the cross spectrum increases as one raises the minimum detectable host
halo mass. This increase simply owes to the usual increase in galaxy bias with increasing
minimum host halo mass.
Perhaps more interesting, however, are
the results on small scales when the minimum detectable
host halo mass is {\em lower} than the minimum ionizing source mass. In this case (see
the model curves with $M_{\rm g, min} = 10^8$ and $10^9 M_\odot/h$), the
cross spectrum turns over on larger scales than in the model in which
$M_{\rm g, min} = M_{\rm x, min}$; it then reverses sign and goes positive on small
scales.
Detecting this behavior would indicate that ionizing photons escape only from massive
host halos,
and not from lower mass hosts. An ambitious survey is needed to detect the faint
sources in
low mass halos, and to detect the 21 cm-galaxy cross spectrum on small scales (see
\S \ref{sec:detectability}).
Nonetheless,
the proposed signature would provide an interesting indication of a
small escape fraction from low mass galaxies. Moreover, this signature is relatively
direct -- any indication of a small escape fraction from low mass galaxies in the 21 cm auto
spectrum will be more subtle, and likely degenerate with other effects. Note, however, that
neutral hydrogen in the galaxies themselves may also result in a positive small-scale cross
spectrum (\S \ref{sec:evol_sig}), and it might be tricky to distinguish this from our
escape fraction scenario.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f7.eps}
\caption{Decomposition of the 21 cm-galaxy cross spectrum for a model in which the minimum
detectable galaxy host mass is below the minimum host mass for ionizing sources. Here
we show the decomposition for a
model with $M_{\rm x, min} = 10^{10} M_\odot/h$ and $M_{\rm g, min} = 10^8 M_\odot/h$.
{\em Top panel}: As in Figure \ref{fig:fid_decomp}, the absolute value of the
21 cm-galaxy cross spectrum, as well as the {\em x-gal}, {\em $\rho$-gal} and
{\em three-field} terms. Additionally, we show the cross power spectrum between the
neutral hydrogen field and the density field itself. Unlike in our previous models, the
{\em three-field} and {\em $\rho$-gal} terms do not perfectly cancel each other on small
scales. This results because there are
detectable galaxies outside
of ionized regions in this model. Consequently, the 21 cm-galaxy cross spectrum changes
sign around $k \sim 1 h$ Mpc$^{-1}$ and goes positive on small scales.
{\em Bottom panel}: The cross-correlation coefficient between the 21 cm and galaxy fields,
as well as the cross-correlation between the neutral hydrogen and density fields.
}
\label{fig:decomp_gmin}
\ec
\end{figure}
In order to understand this effect better, it is useful to calculate each of the
terms in Equation (\ref{eq:p21_gal_cross}) separately. In Figure \ref{fig:decomp_gmin}
we examine each of these
terms for our model with $M_{\rm x, min} = 10^{10} M_\odot/h$ and
$M_{\rm g, min} = 10^8 M_\odot/h$. In this case, the {\em three-field} and {\em $\rho$-gal}
terms do not cancel each other, unlike in our fiducial case (Figure \ref{fig:fid_decomp}).
This occurs because in this model low mass galaxies do not leak ionizing photons into
the IGM, and can hence reside outside of the ionized regions which -- in this model -- are formed
only by sources residing
in higher mass halos. In this way, some low mass halos escape the `masking' effect of the
ionized regions (see \S \ref{sec:sim_sig} and Equation \ref{eq:twop}), and -- since
these low mass galaxies are correlated with the underlying
density field -- produce a {\em positive} small scale 21 cm-galaxy cross power spectrum.
\section{The Impact of Lyman-limit Systems}
\label{sec:recombs}
Next we consider the impact of Lyman-limit systems on our 21 cm-galaxy cross spectrum
calculations.
Once reionization is complete, most ionizing photons are absorbed in dense blobs of
neutral gas
known as Lyman-limit systems. Lyman-limit systems can also limit
the mean free path of ionizing photons and halt the growth of
H II regions (Furlanetto \& Oh 2005, McQuinn et al. 2007a) during reionization itself,
particularly
towards the end of reionization.
The precise physical nature and abundance of these systems at high redshift is
highly uncertain,
as is their role as photon sinks during reionization.
Lyman-limit systems may be especially numerous and have a strong effect if `mini-halos' --
halos with mass less than the atomic cooling mass --
manage to survive pre-heating prior to reionization (Oh \& Haiman 2003) and are abundant
during
reionization (Haiman et al. 2001, Barkana \& Loeb 2002, Shapiro et al. 2004).
In order to quantify the impact of Lyman-limit systems on the 21 cm-galaxy cross
spectrum, we use the hybrid simulation scheme of
Zahn et al. (2007), generalized to include the recombination excursion-set
barrier of Furlanetto \& Oh (2005). In order to capture the small-$k$ power spectrum --
where we expect the Lyman-limit systems to have the most impact -- we use the
$L_{\rm box} = 130$ Mpc/$h$ hybrid simulation. In this section, for simplicity, our hybrid
simulation adopts the pure Press-Schechter ionization barrier
of Furlanetto et al. (2004), rather than the halo-smoothing algorithm (see Zahn et al.
(2007) for comparisons). Since the present work is the first to incorporate the recombination
barrier into a hybrid simulation scheme, we briefly review this model here, but refer
the reader to Furlanetto \& Oh (2005) for more details.
The recombination barrier reflects the requirement that for an H II region to grow, the instantaneous rate of
photon production from the sources within the H II region must at least match the recombination rate
of the ionized material inside the H II region.
In Furlanetto \& Oh (2005), the recombination rate is calculated using the model of
Miralda-Escud\'e et al. (2000). In this model, at any given time the interior of an H II region
is ionized up to a small-scale overdensity threshold
$\Delta_i$, above which the gas is neutral. The mean free path to ionizing photons is
then
determined by the volume-filling factor of these overdense islands. In
particular,
the {\em proper} mean free path to ionizing photons is given by:
\begin{eqnarray}
\lambda(z) = \lambda_0(z) \left[1 - F_v(\Delta_i)\right]^{-2/3},
\label{eq:lambda}
\end{eqnarray}
where $F_v(\Delta_i)$ denotes the volume-filling factor of regions with $\Delta < \Delta_i$, and
$\lambda_0(z)$ is a normalization factor, which is given by $\lambda_0(z) H(z) = 60$ km/s
in Miralda-Escud\'e et al. (2000).
Here we leave $\lambda_0(z)$ as a free parameter to gauge the dependence of our results on the
observationally and theoretically uncertain mean
free path. The filling factor $F_v(\Delta_i)$ is computed using the gas density pdf in
Miralda-Escud\'e et al. (2000). Similarly, the recombination rate for the ionized gas, in a region of
large-scale overdensity $\delta$
ionized up to an overdensity $\Delta_i$,
is given by:
\begin{eqnarray}
A(\delta, \Delta_i) = \alpha_A n_e (1 + \delta) \int_0^{\Delta_i} d\Delta \Delta^2 P(\Delta).
\label{eq:cfac}
\end{eqnarray}
Here $\alpha_A$ denotes the case-A recombination coefficient for the ionized gas, which we assume to be
at $10^4$ K, $n_e$ denotes the mean electron density in the IGM, and $P(\Delta)$ is the gas density
pdf from Miralda-Escud\'e et al. (2000). We assume helium is mostly singly-ionized,
but not doubly-ionized, within the bubble interiors. The recombination rate
formula assumes that the density pdf, $P(\Delta)$,
is independent of large-scale overdensity, $\delta$, which should be a good approximation for the large
scales relevant here (Furlanetto \& Oh 2005).
With this formula for the recombination rate in hand, Furlanetto \& Oh (2005) write down an excursion set
barrier for a region of size $R$ and overdensity $\delta$ to overcome recombinations and be ionized by
interior sources. In our notation, this formula is:
\begin{eqnarray}
\zeta \frac{df_{\rm coll}(\delta, R)}{dt} > A(\delta, R),
\label{eq:rec_barr}
\end{eqnarray}
where $\zeta$ denotes the ionizing efficiency of the sources, $f_{\rm coll}$ denotes the collapse fraction
in halos above the minimum host halo mass, and $R$ is equated with the mean free path, $\lambda$,
which sets $\Delta_i$ through Equation (\ref{eq:lambda}), and $A(\delta, R)$ through Equation (\ref{eq:cfac}).
We implement this barrier, and apply it in a Monte Carlo fashion (Zahn et al. 2007, Mesinger \& Furlanetto 2007a), in conjunction with the normal
Furlanetto et al. (2004) barrier.
This barrier effectively prohibits ionizing photons from propagating long distances, as regulated by the parameter $\lambda_0(z)$, and decreases the level of ionization fluctuations on large scales.
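Schematically, the Monte Carlo amounts to checking, scale by scale, whether a smoothed region satisfies the ionization criterion, with the recombination barrier excluding the largest scales. The sketch below uses the conditional Press-Schechter collapse fraction and crudely models the recombination barrier as a hard cutoff at $R_{\rm rec, barr}$; the actual implementation follows Furlanetto \& Oh (2005) in detail rather than this simplification:

```python
import math

def ionized(delta_R, sigma_R, sigma_min, zeta, delta_c=1.686):
    """Furlanetto et al. (2004) criterion zeta * f_coll >= 1 for a region of
    linear overdensity delta_R smoothed on a scale with variance sigma_R**2;
    sigma_min**2 is the variance at the minimum source halo mass. Uses the
    conditional Press-Schechter collapse fraction."""
    var = sigma_min**2 - sigma_R**2
    if var <= 0.0:
        return False
    f_coll = math.erfc((delta_c - delta_R) / math.sqrt(2.0 * var))
    return zeta * f_coll >= 1.0

def is_point_ionized(deltas, sigmas, scales, sigma_min, zeta, r_rec_barr):
    """Scan the excursion-set trajectory from large to small smoothing scales;
    the recombination barrier is crudely modeled as a hard cutoff excluding
    scales above r_rec_barr."""
    for delta_R, sigma_R, R in zip(deltas, sigmas, scales):
        if R <= r_rec_barr and ionized(delta_R, sigma_R, sigma_min, zeta):
            return True
    return False
```

Shrinking `r_rec_barr` removes large-scale crossings, which is the mechanism that suppresses large-scale ionization fluctuations in the text.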
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f8.eps}
\caption{Dependence of the 21 cm-galaxy cross power spectrum on the
abundance of Lyman-limit systems. {\em Top panel}: The absolute value of the
cross spectrum. {\em Bottom panel}: The cross-correlation coefficient.
The black solid line shows the cross spectrum in our
fiducial model at $\avg{x_i} = 0.8$ ($z=6.90$). The red dotted line shows an equivalent
model from our hybrid simulation scheme. The green short-dashed and blue long-dashed lines
show cross spectrum calculations for models with abundant Lyman-limit systems (see text).
The recombination barrier scales (see text) for the models with
$\lambda_0(z) H(z) = 5$ km/s, and $\lambda_0(z) H(z) = 10$ km/s are
$R_{\rm rec, barr} = 4, 7$ Mpc/$h$ respectively.
The Lyman-limit systems force the cross spectrum to turn-over towards large scales, but the
effect is relatively mild.
}
\label{fig:rco_recs}
\ec
\end{figure}
In order to see how this can impact the 21 cm-galaxy cross power spectrum, we calculate
the signal
with hybrid simulations of varying $\lambda_0(z)$. In particular, we
consider $\avg{x_i} = 0.8$ and
vary $\lambda_0(z)$ over the range $\lambda_0(z) H(z) = 5-60$ km/s. We span here a
rather broad range of models, which is appropriate given our limited observational
constraints on the mean free path to ionizing photons at high redshift. The results of
calculations with no Lyman-limit systems, and each of $\lambda_0(z) H(z) = 5$ and
$10$ km/s,
are shown in Figure \ref{fig:rco_recs}. Since $\lambda_0(z) H(z) = 5$ km/s is $1/12$th of
the fiducial Miralda-Escud\'e et al. (2000) value, our values represent rather extreme
choices for the mean free path.
A more meaningful characterization than
$\lambda_0(z)$ is the characteristic scale where the recombination barrier
(Equation \ref{eq:rec_barr}) crosses the usual Furlanetto et al. (2004) barrier
(see Furlanetto \& Oh 2005). We
denote the scale where the two barriers cross by $R_{\rm rec, barr}$.
For models with $\lambda_0(z) H(z) = 5, 10$ km/s (and $\avg{x_i} = 0.8$, $z=6.90$),
the barriers cross at respective radii of $R_{\rm rec, barr} = 4$ and $7$ Mpc/$h$.
Note that, for our choice of model parameters, the recombination barriers are
not so steep, and so some fraction of
points do manage to cross the barriers on smoothing scales roughly twice as large as
$R_{\rm rec, barr}$.
Comparing the red and black model
curves, we see that the hybrid scheme accurately captures the 21 cm-galaxy cross power
spectrum signal from our full radiative transfer simulations in the case without Lyman-limit
systems, as in Figure \ref{fig:rco_21cm_gal_hf}.
The model curves with $\lambda_0(z) H(z) = 10$ km/s
($R_{\rm rec, barr} = 7$ Mpc/$h$) and $\lambda_0(z) H(z) = 5$ km/s
($R_{\rm rec, barr} = 4$ Mpc/$h$)
illustrate that decreasing the mean free path forces the 21 cm-galaxy cross power
spectrum to turn over towards large scales, rather than simply flattening out on large
scales as in our fiducial model.\footnote{In our fiducial model, the 21 cm-galaxy cross power
spectrum should turn over on some scale larger than that of our simulation box. Note however
that foreground contamination may make such scales inaccessible to future 21 cm observations
(McQuinn et al. 2006).} The cross spectrum
develops a better-defined characteristic scale as the mean free path decreases.
This trend results because decreasing the mean free path limits the formation
of very large H II regions which in turn reduces the amount of large scale cross power.
Although the figure illustrates a clear trend of decreasing large scale power with
decreasing mean free path, we caution that there is no simple one-to-one correspondence
between mean free path and H II region size. As a specific illustration of the
distinction between the mean free path and H II region size, consider the
post-reionization IGM, where essentially the entire volume
is ionized: the bubble size is hence effectively
infinite, while the mean free path is still finite.
Note, however, that while this trend is clear, the dependence on the abundance of Lyman-limit systems
is rather weak. This may
preclude strong constraints on the abundance of Lyman-limit systems from future measurements
of the 21 cm-galaxy cross power spectrum. On the other hand, it implies that the
mean free path to ionizing photons is not a very important factor in our modeling
of the cross spectrum. This should allow us to robustly constrain other parameters
from future measurements, in spite of our ignorance of the high redshift mean free path.
\section{Dependence on Galaxy-Selection Technique}
\label{sec:selection}
In this section, we consider the dependence of the
21 cm-galaxy cross power spectrum on the manner in which the galaxies are selected.
Thus far we have calculated the 21 cm-galaxy cross power spectrum by cross-correlating our
21 cm field with all simulated halos above some minimum detectable halo mass cut. In
other
words, we assume that each simulated dark matter halo contains one luminous galaxy, and that the
flux limit of our hypothetical galaxy survey corresponds precisely to a minimum host halo mass.
This is clearly a vast simplification, and so it is important to explore the signal's sensitivity to
the minimum mass cut. Note, however, that since we consider only scales much larger than
the halo virial radius, we are {\em not} sensitive to the distribution of galaxies within
each host halo (e.g. Scoccimarro et al. 2001).
Another important effect is that galaxies selected on the basis of Ly-$\alpha$
emission will have a different 21 cm-galaxy cross-correlation than galaxies selected by, for
example,
the Lyman-break technique. We presently explore the sensitivity of our results to the {\em type} of
galaxy selected by our hypothetical survey.
\subsection{Minimum Detectable Mass}
\label{sec:min_mass}
Let us first fix the population of galaxies responsible for reionizing
the IGM, and the resulting ionization field, while varying the minimum host halo mass containing
galaxies detectable by our hypothetical survey.
We explored this issue somewhat already in \S \ref{sec:sources}, but there we focused
on scenarios in which ionizing photons do not escape from low mass galaxies -- i.e.,
cases where $M_{\rm g, min} < M_{\rm x, min}$. Here we focus on models in which
ionizing photons manage to escape from low mass halos, yet such sources are too faint
to be detectable by our hypothetical galaxy survey. In other words, we consider cases
where $M_{\rm g, min} > M_{\rm x, min}$. This is likely the more relevant case
for first generation surveys where it will be difficult to detect the presumably
faint galaxies that reside in low mass halos.
Another point is that it is unlikely that {\em all} halos above some $M_{\rm g, min}$
host galaxies that actively produce detectable photons at any given instant of time.
In other words, the `duty cycle' -- the fraction of halos above a given mass which
contain galaxies actively radiating at a particular time -- is likely less
than unity.
As quantified below, varying the minimum detectable host mass impacts the mean
21 cm-galaxy cross power
spectrum, but
reducing the duty cycle of detectable galaxies does not by itself change the average cross power
spectrum signal. This is because galaxy bias is independent of duty cycle, provided that
the duty
cycle is itself independent of mass for halos above the minimum detectable host mass.
Decreasing the
duty cycle instead increases the level of Poisson fluctuations in the galaxy abundance,
which
increases the cross spectrum {\em variance} -- and makes the cross spectrum more difficult to
detect (Furlanetto \& Lidz 2007, \S \ref{sec:detectability}) -- while preserving
the {\em average} cross power spectrum.
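The duty-cycle argument is easy to demonstrate with a toy Monte Carlo: thinning a Poisson galaxy sample leaves the mean overdensity (and hence the mean cross power with any other field) unbiased, while inflating the shot-noise variance as $1/(f_{\rm duty}\bar{n})$. The model below is ours and contains no clustering, so all of its variance is shot noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def galaxy_overdensity(n_cells, mean_per_cell, duty_cycle):
    """Poisson-sample galaxy counts in cells, thin by the duty cycle, and
    return the galaxy overdensity field. No clustering is included, so all
    of the variance here is shot noise."""
    counts = rng.poisson(mean_per_cell, size=n_cells)
    kept = rng.binomial(counts, duty_cycle)
    return kept / (mean_per_cell * duty_cycle) - 1.0

full = galaxy_overdensity(200_000, 20.0, duty_cycle=1.0)
quarter = galaxy_overdensity(200_000, 20.0, duty_cycle=0.25)
# The mean overdensity is unchanged, while the shot-noise variance grows
# from 1/20 to 1/5 as the duty cycle drops from 1 to 1/4.
```
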
Presently, we focus on how the minimum host halo mass impacts the mean signal, and defer
a discussion of the signal variance to \S \ref{sec:detectability}.
Of course our assumption that the duty cycle is independent of host mass may be too simplistic and
modifying this assumption may impact our results in detail. Our simple model should, however, suffice
to
illustrate the basic sensitivity to host halo mass. Moreover, in practice one can constrain
the run of duty cycle with halo mass from the observed galaxy luminosity function and
galaxy-galaxy auto power spectrum, which can then inform models for the 21 cm-galaxy
cross spectrum.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f9.eps}
\caption{Dependence of 21 cm-galaxy cross power spectrum on the minimum
detectable galaxy mass. {\em Top panel}: The cross power spectrum
for a survey that detects all galaxies in halos of mass larger
than $M_{\rm g, min} = 10^9 M_\odot/h$,
$M_{\rm g, min} = 10^{10} M_\odot/h, 5 \times 10^{10} M_\odot/h$ and
$10^{11} M_\odot/h$ respectively. For each model curve, we fix the minimum
host mass of the ionizing sources at $M_{\rm x, min} = 10^8 M_\odot/h$.
{\em Bottom panel}: The correlation coefficient between the
galaxy and 21 cm fields in each case.
}
\label{fig:rco_galmass}
\ec
\end{figure}
The results of varying the minimum detectable host halo mass are shown in Figure
\ref{fig:rco_galmass}. Here we use the $70$ Mpc/$h$ hybrid simulation, fix
$M_{\rm x, min} = 10^8 M_\odot/h$ (just a little above the atomic cooling mass), and vary
$M_{\rm g, min}$ from $10^9 M_\odot/h$ to $10^{10} M_\odot/h, 5 \times 10^{10} M_\odot/h$,
and $10^{11} M_\odot/h$. On large scales one sees the usual increase in the
amplitude of the cross spectrum as $M_{\rm g, min}$ increases. On small scales, the cross spectrum
turns over on progressively smaller scales as $M_{\rm g, min}$ decreases, and the
cross-correlation starts
to sample the small bubbles around the lower mass halos. This is a continuation of
the behavior seen in Figure \ref{fig:rco_xmin}. It illustrates that the turnover scale needs
to be interpreted with caution, since it is sensitive to $M_{\rm g, min}$. The dependence of
turnover scale on luminosity is very interesting, however; examining it amounts to a measurement of
the characteristic bubble size around galaxies of varying luminosity. In order to best constrain
this dependence one needs a galaxy survey with a sufficiently large dynamic range in
luminosity, and
one needs
to examine the luminosity dependence of the galaxy luminosity function, auto spectrum and
cross spectrum. This also highlights the scientific benefit of measuring the 21 cm-galaxy cross
spectrum, as it is impossible to determine the luminosity dependence of the bubble
size distribution
from the 21 cm auto spectrum alone.
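As an illustration of how such a turnover scale might be extracted from a measured curve, the sketch below (not a procedure from the text; the half-plateau criterion and the `turnover_scale` helper are illustrative choices) locates the wavenumber where $|r(k)|$ falls to half of its large-scale value, using a toy anticorrelated curve with a known turnover:

```python
import numpy as np

def turnover_scale(k, r, frac=0.5):
    """Illustrative estimator: smallest wavenumber at which |r| drops
    below `frac` times its large-scale (small-k) plateau value."""
    k, r = np.asarray(k), np.asarray(r)
    plateau = np.abs(r[np.argmin(k)])      # |r| at the largest scale probed
    below = np.abs(r) < frac * plateau
    return k[below].min() if below.any() else None

# Toy anticorrelated curve with a known turnover near k0:
k = np.logspace(-2, 1, 200)                # wavenumbers [Mpc^-1]
k0 = 0.5                                   # assumed turnover scale
r = -1.0 / (1.0 + (k / k0)**2)             # r -> -1 on large scales
print(turnover_scale(k, r))                # recovers a value near k0
```

Applied in bins of galaxy luminosity, an estimator of this kind would trace out the luminosity dependence of the characteristic bubble size discussed above.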
\subsection{Lyman-alpha Selected Galaxies}
\label{sec:laes}
A successful approach for finding high redshift galaxies is to search for Ly-$\alpha$ emission,
which is frequently strong in young galaxies (Partridge \& Peebles 1967). There are numerous
existing and planned Ly-$\alpha$ emitter (LAE) surveys (e.g. Rhoads et al. 2004, Kashikawa et al. 2006, Stark et al. 2007), with the Subaru telescope currently providing the largest high redshift
sample, consisting of $\sim 58$ photometric LAEs at $z = 6.5$ discovered
in a $\sim 30^{'} \times 30^{'}$
field (Kashikawa et al. 2006). LAE surveys have an advantage over high redshift Lyman break
surveys in that they target narrow wavelength intervals, in between strong night sky background lines, in
search of strong emission lines. This allows one to detect galaxies that are unobservable by Lyman
break selection owing to the strong night sky background at the relevant wavelengths; sizable Lyman-break
galaxy catalogues likely await a widefield, near-infrared instrument in space or 30-meter class telescopes
on the ground.
Existing LAE surveys and their extensions
hence likely provide the first opportunity to detect the 21 cm-galaxy
cross power spectrum, particularly if the IGM is partly neutral at $z \sim 6.6$ (Wyithe \& Loeb 2007,
Furlanetto \& Lidz 2007, \S \ref{sec:detectability}).
To this end, we would like to consider the cross correlation between 21 cm and {\em Ly-$\alpha$ selected
galaxies}. In contrast to galaxies selected via, for example, the Lyman break or H-$\alpha$, the abundance
of observable Ly-$\alpha$ selected galaxies will be modulated by the presence of neutral hydrogen, impacting
their clustering (Furlanetto et al. 2006c, McQuinn et al. 2007a, 2007b, Mesinger \& Furlanetto 2007b, 2007c)
and the 21 cm-galaxy cross power
spectrum. This modulation occurs because damping wing absorption extinguishes the Ly-$\alpha$ line
for sources sufficiently close to the edge of an H II region (Miralda-Escud\'e 1998), where there
is an adjacent column of neutral hydrogen. This means Ly-$\alpha$ selected galaxies will lie towards
the center of large-ish, $R \gtrsim 1$ proper Mpc, H II regions
(Furlanetto et al. 2006c, McQuinn et al. 2007a, 2007b, Mesinger \& Furlanetto 2007b, 2007c). Owing to
this, and because observable galaxies will have larger masses after Ly-$\alpha$ selection, the
clustering of Ly-$\alpha$ selected galaxies should increase as such galaxies are detected at earlier
and earlier stages of reionization.
Thus far our mock galaxies have been {\em uniformly selected} -- i.e., not modulated by the presence
of neutral hydrogen. While this is appropriate for Lyman break selected galaxies, it is incorrect for
Ly-$\alpha$ selected galaxies before reionization completes.
In order to examine the impact of Ly-$\alpha$ selection on the 21 cm-galaxy cross spectrum, we compute
the damping wing optical depth, $\tau_D$, towards each of our target halos. For simplicity, we calculate only
the damping wing optical depth at line
center (see e.g. Equations (1) and (2) of Mesinger \& Furlanetto 2007c) and do not model resonant
absorption (see McQuinn et al. 2007b, Mesinger \& Furlanetto 2007b, Dijkstra et al. 2007 for discussion). Assuming that
each source's luminosity is proportional to its host halo mass, and adopting our fiducial choice
of $M_{\rm g, min} = 10^{10} M_\odot$, our Ly-$\alpha$ survey detects sources with $M \exp[-\tau_D] \geq
M_{\rm g, min}$.
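A minimal sketch of this selection criterion, applied to a hypothetical mock catalogue (the mass and optical-depth distributions below are purely illustrative, not those of the simulation):

```python
import numpy as np

# Hypothetical mock catalogue: halo masses [M_sun] and damping-wing
# optical depths tau_D at line center (distributions chosen for
# illustration only).
rng = np.random.default_rng(0)
mass = 10**rng.uniform(9.5, 11.5, size=10000)    # M_sun
tau_d = rng.exponential(scale=1.0, size=10000)   # dimensionless

M_g_min = 1e10                                   # survey threshold [M_sun]

uniform_sel = mass >= M_g_min                    # uniform (Lyman-break-like) cut
lya_sel = mass * np.exp(-tau_d) >= M_g_min       # Ly-alpha selection criterion

# Since exp(-tau_D) <= 1, Ly-alpha selection is strictly more restrictive
# at fixed M_g_min:
print(lya_sel.sum(), "<=", uniform_sel.sum())
```

Because the attenuated sample preferentially retains massive halos in large, highly ionized regions, its effective clustering exceeds that of the uniformly selected sample, as discussed below.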
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f10.eps}
\caption{The 21 cm-galaxy cross power spectrum for Ly-$\alpha$ selected
galaxies. {\em Top panel}: The 21 cm-galaxy cross power spectrum for each
of Ly-$\alpha$ selected galaxies (dashed lines) and `all galaxies' (solid lines).
We show results at $(\avg{x_i}, z) = (0.21, 8.34), (0.54, 7.32), (0.82, 6.90)$, and
$M_{\rm g, min} = 10^{10} M_\odot$.
{\em Bottom panel}: The cross-correlation
coefficient between the 21 cm and galaxy fields for the model curves in the top panel.
}
\label{fig:rco_lae}
\ec
\end{figure}
With mock Ly-$\alpha$ selected galaxy catalogues in hand, we calculate the cross
power spectrum
of these galaxies with the 21 cm field for a few outputs of differing ionization fractions and
redshifts. The results of this calculation are shown in Figure \ref{fig:rco_lae}. Comparing
the cross spectra of the Ly-$\alpha$ selected galaxies ({\em top panel}, dashed lines) and uniformly
selected galaxies (solid lines), we
see that the large-scale amplitudes of the cross spectra are higher,
and that the signal turns over on larger scales, for the Ly-$\alpha$ selected galaxy
samples. It is easy to understand these trends qualitatively.
For a galaxy to be visible
in Ly-$\alpha$ it must reside in a sufficiently large H II region -- or more accurately, it needs to reside
along a sufficiently long ionized {\em skewer} -- to avoid complete attenuation owing
to damping wing absorption.
The largest H II regions form around the most clustered sources, and
so the galaxies detectable in Ly-$\alpha$ are more clustered than uniformly selected
galaxies of the same host halo mass
(Furlanetto et al. 2006c, McQuinn et al. 2007a, 2007b, Mesinger \& Furlanetto 2007b, 2007c). This
enhanced clustering is reflected
in the boosted large scale 21 cm-galaxy cross power spectrum.
Likewise, the turnover on small scales
is set by the characteristic H II region size around {\em detectable} galaxies, which increases for the
Ly-$\alpha$ selected galaxies: Ly-$\alpha$ galaxies residing in small bubbles are attenuated
out of the sample by damping wing absorption.
This is visualized most clearly in the cross-correlation coefficient between the 21 cm and galaxy
fields (Figure \ref{fig:rco_lae}, {\em bottom panel}). In the uniformly-selected galaxy sample,
the correlation coefficient turns over on progressively larger scales as reionization proceeds. By contrast,
the small scale turnover in the Ly-$\alpha$ selected sample is relatively fixed. At early times when
the bubbles are small, the
turnover in the cross spectrum with the Ly-$\alpha$ selected sample
{\em largely reflects the damping wing scale}. In order to best characterize bubble growth in the early
and middle stages of reionization, one requires a uniformly-selected galaxy sample, rather than a Ly-$\alpha$
selected sample. Finally, note that the cross correlation coefficient in the Ly-$\alpha$ selected samples
does not quite reach $r=-1$ on large scales. This behavior is enhanced early in reionization,
presumably because Ly-$\alpha$ selection misses the small bubbles, which contribute most
significantly at low ionized fractions.
In summary, while Ly-$\alpha$ selected samples will be interesting for initial cross
spectrum detections,
uniformly-selected samples will be required to best constrain bubble growth during reionization.
\section{Detectability}
\label{sec:detectability}
In this section we calculate the statistical significance
at which future surveys can detect the 21 cm-galaxy cross spectrum and briefly consider the
resulting insights
into reionization. Here we follow closely the calculations in Furlanetto \& Lidz (2007), simply
extending them to incorporate our simulated cross spectrum signal.
In our calculations we consider a 21 cm survey with the specifications planned for each of the
MWA (Bowman et al. 2006, McQuinn et al. 2006, Mao et al. 2008) and LOFAR, which we review below
(\S \ref{sec:mwa_survey} and \S \ref{sec:lofar}). We consider two basic types of galaxy surveys. First,
we consider a survey similar to the Subaru deep field survey for Ly-$\alpha$ emitters (Kashikawa et al. 2006).
Since the Subaru survey is ongoing, this calculation should illustrate what is achievable in the near
future as the MWA and LOFAR come online. Note that the present Subaru deep field does not overlap
with the planned MWA target fields (M. Morales, private communication, 2008), but our calculations
still serve to illustrate what is possible in the near future.
Next, we consider a more futuristic galaxy survey. Coupling our futuristic galaxy survey with the
MWA or LOFAR, one can potentially measure the cross power spectrum at several redshifts, probing the
{\em evolution} in the cross spectrum signal, and tracing the growth of H II regions during
reionization as in Figure \ref{fig:cross_v_z}.
\subsection{Statistical Error Estimates}
\label{sec:errorbar}
To begin with, we describe our statistical error estimates, reviewing the formulae for cross spectrum
error bars for a survey of given specifications, incorporating sample variance, thermal noise in the 21 cm
radio telescope, and shot-noise and redshift errors in the galaxy distribution.
Here we restrict ourselves to the spherically averaged
cross spectrum, since the MWA and LOFAR have limited transverse sensitivity and since very precise
galaxy redshifts will be required to measure the angular dependence of the cross
spectrum (Furlanetto \& Lidz 2007).
We generally find it convenient to estimate error bars on the cross-correlation
coefficient, $r(k)$, rather
than on the cross spectrum itself. We desire an estimate of the error bar on $r(k)$ calculated
from spherically averaged auto and cross spectra in a bin of logarithmic width $\epsilon = d\ln k$.
For notational convenience let us denote the cross-correlation coefficient by
$r(k) = P_{\rm 21, gal}(k)/[P_{\rm 21}(k) P_{\rm gal}(k)]^{1/2} =
A(k)/[B(k) C(k)]^{1/2}$. Propagating errors, the fractional error on the
cross-correlation coefficient is:
\begin{eqnarray}
\frac{\sigma_r^2}{r^2}(k) = && \frac{\sigma_A^2}{A^2}(k) + \frac{\sigma_B^2}{4 B^2}(k) +
\frac{\sigma_C^2}{4 C^2}(k) - \frac{\sigma_{AB}^2}{A B}(k) \nonumber \\
&& - \frac{\sigma_{AC}^2}{A C}(k) + \frac{\sigma_{BC}^2}{2 B C}(k).
\label{eq:error_rco}
\end{eqnarray}
This expression involves the cross spectrum variance, the 21 cm and galaxy
power spectrum variances, and the co-variance between the various power spectra, each
calculated for spherically averaged power spectra in shells of logarithmic width
$\epsilon$. The one disadvantage of considering the cross-correlation coefficient
is that the cross spectrum can, under appropriate circumstances, be detected at higher
sensitivity than the 21 cm auto spectrum (Furlanetto \& Lidz 2007). In this case,
the error bar on the cross correlation coefficient, which includes an error term from
the auto spectrum, will be larger than that for the cross spectrum alone. Furthermore,
estimating the cross-correlation coefficient requires an auto spectrum estimate and is hence
more susceptible to residual foreground contamination than the auto spectrum alone.
Consider first the power spectrum variance terms for a single ${\bbox{k}}$-mode, with line
of sight component $k_\parallel = \mu k$, restricting
ourselves to modes in the upper-half plane. The power spectrum variance expressions
are as follows
(Furlanetto \& Lidz 2007):
\begin{eqnarray}
\sigma_A^2(k,\mu) && = {\rm var}\left[P_{\rm 21, gal} (k,\mu)\right] \nonumber \\&&= \frac{1}{2}
\left[P_{\rm 21,gal}^2(k,\mu) + \sigma_B(k,\mu) \sigma_C(k,\mu)\right],
\label{eq:var_cross}
\end{eqnarray}
\begin{eqnarray}
\sigma_B^2(k,\mu) && = {\rm var}\left[P_{\rm 21} (k, \mu)\right] \nonumber \\
&& = \left[P_{\rm 21}(k, \mu) + \frac{T^2_{\rm sys}}{T_0^2} \frac{1}{B t_{\rm int}}
\frac{D^2 \Delta D}{n(k_\perp)}\left(\frac{\lambda^2}{A_e}\right)^2\right]^2, \nonumber \\
\label{eq:var21}
\end{eqnarray}
\begin{eqnarray}
\sigma_C^2(k, \mu) && = {\rm var}\left[P_{\rm gal} (k, \mu)\right] \nonumber \\
&& = \left[P_{\rm gal}(k, \mu) + n^{-1}_{\rm gal} e^{k^2_\parallel \sigma^2_\chi}\right]^2.
\label{eq:vargal}
\end{eqnarray}
The second term in Equation (\ref{eq:var21}) comes from thermal noise in the radio telescope,
the second term in Equation (\ref{eq:vargal}) expresses the shot noise error, while the other terms
in the above equations are sample variance contributions. The thermal noise term
depends on the system temperature,
$T_{\rm sys}$; the co-moving distance to the center of the survey at redshift $z$, $D(z)$; the
survey depth, $\Delta D$; the observed wavelength, $\lambda$; the effective area of each
antenna tile, $A_e$; the survey bandwidth, $B$; the total observing time, $t_{\rm int}$; and
the distribution of antennas.
The factor $T_0$ in the denominator of the detector noise term arises because we
normalize the 21 cm field by $T_0$ so that it is dimensionless -- i.e., we work with the
field $\delta_T/T_0$.
The dependence on antenna configuration is encoded in
$n(k_\perp)$ which denotes the number density of baselines observing a mode with transverse
wavenumber $k_\perp$ (McQuinn et al. 2006, Bowman et al. 2006, Lidz et al. 2007b). The
galaxy shot-noise term depends on $n_{\rm gal}$
which is the abundance of galaxies observable in our hypothetical survey, and on the accuracy
of the galaxy redshifts obtained by the survey. The galaxy redshift error is given in co-moving
units by $\sigma_\chi = c \sigma_z/H(z)$.
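For a sense of scale, the sketch below evaluates $\sigma_\chi = c\,\sigma_z/H(z)$ for $\sigma_z = 0.01$ at $z = 6.6$, assuming an illustrative flat $\Lambda$CDM background with $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$ (parameter values not specified in the text):

```python
import math

def hubble(z, H0=70.0, om=0.3):
    """Hubble rate [km/s/Mpc] in flat LCDM (assumed parameters)."""
    return H0 * math.sqrt(om * (1.0 + z)**3 + (1.0 - om))

c = 2.998e5                          # speed of light [km/s]
sigma_z, z = 0.01, 6.6
sigma_chi = c * sigma_z / hubble(z)  # comoving radial position error [Mpc]
print(round(sigma_chi, 1))           # a few comoving Mpc
```

A radial error of a few comoving Mpc is what inflates the galaxy shot noise exponentially at high $k_\parallel$ in Equation (\ref{eq:vargal}).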
We also require expressions for the co-variance between the different power spectra.
These can be computed straightforwardly:
\begin{eqnarray}
\sigma_{AB}^2(k,\mu) && = {\rm cov}\left[P_{\rm 21, gal} (k,\mu), P_{\rm 21} (k,\mu)\right] \nonumber \\ && =
P_{\rm 21, gal}(k,\mu) P_{\rm 21}(k,\mu), \\
\sigma_{AC}^2(k,\mu) && = {\rm cov}\left[P_{\rm 21, gal} (k,\mu), P_{\rm gal} (k, \mu)\right] \nonumber \\ && =
P_{\rm 21, gal}(k,\mu) P_{\rm gal}(k,\mu), \\
\sigma_{BC}^2(k,\mu) && = {\rm cov}\left[P_{\rm 21} (k,\mu), P_{\rm gal} (k,\mu)\right] \nonumber \\ && =
P_{\rm 21, gal}^2(k,\mu).
\label{eq:covs}
\end{eqnarray}
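Equations (\ref{eq:var_cross})--(\ref{eq:covs}) and the propagation formula of Equation (\ref{eq:error_rco}) can be transcribed directly. The sketch below does so, treating the 21 cm thermal noise power as a single precomputed input rather than expanding its instrument-dependent factors (a simplification, not the form used in the text):

```python
import math

def var_p21(P21, noise):
    """var[P_21] for one mode: sample variance plus thermal noise power
    `noise` (the T_sys-dependent term, precomputed by the caller)."""
    return (P21 + noise)**2

def var_pgal(Pgal, n_gal, k_par, sigma_chi):
    """var[P_gal] for one mode: sample variance plus shot noise inflated
    by the redshift-error damping factor."""
    return (Pgal + math.exp(k_par**2 * sigma_chi**2) / n_gal)**2

def var_cross(Pcross, vP21, vPgal):
    """var[P_21,gal] in terms of the two auto-spectrum variances."""
    return 0.5 * (Pcross**2 + math.sqrt(vP21 * vPgal))

def frac_var_r(A, B, C, vA, vB, vC, cAB, cAC, cBC):
    """sigma_r^2 / r^2 for r = A / sqrt(B C), propagating errors."""
    return (vA / A**2 + vB / (4 * B**2) + vC / (4 * C**2)
            - cAB / (A * B) - cAC / (A * C) + cBC / (2 * B * C))

# Consistency check: for a single noise-free mode with perfect
# anticorrelation (A = -sqrt(B C), i.e. r = -1), the error on r vanishes.
B = C = 1.0
A = -1.0
vB = var_p21(B, noise=0.0)
vC = var_pgal(C, n_gal=float('inf'), k_par=0.0, sigma_chi=0.0)
vA = var_cross(A, vB, vC)
cAB, cAC, cBC = A * B, A * C, A**2
print(frac_var_r(A, B, C, vA, vB, vC, cAB, cAC, cBC))   # -> 0.0
```

The vanishing error in the perfectly (anti)correlated, noise-free limit is a useful check that the variance and covariance terms have been combined with consistent signs.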
Finally, we can estimate the error bar on the cross-correlation coefficient formed from our
spherically averaged power spectra. We do this by adding the power spectrum error bars for individual
${\bbox{k}}$-modes from Equations (\ref{eq:var_cross})--(\ref{eq:covs}) in inverse
quadrature, performing a similar calculation for each individual term in
Equation (\ref{eq:error_rco}).
For example, the variance of the cross-spectrum averaged over
a spherical shell of logarithmic width $\epsilon = d\ln k$ is:
\begin{eqnarray}
\frac{1}{\sigma_A^2(k)} = \sum_\mu \frac{\epsilon k^3 V_{\rm survey}}{4 \pi^2}
\frac{\Delta \mu}{\sigma_A^2(k,\mu)}.
\label{eq:var_shell}
\end{eqnarray}
The effective survey volume for our radio telescope
is $V_{\rm survey} = D^2 \Delta D \left(\lambda^2/A_e\right)$. If the galaxy survey has
a lesser volume, $V_{\rm gal}$, then the variance of the binned power spectrum estimated
from this lesser volume (for a mode contained within the lesser survey volume) is larger
by a factor of $\sim V_{\rm survey}/V_{\rm gal}$.
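The inverse-quadrature binning of Equation (\ref{eq:var_shell}) can be sketched as follows, with the single-mode variance supplied as a function of $\mu$ and a simple uniform $\mu$ grid over the upper-half plane (the grid resolution is an illustrative choice):

```python
import numpy as np

def shell_variance(k, var_of_mu, eps, V_survey, n_mu=50):
    """Combine single-mode variances over a spherical shell of
    logarithmic width eps by summing inverse variances over mu in [0,1]."""
    mu = (np.arange(n_mu) + 0.5) / n_mu            # mu bin centers
    dmu = 1.0 / n_mu
    inv_var = np.sum(eps * k**3 * V_survey / (4 * np.pi**2)
                     * dmu / var_of_mu(k, mu))
    return 1.0 / inv_var

# Check: with a mu-independent single-mode variance sigma^2 and a shell
# containing effectively one mode (eps k^3 V / 4 pi^2 = 1), the binned
# variance reduces to sigma^2.
const_var = lambda k, mu: np.full_like(mu, 4.0)
print(shell_variance(1.0, const_var, eps=4 * np.pi**2, V_survey=1.0))  # ~ 4.0
```

When the galaxy survey covers a smaller volume than the 21 cm survey, the same machinery applies with the smaller volume in the prefactor, so the binned variance grows as the volume shrinks.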
\subsection{The MWA}
\label{sec:mwa_survey}
With these expressions in hand let us briefly describe the specifications
we assume for our 21 cm and galaxy surveys. The MWA will have a large field of view,
spanning $\sim 800$ deg$^2$ on the sky,
and consisting of $500$ antenna tiles each
with an effective area of $A_e = 14$ m$^2$ at $z = 8$ (Bowman et al. 2006). Each antenna tile is $4$ m wide,
and we follow Bowman et al. (2006), McQuinn et al. (2006) in assuming that the antennas are packed
as closely as possible within a compact core, with the distribution subsequently falling off as $r^{-2}$
in order to capture large baselines, out to a maximum baseline of $1.5$ km.
Lidz et al. (2007b) argued that a compact antenna configuration, with all of the MWA's
antennas packed as close as possible, is a superior configuration for 21 cm auto spectrum
measurements. This configuration is less good for the cross spectrum: given a galaxy
survey with photometric redshifts, one needs to balance the MWA's high
line-of-sight sensitivity, yet poor transverse sensitivity, with the galaxy survey's
high transverse sensitivity, yet poor line-of-sight sensitivity owing to redshift
uncertainties.
We assume that the
system temperature is set by the sky temperature, which we take to be
$T_{\rm sys} = 280 \left[(1+z)/7.5\right]^{2.3}$ K, following Wyithe \& Morales (2007). We consider
a bandwidth of $B = 6$ MHz observing for a total time of $t_{\rm int} = 1000$ hrs. The bandwidth
is chosen to be small enough to ensure that the signal evolves minimally over the corresponding
redshift interval (McQuinn et al. 2006).
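The assumed sky-dominated system temperature can be evaluated directly; the sketch below adopts the Wyithe \& Morales (2007) scaling $T_{\rm sys} = 280\,[(1+z)/7.5]^{2.3}$ K:

```python
def t_sys(z):
    """Sky-dominated system temperature [K], assuming the Wyithe &
    Morales (2007) scaling T_sys = 280 [(1 + z)/7.5]^2.3."""
    return 280.0 * ((1.0 + z) / 7.5)**2.3

print(t_sys(6.5))          # 280 K by construction at 1 + z = 7.5
print(round(t_sys(8.0)))   # hotter sky at higher redshift, ~ 430 K
```

The steep $(1+z)^{2.3}$ growth of the sky temperature is the main reason 21 cm thermal noise worsens rapidly toward higher redshift.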
\subsection{LOFAR}
\label{sec:lofar}
LOFAR and the MWA are expected to have comparable sensitivity for
detecting the 21 cm auto spectrum (McQuinn et al. 2006, Mao et al. 2008).\footnote{We recently
learned that budget setbacks are forcing LOFAR to reduce its collecting area. We are unaware of
the details of the reduction, but this will reduce LOFAR's sensitivity compared to our
estimates here.}
LOFAR will observe
a smaller field of view than the MWA (by a factor of $\gtrsim 10$), but its larger collecting area
compensates for its reduced sky coverage.
The larger field of view of the MWA is, however, wasted when
cross-correlating with
a galaxy survey that covers a much smaller patch on the sky. We anticipate
then that LOFAR should at least initially
provide a {\em more sensitive} detection
of the 21 cm-galaxy cross spectrum than the MWA (Furlanetto \& Lidz 2007).
The precise collecting area and antenna configuration for LOFAR are
still evolving, but we follow the simple model of McQuinn et al. (2006) as a
plausible estimate. LOFAR
will consist of $32$ large antenna stations within $1$ km, with minimum
baselines of $100$ m. Each LOFAR station simultaneously
observes $4$ separate regions on the sky.
We assume that LOFAR's antenna stations are closely packed in
a compact core, before tapering off in an $r^{-2}$ configuration out to
a maximum radius of $1$ km. The effective area of each
antenna is $656$ m$^2$ at $z=8$, and
we linearly interpolate between the values in McQuinn et al. (2006)
(their Table 1), to find the
collecting area at other redshifts. As for the MWA, we consider $1,000$ hrs. of LOFAR
observations over a bandwidth of $B=6$ MHz.
\subsection{Subaru-like Survey}
\label{sec:survey_subaru}
We first consider the detectability of the 21 cm-galaxy cross spectrum obtainable by
combining the MWA and LOFAR with the Subaru deep field survey, and plausible extensions. The
existing
Subaru deep field survey has a $0.25$ deg$^2$ field of view and locates Ly-$\alpha$
emitters near $z=6.6$ to a depth of $130$ \AA.
The existing spectroscopically-confirmed Subaru deep
field sample at redshift $z=6.6$ consists of $36$ emitters (Kashikawa et al. 2006).
The number density of spectroscopically-confirmed emitters corresponds to
$n_{\rm gal} = 1.6 \times 10^{-4}$ Mpc$^{-3}$.
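As a rough consistency check on this number density (assuming flat $\Lambda$CDM with $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$, and treating the $130$ \AA\ filter width as the radial depth of the survey; these choices are ours, not stated in the text):

```python
import math

H0, om, c = 70.0, 0.3, 2.998e5        # km/s/Mpc, --, km/s

def hubble(z):
    return H0 * math.sqrt(om * (1.0 + z)**3 + (1.0 - om))

def comoving_distance(z, n=2000):
    """Midpoint-rule integral of c dz / H(z), in comoving Mpc."""
    dz = z / n
    return sum(c / hubble((i + 0.5) * dz) * dz for i in range(n))

z, lam_alpha = 6.6, 1215.67           # rest-frame Ly-alpha wavelength [A]
dz_filter = 130.0 / lam_alpha         # 130 A filter width in redshift
depth = c * dz_filter / hubble(z)     # radial depth [comoving Mpc]
side = math.radians(0.5) * comoving_distance(z)   # 0.5 deg field side
n_gal = 36.0 / (side**2 * depth)      # 36 spectroscopic emitters
print(n_gal)                          # ~ 1.6e-4 Mpc^-3
```

The quoted $n_{\rm gal} = 1.6 \times 10^{-4}$ Mpc$^{-3}$ is thus consistent with $36$ emitters in a $0.25$ deg$^2$, $130$ \AA\ deep field at $z = 6.6$.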
An extension to the Subaru deep field, the Subaru/XMM-Newton Deep Survey is already underway,
and promises to increase the observed $z=6.6$ field of view by a factor of $\sim 4$, reaching
a survey area of $A_{\rm survey} \sim 1$ deg$^2$ by the end of the year (Ouchi 2005).
Given the rapid progress in area surveyed, we examine how the
detectability of the cross spectrum scales
with increasing field-of-view, at fixed depth and galaxy number density. In practice,
we calculate the cross spectrum $S/N$ for a galaxy survey that covers
the full field-of-view of the MWA ($\sim 800$ deg$^2$ at this redshift), and scale the
signal-to-noise (squared) in each $k$-bin downwards by the
ratio of the galaxy survey volume to
that of the MWA (see
Equation \ref{eq:var_shell}). We perform a similar calculation for LOFAR.
For each model cross-spectrum, our $S/N$ estimates assume
a galaxy number density of
$n_{\rm gal} = 1.6 \times 10^{-4}$ Mpc$^{-3}$, and redshift errors for the spectroscopically
confirmed galaxies of $\sigma_z = 0.01$.
The assumed redshift error corresponds to a velocity of several hundred km/s, motivated
by the typical velocity offsets for Ly-$\alpha$ lines observed by Shapley et al. (2003)
in Lyman break galaxies at $z \sim 3$.
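The correspondence between the assumed redshift error and a velocity offset follows from $\Delta v = c\,\sigma_z/(1+z)$:

```python
c = 2.998e5                    # speed of light [km/s]
sigma_z, z = 0.01, 6.6
dv = c * sigma_z / (1.0 + z)   # velocity equivalent of the redshift error
print(round(dv))               # a few hundred km/s
```

This is indeed comparable to the several-hundred km/s Ly-$\alpha$ velocity offsets measured by Shapley et al. (2003).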
The total $S/N$ is determined by summing the signal squared divided by our variance estimate (Equations
\ref{eq:var_cross} and \ref{eq:var_shell}) over all detected $k$-bins.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f11.eps}
\caption{Signal to noise for cross spectrum detection. The signal to noise
at which Subaru-like surveys, coupled with the MWA (solid lines) and LOFAR (dashed lines),
can detect the 21 cm-galaxy cross spectrum as
a function of survey area at $z = 6.6$. The different curves indicate different models
for the ionization fraction. Each curve extends from the current Subaru
area ($\sim 0.25$ deg$^2$) to the full field of view of MWA ($\sim 800$ deg$^2$) or
LOFAR ($\sim 70$ deg$^2$).
The red dotted line indicates a $3-\sigma$ detection of the cross spectrum.}
\label{fig:ston}
\ec
\end{figure}
In Figure \ref{fig:ston} we show the detectability of the cross spectrum for
a few different models over a range of survey areas. In each model we adopt a plausible
minimum detectable
galaxy mass of $M_{\rm g, min} = 10^{10} M_\odot$, fixing the galactic duty cycle
to match the observed Subaru deep field abundance of
$n_{\rm gal} = 1.6 \times 10^{-4}$ Mpc$^{-3}$. The corresponding duty cycle in our models is around
$\sim 1\%$.
Given that we currently have few
direct observational constraints on the filling factor of H II regions near
$z = 6.6$, we consider models in which the ionized fraction is $\avg{x_i} = 0.54, 0.82$,
and $0.96$ at this redshift. Strictly speaking, we should use our Ly-$\alpha$ selected
cross spectrum models here, but at these ionization fractions we expect this to boost
our $S/N$ only
slightly (Figure \ref{fig:rco_lae}). For simplicity, we conservatively ignore the
clustering-boost from Ly-$\alpha$ selection here.
The amplitude of the cross spectrum
is largest amongst these models at $\avg{x_i} = 0.54$, and is substantially smaller
by $\avg{x_i} = 0.96$ (see Figure \ref{fig:cross_v_z}), and so the more neutral
models will be easier to detect.
The results shown in Figure \ref{fig:ston} illustrate
that cross-correlating the MWA with a galaxy survey of size comparable to the
present Subaru deep field survey will not allow a significant cross spectrum detection
($S/N \lesssim 1-\sigma$), even if the IGM is significantly neutral at $z=6.6$. However,
extensions to the Subaru deep field that
cover a larger area on the sky should yield significant cross spectrum
detections. For example, extending the present sky coverage by a factor of $\sim 10-15$ to
$3$ deg$^2$ should provide a $\gtrsim 2-3 \sigma$ cross spectrum detection in our
$\avg{x_i} = 0.54$ and $\avg{x_i} = 0.82$ models, but only
a $\sim 1-\sigma$ detection in our $\avg{x_i} = 0.96$ model.
As anticipated, cross-correlating with LOFAR can improve the $S/N$ by a factor of a few.
Cross-correlating LOFAR with a galaxy survey of only $1-2$ deg$^2$ should allow a $3-\sigma$
cross spectrum detection in our $\avg{x_i} = 0.54$ and $\avg{x_i} = 0.82$ models.
As mentioned earlier, the Subaru survey should reach this sky coverage soon, making a cross
spectrum detection feasible in the next few years if the IGM is partly neutral around $z \sim 6.6$.
More ambitious surveys covering
the entire MWA field of view ($\sim 800$ deg$^2$) would clearly move beyond mere detections --
the detection $S/N$ for such surveys is at the tens of sigma level
(see Figure \ref{fig:ston}) -- and provide valuable constraints on reionization models.
\subsection{Futuristic Survey}
\label{sec:survey_future}
Since more futuristic surveys will go beyond mere detections, we proceed to consider
the constraining power of a large field-of-view galaxy survey -- cross-correlated with
the MWA -- in more
detail.
Futuristic surveys will allow one to probe small scales, capture the turnover in the
cross-correlation coefficient and hence constrain bubble growth during reionization.
We calculate
the expected error bar on the cross correlation coefficient as a function of wavenumber
for a galaxy survey spanning the full MWA field of view, and consider the ability
of this survey to constrain reionization models. Here we assume that the galaxy survey
can detect fainter galaxies, reaching a galactic abundance $100$ times larger than in the previous
section, with the same redshift accuracy of $\sigma_z = 0.01$. We consider a redshift
of $z = 7.3$.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{f12.eps}
\caption{Error estimate for the 21 cm-galaxy cross correlation coefficient. Here we
consider a futuristic galaxy survey covering the entire MWA field of view, cross-correlated
with the MWA.
The blue points show the mean signal and error estimates for our hypothetical
21 cm/galaxy survey when $\avg{x_i} = 0.54$. The other curves show
the cross correlation coefficient when $\avg{x_i} = 0.21$ and $0.82$
respectively. Our hypothetical survey should help constrain the
volume-weighted ionization fraction. The vertical black dashed line shows
the wavenumber corresponding to the survey depth, below which foreground
contamination will prohibit extracting the signal.
}
\label{fig:rco_detect}
\ec
\end{figure}
Using again the models of \S \ref{sec:evol_sig} as input, we
estimate the statistical sensitivity of our futuristic galaxy survey.
The results of our sensitivity calculation are shown in
Figure \ref{fig:rco_detect}, for spherical bins of logarithmic width $\epsilon = 0.5$.
Here we plot the simulated signal when the IGM
is $\sim 50\%$ ionized along with a statistical error estimate for our
hypothetical survey. For contrast, we additionally show theoretical model
curves when the IGM is each of $\sim 20\%$ and $\sim 80\%$ ionized.
The curves and errorbars in Figure \ref{fig:rco_detect} show that the statistical precision of
our hypothetical survey is high enough to distinguish between the different stages of reionization
shown in the figure over about a decade in scale. On large scales, the measurement is limited
by foreground removal while on small scales 21 cm detector noise and galaxy redshift errors limit
the statistical precision of the measurements (Furlanetto \& Lidz 2007).
Although the 21 cm-galaxy cross power spectrum signal is much less susceptible than the
21 cm auto power spectrum to foreground contamination, free-free and synchrotron emission from
the high redshift galaxies in our survey still contaminate the 21 cm-galaxy cross power spectrum
somewhat (Furlanetto \& Lidz 2007).
This prohibits measuring modes with line-of-sight wavenumbers
$k_\parallel < 2 \pi/\Delta D$, where $\Delta D$ is the depth of the survey. The discreteness of
the survey means that the only modes in our survey that satisfy this condition
have $k_\parallel = 0$; hence all modes with $k_\parallel = 0$ will be removed
in the foreground cleaning process.
The black dashed line indicates the wavenumber corresponding to the survey depth in our
hypothetical
survey. We should remind the reader here of one trade-off involved with considering the cross-correlation
coefficient rather than the cross spectrum alone.
The cross-correlation coefficient is a more convenient quantity than the cross spectrum
for visualizing the small-scale turn-over (see Figure \ref{fig:rco_detect}), but it is
less desirable in that it includes the
auto spectrum, which is more susceptible to foreground contamination.
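To make the foreground cut concrete: assuming a flat $\Lambda$CDM background ($H_0 = 70$ km/s/Mpc, $\Omega_m = 0.3$; illustrative values) and converting a $B = 6$ MHz bandwidth to a comoving depth via $\Delta D = c\,(1+z)^2 B / (\nu_{21} H(z))$, the lowest usable wavenumber at $z = 7.3$ is roughly:

```python
import math

H0, om, c = 70.0, 0.3, 2.998e5     # km/s/Mpc, --, km/s
nu21 = 1420.4                      # rest-frame 21 cm frequency [MHz]

def hubble(z):
    return H0 * math.sqrt(om * (1.0 + z)**3 + (1.0 - om))

z, B = 7.3, 6.0                    # redshift and bandwidth [MHz]
depth = c * (1.0 + z)**2 * B / (nu21 * hubble(z))   # comoving depth [Mpc]
k_min = 2.0 * math.pi / depth      # smallest uncontaminated k_parallel
print(round(depth), round(k_min, 3))   # ~ 100 Mpc, k_min of a few 1e-2
```

Modes with $k$ below this value fall in the foreground-cleaned region, which is why the accessible window in Figure \ref{fig:rco_detect} is bounded on large scales.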
The sensitivity estimates shown in Figure \ref{fig:rco_detect} are encouraging, and suggest that
future 21 cm-galaxy surveys may help constrain the filling factor
and size distribution of H II regions during reionization. Comparing our error estimates
with the results of Figure \ref{fig:rco_galmass} suggests that futuristic surveys might also -- by
measuring the cross spectrum in different galaxy luminosity bins -- weakly
constrain the dependence of bubble size on host halo mass. Note also that the thermal noise term in the
21 cm variance (see Equation \ref{eq:var21}) still contributes significantly for most $k$-bins shown here,
and so futuristic 21 cm surveys with more antennas and larger collecting areas than the MWA can further
improve cross spectrum sensitivity. In particular, a future FFT telescope (Tegmark \& Zaldarriaga 2008) should
boost the sensitivity compared to our estimates here (see Mao et al. 2008 for estimates of the auto spectrum
sensitivity with an FFT telescope).
\section{Conclusions} \label{sec:conclusions}
In this paper, we considered the scientific return of future 21 cm-galaxy cross
power spectrum measurements. A strong cross spectrum measurement ultimately
requires detecting a sizable number of high redshift galaxies over a large field of
view, which presents a significant observational challenge. Nonetheless, we showed
that a detection of the cross spectrum may be achieved in the near future by combining LOFAR and
the Subaru survey for LAEs at $z \sim 6.6$. We estimate that a $\sim 3-\sigma$ detection is
feasible, provided the IGM is $\gtrsim 20\%$ neutral at this redshift, and that the Subaru
survey's sky coverage is extended
from $0.25$ deg$^2$ to $\sim 2$ deg$^2$. This detection would already be quite valuable,
as it would help confirm that the detected 21 cm signal comes from the high redshift IGM, and
not from foreground contamination, which should mostly be uncorrelated with high redshift
galaxies (Furlanetto \& Lidz 2007).
Futuristic galaxy surveys covering $100$s of square degrees on the sky can be combined
with the MWA, LOFAR, and other 21 cm surveys, to move beyond a mere detection of the
cross spectrum signal and map out its detailed scale dependence.
The galaxy surveys required for these measurements are clearly very challenging, but
rapid progress is being made in this direction as deep, widefield surveys
are being designed to study baryonic acoustic oscillations and/or weak-lensing at high redshift
(e.g., ADEPT, HETDEX\footnote{http://www.as.utexas.edu/hetdex/}, CIP, and others). Another
option is to sparsely sample the MWA or LOFAR fields, in order to capture the large-scale modes
(Furlanetto \& Lidz 2007).
We have shown that the 21 cm-galaxy cross spectrum is a relatively direct tracer of bubble
growth during reionization. Measuring the turnover scale as a function of galaxy luminosity
constrains the luminosity dependence of the characteristic bubble size. This information is
difficult, or impossible, to obtain with the 21 cm auto spectrum alone.
In order to extract the most
information out of the cross spectrum, it should be combined with measurements of the
galaxy auto spectrum and luminosity function, which will help to constrain the galaxy
luminosity-halo mass correlation.
A further interesting feature of the simulated
signal is that the cross-correlation changes sign on large scales near the beginning
of reionization (Figure \ref{fig:cross_v_z}). At this early phase of reionization, our
results may, however, be modified by spin temperature fluctuations, which we presently neglect.
Future work should incorporate these fluctuations. If our signature holds up, the change in
sign of the cross correlation would provide a very interesting observational
indicator of the earliest phases of reionization. Finally, we found that the 21 cm-galaxy
cross power spectrum might provide an interesting observational signature of scenarios where
ionizing
photons fail to escape from low mass halos. Provided galaxies in these
low mass halos are detectable longward of the ionization edge, we expect the cross spectrum
to change sign and turn positive on small scales.
Generally speaking, the 21 cm-galaxy cross
spectrum
is a more direct tracer of the impact of galaxies on the surrounding IGM than the 21 cm auto
spectrum.
As such, it can potentially provide a wealth of information about the EoR and early structure
formation.
\section*{Acknowledgments}
We thank Mark Dijkstra and Miguel Morales for helpful discussions. We thank Suvendra Dutta
for useful conversations and for his collaboration in related work.
Support was provided, in part, by the David and Lucile Packard Foundation, the
Alfred P. Sloan Foundation, and grants AST-0506556 and NNG05GJ40G. OZ
acknowledges additional support by a Berkeley Center for
Cosmological Physics (BCCP) Fellowship.